Academic literature on the topic 'Software Performance Testing'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Software Performance Testing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Software Performance Testing"

1

Patel, Charmy, and Ravi Gulati. "Software Performance Testing Measures." International Journal of Management & Information Technology 8, no. 2 (January 31, 2014): 1297–300. http://dx.doi.org/10.24297/ijmit.v8i2.681.

Abstract:
Software developers typically measure a web application's quality of service in terms of page availability, response time, and throughput, which makes performance testing and evaluation of software components a critical task. Poor software performance can lead to lost business opportunities, yet few research papers address the issues of, and systematic solutions for, performance testing and measurement of modern software components. This paper proposes a solution and an environment to support performance measurement for software. The objective is to provide the important measures that should be tested during the coding phase rather than after the software is complete, so that developers can build software that meets its performance objectives.
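
As a rough sketch of the measures the abstract describes, the following Python snippet times a series of HTTP requests and derives mean response time and throughput; the endpoint URL and request count are placeholders, not values from the paper.

    import statistics
    import time
    import urllib.request

    URL = "http://example.com/"   # placeholder endpoint, not from the paper
    REQUESTS = 20                 # arbitrary sample size

    latencies = []
    start = time.perf_counter()
    for _ in range(REQUESTS):
        t0 = time.perf_counter()
        with urllib.request.urlopen(URL) as resp:
            resp.read()           # consume the body so timing covers the full transfer
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start

    print(f"mean response time: {statistics.mean(latencies) * 1000:.1f} ms")
    print(f"throughput: {REQUESTS / elapsed:.1f} requests/s")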
2

Srivastava, Nishi, Ujjwal Kumar, and Pawan Singh. "Software and Performance Testing Tools." Journal of Informatics Electrical and Electronics Engineering (JIEEE) 2, no. 1 (January 5, 2021): 1–12. http://dx.doi.org/10.54060/jieee/002.01.001.

Abstract:
Software testing is a process that involves executing a software program/application and finding all the errors or bugs in it, so that the result is a defect-free software system. The quality of any software system can only be established through testing. With the advancement of technology around the world, the number of verification techniques and strategies for checking software before it goes to production has increased. Automation testing has made its impact on the testing process: nowadays, most software testing is done with automation tools, which not only reduce the number of people working on the software but also catch errors that might slip past a tester's eyes. Automated testing consists of test cases that make it simple to capture different scenarios and store them. Automated software testing therefore plays a significant role in the success of software testing. This study aims at understanding different kinds of software testing and software testing techniques and tools, and at comparing manual testing with automation testing.
3

Varela-González, M., H. González-Jorge, B. Riveiro, and P. Arias. "Performance testing of LiDAR exploitation software." Computers & Geosciences 54 (April 2013): 122–29. http://dx.doi.org/10.1016/j.cageo.2012.12.001.

4

Varela-González, M., H. González-Jorge, B. Riveiro, and P. Arias. "Performance testing of 3D point cloud software." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences II-5/W2 (October 16, 2013): 307–12. http://dx.doi.org/10.5194/isprsannals-ii-5-w2-307-2013.

5

Krauser, E. W., A. P. Mathur, and V. J. Rego. "High performance software testing on SIMD machines." IEEE Transactions on Software Engineering 17, no. 5 (May 1991): 403–23. http://dx.doi.org/10.1109/32.90444.

6

Denaro, Giovanni, Andrea Polini, and Wolfgang Emmerich. "Early performance testing of distributed software applications." ACM SIGSOFT Software Engineering Notes 29, no. 1 (January 2004): 94–103. http://dx.doi.org/10.1145/974043.974059.

7

Zhuang, Lei, Zhen Gao, Hao Wu, Chun Xin Yang, and Miao Zheng. "Research on DB2 Performance Testing Automation." Advanced Materials Research 756-759 (September 2013): 2204–8. http://dx.doi.org/10.4028/www.scientific.net/amr.756-759.2204.

Abstract:
Software testing plays a significant role in the modern software development and maintenance process and is an important means of ensuring software reliability and improving software quality. With the continuous rise in quality requirements for software products and the growing sophistication of software engineering technology, software testing now participates in every phase of the software life cycle and has become ever more important in development and maintenance. DB2 performance testing consists of four parts: environment setup, workload run, data measurement, and environment cleanup. Previously, all the operations were done manually and required about two hours of continuous attention, sometimes three times a day; such a mechanical and complicated procedure is clearly unacceptable. This paper puts forward a reusable automated testing framework based on IBM's automated testing tool RFT to automate the whole testing procedure. It reduces the amount of human-computer interaction and greatly improves the efficiency of DB2 performance testing.
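
The four-phase procedure described in the abstract (environment setup, workload run, data measurement, environment cleanup) maps naturally onto a small automation harness. The Python sketch below is a generic illustration with placeholder phase functions; it is not the RFT-based framework the paper presents.

    import time

    def setup_environment():
        print("setting up database and monitors")   # placeholder phase

    def run_workload():
        print("running workload")                   # placeholder phase
        time.sleep(1)                               # stands in for the real workload

    def measure():
        print("collecting measurements")            # placeholder phase
        return {"elapsed_s": 1.0}

    def cleanup_environment():
        print("cleaning up environment")            # placeholder phase

    def run_performance_test():
        """Chain the four phases so no manual attention is required."""
        setup_environment()
        try:
            run_workload()
            print("results:", measure())
        finally:
            cleanup_environment()   # always clean up, even if the run fails

    if __name__ == "__main__":
        run_performance_test()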
8

Avritzer, Alberto, and Elaine J. Weyuker. "Deriving Workloads for Performance Testing." Software: Practice and Experience 26, no. 6 (June 1996): 613–33. http://dx.doi.org/10.1002/(sici)1097-024x(199606)26:6<613::aid-spe23>3.0.co;2-5.

9

Ma, Tian-bo, Hui Liu, and Jia Zang. "Design of Linear CCD Performance Parameter Testing Software." OME Information 28, no. 7 (2011): 41–45. http://dx.doi.org/10.3788/omei20112807.0041.

10

Naik Dessai, Sanket Suresh, and Varuna Eswer. "Embedded Software Testing to Determine BCM5354 Processor Performance." International Journal of Software Engineering and Technologies (IJSET) 1, no. 3 (December 1, 2016): 121. http://dx.doi.org/10.11591/ijset.v1i3.4577.

Abstract:
The efficiency of a processor is a critical factor for an embedded system, and one of the deciding factors for efficiency is the functioning of the L1 cache and Translation Lookaside Buffer (TLB). Certain processors have the L1 cache and TLB managed by the operating system; MIPS32 is one such processor. The performance of the L1 cache and TLB necessitates a detailed study to understand their management under varied processor load. This paper presents an implementation of an embedded testing procedure to analyse the performance of the MIPS32 processor's L1 cache and TLB management by the operating system (OS). The proposed implementation counts the executions of the respective cache and TLB management instructions, an event that is measurable with dedicated counters. The lack of hardware counters in the MIPS32 processor leads to the use of software-based event counters defined in the kernel. This paper implements an embedded testbed with a subset of MIPS32 processor performance measurement metrics using software-based counters. Techniques were developed to overcome the challenges posed by the kernel source code. To facilitate better understanding of the testbed implementation procedure for the software-based processor performance counters, a use-case analysis diagram, flow charts, screenshots, and knowledge nuggets are supplied, along with histograms of the cache and TLB event data generated by the proposed implementation. In this testbed, twenty-seven metrics have been identified and implemented to provide data related to L1 cache and TLB events on the MIPS32 processor. The generated data can be used in compiler tuning, OS memory management design, system benchmarking, scalability analysis, analysing architectural issues, address space analysis, understanding bus communication, kernel profiling, and workload characterisation.
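
The idea of software-based event counters, used here because the MIPS32 lacks hardware counters, can be shown in miniature: instrument the code paths that handle cache and TLB management events and increment a named counter on each occurrence. In the thesis these counters live in the kernel; the Python sketch below uses invented event names purely for illustration.

    from collections import Counter

    events = Counter()   # software-based event counters; names invented for illustration

    def record(event: str) -> None:
        """Increment the counter for one cache/TLB management event."""
        events[event] += 1

    # in a real kernel these calls would sit inside the management handlers
    def handle_tlb_refill():
        record("tlb_refill")

    def handle_icache_flush():
        record("icache_flush")

    for _ in range(3):
        handle_tlb_refill()
    handle_icache_flush()

    for name, count in events.items():
        print(f"{name}: {count}")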

Dissertations / Theses on the topic "Software Performance Testing"

1

Sakko, Janne. "Unintrusive performance profiling and testing in software development." Master's thesis, University of Oulu, 2014. http://urn.fi/URN:NBN:fi:oulu-201411292032.

Abstract:
Performance is a complex topic in software development: it is the result of various interconnected properties of software and hardware. The risks and damages of badly performing software are well known and visible, yet performance considerations are still not thoroughly embedded into the whole development life-cycle. Many projects start to consider performance only when issues emerge, and fixing performance problems in the late phases of development can be very difficult and expensive. When performance problems emerge, the most important goal is to determine their root causes. Instrumenting software can be an effective way to measure and analyse it, but if the instrumentation is not implemented during development, it can be limited and laborious. Unintrusive software profilers require no modifications to the profiled software and can provide various information about the software and its environment. Performance testing aims to validate and verify that a project's performance targets are achieved. Regression testing is a well-known method for assuring that regressions are not introduced into the software during development, and performance regression testing has similar targets for performance. This thesis explores the usage of performance profilers and performance regression testing in the UpWind project, a sailboat chart navigation software project conducted at the University of Oulu. An evaluation study in the context of Design Science Research is used as the research method. In this thesis, the navigation algorithm of the UpWind project was profiled using the OProfile and Valgrind profilers. Profiling provided new information about the performance behaviour of the UpWind project, as well as new insights about performance profiling. To prevent future performance regressions in the UpWind project, performance tests and a performance regression testing process were drafted. The performance tests were implemented using the Qt framework's QTestLib.
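
The thesis implements its performance tests with Qt's QTestLib; as a language-neutral sketch of the underlying idea of a performance regression test, the snippet below compares a measured timing against a stored baseline with a tolerance. The function under test, the baseline value, and the threshold are all invented for illustration.

    import time

    BASELINE_S = 0.050   # invented baseline from an earlier "known good" run
    TOLERANCE = 1.20     # fail if more than 20% slower than the baseline

    def navigation_step():
        time.sleep(0.01)   # placeholder for the routing computation under test

    def test_navigation_performance():
        t0 = time.perf_counter()
        navigation_step()
        elapsed = time.perf_counter() - t0
        assert elapsed <= BASELINE_S * TOLERANCE, (
            f"performance regression: {elapsed:.3f}s vs baseline {BASELINE_S:.3f}s"
        )

    test_navigation_performance()
    print("no performance regression detected")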
2

Charla, Shiva Bhavani Reddy. "Examining Various Input Patterns Effecting Software Application Performance : A Quasi-experiment on Performance Testing." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-13587.

Abstract:
Nowadays, non-functional testing has a great impact on the real-time environment. Non-functional testing helps to analyze the performance of an application on both the server and the client. Load testing attempts to cause the system under test to respond incorrectly in a situation that differs from its normal operation but is rarely encountered in real-world use. Examples include providing abnormal inputs to the software or placing real-time software under unexpectedly high loads. High loads are typically induced on an application to test its performance, but a particular pattern of low load can also stress a real-time system. For example, repeatedly making a request to the system every 11 seconds might cause a fault if the system transitions to a standby state after 10 seconds of inactivity. The primary aim of this study is to find various low-load input patterns affecting the software, rather than simply high-load inputs. A quasi-experiment was chosen as the research method. Performance testing was performed on a web application with the help of a tool called HP LoadRunner. A comparison was made between low-load and high-load patterns to analyze the performance of the application and to identify bottlenecks under different loads.
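
The 11-second example in the abstract translates directly into a load script: issue one request just after the system's supposed 10-second idle timeout, forcing a standby/wake transition on every cycle. A minimal Python sketch, with a placeholder URL and an assumed timeout:

    import time
    import urllib.request

    URL = "http://example.com/"   # placeholder system under test
    PERIOD_S = 11                 # just past a hypothetical 10 s standby timeout
    CYCLES = 5

    for i in range(CYCLES):
        t0 = time.perf_counter()
        with urllib.request.urlopen(URL) as resp:
            status = resp.status
        print(f"cycle {i}: status {status}, {time.perf_counter() - t0:.3f}s")
        time.sleep(PERIOD_S)      # low load, but crosses the standby boundary each time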
3

Khan, Rizwan Bahrawar. "Comparative Study of Performance Testing Tools: Apache JMeter and HP LoadRunner." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-12915.

Abstract:
Software testing plays a key role in software development. There are two approaches to software testing, manual testing and automated testing, both used to detect faults. There are many automated software testing tools with different purposes, but selecting a testing tool that fits one's needs is always a problem. In this research, the author compares two software testing tools, Apache JMeter and HP LoadRunner, to determine their usability and efficiency. To compare the tools, different parameters were selected to guide the tool evaluation process. To complete the objective of the research, a scenario-based survey was conducted and two different web applications were tested. The research found that Apache JMeter has an edge over HP LoadRunner in several aspects, including installation, interface, and ease of learning.
4

Han, Xue. "CONFPROFITT: A Configuration-Aware Performance Profiling, Testing, and Tuning Framework." PhD diss., University of Kentucky, 2019. https://uknowledge.uky.edu/cs_etds/84.

Abstract:
Modern computer software systems are complicated. Developers can change the behavior of a software system through software configurations, and the large number of configuration options and their interactions make the tasks of software tuning, testing, and debugging very challenging. Performance is one of the key aspects of non-functional quality, and performance bugs can cause significant performance degradation and lead to poor user experience. However, performance bugs are difficult to expose, primarily because detecting them requires specific inputs as well as specific configurations. While researchers have developed techniques to analyze, quantify, detect, and fix performance bugs, many of these techniques are not effective in highly-configurable systems. To improve the non-functional qualities of configurable software systems, testing engineers need to be able to understand the performance influence of configuration options, adjust the performance of a system under different configurations, and detect configuration-related performance bugs. This research provides an automated framework that allows engineers to effectively analyze performance-influencing configuration options, detect performance bugs in highly-configurable software systems, and adjust configuration options to achieve higher long-term performance gains. To understand real-world performance bugs in highly-configurable software systems, we first perform a performance-bug characteristics study on three large-scale open-source projects. Many researchers have studied the characteristics of performance bugs from bug reports, but few have reported on the experience of trying to replicate confirmed performance bugs from the perspective of non-domain experts such as researchers. This study reports the challenges and potential workarounds in replicating confirmed performance bugs. We also share a performance benchmark that provides real-world performance bugs for evaluating future performance testing techniques. Inspired by our performance-bug study, we propose a performance profiling approach that can help developers understand how configuration options and their interactions influence the performance of a system. The approach uses a combination of dynamic analysis and machine learning techniques, together with configuration sampling, to profile the program execution and analyze the configuration options relevant to performance. Next, the framework leverages natural language processing and information retrieval techniques to automatically generate test inputs and configurations that expose performance bugs. Finally, the framework combines reinforcement learning and dynamic state reduction techniques to guide the subject application towards achieving higher long-term performance gains.
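
Configuration sampling, one ingredient of the proposed framework, can be illustrated with a simple random sampler over a configuration space. The options, their values, and the timing stub below are invented examples, not taken from the dissertation.

    import itertools
    import random
    import time

    OPTIONS = {                       # invented example option space
        "cache_enabled": [True, False],
        "threads": [1, 2, 4, 8],
        "compression": ["none", "fast", "best"],
    }

    def sample_configs(n, seed=0):
        """Randomly sample n configurations instead of testing all combinations."""
        rng = random.Random(seed)
        all_configs = [dict(zip(OPTIONS, values))
                       for values in itertools.product(*OPTIONS.values())]
        return rng.sample(all_configs, min(n, len(all_configs)))

    def run_under(config):
        t0 = time.perf_counter()
        time.sleep(0.001 * config["threads"])   # fake, config-dependent cost
        return time.perf_counter() - t0

    for cfg in sample_configs(5):
        print(cfg, f"{run_under(cfg):.4f}s")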
5

Khan, Mohsin Javed, and Hussan Iftikhar. "Performance Testing and Analysis of Modern Web Technologies." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-11177.

Abstract:
This thesis is an empirical case study to predict or estimate the performance and variability of contemporary software frameworks used for web application development. The thesis can be divided into three phases. In Phase I, we theoretically explore and analyze PHP, EJB 3.0, and ASP.NET, considering the quality attributes, or 'ilities', of the mentioned technologies. In Phase II, we develop two identical web applications, an online component webstore (an application for purchasing components online), in PHP and ASP.NET. In Phase III, we conduct automated testing to determine and analyze the applications' performance. We developed the web applications in PHP 5.3.0 and in Visual Studio 2008 using ASP.NET 3.5 to practically measure and compare their performance, using SQL Server 2005 as the database server with ASP.NET 3.5 and MySQL 5.1.36 with PHP. The software architecture, CSS, database design, and database constraints were kept simple and the same for both applications; this similarity helps to establish a realistic comparison of the applications' performance and variability. The applications' performance and variability were measured with the help of automated scripts, which were used to generate thousands of requests to the application servers while downloading components simultaneously. More details of the performance testing can be found in chapters 6, 7, and 8. We have gained a lot of knowledge from this thesis and are very happy to finish our software engineering studies.
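
The Phase III scripts generate thousands of simultaneous requests; a minimal sketch of that request-generation pattern using a thread pool follows. The URL, worker count, and request count are placeholders, not the thesis's actual scripts.

    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://example.com/"   # placeholder application under test
    WORKERS = 10                  # simultaneous clients
    REQUESTS = 100

    def fetch(_):
        t0 = time.perf_counter()
        with urllib.request.urlopen(URL) as resp:
            resp.read()
        return time.perf_counter() - t0

    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        latencies = list(pool.map(fetch, range(REQUESTS)))
    elapsed = time.perf_counter() - start

    print(f"{REQUESTS} requests in {elapsed:.1f}s ({REQUESTS / elapsed:.1f} req/s), "
          f"mean latency {sum(latencies) / len(latencies):.3f}s")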
6

Penmetsa, Jyothi Spandana. "Automation of a Cloud Hosted Application: Performance, Automated Testing, Cloud Computing." Thesis, Blekinge Tekniska Högskola, Institutionen för kommunikationssystem, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-12849.

Abstract:
Context: Software testing is the process of assessing the quality of a software product to determine whether it matches the customer's existing requirements. Software testing is one of the "verification and validation" (V&V) software practices. The two basic techniques of software testing are black-box testing and white-box testing. Black-box testing focuses solely on the outputs generated in response to the inputs supplied, neglecting the internal components of the software, whereas white-box testing focuses on the internal mechanism of the software. To explore the feasibility of black-box and white-box testing under a given set of conditions, a proper test automation framework needs to be deployed. Automation is deployed in order to reduce the manual effort and to perform testing continuously, thereby increasing the quality of the product. Objectives: In this research, a cloud-hosted application is automated using the TestComplete tool. The objective of this thesis is to verify the functionality of the cloud application, such as its test appliance library, through automation, and to measure the impact of the automation on the organisation's release cycles. Methods: Automation is implemented using Scrum methodology, an agile software development process. Using Scrum, a product with working software can be delivered to customers incrementally and empirically, with its functionality updated along the way. The test appliance library functionality is verified by deploying a testing device, thereby keeping track of automatic software downloads onto the device and license updates on it. Results: Automation of the test appliance functionality of the cloud-hosted application was implemented using the TestComplete tool, and the impact of automation on release cycles was a reduction of nearly 24%, reducing the manual effort and increasing the quality of delivery. Conclusion: Automation of a cloud-hosted application removes manual effort, so time can be utilised effectively and the application can be tested continuously, increasing efficiency.
7

Eada, Priyanudeep. "Experiment to evaluate an Innovative Test Framework : Automation of non-functional testing." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-10940.

Abstract:
Context. Performance testing, among other types of non-functional testing, is necessary to assess software quality. Most often, a manual approach is employed to test a system for its performance, and this approach has several setbacks. The existing body of knowledge lacks empirical evidence on the automation of non-functional testing and is largely focused on functional testing. Objectives. The objective of the present study is to evaluate a test framework that automates performance testing. A large-scale distributed project is selected as the context in which to achieve this objective; the rationale for choosing such a project is that the proposed test framework was designed with the intention of adapting and tailoring it to any project's characteristics. Methods. An experiment was conducted with 15 participants at the Ericsson R&D department, India, to evaluate the automated test framework. A repeated-measures design with a counterbalancing method was used to understand the accuracy and time taken while using the test framework. To assess the ease of use of the proposed framework, a questionnaire was distributed among the experiment participants. Statistical techniques were used to accept or reject the hypotheses, and the data analysis was performed using Microsoft Excel. Results. The automated test framework is observed to be superior to the traditional manual approach: there is a significant reduction in the average time taken to run a test case, the number of errors arising in a typical testing process is minimized, the time spent by a tester during the actual test is dramatically reduced, and, as perceived by software testers, the automated approach is easier to use than the manual one. Conclusions. It can be concluded that automation of non-functional testing will result in an overall reduction in project costs and improve the quality of the software tested. This addresses important performance aspects such as system availability, durability, and uptime. It is not sufficient for software to meet its functional requirements; it must also conform to its non-functional requirements.
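
The abstract mentions statistical hypothesis testing on the measured data without naming the specific test; as one plausible illustration, the sketch below runs a paired t-test on invented manual-versus-automated timing data (it assumes SciPy is available).

    from scipy import stats

    # invented paired measurements (seconds per test case), not the thesis's data
    manual    = [120, 135, 128, 140, 132, 125, 138, 130]
    automated = [ 45,  50,  48,  52,  47,  49,  51,  46]

    t_stat, p_value = stats.ttest_rel(manual, automated)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
    if p_value < 0.05:
        print("reject H0: mean times differ significantly")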
8

Vodrážka, Michal. "Způsoby definování požadavků pro výkonnostní testování softwaru [Ways of Defining Requirements for Software Performance Testing]." Master's thesis, Vysoká škola ekonomická v Praze, 2015. http://www.nusl.cz/ntk/nusl-193128.

Abstract:
This thesis focuses on ways of defining requirements for software performance testing, both in practice (based on a survey) and in theory. Its aim is to analyze the ways of defining performance requirements that are actually used on IT projects. To achieve this goal, it is necessary to define the concepts of performance testing, to conduct and evaluate a survey of how performance requirements are defined in practice, and then to analyze those ways of defining performance requirements in terms of their applicability to different types of IT projects. The contribution of this thesis is a comprehensive introduction to software performance testing and, above all, insight into the ways performance requirements are defined in practice and the problems associated with them, gained through a survey that the author designed and evaluated. The conclusions drawn from this survey summarize which ways of defining performance requirements are applied to specific types of IT projects, which of them worked, and which problems occur in practice in certain cases. The thesis is divided into a theoretical and a practical part. The theoretical part explains the basic concepts associated with software performance testing and also describes the methodology of defining performance requirements according to Tom Gilb. The practical part focuses on conducting and evaluating the survey and on analyzing the ways of defining performance requirements with respect to particular types of projects.
9

Johnson, Gloria. "The Effect of Applying Design of Experiments Techniques to Software Performance Testing." ScholarWorks, 2015. https://scholarworks.waldenu.edu/dissertations/226.

Abstract:
Effective software performance testing is essential to the development and delivery of quality software products. Many software testing investigations have reported software performance testing improvements, but few have quantitatively validated measurable improvements across an aggregate of studies. This study addressed that gap by conducting a meta-analysis to assess the relationship between applying Design of Experiments (DOE) techniques in the software testing process and the reported software performance testing improvements. Software performance testing theories and DOE techniques composed the theoretical framework for this study. Software testing studies (n = 96) were analyzed, where half had DOE techniques applied and the other half did not. Five research hypotheses were tested, where findings were measured in (a) the number of detected defects, (b) the rate of defect detection, (c) the phase in which the defect was detected, (d) the total number of hours it took to complete the testing, and (e) an overall hypothesis that included all measurements for all findings. The data were analyzed by first computing standardized-difference-in-means effect sizes, then through the Z test, the Q test, and the t test in statistical comparisons. Results of the meta-analysis showed that applying DOE techniques in the software testing process improved software performance testing (p < .05). These results have social implications for the software testing industry and software testing professionals, providing another empirically validated testing methodology. Software organizations can use this methodology to differentiate their software testing process, to create higher-quality products, and to benefit the consumer and society in general.
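
The standardized-difference-in-means effect size the study computes is commonly known as Cohen's d; the sketch below computes it on invented defect-count data for two groups of studies.

    import statistics

    # invented defect counts for studies with and without DOE, not the study's data
    with_doe    = [14, 18, 16, 20, 17, 19]
    without_doe = [10, 12, 11, 13,  9, 12]

    def cohens_d(a, b):
        """Standardized difference in means using a pooled standard deviation."""
        na, nb = len(a), len(b)
        pooled_var = (((na - 1) * statistics.variance(a) +
                       (nb - 1) * statistics.variance(b)) / (na + nb - 2))
        return (statistics.mean(a) - statistics.mean(b)) / pooled_var ** 0.5

    print(f"Cohen's d = {cohens_d(with_doe, without_doe):.2f}")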
10

Abdeen, Waleed, and Xingru Chen. "Model-Based Testing for Performance Requirements : A Systematic Mapping Study and A Sample Study." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-18778.

Abstract:
Model-based testing (MBT) is a method that supports automated test design by using a model. Although it is adopted in industry, it is still an open area for performance requirements. We aim to look into MBT for performance requirements and find a framework that can model them. We conducted a systematic mapping study, after which we conducted a sample study of software requirements specifications; we then introduced the Performance Requirements Verification and Validation (PRVV) model and, finally, completed another sample study to see how the model works in practice. We found that many models can be used for performance requirements, although their maturity is not yet sufficient. MBT can be implemented in the context of performance and has been gaining momentum in recent years. The PRVV model we developed can verify performance requirements and help generate test cases.

Books on the topic "Software Performance Testing"

1

Majchrzak, Tim A. Improving Software Testing: Technical and Organizational Developments. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.

2

Farrell, Chris, and Chris Massey, eds. .NET performance testing and optimization: The complete guide. Cambridge: Red Gate Books, 2010.

3

Meier, J. D. Performance testing guidance for web applications: Patterns & practices. [United States?]: Microsoft, 2007.

4

NetLibrary, Inc., ed. Network performance toolkit: Using Netperf, tcptrace, NIST Net, and SSFNet. Indianapolis, Ind.: Wiley Pub., 2003.

5

Apache JMeter: A practical beginner's guide to automated testing and performance measurement for your websites. Birmingham: Packt Publishing Limited, 2008.

6

Weterings, Anet B. R. Do firms benefit from spatial proximity?: Testing the relation between spatial proximity and the performance of small software firms in the Netherlands. Utrecht: Koninklijk Nederlands Aardrijkskundig Genootschap ; International Geographic Union Section The Netherlands, 2006.

7

Weterings, Anet B. R. Do firms benefit from spatial proximity?: Testing the relation between spatial proximity and the performance of small software firms in the Netherlands. Utrecht: Koninklijk Nederlands Aardrijkskundig Genootschap ; International Geographical Union Section The Netherlands, 2005.

8

Bensalem, Saddek, and Doron Peled, eds. Runtime verification: 9th international workshop, RV 2009, Grenoble, France, June 26-28, 2009: selected papers. Berlin: Springer, 2009.

9

García-Cámara, Braulio, Manuel Prieto, Martino Ruggiero, and Gilles Sicard, eds. Integrated Circuit and System Design. Power and Timing Modeling, Optimization, and Simulation: 21st International Workshop, PATMOS 2011, Madrid, Spain, September 26-29, 2011. Proceedings. Berlin, Heidelberg: Springer, 2011.

10

Dustin, Elfriede, Jeff Rashka, and John Paul. Automated Software Testing: Introduction, Management, and Performance. Addison-Wesley Professional, 1999.


Book chapters on the topic "Software Performance Testing"

1

Yorkston, Keith. "Performance Testing in the Software Lifecycle." In Performance Testing, 107–94. Berkeley, CA: Apress, 2021. http://dx.doi.org/10.1007/978-1-4842-7255-8_3.

2

Mayer, Daniel A., Orie Steele, Susanne Wetzel, and Ulrike Meyer. "CaPTIF: Comprehensive Performance TestIng Framework." In Testing Software and Systems, 55–70. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-34691-0_6.

3

Singleton, David. "Website Performance Monitoring." In Software Quality and Software Testing in Internet Times, 199–216. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/978-3-642-56333-1_13.

4

Huerta-Guevara, Osvaldo, Vanessa Ayala-Rivera, Liam Murphy, and A. Omar Portillo-Dominguez. "Towards an Efficient Performance Testing Through Dynamic Workload Adaptation." In Testing Software and Systems, 215–33. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-31280-0_13.

5

Gupta, Varun, Akshay Mehta, and Chetna Gupta. "Improvisation of Reusability Oriented Software Testing." In System Performance and Management Analytics, 139–45. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-10-7323-6_13.

6

Huerta-Guevara, Osvaldo, Vanessa Ayala-Rivera, Liam Murphy, and A. Omar Portillo-Dominguez. "DYNAMOJM: A JMeter Tool for Performance Testing Using Dynamic Workload Adaptation." In Testing Software and Systems, 234–41. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-31280-0_14.

7

Klück, Florian, Martin Zimmermann, Franz Wotawa, and Mihai Nica. "Performance Comparison of Two Search-Based Testing Strategies for ADAS System Validation." In Testing Software and Systems, 140–56. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-31280-0_9.

8

Shinbo, Hiroyuki, Atsushi Tagami, Shigehiro Ano, Toru Hasegawa, and Kenji Suzuki. "Practical End-to-End Performance Testing Tool for High Speed 3G-Based Networks." In Testing Software and Systems, 205–20. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-16573-3_15.

9

Hiromori, Akihito, Takaaki Umedu, Hirozumi Yamaguchi, and Teruo Higashino. "Protocol Testing and Performance Evaluation for MANETs with Non-uniform Node Density Distribution." In Testing Software and Systems, 231–46. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-34691-0_17.

10

Buch, Arnim, Stefan Engelkamp, and Dirk Kirstein. "Applying a Control Loop for Performance Testing and Tuning." In Software Quality and Software Testing in Internet Times, 217–29. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/978-3-642-56333-1_14.


Conference papers on the topic "Software Performance Testing"

1

Bulej, Lubomír. "Performance Testing in Software Development." In ICPE'16: ACM/SPEC International Conference on Performance Engineering. New York, NY, USA: ACM, 2016. http://dx.doi.org/10.1145/2859889.2880448.

2

Vokolos, Filippos I., and Elaine J. Weyuker. "Performance testing of software systems." In Proceedings of the First International Workshop on Software and Performance (WOSP '98). New York: ACM Press, 1998. http://dx.doi.org/10.1145/287318.287337.

3

Field, Tony, Robert Chatley, and David Wei. "Software Performance Testing in Virtual Time." In ICPE '18: ACM/SPEC International Conference on Performance Engineering. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3185768.3186409.

4

Swearngin, Amanda, Myra B. Cohen, Bonnie E. John, and Rachel K. E. Bellamy. "Human performance regression testing." In 2013 35th International Conference on Software Engineering (ICSE). IEEE, 2013. http://dx.doi.org/10.1109/icse.2013.6606561.

5

Duttagupta, Subhasri, Rupinder Virk, and Manoj Nambiar. "Software bottleneck analysis during performance testing." In 2015 International Conference and Workshop on Computing and Communication (IEMCON). IEEE, 2015. http://dx.doi.org/10.1109/iemcon.2015.7344508.

6

Hoskins, Dean S., Charles J. Colbourn, and Douglas C. Montgomery. "Software performance testing using covering arrays." In Proceedings of the 5th International Workshop on Software and Performance (WOSP '05). New York: ACM Press, 2005. http://dx.doi.org/10.1145/1071021.1071034.

7

Moghadam, Mahshid Helali. "Machine learning-assisted performance testing." In ESEC/FSE '19: 27th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3338906.3342484.

8

Chen, Shiping, David Moreland, Surya Nepal, and John Zic. "Yet Another Performance Testing Framework." In 2008 19th Australian Conference on Software Engineering ASWEC. IEEE, 2008. http://dx.doi.org/10.1109/aswec.2008.4483205.

9

Norberto, Marcus, Lukas Gaedicke, Maicon Bernardino, Guilherme Legramante, Fabio Paulo Basso, and Elder Macedo Rodrigues. "Performance Testing in Mobile Application." In SBQS'19: XVIII Brazilian Symposium on Software Quality. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3364641.3364653.

10

Laaber, Christoph. "Continuous software performance assessment: detecting performance problems of software libraries on every build." In ISSTA '19: 28th ACM SIGSOFT International Symposium on Software Testing and Analysis. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3293882.3338982.


Reports on the topic "Software Performance Testing"

1

Price, Phillip N., Jessica Granderson, Michael Sohn, Nathan Addy, and David Jump. Commercial Building Energy Baseline Modeling Software: Performance Metrics and Method Testing with Open Source Models and Implications for Proprietary Software Testing. Office of Scientific and Technical Information (OSTI), September 2013. http://dx.doi.org/10.2172/1129576.
