Dissertations / Theses on the topic 'Software Performance Testing'

Consult the top 40 dissertations / theses for your research on the topic 'Software Performance Testing.'


1

Sakko, J. (Janne). "Unintrusive performance profiling and testing in software development." Master's thesis, University of Oulu, 2014. http://urn.fi/URN:NBN:fi:oulu-201411292032.

Abstract:
Performance is a complex topic in software development. Performance is the result of various interconnected properties of software and hardware. The risks and damages of badly performing software are well known and visible. Still, performance considerations are not thoroughly embedded into the whole development life cycle. Many projects start to consider performance only when issues emerge, and fixing performance problems in late phases of development can be very difficult and expensive. When performance problems emerge, the most important goal is to determine the root causes of the issues. Instrumenting software can be an effective way to measure and analyse it, but if instrumentation is not implemented during development, it can be limited and laborious. Unintrusive software profilers do not require any modifications to the profiled software, and they can provide various kinds of information about the software and its environment. Performance testing aims to validate and verify that the performance targets of a project are achieved. Regression testing is a well-known method for assuring that regressions are not introduced into the software during development; performance regression testing has similar targets for performance. This thesis explores the use of performance profilers and performance regression testing in the UpWind project, a sailboat chart navigation software project conducted at the University of Oulu. An evaluation study in the context of Design Science Research is used as the research method. In this thesis, the navigation algorithm of the UpWind project was profiled using the OProfile and Valgrind profilers. Profiling provided new information about the performance behaviour of the UpWind project, as well as new insights into performance profiling. In order to prevent future performance regressions in the UpWind project, performance tests and a performance regression testing process were drafted. The performance tests were implemented using the Qt framework's QTestLib.
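The thesis implements its performance tests with Qt's QTestLib (a C++ framework); as a language-neutral illustration of the underlying idea, a baseline-based performance regression check might be sketched in Python as follows. The function, baseline, and tolerance here are hypothetical, not taken from the thesis:

```python
import time

def check_regression(func, baseline_seconds, tolerance=0.2, repeats=5):
    """Time `func` several times and compare the best run against a stored
    baseline; returns (passed, best_time). A best run slower than the
    baseline by more than `tolerance` (20% by default) counts as a
    performance regression."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        func()
        best = min(best, time.perf_counter() - start)
    return best <= baseline_seconds * (1.0 + tolerance), best
```

Run as part of regression testing, a check of this shape turns a profiling observation into an automated pass/fail criterion, which is what a drafted performance regression testing process aims for.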
2

Charla, Shiva Bhavani Reddy. "Examining Various Input Patterns Effecting Software Application Performance : A Quasi-experiment on Performance Testing." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-13587.

Abstract:
Nowadays, non-functional testing has a great impact on the real-time environment. Non-functional testing helps to analyse the performance of an application on both the server and the client. Load testing attempts to cause the system under test to respond incorrectly in situations that differ from its normal operation but are rarely encountered in real-world use. Examples include providing abnormal inputs to the software or placing real-time software under unexpectedly high loads. High loads are usually induced on an application to test its performance, but a particular pattern of low load could also stress a real-time system. For example, repeatedly making a request to the system every 11 seconds might cause a fault if the system transitions to a standby state after 10 seconds of inactivity. The primary aim of this study is to find various low-load input patterns affecting the software, rather than simply high-load inputs. A quasi-experiment was chosen as the research method for this study. Performance testing was performed on a web application with the help of a tool called HP LoadRunner. A comparison was made between low-load and high-load patterns to analyse the performance of the application and to identify bottlenecks under different loads.
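The 11-second example can be made concrete with a small simulation; the timeout, interval, and duration values below are hypothetical and only illustrate why a sparse request pattern can be worse per request than a dense one:

```python
def count_standby_wakeups(request_interval, standby_timeout, duration):
    """Simulate requests arriving every `request_interval` seconds at a
    system that enters standby after `standby_timeout` seconds of
    inactivity; count requests that hit a sleeping system and force a
    (potentially faulty or slow) wake-up transition."""
    wakeups = 0
    last_activity = 0.0
    t = request_interval
    while t <= duration:
        if t - last_activity > standby_timeout:
            wakeups += 1  # gap exceeded the timeout: the system was asleep
        last_activity = t
        t += request_interval
    return wakeups
```

With requests every 11 seconds against a 10-second standby timeout, every single request arrives just after the system has gone to standby, even though this pattern generates far less load than, say, a 5-second pattern, which never lets the system sleep at all.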
3

Khan, Rizwan Bahrawar. "Comparative Study of Performance Testing Tools: Apache JMeter and HP LoadRunner." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-12915.

Abstract:
Software testing plays a key role in software development. There are two approaches to software testing, manual testing and automated testing, which are used to detect faults. There are numerous automated software testing tools with different purposes, but it is always a problem to select a testing tool according to one's needs. In this research, the author compares two software testing tools, Apache JMeter and HP LoadRunner, to determine their usability and efficiency. To compare the tools, different parameters were selected to guide the tool evaluation process. To complete the objective of the research, a scenario-based survey was conducted and two different web applications were tested. This research found that Apache JMeter has an edge over HP LoadRunner in several aspects, including installation, interface, and learning curve.
4

Han, Xue. "CONFPROFITT: A CONFIGURATION-AWARE PERFORMANCE PROFILING, TESTING, AND TUNING FRAMEWORK." UKnowledge, 2019. https://uknowledge.uky.edu/cs_etds/84.

Abstract:
Modern computer software systems are complicated. Developers can change the behavior of a software system through software configurations. The large number of configuration options and their interactions make the tasks of software tuning, testing, and debugging very challenging. Performance is one of the key non-functional qualities, and performance bugs can cause significant performance degradation and lead to poor user experience. However, performance bugs are difficult to expose, primarily because detecting them requires specific inputs as well as specific configurations. While researchers have developed techniques to analyze, quantify, detect, and fix performance bugs, many of these techniques are not effective in highly configurable systems. To improve the non-functional qualities of configurable software systems, testing engineers need to be able to understand the performance influence of configuration options, adjust the performance of a system under different configurations, and detect configuration-related performance bugs. This research provides an automated framework that allows engineers to effectively analyze performance-influencing configuration options, detect performance bugs in highly configurable software systems, and adjust configuration options to achieve higher long-term performance gains. To understand real-world performance bugs in highly configurable software systems, we first perform a study of performance bug characteristics in three large-scale open-source projects. Many researchers have studied the characteristics of performance bugs from bug reports, but few have reported the experience of trying to replicate confirmed performance bugs from the perspective of non-domain experts such as researchers. This study reports the challenges of, and potential workarounds for, replicating confirmed performance bugs.
We also share a performance benchmark that provides real-world performance bugs for evaluating future performance testing techniques. Inspired by our performance bug study, we propose a performance profiling approach that can help developers understand how configuration options and their interactions influence the performance of a system. The approach uses a combination of dynamic analysis and machine learning techniques, together with configuration sampling techniques, to profile the program execution and analyze the configuration options relevant to performance. Next, the framework leverages natural language processing and information retrieval techniques to automatically generate test inputs and configurations that expose performance bugs. Finally, the framework combines reinforcement learning and dynamic state reduction techniques to guide the subject application towards achieving higher long-term performance gains.
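Exhaustively measuring every combination of options is infeasible in a highly configurable system, which is why frameworks of this kind rely on configuration sampling. A minimal sketch of random sampling over boolean options (the option names are hypothetical):

```python
import itertools
import random

def sample_configurations(options, n, seed=0):
    """Draw n distinct configurations, uniformly at random, from the full
    2^k space of k boolean options; each configuration maps option -> value."""
    rng = random.Random(seed)  # fixed seed keeps the sample reproducible
    space = list(itertools.product([False, True], repeat=len(options)))
    return [dict(zip(options, values)) for values in rng.sample(space, n)]
```

Each sampled configuration would then be benchmarked, and the measurements fed to a model that estimates which options, and which interactions between them, influence performance most.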
5

Khan, Mohsin Javed, and Hussan Iftikhar Iftikhar. "Performance Testing and Analysis of Modern Web Technologies." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-11177.

Abstract:
The thesis is an empirical case study to estimate the performance and variability of contemporary software frameworks used for web application development. The thesis can be divided into three phases. In Phase I, we theoretically explore and analyze PHP, EJB 3.0, and ASP.NET, considering the quality attributes ("ilities") of the mentioned technologies. In Phase II, we develop two identical web applications, an online component webstore (an application to purchase components online), in PHP and ASP.NET. In Phase III, we conduct automated testing to determine and analyze the applications' performance. We developed the web applications in PHP 5.3.0 and in Visual Studio 2008 using ASP.NET 3.5 to practically measure and compare their performance. We used SQL Server 2005 with ASP.NET 3.5 and MySQL 5.1.36 with PHP as database servers. The software architecture, CSS, database design, and database constraints were kept simple and the same for both applications. This similarity helps to establish a realistic comparison of the applications' performance and variability. The applications' performance and variability were measured with the help of automated scripts, which were used to generate thousands of requests on the application servers while downloading components simultaneously. More details of the performance testing can be found in chapters 6, 7, and 8.
We have gained a lot of knowledge from this thesis and are glad to complete our Software Engineering studies.
6

Penmetsa, Jyothi Spandana. "AUTOMATION OF A CLOUD HOSTED APPLICATION : Performance, Automated Testing, Cloud Computing." Thesis, Blekinge Tekniska Högskola, Institutionen för kommunikationssystem, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-12849.

Abstract:
Context: Software testing is the process of assessing the quality of a software product to determine whether it matches the existing requirements of the customer. Software testing is one of the "Verification and Validation" (V&V) software practices. The two basic techniques of software testing are black-box testing and white-box testing. Black-box testing focuses solely on the outputs generated in response to the supplied inputs, neglecting the internal components of the software, whereas white-box testing focuses on the internal mechanism of the software. To explore the feasibility of black-box and white-box testing under a given set of conditions, a proper test automation framework needs to be deployed. Automation is deployed in order to reduce manual effort and to perform testing continuously, thereby increasing the quality of the product. Objectives: In this research, a cloud-hosted application is automated using the TestComplete tool. The objective of this thesis is to verify the functionality of the cloud application, such as the test appliance library, through automation, and to measure the impact of the automation on the release cycles of the organisation. Methods: Automation is implemented using Scrum, an agile software development process. Using Scrum, working software can be delivered to customers incrementally and empirically, with its functionality updated in each increment. The test appliance library functionality is verified by deploying a testing device, thereby keeping track of automatic software downloads into the testing device and license updates on the device. Results: Automation of the test appliance functionality of the cloud-hosted application was implemented using the TestComplete tool, and the release cycles were found to be shortened.
Through automation of the cloud-hosted application, a reduction of nearly 24% in the release cycle can be observed, thereby reducing manual effort and increasing the quality of delivery. Conclusion: Automation of a cloud-hosted application requires no manual effort, so time can be utilised effectively and the application can be tested continuously, increasing efficiency.
7

Eada, Priyanudeep. "Experiment to evaluate an Innovative Test Framework : Automation of non-functional testing." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-10940.

Abstract:
Context. Performance testing, among other types of non-functional testing, is necessary to assess software quality. Most often, a manual approach is employed to test a system for its performance, and this approach has several setbacks. The existing body of knowledge lacks empirical evidence on the automation of non-functional testing and is largely focused on functional testing. Objectives. The objective of the present study is to evaluate a test framework that automates performance testing. A large-scale distributed project is selected as the context to achieve this objective; the rationale for choosing such a project is that the proposed test framework was designed to be adapted and tailored to any project's characteristics. Methods. An experiment was conducted with 15 participants at the Ericsson R&D department, India, to evaluate the automated test framework. A repeated-measures design with counterbalancing was used to measure the accuracy of, and the time taken with, the test framework. To assess the ease of use of the proposed framework, a questionnaire was distributed among the experiment participants. Statistical techniques were used to accept or reject the hypotheses, and the data analysis was performed using Microsoft Excel. Results. The automated test framework is observed to be superior to the traditional manual approach. There is a significant reduction in the average time taken to run a test case, the number of errors occurring in a typical testing process is minimised, and the time spent by a tester during the actual test is greatly reduced with the automated approach. Finally, as perceived by software testers, the automated approach is easier to use than the manual approach. Conclusions. It can be concluded that the automation of non-functional testing results in an overall reduction in project costs and improves the quality of the software tested.
This addresses important performance aspects such as system availability, durability, and uptime. It was observed that it is not sufficient for software to meet its functional requirements; it must also conform to its non-functional requirements.
8

Vodrážka, Michal. "Způsoby definování požadavků pro výkonnostní testování softwaru." Master's thesis, Vysoká škola ekonomická v Praze, 2015. http://www.nusl.cz/ntk/nusl-193128.

Abstract:
This thesis focuses on ways of defining requirements for software performance testing, both in practice (based on a survey) and in theory. The aim of the thesis is to analyze the ways of defining performance requirements that are actually used in IT projects. In order to achieve this goal, it is necessary to define the concepts of performance testing, to carry out and evaluate a survey of the ways of defining performance requirements used in practice, and then to analyze those ways in terms of their applicability to different types of IT projects. The contribution of this thesis is a comprehensive introduction to software performance testing and, above all, insight into the ways of defining performance requirements used in practice and the problems associated with them, obtained through a survey that I carried out and evaluated myself. The conclusions resulting from this survey summarize which ways of defining performance requirements are applied to specific types of IT projects, which of these ways worked, and which problems occur in certain cases in practice. The thesis is divided into a theoretical and a practical part. The theoretical part explains the basic concepts associated with software performance testing; it also describes the methodology of defining performance requirements according to Tom Gilb. The practical part focuses on carrying out and evaluating the survey of the ways of defining performance requirements used in practice, and on analyzing these ways with respect to certain types of projects.
9

Johnson, Gloria. "The Effect of Applying Design of Experiments Techniques to Software Performance Testing." ScholarWorks, 2015. https://scholarworks.waldenu.edu/dissertations/226.

Abstract:
Effective software performance testing is essential to the development and delivery of quality software products. Many software testing investigations have reported software performance testing improvements, but few have quantitatively validated measurable improvements across an aggregate of studies. This study addressed that gap by conducting a meta-analysis to assess the relationship between applying Design of Experiments (DOE) techniques in the software testing process and the reported software performance testing improvements. Software performance testing theories and DOE techniques composed the theoretical framework for this study. Software testing studies (n = 96) were analyzed, where half had DOE techniques applied and the other half did not. Five research hypotheses were tested, with findings measured as (a) the number of detected defects, (b) the rate of defect detection, (c) the phase in which the defect was detected, (d) the total number of hours it took to complete the testing, and (e) an overall hypothesis that included all measurements for all findings. The data were analyzed by first computing standardized difference-in-means effect sizes, then through the Z test, the Q test, and the t test in statistical comparisons. Results of the meta-analysis showed that applying DOE techniques in the software testing process improved software performance testing (p < .05). These results have social implications for the software testing industry and software testing professionals, providing another empirically validated testing methodology. Software organizations can use this methodology to differentiate their software testing process, to create higher-quality products, and to benefit the consumer and society in general.
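The standardized difference-in-means effect size used in such a meta-analysis can be sketched as follows; the example numbers are invented for illustration and are not data from this dissertation:

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized difference in means using the pooled standard deviation."""
    pooled = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
                       / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled

def hedges_g(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d with the usual small-sample bias correction J = 1 - 3/(4*df - 1)."""
    df = n1 + n2 - 2
    return (1 - 3 / (4 * df - 1)) * cohens_d(mean1, sd1, n1, mean2, sd2, n2)

# Example: two groups of 48 studies with means 10 vs 8 and equal SD 2
# give d = 1.0 and a slightly smaller bias-corrected g.
```

Per-study effect sizes of this form can then be pooled and compared across the DOE and non-DOE groups with the Z, Q, and t tests mentioned above.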
10

Abdeen, Waleed, and Xingru Chen. "Model-Based Testing for Performance Requirements : A Systematic Mapping Study and A Sample Study." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-18778.

Abstract:
Model-based testing (MBT) is a method that supports automated test design by using a model. Although it is adopted in industry, it is still an open area for performance requirements. We aim to look into MBT for performance requirements and to find a framework that can model such requirements. We conducted a systematic mapping study, followed by a sample study on software requirements specifications; we then introduced the Performance Requirements Verification and Validation (PRVV) model and finally completed another sample study to see how the model works in practice. We found that there are many models that can be used for performance requirements, but their maturity is not yet sufficient. MBT can be implemented in the context of performance, and it has been gaining momentum in recent years. The PRVV model we developed can verify performance requirements and help to generate test cases.
11

Silveira, Maicon Bernardino da. "Canopus : a domain-specific language for modeling performance testing." Pontifícia Universidade Católica do Rio Grande do Sul, 2016. http://tede2.pucrs.br/tede2/handle/tede/6861.

Abstract:
Performance is a fundamental quality of software systems. Performance testing is a technique able to reveal system bottlenecks and/or a lack of scalability of the up-and-running environment. However, the software development cycle usually does not apply this effort in the early development phases, resulting in a weak elicitation process for performance requirements and difficulties for the performance team in integrating them into the project scope. Model-Based Testing (MBT) is an approach to automate the generation of test artifacts from system models. By doing so, communication among teams is improved, given that the test information is aggregated in the system models from the early stages onward, with the aim of automating the testing process. The main contribution of this thesis is to propose a Domain-Specific Language (DSL) for modeling performance testing of Web applications. The language is called Canopus, in which a graphical model and a natural language are proposed to support performance modeling and the automatic generation of test scenarios and scripts. Furthermore, this work provides an example of use and an industrial case study to demonstrate the use of Canopus. Based on the results obtained from these studies, we can infer that Canopus can be considered a valid DSL for modeling performance testing. Our motivation for this study was to investigate whether a DSL for modeling performance testing can improve the quality, cost, and efficiency of performance testing. Therefore, we also carried out a controlled empirical experiment to evaluate the effort (time spent) when comparing Canopus with another industrial approach, UML. Our results indicate that, for performance modeling, the effort using Canopus was lower than using UML; our statistical analysis showed that this result was valid, i.e., that designing performance testing models with Canopus is better than with UML.
12

Björn, Johansson. "End-to-end performance testing of a healthcare alarm system." Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-39715.

Abstract:
Digital services involving large systems with multiple users are ubiquitous in modern society. These systems are often complicated and made up of multiple devices and communication protocols. A fundamental problem in this context is how the behavior of a system changes as the number of users varies; in particular, when do the system's resources saturate and how does the system behave close to saturation. Performance testing is key for addressing this fundamental problem and is the scope of this project. Performance tests can be used for inference of, for example, a system's scalability; furthermore, they can be used to provide general guarantees on the services that can be delivered. Performance testing at the company Phoniro AB is considered. The platform Phoniro Care is the back-end service for the company's products. The Phoniro 6000 system is one of the products that uses Phoniro Care; it allows for multiple users and offers alarm services. The primary focus of this project is to determine the behavior of that system under varying levels of simulated load, and furthermore to analyze the data extracted from such simulations and tests. The open-source software JMeter was used as the tool for performance testing. It was selected from a set of candidate tools that have been evaluated in the literature based on various performance criteria. The results are presented as graphs showing the time evolution of different performance indicators. A conclusion of this work is that the implemented performance testing framework helps to answer questions about the system's behavior, questions that are important for the company's further development and expansion of the system. Furthermore, the proposed framework establishes a foundation for further inquiries on the subject.
13

Onifade, Bosede. "An investigation into the influence of testing and development methods on software project performance." Thesis, Birmingham City University, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.405673.

14

Javorský, Daniel. "Systém pro výkonnostní a zátěžové testování." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2016. http://www.nusl.cz/ntk/nusl-255343.

Abstract:
This thesis is concerned with performance and stress testing of the Xtend product developed by Xura, Inc. Software development knowledge, theoretical knowledge of testing, and testing tools are described in the opening chapters, together with the key features and services provided by Xtend. Emphasis was put on the implementation of a performance and stress testing tool that focuses on short-term and long-term testing scenarios; the output of this tool serves Xtend developers. Part of this thesis also presents the results of the stress and performance tests.
15

Fong, Fredric, and Mustafa Raed. "Performance comparison of GraalVM, Oracle JDK and OpenJDK for optimization of test suite execution time." Thesis, Mittuniversitetet, Institutionen för data- och systemvetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-43169.

Abstract:
Testing, when done correctly, is an important part of software development, since it is a measure of the quality of the software in question. Most highly rated software projects therefore have test suites implemented that include unit tests, integration tests, and other types of tests. However, a challenge regarding the test suite is that it needs to run each time new code changes are proposed. From the developer's perspective, it might not always be necessary to run the whole test suite for small code changes. Previous studies have tried to tackle this problem, e.g., by only running a subset of the test suite. This research investigates running the whole test suite of Java projects faster, by testing the Java Development Kits (JDKs) GraalVM Enterprise Edition (EE) and Community Edition (CE) against Oracle JDK and OpenJDK for Java 8 and 11. The research used the test suite execution time as a metric to compare the JDKs. Another metric considered was the number of test cases in a suite, used to try to find a breaking point at which GraalVM becomes beneficial. The tests were performed on two test machines, where the first used 20 out of 48 tested projects and the second used 11 out of 43 projects tested. Looking at the average of five runs, GraalVM EE 11 performed best in 11 out of 18 projects on the first test machine, compared to its closest competitor, and in 7 out of 11 projects on the second test machine, for both JDK 8 and 11. However, GraalVM EE 8 did not give any benefits on the first test machine compared to its competitors, which might indicate that the hardware plays a vital role in the performance of GraalVM EE 8. The number of test cases could not be used to determine a breaking point for when GraalVM is beneficial, but it was observed that GraalVM did not show any benefits for projects with an execution time of fewer than 39 seconds.
It was also observed that GraalVM CE did not perform well compared to the other JDKs; in all cases its performance was not competitive.
16

Magapu, Akshay Kumar, and Nikhil Yarlagadda. "Performance, Scalability, and Reliability (PSR) challenges, metrics and tools for web testing : A Case Study." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-12801.

Abstract:
Context. Testing of web applications is an important task, as it ensures the functionality and quality of web applications. The quality of a web application is assessed through non-functional testing. There are many quality attributes, such as performance, scalability, reliability, usability, accessibility, and security; among these, the PSR attributes (performance, scalability, and reliability) are the most important and the most commonly considered in practice. However, very few empirical studies have been conducted on these three attributes. Objectives. The purpose of this study is to identify metrics and tools that are available for testing these three attributes, and to identify the challenges faced while testing them, both in the literature and in practice. Methods. In this research, a systematic mapping study was conducted in order to collect information regarding the metrics, tools, challenges, and mitigations related to the PSR attributes. The required information was gathered by searching five scientific databases. We also conducted a case study to identify the metrics, tools, and challenges of the PSR attributes in practice. The case study was conducted at Ericsson, India, where eight subjects were interviewed; four subjects working at other companies (in India) were also interviewed in order to validate the results obtained from the case company. In addition, a few documents from previous projects at the case company were collected for data triangulation. Results. A total of 69 metrics, 54 tools, and 18 challenges were identified from the systematic mapping study, and 30 metrics, 18 tools, and 13 challenges were identified from the interviews. Data were also collected from documents, yielding a further 16 metrics, 4 tools, and 3 challenges. Based on the analysis of these data, we formed a consolidated list of tools, metrics, and challenges. Conclusions. We found that the metrics available in the literature overlap with the metrics used in practice.
However, the tools found in the literature overlap with practice only to some extent. The main reason for this deviation is the limitations identified for the tools, which led the case company to develop its own in-house tool. We also found that the challenges partially overlap between the state of the art and practice. We were unable to collect mitigations for all of these challenges from the literature, and hence further research is needed. Among the PSR attributes, most of the literature concerns the performance attribute, and most of the interviewees were comfortable answering questions related to performance; we therefore conclude that there is a lack of empirical research on the scalability and reliability attributes. Our research deals with the PSR attributes in particular, and there is scope for further work in this area: it could be extended to other quality attributes and conducted on a larger scale (considering a larger number of companies).
APA, Harvard, Vancouver, ISO, and other styles
17

Olsson, Joakim, and Johan Liljegren. "Examining current academic and industry Enterprise Service Bus knowledge and what an up-to-date testing framework could look like." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3772.

Full text
Abstract:
Nowadays integration and interoperability can make or break an enterprise's business success. In the huge space that is software engineering, a lot of ESBs have emerged, but with vast differences in implementation, patterns and architectures. To create order in this disarray, studies have been made, features evaluated and performance measured. This is a good thing, but it does not clear up all the confusion, and it adds another layer of confusion regarding the studies and tests themselves. The aim of this thesis is to make an attempt at rectifying some of the disorder, first by evaluating the current body of knowledge, and then by providing a humble attempt at a transparent test framework which could be used for a more coherent ESB evaluation.
APA, Harvard, Vancouver, ISO, and other styles
18

Yan, Dacong. "Program Analyses for Understanding the Behavior and Performance of Traditional and Mobile Object-Oriented Software." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1406064286.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Waqas, Muhammad. "A simulation-based approach to test the performance of large-scale real time software systems." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-20133.

Full text
Abstract:
Background: A real-time system operates under time constraints, and its correctness depends on the time at which results are generated. Different industries use different types of real-time systems, such as telecommunication, air traffic control, power generation, and spacecraft systems. One category of real-time systems is required to handle millions of users and operations at the same time; these are called large-scale real-time systems. In the telecommunication sector, many real-time systems are large scale, as they need to handle millions of users and resources in parallel. Performance is an essential aspect of this type of system; unpredictable behavior can cost telecom operators millions of dollars in a matter of seconds. The problem is that existing models for performance analysis of these types of systems are not cost-effective and require a great deal of knowledge to deploy. In this context, we have developed a performance simulator tool based on XGBoost, Random Forest, and Decision Tree models. Objectives: The thesis aims to develop a cost-effective approach to support the performance analysis of large-scale real-time telecommunication systems. The idea is to develop and implement a solution that simulates the telecommunication system using some of the most promising identified factors that affect system performance. Methods: We performed an improvement case study at Ericsson. Performance factors were identified through a dataset generated in a performance testing session, an investigation conducted on the same system, and unstructured interviews with the system experts. The approach was selected through a literature review. The Performance Simulator was validated through static analysis and user feedback collected with a questionnaire.
Results: The results show that the Performance Simulator can be helpful for the performance analysis of large-scale real-time telecommunication systems; opinions were mixed on its ability to support the performance analysis of other real-time systems. Conclusions: The developed and validated approach demonstrates potential usefulness in performance analysis and could benefit significantly from further enhancements. The specific amount of data used for training might limit the generalization of the research to other real-time systems. In the future, this study could be extended with more inputs from large-scale real-time systems.
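The thesis trains XGBoost, Random Forest and Decision Tree regressors on data from performance-testing sessions. As a much simpler illustration of the same train-then-simulate idea, a toy nearest-neighbour predictor over recorded (workload, response time) pairs might look like this (all numbers and feature names are invented, not taken from the thesis):

```python
import math

# Toy stand-in for the thesis's learned performance models (the actual
# work trains XGBoost / Random Forest / Decision Tree regressors).
# Training data: (users, request_rate) -> observed response time (ms),
# as would be collected from a performance-testing session.
TRAINING_DATA = [
    ((100, 50), 12.0),
    ((500, 250), 48.0),
    ((1000, 500), 110.0),
    ((2000, 1000), 260.0),
]

def predict_response_time(users, rate):
    """1-nearest-neighbour 'simulation' of response time for a workload."""
    def dist(features):
        u, r = features
        return math.hypot(u - users, r - rate)
    _, latency = min(TRAINING_DATA, key=lambda row: dist(row[0]))
    return latency

print(predict_response_time(900, 450))  # closest recorded profile is (1000, 500)
```

A real simulator would use far more workload factors and a proper learned model; the sketch only shows how recorded test data can stand in for re-running an expensive load test.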
APA, Harvard, Vancouver, ISO, and other styles
20

Klaška, Jan. "Zátěžové testování informačních systémů." Master's thesis, Vysoká škola ekonomická v Praze, 2008. http://www.nusl.cz/ntk/nusl-10459.

Full text
Abstract:
This thesis is focused on performance testing, especially load testing, of information systems. The first part defines software quality and provides the reader with the fundamentals of software testing; the software testing process is described in detail. The rest of the paper is oriented towards performance testing and its processes and attributes. The goals and principles of performance testing are specified, as well as the types of performance tests and their purposes. A load testing methodology is defined, including metrics for the reliability and efficiency of information systems. The metrics are derived from the well-known international standard ISO 9126 - Software quality model. The methodology and metrics are validated on a load test project in which the author of this work participated. One chapter is also devoted to the tools used for load testing automation. This paper is aimed at testing experts as well as other readers who want to learn about load testing, software efficiency or software quality testing in general.
APA, Harvard, Vancouver, ISO, and other styles
21

Martinák, Lukáš. "Výkonnostní testování webových aplikací." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2012. http://www.nusl.cz/ntk/nusl-236576.

Full text
Abstract:
This thesis is about software testing and is mainly focused on performance testing of web applications. The introductory chapters outline the problems of testing, identify key issues, explain general concepts and software quality, describe the differences between desktop and web application testing and, finally, introduce performance testing. A web application suitable for testing is then chosen (Kentico CMS 6) and existing tools for performance and load testing are compared. One of them (Microsoft Visual Studio 2010 Ultimate Edition) is selected for the further testing. Several test scenarios are designed and implemented (including demonstrations of creating, editing and debugging tests, extending them with plug-ins, maintaining them, running them in a distributed environment, etc.). Finally, testing reports and suggestions for further testing are presented.
APA, Harvard, Vancouver, ISO, and other styles
22

Fayyaz, Ali Raza, and Madiha Munir. "Performance Evaluation of PHP Frameworks (CakePHP and CodeIgniter) in relation to the Object-Relational Mapping, with respect to Load Testing." Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4116.

Full text
Abstract:
Context: Information technology is playing an important role in creating innovation in business. Due to the increasing demand for information technology, web development has become an important field. PHP is an open source language which is widely used in web development. PHP is used to develop dynamic web pages and has the ability to connect with databases. PHP has several good features, such as cross-platform compatibility, scalability and efficient execution, and it is an open source technology. These features make it a good choice for web development. However, if PHP is used without a framework, application maintenance becomes difficult and performance is considerably reduced. To resolve these issues, different frameworks have been introduced by web development communities on the internet. These frameworks are based on the Model-View-Controller design pattern and provide common functionalities and classes in the form of helpers, components and plug-ins to reduce development time. Due to features such as robustness, scalability, maintainability and performance, these frameworks are widely used for web development in PHP, with performance considered the most important factor. Objectives: The objective of this thesis is to compare and analyze the effect of the data abstraction layer (ORM) on the performance of two PHP frameworks, CakePHP and CodeIgniter. CakePHP has built-in support for object-relational mapping (ORM), whereas CodeIgniter does not. We considered load testing and stress testing to measure the performance of these two frameworks. Methods: We performed an experiment to measure the performance of the CakePHP (with ORM) and CodeIgniter (no ORM) frameworks.
We developed two applications, one in each PHP framework, with the same scope and design, and measured their performance with respect to load testing using an automated testing tool. The results were obtained by testing the performance of both applications on local and live servers. Conclusions: After analyzing the results, we conclude that CodeIgniter is suitable for small and medium-sized applications, but CakePHP is better for large and enterprise-level applications, as under stress conditions CakePHP performed better than CodeIgniter in both the local and the live environment.
APA, Harvard, Vancouver, ISO, and other styles
23

Vasconcelos, Jansson Erik Sven. "Analysis of Test Coverage Data on a Large-Scale Industrial System." Thesis, Linköpings universitet, Programvara och system, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-131815.

Full text
Abstract:
Software testing verifies a program's functional behavior, an important process when engineering critical software. The degree of testing is measured with code coverage, which describes the amount of production code exercised by tests. Both concepts are used extensively for industrial systems. Previous research has shown that gathering and analyzing test coverage becomes problematic on large-scale systems. Here, development experience, implementation feasibility, coverage measurements and an analysis method are explored, providing potential solutions and insights into these issues. Methods are outlined for constructing and integrating such a gathering and analysis system in a large-scale project, along with the problems encountered and the remedies applied. Instrumentation for gathering coverage information affects performance negatively; these measurements are provided. Since large-scale test suite measurements are quite lacking, the line, branch, and function criteria are presented here. Finally, an analysis method is proposed, using coverage set operations and Jaccard indices to find test similarities. The results imply that execution time was significantly affected when gathering coverage: [2.656, 2.911] hours for instrumented software, compared with [2.075, 2.260] hours originally on the system under test, given alpha = 5% and n = 4, while both processor and memory usage results were inconclusive. The measured criteria were (59.3, 70.7, 24.6)% for these suites. The analysis method shows potential areas of test redundancy.
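The proposed analysis compares the coverage sets of pairs of tests with the Jaccard index, flagging highly similar tests as potentially redundant. A minimal sketch of that idea in Python (the test names and coverage data are hypothetical, not taken from the thesis's system):

```python
def jaccard(cov_a, cov_b):
    """Jaccard index of two coverage sets (e.g. sets of covered line ids)."""
    if not cov_a and not cov_b:
        return 1.0  # two empty coverages are trivially identical
    return len(cov_a & cov_b) / len(cov_a | cov_b)

# Hypothetical per-test line coverage: test name -> covered line ids.
coverage = {
    "test_login":  {1, 2, 3, 4, 5},
    "test_logout": {1, 2, 3, 4, 6},
    "test_report": {10, 11, 12},
}

# Pairwise similarity; a high index flags potentially redundant tests.
pairs = [(a, b, jaccard(coverage[a], coverage[b]))
         for a in coverage for b in coverage if a < b]
for a, b, score in pairs:
    print(f"{a} vs {b}: {score:.2f}")
```

With real data the coverage sets would come from an instrumented test run; the set operations themselves are exactly as simple as shown.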
APA, Harvard, Vancouver, ISO, and other styles
24

Ženíšek, Jan. "Projekt vývoje Integrovaného testovacího nástroje." Master's thesis, Vysoká škola ekonomická v Praze, 2014. http://www.nusl.cz/ntk/nusl-194727.

Full text
Abstract:
Nowadays the speed of developing new software products is a key to success, whether the aim is to satisfy customers' needs or to get ahead of competitors and fill a market gap. As development speed increases, so do the demands on the software quality assurance process. There are two types of tools that support software quality assurance. On the one hand, there are comprehensive commercial testing tools, which usually include many functions but are extremely expensive to purchase. On the other hand, there are open-source tools, which are available for free, run on many operating systems and can be modified; unfortunately, their functions usually cover only a certain subset of software quality assurance. The company TRASK solution a.s. decided to change this situation and asked the Software Quality Assurance competence centre at the University of Economics, Prague to create an Integrated Testing Node (ITN) that would combine the advantages of open-source tools while offering as broad a range of functions as commercial solutions. The purpose of this thesis is to describe the relevant phases of creating the Integrated Testing Node from both the factual and the methodological point of view. This aim is divided into partial goals: analysing the task and designing the system, analysing the portfolio of open-source products, choosing the most suitable tools for integration, choosing the method of building the information system, evaluating the client's feedback and proposing the future development of the tool. The biggest contribution of this thesis is the realisation of the ITN project, which can be used in informatics classes at the University of Economics, Prague, as well as for software quality control in commercial companies.
APA, Harvard, Vancouver, ISO, and other styles
25

Greibus, Justinas. "Duomenų bazių našumo tyrimo įrankis." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2010. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2010~D_20100813_113003-08056.

Full text
Abstract:
The analysis of database performance is a common challenge in software testing today. Several methodologies for analysing database performance exist on the market; however, tools based on these methodologies are usually available only to a narrow circle of privileged users. Based on the results of this analysis, this master's thesis investigates a new methodology, built on several existing ones. The database performance audit methodology discussed in this project is based on the principles of software load and stress testing. To identify database performance issues, the values of performance parameters are registered during the execution of automated scenarios. The user can re-execute historical scenarios and compare the results of separate executions. The generated reports, with detailed data, facilitate the analysis of database performance.
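The methodology above re-executes stored scenarios, registers performance-parameter values during each run, and compares the results of separate executions. A minimal sketch of that record-and-compare loop (the query names, timings and 25% tolerance are illustrative assumptions, not the tool's actual design):

```python
import time

def run_scenario(queries, execute):
    """Execute a load scenario and register per-query timings (the
    'performance parameter values' recorded for each run)."""
    timings = {}
    for name, query in queries:
        start = time.perf_counter()
        execute(query)
        timings[name] = time.perf_counter() - start
    return timings

def compare_runs(historical, current, tolerance=0.25):
    """Flag queries whose current timing regressed beyond a tolerance."""
    return [name for name, t in current.items()
            if name in historical and t > historical[name] * (1 + tolerance)]

# Hypothetical recorded runs (seconds); in the real tool these come
# from re-executing a stored historical scenario against the database.
historical = {"select_users": 0.10, "join_orders": 0.40}
current = {"select_users": 0.11, "join_orders": 0.65}
print(compare_runs(historical, current))  # join_orders slowed by more than 25%
```

In the real tool the `execute` callback would issue SQL against the database under test, and the timings of many parameters would be stored for later re-comparison.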
APA, Harvard, Vancouver, ISO, and other styles
26

Dostál, Adam. "Nástroj pro funkční testování." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2018. http://www.nusl.cz/ntk/nusl-377054.

Full text
Abstract:
This work focuses on the implementation of functional tests within a software project in the energy sector at the company Unicorn. The theoretical part describes project methodologies for software development in general and the Rational Unified Process (RUP) methodology in particular. In addition, it covers testing methods and the FURPS+ quality management model. The last section introduces functional tests, including a description of those used to test the developed application. The practical part consists of a description of the Nemo Link energy project, the developed application Nemo Link Dispatch System (NDS), the individual application modules, the test environment, the tools used and the proposed test methodology. Based on this methodology, selected tests are performed and evaluated.
APA, Harvard, Vancouver, ISO, and other styles
27

Bard, Robin, and Simon Banasik. "En prestanda- och funktionsanalys av Hypervisors för molnbaserade datacenter." Thesis, Malmö högskola, Fakulteten för teknik och samhälle (TS), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20491.

Full text
Abstract:
A growing trend of cloud-based services can be witnessed in today's information society. To implement cloud-based services, a method called virtualization is used. This method reduces the need for physical computer systems in a datacenter and facilitates sustainable environmental and economical development. Cloud-based services create societal benefits by allowing new operators to quickly launch business-dependent services. Virtualization is applied by a so-called Hypervisor, whose task is to distribute cloud-based services. After evaluating existing scientific studies, we found that there are discernible differences in performance and functionality between different Hypervisors. We therefore chose to perform a functional and performance analysis of Hypervisors from the manufacturers with the largest market share: Microsoft Hyper-V Core Server 2012, VMware ESXi 5.1.0 and Citrix XenServer 6.1.0 Free edition. Our client, the Swedish Armed Forces, expressed a great need for this research. The thesis consists of a theoretical base describing the techniques behind virtualization and its applicable fields. The implementation comprises two main methods: a qualitative and a quantitative study. The quantitative investigation is based on a standard test system defined by the limitations of each Hypervisor. This system was used for a series of performance tests in which data transfers were initiated and sampled by automated testing tools. The purpose of the testing tools was to simulate workloads that deliberately stressed CPU and I/O in order to determine the performance differences between Hypervisors. The qualitative method comprised an assessment of the functionalities and limitations of each Hypervisor. Through empirical analysis of the quantitative measurements, we were able to determine the cause of each Hypervisor's performance.
The results revealed a correlation between Hypervisor performance and the specific type of data transfer it was exposed to. The Hypervisor that exhibited good performance results in all data transfers was ESXi. The qualitative research revealed that the Hypervisor offering the most functionality and the fewest constraints was Hyper-V. We conclude that ESXi is most suitable for smaller datacenters that do not intend to expand their operations, while a larger datacenter that needs cloud-service-oriented functionalities and greater hardware resources should choose Hyper-V when implementing cloud-based services.
APA, Harvard, Vancouver, ISO, and other styles
28

WENG, LI-MIN, and 翁歷民. "Applying Contingency Model Embedded with Software Testing to Enhance Software Quality Performance." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/fnfxx3.

Full text
Abstract:
Master's thesis
National Taipei University of Education
Master's Program, Department of Computer Science
Academic year 106 (ROC calendar)
Due to ever-changing technology and rapid environmental change, enterprises are under pressure when developing products. It is therefore necessary to rapidly develop new generations of products to keep up with the pace of change and meet market demand, and the challenge is to complete such projects in a short period of time. The software and hardware development process must avoid functional errors in the product, which give users a perception of poor software quality and, in serious cases, lead to product returns. Software testing plays an important role in the software development process and directly affects software quality: it can reduce software problems and improve quality. The purpose of this study is to explore the risk factors in the software project development process in enterprises, and to address software testing problems and risk assessment through contingency model theory in order to reduce software risk and improve software quality, ultimately enhancing user satisfaction and trust.
APA, Harvard, Vancouver, ISO, and other styles
29

Ho, Pang-Ning, and 賀邦寧. "Empirical Evidence and Performance Evaluation of Automated Software Testing - A Case Study of A Company's Software Regression Testing System." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/73441506797430218763.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Management Science Group, Executive Master's Program, College of Management
Academic year 98 (ROC calendar)
The application of information systems in business is becoming more common and important with the popularity of global e-commerce. As information systems grow more complicated, with highly integrated modules, their quality becomes an issue for IT departments. Software testing is a major segment of software quality management, and its cost accounts for a high proportion of software development. Manual software testing is not only time- and effort-consuming but also lacks proper verification standards. This study examines the performance of building a software regression testing system using an automated testing tool to solve the problems of manual testing, and cross-references the individual company's practice and user interviews with the literature. Company A provides testing services in the semiconductor industry. The manufacturing execution system (MES) it uses contains all work-process and equipment-utilization information; it is a heavily used transactional system that runs 24/7. To serve and fulfill customer needs, system operation integration and customization are required, and the frequent changes to programs influence the performance and quality of the system. Therefore, the software system company A adopts is a more than adequate subject for this study. The performance of applying automated testing to the information system and the limitations of the automated testing tool are analyzed through the execution and verification of the software regression testing system in company A. IT managers can refer to the benefits and case illustrations in this study to make the process of building automated testing smoother and better suited to the actual demands of enterprises.
APA, Harvard, Vancouver, ISO, and other styles
30

Jiang, Zhen Ming. "Automated Analysis of Load Testing Results." Thesis, 2013. http://hdl.handle.net/1974/7775.

Full text
Abstract:
Many software systems must be load tested to ensure that they can scale up under high load while maintaining functional and non-functional requirements. Studies show that field problems are often related to systems not scaling to field workloads instead of feature bugs. To assure the quality of these systems, load testing is a required testing procedure in addition to conventional functional testing procedures, such as unit and integration testing. Current industrial practices for checking the results of a load test remain ad-hoc, involving high-level manual checks. Few research efforts are devoted to the automated analysis of load testing results, mainly due to the limited access to large scale systems for use as case studies. Approaches for the automated and systematic analysis of load tests are needed, as many services are being offered online to an increasing number of users. This dissertation proposes automated approaches to assess the quality of a system under load by mining some of the recorded load testing data (execution logs). Execution logs, which are readily available yet rarely used, are generated by output statements which developers insert into the source code. Execution logs are hard to parse and analyze automatically due to their free-form structure. We first propose a log abstraction approach that uncovers the internal structure of each log line. Then we propose automated approaches to assess the quality of a system under load by deriving various models (functional, performance and reliability models) from the large set of execution logs. Case studies show that our approaches scale well to large enterprise and open source systems and output high precision results that help load testing practitioners effectively analyze the quality of the system under load.
Thesis (Ph.D, Computing) -- Queen's University, 2013-01-26 22:58:29.881
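The log-abstraction step described in the abstract uncovers the template behind each free-form log line so that lines emitted by the same output statement can be grouped. A simplified, regex-based sketch of that idea (the patterns and sample logs are illustrative; the dissertation's actual technique is more involved):

```python
import re
from collections import Counter

def abstract_log_line(line):
    """Collapse dynamic values so lines produced by the same output
    statement map to one template (a simplified log-abstraction step)."""
    line = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", line)  # pointers / ids
    line = re.sub(r"\d+", "<NUM>", line)             # numbers
    line = re.sub(r"'[^']*'", "<STR>", line)         # quoted strings
    return line

logs = [
    "user 42 logged in from 10.0.0.7",
    "user 7 logged in from 10.0.0.9",
    "session 0xdeadbeef expired after 300 s",
]

# Group raw lines by recovered template and count occurrences.
templates = Counter(abstract_log_line(l) for l in logs)
for template, count in templates.items():
    print(count, template)
```

Once lines are abstracted into templates, the event sequences behind them can feed the functional, performance and reliability models the dissertation derives.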
APA, Harvard, Vancouver, ISO, and other styles
31

Yang, Kai-wei, and 楊鎧瑋. "A Testing Platform and QA Software Development forWTPMS Performance Evaluation." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/52376690952393035824.

Full text
Abstract:
Master's thesis
Feng Chia University
Graduate Institute of Automatic Control Engineering
Academic year 96 (ROC calendar)
In developing Wireless Tire Pressure Monitoring Systems (WTPMS), product characteristics measurement and reliability proof are two major and critical issues, and no standard WTPMS testing facility is available so far in the automotive industry. The purpose of this thesis is to design a testing platform with enhanced performance to improve system quality and control efficiency. The first generation of the test platform was made of aluminium alloy board, and products had to be locked into an airtight chamber manually. Because the structure was made of metal panels, its airtightness was poor; moreover, the pressure environment could not reach above 50 psi, and there was no centrifugal testing. In view of this, the thesis focuses on three main parts: pressure testing, centrifugal testing, and the development of the human-machine-interface QA software. The thesis has accomplished the following: the design of the centrifugal device in the airtight chamber, pressure control and detection in the chamber, centrifugal-force control and detection in the chamber, the input/output interface, and the development of the human-machine interface and quality assurance software. These automatically obtain WTPMS product characteristics, report error codes of the test object to the operator, and provide concise charts and a graphic interface to engineers for further analysis, in order to meet mass-production and quality-control requirements.
APA, Harvard, Vancouver, ISO, and other styles
32

CHAVALI, SRIKAVYA. "AUTOMATION OF A CLOUD HOSTED APPLICATION : Performance, Automated Testing, Cloud Computing." Thesis, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-12846.

Full text
Abstract:
Context: Software testing is the process of assessing the quality of a software product to determine whether it matches the customer's existing requirements. Software testing is one of the "Verification and Validation," or V&V, software practices. The two basic techniques of software testing are black-box testing and white-box testing. Black-box testing focuses solely on the outputs generated in response to the supplied inputs, neglecting the internal components of the software, whereas white-box testing focuses on the internal mechanism of the application. To explore the feasibility of black-box and white-box testing under a given set of conditions, a proper test automation framework needs to be deployed. Automation is deployed in order to reduce manual effort and to perform testing continuously, thereby increasing the quality of the product.   Objectives: In this research, a cloud-hosted application is automated using the TestComplete tool. The objective of this thesis is to verify the functionality of the cloud application, known as the Test Data Library or Test Report Analyzer, through automation, and to measure the impact of the automation on the organization's release cycles.   Methods: Automation is implemented using Scrum, an agile software development process. With Scrum, working software can be delivered to customers incrementally and empirically, with its functionalities updated along the way. The Test Data Library / Test Report Analyzer functionality of the cloud application is verified using a deployed testing device, so that passed and failed test cases can be analyzed.   Results: The Test Report Analyzer functionality of the cloud-hosted application was automated using TestComplete, and the impact on release cycles was reduced.
With automation, a change of nearly 24% in release cycles was observed, reducing manual effort and increasing the quality of delivery.   Conclusion: Automating a cloud-hosted application removes manual effort, so time can be utilized effectively and the application can be tested continuously, increasing its efficiency and quality.
APA, Harvard, Vancouver, ISO, and other styles
33

Horký, Vojtěch. "Výkon softwaru jako faktor při agilních metodách vývoje." Doctoral thesis, 2018. http://www.nusl.cz/ntk/nusl-392415.

Full text
Abstract:
Broadly, agile software development is an approach where code is frequently built, tested and shipped, leading to short release cycles. An extreme version is the DevOps approach, where the development, testing and deployment pipelines are merged and software is continuously tested and updated. In this context, our work focuses on identifying spots where the participants should be more aware of performance, and offers approaches and tools to improve their awareness, with the ultimate goal of producing better software in shorter time. In general, the awareness is raised by testing, documenting, and monitoring the performance in all phases of the development cycle. In this thesis we (1) show a framework for writing performance tests for individual components (e.g. libraries). The tests capture and codify assumptions about the performance into runnable artifacts that simplify repeatability and automation. For evaluation of the performance tests we (2) propose new methods which can automatically detect performance regressions. These methods are designed with the inherent variation of performance data in mind and are able to filter it out in order to detect true regressions. Then we (3) reuse the performance tests to provide the developers with accurate and up-to-date performance API documentation that steer them...
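As an illustrative aside (not code from the thesis itself), the kind of variance-aware regression check such methods formalize can be sketched in a few lines; the function name and thresholds here are hypothetical:

```python
import statistics

def is_regression(baseline, candidate, tolerance=0.05):
    """Flag a regression when the candidate's median runtime exceeds the
    baseline median by more than `tolerance`, but only if the gap is also
    large relative to the baseline's spread (to filter run-to-run noise)."""
    base_med = statistics.median(baseline)
    cand_med = statistics.median(candidate)
    spread = statistics.stdev(baseline)
    slowdown = cand_med - base_med
    return slowdown > tolerance * base_med and slowdown > 2 * spread

# Timing samples in milliseconds from two builds of the same benchmark.
old = [10.1, 10.3, 9.9, 10.2, 10.0, 10.1]
new = [12.4, 12.6, 12.3, 12.5, 12.4, 12.7]
print(is_regression(old, new))  # prints True
```

A real detector would use proper statistical tests and many more samples, but the core idea is the same: compare central tendency while discounting ordinary measurement noise.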
APA, Harvard, Vancouver, ISO, and other styles
34

Anderson, Michael. "Performance modelling of reactive web applications using trace data from automated testing." Thesis, 2019. http://hdl.handle.net/1828/10793.

Full text
Abstract:
This thesis evaluates a method for extracting architectural dependencies and performance measures from an evolving distributed software system. The research goal was to establish methods of determining potential scalability issues in a distributed software system as it is being iteratively developed. The research evaluated the use of industry-available distributed tracing methods to extract performance measures and queuing network model parameters for common user activities. Additionally, a method was developed to trace and collect the system operations that correspond to these user activities, utilizing automated acceptance testing. Performance measure extraction was tested with this method across several historical releases of a real-world distributed software system. The trends in performance measures across releases correspond to several scalability issues identified in the production software system.
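To illustrate the kind of queuing network model that such extracted parameters feed, here is a minimal M/M/1 example (a generic textbook model, not the specific model used in the thesis):

```python
def mm1_metrics(arrival_rate, service_rate):
    """M/M/1 queue metrics computed from rates that could be measured via
    distributed tracing: arrivals (req/s) and service capacity (req/s)."""
    rho = arrival_rate / service_rate             # server utilization
    resp = 1.0 / (service_rate - arrival_rate)    # mean response time (s)
    return rho, resp

# 80 req/s observed against a measured capacity of 100 req/s.
rho, resp = mm1_metrics(80.0, 100.0)
print(rho, resp)  # prints 0.8 0.05
```

As the arrival rate approaches capacity, the response time grows without bound, which is exactly the scalability signal such models are used to surface early.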
Graduate
APA, Harvard, Vancouver, ISO, and other styles
35

McNeany, Scott Edward. "Characterizing software components using evolutionary testing and path-guided analysis." 2013. http://hdl.handle.net/1805/3775.

Full text
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
Evolutionary testing (ET) techniques (e.g., mutation, crossover, and natural selection) have been applied successfully to many areas of software engineering, such as error/fault identification, data mining, and software cost estimation. Previous research has also applied ET techniques to performance testing. Its application to performance testing, however, only goes as far as finding the best- and worst-case execution times. Although such performance testing is beneficial, it provides little insight into the performance characteristics of complex functions with multiple branches. This thesis therefore provides two contributions towards performance testing of software systems. First, this thesis demonstrates how ET and genetic algorithms (GAs), which are search heuristics for solving optimization problems using mutation, crossover, and natural selection, can be combined with a constraint solver to target specific paths in the software. Second, this thesis demonstrates how such an approach can identify local minimum and maximum execution times, which provide a more detailed characterization of software performance. The results from applying our approach to example software applications show that it is able to characterize different execution paths in relatively short amounts of time. This thesis also examines a modified exhaustive approach which can be plugged in when the constraint solver cannot provide the information needed to target specific paths.
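A toy sketch of the evolutionary search side of this idea (without the constraint solver, and with an invented cost function standing in for real execution times) might look like this:

```python
import random

def loop_count(n):
    # Stand-in for a measured execution cost: iteration count of a
    # branching loop (the Collatz sequence length for n).
    steps = 0
    while n > 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

def evolve_worst_case(pop_size=20, generations=50, seed=0):
    """Evolve integer inputs that maximize loop_count via selection,
    crossover and mutation -- the classic ET loop in miniature."""
    rng = random.Random(seed)
    pop = [rng.randrange(1, 10_000) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=loop_count, reverse=True)      # natural selection
        survivors = pop[:pop_size // 2]
        children = []
        for a, b in zip(survivors, reversed(survivors)):
            child = (a + b) // 2                    # crossover (average)
            if rng.random() < 0.3:                  # mutation
                child = max(1, child + rng.randrange(-50, 51))
            children.append(child)
        pop = survivors + children
    return max(pop, key=loop_count)

worst = evolve_worst_case()
print(worst, loop_count(worst))
```

The thesis's contribution is to steer such a search toward *specific* paths with a constraint solver rather than toward a single global extreme, but the mutate-cross-select skeleton is the same.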
APA, Harvard, Vancouver, ISO, and other styles
36

Palit, Rajesh. "Modeling and Evaluating Energy Performance of Smartphones." Thesis, 2012. http://hdl.handle.net/10012/6534.

Full text
Abstract:
With advances in hardware miniaturization and wireless communication technologies even small portable wireless devices have much communication bandwidth and computing power. These devices include smartphones, tablet computers, and personal digital assistants. Users of these devices expect to run software applications that they usually have on their desktop computers as well as the new applications that are being developed for mobile devices. Web browsing, social networking, gaming, online multimedia playing, global positioning system based navigation, and accessing emails are examples of a few popular applications. Mobile versions of thousands of desktop applications are already available in mobile application markets, and consequently, the expected operational time of smartphones is rising rapidly. At the same time, the complexity of these applications is growing in terms of computation and communication needs, and there is a growing demand for energy in smartphones. However, unlike the exponential growth in computing and communication technologies, in terms of speed and packaging density, battery technology has not kept pace with the rapidly growing energy demand of these devices. Therefore, designers are faced with the need to enhance the battery life of smartphones. Knowledge of how energy is used and lost in the system components of the devices is vital to this end. With this view, we focus on modeling and evaluating the energy performance of smartphones in this thesis. We also propose techniques for enhancing the energy efficiency and functionality of smartphones. 
The detailed contributions of the thesis are as follows: (i) we present a finite state machine based model to estimate the energy cost of an application running on a smartphone, and provide practical approaches to extract model parameters; (ii) the concept of energy cost profile is introduced to assess the impact of design decisions on energy cost at an early stage of software design; (iii) a generic architecture is proposed and implemented for enhancing the capabilities of smartphones by sharing resources; (iv) we have analyzed the Internet traffic of smartphones to observe the energy saving potentials, and have studied the implications on the existing energy saving techniques; and finally, (v) we have provided a methodology to select user level test cases for performing energy cost evaluation of applications. All of our concepts and proposed methodology have been validated with extensive measurements on a real test bench. Our work contributes to both theoretical understanding of energy efficiency of software applications and practical methodologies for evaluating energy efficiency. In summary, the results of this work can be used by application developers to make implementation level decisions that affect the energy efficiency of software applications on smartphones. In addition, this work leads to the design and implementation of energy efficient smartphones.
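A drastically simplified sketch of the state-based energy accounting such a model performs; the state names and power figures below are invented for illustration, not taken from the thesis:

```python
# Device states with assumed power draw in milliwatts.
POWER_MW = {"idle": 10, "cpu": 300, "wifi": 250}

def energy_mj(trace):
    """Energy in millijoules for a trace of (state, duration_seconds)
    intervals, as a state-machine energy model would accumulate it."""
    return sum(POWER_MW[state] * seconds for state, seconds in trace)

# A hypothetical run: 5 s idle, 2 s computing, 1.5 s on the radio.
trace = [("idle", 5.0), ("cpu", 2.0), ("wifi", 1.5)]
print(energy_mj(trace))  # prints 1025.0  (50 + 600 + 375 mJ)
```

The real model additionally has to extract per-state power figures and state-transition behaviour from measurements, which is where most of the practical difficulty lies.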
APA, Harvard, Vancouver, ISO, and other styles
37

Stefan, Petr. "Testování výkonu Javy pro každého." Master's thesis, 2018. http://www.nusl.cz/ntk/nusl-382991.

Full text
Abstract:
Java is a major platform for performance sensitive applications. Unit testing of functionality has already become a common practice in software development; however, the number of projects employing performance tests is substantially lower. A comprehensive study, combined with a short survey among developers, examines the current situation in open-source projects written in Java. Results show that suitable tools for measurements exist, but they are hard to use or their outputs are difficult to understand. To improve the situation in favor of performance evaluation, a set of user-friendly tools for collecting, comparing and visualizing the data is designed, implemented, and verified on a sample Java project.
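The measurement core of such tooling can be sketched as follows (in Python for brevity; Java harnesses like JMH automate this plus far more rigor around JIT warm-up, forking and statistical reporting):

```python
import statistics
import time

def benchmark(fn, warmup=5, repeats=30):
    """Run `fn` several times after a warm-up phase and report summary
    statistics over the collected timing samples."""
    for _ in range(warmup):                # let caches/JITs settle
        fn()
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return {
        "min": min(samples),
        "median": statistics.median(samples),
        "mean": statistics.mean(samples),
        "stdev": statistics.stdev(samples),
    }

stats = benchmark(lambda: sorted(range(1000, 0, -1)))
print({k: f"{v:.6f}" for k, v in stats.items()})
```

Reporting a spread (median plus deviation) rather than a single number is precisely what the surveyed developers found hard to interpret, and what friendlier visualization tooling aims to fix.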
APA, Harvard, Vancouver, ISO, and other styles
38

Sridhar, G. "Efficient Whole Program Path Tracing." Thesis, 2017. http://etd.iisc.ernet.in/2005/3708.

Full text
Abstract:
Obtaining an accurate whole program path (WPP) that captures a program’s runtime behaviour in terms of a control-flow trace has a number of well-known benefits, including opportunities for code optimization, bug detection, program analysis refinement, etc. Existing techniques to compute WPPs perform sub-optimal instrumentation resulting in significant space and time overheads. Our goal in this thesis is to minimize these overheads without losing precision. To do so, we design a novel and scalable whole program analysis to determine instrumentation points used to obtain WPPs. Our approach is divided into three components: (a) an efficient summarization technique for inter-procedural path reconstruction, (b) specialized data structures called conflict sets that serve to effectively distinguish between pairs of paths, and (c) an instrumentation algorithm that computes the minimum number of edges to describe a path based on these conflict sets. We show that the overall problem is a variant of the minimum hitting set problem, which is NP-hard, and employ various sound approximation strategies to yield a practical solution. We have implemented our approach and performed elaborate experimentation on Java programs from the DaCapo benchmark suite to demonstrate the efficacy of our approach across multiple dimensions. On average, our approach necessitates instrumenting only 9% of the total number of CFG edges in the program. The average runtime overhead incurred by our approach to collect WPPs is 1.97x, which is only 26% greater than the overhead induced by only instrumenting edges guaranteed to exist in an optimal solution. Furthermore, compared to the state-of-the-art, we observe a reduction in runtime overhead by an average and maximum factor of 2.8 and 5.4, respectively.
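The greedy approximation commonly used for minimum hitting set, the problem the conflict-set formulation above reduces to, can be sketched like this (the edge names are illustrative):

```python
def greedy_hitting_set(conflict_sets):
    """Greedy minimum-hitting-set approximation: repeatedly pick the
    element that hits the most not-yet-covered sets."""
    remaining = [set(s) for s in conflict_sets]
    chosen = set()
    while remaining:
        counts = {}
        for s in remaining:
            for e in s:
                counts[e] = counts.get(e, 0) + 1
        best = max(counts, key=counts.get)
        chosen.add(best)
        remaining = [s for s in remaining if best not in s]
    return chosen

# Each conflict set lists CFG edges distinguishing one pair of paths;
# instrumenting any one edge per set suffices to tell that pair apart.
pairs = [{"e1", "e2"}, {"e2", "e3"}, {"e3", "e4"}, {"e2", "e4"}]
print(greedy_hitting_set(pairs))
```

The greedy rule gives the standard logarithmic approximation guarantee for this NP-hard problem; the thesis layers sound domain-specific strategies on top of such approximations.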
APA, Harvard, Vancouver, ISO, and other styles
39

Figueira, André Igor Freitas. "Avaliação de implementações da tecnologia de WebSockets." Master's thesis, 2021. http://hdl.handle.net/10400.13/3472.

Full text
Abstract:
Real-time communication (RTC) is the almost simultaneous exchange of information over any type of telecommunication service, from sender to receiver, on a connection with negligible latency. Communication of this type can be half-duplex or full-duplex. The use of WebSockets is motivated by the need to solve the network traffic and latency problems presented by traditional real-time communication solutions. The technology delivers resources automatically: as soon as the server receives them, it pushes them to the clients without the clients issuing new requests, which results in low consumption of network resources. In addition, the protocol supports bidirectional communication, allowing server and client to communicate simultaneously and without interruption. This project took place in the context of an internship and its goal was to determine which of three WebSocket server libraries performed best, particularly in scenarios with a higher data load. To carry this out properly, the three libraries were tested in four scenarios with distinct data loads, using two tools. The solution implemented in this study used the WebSocket communication protocol, a genuinely powerful and useful technology for developing solutions based on real-time communication.
APA, Harvard, Vancouver, ISO, and other styles
40

Menninghaus, Mathias. "Automated Performance Test Generation and Comparison for Complex Data Structures - Exemplified on High-Dimensional Spatio-Temporal Indices." Doctoral thesis, 2018. https://repositorium.ub.uni-osnabrueck.de/handle/urn:nbn:de:gbv:700-20180823528.

Full text
Abstract:
There exist numerous approaches to index either spatio-temporal or high-dimensional data. None of them is able to efficiently index hybrid data types, i.e., data that is both spatio-temporal and high-dimensional. As the best high-dimensional indexing techniques are only able to index point data, not now-relative data, and the best spatio-temporal indexing techniques suffer from the curse of dimensionality, this thesis introduces the Spatio-Temporal Pyramid Adapter (STPA). The STPA maps spatio-temporal data onto points, maps now-values onto the median of the data set, and indexes them with the pyramid technique. For high-dimensional and spatio-temporal index structures, no generally accepted benchmark exists. Most index structures are evaluated only with custom benchmarks and compared to a tiny set of competitors. Benchmarks may be biased, as a structure may be created to perform well in a certain benchmark, or a benchmark may not cover a certain speciality of the investigated structures. In this thesis, the Interface Based Performance Comparison (IBPC) technique is introduced. It automatically generates test sets with high code coverage of the system under test (SUT) on the basis of all functions defined by an interface which all competitors support. Every test set is performed on every SUT, and the performance results are weighted by the achieved coverage and summed up. These weighted performance results are then used to compare the structures. An implementation of the IBPC, the Performance Test Automation Framework (PTAF), is compared to a classic custom benchmark, a workload generator whose parameters are optimized by a genetic algorithm, and a specific PTAF alternative which incorporates the specific behavior of the systems under test. This is done for a set of two high-dimensional spatio-temporal indices and twelve variants of the R-tree. The evaluation indicates that PTAF performs at least as well as the other approaches in terms of minimal test cases with maximized coverage.
Several case studies on PTAF demonstrate its broad applicability.
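The coverage-weighted aggregation step described above can be sketched minimally (a hypothetical form for illustration, not PTAF's actual code; normalization by total coverage is an assumption):

```python
def weighted_score(results):
    """Aggregate (runtime_seconds, coverage) pairs for one SUT, weighting
    each test set's runtime by the code coverage it achieved."""
    total_cov = sum(cov for _, cov in results)
    return sum(rt * cov for rt, cov in results) / total_cov

# Lower is better: SUT A is faster on the high-coverage test set.
sut_a = [(1.0, 0.9), (4.0, 0.1)]
sut_b = [(2.0, 0.9), (2.0, 0.1)]
print(weighted_score(sut_a), weighted_score(sut_b))  # prints 1.3 2.0
```

Weighting by coverage keeps a test set that barely exercises the SUT from dominating the comparison, which is the bias the IBPC is designed to avoid.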
APA, Harvard, Vancouver, ISO, and other styles