Journal articles on the topic 'Software Performance Testing'

Consult the top 50 journal articles for your research on the topic 'Software Performance Testing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Patel, Charmy, and Ravi Gulati. "Software Performance Testing Measures." International Journal of Management & Information Technology 8, no. 2 (January 31, 2014): 1297–300. http://dx.doi.org/10.24297/ijmit.v8i2.681.

Abstract:
Software developers typically measure a Web application's quality of service in terms of webpage availability, response time, and throughput, so performance testing and evaluation of software components becomes a critical task. Poor software performance can lead to lost opportunities. Few research papers address the issues in, and systematic solutions to, performance testing and measurement for modern software components. This paper proposes a solution and an environment to support performance measurement for software. The objective is to provide all the important measures that should be tested during the coding phase, rather than after the software is complete, so that developers can build software that meets its performance objectives.
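
The three service measures named here are easy to instrument. As a minimal sketch (not from the paper; the endpoint URL and request count are placeholder assumptions), the following Python snippet estimates availability, average response time, and throughput for a web page:

```python
import time
import urllib.request

URL = "http://example.com/"   # placeholder endpoint, not from the paper
REQUESTS = 20                  # assumed sample size

latencies, failures = [], 0
start = time.perf_counter()
for _ in range(REQUESTS):
    t0 = time.perf_counter()
    try:
        urllib.request.urlopen(URL, timeout=5).read()
        latencies.append(time.perf_counter() - t0)
    except OSError:            # URLError/HTTPError both derive from OSError
        failures += 1
elapsed = time.perf_counter() - start

availability = 1 - failures / REQUESTS
avg_response = sum(latencies) / len(latencies) if latencies else float("nan")
throughput = len(latencies) / elapsed  # successful requests per second

print(f"availability={availability:.2%} "
      f"avg_response={avg_response * 1000:.1f} ms "
      f"throughput={throughput:.2f} req/s")
```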
2

Srivastava, Nishi, Ujjwal Kumar, and Pawan Singh. "Software and Performance Testing Tools." Journal of Informatics Electrical and Electronics Engineering (JIEEE) 2, no. 1 (January 5, 2021): 1–12. http://dx.doi.org/10.54060/jieee/002.01.001.

Abstract:
Software testing is a process that involves executing a software program/application and finding all errors or bugs in that program/application, so that the result is a defect-free software system. The quality of any software system can only be established through testing. With the advancement of technology around the world, the number of verification techniques and strategies for checking software before it goes to production has increased. Automation testing has made its impact on the testing process. Nowadays, most software testing is done with automation tools, which not only reduce the number of people working on the software but also catch errors that might slip past the eyes of a tester. Automated testing comprises test cases that make it simple to capture different scenarios and store them. Therefore, the automated software testing process plays a significant role in the success of software testing. This study aims at understanding the different kinds of software testing and the available software testing techniques and tools, and at comparing manual testing versus automation testing.
3

Varela-González, M., H. González-Jorge, B. Riveiro, and P. Arias. "Performance testing of LiDAR exploitation software." Computers & Geosciences 54 (April 2013): 122–29. http://dx.doi.org/10.1016/j.cageo.2012.12.001.

4

Varela-González, M., H. González-Jorge, B. Riveiro, and P. Arias. "Performance testing of 3D point cloud software." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences II-5/W2 (October 16, 2013): 307–12. http://dx.doi.org/10.5194/isprsannals-ii-5-w2-307-2013.

5

Krauser, E. W., A. P. Mathur, and V. J. Rego. "High performance software testing on SIMD machines." IEEE Transactions on Software Engineering 17, no. 5 (May 1991): 403–23. http://dx.doi.org/10.1109/32.90444.

6

Denaro, Giovanni, Andrea Polini, and Wolfgang Emmerich. "Early performance testing of distributed software applications." ACM SIGSOFT Software Engineering Notes 29, no. 1 (January 2004): 94–103. http://dx.doi.org/10.1145/974043.974059.

7

Zhuang, Lei, Zhen Gao, Hao Wu, Chun Xin Yang, and Miao Zheng. "Research on DB2 Performance Testing Automation." Advanced Materials Research 756-759 (September 2013): 2204–8. http://dx.doi.org/10.4028/www.scientific.net/amr.756-759.2204.

Abstract:
Software testing plays a significant role in the modern software development and maintenance process; it is also an important means of ensuring software reliability and improving software quality. With the continuous improvement of quality requirements for software products and the increasing sophistication of software engineering technology, software testing now participates in every phase of the software life cycle and has become more and more important in software development and maintenance. DB2 performance testing consists of four parts: environment setup, workload run, data measurement, and environment cleanup. Previously, all the operations were done manually and required about two hours of continuous attention; worse, this could be needed up to three times a day. This mechanical and complicated procedure is clearly unacceptable. This paper puts forward a reusable automated testing framework, based on IBM's automated testing tool RFT, to automate the whole testing procedure. It reduces the amount of human-computer interaction and greatly improves the efficiency of DB2 performance testing.
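
The four-stage pipeline described above is straightforward to script. A minimal Python sketch of such a staged harness (the stage commands are hypothetical placeholders; the paper's actual framework is built on IBM's RFT tooling rather than shell scripts):

```python
import subprocess

def run(cmd):
    """Run one shell command and fail fast on a non-zero exit code."""
    print(f"[stage] {cmd}")
    subprocess.run(cmd, shell=True, check=True)

# The four stages named in the abstract; the concrete commands are
# invented placeholders, not the paper's actual scripts.
STAGES = [
    ("environment setup",   "./setup_env.sh"),
    ("workload run",        "./run_workload.sh"),
    ("data measurement",    "./collect_metrics.sh"),
    ("environment cleanup", "./cleanup_env.sh"),
]

for name, cmd in STAGES:
    print(f"--- {name} ---")
    run(cmd)
```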
8

Avritzer, Alberto, and Elaine J. Weyuker. "Deriving Workloads for Performance Testing." Software: Practice and Experience 26, no. 6 (June 1996): 613–33. http://dx.doi.org/10.1002/(sici)1097-024x(199606)26:6<613::aid-spe23>3.0.co;2-5.

9

Ma, Tian-bo, Hui Liu, and Jia Zang. "Design of Linear CCD Performance Parameter Testing Software." OME Information 28, no. 7 (2011): 41–45. http://dx.doi.org/10.3788/omei20112807.0041.

10

Naik Dessai, Sanket Suresh, and Varuna Eswer. "Embedded Software Testing to Determine BCM5354 Processor Performance." International Journal of Software Engineering and Technologies (IJSET) 1, no. 3 (December 1, 2016): 121. http://dx.doi.org/10.11591/ijset.v1i3.4577.

Abstract:
Efficiency of a processor is a critical factor for an embedded system. One of the deciding factors for efficiency is the functioning of the L1 cache and Translation Lookaside Buffer (TLB). Certain processors have the L1 cache and TLB managed by the operating system; MIPS32 is one such processor. The performance of the L1 cache and TLB necessitates a detailed study to understand their management under varied processor load. This paper presents an implementation of an embedded testing procedure to analyse the performance of the MIPS32 processor's L1 cache and TLB management by the operating system (OS). The proposed implementation counts the executions of the respective cache and TLB management instructions, an event that is measurable with dedicated counters. The lack of hardware counters in the MIPS32 processor leads to the use of software-based event counters defined in the kernel. The paper implements an embedded testbed with a subset of MIPS32 processor performance measurement metrics using software-based counters. Techniques were developed to overcome the challenges posed by the kernel source code. To facilitate better understanding of the testbed implementation procedure for the software-based processor performance counters, use-case analysis diagrams, flow charts, screenshots, and knowledge nuggets are supplied, along with histograms of the cache and TLB event data generated by the proposed implementation. In this testbed, twenty-seven metrics have been identified and implemented to provide data on L1 cache and TLB events on the MIPS32 processor. The generated data can be used in compiler tuning, OS memory management design, system benchmarking, scalability studies, analysis of architectural issues, address space analysis, understanding bus communication, kernel profiling, and workload characterisation.
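
The counting mechanism itself is simple. The sketch below illustrates the software-event-counter idea in Python (event names and instrumentation points are hypothetical; the paper's counters live inside the MIPS32 kernel and count cache/TLB management instruction executions):

```python
from collections import Counter

events = Counter()

def count_event(name):
    """Increment a software-defined event counter, mimicking the
    kernel-embedded counters described in the abstract."""
    events[name] += 1

# Hypothetical instrumentation points; the paper instruments the
# kernel's cache/TLB management paths instead.
def handle_tlb_refill():
    count_event("tlb_refill")

def flush_l1_dcache():
    count_event("l1_dcache_flush")

for _ in range(3):
    handle_tlb_refill()
flush_l1_dcache()

for name, n in events.items():
    print(name, n)
```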
11

Samra, Hardeep Singh. "Study on Non Functional Software Testing." International Journal of Computers & Technology 4, no. 1 (February 1, 2013): 151–55. http://dx.doi.org/10.24297/ijct.v4i1c.3115.

Abstract:
Improving software quality involves reducing the quantity of defects within the final product and identifying the remaining defects as early as possible. It involves both functionality and non-functional characteristics, such as usability, flexibility, performance, interoperability and security. In fact, defects found earlier in the development lifecycle cost dramatically less to repair than those found later. However, engineers cannot address non-functional quality requirements such as reliability, security, performance and usability early in the lifecycle using the same tools and processes that they use after coding and at later phases. Approaches such as stress testing for reliability, measuring performance and gauging user response to determine usability are inherently post-integration techniques. Accordingly, defects found with these tools are more disruptive and costly to fix. Nonetheless, there has been a lop-sided emphasis on the functionality of the software, even though functionality is not useful or usable without the necessary non-functional characteristics. This research highlights the sporadic industry acceptance of some popular methods for designing for non-functional requirements and suggests some practical approaches applicable to companies that must also consider the demands of schedule and cost.
12

Luo, Jun, and Wei Yang. "A Performance Testing Tool for Source Code." Applied Mechanics and Materials 490-491 (January 2014): 1553–59. http://dx.doi.org/10.4028/www.scientific.net/amm.490-491.1553.

Abstract:
With the rapid development of the information age, computer software is becoming more systematized and complicated. In application areas such as commerce, finance and medical treatment, the performance of software is attracting more and more attention and has even become one of the important factors determining whether users are willing to use a piece of software. Currently, static checking tools are mostly designed to detect code errors and pay little attention to performance problems. In order to detect the defects in source code that may cause performance problems, this paper designs and implements a performance testing tool based on static analysis. Experiments detecting defects in several open-source projects with the tool demonstrate that it can quickly find defects in source code with a high accuracy rate. The results after defect removal show that the tool can significantly reduce the memory consumption of software and effectively improve software performance.
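
The paper does not publish its rule set. As a toy analogue of static performance checking, this Python sketch uses the standard ast module to flag one common performance defect, repeated string concatenation inside a loop (the rule choice is an assumption, not the paper's):

```python
import ast

SOURCE = """
out = ""
for line in lines:
    out += line      # repeated concat: builds a new string each pass
"""

class ConcatInLoop(ast.NodeVisitor):
    def __init__(self):
        self.in_loop = 0
        self.findings = []

    def visit_For(self, node):
        self.in_loop += 1
        self.generic_visit(node)
        self.in_loop -= 1

    visit_While = visit_For  # treat while-loops the same way

    def visit_AugAssign(self, node):
        # Flag `x += ...` on a plain name inside any loop body.
        if self.in_loop and isinstance(node.op, ast.Add) \
                and isinstance(node.target, ast.Name):
            self.findings.append((node.lineno, node.target.id))
        self.generic_visit(node)

checker = ConcatInLoop()
checker.visit(ast.parse(SOURCE))
for lineno, name in checker.findings:
    print(f"line {lineno}: '{name} += ...' in loop; consider ''.join()")
```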
13

Johnson, Michael J., Chih-Wei Ho, E. Michael Maximilien, and Laurie Williams. "Incorporating Performance Testing in Test-Driven Development." IEEE Software 24, no. 3 (May 2007): 67–73. http://dx.doi.org/10.1109/ms.2007.77.

14

Russell, Seth, Tellen D. Bennett, and Debashis Ghosh. "Software engineering principles to improve quality and performance of R software." PeerJ Computer Science 5 (February 4, 2019): e175. http://dx.doi.org/10.7717/peerj-cs.175.

Abstract:
Today’s computational researchers are expected to be highly proficient in using software to solve a wide range of problems, from processing large datasets to developing personalized treatment strategies from a growing range of options. Researchers are well versed in their own field, but may lack formal training and appropriate mentorship in software engineering principles. Two major themes not covered in most university coursework or in the current literature are software testing and software optimization. Through a survey of all currently available Comprehensive R Archive Network packages, we show that reproducible and replicable software tests are frequently not available and that many packages do not appear to employ software performance and optimization tools and techniques. Through examples from an existing R package, we demonstrate powerful testing and optimization techniques that can improve the quality of any researcher’s software.
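
The paper's examples target R. An analogous pairing of a reproducible unit test with a micro-benchmark, sketched in Python purely for illustration:

```python
import timeit
import unittest

def normalize(xs):
    """Scale a list of numbers so they sum to 1."""
    total = sum(xs)
    return [x / total for x in xs]

class TestNormalize(unittest.TestCase):
    def test_sums_to_one(self):
        self.assertAlmostEqual(sum(normalize([1, 2, 3])), 1.0)

if __name__ == "__main__":
    # Micro-benchmark, analogous in spirit to R's microbenchmark package.
    print(timeit.timeit("normalize(list(range(1, 101)))",
                        globals=globals(), number=10_000))
    unittest.main()
```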
15

Fraz Malik, Muhammad, and M. N. A. Khan. "An Analysis of Performance Testing in Distributed Software Applications." International Journal of Modern Education and Computer Science 8, no. 7 (July 8, 2016): 53–60. http://dx.doi.org/10.5815/ijmecs.2016.07.06.

16

Downs, T., and P. Garrone. "Some new models of software testing with performance comparisons." IEEE Transactions on Reliability 40, no. 3 (1991): 322–28. http://dx.doi.org/10.1109/24.85452.

17

Han, Xue. "A Study of Performance Testing in Configurable Software Systems." Journal of Software Engineering and Applications 14, no. 09 (2021): 474–92. http://dx.doi.org/10.4236/jsea.2021.149028.

18

Adamoli, Andrea, Dmitrijs Zaparanuks, Milan Jovic, and Matthias Hauswirth. "Automated GUI performance testing." Software Quality Journal 19, no. 4 (April 3, 2011): 801–39. http://dx.doi.org/10.1007/s11219-011-9135-x.

19

Rexhepi, Burim, and Ali Rexhepi. "Software Testing Techniques and Principles." Knowledge International Journal 28, no. 4 (December 10, 2018): 1383–87. http://dx.doi.org/10.35120/kij28041383b.

Abstract:
This paper describes software testing, the need for software testing, and software testing goals and principles. It further describes different software testing techniques and different software testing strategies, and finally the difference between software testing and debugging. To perform testing effectively and efficiently, everyone involved with testing should be familiar with basic software testing goals, principles, limitations and concepts. We explain different software testing techniques such as correctness testing, performance testing, reliability testing and security testing. We discuss the basic principles of black-box testing, white-box testing and gray-box testing, survey some of the strategies supporting these paradigms, and discuss their pros and cons. We also describe different software testing strategies such as unit testing, integration testing, acceptance testing and system testing. Finally, debugging and testing are compared. Testing is more than just debugging: it is used not only to locate defects and correct them but also in validation, verification and measurement. A strategy for software testing integrates software test case design methods into a well-planned series of steps that result in the successful construction of software; software testing strategies give the road map for testing. A software testing strategy should be flexible enough to promote a customized testing approach while remaining sufficiently rigorous. A strategy is generally developed by project managers, software engineers and testing specialists. Software testing is an extremely creative and intellectually challenging task. When testing follows the principles given here, the creative element of test design and execution rivals any of the preceding software development steps; because testing requires high creativity and responsibility, only the best personnel should be assigned to design, implement, and analyze test cases, test data and test results.
20

Donepudi, Praveen Kumar. "Crowdsourced Software Testing: A Timely Opportunity." Engineering International 8, no. 1 (January 15, 2020): 25–30. http://dx.doi.org/10.18034/ei.v8i1.491.

Abstract:
The concept of crowdsourcing has gained a lot of attention lately. Many companies are making use of this concept for value creation, as well as for the performance of varied tasks. Despite its wide application, little is known about crowdsourcing, especially when it comes to crowdsourced software testing. This paper explores the crowdsourced software testing concept from a wide perspective, ranging from a cost-benefit analysis to crowdsourcing intermediaries and the level of expertise in the crowd. Drawing from a varied range of sources, a systematic literature review is done, in which the research narrows down to the ten most relevant peer-reviewed sources of high impact rating. In a comparative analysis between crowdsourced software testing and in-house testing, it is found that crowd testing has numerous advantages when it comes to efficiency, user heterogeneity, and cost-effectiveness. The study indicates that intermediaries play a key role in managing the connection between the crowd and crowdsourcing companies despite various challenges. A comparison between novice testers and expert testers reveals that both have unique capabilities in their respective domains.
21

Mishra, Deepti, Sofiya Ostrovska, and Tuna Hacaloglu. "Exploring and expanding students’ success in software testing." Information Technology & People 30, no. 4 (November 6, 2017): 927–45. http://dx.doi.org/10.1108/itp-06-2016-0129.

Abstract:
Purpose: Testing is one of the indispensable activities in software development and is being adopted as an independent course by software engineering (SE) departments at universities worldwide. The purpose of this paper is to carry out an investigation of the performance of learners in testing, given the tendencies in the industry and the motivation caused by the unavailability of similar studies in the software testing field.
Design/methodology/approach: This study is based on data collected over three years (between 2012 and 2014) from students taking the software testing course. The course is included in the second year of the undergraduate curriculum for the bachelor of engineering (SE).
Findings: It has been observed that, from the performance perspective, automated testing outperforms structural and functional testing techniques, and that a strong correlation exists among these three approaches. Moreover, a strong programming background does help toward further success in structural and automated testing, but has no effect on functional testing. The results of different teaching styles within the course are also presented, together with an analysis exploring the relationship between students’ gender and success in the software testing course, revealing that there is no difference in performance between male and female students. Moreover, it is advisable to introduce teaching concepts one at a time, because students find it difficult to grasp the ideas otherwise.
Research limitations/implications: These findings are based on the analysis conducted using three years of data collected while teaching a course in testing. Obviously, there are some limitations to this study. For example, a student’s strength in programming is calculated using the score of C programming courses taken in the previous year/semester; such scores may not reflect their current level of programming knowledge. Furthermore, an attempt was made to ensure that the exercises given for different testing techniques have similar difficulty levels, to guarantee that the difference in success between these testing techniques is due to the inherent complexity of the technique itself and not to different exercises. Still, there is a small probability that a certain degree of change in success may be due to differences in the difficulty levels of the exercises. As such, it is premature to consider the present results final, since there is a lack of similar studies with which the authors can compare them. Therefore, more work needs to be done in different settings to draw sound conclusions in this respect.
Originality/value: Although there are a few studies (see e.g. Chan et al., 2005; Garousi and Zhi, 2013; Ng et al., 2004) exploring the preference of testers for distinct software testing techniques in industry, there appears to be no paper comparing the preferences and performance of learners in terms of different testing techniques.
22

Sandoval Alcocer, Juan Pablo, Alexandre Bergel, and Marco Tulio Valente. "Prioritizing versions for performance regression testing: The Pharo case." Science of Computer Programming 191 (June 2020): 102415. http://dx.doi.org/10.1016/j.scico.2020.102415.

23

Yu, Li Li, Xia Zhao, Chao Hui Ye, and Li Jie Yu. "A Web Load Testing Method Based on Performance Target." Applied Mechanics and Materials 380-384 (August 2013): 2187–91. http://dx.doi.org/10.4028/www.scientific.net/amm.380-384.2187.

Abstract:
Load testing is one of the important approaches to ensuring that a Web system works correctly under the guidance of its software requirements. In this paper, a Web load testing method based on performance targets is proposed. Several key points of implementing the load testing with LoadRunner are then discussed. Finally, combined with a case study of an information management system, Web load testing automation is achieved and the testing results are analyzed.
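
LoadRunner scripts are proprietary, but the core idea, driving concurrent virtual users and checking response times against a performance target, can be sketched in plain Python (the URL, user count, and 2-second target are assumptions, not the paper's figures):

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://example.com/"   # system under test (placeholder)
VIRTUAL_USERS = 10             # assumed concurrency level
TARGET_SECONDS = 2.0           # assumed performance target

def one_request(_):
    t0 = time.perf_counter()
    urllib.request.urlopen(URL, timeout=10).read()
    return time.perf_counter() - t0

# Each thread plays the role of a virtual user issuing requests.
with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
    times = list(pool.map(one_request, range(VIRTUAL_USERS * 5)))

worst = max(times)
print(f"max response {worst:.2f}s; "
      f"target {'met' if worst <= TARGET_SECONDS else 'missed'}")
```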
24

Luo, Qi, Aswathy Nair, Mark Grechanik, and Denys Poshyvanyk. "FOREPOST: finding performance problems automatically with feedback-directed learning software testing." Empirical Software Engineering 22, no. 1 (December 11, 2015): 6–56. http://dx.doi.org/10.1007/s10664-015-9413-5.

25

Kristanto, Septian Bayu, and Herni Kurniawati. "Testing The Information System Success Models Through Myob Accounting Software." Perspektif Akuntansi 3, no. 2 (October 5, 2020): 167–77. http://dx.doi.org/10.24246/persi.v3i2.p167-177.

Abstract:
The purpose of this study is to test the Information System Success Model through accounting software. The tests examine the effect of System Quality on Service Quality, the effect of System Quality on Work Performance, and the effect of Service Quality on Work Performance. The object of this research is the MYOB accounting software. Data were gathered with questionnaires of two types, paper and online. Of the 500 questionnaires distributed, 315 were returned: 194 respondents answered online and 116 on paper. The obtained data were analyzed using Structural Equation Modelling. The analysis showed that System Quality has a significant positive effect on Service Quality, Service Quality has a significant positive effect on Work Performance, and System Quality has a significant positive effect on Work Performance. The overall result indicates that the MYOB accounting software has good quality and is related to user performance. Specifically, the users here are basic users, namely university students.
26

Kiran, Mariam, and Anthony Simons. "Testing Software Services in Cloud Ecosystems." International Journal of Cloud Applications and Computing 6, no. 1 (January 2016): 42–58. http://dx.doi.org/10.4018/ijcac.2016010103.

Abstract:
Testing in the Cloud is far more challenging than testing individual software services. A multitude of factors affect testing, including variations across platforms and infrastructure. Architectural issues include differences between private Clouds, public Clouds, multi-Clouds and Cloud-bursting. Platform issues include cross-vendor incompatibility and diverse locales of service deployment and consumption. Software issues include integration with third-party services, the desire to validate competing service offerings to similar standards, and the need to re-validate services at different stages of the service lifecycle. A complete approach to testing whole Cloud ecosystems should involve all relevant stakeholders, such as the service provider, consumer and broker. When testing Clouds, the methodologies used should not hinder the advantages Cloud usage brings to users or programmers and, more importantly, should be simple and cost-effective. However, these testing methodologies differ according to the various kinds of Cloud ecosystems and the different perspectives of the actors involved, such as the end-user, the infrastructure, or the different software (i.e. web services). This paper also studies the state of the art in Cloud testing, where most research focuses predominantly on web services, functional testing and quality of service, usually considered separately. The authors suggest a framework, Quality-as-a-Service (QaaS), which integrates quality issues such as functional behaviour and performance monitoring with lifecycle governance and security of the service. The paper maps out the themes in the contemporary research literature and links them with the service lifecycle process for validating future Cloud services. Along the way, the authors identify important research questions that the future Cloud service testing agenda should seek to address.
27

Chapetta, Wladmir Araujo, Jailton Santos das Neves, and Raphael Carlos Santos Machado. "Quantitative Metrics for Performance Monitoring of Software Code Analysis Accredited Testing Laboratories." Sensors 21, no. 11 (May 24, 2021): 3660. http://dx.doi.org/10.3390/s21113660.

Abstract:
Modern sensors deployed in most Industry 4.0 applications are intelligent, meaning that they present sophisticated behavior, usually due to embedded software, and network connectivity capabilities. For that reason, the task of calibrating an intelligent sensor currently involves more than measuring physical quantities. As the behavior of modern sensors depends on embedded software, comprehensive assessments of such sensors necessarily demands the analysis of their embedded software. On the other hand, interlaboratory comparisons are comparative analyses of a body of labs involved in such assessments. While interlaboratory comparison is a well-established practice in fields related to physical, chemical and biological sciences, it is a recent challenge for software assessment. Establishing quantitative metrics to compare the performance of software analysis and testing accredited labs is no trivial task. Software is intangible and its requirements accommodate some ambiguity, inconsistency or information loss. Besides, software testing and analysis are highly human-dependent activities. In the present work, we investigate whether performing interlaboratory comparisons for software assessment by using quantitative performance measurement is feasible. The proposal was to evaluate the competence in software code analysis activities of each lab by using two quantitative metrics (code coverage and mutation score). Our results demonstrate the feasibility of establishing quantitative comparisons among software analysis and testing accredited laboratories. One of these rounds was registered as formal proficiency testing in the database—the first registered proficiency testing focused on code analysis.
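
Both metrics are simple ratios, so a comparison round can score labs directly. A sketch of the arithmetic, with invented figures for two hypothetical labs:

```python
def coverage(executed, total):
    """Code coverage: fraction of coverable items exercised by the suite."""
    return executed / total

def mutation_score(killed, mutants):
    """Mutation score: fraction of injected mutants the suite detects."""
    return killed / mutants

# Invented example figures for two hypothetical labs, not study data.
labs = {"lab A": (coverage(172, 200), mutation_score(38, 50)),
        "lab B": (coverage(150, 200), mutation_score(44, 50))}

for lab, (cov, ms) in labs.items():
    print(f"{lab}: coverage={cov:.0%} mutation score={ms:.0%}")
```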
28

Guo, Jing, Zhong Wen Zhao, Chao Yang, and Ya Shuai Lv. "Research on the Integration Testing of Foundational Software and Hardware." Applied Mechanics and Materials 543-547 (March 2014): 3348–51. http://dx.doi.org/10.4028/www.scientific.net/amm.543-547.3348.

Abstract:
As foundational software and hardware (SW&HW) play a more and more important role in modern life, the level of integration testing of foundational SW&HW needs to advance. On the basis of an analysis of integration testing content, the basic flow of integration testing is presented. The integration testing environment is designed, with particular attention to the framework of the performance testing environment. The above research provides a reference for the standardization of integration testing.
29

Alagarsamy, Malini, Mano Prathibhan Chandrasekaran, Sundara Rajan Sudarsanam, and Sundarakantham Kambaraj. "MTest-GA: Performance Testing of Online Android Applications Using Genetic Algorithm." International Journal of Software Engineering and Its Applications 11, no. 10 (October 30, 2017): 27–40. http://dx.doi.org/10.14257/ijseia.2017.11.10.03.

30

Betts, Kevin M., and Mikel D. Petty. "Automated Search-Based Robustness Testing for Autonomous Vehicle Software." Modelling and Simulation in Engineering 2016 (2016): 1–15. http://dx.doi.org/10.1155/2016/5309348.

Abstract:
Autonomous systems must successfully operate in complex time-varying spatial environments even when dealing with system faults that may occur during a mission. Consequently, evaluating the robustness, or ability to operate correctly under unexpected conditions, of autonomous vehicle control software is an increasingly important issue in software testing. New methods to automatically generate test cases for robustness testing of autonomous vehicle control software in closed-loop simulation are needed. Search-based testing techniques were used to automatically generate test cases, consisting of initial conditions and fault sequences, intended to challenge the control software more than test cases generated using current methods. Two different search-based testing methods, genetic algorithms and surrogate-based optimization, were used to generate test cases for a simulated unmanned aerial vehicle attempting to fly through an entryway. The effectiveness of the search-based methods in generating challenging test cases was compared to both a truth reference (full combinatorial testing) and the method most commonly used today (Monte Carlo testing). The search-based testing techniques demonstrated better performance than Monte Carlo testing for both of the test case generation performance metrics: (1) finding the single most challenging test case and (2) finding the set of fifty test cases with the highest mean degree of challenge.
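
Stripped of the simulation, the genetic-algorithm side reduces to encode, score, select, cross over, mutate. A minimal self-contained Python sketch (the fitness function is a stand-in; the paper scores candidates by running a closed-loop UAV simulation instead):

```python
import random

random.seed(1)
GENES, POP, GENERATIONS = 4, 20, 30

def fitness(tc):
    # Stand-in "degree of challenge" with a peak at 0.7 per gene;
    # not the paper's simulation-based scoring.
    return -sum((g - 0.7) ** 2 for g in tc)

def crossover(a, b):
    cut = random.randrange(1, GENES)       # single-point crossover
    return a[:cut] + b[cut:]

def mutate(tc, rate=0.1):
    return [random.random() if random.random() < rate else g for g in tc]

# Each candidate encodes initial conditions as GENES numbers in [0, 1).
pop = [[random.random() for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[: POP // 2]              # truncation selection
    children = [mutate(crossover(*random.sample(parents, 2)))
                for _ in range(POP - len(parents))]
    pop = parents + children

best = max(pop, key=fitness)
print("most challenging test case found:", [round(g, 2) for g in best])
```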
31

Alakeel, Ali M. "Using Fuzzy Logic Techniques for Assertion-Based Software Testing Metrics." Scientific World Journal 2015 (2015): 1–8. http://dx.doi.org/10.1155/2015/629430.

Abstract:
Software testing is a very labor-intensive and costly task, so many techniques to automate the process of software testing have been reported in the literature. Assertion-based automated software testing has been shown to be effective in detecting program faults compared to traditional black-box and white-box software testing methods. However, applying this approach in the presence of large numbers of assertions may be very costly. Therefore, software developers need assistance when deciding whether to apply assertion-based testing, in order to get the benefits of this approach at an acceptable cost. In this paper, we present an assertion-based testing metrics technique based on fuzzy logic. The main goal of the proposed technique is to enhance the performance of assertion-based software testing in the presence of large numbers of assertions. To evaluate the proposed technique, an experimental study was performed in which the technique was applied to programs with assertions. The results of this experiment show that the effectiveness and performance of assertion-based software testing improve when the proposed testing metrics technique is applied.
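
The abstract does not give the rule base. Purely to illustrate the fuzzy-logic machinery involved, the sketch below maps a raw assertion count onto linguistic terms with triangular membership functions (the terms and breakpoints are invented, not the paper's):

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b over the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def memberships(assertions):
    # Invented linguistic terms and breakpoints, for illustration only.
    return {
        "few":  tri(assertions, -1, 0, 60),
        "some": tri(assertions, 30, 90, 150),
        "many": tri(assertions, 100, 200, 10**6),
    }

for n in (20, 80, 180):
    print(n, {k: round(v, 2) for k, v in memberships(n).items()})
```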
32

Krawczyk, Henryk, Marcin Barylski, and Adam Barylski. "On Software Unit Testing for Improving Security and Performance of Distributed Applications." Key Engineering Materials 597 (December 2013): 131–36. http://dx.doi.org/10.4028/www.scientific.net/kem.597.131.

Abstract:
Performance and security are software (SW) application attributes situated at opposite corners of system design. In the most drastic example, the most secure component is one totally isolated from the outside world, with communication performance reduced to zero (e.g. disconnected physically from the network and placed inside a Faraday cage to eliminate possible wireless accessibility). On the other hand, the most performance-optimized system is one with all security rules taken off. Obviously, such extreme implementations cannot be accepted; thus a reasonable trade-off between security and performance is desired, starting from the appropriate design, resulting in an adequate implementation, and confirmed by security and performance testing in the production environment. Unit testing (UT) is a well-known method of examining the smallest portions of SW application source code, units such as methods, classes and interfaces, in order to verify whether they behave as designed. Ideally, each UT test case is separated from the others, taking advantage of stubs and mocks to provide full isolation from external test factors. This paper is an extension of research on joint security testing and performance testing for improving the quality of distributed applications working in public-private network environments, addressing SW quality assessment at a different level: the unit test level.
33

Gupta, Anshu, Reecha Kapur, and P. C. Jha. "Considering Testing Efficiency and Testing Resource Consumption Variations in Estimating Software Reliability." International Journal of Reliability, Quality and Safety Engineering 15, no. 02 (April 2008): 77–91. http://dx.doi.org/10.1142/s0218539308002940.

Abstract:
Advances in software technologies have promoted the growth of computer-related applications to a great extent. Building quality, in terms of reliability, into the software has become one of the main issues for software developers. Software testing is necessary to build highly reliable software. Monitoring and controlling resource utilization, measuring and controlling the progress of testing, the efficiency of testing and debugging personnel, and reliability growth are all important for effective management of the testing phase and for meeting quality objectives. Over the past 35 years, many software reliability growth models (SRGM) have been proposed to accomplish the above-mentioned activities related to software testing. From the literature it appears that most SRGM do not account for changes in testing effort consumption. During the testing process, especially at the beginning and towards the end of testing, frequent changes are observed in testing resource consumption due to changes in testing strategy, team constitution, schedule pressures, etc. Apart from this, testing efficiency plays a major role in determining the progress of the testing process. In this paper we incorporate the important concepts of testing resource consumption variations, for Weibull-type testing effort functions, and testing efficiency into software reliability growth modeling. The performance of the proposed models is demonstrated on two real-life data sets from the literature. The experimental results show fairly accurate estimating capabilities of the proposed models.
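
For orientation, a commonly used Weibull-type testing-effort function in this literature (given here in a general form; this may not be the exact variant the authors fit) models the cumulative effort consumed by testing time t as:

```latex
% Cumulative testing effort consumed by time t (Weibull-type TEF):
%   alpha : total testing effort eventually consumed
%   beta  : scale parameter,  m : shape parameter
W(t) = \alpha \left( 1 - e^{-\beta t^{m}} \right)
```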
34

Lestantri, Inda D., and Rosini Rosini. "Evaluation of Software Quality to Improve Application Performance Using Mc Call Model." Journal of Information Systems Engineering and Business Intelligence 4, no. 1 (April 28, 2018): 18. http://dx.doi.org/10.20473/jisebi.4.1.18-24.

Abstract:
The existence of software should add value that improves the performance of the organization, in addition to its primary function of automation. Before being implemented in an operational environment, software must pass staged testing to ensure that it functions properly, meets user needs and is convenient to use. This test was performed on a web-based application, taking as a test case an e-SAP application. e-SAP is an application used to monitor teaching and learning activities at a university in Jakarta. To measure software quality, testing can be done on randomly selected users. The user sample selected in this test comprises users aged 18 to 25 years with an information technology background. The test was conducted with 30 respondents using the McCall model. The McCall testing model consists of 11 dimensions grouped into 3 categories. This paper describes testing with reference to the product operation category, which includes 5 dimensions: correctness, usability, efficiency, reliability, and integrity. The paper discusses testing each dimension to measure software quality as an effort to improve performance. The result is that the e-SAP application has good quality, with a product operation value of 85.09%. This indicates that the e-SAP application has great quality, so it deserves to be examined in the next stage in the operational environment.
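
The product-operation score is essentially a weighted aggregation of the five per-dimension results. A sketch of the arithmetic (per-dimension scores and equal weights are invented placeholders, chosen only so the aggregate lands near the reported 85.09%):

```python
# McCall product-operation dimensions; per-dimension scores are invented
# placeholders, not the study's survey data.
scores = {"correctness": 0.86, "usability": 0.84, "efficiency": 0.85,
          "reliability": 0.86, "integrity": 0.84}
weights = {dim: 1 / len(scores) for dim in scores}  # assumed equal weights

overall = sum(scores[d] * weights[d] for d in scores)
print(f"product operation score: {overall:.2%}")
```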
35

Sagarna, Ramón, and Jose Lozano. "On the Performance of Estimation of Distribution Algorithms Applied to Software Testing." Applied Artificial Intelligence 19, no. 5 (April 13, 2005): 457–89. http://dx.doi.org/10.1080/08839510590917861.

36

Kılınç, Nergiz, Leyla Sezer, and Alok Mishra. "Cloud-Based Test Tools: A Brief Comparative View." Cybernetics and Information Technologies 18, no. 4 (November 1, 2018): 3–14. http://dx.doi.org/10.2478/cait-2018-0044.

Abstract:
The concept of virtualization has brought life to new methods of software testing. With the help of cloud technology, testing has become much more popular because of the opportunities it provides. Cloud technologies provide everything as a service; hence software testing is also provided as a service on the cloud, with the advantages of lower testing cost and relatively less effort. There are various cloud-based test tools focusing on different aspects of software testing, such as load tests, regression tests, stress tests, performance tests, scalability tests, security tests, functional tests, browser performance tests, and latency tests. This paper investigates cloud-based testing tools focusing on these different aspects of software testing.
37

Chen, Guo Shun, Yan Mei Lv, and Ming Fei Xia. "Software Design of Networked Virtual Instrument System." Applied Mechanics and Materials 128-129 (October 2011): 1334–38. http://dx.doi.org/10.4028/www.scientific.net/amm.128-129.1334.

Abstract:
Based on an analysis of the functions and performance of a networked virtual instrument (NVI) system, this paper designs and implements the software system of an NVI on the .NET platform. It carries out an in-depth analysis of the software system, especially the key techniques, including databases, DataSocket and multithreading. The system uses ASP.NET and C# to develop the server pages, with the instrument operation instructions embedded in Web pages using ActiveX techniques. VC is used to develop the VI server applications, and testing data is transferred through DataSocket. The testing results show that the software system operates well; the NVI system achieves convenient user and testing resource management, enabling more users to share testing equipment and improving efficiency.
38

Andrzejczak, Chris, and Dahai Liu. "The effect of testing location on usability testing performance, participant stress levels, and subjective testing experience." Journal of Systems and Software 83, no. 7 (July 2010): 1258–66. http://dx.doi.org/10.1016/j.jss.2010.01.052.

39

Chai, Yu, and Yan Chun Li. "The Design and Applied Research of Robot Performance Testing Platform." Applied Mechanics and Materials 20-23 (January 2010): 135–40. http://dx.doi.org/10.4028/www.scientific.net/amm.20-23.135.

Abstract:
A robot performance testing platform is designed and implemented to aid the production and study of competition robots. The underlying design principle, the mechanical structure, the hardware and data acquisition system software, and the PC data processing and display software that make up the testing platform are introduced. The applications of the platform in robot performance testing, track racing and the gait planning of humanoid robots are stated, and the future research directions and difficulties of the platform are identified. Practice shows that the research on the platform has significant practical application and theoretical value.
40

Segura, Sergio, Javier Troya, Amador Durán, and Antonio Ruiz-Cortés. "Performance metamorphic testing: A Proof of concept." Information and Software Technology 98 (June 2018): 1–4. http://dx.doi.org/10.1016/j.infsof.2018.01.013.

41

Sánchez, Ana B., Pedro Delgado-Pérez, Sergio Segura, and Inmaculada Medina-Bulo. "Performance mutation testing: Hypothesis and open questions." Information and Software Technology 103 (November 2018): 159–61. http://dx.doi.org/10.1016/j.infsof.2018.06.015.

42

Hu, Song Hua, and De En. "The Application of Load Runner in Message Broker Software of Financial Fusion in a Stated-Owned Bank." Applied Mechanics and Materials 143-144 (December 2011): 907–12. http://dx.doi.org/10.4028/www.scientific.net/amm.143-144.907.

Abstract:
LoadRunner is a powerful tool for performance testing. Facing the wide use of Financial Fusion's message-oriented middleware, Message Broker, in a state-owned bank, a performance testing program was built to test Message Broker using the bank's hardware and software environment. LoadRunner was used to carry out base testing, absolute concurrency testing and stability testing. From these tests, performance testing results for the Message Broker messaging middleware were obtained. This can provide a theoretical basis for the application of Message Broker to message processing in banks, as well as theory regarding the stability of banking systems.
43

He, Cheng, and Yan Fei Liu. "Research on Software Testing to Ensure Web Application Usability, Reliability and Security." Advanced Materials Research 1049-1050 (October 2014): 1972–76. http://dx.doi.org/10.4028/www.scientific.net/amr.1049-1050.1972.

Abstract:
Compared with traditional web sites, modern web applications have some new features: dynamic functionality, diverse representation, uncertain runtime performance, innovative data handling and data transfer mechanisms, and vulnerability. Subsequently, the problems in testing web applications are discussed from the perspectives of functional testing, reliability testing and security testing. Finally, in order to solve these problems, new testing methods are proposed: a systematic web application testing method, random testing methods, reliability testing methods and security testing methods.
44

Avritzer, A., and E. J. Weyuker. "The role of modeling in the performance testing of e-commerce applications." IEEE Transactions on Software Engineering 30, no. 12 (December 2004): 1072–83. http://dx.doi.org/10.1109/tse.2004.107.

45

Kołtun, Agata, and Beata Pańczyk. "Comparative analysis of web application performance testing tools." Journal of Computer Sciences Institute 17 (December 21, 2020): 351–57. http://dx.doi.org/10.35784/jcsi.2209.

Abstract:
Recent years have brought a rise in the importance of the quality of developed software. Web applications should be functional and user friendly as well as efficient. There are many tools available on the market for testing the performance of web applications. To help choose the right tool, the article compares three of them: Apache JMeter, LoadNinja and Gatling. They were analyzed in terms of a user-friendly interface, parameterization of requests, and creation of one's own testing scripts. The research was carried out using a specially prepared application. The summary indicates the most important advantages and disadvantages of the selected tools.
46

Albasir, Abdurhman, Valuppillai Mahinthan, Kshirasagar Naik, Abdulhakim Abogharaf, Nishith Goel, and Bernard J. Plourde. "Performance Testing of Mobile Applications on Smartphones." International Journal of Handheld Computing Research 5, no. 4 (October 2014): 36–47. http://dx.doi.org/10.4018/ijhcr.2014100103.

Abstract:
Smartphones have become the preferred means of communication among users due to the availability of thousands of applications (apps). Although the hardware and software capabilities of smartphones are on the rise, apps are primarily constrained by wireless bandwidth and battery life. In this paper, the authors present a test architecture to: (i) evaluate the energy performance of two different designs of the same mobile app service; and (ii) evaluate the bandwidth and energy impacts of advertisements (ads) on smartphones. The authors' measurements on two video players show that the proper design results in a more energy-efficient video player. Next, they compare the bandwidth and energy performance of news and magazine websites with ads and without ads. In some cases, the bandwidth cost of ads reaches 50%, whereas their energy cost reaches 17.8%. The authors also identify the challenges in reliably performing such tests on a large scale. App developers, users, manufacturers, and Internet Service Providers will benefit from this research.
47

Sun, Tao, and Xinming Ye. "A Model Reduction Method for Parallel Software Testing." Journal of Applied Mathematics 2013 (2013): 1–13. http://dx.doi.org/10.1155/2013/595897.

Abstract:
Modeling and testing parallel software systems are very difficult, because the number of states and execution sequences expands significantly due to parallel behaviors. In this paper, a model reduction method based on Coloured Petri Nets (CPN) is shown, which can generate a functionality-equivalent and trace-equivalent model of smaller scale. Model-based testing for parallel software systems becomes much easier after the model is reduced by this method. Specifically, a formal model for the software system specification is constructed based on CPN. The places in the model are then divided into input places, output places, and internal places; the transitions are divided into input transitions, output transitions, and internal transitions. Internal places and internal transitions can be reduced if their preconditions match, with some further operations performed to preserve functionality equivalence and trace equivalence. If the place and the transition are in a parallel structure, then many execution sequences can be removed from the state space. We have proved the equivalence and analyzed the reduction effort, so that the same testing result can be obtained with a much lower testing workload. Finally, practical applications and a performance analysis show that the method is effective.
48

Khanna, Munish, Abhishek Toofani, Siddharth Bansal, and Mohammad Asif. "Performance Comparison of Various Algorithms During Software Fault Prediction." International Journal of Grid and High Performance Computing 13, no. 2 (April 2021): 70–94. http://dx.doi.org/10.4018/ijghpc.2021040105.

Abstract:
Producing software of high quality is challenging in view of the large volume, size, and complexity of the developed software. Checking the software for faults in the early phases helps to bring down testing resources. This empirical study explores the performance of different machine learning models and fuzzy logic algorithms on the problem of predicting software fault proneness. The work experiments on the public-domain KC1 NASA data set. The performance of the different fault prediction methods is evaluated using parameters such as receiver operating characteristic (ROC) analysis and RMS (root mean squared) error, and a comparison among the different algorithms/models based on these results is presented in this paper.
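
A typical evaluation of a fault-proneness classifier with ROC and RMS-style metrics, sketched with scikit-learn on synthetic stand-in data (the study itself uses the NASA KC1 module-metrics set, and its exact models are not reproduced here):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import mean_squared_error, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a module-metrics table (label: fault-prone or not).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
proba = model.predict_proba(X_te)[:, 1]   # predicted fault-proneness

print("ROC AUC:", roc_auc_score(y_te, proba))
print("RMSE  :", np.sqrt(mean_squared_error(y_te, proba)))
```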
49

Zheng, Xiang Ming, and Jie Hu. "Air Compressor Testing System Based on the HMI Software." Advanced Materials Research 139-141 (October 2010): 1874–78. http://dx.doi.org/10.4028/www.scientific.net/amr.139-141.1874.

Abstract:
An assembly performance test bed is indispensable equipment in the development and production of air compressor assemblies. The test bed studied is an integrated mechanical, electronic and hydraulic piece of equipment, and its measuring and controlling system mainly consists of a computer system, process channels, the controlled object, and the measuring and controlling software. Guided by the "soft hardware" design conception, two "soft" subsystems were adopted in a one-level computer system to replace the generally adopted two-level "hard" system consisting of a measuring and controlling subsystem and a management subsystem. This improvement allows greater use of integrated sensors and intelligent controllers, resulting in simpler hardware and greater openness. Practice indicated that, with rapid parameter adjustment, accurate measuring, large testing scope and high automation, the system meets all the technical indexes of the test requirements.
50

Shadura, Oksana, Vassil Vassilev, and Brian Paul Bockelman. "Continuous Performance Benchmarking Framework for ROOT." EPJ Web of Conferences 214 (2019): 05003. http://dx.doi.org/10.1051/epjconf/201921405003.

Abstract:
Foundational software libraries such as ROOT are under intense pressure to avoid software regression, including performance regressions. Continuous performance benchmarking, as a part of continuous integration and other code quality testing, is an industry best-practice to understand how the performance of a software product evolves. We present a framework, built from industry best practices and tools, to help to understand ROOT code performance and monitor the efficiency of the code for several processor architectures. It additionally allows historical performance measurements for ROOT I/O, vectorization and parallelization sub-systems.
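
The essential regression check behind such a framework is small: time the workload, compare against a stored baseline, and fail the build beyond a tolerance. A Python sketch of that idea (the baseline file name, workload, and 10% threshold are assumptions, not ROOT's actual setup):

```python
import json
import timeit
from pathlib import Path

BASELINE = Path("perf_baseline.json")  # assumed baseline location
TOLERANCE = 1.10                        # fail if >10% slower (assumed)

def benchmark():
    # Stand-in workload; a real suite would time the library under test.
    return timeit.timeit("sorted(range(1000, 0, -1))", number=2_000)

current = benchmark()
if BASELINE.exists():
    baseline = json.loads(BASELINE.read_text())["seconds"]
    if current > baseline * TOLERANCE:
        raise SystemExit(f"perf regression: {current:.3f}s vs {baseline:.3f}s")
    print(f"ok: {current:.3f}s within tolerance of baseline")
else:
    BASELINE.write_text(json.dumps({"seconds": current}))
    print(f"baseline recorded: {current:.3f}s")
```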