Journal articles on the topic 'Automated test set generation'

1

Movva, Himadeep. "Automated Testing Using UiPath Test Suite: A Framework for Scalable and Efficient Testing." International Scientific Journal of Engineering and Management 02, no. 06 (2023): 1–8. https://doi.org/10.55041/isjem01208.

Abstract:
Testing is a key component of any software development effort. RPA, with its advanced features, including those powered by Artificial Intelligence, can bring state-of-the-art solutions to automating use cases that were once thought impossible to automate. Through an in-depth analysis of the features of the UiPath Test Suite, this research study explores the functionality of automated testing within UiPath and how a robust automated test management mechanism, built on an efficient testing framework, produces an RPA software product of robust design and the highest quality. The study also explores, in detail, the essential features of the UiPath Test Suite and how they can be used effectively to develop an effective RPA software testing strategy in UiPath projects. Test case generation is one of the crucial factors determining how efficient testing will be and how robust the resulting software product will be, which is the main reason for exploring automated test case generation through the UiPath Test Suite. Automated software testing has transformed quality assurance, increased productivity, and decreased manual labor. This study also examines the advanced features of the UiPath Test Suite and how they can be used for end-to-end testing.
Keywords: Test Suite, Test Manager, Test Sets, Test Cases, Data-Driven Test Cases, Task Capture, UiPath, Orchestrator, Software Testing, Data Service Entities, Choice Set, Framework, JSON, Test Data Queue
2

Cukic, Bojan, Brian J. Taylor, and Harshinder Singh. "Automated Generation of Test Trajectories for Embedded Flight Control Systems." International Journal of Software Engineering and Knowledge Engineering 12, no. 02 (2002): 175–200. http://dx.doi.org/10.1142/s0218194002000895.

Abstract:
Automated generation of test cases is a prerequisite for fast testing. Whereas the research in automated test data generation addressed the creation of individual test points, test trajectory generation has attracted limited attention. In simple terms, a test trajectory is defined as a series of data points, with each (possibly multidimensional) point relying upon the value(s) of previous point(s). Many embedded systems use data trajectories as inputs, including closed-loop process controllers, robotic manipulators, nuclear monitoring systems, and flight control systems. For these systems, testers can either handcraft test trajectories, use input trajectories from older versions of the system or, perhaps, collect test data in a high fidelity system simulator. While these are valid approaches, they are expensive and time-consuming, especially if the assessment goals require many tests. We developed a framework for expanding a small, conventionally developed set of test trajectories into a large set suitable, for example, for system safety assurance. Statistical regression is the core of this framework. The regression analysis builds a relationship between controllable independent variables and closely correlated dependent variables, which represent test trajectories. By perturbing the independent variables, new test trajectories are generated automatically. Our approach has been applied in the safety assessment of a fault tolerant flight control system. Linear regression, multiple linear regression, and autoregressive techniques are compared. The performance metrics include the speed of test generation and the percentage of "acceptable" trajectories, measured by the domain specific reasonableness checks.
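The regression-based expansion the abstract describes can be sketched in a few lines. This is an illustrative reconstruction under stated assumptions (a single controllable independent variable, ordinary least squares, and a uniform multiplicative perturbation), not the authors' implementation:

```python
import random

def fit_linear(xs, ys):
    # Ordinary least squares for y = a*x + b over one independent variable.
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

def expand_trajectories(xs, ys, n_new, noise=0.05):
    # Fit the dependent variable (the trajectory) to the controllable
    # independent variable, then perturb the independent variable to
    # synthesize new test trajectories automatically.
    a, b = fit_linear(xs, ys)
    new_trajectories = []
    for _ in range(n_new):
        perturbed = [x * (1 + random.uniform(-noise, noise)) for x in xs]
        new_trajectories.append([a * x + b for x in perturbed])
    return new_trajectories
```

Each generated trajectory stays close to the fitted relationship, which is why the paper pairs generation with domain-specific reasonableness checks on the output.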
3

Avritzer, A., E. de Souza e Silva, R. M. M. Leão, and E. J. Weyuker. "Automated generation of test cases using a performability model." IET Software 5, no. 2 (2011): 113. http://dx.doi.org/10.1049/iet-sen.2010.0035.

4

Farooq, M. S., and Tayyaba Tahreem. "Requirement-Based Automated Test Case Generation: Systematic Literature Review." VFAST Transactions on Software Engineering 10, no. 2 (2022): 133–42. http://dx.doi.org/10.21015/vtse.v10i2.940.

Abstract:
There are multiple software testing techniques, among them requirement-based testing (RBT), an approach in which the tester generates test cases from the requirements without considering the system's internal structure. In the current era, automated testing is used to minimize time, cost, and human effort; compared with automated testing, manual testing consumes far more of both. Because requirements are documented in natural language, no extra training is required to understand them, which makes RBT the most widely used testing technique. Test cases generated from customer requirements mainly cover functional behavior. Most approaches focus on real-time embedded systems rather than UML diagrams, because non-functional needs are not captured in test cases derived from UML diagrams. In some cases, metamodels can be used to extract information from requirements. Active testing approaches, bounded model checking, activity diagrams, Petri nets, the round-trip strategy, and extended use cases are among the typical ways to generate test cases. This article discusses multiple automated test case generation techniques that are not addressed in state-of-the-art literature reviews. The studies included in this systematic literature review (SLR) were selected against three research objectives and a set of high-quality evaluation criteria. A taxonomy of requirement-based test case generation techniques and tools is presented. Finally, gaps and challenges are discussed to assist researchers in pursuing future work.
5

Kodanda, Rami Reddy Manukonda. "Efficient Test Case Generation using Combinatorial Test Design: Towards Enhanced Testing Effectiveness and Resource Utilization." European Journal of Advances in Engineering and Technology 7, no. 12 (2020): 78–83. https://doi.org/10.5281/zenodo.12737422.

Abstract:
Combinatorial testing is a promising approach to software testing that aims to improve testing effectiveness and optimize resource utilization. It involves systematically exploring interactions among input parameters, generating a reduced set of test cases while maintaining adequate coverage. Empirical research shows that most software defects result from a few input parameter interactions, emphasizing the importance of adopting combinatorial testing methodologies. Automated combinatorial testing tools offer consistency, efficiency, and resource optimization in test case generation. However, the paper acknowledges limitations like the need for accurate parameter selection. Practical examples demonstrate its effectiveness in reducing testing times and costs. The paper also provides insights into combinatorial test design algorithms and tools, including the Advanced Combinatorial Testing System (ACTS).
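The core idea, covering every pairwise interaction with far fewer tests than the full Cartesian product, can be illustrated with a small greedy sketch (function and parameter names are ours; real tools such as ACTS use far more sophisticated covering-array algorithms):

```python
from itertools import combinations, product

def pairwise_suite(parameters):
    """Greedy pairwise (2-way) test suite: repeatedly pick the full
    combination that covers the most not-yet-covered value pairs.
    Illustrative only; not the algorithm used by ACTS."""
    names = sorted(parameters)
    idx_pairs = list(combinations(range(len(names)), 2))

    def pairs_of(candidate):
        # All 2-way (parameter, value) interactions this test exercises.
        return {(i, candidate[i], j, candidate[j]) for i, j in idx_pairs}

    candidates = list(product(*(parameters[n] for n in names)))
    uncovered = set().union(*(pairs_of(c) for c in candidates))
    suite = []
    while uncovered:
        best = max(candidates, key=lambda c: len(pairs_of(c) & uncovered))
        suite.append(dict(zip(names, best)))
        uncovered -= pairs_of(best)
    return suite
```

For three binary parameters this already yields fewer tests than the eight exhaustive combinations, and the savings grow rapidly as parameters and values are added.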
6

Marculescu, Bogdan, Man Zhang, and Andrea Arcuri. "On the Faults Found in REST APIs by Automated Test Generation." ACM Transactions on Software Engineering and Methodology 31, no. 3 (2022): 1–43. http://dx.doi.org/10.1145/3491038.

Abstract:
RESTful web services are often used for building a wide variety of enterprise applications. The diversity and increased number of applications using RESTful APIs means that increasing amounts of resources are spent developing and testing these systems. Automation in test data generation provides a useful way of generating test data in a fast and efficient manner. However, automated test generation often results in large test suites that are hard to evaluate and investigate manually. This article proposes a taxonomy of the faults we have found using search-based software testing techniques applied on RESTful APIs. The taxonomy is a first step in understanding, analyzing, and ultimately fixing software faults in web services and enterprise applications. We propose to apply a density-based clustering algorithm to the test cases evolved during the search to allow a better separation between different groups of faults. This is needed to enable engineers to highlight and focus on the most serious faults. Tests were automatically generated for a set of eight case studies, seven open-source and one industrial. The test cases generated during the search are clustered based on the reported last executed line and based on the error messages returned, when such error messages were available. The tests were manually evaluated to determine their root causes and to obtain additional information. The article presents a taxonomy of the faults found based on the manual analysis of 415 faults in the eight case studies and proposes a method to support the classification using clustering of the resulting test cases.
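The grouping idea can be approximated with something much simpler than the density-based clustering the article applies: bucket failing tests by the reported last executed line and the returned error message. This is a stand-in sketch (the record fields and names are our assumptions):

```python
from collections import defaultdict

def group_faults(test_results):
    """Group failing test cases by (last executed line, error message),
    a simplified stand-in for the density-based clustering used to
    separate groups of faults in large generated test suites."""
    clusters = defaultdict(list)
    for result in test_results:
        key = (result["last_line"], result.get("error", ""))
        clusters[key].append(result["test_id"])
    return dict(clusters)
```

Each bucket then represents one candidate fault for an engineer to triage, rather than hundreds of individually generated test cases.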
7

Chandra Prakash, V., Subhash Tatale, Vrushali Kondhalkar, and Laxmi Bewoor. "A Critical Review on Automated Test Case Generation for Conducting Combinatorial Testing Using Particle Swarm Optimization." International Journal of Engineering & Technology 7, no. 3.8 (2018): 22. http://dx.doi.org/10.14419/ijet.v7i3.8.15212.

Abstract:
In the software development life cycle, testing plays a significant role in verifying requirement specification, analysis, design, and coding, and in estimating the reliability of a software system. A test manager can write a set of test cases manually for smaller software systems. For an extensive software system, however, the test suite is normally large and prone to errors such as omission of important test cases, duplication of test cases, and contradictory test cases. When test cases are generated automatically by a tool in an intelligent way, such errors can be eliminated. In addition, it is even possible to reduce the size of the test suite and thereby decrease the cost and time of software testing, though reducing test suite size is a challenging job. When the inputs of the Software under Test (SUT) interact, combinatorial testing is essential to raise reliability from 72% to 91% or even higher. A meta-heuristic algorithm such as Particle Swarm Optimization (PSO) solves the optimization problem of automated combinatorial test case generation, and many authors have contributed to this field. We have reviewed important research papers on automated test case generation for combinatorial testing using PSO. This paper provides a critical review of the use of PSO and its variants for solving the classical optimization problem of automatic test case generation for combinatorial testing.
8

Kumar, Gagan, and Vinay Chopra. "Automatic Test Data Generation for Basis Path Testing." Indian Journal of Science and Technology 15, no. 41 (2022): 2151–61. https://doi.org/10.17485/IJST/v15i41.1503.

Abstract:
Objectives: This paper presents a new hybrid ACO-NSA algorithm for the automatic test data generation problem with path coverage as the objective function. Method: Test data (detectors) are first generated with the ant colony optimization algorithm (ACO), and the generated data set (detector set) is then refined by a negative selection algorithm (NSA) using Hamming distance. Findings: The algorithm's performance is tested on several benchmark problems with different data types and variables for the metrics average coverage, average generations, average time, and success rate; an iteration limit of 1000 is set for average coverage, average generations, and average time, and 200 for success rate. The results obtained from the proposed approach are compared with some existing approaches and show high efficacy, higher path coverage, minimal data redundancy, and less execution time. Applications: The approach can be applied in any software development process to reduce testing effort. Novelty: The approach combines two distinct methodologies, metaheuristic search and artificial immune search, with path coverage as the fitness function. It achieves 99.5% average path coverage with an average of 2.72 generations in 0.07 ns, and a 99.9% success rate, which is significantly better than comparable approaches.
Keywords: Test data generation; Metaheuristic search; Artificial immune search; Ant colony optimization; Negative selection algorithm; Path coverage
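The NSA refinement step, filtering generated test data by Hamming distance so that redundant detectors are discarded, can be sketched as follows (function and parameter names are ours; the real algorithm's detector generation and threshold selection are more involved):

```python
def hamming(a, b):
    # Number of positions at which two equal-length strings differ.
    return sum(x != y for x, y in zip(a, b))

def negative_selection(candidates, detectors, threshold):
    """Keep only candidates whose Hamming distance to every accepted
    detector is at least `threshold`, trimming redundant test data.
    Simplified sketch of an NSA-style refinement step."""
    accepted = list(detectors)
    for c in candidates:
        if all(hamming(c, d) >= threshold for d in accepted):
            accepted.append(c)
    return accepted
```

Candidates too similar to data already in the detector set are rejected, which is what keeps redundancy in the generated test data low.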
9

Imam, Muhammad Hasan, Imran Ali Tasadduq, Abdul-Rahim Ahmad, Fahd Aldosari, and Haris Khan. "Automated Generation of Course Improvement Plans Using Expert System." International Journal of Quality Assurance in Engineering and Technology Education 6, no. 1 (2017): 1–12. http://dx.doi.org/10.4018/ijqaete.2017010101.

Abstract:
To satisfy ABET's continuous improvement criterion, an instructor teaching a course suggests, at the end of the course, an improvement plan to be implemented the next time the course is taught. Preparing such a plan may be mandatory if a pre-specified target level of student learning is not attained. Since manual preparation of a course improvement plan is difficult, the idea of generating it with an expert system is presented, with the objective of making the task easier and more enjoyable. The proposed expert system has a set of remedies and a set of rules in a database. A web-based interface queries the instructor about the teaching and assessment tools used in the course, and the inference engine selects the most appropriate remedy based on the instructor's preferences. A cloud implementation of the expert system has been tested on a course.
10

Antonelli, Leandro, Mariángeles Hozikian, Guy Camilleri, et al. "Wiki support for automated definition of software test cases." Kybernetes 49, no. 4 (2019): 1305–24. http://dx.doi.org/10.1108/k-10-2018-0548.

Abstract:
Purpose: The design of tests is a very important step in the software development process because it allows us to match users' expectations with the finished product. Considered a cumbersome activity, test generation has seen efforts to automate it and alleviate its burden, but it remains a largely neglected step. The study proposes taking advantage of existing requirement artifacts, such as scenarios that describe the dynamics of the domain at a very early stage of software development, to derive tests from them. Design/methodology/approach: In particular, the proposed approach complements the textually described scenarios with a glossary, the language extended lexicon, and proposes a set of rules to derive tests from scenarios. The tests are then described using the task/method model. Findings: The main findings consist of an extension of a previously presented set of rules, and a tool, based on a MediaWiki platform, that records scenarios and the language extended lexicon and implements the rules to obtain the tests. Originality/value: The main originality of this study is the glossary that complements scenarios, the semantic support for obtaining tests, and the tool that automates the approach.
11

Avdeenko, Tatiana, and Konstantin Serdyukov. "Automated Test Data Generation Based on a Genetic Algorithm with Maximum Code Coverage and Population Diversity." Applied Sciences 11, no. 10 (2021): 4673. http://dx.doi.org/10.3390/app11104673.

Abstract:
In the present paper, we investigate an approach to intelligent support of the white-box software testing process based on an evolutionary paradigm. As part of this approach, we solve the urgent problem of automatically generating the optimal set of test data that provides maximum statement coverage of the code under test. We propose a fitness function containing two terms and, accordingly, two versions of the genetic algorithm (GA). The first term of the fitness function is responsible for the complexity of the code statements executed on the path generated by the current individual test case (the current set of statements). The second term expresses the maximum possible difference between the current set of statements and the set of statements covered by the remaining test cases in the population. Using only the first term does not make it possible to obtain 100 percent statement coverage with the test cases generated in one population, and therefore implies repeatedly relaunching the GA with changed weights of the code statements, which requires recompiling the code under test. By using both terms of the proposed fitness function, we obtain maximum statement coverage and population diversity in a single launch of the GA. The optimal relation between the two terms of the fitness function was obtained for two very different programs under test.
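The two-term fitness the abstract describes has roughly this shape (a minimal sketch; the weighting scheme, normalization, and the mixing parameter `alpha` are our assumptions, not the paper's exact formulation):

```python
def fitness(covered, others_covered, weights, alpha=0.5):
    """Two-term fitness for one test case in the population:
    - complexity: weighted sum over the statements this test executes;
    - diversity: how many of its statements no other test case covers.
    `covered` is the set of statement ids executed by this test case,
    `others_covered` the union over the rest of the population, and
    `weights` maps statement id -> complexity weight."""
    complexity = sum(weights.get(s, 1.0) for s in covered)
    diversity = len(covered - others_covered)
    return alpha * complexity + (1 - alpha) * diversity
```

With `alpha = 1.0` the diversity term vanishes, which mirrors the single-term variant the authors show cannot reach full coverage in one GA launch.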
12

Le Thi My, Hanh, Binh Nguyen Thanh, and Tung Khuat Thanh. "Survey on Mutation-based Test Data Generation." International Journal of Electrical and Computer Engineering (IJECE) 5, no. 5 (2015): 1164. http://dx.doi.org/10.11591/ijece.v5i5.pp1164-1173.

Abstract:
The critical activity of testing is the systematic selection of suitable test cases that are able to reveal faults with high probability; mutation coverage is therefore an effective criterion for generating test data. Since the test data generation process is very labor-intensive, time-consuming, and error-prone when done manually, automating this process is highly desirable. Research on automatic test data generation has contributed a set of tools, approaches, developments, and empirical results. In this paper, we analyse and conduct a comprehensive survey on mutation-based test data generation. The paper also analyses the trends in this field.
13

Leskó, Dániel, and Máté Tejfel. "A domain based new code coverage metric and a related automated test data generation method." Annales Universitatis Scientiarum Budapestinensis de Rolando Eötvös Nominatae. Sectio computatorica, no. 36 (2012): 217–40. https://doi.org/10.71352/ac.36.217.

Abstract:
Since programmers began writing programs there has always been a need to analyze their correctness, which is mostly done by testing. However, testing our programs does not give any direct quality guarantee, because the result highly depends on the test data set used. Numerous code coverage metrics can be applied to measure the quality of a test set, but the majority of them were primarily designed for imperative programs and rely mostly on control structures such as branching and looping. The problem is that expression-heavy programs and functional programming languages normally do not have these structures; hence the corresponding code coverage metrics are unsuitable at best, and mostly useless, for these kinds of programs. In this paper we propose a new code coverage (domain coverage) metric based on (arithmetic) expressions. The relations and effects among them are taken into account, as a kind of semantic information about the programming language constructs. The paper also presents an automated test data generation method related to domain coverage that aims to reach the highest possible coverage ratio.
14

Creuse, L., M. Eyraud, and V. Garèse. "Automatic Test Value Generation for Ada." ACM SIGAda Ada Letters 43, no. 1 (2023): 100–105. http://dx.doi.org/10.1145/3631483.3631500.

Abstract:
This article introduces novel tools to automatically generate pertinent Ada values in order to produce higher-quality tests for Ada subprograms. A first tool generates valid Ada values based on a structural analysis of the types of the parameters of the subprogram under test, following various customizable strategies. Those values are then filtered to satisfy the specifications of the subprogram, and new coverage criteria for executable specifications are used to assess the relevance of the generated test suite. This first set of values is then used as seeds both for a fuzzing process and for a symbolic execution campaign, from which values of interest are extracted. This integrated process enables users to generate a high-value starting test corpus, which can then be expanded upon by domain-specific tests.
15

Wang, Ruipeng, Zulie Pan, Fan Shi, and Min Zhang. "AEMB: An Automated Exploit Mitigation Bypassing Solution." Applied Sciences 11, no. 20 (2021): 9727. http://dx.doi.org/10.3390/app11209727.

Abstract:
Modern operating systems set exploit mitigations to thwart exploits, which has also become a barrier to automated exploit generation (AEG). Many current AEG solutions do not fully account for exploit mitigations and, as a result, cannot accurately assess the exploitability of vulnerabilities in such settings. This paper proposes AEMB, an automated solution for bypassing exploit mitigations and generating usable exploits (EXPs). Initially, AEMB identifies exploit mitigations in the system based on characteristics of the program execution environment. Then, AEMB implements mitigation-bypassing payload generation by modeling expert experience and constructs the corresponding constraints. Next, during the program's execution, AEMB uses symbolic execution to collect symbol information and create exploit constraints. Finally, AEMB utilizes a solver to solve the constraints, including payload constraints and exploit constraints, to generate the EXP. In this paper, we evaluated a prototype of AEMB on six test programs and seven real-world applications, and conducted 54 sets of experiments on six different combinations of exploit mitigations. Experimental results indicate that AEMB can automatically overcome exploit mitigations and produce successful exploits for 11 of 13 applications.
16

Betts, Kevin M., and Mikel D. Petty. "Automated Search-Based Robustness Testing for Autonomous Vehicle Software." Modelling and Simulation in Engineering 2016 (2016): 1–15. http://dx.doi.org/10.1155/2016/5309348.

Abstract:
Autonomous systems must successfully operate in complex time-varying spatial environments even when dealing with system faults that may occur during a mission. Consequently, evaluating the robustness, or ability to operate correctly under unexpected conditions, of autonomous vehicle control software is an increasingly important issue in software testing. New methods to automatically generate test cases for robustness testing of autonomous vehicle control software in closed-loop simulation are needed. Search-based testing techniques were used to automatically generate test cases, consisting of initial conditions and fault sequences, intended to challenge the control software more than test cases generated using current methods. Two different search-based testing methods, genetic algorithms and surrogate-based optimization, were used to generate test cases for a simulated unmanned aerial vehicle attempting to fly through an entryway. The effectiveness of the search-based methods in generating challenging test cases was compared to both a truth reference (full combinatorial testing) and the method most commonly used today (Monte Carlo testing). The search-based testing techniques demonstrated better performance than Monte Carlo testing for both of the test case generation performance metrics: (1) finding the single most challenging test case and (2) finding the set of fifty test cases with the highest mean degree of challenge.
17

Chung, In-Sang. "Automated Black-Box Test Case Generation for MC/DC with SAT." KIPS Transactions:PartD 16D, no. 6 (2009): 911–20. http://dx.doi.org/10.3745/kipstd.2009.16d.6.911.

18

Sahoo, Rashmi Rekha, and Mitrabinda Ray. "Metaheuristic Techniques for Test Case Generation." Journal of Information Technology Research 11, no. 1 (2018): 158–71. http://dx.doi.org/10.4018/jitr.2018010110.

Abstract:
The primary objective of software testing is to locate as many bugs as possible in software by using an optimum set of test cases. An optimum set of test cases is obtained by a selection procedure that can be viewed as an optimization problem, so metaheuristic optimizing (searching) techniques have been widely used to automate the software testing task. The application of metaheuristic searching techniques in software testing is termed Search Based Testing. Non-redundant, reliable, and optimized test cases can be generated by search based testing with less effort and time. This article presents a systematic review of several metaheuristic techniques, such as Genetic Algorithms, Particle Swarm Optimization, Ant Colony Optimization, Bee Colony Optimization, Cuckoo Search, Tabu Search, and modified versions of these algorithms, used for test case generation. The authors also provide a framework showing the advantages, limitations, and future scope or gaps of these research works, which will help further research in this area.
19

Onishchenko, Volodymyr, Oleksandr Puchkov, and Ihor Subach. "Investigation of associative rule search method for detection of cyber incidents in information management systems and security events using CICIDS2018 test data set." Collection "Information Technology and Security" 12, no. 1 (2024): 91–101. http://dx.doi.org/10.20535/2411-1031.2024.12.1.306275.

Abstract:
Automated rule generation for cyber incident identification in information and security event management systems (SIEM, etc.) plays a crucial role in modern cyberspace defense, where data volumes are increasing exponentially and the complexity and speed of cyber attacks are constantly rising. This article explores approaches and methods for automating the generation of cyber incident identification rules to reduce the need for manual work and to ensure flexibility in adapting to changes in threat models. The research highlights the need for modern Intelligent Data Analysis (IDA) techniques to process large volumes of data and to formulate behavior rules for systems and activities in information systems. The conclusion emphasizes the necessity of integrating multiple research directions, including analyzing existing methods and applying IDA algorithms to mine associative rules from large datasets. Key challenges addressed include the complexity of data modeling, the need to adapt to data changes in a dynamic cyber attack landscape, and the speed of rule generation algorithms. The "curse of dimensionality" and the identification of cybersecurity event sequences over time, particularly relevant to SIEM, are discussed. The research objective is defined as the analysis and evaluation of various mathematical methods for automated associative rule generation to identify cyber incidents in SIEM. The most effective strategies for enhancing the efficiency of associative rule generation, and for adapting it to the dynamically changing state of the cybersecurity system, are identified to strengthen the protection of information infrastructure.
20

Radovic, Maja, Nenad Petrovic, and Milorad Tosic. "An Ontology-Driven Learning Assessment Using the Script Concordance Test." Applied Sciences 12, no. 3 (2022): 1472. http://dx.doi.org/10.3390/app12031472.

Abstract:
Assessing the level of domain-specific reasoning acquired by students is one of the major challenges in education, particularly in medical education. Considering the importance of clinical reasoning in preclinical and clinical practice, it is necessary to evaluate students' learning achievements accordingly. The traditional ways of assessing clinical reasoning include long-case exams, oral exams, and objective structured clinical examinations. However, traditional assessment techniques are not enough to meet the requirements of the new reality, due to limited scalability and the difficulty of adopting them in online education. In recent decades, the script concordance test (SCT) has emerged as a promising assessment tool, particularly in medical education. The question is whether the usability of SCT could be raised to a level high enough to match current education requirements by exploiting the opportunities that new technologies provide, particularly semantic knowledge graphs (SKGs) and ontologies. In this paper, an ontology-driven learning assessment is proposed using a novel automated SCT generation platform. The SCTonto ontology is adopted for knowledge representation in SCT question generation, with a focus on using electronic health records data for medical education. Direct and indirect strategies for generating Likert-type scores for SCT are described in detail as well. The proposed automatic question generation was evaluated against traditional, manually created SCTs, and the results showed that the time required for test creation was significantly reduced, confirming significant scalability improvements with respect to traditional approaches.
21

Xiao, Lei, Ru-Xue Bai, Ke-Shou Wu, and Rong-Shang Chen. "Research and Application of Automatic Test Case Generation Method Based on User Interface and Business Flow Chart." Journal of Internet Technology 26, no. 3 (2025): 367–78. https://doi.org/10.70003/160792642025052603009.

Abstract:
Test case design is a critical task in software testing. Manual test case generation is time-consuming and challenging to maintain. To address these issues, this paper proposes a method for automatically generating test cases based on user interface and flowchart analysis. Firstly, YOLOv8 object detection and EasyOCR text recognition are used to identify control information within the interface. Secondly, the Faker library is utilized to generate corresponding test data. Finally, a text generation program is employed to transform control information and test data into a set of interface test cases. Additionally, a circular traversal algorithm is applied to traverse the flowchart, generating test paths that are combined with interface test cases to form a complete set of test cases. To validate the effectiveness of the method, corresponding tools were developed, and 209 test cases were generated for three systems using this approach. Experimental results demonstrate that our proposed method performs well in terms of test case generation efficiency, defect discovery, and maintainability.
APA, Harvard, Vancouver, ISO, and other styles
22

Hooda, Itti, and R. S. Chhillar. "Test Case Optimization and Redundancy Reduction Using GA and Neural Networks." International Journal of Electrical and Computer Engineering (IJECE) 8, no. 6 (2018): 5449. http://dx.doi.org/10.11591/ijece.v8i6.pp5449-5456.

Full text
Abstract:
More than 50% of software development effort is spent in the testing phase of a typical software development project. Test case design as well as execution consume a lot of time; hence, automated generation of test cases is highly desirable. Here a novel testing methodology is presented for testing object-oriented software based on UML state chart diagrams. In this approach, a function minimization technique is applied to generate test cases automatically from UML state chart diagrams. Software testing forms an integral part of the software development life cycle. Since the objective of testing is to ensure the conformity of an application to its specification, a test “oracle” is needed to determine whether a given test case exposes a fault or not. An automated oracle to support the activities of human testers can reduce the actual cost of the testing process and the related maintenance costs. In this paper, a new concept is presented that uses a UML state chart diagram and tables for test case generation, with an artificial neural network as an optimization tool for reducing redundancy in the test cases generated by the genetic algorithm. The neural network is trained by the back-propagation algorithm on a set of test cases applied to the original version of the system.
APA, Harvard, Vancouver, ISO, and other styles
23

Krak, Yu V., O. V. Barmak, and O. V. Mazurets. "The practice investigation of the information technology efficiency for automated definition of terms in the semantic content of educational materials." PROBLEMS IN PROGRAMMING, no. 2-3 (June 2016): 237–45. http://dx.doi.org/10.15407/pp2016.02-03.237.

Full text
Abstract:
An information technology based on dispersion evaluation is presented that allows the semantic terms in the content of educational materials to be defined automatically with sufficiently high efficiency. The factors that hinder effective analysis of educational materials are considered. The high efficiency of the proposed technology makes it applicable to a range of problems, such as estimating the correspondence of educational materials to requirements, estimating the correspondence of a set of test tasks to educational materials, semantic support for test creation, and automated keyword list and abstract generation.
APA, Harvard, Vancouver, ISO, and other styles
24

Agrawal, Nishant. "Automatic Test Pattern Generation using Grover’s Algorithm." International Journal for Research in Applied Science and Engineering Technology 9, no. VI (2021): 2373–79. http://dx.doi.org/10.22214/ijraset.2021.34837.

Full text
Abstract:
Quantum computing is an exciting new field at the intersection of computer science, physics and mathematics. It distills the central concepts of quantum mechanics into their simplest forms, peeling away the complications of the physical world. Any combinational circuit that has a single stuck-at fault can be tested by applying a set of inputs that drive the circuit and verifying the output response; the outputs of the circuit differ from the desired ones if the fault exists. This project describes a method of generating test patterns using Boolean satisfiability. First, a Boolean formula is constructed to express the Boolean difference between a fault-free circuit and a faulty circuit. Second, a satisfiability algorithm is applied to the formula from the previous step, with Grover's algorithm used to solve the satisfiability problem. The Boolean satisfiability formulation of Automatic Test Pattern Generation (ATPG) is implemented on IBM Quantum Experience. A Python program generates the Boolean expression from an input file and converts it into Conjunctive Normal Form (CNF), which is passed to a Grover oracle and run on the IBM simulator, producing excellent results on combinational circuits for test pattern generation with a quadratic speedup: Grover's algorithm solves this problem in O(√N) time.
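The classical core of this formulation, finding an input on which the fault-free and faulty circuits differ (a satisfying assignment of their Boolean difference), can be sketched by exhaustive search; the example circuit y = (a AND b) OR c and the stuck-at-0 fault site are assumptions for illustration, and the quantum (Grover) part is omitted:

```python
from itertools import product

def good(a, b, c):
    """Fault-free circuit: y = (a AND b) OR c."""
    return (a & b) | c

def faulty(a, b, c):
    """Same circuit with the AND gate output stuck-at-0."""
    return 0 | c

def find_test_pattern():
    """Search for an input where the two circuits disagree."""
    for a, b, c in product([0, 1], repeat=3):
        if good(a, b, c) != faulty(a, b, c):
            return (a, b, c)  # this input exposes the fault
    return None  # fault is undetectable

pattern = find_test_pattern()
```

A SAT (or Grover) formulation replaces this O(2^n) loop with a search over the CNF of `good XOR faulty`, which is where the quadratic speedup applies.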
APA, Harvard, Vancouver, ISO, and other styles
25

Ko, Woori, Sangmin Park, Jaewoong Yun, Sungho Park, and Ilsoo Yun. "Development of a Framework for Generating Driving Safety Assessment Scenarios for Automated Vehicles." Sensors 22, no. 16 (2022): 6031. http://dx.doi.org/10.3390/s22166031.

Full text
Abstract:
Despite the technological advances in automated driving systems, traffic accidents involving automated vehicles (AVs) continue to occur, raising concerns over the safety and reliability of automated driving. For the smooth commercialization of AVs, it is necessary to systematically assess the driving safety of AVs under the various situations that they potentially face. In this context, these various situations are mostly implemented by using systematically developed scenarios. In accordance with this need, a scenario generation framework for the assessment of the driving safety of AVs is proposed by this study. The proposed framework provides a unified form of assessment with key components for each scenario stage to facilitate systematization and objectivity. The performance of the driving safety assessment scenarios generated within the proposed framework was verified. Traffic accident report data were used for verification, and the usefulness of the proposed framework was confirmed by generating a set of scenarios, ranging from functional scenarios to test cases. The scenario generation framework proposed in this study can be used to provide sustainable scenarios. In addition, from this, it is possible to create assessment scenarios for all road types and various assessment spaces, such as simulations, proving grounds, and real roads.
APA, Harvard, Vancouver, ISO, and other styles
26

Chen, Simin, XiaoNing Feng, Xiaohong Han, Cong Liu, and Wei Yang. "PPM: Automated Generation of Diverse Programming Problems for Benchmarking Code Generation Models." Proceedings of the ACM on Software Engineering 1, FSE (2024): 1194–215. http://dx.doi.org/10.1145/3643780.

Full text
Abstract:
In recent times, a plethora of Large Code Generation Models (LCGMs) have been proposed, showcasing significant potential in assisting developers with complex programming tasks. Within this surge of LCGM proposals, a critical aspect of code generation research is effectively benchmarking the programming capabilities of models. Benchmarking LCGMs necessitates the creation of a set of diverse programming problems, each comprising a prompt (including the task description), a canonical solution, and test inputs. Existing methods for constructing such a problem set fall into two main types: manual methods and perturbation-based methods. Both exhibit major limitations. Manual methods require substantial human effort and are not easily scalable; moreover, manually created programming problem sets struggle to maintain long-term data integrity due to the greedy training data collection mechanism of LCGMs. Perturbation-based approaches, on the other hand, primarily produce semantically homogeneous problems whose Canonical Solutions are identical to those of the seed problems, and they tend to introduce typos into the prompt that are easily detected and auto-corrected by IDEs, rendering them ineffective and unrealistic.
Addressing the aforementioned limitations presents several challenges: (1) how to automatically generate semantically diverse Canonical Solutions to enable comprehensive benchmarking of the models, (2) how to ensure long-term data integrity to prevent data contamination, and (3) how to generate natural and realistic programming problems. To tackle the first challenge, we draw a key insight from viewing a program as a series of mappings from the input domain to the output domain. These mappings can be transformed, split, reordered, or merged to construct new programs. Based on this insight, we propose programming problem merging, where two existing programming problems are combined to create new ones. To address the second challenge, we incorporate randomness into our programming problem-generation process; our tool can probabilistically guarantee no data repetition across two random trials. To tackle the third challenge, we propose the concept of a Lambda Programming Problem, comprising a concise one-sentence task description in natural language accompanied by a corresponding program implementation. Our tool ensures the program prompt is grammatically correct and leverages return-value type analysis to verify the correctness of newly created Canonical Solutions. In our empirical evaluation, we apply our tool to two widely used datasets and compare it against nine baseline methods using eight code generation models. The results demonstrate the effectiveness of our tool in generating more challenging, diverse, and natural programming problems compared with the baselines.
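The programming-problem-merging idea can be illustrated with a minimal sketch: treating each canonical solution as an input-to-output mapping, composing two of them yields a semantically new problem rather than a perturbation of either seed. The two seed problems below are hypothetical examples, not taken from the paper's datasets:

```python
def seed_reverse(s: str) -> str:
    """Seed problem 1: reverse a string."""
    return s[::-1]

def seed_count_vowels(s: str) -> int:
    """Seed problem 2: count the vowels in a string."""
    return sum(ch in "aeiou" for ch in s.lower())

def merge_sequential(f, g):
    """Merge two problems by feeding f's output into g.
    (f's output type must match g's input type.)"""
    return lambda x: g(f(x))

# New problem: "count the vowels of the reversed string" -- its canonical
# solution differs from both seeds, unlike a typo-style perturbation.
merged = merge_sequential(seed_reverse, seed_count_vowels)
```

A real merging tool would also compose the natural-language task descriptions and regenerate test inputs, which this sketch omits.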
APA, Harvard, Vancouver, ISO, and other styles
27

Malinova, Anna, and Olga Rahneva. "AUTOMATIC GENERATION OF ENGLISH LANGUAGE TEST QUESTIONS USING MATHEMATICA." CBU International Conference Proceedings 4 (September 17, 2017): 906–9. http://dx.doi.org/10.12955/cbup.v4.794.

Full text
Abstract:
This paper describes a computer algebra-aided generation of two types of English language tests, which further develops our recent work in this domain. The computer algebra system Wolfram Mathematica significantly advances the process of English language testing and assessment. The automatic generation of questions allows us to create a large set of equivalent questions on a certain topic from a small amount of input values. This reduces authoring time during test creation, enables the application of equal criteria and fair assessment, and decreases the influence of subjective factors. In our previous work, we proposed methods for the automatic generation of English language test questions. These were aimed at evaluating the students’ knowledge of lexical and grammatical structures found in the text using test questions that involved matching words and their meaning, matching parts of the whole, and finding synonyms, antonyms, and generalizations or specializations of words. This paper provides new methods for the automatic generation of English language test questions, including questions for testing the students’ knowledge of adverbs and adjectives, as well as word formation, especially negative forms of adjectives.
APA, Harvard, Vancouver, ISO, and other styles
28

Álvez, Javier, Montserrat Hermo, Paqui Lucio, and German Rigau. "Automatic white-box testing of first-order logic ontologies." Journal of Logic and Computation 29, no. 5 (2019): 723–51. http://dx.doi.org/10.1093/logcom/exz001.

Full text
Abstract:
Formal ontologies are axiomatizations in a logic-based formalism. The development of formal ontologies is generating considerable research on the use of automated reasoning techniques and tools that help in ontology engineering. One of the main aims is to refine and to improve axiomatizations for enabling automated reasoning tools to efficiently infer reliable information. Defects in the axiomatization can not only cause wrong inferences but also hinder the inference of expected information, either by increasing its computational cost or by preventing the inference altogether. In this paper, we introduce a novel, fully automatic white-box testing framework for first-order logic (FOL) ontologies. Our methodology is based on the detection of inference-based redundancies in the given axiomatization. The application of the proposed testing method is fully automatic since (i) the automated generation of tests is guided only by the syntax of axioms and (ii) the evaluation of tests is performed by automated theorem provers (ATPs). Our proposal enables the detection of defects and serves to certify the grade of suitability, for reasoning purposes, of every axiom. We formally define the set of tests that are (automatically) generated from any axiom and prove that every test is logically related to redundancies in the axiom from which the test has been generated. We have implemented our method and used this implementation to automatically detect several non-trivial defects that were hidden in various FOL ontologies. Throughout the paper we provide illustrative examples of these defects, explain how they were found and how each proof, given by an ATP, provides useful hints on the nature of each defect. Additionally, by correcting all the detected defects, we have obtained an improved version of one of the tested ontologies: Adimen-SUMO.
APA, Harvard, Vancouver, ISO, and other styles
29

Barlybayev, Alibek, and Bakhyt Matkarimov. "Development of system for generating questions, answers, distractors using transformers." International Journal of Electrical and Computer Engineering (IJECE) 14, no. 2 (2024): 1851–63. https://doi.org/10.11591/ijece.v14i2.pp1851-1863.

Full text
Abstract:
The goal of this article is to develop a multiple-choice question generation system that has a number of advantages, including quick scoring, consistent grading, and a short exam period. To overcome this difficulty, we suggest treating question creation as a sequence-to-sequence learning problem, where a sentence from a text passage can be directly mapped to a question. Our approach is data-driven, which eliminates the need for manual rule implementation. This strategy is more effective and removes potential errors that could result from incorrect human input. Our work on question generation, particularly the use of the transformer model, is informed by recent developments in a number of domains, including neural machine translation, generalization, and image captioning.
APA, Harvard, Vancouver, ISO, and other styles
30

Les, Tomasz, Tomasz Markiewicz, Miroslaw Dziekiewicz, and Malgorzata Lorent. "Kidney Boundary Detection Algorithm Based on Extended Maxima Transformations for Computed Tomography Diagnosis." Applied Sciences 10, no. 21 (2020): 7512. http://dx.doi.org/10.3390/app10217512.

Full text
Abstract:
This article describes an automated computed tomography (CT) image processing technique supporting kidney detection. The main goal of the study is the fully automatic generation of a kidney boundary for each slice in the set of slices obtained in a computed tomography examination. This work describes three main tasks in the process of automatic kidney identification: the initial location of the kidneys using the U-Net convolutional neural network, the generation of an accurate kidney boundary using the extended maxima transformation, and the application of a slice scanning algorithm that supports the generation of the result for the next slice using the result of the previous one. To assess the quality of the proposed technique, automatic numerical tests were performed. In the test section, we present numerical results, calculating the F1-score of kidney boundary detection by the automatic system compared to the kidney boundaries manually generated by a human expert from a medical center. The influence of U-Net support in the initial detection of the kidney on the final F1-score of the generated kidney outline was also evaluated. The F1-score achieved by the automated system is 84% ± 10% for the system without U-Net support and 89% ± 9% for the system with U-Net support. Performance tests show that the presented technique can generate the kidney boundary up to 3 times faster than a raw U-Net-based network. The proposed kidney recognition system can be successfully used in systems that require very fast image processing. The measurable effect of the developed techniques is practical help for doctors and specialists from medical centers dealing with the analysis and description of medical image data.
APA, Harvard, Vancouver, ISO, and other styles
31

Koszelew, Jolanta, Joanna Karbowska-Chilinska, Krzysztof Ostrowski, Piotr Kuczyński, Eric Kulbiej, and Piotr Wołejsza. "Beam Search Algorithm for Anti-Collision Trajectory Planning for Many-to-Many Encounter Situations with Autonomous Surface Vehicles." Sensors 20, no. 15 (2020): 4115. http://dx.doi.org/10.3390/s20154115.

Full text
Abstract:
A single anti-collision trajectory generation problem for an “own” vessel only is significantly different from the challenge of generating a whole set of safe trajectories for multi-surface vehicle encounter situations in the open sea. Effective solutions for such problems are needed these days, as we are entering the era of autonomous ships. The article specifies the problem of anti-collision trajectory planning in many-to-many encounter situations involving moving autonomous surface vehicles, excluding the Collision Regulations (COLREGs) and vehicle dynamics. The proposed original multi-surface vehicle beam search algorithm (MBSA), based on the beam search strategy, solves this problem. The general idea of the MBSA involves applying a solution for one-to-many encounter situations (using the beam search algorithm, BSA), which was tested on real automated radar plotting aid (ARPA) and automatic identification system (AIS) data. The MBSA itself was tested on simulated data, and the results are discussed in the final part.
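The generic beam-search step that the BSA/MBSA builds on can be sketched as follows (the layered candidate nodes and the scoring function are hypothetical toy values, not the paper's trajectory model):

```python
def beam_search(layers, score, beam_width=2):
    """Build a path one layer at a time, keeping only the `beam_width`
    best-scoring partial paths at each step.

    layers: list of candidate-node lists (one list per step)
    score:  function mapping a partial path to a float (higher is better)
    """
    beams = [[]]  # start with a single empty partial path
    for layer in layers:
        # Expand every surviving partial path with every candidate node.
        candidates = [path + [node] for path in beams for node in layer]
        # Prune: keep only the top `beam_width` partial paths.
        candidates.sort(key=score, reverse=True)
        beams = candidates[:beam_width]
    return beams[0]

# Toy example: pick one number per layer, maximizing the sum. Beam search
# matters when scores interact along the path (e.g. collision constraints).
layers = [[1, 3], [2, 5], [4, 1]]
best = beam_search(layers, score=sum, beam_width=2)
```

In the anti-collision setting, each layer would hold candidate headings or waypoints and the score would penalize closest-point-of-approach violations with the other vessels.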
APA, Harvard, Vancouver, ISO, and other styles
32

Chung, In-Sang. "Automated Test Data Generation for Testing Programs with Flag Variables Based on SAT." KIPS Transactions:PartD 16D, no. 3 (2009): 371–80. http://dx.doi.org/10.3745/kipstd.2009.16-d.3.371.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Dong, Guanting, Xiaoshuai Song, Yutao Zhu, Runqi Qiao, Zhicheng Dou, and Ji-Rong Wen. "Toward Verifiable Instruction-Following Alignment for Retrieval Augmented Generation." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 22 (2025): 23796–804. https://doi.org/10.1609/aaai.v39i22.34551.

Full text
Abstract:
Following natural instructions is crucial for the effective application of Retrieval-Augmented Generation (RAG) systems. Despite recent advancements in Large Language Models (LLMs), research on assessing and improving instruction-following (IF) alignment within the RAG domain remains limited. To address this issue, we propose VIF-RAG, an automated, scalable, and verifiable synthetic pipeline for instruction-following alignment in RAG systems. We start from a minimal set of manually crafted atomic instructions, which is scaled to roughly 100k instructions through automated processes. To further bridge the gap in instruction-following auto-evaluation for RAG systems, we introduce the FollowRAG Benchmark, which includes approximately 3K test samples covering 22 categories of general instruction constraints and four knowledge-intensive QA datasets. Due to its robust pipeline design, FollowRAG can seamlessly integrate with different RAG benchmarks. Using FollowRAG and eight widely used IF and foundational-abilities benchmarks for LLMs, we demonstrate that VIF-RAG markedly enhances LLM performance across a broad range of general instruction constraints while effectively leveraging its capabilities in RAG scenarios. Further analysis offers practical insights for achieving IF alignment in RAG systems.
APA, Harvard, Vancouver, ISO, and other styles
34

R, Mysiuk, Yuzevych V, and Mysiuk I. "Api test automation of search functionality with artificial intelligence." Artificial Intelligence 27, jai2022.27(1) (2022): 269–74. http://dx.doi.org/10.15407/jai2022.01.269.

Full text
Abstract:
One of the steps in software development is testing the software product. With the development of technology, the testing process has evolved toward automated testing, which reduces the impact of human error and speeds up testing. The main areas of software testing are considered to be web applications, web services, mobile applications, and performance testing. According to the testing pyramid, testing web services requires developing more test cases than testing a web application. Because automation involves writing software code for testing, the use of ready-made tools speeds up the software development process. One of the most important test indicators is coverage of the search functionality. The search functionality of a web application or web service requires a large number of cases, as many conditions must be provided for its operation through the free entry of arbitrary information on the web page. There is an approach to data-driven testing, which involves working with a test data set through files such as CSV, XLS, JSON, XML and others. However, finding input data for testing takes a lot of time when creating test cases and automated test scenarios. It is proposed to use artificial data-set generators based on real values and popular queries on the website to form a test data set. In addition, common test case design techniques can be taken into account. It is proposed to conditionally divide the testing software into several layers: client, tests, data access, checks, and reports. The Java programming language has a number of libraries for working at each of these levels. It is proposed to use Rest Assured as a RESTful client, TestNG as a library for writing tests with assertions, and Allure for generating reports.
It is noted that the proposed approach uses artificial intelligence for automated selection of test cases when creating a test to diversify test approaches and simulate human input and behavior to maximize the use of cases.
APA, Harvard, Vancouver, ISO, and other styles
35

Liang, Guanghui, Jianmin Pang, Zheng Shan, Runqing Yang, and Yihang Chen. "Automatic Benchmark Generation Framework for Malware Detection." Security and Communication Networks 2018 (September 6, 2018): 1–8. http://dx.doi.org/10.1155/2018/4947695.

Full text
Abstract:
To address emerging security threats, various malware detection methods are proposed every year. Therefore, a small but representative set of malware samples is usually needed for a detection model, especially for machine-learning-based malware detection models. However, current manual selection of representative samples from large unknown file collections is labor intensive and not scalable. In this paper, we first propose a framework that can automatically generate a small data set for malware detection. With this framework, we extract behavior features from a large initial data set and then use a hierarchical clustering technique to identify different types of malware. An improved genetic algorithm based on roulette wheel sampling is implemented to generate the final test data set. The final data set is only one-eighteenth the volume of the initial data set, and evaluations show that the data set selected by the proposed framework, while much smaller than the original one, loses almost none of its semantics.
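The roulette wheel sampling step that the improved genetic algorithm builds on can be sketched in a few lines; the cluster names and fitness values below are hypothetical:

```python
import random

def roulette_select(items, fitness, rng):
    """Pick one item with probability proportional to its fitness
    (fitness-proportionate, or 'roulette wheel', selection)."""
    total = sum(fitness.values())
    spin = rng.uniform(0, total)  # where the wheel stops
    acc = 0.0
    for item in items:
        acc += fitness[item]  # each item owns a slice of the wheel
        if spin <= acc:
            return item
    return items[-1]  # guard against floating-point round-off

rng = random.Random(42)
items = ["clusterA", "clusterB", "clusterC"]
fitness = {"clusterA": 1.0, "clusterB": 3.0, "clusterC": 6.0}
picks = [roulette_select(items, fitness, rng) for _ in range(1000)]
```

In the paper's setting, fitness would reflect how representative a candidate sample is of its malware cluster, so higher-fitness samples are drawn into the final data set more often while diversity is preserved.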
APA, Harvard, Vancouver, ISO, and other styles
36

Zhang, Yufei, Bohua Sun, Yaxin Li, et al. "Research on the Physics–Intelligence Hybrid Theory Based Dynamic Scenario Library Generation for Automated Vehicles." Sensors 22, no. 21 (2022): 8391. http://dx.doi.org/10.3390/s22218391.

Full text
Abstract:
The testing and evaluation system is a key technology for the development and safe deployment of maturing automated vehicles. In this research, a physics–intelligence hybrid theory-based dynamic scenario library generation method is proposed to improve system performance, in particular the testing efficiency and accuracy for automated vehicles. A general framework for dynamic scenario library generation is established. Then, the parameterized scenario based on the dimension optimization method is specified to obtain the effective scenario element set. Long-tail functions for performance testing of a specific ODD are constructed as optimization boundaries, and critical scenario searching methods are proposed: node optimization and sample expansion for low-dimensional scenario library generation, and reinforcement learning for high-dimensional generation. The scenario library generation method is evaluated with naturalistic driving data (NDD) from intelligent electric vehicles in the field test. Results show better efficiency and accuracy compared with the ideal testing library and the NDD, respectively, in both low- and high-dimensional scenarios.
APA, Harvard, Vancouver, ISO, and other styles
37

Arendasy, Martin, Markus Sommer, Georg Gittler, and Andreas Hergovich. "Automatic Generation of Quantitative Reasoning Items." Journal of Individual Differences 27, no. 1 (2006): 2–14. http://dx.doi.org/10.1027/1614-0001.27.1.2.

Full text
Abstract:
This paper deals with three studies on the computer-based, automatic generation of algebra word problems. The cognitive-psychology-based generative and quality control frameworks of the item generator are presented. In Study I the quality control framework is empirically tested using a first set of automatically generated items. Study II replicates the findings of Study I using a larger set of automatically generated algebra word problems. Study III deals with the generative framework of the item generator by testing construct validity aspects of the items it produces. Using nine Rasch-homogeneous subscales of the new intelligence structure battery (INSBAT, Hornke et al., 2004), a hierarchical confirmatory factor analysis is reported, which provides first evidence of convergent as well as divergent validity of the automatically generated items. The end of the paper discusses possible advantages of automatic item generation in general, ranging from test security issues and the possibility of more precise psychological assessment to mass testing and the economics of test construction.
APA, Harvard, Vancouver, ISO, and other styles
38

Santise, M., M. Fornari, G. Forlani, and R. Roncella. "Evaluation of DEM generation accuracy from UAS imagery." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-5 (June 6, 2014): 529–36. http://dx.doi.org/10.5194/isprsarchives-xl-5-529-2014.

Full text
Abstract:
The growing use of UAS platforms for aerial photogrammetry comes with a new family of highly automated Computer Vision processing software expressly built to manage the peculiar characteristics of these blocks of images. It is of interest to photogrammetrists and professionals, therefore, to find out whether the image orientation and DSM generation methods implemented in such software are reliable and whether the DSMs and orthophotos are accurate. On a more general basis, it is interesting to figure out whether the standard rules of aerial photogrammetry still apply to the case of drones, achieving the same inner strength and the same accuracies. With such goals in mind, a test area has been set up at the University Campus in Parma. A large number of ground points has been measured on natural as well as signalized points, to provide a comprehensive test field for checking the accuracy performance of different UAS systems. In the test area, points at ground level as well as features on building roofs were measured, in order to obtain distributed support also altimetrically. Control points were set on different types of surfaces (buildings, asphalt, targets, fields of grass and bumps); break lines were also employed. The paper presents the results of a comparison between two different surveys for DEM (Digital Elevation Model) generation, performed at 70 m and 140 m flying height, using a Falcon 8 UAS.
APA, Harvard, Vancouver, ISO, and other styles
39

Jamal, S., V. Le Brun, O. Le Fèvre, et al. "Automated reliability assessment for spectroscopic redshift measurements." Astronomy & Astrophysics 611 (March 2018): A53. http://dx.doi.org/10.1051/0004-6361/201731305.

Full text
Abstract:
Context. Future large-scale surveys, such as the ESA Euclid mission, will produce a large set of galaxy redshifts (≥10⁶) that will require fully automated data-processing pipelines to analyze the data, extract crucial information and ensure that all requirements are met. A fundamental element in these pipelines is to associate to each galaxy redshift measurement a quality, or reliability, estimate. Aim. In this work, we introduce a new approach to automate spectroscopic redshift reliability assessment based on machine learning (ML) and characteristics of the redshift probability density function. Methods. We propose to rephrase the spectroscopic redshift estimation in a Bayesian framework, in order to incorporate all sources of information and uncertainty related to the redshift estimation process and produce a redshift posterior probability density function (PDF). To automate the assessment of a reliability flag, we exploit key features of the redshift posterior PDF and machine learning algorithms. Results. As a working example, public data from the VIMOS VLT Deep Survey are exploited to present and test this new methodology. We first tried to reproduce the existing reliability flags using supervised classification in order to describe different types of redshift PDFs, but due to the subjective definition of these flags (classification accuracy ~58%), we soon opted for a new homogeneous partitioning of the data into distinct clusters via unsupervised classification. After assessing the accuracy of the new clusters via resubstitution and test predictions (classification accuracy ~98%), we projected unlabeled data from preliminary mock simulations for the Euclid space mission into this mapping to predict their redshift reliability labels. Conclusions. Through the development of a methodology in which a system can build its own experience to assess the quality of a parameter, we are able to set a preliminary basis for an automated reliability assessment of spectroscopic redshift measurements. This newly defined method is very promising for next-generation large spectroscopic surveys from the ground and in space, such as Euclid and WFIRST.
APA, Harvard, Vancouver, ISO, and other styles
40

Gargantini, Angelo, and Elvinia Riccobene. "ASM-Based Testing: Coverage Criteria and Automatic Test Sequence." JUCS - Journal of Universal Computer Science 7, no. (11) (2001): 1050–67. https://doi.org/10.3217/jucs-007-11-1050.

Full text
Abstract:
This paper tackles some aspects concerning the exploitation of Abstract State Machines (ASMs) for testing purposes. We define for ASM specifications a set of adequacy criteria measuring the coverage achieved by a test suite and determining whether sufficient testing has been performed. We introduce a method to automatically generate, from ASM specifications, test sequences that accomplish a desired coverage. This method exploits the counterexample generation of the model checker SMV. We use ASMs as test oracles to predict the expected outputs of units under test.
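The counterexample trick can be illustrated with a small sketch: to cover a target state, one asserts "the target is never reached" and takes the model checker's violating trace as the test sequence. Here the model checker is replaced by a plain breadth-first search over an explicit toy state machine (the states and inputs are hypothetical, not from the paper):

```python
from collections import deque

# Toy state machine: (current state, input) -> next state.
TRANSITIONS = {
    ("idle", "coin"): "paid",
    ("paid", "button"): "dispensing",
    ("dispensing", "done"): "idle",
}

def trace_to(target, start="idle"):
    """Return the shortest input sequence driving the machine to `target`,
    i.e. the 'counterexample' to the claim that `target` is unreachable."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, inputs = queue.popleft()
        if state == target:
            return inputs  # this trace is the generated test sequence
        for (src, inp), dst in TRANSITIONS.items():
            if src == state and dst not in seen:
                seen.add(dst)
                queue.append((dst, inputs + [inp]))
    return None  # target unreachable: the coverage goal cannot be met

test_sequence = trace_to("dispensing")
```

A model checker like SMV does the same reachability analysis symbolically, which is what lets the method scale beyond explicitly enumerable state spaces.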
APA, Harvard, Vancouver, ISO, and other styles
41

MALA, D. JEYA, and V. MOHAN. "ON THE USE OF INTELLIGENT AGENTS TO GUIDE TEST SEQUENCE SELECTION AND OPTIMIZATION." International Journal of Computational Intelligence and Applications 08, no. 02 (2009): 155–79. http://dx.doi.org/10.1142/s1469026809002515.

Full text
Abstract:
Many of the automated testing tools concentrate on the automatic generation of test cases but do not worry about their optimization. In our paper, we analyzed the system model, which is represented as a state diagram and selected a very limited set of test sequences to be executed from the extreme large number (usually infinitely many) of potential ones. This paper demonstrates a way to generate optimal test sequences that are guaranteed to take the least possible time to execute and also satisfies both state and branch coverage criteria using Intelligent Agents. In the proposed approach, the Intelligent Search Agent (ISA) takes the decision of optimized test sequences by searching through the SUT, which is represented as a graph in which each node is associated with a heuristic value and each edge is associated with an edge weight. The Intelligent Agent finds the best sequence by following the nodes that satisfy the fitness criteria and generates the optimized test sequences for the SUT. Finally we compared our approach with existing Ant Colony Optimization (ACO)-based test sequence optimization approach and proved that our approach produces better results.
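The agent-guided traversal the abstract describes, over a graph whose nodes carry heuristic values and whose edges carry weights, can be illustrated with a generic best-first (A*-style) search. This is only a sketch of the idea, not the authors' ISA; the example graph, edge weights and heuristic values are invented for illustration:

```python
import heapq

def best_first_test_sequence(graph, heuristic, start, goal):
    """Find a low-cost path (test sequence) from start to goal.

    graph: {node: [(neighbor, edge_weight), ...]}
    heuristic: {node: estimated remaining cost} -- guides the agent.
    Returns (path, cost) of the cheapest sequence found.
    """
    frontier = [(heuristic[start], 0, start, [start])]
    best_cost = {start: 0}
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, cost
        for nxt, w in graph[node]:
            new_cost = cost + w
            if new_cost < best_cost.get(nxt, float("inf")):
                best_cost[nxt] = new_cost
                heapq.heappush(
                    frontier,
                    (new_cost + heuristic[nxt], new_cost, nxt, path + [nxt]))
    return None, float("inf")

# Hypothetical state graph of a small system under test.
graph = {"S0": [("S1", 2), ("S2", 5)],
         "S1": [("S3", 4)],
         "S2": [("S3", 1)],
         "S3": []}
heuristic = {"S0": 4, "S1": 3, "S2": 1, "S3": 0}
path, cost = best_first_test_sequence(graph, heuristic, "S0", "S3")
```

With an admissible heuristic, the first goal node popped from the priority queue carries the minimum total edge weight, which mirrors the paper's goal of selecting the test sequence with the least execution cost.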
APA, Harvard, Vancouver, ISO, and other styles
42

Lan, Tian, and Zhilin Li. "Automated Generation of Schematic Network Maps with Preservation of Main Structures." Abstracts of the ICA 1 (July 15, 2019): 1–2. http://dx.doi.org/10.5194/ica-abs-1-206-2019.

Full text
Abstract:
Abstract. Schematic (network) maps are helpful for people to perform route planning and orientation tasks. The London Underground Map designed by Harry Beck is an excellent example of such maps. Generally, there are three approaches to generate schematic maps: manual, semi-automated (or computer-aided) and fully automated. In the past twenty years, many researchers have been devoted to the development of automated methods for the generation of schematic maps. In these automated methods, various sets of constraints are used. Most of these constraints are for geometric properties of individual features (such as the lengths and orientations of lines); a few constraints are for relations between features (such as the minimum distance threshold between non-incident edges); but none are explicitly for the main structures of whole networks. It is believed that preservation of the main structure is the most important, because the main structure is represented by global features, which are first recognized by a pre-attentive process in human cognition – a global-to-local process (in which local features are then recognized by an attentive process). It is hypothesized here that an automated method with the preservation of main structures of networks should be able to generate schematic maps with improved clarity and aesthetics.

This paper describes the development of an automated method with the preservation of the main structures of line networks. In this method, automated schematization is treated as an optimization problem and is represented as a Mixed-Integer Programming (MIP) model, which consists of an objective function and a set of constraints. The preservation of main structures is modelled into constraints (i.e., making important lines straight and orienting them to specific directions) for the model. The MIP model is imported into a commercial optimization software called “IBM ILOG CPLEX Optimization Studio” (version 12.6.3) for the acquisition of optimal solutions (i.e., coordinates of vertices and edges on schematic maps). The whole process is shown in Figure 1.

Experimental evaluations have been conducted with a set of real-life data as shown in Figures 2a and 2d. Schematic maps were generated by this new method with the preservation of main structures and by an old method without particular consideration for main structures, as shown in Figures 2b, 2c, 2e and 2f. A psychological test with a questionnaire has been conducted, consisting of questions regarding “clarity”, “recognition of major lines”, “visual simplicity” and “satisfaction”. It is found that, in all four aspects, the maps generated by the new method with preservation of main structures have higher scores than those by the old method. These improvements are proved to be significant by paired t-tests.

Therefore, it is concluded that the new automated method with the preservation of main structures can generate schematic maps with significant improvement in clarity and aesthetics. This study is helpful for improving automated methods for the generation of schematic maps and other visual representations.
APA, Harvard, Vancouver, ISO, and other styles
43

Yan, Xiang Wu, He Chuan Zhang, and Li Na Wang. "Research on the Data Management of Electric Vehicle Charging Equipment Testing Platform." Advanced Materials Research 953-954 (June 2014): 1332–37. http://dx.doi.org/10.4028/www.scientific.net/amr.953-954.1332.

Full text
Abstract:
Compared with a common switching power supply, an EV charger has more functional requirements, more severe demands on electrical performance, and a wider range of output voltage and current values, which makes testing more complex. A variety of conditions must be set to test the electrical performance of the charger completely, and a large amount of test data will be generated. This article describes the data management platform of an EV charging equipment test, which uses a real-time database to manage the data. It also introduces in detail the data collection and storage, the data analysis and report generation, and the database design of the test platform. The test results show that this system can manage the test data safely, quickly and reliably, providing a convenient and fast way to achieve rapid automated charger testing.
APA, Harvard, Vancouver, ISO, and other styles
44

SOFOKLEOUS, ANASTASIS A., and ANDREAS S. ANDREOU. "AUTOMATIC PRODUCTION OF TEST DATA WITH A MULTIPLE BATCH-OPTIMISTIC METHOD." International Journal on Artificial Intelligence Tools 18, no. 01 (2009): 61–80. http://dx.doi.org/10.1142/s0218213009000044.

Full text
Abstract:
Recent research on software testing focuses on integrating techniques, such as computational intelligence, with special purpose software tools so as to minimize human effort, reduce costs and automate the testing process. This work proposes a complete software testing framework that utilizes a series of specially designed genetic algorithms to generate automatically test data with reference to the edge/condition testing coverage criterion. The framework utilizes a program analyzer, which examines the program's source code and builds dynamically program models for automatic testing, and a test data generation system that utilizes genetic algorithms to search the input space and determine a near to optimum set of test cases with respect to the testing coverage criterion. The performance of the framework is evaluated on a pool of programs consisting of both standard and random-generated programs. Finally, the proposed test data generation system is compared against other similar approaches and the results are discussed.
APA, Harvard, Vancouver, ISO, and other styles
45

Ivanyuk, Vitaliy, Maryna Myastkovska, and Vadym Ponedilok. "Automated Means of Testing Software Modules for Solving Volterra Integral Equations of the Second Kind." Mathematical and computer modelling. Series: Technical sciences 24 (December 5, 2023): 26–34. http://dx.doi.org/10.32626/2308-5916.2023-24.26-34.

Full text
Abstract:
The article presents a methodology for automated testing of software modules that solve Volterra integral equations of the second kind. The Matlab software environment was selected for the implementation of automated testing, as it offers a wide range of software testing capabilities, in particular: functions for generating data sets for testing; functions for comparing test results; functions for generating test reports, etc. For the development of the automated testing tools, the Unit Testing Framework was selected; it is a component of the MATLAB Test Framework and provides many ready-made methods for checking the correctness of values and reporting statistical errors. A set of test problems has been developed for Volterra integral equations of the second kind, divided into different types, including linear Volterra integral equations of the second kind whose kernels contain power, exponential, hyperbolic, logarithmic, trigonometric and inverse trigonometric functions and their combinations. The developed testing tools were used for automated quality control of software modules built on the left rectangle, right rectangle, trapezoidal, and Simpson methods. The developed set of test tasks covers a wide range of possible operating conditions of the software modules, and the testing results allowed the existing software modules to be improved to meet the set operating conditions. The conducted research should contribute to the development of more reliable and efficient software modules for solving Volterra integral equations of the second kind. The obtained results form the basis for further research in the following direction: the development of testing methods for more complex types of Volterra integral equations of the second kind, including equations with nonlinear and non-stationary kernels.
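One of the quadrature schemes mentioned in the abstract, the trapezoidal rule, can be sketched for a second-kind Volterra equation y(t) = f(t) + ∫₀ᵗ K(t,s) y(s) ds. The test problem below (f ≡ 1, K ≡ 1, with exact solution y = eᵗ) is an assumed example, not one of the article's test tasks:

```python
import math

def volterra2_trapezoidal(f, K, t_end, n):
    """Solve y(t) = f(t) + int_0^t K(t,s) y(s) ds on [0, t_end]
    with the trapezoidal rule on n+1 equally spaced nodes."""
    h = t_end / n
    t = [i * h for i in range(n + 1)]
    y = [f(t[0])]  # at t = 0 the integral term vanishes
    for i in range(1, n + 1):
        # trapezoidal weights: 1/2 at the endpoints, 1 in between
        s = 0.5 * K(t[i], t[0]) * y[0]
        for j in range(1, i):
            s += K(t[i], t[j]) * y[j]
        # the scheme is implicit in y_i:
        # y_i = f_i + h*(s + 0.5*K(t_i,t_i)*y_i)  =>  solve for y_i
        y_i = (f(t[i]) + h * s) / (1.0 - 0.5 * h * K(t[i], t[i]))
        y.append(y_i)
    return t, y

# Test problem: y(t) = 1 + int_0^t y(s) ds, exact solution y = exp(t).
t, y = volterra2_trapezoidal(lambda t: 1.0, lambda t, s: 1.0, 1.0, 100)
```

Comparing the numerical values against a closed-form solution like this is exactly the kind of check such automated test problems can encode.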
APA, Harvard, Vancouver, ISO, and other styles
46

Girgis, Moheb. "Automatic Test Data Generation for Data Flow Testing Using a Genetic Algorithm." JUCS - Journal of Universal Computer Science 11, no. (6) (2005): 898–915. https://doi.org/10.3217/jucs-011-06-0898.

Full text
Abstract:
One of the major difficulties in software testing is the automatic generation of test data that satisfy a given adequacy criterion. This paper presents an automatic test data generation technique that uses a genetic algorithm (GA), which is guided by the data flow dependencies in the program, to search for test data to cover its def-use associations. The GA conducts its search by constructing new test data from previously generated test data that are evaluated as effective test data. The approach can be used in test data generation for programs with/without loops and procedures. The proposed GA accepts as input an instrumented version of the program to be tested, the list of def-use associations to be covered, the number of input variables, and the domain and precision of each input variable. The algorithm produces a set of test cases, the set of def-use associations covered by each test case, and a list of uncovered def-use associations, if any. In the parent selection process, the GA uses one of two methods: the roulette wheel method or a proposed method, called the random selection method, according to the user choice. Finally, the paper presents the results of the experiments that have been carried out to evaluate the effectiveness of the proposed GA compared to the random testing technique, and to compare the proposed random selection method to the roulette wheel method.
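The roulette-wheel parent selection mentioned in the abstract can be sketched generically: each candidate test case is chosen with probability proportional to its fitness. The population and fitness values below are invented for illustration:

```python
import random

def roulette_wheel_select(population, fitnesses, rng):
    """Pick one parent with probability proportional to its fitness."""
    total = sum(fitnesses)
    pick = rng.uniform(0, total)
    acc = 0.0
    for individual, fit in zip(population, fitnesses):
        acc += fit
        if pick <= acc:
            return individual
    return population[-1]  # guard against floating-point round-off

rng = random.Random(0)
pop = ["t1", "t2", "t3"]
fit = [1.0, 3.0, 6.0]
picks = [roulette_wheel_select(pop, fit, rng) for _ in range(10000)]
# "t3" holds 60% of the total fitness, so it should be drawn
# roughly 60% of the time
```

The paper's proposed alternative, random selection, would simply replace the weighted draw with a uniform choice over the population.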
APA, Harvard, Vancouver, ISO, and other styles
47

Jaffari, Aman, Cheol-Jung Yoo, and Jihyun Lee. "Automatic Test Data Generation Using the Activity Diagram and Search-Based Technique." Applied Sciences 10, no. 10 (2020): 3397. http://dx.doi.org/10.3390/app10103397.

Full text
Abstract:
In software testing, generating test data is quite expensive and time-consuming. The manual generation of an appropriately large set of test data to satisfy a specified coverage criterion carries a high cost and requires significant human effort. Currently, test automation has come at the cost of low quality. In this paper, we are motivated to propose a model-based approach utilizing the activity diagram of the system under test as a test base, focusing on its data flow aspect. The technique is incorporated with a search-based optimization heuristic to fully automate the test data generation process and deliver test cases with more improved quality. Our experimental investigation used three open-source software systems to assess and compare the proposed technique with two alternative approaches. The experimental results indicate the improved fault-detection performance of the proposed technique, which was 11.1% better than DFAAD and 38.4% better than EvoSuite, although the techniques did not differ significantly in terms of statement and branch coverage. The proposed technique was able to detect more computation-related faults and tends to have better fault detection capability as the system complexity increases.
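A fitness function commonly used by search-based test-data generators of this kind is the branch distance: how far an input is from satisfying a branch predicate, with 0 meaning the branch is covered. A minimal local-search sketch on an assumed predicate `x == 42` (not a program from the paper's experiments):

```python
import random

def branch_distance(x, target=42):
    """Distance to satisfying the branch condition x == target
    (0 means the branch is covered)."""
    return abs(x - target)

def hill_climb(rng, lo=-1000, hi=1000, max_steps=10000):
    """Search for an input that covers the branch by locally
    minimising the branch distance."""
    x = rng.randint(lo, hi)
    for _ in range(max_steps):
        d = branch_distance(x)
        if d == 0:
            return x  # branch covered: emit x as test data
        # probe the neighbouring inputs and keep any improvement
        best = min((x - 1, x + 1), key=branch_distance)
        if branch_distance(best) < d:
            x = best
        else:
            x = rng.randint(lo, hi)  # restart on a local optimum
    return None

x = hill_climb(random.Random(1))
```

Techniques like the one in the paper embed this kind of fitness inside a population-based search and extract coverage targets (here hard-coded) from the model under test.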
APA, Harvard, Vancouver, ISO, and other styles
48

Hudec, Ján, and Elena Gramatová. "An Efficient Functional Test Generation Method For Processors Using Genetic Algorithms." Journal of Electrical Engineering 66, no. 4 (2015): 185–93. http://dx.doi.org/10.2478/jee-2015-0031.

Full text
Abstract:
Abstract The paper presents a new functional test generation method for processors testing based on genetic algorithms and evolutionary strategies. The tests are generated over an instruction set architecture and a processor description. Such functional tests belong to the software-oriented testing. Quality of the tests is evaluated by code coverage of the processor description using simulation. The presented test generation method uses VHDL models of processors and the professional simulator ModelSim. The rules, parameters and fitness functions were defined for various genetic algorithms used in automatic test generation. Functionality and effectiveness were evaluated using the RISC type processor DP32.
APA, Harvard, Vancouver, ISO, and other styles
49

Ernst, Lisa, Marcin Kopaczka, Mareike Schulz, et al. "Semi-automated generation of pictures for the Mouse Grimace Scale: A multi-laboratory analysis (Part 2)." Laboratory Animals 54, no. 1 (2019): 92–98. http://dx.doi.org/10.1177/0023677219881664.

Full text
Abstract:
The Mouse Grimace Scale (MGS) is an established method for estimating pain in mice during animal studies. Recently, an improved and standardized MGS set-up and an algorithm for automated and blinded output of images for MGS evaluation were introduced. The present study evaluated the application of this standardized set-up and the robustness of the associated algorithm at four facilities in different locations and as part of varied experimental projects. Experiments using the MGS performed at four facilities (F1–F4) were included in the study; 200 pictures per facility (100 pictures each rated as positive and negative by the algorithm) were evaluated by three raters for image quality and reliability of the algorithm. In three of the four facilities, sufficient image quality and consistency were demonstrated. Intraclass correlation coefficient, calculated to demonstrate the correlation among raters at the three facilities (F1–F3), showed excellent correlation. The specificity and sensitivity of the results obtained by different raters and the algorithm were analysed using Fisher's exact test (p < 0.05). The analysis indicated a sensitivity of 77% and a specificity of 64%. The results of our study showed that the algorithm demonstrated robust performance at facilities in different locations in accordance with the strict application of our MGS setup.
APA, Harvard, Vancouver, ISO, and other styles
50

Zhou, Wei Feng, Xin Min Li, Sheng Qing Lv, and Zhuo Zhang. "Automatic Test Case Generation for Context Based Multiplicity Checking in UML." Applied Mechanics and Materials 433-435 (October 2013): 1643–48. http://dx.doi.org/10.4028/www.scientific.net/amm.433-435.1643.

Full text
Abstract:
UML is considered the standard for object-oriented modeling and design. Automatic test case generation is an important method for the verification and validation of UML specifications, reducing development cost and helping to increase reliability. In this paper, we present a method to model specific constraints using context-based multiplicity, which is defined on the instances of the class associated with the context, instead of using constraints defined informally or in OCL. Then, an algorithm is proposed to generate a set of test cases to verify the context-based multiplicity in an implementation. The example and implementation for a real system are also presented.
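The multiplicity check described can be illustrated with a small, assumed data model; the entity names and bounds below are invented, not the paper's notation:

```python
def check_multiplicity(links, lower, upper):
    """Verify that every context instance is linked to between
    lower and upper associated instances (inclusive).

    links: {context_instance: [associated_instance, ...]}
    Returns the list of violating context instances."""
    return [ctx for ctx, targets in links.items()
            if not (lower <= len(targets) <= upper)]

# Hypothetical generated test case: each Order must reference 1..3 Items.
links = {"order1": ["itemA"],
         "order2": ["itemA", "itemB", "itemC", "itemD"],  # too many
         "order3": []}                                    # too few
violations = check_multiplicity(links, 1, 3)
```

A generated test case in this spirit pairs a concrete instance population with the expected pass/fail verdict for each multiplicity constraint.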
APA, Harvard, Vancouver, ISO, and other styles