Journal articles on the topic "Reinforcement Learning in Testing"

Below are the top 50 journal articles for research on the topic "Reinforcement Learning in Testing".

Next to each work in the bibliography there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication as a .pdf file and read its abstract online, when these are available in the metadata.

Browse journal articles from many disciplines and compile your bibliography correctly.

1

Desharnais, Josée, François Laviolette, and Sami Zhioua. "Testing probabilistic equivalence through Reinforcement Learning." Information and Computation 227 (June 2013): 21–57. http://dx.doi.org/10.1016/j.ic.2013.02.002.

Full text of the source available; citation styles: APA, Harvard, Vancouver, ISO, etc.
2

Varshosaz, Mahsa, Mohsen Ghaffari, Einar Broch Johnsen, and Andrzej Wąsowski. "Formal Specification and Testing for Reinforcement Learning." Proceedings of the ACM on Programming Languages 7, ICFP (2023): 125–58. http://dx.doi.org/10.1145/3607835.

Abstract:
The development process for reinforcement learning applications is still exploratory rather than systematic. This exploratory nature reduces reuse of specifications between applications and increases the chances of introducing programming errors. This paper takes a step towards systematizing the development of reinforcement learning applications. We introduce a formal specification of reinforcement learning problems and algorithms, with a particular focus on temporal difference methods and their definitions in backup diagrams. We further develop a test harness for a large class of reinforcemen…
3

Ghanem, Mohamed C., and Thomas M. Chen. "Reinforcement Learning for Efficient Network Penetration Testing." Information 11, no. 1 (2019): 6. http://dx.doi.org/10.3390/info11010006.

Abstract:
Penetration testing (also known as pentesting or PT) is a common practice for actively assessing the defenses of a computer network by planning and executing all possible attacks to discover and exploit existing vulnerabilities. Current penetration testing methods are increasingly becoming non-standard, composite and resource-consuming despite the use of evolving tools. In this paper, we propose and evaluate an AI-based pentesting system which makes use of machine learning techniques, namely reinforcement learning (RL), to learn and reproduce average and complex pentesting activities. The propo…
4

Deviatko, Anna. "Evolution of Automated Testing Methods Using Machine Learning." American Journal of Engineering and Technology 07, no. 05 (2025): 88–100. https://doi.org/10.37547/tajet/volume07issue05-07.

Abstract:
Program testing is crucial for guaranteeing program dependability, but it has historically included a lot of manual labor, which restricts coverage and raises expenses. By creating and selecting test cases, anticipating defect-prone locations, and evaluating test results, machine learning (ML)-driven testing approaches automate and improve traditional software testing. This study examines the development of these techniques. Significant enhancements are provided by ML-driven techniques, such as early fault detection, shorter testing times, and increased test coverage. The paper offers a thorou…
5

Abo-eleneen, Amr, Ahammed Palliyali, and Cagatay Catal. "The role of Reinforcement Learning in software testing." Information and Software Technology 164 (December 2023): 107325. http://dx.doi.org/10.1016/j.infsof.2023.107325.

6

Sun, Chang-Ai, Ming-Jun Xiao, He-Peng Dai, and Huai Liu. "A Reinforcement Learning Based Approach to Partition Testing." Journal of Computer Science and Technology 40, no. 1 (2025): 99–118. https://doi.org/10.1007/s11390-024-2900-7.

7

Tao, Jiaye, Chao Hong, Yun Fu, et al. "Coverage-guided fuzz testing method based on reinforcement learning seed scheduling." Journal of Physics: Conference Series 2816, no. 1 (2024): 012107. http://dx.doi.org/10.1088/1742-6596/2816/1/012107.

Abstract:
The existing fuzz testing methods for industrial control protocols suffer from insufficient coverage, false positives, and an inability to handle protocol semantics. This paper proposes a reinforcement learning-based seed scheduling coverage-guided fuzz testing method. Building upon coverage-guided fuzz testing techniques, we integrate reinforcement learning with seed scheduling to optimize the seed selection strategy, thereby enhancing the efficiency of protocol vulnerability detection. Experimental results demonstrate the feasibility and effectiveness of this approach. Through reinf…
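The seed-scheduling idea in this abstract can be illustrated with a minimal sketch (an illustration under assumptions, not the paper's method): treat seed selection as a multi-armed bandit, where a seed's reward is the new coverage its mutations uncover.

```python
import random

class BanditSeedScheduler:
    """Epsilon-greedy seed scheduler (illustrative): favors seeds whose
    mutations have historically produced new coverage. The reward signal
    (new edges per fuzzing round) is a hypothetical stand-in."""

    def __init__(self, seeds, epsilon=0.1):
        self.epsilon = epsilon
        self.value = {s: 0.0 for s in seeds}  # running mean reward per seed
        self.count = {s: 0 for s in seeds}

    def select(self):
        if random.random() < self.epsilon:           # explore a random seed
            return random.choice(list(self.value))
        return max(self.value, key=self.value.get)   # exploit the best seed

    def update(self, seed, new_edges):
        # Incremental mean update: reward is the count of new coverage
        # edges observed after fuzzing mutations of this seed.
        self.count[seed] += 1
        self.value[seed] += (new_edges - self.value[seed]) / self.count[seed]

sched = BanditSeedScheduler(["seed_a", "seed_b"], epsilon=0.0)
sched.update("seed_a", 5)  # mutations of seed_a found 5 new edges
sched.update("seed_b", 1)
print(sched.select())      # with epsilon=0, exploits "seed_a"
```

With `epsilon=0` the scheduler is purely greedy; a real fuzzer would keep some exploration so rarely chosen seeds are still revisited.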
8

Wang, Cong, Qifeng Zhang, Qiyan Tian, et al. "Learning Mobile Manipulation through Deep Reinforcement Learning." Sensors 20, no. 3 (2020): 939. http://dx.doi.org/10.3390/s20030939.

Abstract:
Mobile manipulation has a broad range of applications in robotics. However, it is usually more challenging than fixed-base manipulation due to the complex coordination of a mobile base and a manipulator. Although recent works have demonstrated that deep reinforcement learning is a powerful technique for fixed-base manipulation tasks, most of them are not applicable to mobile manipulation. This paper investigates how to leverage deep reinforcement learning to tackle whole-body mobile manipulation tasks in unstructured environments using only on-board sensors. A novel mobile manipulation system…
9

Levytskyi, Volodymyr, and Oleksii Lopuha. "Test data generation using deep reinforcement learning." Management of Development of Complex Systems, no. 59 (September 27, 2024): 155–64. http://dx.doi.org/10.32347/2412-9933.2024.59.155-164.

Abstract:
The process of creating test data for software is one of the most complex and labor-intensive stages in the software development cycle. It requires significant resources and effort, especially when it comes to achieving high test coverage. Search-Based Software Testing (SBST) is an approach that automates this process by using metaheuristic algorithms to generate test data. Metaheuristic algorithms, such as genetic algorithms or simulated annealing, operate on the principle of systematically exploring possible options and selecting the most effective solutions based on feedback from a fitness…
10

Pradhan, Shreeja. "Evaluating Deep Reinforcement Learning Algorithms." International Journal of Scientific Research in Engineering and Management 08, no. 008 (2024): 1–6. http://dx.doi.org/10.55041/ijsrem37434.

Abstract:
Recent advancements in machine learning, particularly in reinforcement learning (RL), have enabled solutions to previously intractable problems. This research paper delves into the mathematical underpinnings of several prominent deep RL algorithms, including REINFORCE, A2C, DDPG, and SAC. By implementing and testing these algorithms in the MuJoCo simulator, I evaluate their performance in training agents to achieve complex tasks, such as walking in a 3D environment. Our findings demonstrate the efficacy of these algorithms in real-time learning and adaptation, underscored by the sup…
11

Borgarelli, Andrea, Constantin Enea, Rupak Majumdar, and Srinidhi Nagendra. "Reward Augmentation in Reinforcement Learning for Testing Distributed Systems." Proceedings of the ACM on Programming Languages 8, OOPSLA2 (2024): 1928–54. http://dx.doi.org/10.1145/3689779.

Abstract:
Bugs in popular distributed protocol implementations have been the source of many downtimes in popular internet services. We describe a randomized testing approach for distributed protocol implementations based on reinforcement learning. Since the natural reward structure is very sparse, the key to successful exploration in reinforcement learning is reward augmentation. We show two different techniques that build on one another. First, we provide a decaying exploration bonus based on the discovery of new states---the reward decays as the same state is visited multiple times. The exploration bo…
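The decaying exploration bonus described in this abstract can be sketched as a count-based novelty reward (a sketch assuming a 1/sqrt(n) decay rule; the paper's exact formula may differ):

```python
from collections import defaultdict

class DecayingExplorationBonus:
    """Count-based exploration bonus: the reward for reaching a state
    decays as that state is visited repeatedly (illustrative decay rule,
    not the paper's exact formulation)."""

    def __init__(self, scale=1.0):
        self.scale = scale
        self.visits = defaultdict(int)

    def bonus(self, state):
        self.visits[state] += 1
        # 1/sqrt(n) decay: the first visit earns the full bonus,
        # repeat visits earn progressively less.
        return self.scale / self.visits[state] ** 0.5

b = DecayingExplorationBonus()
print(b.bonus("s0"))  # 1.0 on the first visit
print(b.bonus("s0"))  # ~0.707 on the second visit
print(b.bonus("s1"))  # a newly discovered state again earns the full 1.0
```

Adding this bonus to the environment's sparse reward gives the agent a dense signal that steadily shifts effort toward states it has not yet explored.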
12

Yi, Junkai, and Xiaoyan Liu. "Deep Reinforcement Learning for Intelligent Penetration Testing Path Design." Applied Sciences 13, no. 16 (2023): 9467. http://dx.doi.org/10.3390/app13169467.

Abstract:
Penetration testing is an important method to evaluate the security degree of a network system. The importance of penetration testing attack path planning lies in its ability to simulate attacker behavior, identify vulnerabilities, reduce potential losses, and continuously improve security strategies. By systematically simulating various attack scenarios, it enables proactive risk assessment and the development of robust security measures. To address the problems of inaccurate path prediction and difficult convergence in the training process of attack path planning, an algorithm which combines…
13

Ahmad, Tanwir, Adnan Ashraf, Dragos Truscan, Andi Domi, and Ivan Porres. "Using Deep Reinforcement Learning for Exploratory Performance Testing of Software Systems With Multi-Dimensional Input Spaces." IEEE Access 8 (October 26, 2020): 195000–195020. https://doi.org/10.1109/ACCESS.2020.3033888.

Abstract:
During exploratory performance testing, software testers evaluate the performance of a software system with different input combinations in order to identify combinations that cause performance problems in the system under test. Performance problems such as low throughput, high response times, hangs, or crashes in software applications have an adverse effect on the customer’s satisfaction. Since many of today’s large-scale, complex software systems (e.g., eCommerce applications, databases, web servers) exhibit very large multi-dimensional input spaces with many input parameters and…
14

Cui, Jing, Yufei Han, Yuzhe Ma, Jianbin Jiao, and Junge Zhang. "BadRL: Sparse Targeted Backdoor Attack against Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 10 (2024): 11687–94. http://dx.doi.org/10.1609/aaai.v38i10.29052.

Abstract:
Backdoor attacks in reinforcement learning (RL) have previously employed intense attack strategies to ensure attack success. However, these methods suffer from high attack costs and increased detectability. In this work, we propose a novel approach, BadRL, which focuses on conducting highly sparse backdoor poisoning efforts during training and testing while maintaining successful attacks. Our algorithm, BadRL, strategically chooses state observations with high attack values to inject triggers during training and testing, thereby reducing the chances of detection. In contrast to the previous me…
15

Song, Mingyu, Persis A. Baah, Ming Bo Cai, and Yael Niv. "Humans combine value learning and hypothesis testing strategically in multi-dimensional probabilistic reward learning." PLOS Computational Biology 18, no. 11 (2022): e1010699. http://dx.doi.org/10.1371/journal.pcbi.1010699.

Abstract:
Realistic and complex decision tasks often allow for many possible solutions. How do we find the correct one? Introspection suggests a process of trying out solutions one after the other until success. However, such methodical serial testing may be too slow, especially in environments with noisy feedback. Alternatively, the underlying learning process may involve implicit reinforcement learning that learns about many possibilities in parallel. Here we designed a multi-dimensional probabilistic active-learning task tailored to study how people learn to solve such complex problems. Participants…
16

Kumar Karne, Vinod, Noone Srinivas, Nagaraj Mandaloju, and Parameshwar Reddy Kothamali. "Reinforcement Learning for Optimizing Test Case Execution in Automated Testing." Innovative Research Thoughts 6, no. 3 (2020): 13–27. http://dx.doi.org/10.36676/irt.v6.i3.1494.

Abstract:
This study explores the use of reinforcement learning (RL) techniques to optimize test case execution in automated testing frameworks, addressing the inefficiencies of traditional testing methods. The primary research problem involves enhancing testing efficiency, improving coverage, and reducing redundancy through intelligent RL-based optimization. The study employed a design that integrated RL algorithms into automated testing frameworks, involving the development and training of an RL model, followed by empirical evaluation. Major findings indicate that RL-based optimization significantly re…
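As a rough illustration of the idea (not the study's framework; the test names and failure rates below are invented), a bandit-style learner can order test cases by their learned likelihood of exposing failures:

```python
import random

def prioritize_tests(failure_rate, episodes=300, alpha=0.5, epsilon=0.2, seed=0):
    """Learn a value per test case and return tests ordered by it.
    `failure_rate` maps test name -> assumed probability of failure;
    for determinism, this sketch feeds the expected reward back directly
    instead of sampling pass/fail outcomes."""
    rng = random.Random(seed)
    q = {t: 0.0 for t in failure_rate}
    for _ in range(episodes):
        if rng.random() < epsilon:
            t = rng.choice(list(q))      # explore a random test
        else:
            t = max(q, key=q.get)        # exploit the most valuable test
        reward = failure_rate[t]         # expected reward: 1 = test fails
        q[t] += alpha * (reward - q[t])  # bandit-style value update
    return sorted(q, key=q.get, reverse=True)

order = prioritize_tests({"t_flaky": 0.5, "t_stable": 0.05, "t_buggy": 0.9})
print(order)  # failure-prone tests are scheduled first
```

Running failure-prone tests first shortens the feedback loop in a CI pipeline, which is the core benefit such RL-based optimization targets.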
17

Shilpasree, S. "Road Structure Quality Classification using Reinforcement Learning." International Journal for Research in Applied Science and Engineering Technology 10, no. 12 (2022): 8–15. http://dx.doi.org/10.22214/ijraset.2022.47751.

Abstract:
Owing to the exponentially growing amount of transportation on the road, the number of mishaps on a daily basis is also growing at a shocking rate. There is one death every 4 minutes due to road mishaps in India. One of the foremost causes of these road mishaps is the poor road environment. In the present work, reinforcement learning is used to classify road quality. Reinforcement learning is a part of machine learning in which suitable actions are taken to maximize the reward. Reinforcement learning is different from supervised learning: in reinforcement learning, there is no trained…
18

Liu, Hongri, Chuhan Liu, Xiansheng Wu, Yun Qu, and Hongmei Liu. "An Automated Penetration Testing Framework Based on Hierarchical Reinforcement Learning." Electronics 13, no. 21 (2024): 4311. http://dx.doi.org/10.3390/electronics13214311.

Abstract:
Given the large action space and state space involved in penetration testing, reinforcement learning is widely applied to enhance testing efficiency. This paper proposes an automatic penetration testing scheme based on hierarchical reinforcement learning to reduce both action space and state space. The scheme includes a network intelligence responsible for specifying the penetration host and a host intelligence designated to perform penetration testing on the selected host. Specifically, within the network intelligence, an action-masking mechanism is adopted to shield unenabled actions, thereb…
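The action-masking mechanism mentioned in this abstract can be sketched in a few lines (an illustrative helper; the action names and scores are invented, and the paper's agent is more elaborate): actions that are not enabled in the current state are excluded before the agent chooses.

```python
def masked_choice(action_scores, mask):
    """Pick the highest-scoring action among those whose mask entry is True.
    A real agent would typically sample from a masked softmax; greedy
    selection keeps this sketch deterministic."""
    allowed = [a for a in range(len(action_scores)) if mask[a]]
    if not allowed:
        raise ValueError("no enabled actions in this state")
    return max(allowed, key=lambda a: action_scores[a])

scores = [0.9, 0.4, 0.7]            # e.g. exploit_ssh, scan, exploit_http
mask = [False, True, True]          # exploit_ssh is not enabled on this host
print(masked_choice(scores, mask))  # prints 2: the best enabled action
```

Masking shrinks the effective action space per state, which is exactly why it speeds up learning in large penetration-testing environments.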
19

Konda, Ravikanth. "Machine Learning-Based Test Case Generation: Comparing Reinforcement Learning vs Genetic Algorithms." International Journal of Multidisciplinary Research and Growth Evaluation 3, no. 6 (2022): 738–42. https://doi.org/10.54660/.ijmrge.2022.3.6.738-742.

Abstract:
Software testing is a pillar of high-quality software development, which guarantees that software systems satisfy given requirements and behave correctly under various conditions. With the growing complexity and dynamism of contemporary software, conventional testing techniques, based on manual test case construction or rule-based heuristics, are not able to scale properly. This has prompted the development of automated test case generation techniques, with the goal of minimizing human effort and maximizing the reliability, efficiency, and coverage of the testing process. Among the sophisticat…
20

Usman, Asmau, Moussa Mahamat Boukar, Muhammed Aliyu Suleiman, and Ibrahim Anka Salihu. "Test Case Generation Approach for Android Applications using Reinforcement Learning." Engineering, Technology & Applied Science Research 14, no. 4 (2024): 15127–32. http://dx.doi.org/10.48084/etasr.7422.

Abstract:
Mobile applications can recognize their computational setting and adjust and respond to actions in the context. This is known as context-aware computing. Testing context-aware applications is difficult due to their dynamic nature, as the context is constantly changing. Most mobile testing tools and approaches focus only on GUI events, adding to the deficient coverage of applications throughout testing. Generating test cases for various context events in Android applications can be achieved using reinforcement learning algorithms. This study proposes an approach for generating Android applicati…
21

Jadhav, Rutuja. "Tracking Locomotion using Reinforcement Learning." International Journal for Research in Applied Science and Engineering Technology 10, no. 7 (2022): 1777–83. http://dx.doi.org/10.22214/ijraset.2022.45509.

Abstract:
This article presents the concept of reinforcement learning, which prepares a static direct approach for consistent control problems, and adjusts cutting-edge techniques for testing effectiveness in benchmark MuJoCo locomotion tasks. This model was designed and developed to use the MuJoCo engine to track the movement of robotic structures and eliminate problems with assessment calculations using perceptrons and random search algorithms. Here, the machine learning model is trained to make a series of decisions. The humanoid model is considered to be one of the most difficult and ongo…
22

Meng, Terry Lingze, and Matloob Khushi. "Reinforcement Learning in Financial Markets." Data 4, no. 3 (2019): 110. http://dx.doi.org/10.3390/data4030110.

Abstract:
Recently there has been an exponential increase in the use of artificial intelligence for trading in financial markets such as stock and forex. Reinforcement learning has become of particular interest to financial traders ever since the program AlphaGo defeated the strongest human contemporary Go board game player Lee Sedol in 2016. We systematically reviewed all recent stock/forex prediction or trading articles that used reinforcement learning as their primary machine learning method. All reviewed articles had some unrealistic assumptions such as no transaction costs, no liquidity issues and…
23

AlMajali, Anas, Loiy Al-Abed, Khalil M. Ahmad Yousef, Bassam J. Mohd, Zaid Samamah, and Anas Abu Shhadeh. "Automated Vulnerability Exploitation Using Deep Reinforcement Learning." Applied Sciences 14, no. 20 (2024): 9331. http://dx.doi.org/10.3390/app14209331.

Abstract:
The main objective of this paper is to develop a reinforcement agent capable of effectively exploiting a specific vulnerability. Automating pentesting can reduce the cost and time of the operation. While there are existing tools like Metasploit Pro that offer automated exploitation capabilities, they often require significant execution times and resources due to their reliance on exhaustive payload testing. In this paper, we have created a deep reinforcement agent specifically configured to exploit a targeted vulnerability. Through a training phase, the agent learns and stores payloads along w…
24

Lee, Ritchie, Ole J. Mengshoel, Anshu Saksena, et al. "Adaptive Stress Testing: Finding Likely Failure Events with Reinforcement Learning." Journal of Artificial Intelligence Research 69 (December 6, 2020): 1165–201. http://dx.doi.org/10.1613/jair.1.12190.

Abstract:
Finding the most likely path to a set of failure states is important to the analysis of safety-critical systems that operate over a sequence of time steps, such as aircraft collision avoidance systems and autonomous cars. In many applications such as autonomous driving, failures cannot be completely eliminated due to the complex stochastic environment in which the system operates. As a result, safety validation is not only concerned about whether a failure can occur, but also discovering which failures are most likely to occur. This article presents adaptive stress testing…
25

Hayes, William M., and Douglas H. Wedell. "Testing models of context-dependent outcome encoding in reinforcement learning." Cognition 230 (January 2023): 105280. http://dx.doi.org/10.1016/j.cognition.2022.105280.

26

Yang, Yang, Chaoyue Pan, Zheng Li, and Ruilian Zhao. "Adaptive Reward Computation in Reinforcement Learning-Based Continuous Integration Testing." IEEE Access 9 (2021): 36674–88. http://dx.doi.org/10.1109/access.2021.3063232.

27

Brandsen, Sarah, Kevin D. Stubbs, and Henry D. Pfister. "Reinforcement Learning with Neural Networks for Quantum Multiple Hypothesis Testing." Quantum 6 (January 26, 2022): 633. http://dx.doi.org/10.22331/q-2022-01-26-633.

Abstract:
Reinforcement learning with neural networks (RLNN) has recently demonstrated great promise for many problems, including some problems in quantum information theory. In this work, we apply RLNN to quantum hypothesis testing and determine the optimal measurement strategy for distinguishing between multiple quantum states {ρj} while minimizing the error probability. In the case where the candidate states correspond to a quantum system with many qubit subsystems, implementing the optimal measurement on the entire system is experimentally infeasible. We use RLNN to find locally-adaptive…
28

Ren, Zhilei, Yitao Li, Xiaochen Li, Guanxiao Qi, Jifeng Xuan, and He Jiang. "Reinforcement Learning-Based Fuzz Testing for the Gazebo Robotic Simulator." Proceedings of the ACM on Software Engineering 2, ISSTA (2025): 1467–88. https://doi.org/10.1145/3728942.

Abstract:
Gazebo, being the most widely utilized simulator in robotics, plays a pivotal role in developing and testing robotic systems. Given its impact on the safety and reliability of robotic operations, early bug detection is critical. However, due to the challenges of strict input structures and vast state space, it is not effective to directly apply existing fuzz testing approaches to Gazebo. In this paper, we present GzFuzz, the first fuzz testing framework designed for Gazebo. GzFuzz addresses these challenges through a syntax-aware feasible command generation mechanism to handle strict input require…
29

Wen, Linlin, Chengying Mao, Dave Towey, and Jifu Chen. "An adaptive pairwise testing algorithm based on deep reinforcement learning." Science of Computer Programming 247 (January 2026): 103353. https://doi.org/10.1016/j.scico.2025.103353.

30

Laukaitis, Algirdas, Andrej Šareiko, and Dalius Mažeika. "Facilitating Robot Learning in Virtual Environments: A Deep Reinforcement Learning Framework." Applied Sciences 15, no. 9 (2025): 5016. https://doi.org/10.3390/app15095016.

Abstract:
Deep reinforcement learning algorithms have demonstrated significant potential in showcasing robotic capabilities within virtual environments. However, applying DRL for practical robot development in realistic simulators like Webots remains challenging due to limitations in existing frameworks, such as complex dependencies and reliance on unrealistic control paradigms like a ‘supervisor’. This study introduces an open-source framework and a novel pattern-based method designed to facilitate the exploration of robot learning capabilities through reinforcement learning algorithms in specialized v…
31

Li, Dezhi, Yunjun Lu, Jianping Wu, Wenlu Zhou, and Guangjun Zeng. "Causal Reinforcement Learning for Knowledge Graph Reasoning." Applied Sciences 14, no. 6 (2024): 2498. http://dx.doi.org/10.3390/app14062498.

Abstract:
Knowledge graph reasoning can deduce new facts and relationships, which is an important research direction of knowledge graphs. Most of the existing methods are based on end-to-end reasoning which cannot effectively use the knowledge graph, so consequently the performance of the method still needs to be improved. Therefore, we combine causal inference with reinforcement learning and propose a new framework for knowledge graph reasoning. By combining the counterfactual method in causal inference, our method can obtain more information as prior knowledge and integrate it into the control strateg…
32

Ismael, Aya Abdullah, and Türeli Didem Kıvanç. "Study of a Smarter AQM Algorithm to Reduce Network Delay." AINTELIA SCIENCE NOTES 1, no. 1 (2022): 149–54. https://doi.org/10.5281/zenodo.8071388.

Abstract:
The focus of our study was the behavior of the smart RED algorithm with parameter adaptation based on a neural network structure. Previously Kim et al. [2] and Basheer et al. [1] introduced the use of deep reinforcement learning for active queue management (AQM). This work studies the performance of deep reinforcement learning using a simple topology: transmitters and receivers communicate, and all information is routed over a single bottleneck link. This work studies the effect of changing the bottleneck link bandwidth and bottleneck latency on this algorithm. It is observed that inc…
33

Thapaliya, Suman, and Saroj Dhital. "AI-Augmented Penetration Testing: A New Frontier in Ethical Hacking." International Journal of Atharva 3, no. 2 (2025): 28–37. https://doi.org/10.3126/ija.v3i2.80099.

Abstract:
The accelerating sophistication of cyber threats has outpaced the capabilities of traditional, manual penetration testing approaches. This paper proposes an AI-augmented penetration testing framework that leverages machine learning and reinforcement learning to enhance the efficiency, scalability, and adaptability of ethical hacking efforts. We detail the integration of AI in key phases of the penetration testing lifecycle, including automated reconnaissance via NLP-based parsing of open-source intelligence, vulnerability prediction through supervised learning models trained on historical expl…
34

Kotha, Satyanandam. "Reinforcement Learning for Adaptive Traffic Rule Compliance in Autonomous Driving Systems: A Multi-agent Framework for Dynamic Regulatory Adaptation." International Journal of Education, Learning and Development 13, no. 3 (2025): 40–52. https://doi.org/10.37745/ijeld.2013/vol13n34052.

Abstract:
This article investigates the application of reinforcement learning for adaptive traffic rule compliance in autonomous driving systems. Current rule-based approaches lack flexibility in handling unpredictable driving scenarios and varying regulatory requirements across jurisdictions. This article proposes a novel multi-agent reinforcement learning framework that enables self-driving vehicles to dynamically adjust their behavior to different traffic rules while optimizing for safety, efficiency, and legal compliance. It integrates deep reinforcement learning techniques, specifically Proximal Po…
35

Waqar, Muhammad, Imran, Muhammad Atif Zaman, Muhammad Muzammal, and Jungsuk Kim. "Test Suite Prioritization Based on Optimization Approach Using Reinforcement Learning." Applied Sciences 12, no. 13 (2022): 6772. http://dx.doi.org/10.3390/app12136772.

Abstract:
Regression testing ensures that modified software code changes have not adversely affected existing code modules. The test suite size increases with modification to the software based on the end-user requirements. Regression testing executes the complete test suite after updates in the software. Re-execution of new test cases along with existing test cases is costly. The scientific community has proposed test suite prioritization techniques for selecting and minimizing the test suite to minimize the cost of regression testing. The test suite prioritization goal is to maximize fault detection w…
36

Shankar, R., and D. Sridhar. "A Comprehensive Review on Test Case Prioritization in Continuous Integration Platforms." International Journal of Innovative Science and Research Technology 8, no. 4 (2023): 3223–29. https://doi.org/10.5281/zenodo.8282823.

Abstract:
Continuous Integration (CI) platforms enable recurrent integration of software variations, making software development rapid and cost-effective. In these platforms, integration and regression testing play an essential role in Test Case Prioritization (TCP) to detect the test case order, which enhances specific objectives like early failure discovery. Currently, Artificial Intelligence (AI) models have emerged widely to solve complex software testing problems like integration and regression testing that create a huge quantity of data from iterative code commits and test executions. In CI…
37

Nguyen-Tang, Thanh, Sunil Gupta, and Svetha Venkatesh. "Distributional Reinforcement Learning via Moment Matching." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 10 (2021): 9144–52. http://dx.doi.org/10.1609/aaai.v35i10.17104.

Abstract:
We consider the problem of learning a set of probability distributions from the empirical Bellman dynamics in distributional reinforcement learning (RL), a class of state-of-the-art methods that estimate the distribution, as opposed to only the expectation, of the total return. We formulate a method that learns a finite set of statistics from each return distribution via neural networks, as in the distributional RL literature. Existing distributional RL methods however constrain the learned statistics to predefined functional forms of the return distribution which is both restrictive in repres…
38

Koumoulos, Elias, George Konstantopoulos, and Costas Charitidis. "Applying Machine Learning to Nanoindentation Data of (Nano-) Enhanced Composites." Fibers 8, no. 1 (2019): 3. http://dx.doi.org/10.3390/fib8010003.

Abstract:
Carbon fiber reinforced polymers (CFRPs) are continuously gaining attention in aerospace and space applications, and especially their multi-scale reinforcement with nanoadditives. Carbon nanotubes (CNTs), graphene, carbon nanofibers (CNFs), and their functionalized forms are often incorporated into interactive systems to engage specific changes in the environment of application to a smart response. Structural integrity of these nanoscale reinforced composites is assessed with advanced characterization techniques, with the most prominent being nanoindentation testing. Nanoindentation is a well-…
39

Patil, Manaswi, Devaki Thakare, Arzoo Bhure, Shweta Kaundanyapure, and Dr Ankit Mune. "An AI-Based Approach for Automating Penetration Testing." International Journal for Research in Applied Science and Engineering Technology 12, no. 4 (2024): 5019–28. http://dx.doi.org/10.22214/ijraset.2024.61113.

Full text of the source
Abstract:
Abstract: Cyber penetration testing (pen-testing) is important in revealing possible weaknesses and breaches in network systems, which can ultimately help in curbing cybercrimes. Nevertheless, even with the current drive to mechanize pen-testing, there are still a number of challenges, which include incomplete frameworks and low precision in automation methods. This paper aims at addressing them by suggesting a hybrid AI-based automation framework specifically for pen-testing through integration of smart algorithms and automated tools. As indicated by recent studies, it goes further into proposin
APA, Harvard, Vancouver, ISO, and other styles
40

Moreno, Ariadna Claudia, Aldo Hernandez-Suarez, Gabriel Sanchez-Perez, et al. "Analysis of Autonomous Penetration Testing Through Reinforcement Learning and Recommender Systems." Sensors 25, no. 1 (2025): 211. https://doi.org/10.3390/s25010211.

Full text of the source
Abstract:
Conducting penetration testing (pentesting) in cybersecurity is a crucial turning point for identifying vulnerabilities within the framework of Information Technology (IT), where real malicious offensive behavior is simulated to identify potential weaknesses and strengthen preventive controls. Given the complexity of the tests, time constraints, and the specialized level of expertise required for pentesting, analysis and exploitation tools are commonly used. Although useful, these tools often introduce uncertainty in findings, resulting in high rates of false positives. To enhance the effectiv
APA, Harvard, Vancouver, ISO, and other styles
41

Bertoluzzo, Francesco, and Marco Corazza. "Testing Different Reinforcement Learning Configurations for Financial Trading: Introduction and Applications." Procedia Economics and Finance 3 (2012): 68–77. http://dx.doi.org/10.1016/s2212-5671(12)00122-0.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
42

Guo, Xiaotong, Jing Ren, Jiangong Zheng, et al. "Automated Penetration Testing with Fine-Grained Control through Deep Reinforcement Learning." Journal of Communications and Information Networks 8, no. 3 (2023): 212–20. http://dx.doi.org/10.23919/jcin.2023.10272349.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
43

Park, Se-chan, Deock-Yeop Kim, and Woo-Jin Lee. "UnityPGTA: A Unity Platformer Game Testing Automation Tool Using Reinforcement Learning." Journal of KIISE 51, no. 2 (2024): 149–56. http://dx.doi.org/10.5626/jok.2024.51.2.149.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
44

Srinivasa Rao Kongarana. "Nonlinear Reinforcement Learning-Based Dynamic Test Case Prioritization with Anomaly Detection for Continuous Integration." Communications on Applied Nonlinear Analysis 32, no. 1s (2024): 122–42. http://dx.doi.org/10.52783/cana.v32.2114.

Full text of the source
Abstract:
This research article introduces DynamicR-TCP, a nonlinear approach to dynamic test case prioritization (TCP) leveraging reinforcement learning and anomaly detection. Designed to enhance the efficiency of software testing in continuous integration (CI) environments, the proposed model dynamically adapts to changing conditions by learning complex patterns from historical test data. A reinforcement learning agent, employing policy and value networks, guides nonlinear prioritization by optimizing the sequence of test case executions. The model integrates a sliding window strategy for adaptive foc
APA, Harvard, Vancouver, ISO, and other styles
45

Wahyuni, Sri, Samirah Dunakhir, and Abdul Rijal. "The Effect of Providing Reinforcement on Learning Motivation in Class XI Accounting at SMKN 1 Polewali." Edumaspul: Jurnal Pendidikan 7, no. 2 (2023): 3618–25. http://dx.doi.org/10.33487/edumaspul.v7i2.6939.

Full text of the source
Abstract:
This study aims to determine the effect of giving reinforcement skills by teachers on learning motivation in Financial Accounting in class XI Accounting at SMKN 1 Polewali. The variables in this study are reinforcement skills as the independent variable and learning motivation as the dependent variable. The population in this study were all 64 students in class XI Accounting at SMKN 1 Polewali for the 2022/2023 period, while the sample in this study was taken using a purposive sampling technique with a sample of 35 students. Data collection techniques used are questionnaires and documentation.
APA, Harvard, Vancouver, ISO, and other styles
46

Tran, Khuong, Maxwell Standen, Junae Kim, et al. "Cascaded Reinforcement Learning Agents for Large Action Spaces in Autonomous Penetration Testing." Applied Sciences 12, no. 21 (2022): 11265. http://dx.doi.org/10.3390/app122111265.

Full text of the source
Abstract:
Organised attacks on a computer system to test existing defences, i.e., penetration testing, have been used extensively to evaluate network security. However, penetration testing is a time-consuming process. Additionally, establishing a strategy that resembles a real cyber-attack typically requires in-depth knowledge of the cybersecurity domain. This paper presents a novel architecture, named deep cascaded reinforcement learning agents, or CRLA, that addresses large discrete action spaces in an autonomous penetration testing simulator, where the number of actions exponentially increases with t
APA, Harvard, Vancouver, ISO, and other styles
47

Semenov, Serhii, Cao Weilin, Liqiang Zhang, and Serhii Bulba. "AUTOMATED PENETRATION TESTING METHOD USING DEEP MACHINE LEARNING TECHNOLOGY." Advanced Information Systems 5, no. 3 (2021): 119–27. http://dx.doi.org/10.20998/2522-9052.2021.3.16.

Full text of the source
Abstract:
The article developed a method for automated penetration testing using deep machine learning technology. The main purpose of the development is to improve the security of computer systems. To achieve this goal, the analysis of existing penetration testing methods was carried out and their main disadvantages were identified. They are mainly related to the subjectivity of assessments in the case of manual testing. In cases of automated testing, most authors confirm the fact that there is no unified effective solution for the procedures used. This contradiction is resolved using intelligent metho
APA, Harvard, Vancouver, ISO, and other styles
48

Lin, Che, Gaofei Han, Qingling Wu, et al. "Improving Generalization in Collision Avoidance for Multiple Unmanned Aerial Vehicles via Causal Representation Learning." Sensors 25, no. 11 (2025): 3303. https://doi.org/10.3390/s25113303.

Full text of the source
Abstract:
Deep-reinforcement-learning-based multi-UAV collision avoidance and navigation methods have made significant progress. However, the fundamental challenge for those methods is their restricted capability to generalize beyond the specific scenarios they are trained on. We find that these generalization failures are caused by spurious correlations. To solve this generalization problem, we propose a causal representation learning method to identify the causal representations from images. Specifically, our method can neglect factors of variation that are irrelevant to the deep r
APA, Harvard, Vancouver, ISO, and other styles
49

Liu, Yong, Xuexin Qi, Jiali Zhang, Hui Li, Xin Ge, and Jun Ai. "Automatic Bug Triaging via Deep Reinforcement Learning." Applied Sciences 12, no. 7 (2022): 3565. http://dx.doi.org/10.3390/app12073565.

Full text of the source
Abstract:
Software maintenance and evolution account for approximately 90% of the software development process (e.g., implementation, testing, and maintenance). Bug triaging refers to an activity where developers diagnose, fix, test, and document bug reports during software development and maintenance to improve the speed of bug repair and project progress. However, the large number of bug reports submitted daily increases the triaging workload, and open-source software has a long maintenance cycle. Meanwhile, the developer activity is not stable and changes significantly during software development. He
APA, Harvard, Vancouver, ISO, and other styles
50

Wang, Shikai, Haodong Zhang, Shiji Zhou, Jun Sun, and Qi Shen. "Chip Floorplanning Optimization Using Deep Reinforcement Learning." International Journal of Innovative Research in Computer Science and Technology 12, no. 5 (2024): 100–109. http://dx.doi.org/10.55524/ijircst.2024.12.5.14.

Full text of the source
Abstract:
This paper presents a new method for chip floorplanning optimization using deep reinforcement learning (DRL) combined with graph neural networks (GNNs). The approach addresses the challenges of traditional floorplanning by applying AI to spatial design and intelligent placement decisions. A three-head network architecture, including a policy network, cost network, and reconstruction head, is introduced to improve feature extraction and overall performance. GNNs are employed for state representation and feature extraction, enabling the capture of intricate topological information from chip netlists. A carefully designed r
APA, Harvard, Vancouver, ISO, and other styles