
Journal articles on the topic 'Evasion attack'



Consult the top 50 journal articles for your research on the topic 'Evasion attack.'


You can also download the full text of each publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Alshahrani, Ebtihaj, Daniyal Alghazzawi, Reem Alotaibi, and Osama Rabie. "Adversarial attacks against supervised machine learning based network intrusion detection systems." PLOS ONE 17, no. 10 (2022): e0275971. http://dx.doi.org/10.1371/journal.pone.0275971.

Abstract:
Adversarial machine learning is a recent area of study that explores both adversarial attack strategies and the detection of adversarial attacks, which are inputs specially crafted to outwit the classification of detection systems or disrupt their training process. In this research, we performed two adversarial attack scenarios: we used a Generative Adversarial Network (GAN) to generate synthetic intrusion traffic to test the influence of these attacks on the accuracy of machine learning-based Intrusion Detection Systems (IDSs). We conducted two experiments on adversarial a
2

Kharat, Ashvinee N., and B. D. Phulpagar. "Survey on Methods for Evasion Attack Detection." International Journal of Research Science & Management 4, no. 7 (2017): 54–60. https://doi.org/10.5281/zenodo.831440.

Abstract:
Network security has become more necessary to personal computer users, organizations, and the military. With the advent of the internet, security became a major concern, and the history of security permits a better understanding of the emergence of security technology. Network attacks have become a curse to technology, whereby attackers destroy or gain illegal access to system resources and restrict legitimate users from accessing information. In this paper, we study different types of network security attacks and countermeasures against them. We are also going to study
3

Brohi, Sarfraz, and Qurat-ul-ain Mastoi. "AI Under Attack: Metric-Driven Analysis of Cybersecurity Threats in Deep Learning Models for Healthcare Applications." Algorithms 18, no. 3 (2025): 157. https://doi.org/10.3390/a18030157.

Abstract:
Incorporating Artificial Intelligence (AI) in healthcare has transformed disease diagnosis and treatment by offering unprecedented benefits. However, it has also revealed critical cybersecurity vulnerabilities in Deep Learning (DL) models, which raise significant risks to patient safety and their trust in AI-driven applications. Existing studies primarily focus on theoretical vulnerabilities or specific attack types, leaving a gap in understanding the practical implications of multiple attack scenarios on healthcare AI. In this paper, we provide a comprehensive analysis of key attack vectors,
4

Nirmal, Santosh, and Pramod Patil. "Deceptive Maneuvers: Subverting CNN-AdaBoost Model for Energy Theft Detection." Electronics ETF 28, no. 2 (2024): 46–53. https://doi.org/10.53314/els2428046n.

Abstract:
As deep learning models become more prevalent in smart grid systems, ensuring their accuracy in tasks like identifying abnormal customer behavior is increasingly important. As their use in smart grids for energy theft detection grows, so do attackers' attempts to craft adversarial data that deceives the model into producing a desired output. Evasion attacks (EA) attempt to evade detection by misclassifying input data during testing. Data inputs are manipulated so that the changes are not noticeable to humans but can cause the machine learning (ML) model to produce incorrect results. Electricity
5

Wang, Qingya, Yi Wu, Haojun Xuan, and Huishu Wu. "FLARE: A Backdoor Attack to Federated Learning with Refined Evasion." Mathematics 12, no. 23 (2024): 3751. http://dx.doi.org/10.3390/math12233751.

Abstract:
Federated Learning (FL) is vulnerable to backdoor attacks in which attackers inject malicious behaviors into the global model. To counter these attacks, existing works mainly introduce sophisticated defenses by analyzing model parameters and utilizing robust aggregation strategies. However, we find that FL systems can still be attacked by exploiting their inherent complexity. In this paper, we propose a novel three-stage backdoor attack strategy named FLARE: A Backdoor Attack to Federated Learning with Refined Evasion, which is designed to operate under the radar of conventional defense strate
6

Speth, Cornelia, and Günter Rambach. "Complement Attack against Aspergillus and Corresponding Evasion Mechanisms." Interdisciplinary Perspectives on Infectious Diseases 2012 (2012): 1–9. http://dx.doi.org/10.1155/2012/463794.

Abstract:
Invasive aspergillosis shows a high mortality rate particularly in immunocompromised patients. Perpetually increasing numbers of affected patients highlight the importance of a clearer understanding of interactions between innate immunity and fungi. Innate immunity is considered to be the most significant host defence against invasive fungal infections. Complement represents a crucial part of this first line defence and comprises direct effects against invading pathogens as well as bridging functions to other parts of the immune network. However, despite the potency of complement to attack for
7

Chen, Hongyi, Jinshu Su, Linbo Qiao, and Qin Xin. "Malware Collusion Attack against SVM: Issues and Countermeasures." Applied Sciences 8, no. 10 (2018): 1718. http://dx.doi.org/10.3390/app8101718.

Abstract:
Android has become the most popular mobile platform, and a hot target for malware developers. At the same time, researchers have come up with numerous ways to deal with malware. Among them, machine learning based methods are quite effective in Android malware detection, the accuracy of which can be as high as 98%. Thus, malware developers have the incentives to develop more advanced malware to evade detection. This paper presents an adversary attack scenario (Collusion Attack) that will compromise current machine learning based malware detection methods, especially Support Vector Machines (SVM
8

Evdokimenkov, Veniamin N., Dmitriy A. Kozorez, and Lev N. Rabinskiy. "Unmanned aerial vehicle evasion manoeuvres from enemy aircraft attack." Journal of the Mechanical Behavior of Materials 30, no. 1 (2021): 87–94. http://dx.doi.org/10.1515/jmbm-2021-0009.

Abstract:
One of the most important problems associated with the combat use of unmanned aerial vehicles remains ensuring their high survivability under deliberate countermeasures, which can come both from ground-based air defence systems and from fighter aircraft. For this reason, the study and optimization of evasive manoeuvres of an unmanned aerial vehicle from an enemy aircraft attack remains relevant. Based on the game approach, the authors of this paper propose an algorithm for guaranteeing control of the trajectory of an unmanned aerial vehicle, which ensures its evasion fro
9

Sheikh, Zakir Ahmad, Yashwant Singh, Pradeep Kumar Singh, and Paulo J. Sequeira Gonçalves. "Defending the Defender: Adversarial Learning Based Defending Strategy for Learning Based Security Methods in Cyber-Physical Systems (CPS)." Sensors 23, no. 12 (2023): 5459. http://dx.doi.org/10.3390/s23125459.

Abstract:
Cyber-Physical Systems (CPS) are prone to many security exploitations due to a greater attack surface being introduced by their cyber component by the nature of their remote accessibility or non-isolated capability. Security exploitations, on the other hand, rise in complexities, aiming for more powerful attacks and evasion from detections. The real-world applicability of CPS thus poses a question mark due to security infringements. Researchers have been developing new and robust techniques to enhance the security of these systems. Many techniques and security aspects are being considered to b
10

Alzaidy, Sharoug, and Hamad Binsalleeh. "Adversarial Attacks with Defense Mechanisms on Convolutional Neural Networks and Recurrent Neural Networks for Malware Classification." Applied Sciences 14, no. 4 (2024): 1673. http://dx.doi.org/10.3390/app14041673.

Abstract:
In the field of behavioral detection, deep learning has been extensively utilized. For example, deep learning models have been utilized to detect and classify malware. Deep learning, however, has vulnerabilities that can be exploited with crafted inputs, resulting in malicious files being misclassified. Cyber-Physical Systems (CPS) may be compromised by malicious files, which can have catastrophic consequences. This paper presents a method for classifying Windows portable executables (PEs) using Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs). To generate malware exec
11

Li, Jiate, Meng Pang, and Binghui Wang. "Practicable Black-Box Evasion Attacks on Link Prediction in Dynamic Graphs—a Graph Sequential Embedding Method." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 25 (2025): 26265–72. https://doi.org/10.1609/aaai.v39i25.34824.

Abstract:
Link prediction in dynamic graphs (LPDG) has been widely applied to real-world applications such as website recommendation, traffic flow prediction, organizational studies, etc. These models are usually kept local and secure, with only the interactive interface restrictively available to the public. Thus, the problem of the black-box evasion attack on the LPDG model, where model interactions and data perturbations are restricted, seems to be essential and meaningful in practice. In this paper, we propose the first practicable black-box evasion attack method that achieves effective attacks agai
12

Dai, Jiazhu, and Siwei Xiong. "An Evasion Attack against Stacked Capsule Autoencoder." Algorithms 15, no. 2 (2022): 32. http://dx.doi.org/10.3390/a15020032.

Abstract:
Capsule networks are a type of neural network that use the spatial relationship between features to classify images. By capturing the poses and relative positions between features, this network is better able to recognize affine transformation and surpass traditional convolutional neural networks (CNNs) when handling translation, rotation, and scaling. The stacked capsule autoencoder (SCAE) is a state-of-the-art capsule network that encodes an image in capsules which each contain poses of features and their correlations. The encoded contents are then input into the downstream classifier to pre
13

Khorshidpour, Zeinab, Jafar Tahmoresnezhad, Sattar Hashemi, and Ali Hamzeh. "Domain invariant feature extraction against evasion attack." International Journal of Machine Learning and Cybernetics 9, no. 12 (2017): 2093–104. http://dx.doi.org/10.1007/s13042-017-0692-6.

14

Friedman, H. M., C. Saldanha, L. Wang, et al. "Herpes simplex virus evasion of complement attack." Immunopharmacology 49, no. 1-2 (2000): 58. http://dx.doi.org/10.1016/s0162-3109(00)80166-6.

15

Fu, Zhongwang, and Xiaohui Cui. "ELAA: An Ensemble-Learning-Based Adversarial Attack Targeting Image-Classification Model." Entropy 25, no. 2 (2023): 215. http://dx.doi.org/10.3390/e25020215.

Abstract:
The research on image-classification adversarial attacks is crucial in the realm of artificial intelligence (AI) security. Most image-classification adversarial attack methods are designed for white-box settings, demanding target model gradients and network architectures, which is less practical in real-world cases. Black-box adversarial attacks, however, are immune to the above limitations, and reinforcement learning (RL) seems a feasible approach to exploring an optimized evasion policy. Unfortunately, existing RL-based works perform worse than expected in the attack success rate. In lig
16

Ghanem, Khadoudja, Ziad Kherbache, and Omar Ourdighi. "Enhancing Adversarial Examples for Evading Malware Detection Systems: A Memetic Algorithm Approach." International Journal of Computer Network and Information Security 17, no. 1 (2025): 1–16. https://doi.org/10.5815/ijcnis.2025.01.01.

Abstract:
Malware detection using Machine Learning techniques has gained popularity due to their high accuracy. However, ML models are susceptible to Adversarial Examples, specifically crafted samples intended to deceive the detectors. This paper presents a novel method for generating evasive AEs by augmenting existing malware with a new section at the end of the PE file, populated with binary data using memetic algorithms. Our method hybridizes global search and local search techniques to achieve optimized results. The Malconv Model, a well-known state-of-the-art deep learning model designed explicitly
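The append-based evasion idea behind this entry can be sketched in a few lines of Python (an editorial illustration, not the authors' memetic algorithm: it appends overlay bytes rather than injecting a proper new PE section, which would require rewriting the PE headers; the file names and payload are placeholders, and such experiments belong only on benign test binaries):

```python
def append_overlay(src_path: str, dst_path: str, payload: bytes) -> None:
    # Bytes appended past the last declared section (the "overlay") do not
    # change how the Windows loader runs the file, but they do shift the raw
    # byte statistics that byte-level detectors such as MalConv consume.
    with open(src_path, "rb") as f:
        data = f.read()
    with open(dst_path, "wb") as f:
        f.write(data + payload)

# Hypothetical usage: append_overlay("benign.exe", "benign_adv.exe", b"\x00" * 4096)
```

The paper's contribution lies in optimizing the appended bytes with a hybrid global/local (memetic) search; the sketch above only shows where such bytes go.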
17

Zhao, Maochang, and Jing Zhang. "Highly Imperceptible Black-Box Graph Injection Attacks with Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 12 (2025): 13357–64. https://doi.org/10.1609/aaai.v39i12.33458.

Abstract:
Recent studies have revealed the vulnerability of graph neural networks (GNNs) to adversarial attacks. In practice, effectively attacking GNNs is not easy. Existing attack methods primarily focus on modifying the topology of the graph data. In many scenarios, attackers do not have the authority to manipulate the graph's topology, making such attacks challenging to execute. Although node injection attacks are more feasible than modifying the topology, current injection attacks rely on knowledge of the victim model's architecture. This dependency significantly degrades attack quality when there
18

Larsen, Mads Delbo, Sisse Ditlev, Rafael Bayarri Olmos, et al. "Malaria parasite evasion of classical complement pathway attack." Molecular Immunology 89 (September 2017): 159. http://dx.doi.org/10.1016/j.molimm.2017.06.123.

19

Miller, David, Yujia Wang, and George Kesidis. "When Not to Classify: Anomaly Detection of Attacks (ADA) on DNN Classifiers at Test Time." Neural Computation 31, no. 8 (2019): 1624–70. http://dx.doi.org/10.1162/neco_a_01209.

Abstract:
A significant threat to the recent, wide deployment of machine learning–based systems, including deep neural networks (DNNs), is adversarial learning attacks. The main focus here is on evasion attacks against DNN-based classifiers at test time. While much work has focused on devising attacks that make small perturbations to a test pattern (e.g., an image) that induce a change in the classifier's decision, until recently there has been a relative paucity of work defending against such attacks. Some works robustify the classifier to make correct decisions on perturbed patterns. This is an import
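As a concrete reference point for the test-time evasion attacks this entry discusses, here is a minimal FGSM-style perturbation sketch (a generic textbook attack, not the authors' detection method; `model`, `x`, and `y` are assumed placeholders for a trained PyTorch classifier, an input batch, and its labels):

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.03):
    # One signed-gradient step that increases the classification loss,
    # producing a small L-infinity perturbation of the test pattern.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + eps * x_adv.grad.sign()
        x_adv = x_adv.clamp(0.0, 1.0)  # keep pixels in the valid [0, 1] range
    return x_adv.detach()
```

Anomaly detectors such as the ADA approach surveyed here aim to flag exactly this kind of perturbed input at test time rather than classify it.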
20

Sidi, Lior, Asaf Nadler, and Asaf Shabtai. "MaskDGA: An Evasion Attack Against DGA Classifiers and Adversarial Defenses." IEEE Access 8 (2020): 161580–92. https://doi.org/10.1109/ACCESS.2020.3020964.

Abstract:
Domain generation algorithms (DGAs) are commonly used by botnets to generate domain names that bots can use to establish communication channels with their command and control servers. Recent publications presented deep learning classifiers that detect algorithmically generated domain (AGD) names in real time with high accuracy and thus significantly reduce the effectiveness of DGAs for botnet communication. In this paper, we present MaskDGA, an evasion technique that uses adversarial learning to modify AGD names in order to evade inline DGA classifiers, without the need for the attacker to pos
21

B, Priyadarsana. "The Qilin (Agenda) Ransomware Campaign: Attack Vectors, Evasion Techniques, and Mitigation Strategies." International Journal for Research in Applied Science and Engineering Technology 13, no. 7 (2025): 334–40. https://doi.org/10.22214/ijraset.2025.73005.

Abstract:
Qilin ransomware is a sophisticated cyber-attack operation that has targeted critical infrastructure systems in various locations around the world. The attacks were highly sophisticated: Qilin was able to bypass authentication and execute remote code using an unpatched vulnerability in Fortinet FortiGate devices and related products in the Fortinet suite. The operation is offered as Ransomware-as-a-Service, with the ability to load any payload as required. This review paper examines the payload delivery mechanism, the evasion tactics, and multi-phase mitigation and recovery techniques
22

Ichetovkin, E. A., and I. V. Kotenko. "Models and Algorithms for Protecting Intrusion Detection Systems from Attacks on Machine Learning Components." Computational Nanotechnology 12, no. 1 (2025): 17–25. https://doi.org/10.33693/2313-223x-2025-12-1-17-25.

Abstract:
Today, one of the means of protecting network infrastructure from cyberattacks is intrusion detection systems. Digitalization requires the use of tools that can cope not only with known types of attacks, but also with previously undescribed ones. Machine learning can be used to protect against such threats. The paper presents models and algorithms for protecting against evasion attacks on machine learning components of intrusion detection systems. The novelty is that for the first time, a simulation of the use of a protection subsystem based on long-short-term memory autoencoders during a fast
23

Volodin, Ilya V., Michael M. Putyato, Alexander S. Makaryan, and Vyacheslav Yu. Evglevsky. "Classification of Attack Mechanisms and Research of Protection Methods for Systems Using Machine Learning and Artificial Intelligence Algorithms." Caspian Journal: Control and High Technologies 54, no. 2 (2021): 90–98. http://dx.doi.org/10.21672/2074-1707.2021.53.1.090-098.

Abstract:
This article provides a complete classification of attacks using artificial intelligence. Three main identified sections were considered: attacks on information systems and computer networks, attacks on artificial intelligence models (poisoning attacks, evasion attacks, extraction attacks, privacy attacks), attacks on human consciousness and opinion (all types of deepfake). In each of these sections, the mechanisms of attacks were identified and studied, in accordance with them, the methods of protection were set. In conclusion, a specific example of an attack using a pretrained model was anal
24

Imran, Muhammad, Annalisa Appice, and Donato Malerba. "Evaluating Realistic Adversarial Attacks against Machine Learning Models for Windows PE Malware Detection." Future Internet 16, no. 5 (2024): 168. http://dx.doi.org/10.3390/fi16050168.

Abstract:
During the last decade, the cybersecurity literature has conferred a high-level role to machine learning as a powerful security paradigm to recognise malicious software in modern anti-malware systems. However, a non-negligible limitation of machine learning methods used to train decision models is that adversarial attacks can easily fool them. Adversarial attacks are attack samples produced by carefully manipulating the samples at the test time to violate the model integrity by causing detection mistakes. In this paper, we analyse the performance of five realistic target-based adversarial atta
25

Bao, Hongyan, Yufei Han, Yujun Zhou, Xin Gao, and Xiangliang Zhang. "Towards Efficient and Domain-Agnostic Evasion Attack with High-Dimensional Categorical Inputs." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (2023): 6753–61. http://dx.doi.org/10.1609/aaai.v37i6.25828.

Abstract:
Our work targets searching for feasible adversarial perturbations to attack a classifier with high-dimensional categorical inputs in a domain-agnostic setting. This is intrinsically an NP-hard knapsack problem where the exploration space becomes explosively larger as the feature dimension increases. Without the help of domain knowledge, solving this problem via heuristic methods, such as Branch-and-Bound, suffers from exponential complexity, yet can bring arbitrarily bad attack results. We address the challenge via the lens of multi-armed bandit based combinatorial search. Our proposed method, nam
26

Zhang, Yunchun, Jiaqi Jiang, Chao Yi, et al. "A Robust CNN for Malware Classification against Executable Adversarial Attack." Electronics 13, no. 5 (2024): 989. http://dx.doi.org/10.3390/electronics13050989.

Abstract:
Deep-learning-based malware-detection models are threatened by adversarial attacks. This paper designs a robust and secure convolutional neural network (CNN) for malware classification. First, three CNNs with different pooling layers, including global average pooling (GAP), global max pooling (GMP), and spatial pyramid pooling (SPP), are proposed. Second, we designed an executable adversarial attack to construct adversarial malware by changing the meaningless and unimportant segments within the Portable Executable (PE) header file. Finally, to consolidate the GMP-based CNN, a header-aware loss
27

Jari, Sawsan Darweesh. "Review: Parasite Strategies to Escape Attack by The Immune System of Their Hosts." European Journal of Theoretical and Applied Sciences 2, no. 2 (2024): 154–56. http://dx.doi.org/10.59324/ejtas.2024.2(2).14.

Abstract:
This review aims to investigate the strategies that enable parasites to escape attack by the immune systems of their hosts and cause infection. A parasite is a creature that obtains its sustenance and other requirements from a host, which is another organism that provides support to the parasite. Medical parasitology encompasses protozoa, helminths, and some arthropods. Parasitic diseases lead to host immune responses that expel the infesting parasites. Parasites have also evolved a number of strategies to prevent host immune attacks and survive in the environment of the host. In this st
28

Jari, Sawsan Darweesh. "Review: Parasite Strategies to Escape Attack by The Immune System of Their Hosts." European Journal of Theoretical and Applied Sciences 2, no. 2 (2024): 154–56. https://doi.org/10.59324/ejtas.2024.2(2).14.

Abstract:
This review aims to investigate the strategies that enable parasites to escape attack by the immune systems of their hosts and cause infection. A parasite is a creature that obtains its sustenance and other requirements from a host, which is another organism that provides support to the parasite. Medical parasitology encompasses protozoa, helminths, and some arthropods. Parasitic diseases lead to host immune responses that expel the infesting parasites. Parasites have also evolved a number of strategies to prevent host immune attacks and survive in the environment of the host. In this st
29

Fan, Zuoe, Hao Ding, Linping Feng, Bochen Li, and Lei Song. "Research on Time-Cooperative Guidance with Evasive Maneuver for Multiple Underwater Intelligent Vehicles." Journal of Marine Science and Engineering 12, no. 6 (2024): 1018. http://dx.doi.org/10.3390/jmse12061018.

Abstract:
In order to achieve the precise attack of multiple underwater intelligent vehicles (UIVs) on the same target ship at a fixed impact time, and to improve the penetration capability of the UIVs themselves, this study investigated the guidance law for the time-cooperative guidance of UIVs with maneuvering evasion. The evasive maneuver of the UIV increases the line-of-sight angle between the UIV and the target, which decreases the guidance precision of the UIV. A segmented control strategy is proposed to solve the problem of decreasing guidance precision caused by evading maneuvers, which is also
30

Aslan, Ömer, Semih Serkant Aktuğ, Merve Ozkan-Okay, Abdullah Asim Yilmaz, and Erdal Akin. "A Comprehensive Review of Cyber Security Vulnerabilities, Threats, Attacks, and Solutions." Electronics 12, no. 6 (2023): 1333. http://dx.doi.org/10.3390/electronics12061333.

Abstract:
Internet usage has grown exponentially, with individuals and companies performing multiple daily transactions in cyberspace rather than in the real world. The coronavirus (COVID-19) pandemic has accelerated this process. As a result of the widespread usage of the digital environment, traditional crimes have also shifted to the digital space. Emerging technologies such as cloud computing, the Internet of Things (IoT), social media, wireless communication, and cryptocurrencies are raising security concerns in cyberspace. Recently, cyber criminals have started to use cyber attacks as a service to
31

Zhou, Qi, Haipeng Chen, Yitao Zheng, and Zhen Wang. "EvaLDA: Efficient Evasion Attacks Towards Latent Dirichlet Allocation." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 16 (2021): 14602–11. http://dx.doi.org/10.1609/aaai.v35i16.17716.

Abstract:
As one of the most powerful topic models, Latent Dirichlet Allocation (LDA) has been used in a vast range of tasks, including document understanding, information retrieval and peer-reviewer assignment. Despite its tremendous popularity, the security of LDA has rarely been studied. This poses severe risks to security-critical tasks such as sentiment analysis and peer-reviewer assignment that are based on LDA. In this paper, we are interested in knowing whether LDA models are vulnerable to adversarial perturbations of benign document examples during inference time. We formalize the evasion attac
32

Sultana, Naheed, and T. C. Swetha Priya. "Role of Accomplices in Morphing Attacks: A Comprehensive Survey." International Journal of Engineering Technology and Management Sciences 9, Special Issue 1 (2025): 43–50. https://doi.org/10.46647/ijetms.2025.v09si01.006.

Abstract:
Morphing attacks, where digital objects like images, videos, or biometric information are manipulated to look legitimate while hiding malicious intent, are serious threats to security systems. This extensive survey explores the vital role accomplices play in carrying out morphing attacks. Accomplices can range from human insiders and outside collaborators to automated systems, all of whom play a role at different stages of the attack, from data gathering and morphing methods to delivery and evasion of detection. Human accomplices can help by providing access to sensitive information, c
33

Li, Deqiang, Qianmu Li, Yanfang (Fanny) Ye, and Shouhuai Xu. "Arms Race in Adversarial Malware Detection: A Survey." ACM Computing Surveys 55, no. 1 (2023): 1–35. http://dx.doi.org/10.1145/3484491.

Abstract:
Malicious software (malware) is a major cyber threat that has to be tackled with Machine Learning (ML) techniques because millions of new malware examples are injected into cyberspace on a daily basis. However, ML is vulnerable to attacks known as adversarial examples. In this article, we survey and systematize the field of Adversarial Malware Detection (AMD) through the lens of a unified conceptual framework of assumptions, attacks, defenses, and security properties. This not only leads us to map attacks and defenses to partial order structures, but also allows us to clearly describe the atta
34

Kwon, Hyun, Changhyun Cho, and Jun Lee. "Priority Evasion Attack: An Adversarial Example That Considers the Priority of Attack on Each Classifier." IEICE Transactions on Information and Systems E105.D, no. 11 (2022): 1880–89. http://dx.doi.org/10.1587/transinf.2022ngp0002.

35

Cao, Han, Chengxiang Si, Qindong Sun, Yanxiao Liu, Shancang Li, and Prosanta Gope. "ABCAttack: A Gradient-Free Optimization Black-Box Attack for Fooling Deep Image Classifiers." Entropy 24, no. 3 (2022): 412. http://dx.doi.org/10.3390/e24030412.

Abstract:
The vulnerability of deep neural network (DNN)-based systems makes them susceptible to adversarial perturbation and may cause classification task failure. In this work, we propose an adversarial attack model using the Artificial Bee Colony (ABC) algorithm to generate adversarial samples without the need for a further gradient evaluation and training of the substitute model, which can further improve the chance of task failure caused by adversarial perturbation. In untargeted attacks, the proposed method obtained 100%, 98.6%, and 90.00% success rates on the MNIST, CIFAR-10 and ImageNet datasets
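The gradient-free, query-only setting the paper studies can be illustrated with a much simpler stand-in for its Artificial Bee Colony search: plain random search against a score-based oracle (illustrative only, not the paper's algorithm; `query` is an assumed function returning class probabilities for a NumPy input in [0, 1]):

```python
import numpy as np

def random_search_attack(query, x, true_label, eps=0.05, iters=500, seed=0):
    # Keep any bounded perturbation that lowers the model's confidence in
    # the true class; stop as soon as the predicted label flips.
    rng = np.random.default_rng(seed)
    best, best_conf = x.copy(), query(x)[true_label]
    for _ in range(iters):
        cand = np.clip(x + rng.uniform(-eps, eps, size=x.shape), 0.0, 1.0)
        conf = query(cand)[true_label]
        if conf < best_conf:
            best, best_conf = cand, conf
            if np.argmax(query(best)) != true_label:
                break  # evasion succeeded
    return best
```

The paper's ABC method replaces this naive proposal step with a population-based search over candidate perturbations.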
36

Sidi, Lior, Asaf Nadler, and Asaf Shabtai. "MaskDGA: An Evasion Attack Against DGA Classifiers and Adversarial Defenses." IEEE Access 8 (2020): 161580–92. http://dx.doi.org/10.1109/access.2020.3020964.

37

Chan, Patrick P. K., Zhe Lin, Xian Hu, Eric C. C. Tsang, and Daniel S. Yeung. "Sensitivity based robust learning for stacked autoencoder against evasion attack." Neurocomputing 267 (December 2017): 572–80. http://dx.doi.org/10.1016/j.neucom.2017.06.032.

38

Leid, R. Wes, C. M. Suquet, and L. Tanigoshi. "Parasite defense mechanisms for evasion of host attack; A review." Veterinary Parasitology 25, no. 2 (1987): 147–62. http://dx.doi.org/10.1016/0304-4017(87)90101-4.

39

Kleber, Stephan, and Patrick Wachter. "A Strategy to Evaluate Test Time Evasion Attack Feasibility." Datenschutz und Datensicherheit - DuD 47, no. 8 (2023): 478–82. http://dx.doi.org/10.1007/s11623-023-1802-0.

40

Echeberria-Barrio, Xabier, Amaia Gil-Lerchundi, Iñigo Mendialdua, and Raul Orduna-Urrutia. "Topological safeguard for evasion attack interpreting the neural networks’ behavior." Pattern Recognition 147 (March 2024): 110130. http://dx.doi.org/10.1016/j.patcog.2023.110130.

41

Wang, Zhong, Zhiwen Wen, Weijun Cai, and Pei Wang. "Research on game strategy of underwater attack and defense process in typical situation." Xibei Gongye Daxue Xuebao/Journal of Northwestern Polytechnical University 41, no. 4 (2023): 774–83. http://dx.doi.org/10.1051/jnwpu/20234140774.

Abstract:
Aiming at the problem of underwater attack and defense in typical situations, a mathematical model of the three-party attack and defense problem composed of a torpedo, a submarine, and an anti-torpedo torpedo (ATT) is established. Under the condition of three-party game confrontation, the three-party game problem is transformed into the pursuit-and-escape game between the submarine and the torpedo, and the interception-and-evasion game between the submarine-launched ATT and the torpedo. Based on optimal control theory and differential game theory, the three-party optimal game proble
42

Tayyab, Umm-e-Hani, Muhammad Zain, Faiza Babar Khan, and Muhammad Hanif Durad. "Identification of Malicious PDFs Using Convolutional Neural Networks." VFAST Transactions on Software Engineering 10, no. 3 (2022): 51–57. http://dx.doi.org/10.21015/vtse.v10i3.1114.

Abstract:
With multiple possible carriers of malware, one of the file formats most targeted by malware writers is the PDF format, due to its inherent shortcomings as well as the limitations of PDF readers. The PDF-based attack is one of the most common document-based attacks. Some of the most alluring features of PDF files include their widespread use, which has displaced Word documents quite dominantly; the ease of crafting a malicious PDF; and, above all, the format's capability of containing JavaScript. Initially, researchers focused on identifying malicious PDFs by comparing the structural change
43

Rejito, Juli, Deris Stiawan, Ahmed Alshaflut, and Rahmat Budiarto. "Machine learning-based anomaly detection for smart home networks under adversarial attack." Computer Science and Information Technologies 5, no. 2 (2024): 122–29. http://dx.doi.org/10.11591/csit.v5i2.pp122-129.

Abstract:
As smart home networks become more widespread and complex, they are capable of providing users with a wide range of applications and services. At the same time, the networks are also vulnerable to attack from malicious adversaries who can take advantage of the weaknesses in the network's devices and protocols. Detection of anomalies is an effective way to identify and mitigate these attacks; however, it requires a high degree of accuracy and reliability. This paper proposes an anomaly detection method based on machine learning (ML) that can provide a robust and reliable solution for the detect
44

Rejito, Juli, Deris Stiawan, Ahmed Alshaflut, and Rahmat Budiarto. "Machine learning-based anomaly detection for smart home networks under adversarial attack." Computer Science and Information Technologies 5, no. 2 (2024): 122–29. http://dx.doi.org/10.11591/csit.v5i2.p122-129.

Abstract:
As smart home networks become more widespread and complex, they are capable of providing users with a wide range of applications and services. At the same time, the networks are also vulnerable to attack from malicious adversaries who can take advantage of the weaknesses in the network's devices and protocols. Detection of anomalies is an effective way to identify and mitigate these attacks; however, it requires a high degree of accuracy and reliability. This paper proposes an anomaly detection method based on machine learning (ML) that can provide a robust and reliable solution for the detect
45

Rejito, Juli, Deris Stiawan, Ahmed Alshaflut, and Rahmat Budiarto. "Machine learning-based anomaly detection for smart home networks under adversarial attack." Computer Science and Information Technologies 5, no. 2 (2024): 122–29. https://doi.org/10.11591/csit.v5i2.pp122-129.

Abstract:
As smart home networks become more widespread and complex, they are capable of providing users with a wide range of applications and services. At the same time, the networks are also vulnerable to attack from malicious adversaries who can take advantage of the weaknesses in the network's devices and protocols. Detection of anomalies is an effective way to identify and mitigate these attacks; however, it requires a high degree of accuracy and reliability. This paper proposes an anomaly detection method based on machine learning (ML) that can provide a robust and reliable solution for the detect
46

Evdokimenkov, V. N., M. N. Krasilshchikov, and N. A. Lyapin. "The Research of Unmanned Aircraft Evasive Maneuvers from Attack by Enemy Aircraft on the Basis of the Game Approach." Vestnik komp'iuternykh i informatsionnykh tekhnologii, no. 184 (October 2019): 21–31. http://dx.doi.org/10.14489/vkit.2019.10.pp.021-031.

Abstract:
Actual level of unmanned aerial vehicles development allows us to consider them as an effective tool for solving a variety of civil and military tasks (primarily reconnaissance and strike). At the same time, one of the most important problems associated with the combat use of unmanned aerial vehicles remains to ensure their high survivability in organized counteraction conditions, the source of which can be both ground-based air defense and fighter aircraft (manned or unmanned). For this reason, the study and optimization of unmanned aerial vehicle evasion maneuvers from an enemy air attack re
47

Moussaileb, Routa, Nora Cuppens, Jean-Louis Lanet, and Hélène Le Bouder. "A Survey on Windows-based Ransomware Taxonomy and Detection Mechanisms." ACM Computing Surveys 54, no. 6 (2021): 1–36. http://dx.doi.org/10.1145/3453153.

Abstract:
Ransomware remains an alarming threat in the 21st century. It has evolved from being a simple scare tactic into a complex malware capable of evasion. Formerly, end-users were targeted via mass infection campaigns. Nevertheless, in recent years, the attackers have focused on targeted attacks, since the latter are profitable and can induce severe damage. A vast number of detection mechanisms have been proposed in the literature. We provide a systematic review of ransomware countermeasures starting from its deployment on the victim machine until the ransom payment via cryptocurrency. We define fo
48

Kwon, Hyun. "Untargeted Evasion Attacks on Deep Neural Networks Using StyleGAN." Electronics 14, no. 3 (2025): 574. https://doi.org/10.3390/electronics14030574.

Abstract:
In this study, we propose a novel method for generating untargeted adversarial examples using a Generative Adversarial Network (GAN) in an unrestricted black-box environment. The proposed approach produces adversarial examples that are classified into random classes distinct from their original labels, while maintaining high visual similarity to the original samples from a human perspective. This is achieved by leveraging the capabilities of StyleGAN to manipulate the latent space representation of images, enabling precise control over visual distortions. To evaluate the efficacy of the propos
49

Cao, Ning, Yingying Wang, Guofu Li, Yuyan Shen, Junshe Wang, and Hongbin Zhang. "Improve the robustness of data mining algorithm against adversarial evasion attack." International Journal of Innovative Computing and Applications 9, no. 3 (2018): 142. http://dx.doi.org/10.1504/ijica.2018.093732.

50

Zhang, Hongbin, Junshe Wang, Ning Cao, Yuyan Shen, Guofu Li, and Yingying Wang. "Improve the robustness of data mining algorithm against adversarial evasion attack." International Journal of Innovative Computing and Applications 9, no. 3 (2018): 142. http://dx.doi.org/10.1504/ijica.2018.10014854.
