
Journal articles on the topic 'Physical adversarial attack'


Consult the top 50 journal articles for your research on the topic 'Physical adversarial attack.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever either is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Yang, Kaichen, Tzungyu Tsai, Honggang Yu, Tsung-Yi Ho, and Yier Jin. "Beyond Digital Domain: Fooling Deep Learning Based Recognition System in Physical World." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (2020): 1088–95. http://dx.doi.org/10.1609/aaai.v34i01.5459.

Full text
Abstract:
Adversarial examples that can fool deep neural network (DNN) models in computer vision present a growing threat. The current methods of launching adversarial attacks concentrate on attacking image classifiers by adding noise to digital inputs. The problem of attacking object detection models and adversarial attacks in physical world are rarely touched. Some prior works are proposed to launch physical adversarial attack against object detection models, but limited by certain aspects. In this paper, we propose a novel physical adversarial attack targeting object detection models. Instead of simp
2

Bi, Chuanxiang, Shang Shi, and Jian Qu. "Enhancing Autonomous Driving: A Novel Approach of Mixed Attack and Physical Defense Strategies." ASEAN Journal of Scientific and Technological Reports 28, no. 1 (2024): e254093. https://doi.org/10.55164/ajstr.v28i1.254093.

Full text
Abstract:
Adversarial attacks are a significant threat to autonomous driving safety, especially in the physical world where there is a prevalence of "sticker-paste" attacks on traffic signs. However, most of these attacks are single-category attacks with little interference effect. This paper builds an autonomous driving platform and conducts extensive experiments on five single-category attacks. Moreover, we proposed a new physical attack - a mixed attack consisting of different single-category physical attacks. The proposed method outperforms existing methods and can reduce the accuracy of traffic sig
3

Zhang, Ximin, Jinyin Chen, Haibin Zheng, and Zhenguang Liu. "PhyCamo: A Robust Physical Camouflage via Contrastive Learning for Multi-View Physical Adversarial Attack." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 10 (2025): 10230–38. https://doi.org/10.1609/aaai.v39i10.33110.

Full text
Abstract:
Deep neural networks (DNNs) have achieved remarkable success in widespread applications. Meanwhile, its vulnerability towards carefully crafted adversarial attacks captures special attention. Not only adversarial perturbations in digital space will fool the target DNNs-based detectors making a wrong decision, but also actually printed patches can be camouflaged to defeat detectors in physical space. In particular, multi-view physical adversarial attacks pose a more serious threat to practical scenarios. The existing attacks are still challenged in three aspects, i.e., high-cost data augmentati
4

Jiang, Wei, Tianyuan Zhang, Shuangcheng Liu, Weiyu Ji, Zichao Zhang, and Gang Xiao. "Exploring the Physical-World Adversarial Robustness of Vehicle Detection." Electronics 12, no. 18 (2023): 3921. http://dx.doi.org/10.3390/electronics12183921.

Full text
Abstract:
Adversarial attacks can compromise the robustness of real-world detection models. However, evaluating these models under real-world conditions poses challenges due to resource-intensive experiments. Virtual simulations offer an alternative, but the absence of standardized benchmarks hampers progress. Addressing this, we propose an innovative instant-level data generation pipeline using the CARLA simulator. Through this pipeline, we establish the Discrete and Continuous Instant-level (DCI) dataset, enabling comprehensive experiments involving three detection models and three physical adversaria
5

Wei, Hui, Zhixiang Wang, Xuemei Jia, et al. "HOTCOLD Block: Fooling Thermal Infrared Detectors with a Novel Wearable Design." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (2023): 15233–41. http://dx.doi.org/10.1609/aaai.v37i12.26777.

Full text
Abstract:
Adversarial attacks on thermal infrared imaging expose the risk of related applications. Estimating the security of these systems is essential for safely deploying them in the real world. In many cases, realizing the attacks in the physical space requires elaborate special perturbations. These solutions are often impractical and attention-grabbing. To address the need for a physically practical and stealthy adversarial attack, we introduce HotCold Block, a novel physical attack for infrared detectors that hide persons utilizing the wearable Warming Paste and Cooling Paste. By attaching these r
6

Sheikh, Zakir Ahmad, Yashwant Singh, Pradeep Kumar Singh, and Paulo J. Sequeira Gonçalves. "Defending the Defender: Adversarial Learning Based Defending Strategy for Learning Based Security Methods in Cyber-Physical Systems (CPS)." Sensors 23, no. 12 (2023): 5459. http://dx.doi.org/10.3390/s23125459.

Full text
Abstract:
Cyber-Physical Systems (CPS) are prone to many security exploitations due to a greater attack surface being introduced by their cyber component by the nature of their remote accessibility or non-isolated capability. Security exploitations, on the other hand, rise in complexities, aiming for more powerful attacks and evasion from detections. The real-world applicability of CPS thus poses a question mark due to security infringements. Researchers have been developing new and robust techniques to enhance the security of these systems. Many techniques and security aspects are being considered to b
7

Huang, Hong, Yang Yang, and Yunfei Wang. "AdvFaceGAN: a face dual-identity impersonation attack method based on generative adversarial networks." PeerJ Computer Science 11 (June 11, 2025): e2904. https://doi.org/10.7717/peerj-cs.2904.

Full text
Abstract:
This article aims to reveal security vulnerabilities in current commercial facial recognition systems and promote advancements in facial recognition technology security. Previous research on both digital-domain and physical-domain attacks has lacked consideration of real-world attack scenarios: Digital-domain attacks with good stealthiness often fail to achieve physical implementation, while wearable-based physical-domain attacks typically appear unnatural and cannot evade human visual inspection. We propose AdvFaceGAN, a generative adversarial network (GAN)-based impersonation attack method t
8

Qiu, Shilin, Qihe Liu, Shijie Zhou, and Chunjiang Wu. "Review of Artificial Intelligence Adversarial Attack and Defense Technologies." Applied Sciences 9, no. 5 (2019): 909. http://dx.doi.org/10.3390/app9050909.

Full text
Abstract:
In recent years, artificial intelligence technologies have been widely used in computer vision, natural language processing, automatic driving, and other fields. However, artificial intelligence systems are vulnerable to adversarial attacks, which limit the applications of artificial intelligence (AI) technologies in key security fields. Therefore, improving the robustness of AI systems against adversarial attacks has played an increasingly important role in the further development of AI. This paper aims to comprehensively summarize the latest research progress on adversarial attack and defens
9

Cai, Wei, Xingyu Di, Xin Wang, Weijie Gao, and Haoran Jia. "Stealthy Vehicle Adversarial Camouflage Texture Generation Based on Neural Style Transfer." Entropy 26, no. 11 (2024): 903. http://dx.doi.org/10.3390/e26110903.

Full text
Abstract:
Adversarial attacks that mislead deep neural networks (DNNs) into making incorrect predictions can also be implemented in the physical world. However, most of the existing adversarial camouflage textures that attack object detection models only consider the effectiveness of the attack, ignoring the stealthiness of adversarial attacks, resulting in the generated adversarial camouflage textures appearing abrupt to human observers. To address this issue, we propose a style transfer module added to an adversarial texture generation framework. By calculating the style loss between the texture and t
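
As a rough, generic illustration of the idea this abstract describes (and not the authors' code), a camouflage texture can be optimized with an adversarial loss while a Gram-matrix style loss keeps it visually close to a reference style image. The tiny "detector" and "feature extractor" below are random stand-ins, and the loss weight is an arbitrary assumption.

    # Minimal sketch (not the paper's code): optimize a camouflage texture with an
    # adversarial loss plus a Gram-matrix style loss so it stays close to a
    # reference style image. The networks here are random stand-ins.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    torch.manual_seed(0)
    features = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                             nn.Conv2d(8, 16, 3, padding=1), nn.ReLU())   # stand-in feature extractor
    detector = nn.Sequential(nn.Conv2d(3, 4, 3, padding=1), nn.ReLU(),
                             nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                             nn.Linear(4, 1), nn.Sigmoid())               # stand-in "object confidence"

    def gram(feat):                                   # (B, C, H, W) -> (B, C, C)
        b, c, h, w = feat.shape
        f = feat.view(b, c, h * w)
        return f @ f.transpose(1, 2) / (c * h * w)

    style_image = torch.rand(1, 3, 64, 64)            # reference style (e.g., rust, foliage)
    texture = torch.rand(1, 3, 64, 64, requires_grad=True)
    opt = torch.optim.Adam([texture], lr=0.01)

    for _ in range(100):
        adv_loss = detector(texture).mean()           # push detection confidence down
        sty_loss = F.mse_loss(gram(features(texture)), gram(features(style_image)))
        loss = adv_loss + 10.0 * sty_loss             # weight chosen arbitrarily
        opt.zero_grad(); loss.backward(); opt.step()
        texture.data.clamp_(0, 1)
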
10

Tiliwalidi, Kalibinuer, Bei Hui, Chengyin Hu, and Jingjing Ge. "Adversarial Camera Patch: An Effective and Robust Physical-World Attack on Object Detectors." International Conference on Cyber Warfare and Security 19, no. 1 (2024): 374–84. http://dx.doi.org/10.34190/iccws.19.1.2044.

Full text
Abstract:
Physical adversarial attacks present a novel and growing challenge in cybersecurity, especially for systems reliant on physical inputs for Deep Neural Networks (DNNs), such as those found in Internet of Things (IoT) devices. They are vulnerable to physical adversarial attacks where real-world objects or environments are manipulated to mislead DNNs, thereby threatening the operational integrity and security of IoT devices. The camera-based attacks are one of the most practical adversarial attacks, which are easy to implement and more robust than all the other attack methods, and pose a big thre
11

Luo, Binyan, Hang Cao, Jiahao Cui, et al. "SAR-PATT: A Physical Adversarial Attack for SAR Image Automatic Target Recognition." Remote Sensing 17, no. 1 (2024): 21. https://doi.org/10.3390/rs17010021.

Full text
Abstract:
Deep neural network-based synthetic aperture radar (SAR) automatic target recognition (ATR) systems are susceptible to attack by adversarial examples, which leads to misclassification by the SAR ATR system, resulting in theoretical model robustness problems and security problems in practice. Inspired by optical images, current SAR ATR adversarial example generation is performed in the image domain. However, the imaging principle of SAR images is based on the imaging of the echo signals interacting between the SAR and objects. Generating adversarial examples only in the image domain cannot chan
12

Stein, Zvi, Adir Hazan, and Adrian Stern. "Invisible CMOS Camera Dazzling for Conducting Adversarial Attacks on Deep Neural Networks." Sensors 25, no. 7 (2025): 2301. https://doi.org/10.3390/s25072301.

Full text
Abstract:
Despite the outstanding performance of deep neural networks, they remain vulnerable to adversarial attacks. While digital domain adversarial attacks are well-documented, most physical-world attacks are typically visible to the human eye. Here, we present a novel invisible optical-based physical adversarial attack via dazzling a CMOS camera. This attack involves using a designed light pulse sequence spatially transformed within the acquired image due to the camera’s shutter mechanism. We provide a detailed analysis of the photopic conditions required to keep the attacking light source invisible
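
For readers unfamiliar with the mechanism the abstract alludes to, the following is a minimal simulation sketch (with assumed timing and frequency values, not the paper's setup) of how a pulsed light source turns into row-wise intensity bands under a CMOS rolling shutter.

    # Minimal simulation sketch (assumed parameters): with a rolling shutter, each
    # image row integrates light over a slightly shifted time window, so a pulsed
    # light source shows up as horizontal intensity bands.
    import numpy as np

    rows, cols = 480, 640
    row_readout_time = 30e-6          # seconds per row (assumed)
    exposure_time = 1e-3              # per-row exposure (assumed)
    pulse_freq = 900.0                # Hz, attacking light source (assumed)

    t = np.linspace(0, exposure_time, 200)
    image = np.zeros((rows, cols))
    for r in range(rows):
        t_row = r * row_readout_time + t                     # this row's exposure window
        pulse = 0.5 * (1 + np.sign(np.sin(2 * np.pi * pulse_freq * t_row)))  # on/off light
        image[r, :] = pulse.mean()                           # integrated row brightness

    # `image` now holds the band pattern the pulsed source would superimpose on the
    # scene; an attacker would tune frequency and phase so the bands perturb the DNN.
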
13

Kim, Jeonghun, Hunmin Yang, and Se-Yoon Oh. "Camouflaged Adversarial Patch Attack on Object Detector." Journal of the Korea Institute of Military Science and Technology 26, no. 1 (2023): 44–53. http://dx.doi.org/10.9766/kimst.2023.26.1.044.

Full text
Abstract:
Adversarial attacks have received great attentions for their capacity to distract state-of-the-art neural networks by modifying objects in physical domain. Patch-based attack especially have got much attention for its optimization effectiveness and feasible adaptation to any objects to attack neural network-based object detectors. However, despite their strong attack performance, generated patches are strongly perceptible for humans, violating the fundamental assumption of adversarial examples. In this paper, we propose a camouflaged adversarial patch optimization method using military camoufl
14

Gomez-Alanis, Alejandro, Jose A. Gonzalez-Lopez, and Antonio M. Peinado. "GANBA: Generative Adversarial Network for Biometric Anti-Spoofing." Applied Sciences 12, no. 3 (2022): 1454. http://dx.doi.org/10.3390/app12031454.

Full text
Abstract:
Automatic speaker verification (ASV) is a voice biometric technology whose security might be compromised by spoofing attacks. To increase the robustness against spoofing attacks, presentation attack detection (PAD) or anti-spoofing systems for detecting replay, text-to-speech and voice conversion-based spoofing attacks are being developed. However, it was recently shown that adversarial spoofing attacks may seriously fool anti-spoofing systems. Moreover, the robustness of the whole biometric system (ASV + PAD) against this new type of attack is completely unexplored. In this work, a new genera
15

Deng, Binyue, Denghui Zhang, Fashan Dong, Junjian Zhang, Muhammad Shafiq, and Zhaoquan Gu. "Rust-Style Patch: A Physical and Naturalistic Camouflage Attacks on Object Detector for Remote Sensing Images." Remote Sensing 15, no. 4 (2023): 885. http://dx.doi.org/10.3390/rs15040885.

Full text
Abstract:
Deep neural networks (DNNs) can improve the image analysis and interpretation of remote sensing technology by extracting valuable information from images, and has extensive applications such as military affairs, agriculture, environment, transportation, and urban division. The DNNs for object detection can identify and analyze objects in remote sensing images through fruitful features of images, which improves the efficiency of image processing and enables the recognition of large-scale remote sensing images. However, many studies have shown that deep neural networks are vulnerable to adversar
16

Wang, Donghua, Tingsong Jiang, Jialiang Sun, et al. "FCA: Learning a 3D Full-Coverage Vehicle Camouflage for Multi-View Physical Adversarial Attack." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 2 (2022): 2414–22. http://dx.doi.org/10.1609/aaai.v36i2.20141.

Full text
Abstract:
Physical adversarial attacks in object detection have attracted increasing attention. However, most previous works focus on hiding the objects from the detector by generating an individual adversarial patch, which only covers the planar part of the vehicle’s surface and fails to attack the detector in physical scenarios for multi-view, long-distance and partially occluded objects. To bridge the gap between digital attacks and physical attacks, we exploit the full 3D vehicle surface to propose a robust Full-coverage Camouflage Attack (FCA) to fool detectors. Specifically, we first try rendering
17

Oyama, Tatsuya, Shunsuke Okura, Kota Yoshida, and Takeshi Fujino. "Backdoor Attack on Deep Neural Networks Triggered by Fault Injection Attack on Image Sensor Interface." Sensors 23, no. 10 (2023): 4742. http://dx.doi.org/10.3390/s23104742.

Full text
Abstract:
A backdoor attack is a type of attack method that induces deep neural network (DNN) misclassification. The adversary who aims to trigger the backdoor attack inputs the image with a specific pattern (the adversarial mark) into the DNN model (backdoor model). In general, the adversary mark is created on the physical object input to an image by capturing a photo. With this conventional method, the success of the backdoor attack is not stable because the size and position change depending on the shooting environment. So far, we have proposed a method of creating an adversarial mark for triggering
18

Dimitriu, Adonisz, Tamás Vilmos Michaletzky, and Viktor Remeli. "Improving Transferability of Physical Adversarial Attacks on Object Detectors Through Multi-Model Optimization." Applied Sciences 14, no. 23 (2024): 11423. https://doi.org/10.3390/app142311423.

Full text
Abstract:
Physical adversarial attacks face significant challenges in achieving transferability across different object detection models, especially in real-world conditions. This is primarily due to variations in model architectures, training data, and detection strategies, which can make adversarial examples highly model-specific. This study introduces a multi-model adversarial training approach to improve the transferability of adversarial textures across diverse detection models, including one-stage, two-stage, and transformer-based architectures. Using the Truck Adversarial Camouflage Optimization
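
A minimal sketch of the multi-model idea follows, with random stand-in networks in place of real one-stage, two-stage, and transformer detectors: the adversarial loss is averaged over all surrogates so the optimized texture is not over-fitted to any single model.

    # Minimal sketch: average the adversarial loss over several surrogate detectors
    # to improve transferability. The "detectors" below are random stand-ins.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    def make_stub():
        return nn.Sequential(nn.Conv2d(3, 4, 3, padding=1), nn.ReLU(),
                             nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                             nn.Linear(4, 1), nn.Sigmoid())
    surrogates = [make_stub() for _ in range(3)]      # e.g., one-stage / two-stage / transformer

    texture = torch.rand(1, 3, 64, 64, requires_grad=True)
    opt = torch.optim.Adam([texture], lr=0.01)

    for _ in range(100):
        # Lower the "detection confidence" on every surrogate simultaneously.
        loss = torch.stack([m(texture).mean() for m in surrogates]).mean()
        opt.zero_grad(); loss.backward(); opt.step()
        texture.data.clamp_(0, 1)
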
19

Zhou, Shuangju, Yang Li, Wenyi Tan, Chenxing Zhao, Xin Zhou, and Quan Pan. "Infrared Adversarial Patch Generation Based on Reinforcement Learning." Mathematics 12, no. 21 (2024): 3335. http://dx.doi.org/10.3390/math12213335.

Full text
Abstract:
Recently, there has been an increasing concern about the vulnerability of infrared object detectors to adversarial attacks, where the object detector can be easily spoofed by adversarial samples with aggressive patches. Existing attacks employ light bulbs, insulators, and both hot and cold blocks to construct adversarial patches. These patches are complex to create, expensive to produce, or time-sensitive, rendering them unsuitable for practical use. In this work, a straightforward and efficacious attack methodology applicable in the physical realm, wherein the patch configuration is simplifie
20

Xue, Meng, Kuang Peng, Xueluan Gong, Qian Zhang, Yanjiao Chen, and Routing Li. "Echo." Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 7, no. 3 (2023): 1–24. http://dx.doi.org/10.1145/3610874.

Full text
Abstract:
Intelligent audio systems are ubiquitous in our lives, such as speech command recognition and speaker recognition. However, it is shown that deep learning-based intelligent audio systems are vulnerable to adversarial attacks. In this paper, we propose a physical adversarial attack that exploits reverberation, a natural indoor acoustic effect, to realize imperceptible, fast, and targeted black-box attacks. Unlike existing attacks that constrain the magnitude of adversarial perturbations within a fixed radius, we generate reverberation-alike perturbations that blend naturally with the original v
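
As a hedged, generic illustration of a reverberation-style perturbation (not the attack pipeline from the paper), a clean waveform can be convolved with a short decaying impulse response; in an actual attack the tap delays and gains would be optimized against the target audio model, whereas the values below are arbitrary placeholders.

    # Minimal sketch: apply a reverberation-like perturbation by convolving the
    # waveform with a short multi-tap impulse response. Delays/gains are placeholders.
    import numpy as np

    sr = 16000                                   # sample rate (assumed)
    audio = np.random.randn(sr)                  # stand-in for a 1-second voice command

    ir = np.zeros(int(0.3 * sr))                 # 300 ms impulse response
    ir[0] = 1.0                                  # direct path
    for delay_ms, gain in [(23, 0.4), (61, 0.25), (127, 0.12)]:   # assumed echoes
        ir[int(delay_ms * sr / 1000)] = gain

    reverberated = np.convolve(audio, ir)[:len(audio)]
    reverberated /= np.max(np.abs(reverberated)) + 1e-9           # normalize amplitude
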
21

Zhou, Yuxuan, Huangxun Chen, Chenyu Huang, and Qian Zhang. "WiAdv." Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 6, no. 2 (2022): 1–25. http://dx.doi.org/10.1145/3534618.

Full text
Abstract:
WiFi-based gesture recognition systems have attracted enormous interest owing to the non-intrusive of WiFi signals and the wide adoption of WiFi for communication. Despite boosted performance via integrating advanced deep neural network (DNN) classifiers, there lacks sufficient investigation on their security vulnerabilities, which are rooted in the open nature of the wireless medium and the inherent defects (e.g., adversarial attacks) of classifiers. To fill this gap, we aim to study adversarial attacks to DNN-powered WiFi-based gesture recognition to encourage proper countermeasures. We desi
22

Li, Hao, Fanggao Wan, Yue Su, Yue Wu, Mingyang Zhang, and Maoguo Gong. "AdvDisplay: Adversarial Display Assembled by Thermoelectric Cooler for Fooling Thermal Infrared Detectors." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 17 (2025): 18279–86. https://doi.org/10.1609/aaai.v39i17.34011.

Full text
Abstract:
When the current physical adversarial patches cannot deceive thermal infrared detectors, the existing techniques implement adversarial attacks from scratch, such as digital patch generation, material production, and physical deployment. Besides, it is difficult to finely regulate infrared radiation. To address these issues, this paper designs an adversarial thermal display (AdvDisplay) by assembling thermoelectric coolers (TECs) as an array. Specifically, to reduce the gap between patches in the physical and digital worlds and decrease the power of AdvDisplay device, heat transfer loss and el
23

Zhang, Yichuang, Yu Zhang, Jiahao Qi, et al. "Adversarial Patch Attack on Multi-Scale Object Detection for UAV Remote Sensing Images." Remote Sensing 14, no. 21 (2022): 5298. http://dx.doi.org/10.3390/rs14215298.

Full text
Abstract:
Although deep learning has received extensive attention and achieved excellent performance in various scenarios, it suffers from adversarial examples to some extent. In particular, physical attack poses a greater threat than digital attack. However, existing research has paid less attention to the physical attack of object detection in UAV remote sensing images (RSIs). In this work, we carefully analyze the universal adversarial patch attack for multi-scale objects in the field of remote sensing. There are two challenges faced by an adversarial attack in RSIs. On one hand, the number of object
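
A minimal sketch of one way to make a universal patch robust to object scale, loosely following the multi-scale theme of this abstract: the patch is resized to a random size and pasted at a random location at every optimization step. The "detector" is again a random stand-in, not a real detection model.

    # Minimal sketch of scale-robust patch optimization with random resize-and-paste.
    import random
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    torch.manual_seed(0)
    detector = nn.Sequential(nn.Conv2d(3, 4, 3, padding=1), nn.ReLU(),
                             nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                             nn.Linear(4, 1), nn.Sigmoid())    # stand-in detector

    patch = torch.rand(1, 3, 32, 32, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=0.01)
    image = torch.rand(1, 3, 128, 128)                         # placeholder aerial image

    for _ in range(100):
        s = random.choice([16, 32, 48])                        # random apparent size
        p = F.interpolate(patch, size=(s, s), mode="bilinear", align_corners=False)
        x0, y0 = random.randint(0, 128 - s), random.randint(0, 128 - s)
        scene = image.clone()
        scene[:, :, y0:y0 + s, x0:x0 + s] = p                  # paste patch into scene
        loss = detector(scene).mean()                          # suppress detection confidence
        opt.zero_grad(); loss.backward(); opt.step()
        patch.data.clamp_(0, 1)
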
24

Lal, Sheeba, Saeed Ur Rehman, Jamal Hussain Shah, et al. "Adversarial Attack and Defence through Adversarial Training and Feature Fusion for Diabetic Retinopathy Recognition." Sensors 21, no. 11 (2021): 3922. http://dx.doi.org/10.3390/s21113922.

Full text
Abstract:
Due to the rapid growth in artificial intelligence (AI) and deep learning (DL) approaches, the security and robustness of the deployed algorithms need to be guaranteed. The security susceptibility of the DL algorithms to adversarial examples has been widely acknowledged. The artificially created examples will lead to different instances negatively identified by the DL models that are humanly considered benign. Practical application in actual physical scenarios with adversarial threats shows their features. Thus, adversarial attacks and defense, including machine learning and its reliability, h
25

Lee, Seungyeol, Seongwoo Hong, Gwangyeol Kim, and Jaecheol Ha. "SSIM-Based Autoencoder Modeling to Defeat Adversarial Patch Attacks." Sensors 24, no. 19 (2024): 6461. http://dx.doi.org/10.3390/s24196461.

Full text
Abstract:
Object detection systems are used in various fields such as autonomous vehicles and facial recognition. In particular, object detection using deep learning networks enables real-time processing in low-performance edge devices and can maintain high detection rates. However, edge devices that operate far from administrators are vulnerable to various physical attacks by malicious adversaries. In this paper, we implement a function for detecting traffic signs by using You Only Look Once (YOLO) as well as Faster-RCNN, which can be adopted by edge devices of autonomous vehicles. Then, assuming the r
26

Guesmi, Amira, Muhammad Abdullah Hanif, and Muhammad Shafique. "AdvRain: Adversarial Raindrops to Attack Camera-Based Smart Vision Systems." Information 14, no. 12 (2023): 634. http://dx.doi.org/10.3390/info14120634.

Full text
Abstract:
Vision-based perception modules are increasingly deployed in many applications, especially autonomous vehicles and intelligent robots. These modules are being used to acquire information about the surroundings and identify obstacles. Hence, accurate detection and classification are essential to reach appropriate decisions and take appropriate and safe actions at all times. Current studies have demonstrated that “printed adversarial attacks”, known as physical adversarial attacks, can successfully mislead perception models such as object detectors and image classifiers. However, most of these p
27

Almedires, Motaz Abdulaziz, Ahmed Elkhalil, and Mohammed Amin. "Adversarial Attack Detection in Industrial Control Systems Using LSTM-Based Intrusion Detection and Black-Box Defense Strategies." Journal of Cyber Security and Risk Auditing 2025, no. 3 (2025): 4–22. https://doi.org/10.63180/jcsra.thestap.2025.3.2.

Full text
Abstract:
In industrial control systems (ICS), neural networks are increasingly being utilized to detect intrusions. The term ICS refers to a group of controlling technology and associated equipment that includes the devices, systems, networks, and controllers that are used to manage and/or execute manufacturing processes. Each ICS is developed to successfully handle work digitally and operates differently depending on the business. ICS devices and procedures are now found in practically every industry sector and key infrastructure, including production, transportation, power, and treatment plants. To a
28

Chen, Yuanwan, Yalun Wu, Xiaoshu Cui, Qiong Li, Jiqiang Liu, and Wenjia Niu. "Reflective Adversarial Attacks against Pedestrian Detection Systems for Vehicles at Night." Symmetry 16, no. 10 (2024): 1262. http://dx.doi.org/10.3390/sym16101262.

Full text
Abstract:
The advancements in deep learning have significantly enhanced the accuracy and robustness of pedestrian detection. However, recent studies reveal that adversarial attacks can exploit the vulnerabilities of deep learning models to mislead detection systems. These attacks are effective not only in digital environments but also pose significant threats to the reliability of pedestrian detection systems in the physical world. Existing adversarial attacks targeting pedestrian detection primarily focus on daytime scenarios and are easily noticeable by road observers. In this paper, we propose a nove
29

Kim, Tae Hoon, Moez Krichen, Meznah A. Alamro, and Gabreil Avelino Sampedro. "A Novel Dataset and Approach for Adversarial Attack Detection in Connected and Automated Vehicles." Electronics 13, no. 12 (2024): 2420. http://dx.doi.org/10.3390/electronics13122420.

Full text
Abstract:
Adversarial attacks have received much attention as communication network applications rise in popularity. Connected and Automated Vehicles (CAVs) must be protected against adversarial attacks to ensure passenger and vehicle safety on the road. Nevertheless, CAVs are susceptible to several types of attacks, such as those that target intra- and inter-vehicle networks. These harmful attacks not only cause user privacy and confidentiality to be lost, but they also have more grave repercussions, such as physical harm and death. It is critical to precisely and quickly identify adversarial attacks t
30

Wang, Zhen, Buhong Wang, Chuanlei Zhang, and Yaohui Liu. "Defense against Adversarial Patch Attacks for Aerial Image Semantic Segmentation by Robust Feature Extraction." Remote Sensing 15, no. 6 (2023): 1690. http://dx.doi.org/10.3390/rs15061690.

Full text
Abstract:
Deep learning (DL) models have recently been widely used in UAV aerial image semantic segmentation tasks and have achieved excellent performance. However, DL models are vulnerable to adversarial examples, which bring significant security risks to safety-critical systems. Existing research mainly focuses on solving digital attacks for aerial image semantic segmentation, but adversarial patches with physical attack attributes are more threatening than digital attacks. In this article, we systematically evaluate the threat of adversarial patches on the aerial image semantic segmentation task for
31

Daimo, Renya, and Satoshi Ono. "Projection-Based Physical Adversarial Attack for Monocular Depth Estimation." IEICE Transactions on Information and Systems E106.D, no. 1 (2023): 31–35. http://dx.doi.org/10.1587/transinf.2022mul0001.

Full text
32

Singh, Vinit Kumar. "A Deep Neural Network Assisted Physical Layer Security Mechanism for Wireless Networks." International Journal for Research in Applied Science and Engineering Technology 13, no. 7 (2025): 109–18. https://doi.org/10.22214/ijraset.2025.72926.

Full text
Abstract:
The physical layer in wireless systems is inherently susceptible to eavesdropping, jamming, spoofing, and signal injection attacks because wireless signals propagate through open space. Unlike wired networks, where physical access is more controlled, wireless communication can be intercepted by any nearby device. Attackers can exploit features such as channel reciprocity, power control, or modulation characteristics to compromise communications. These vulnerabilities make it imperative to secure the physical layer, especially in applications involving sensitive data like military, healthcare,
33

Xue, Wei, Zhiming Chen, Weiwei Tian, Yunhua Wu, and Bing Hua. "A Cascade Defense Method for Multidomain Adversarial Attacks under Remote Sensing Detection." Remote Sensing 14, no. 15 (2022): 3559. http://dx.doi.org/10.3390/rs14153559.

Full text
Abstract:
Deep neural networks have been widely used in detection tasks based on optical remote sensing images. However, in recent studies, deep neural networks have been shown to be vulnerable to adversarial examples. Adversarial examples are threatening in both the digital and physical domains. Specifically, they make it possible for adversarial examples to attack aerial remote sensing detection. To defend against adversarial attacks on aerial remote sensing detection, we propose a cascaded adversarial defense framework, which locates the adversarial patch according to its high frequency and saliency
34

Zhao, Ling, Xun Lv, Lili Zhu, et al. "A Local Adversarial Attack with a Maximum Aggregated Region Sparseness Strategy for 3D Objects." Journal of Imaging 11, no. 1 (2025): 25. https://doi.org/10.3390/jimaging11010025.

Full text
Abstract:
The increasing reliance on deep neural network-based object detection models in various applications has raised significant security concerns due to their vulnerability to adversarial attacks. In physical 3D environments, existing adversarial attacks that target object detection (3D-AE) face significant challenges. These attacks often require large and dispersed modifications to objects, making them easily noticeable and reducing their effectiveness in real-world scenarios. To maximize the attack effectiveness, large and dispersed attack camouflages are often employed, which makes the camoufla
35

Yang, Zhongguo, Irshad Ahmed Abbasi, Fahad Algarni, Sikandar Ali, and Mingzhu Zhang. "An IoT Time Series Data Security Model for Adversarial Attack Based on Thermometer Encoding." Security and Communication Networks 2021 (March 9, 2021): 1–11. http://dx.doi.org/10.1155/2021/5537041.

Full text
Abstract:
Nowadays, an Internet of Things (IoT) device consists of algorithms, datasets, and models. Due to good performance of deep learning methods, many devices integrated well-trained models in them. IoT empowers users to communicate and control physical devices to achieve vital information. However, these models are vulnerable to adversarial attacks, which largely bring potential risks to the normal application of deep learning methods. For instance, very little changes even one point in the IoT time-series data could lead to unreliable or wrong decisions. Moreover, these changes could be deliberat
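
Thermometer encoding itself is simple to illustrate. The sketch below (a toy example, not the paper's model) quantizes a sensor reading into k levels and sets every level at or below its bucket to 1, which removes the smooth gradient path that small adversarial perturbations typically exploit.

    # Minimal sketch of thermometer encoding for a 1-D series of sensor readings.
    import numpy as np

    def thermometer_encode(x, k=10, lo=0.0, hi=1.0):
        """x: 1-D array of readings scaled to [lo, hi]; returns array of shape (len(x), k)."""
        x = np.clip((np.asarray(x, dtype=float) - lo) / (hi - lo), 0.0, 1.0)
        buckets = np.minimum((x * k).astype(int), k - 1)   # which level each value falls in
        levels = np.arange(k)
        return (levels[None, :] <= buckets[:, None]).astype(np.float32)

    print(thermometer_encode([0.05, 0.42, 0.97], k=5))
    # [[1. 0. 0. 0. 0.]
    #  [1. 1. 1. 0. 0.]
    #  [1. 1. 1. 1. 1.]]
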
36

Wang, Yichen, Yuxuan Chou, Ziqi Zhou, et al. "Breaking Barriers in Physical-World Adversarial Examples: Improving Robustness and Transferability via Robust Feature." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 8 (2025): 8069–77. https://doi.org/10.1609/aaai.v39i8.32870.

Full text
Abstract:
As deep neural networks (DNNs) are widely applied in the physical world, many researches are focusing on physical-world adversarial examples (PAEs), which introduce perturbations to inputs and cause the model's incorrect outputs. However, existing PAEs face two challenges: unsatisfactory attack performance (i.e., poor transferability and insufficient robustness to environment conditions), and difficulty in balancing attack effectiveness with stealthiness, where better attack effectiveness often makes PAEs more perceptible. In this paper, we explore a novel perturbation-based method to overcome
37

Zhao, Renhe, Dongqi He, and Fangyi You. "Neural Network-Adaptive Secure Control for Nonlinear Cyber-Physical Systems Against Adversarial Attacks." Applied Sciences 15, no. 7 (2025): 3893. https://doi.org/10.3390/app15073893.

Full text
Abstract:
The “insecurity of the network” characterizes each agent as being remotely controlled through unreliable network channels. In such an insecure network, the output signal can be altered through carefully designed adversarial attacks to produce erroneous results. To address this, this paper proposes a neural network (NN) adaptive secure control scheme for cyber-physical systems (CPSs) via attack reconstruction strategies, where the attack reconstruction strategy serves as the solution to the NNs estimation problem on the insecurity of the network. Consequently, by introducing a novel error trans
38

Alzaidy, Sharoug, and Hamad Binsalleeh. "Adversarial Attacks with Defense Mechanisms on Convolutional Neural Networks and Recurrent Neural Networks for Malware Classification." Applied Sciences 14, no. 4 (2024): 1673. http://dx.doi.org/10.3390/app14041673.

Full text
Abstract:
In the field of behavioral detection, deep learning has been extensively utilized. For example, deep learning models have been utilized to detect and classify malware. Deep learning, however, has vulnerabilities that can be exploited with crafted inputs, resulting in malicious files being misclassified. Cyber-Physical Systems (CPS) may be compromised by malicious files, which can have catastrophic consequences. This paper presents a method for classifying Windows portable executables (PEs) using Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs). To generate malware exec
39

Jeong, Hyeon-Jae, Jubin Lee, Yu-Seung Ma, and Seung-Ik Lee. "Attack Success Rate Analysis of Adversarial Patch in Physical Environment." Journal of KIISE 50, no. 2 (2023): 185–95. http://dx.doi.org/10.5626/jok.2023.50.2.185.

Full text
40

Zolotukhin, Mikhail, Di Zhang, Timo Hämäläinen, and Parsa Miraghaei. "On Attacking Future 5G Networks with Adversarial Examples: Survey." Network 3, no. 1 (2022): 39–90. http://dx.doi.org/10.3390/network3010003.

Full text
Abstract:
The introduction of 5G technology along with the exponential growth in connected devices is expected to cause a challenge for the efficient and reliable network resource allocation. Network providers are now required to dynamically create and deploy multiple services which function under various requirements in different vertical sectors while operating on top of the same physical infrastructure. The recent progress in artificial intelligence and machine learning is theorized to be a potential answer to the arising resource allocation challenges. It is therefore expected that future generation
41

Li, Bo, Xin Jin, Tingjie Ba, Tingzhe Pan, En Wang, and Zhiming Gu. "Deceptive Cyber-Resilience in PV Grids: Digital Twin-Assisted Optimization Against Cyber-Physical Attacks." Energies 18, no. 12 (2025): 3145. https://doi.org/10.3390/en18123145.

Full text
Abstract:
The increasing integration of photovoltaic (PV) systems into smart grids introduces new cybersecurity vulnerabilities, particularly against cyber-physical attacks that can manipulate grid operations and disrupt renewable energy generation. This paper proposes a multi-layered cyber-resilient PV optimization framework, leveraging digital twin-based deception, reinforcement learning-driven cyber defense, and blockchain authentication to enhance grid security and operational efficiency. A deceptive cyber-defense mechanism is developed using digital twin technology to mislead adversaries, dynamical
42

Lee, Xian Yeow, Sambit Ghadai, Kai Liang Tan, Chinmay Hegde, and Soumik Sarkar. "Spatiotemporally Constrained Action Space Attacks on Deep Reinforcement Learning Agents." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 4577–84. http://dx.doi.org/10.1609/aaai.v34i04.5887.

Full text
Abstract:
Robustness of Deep Reinforcement Learning (DRL) algorithms towards adversarial attacks in real world applications such as those deployed in cyber-physical systems (CPS) are of increasing concern. Numerous studies have investigated the mechanisms of attacks on the RL agent's state space. Nonetheless, attacks on the RL agent's action space (corresponding to actuators in engineering systems) are equally perverse, but such attacks are relatively less studied in the ML literature. In this work, we first frame the problem as an optimization problem of minimizing the cumulative reward of an RL agent
43

Zhou, Buxiang, Xuan Li, Tianlei Zang, Yating Cai, Jiale Wu, and Shijun Wang. "The Detection of False Data Injection Attack for Cyber–Physical Power Systems Considering a Multi-Attack Mode." Applied Sciences 13, no. 19 (2023): 10596. http://dx.doi.org/10.3390/app131910596.

Full text
Abstract:
Amidst the evolving communication technology landscape, conventional distribution networks have gradually metamorphosed into cyber–physical power systems (CPPSs). Within this transformative milieu, the cyber infrastructure not only bolsters grid security but also introduces a novel security peril—the false data injection attack (FDIA). Owing to the variable knowledge held by cyber assailants regarding the system’s network structure, current achievements exhibit deficiencies in accommodating the detection of FDIA across diverse attacker profiles. To address the historical data imbalances encoun
44

Liu, Qiao, Guang Gong, Yong Wang, and Hui Li. "A Novel Secure Transmission Scheme in MIMO Two-Way Relay Channels with Physical Layer Approach." Mobile Information Systems 2017 (2017): 1–12. http://dx.doi.org/10.1155/2017/7843843.

Full text
Abstract:
Security issue has been considered as one of the most pivotal aspects for the fifth-generation mobile network (5G) due to the increasing demands of security service as well as the growing occurrence of security threat. In this paper, instead of focusing on the security architecture in the upper layer, we investigate the secure transmission for a basic channel model in a heterogeneous network, that is, two-way relay channels. By exploiting the properties of the transmission medium in the physical layer, we propose a novel secure scheme for the aforementioned channel mode. With precoding design,
45

Farraj, Abdallah, and Eman Hammad. "A Physical-Layer Security Cooperative Framework for Mitigating Interference and Eavesdropping Attacks in Internet of Things Environments." Sensors 24, no. 16 (2024): 5171. http://dx.doi.org/10.3390/s24165171.

Full text
Abstract:
Intentional electromagnetic interference attacks (e.g., jamming) against wireless connected devices such as the Internet of Things (IoT) remain a serious challenge, especially as such attacks evolve in complexity. Similarly, eavesdropping on wireless communication channels persists as an inherent vulnerability that is often exploited by adversaries. This article investigates a novel approach to enhancing information security for IoT systems via collaborative strategies that can effectively mitigate attacks targeting availability via interference and confidentiality via eavesdropping. We examin
46

Islam, Md Tawfiqul. "A Quantitative Assessment of Secure Neural Network Architectures for Fault Detection in Industrial Control Systems." Review of Applied Science and Technology 2, no. 4 (2023): 1–24. https://doi.org/10.63125/3m7gbs97.

Full text
Abstract:
Industrial Control Systems (ICS) form the core infrastructure for critical sectors such as energy, water, manufacturing, and transportation, yet their increasing digital interconnectivity has exposed them to complex fault dynamics and sophisticated cyber-physical threats. Traditional fault detection mechanisms—whether rule-based or model-driven—often fail to cope with the nonlinearity, high dimensionality, and adversarial vulnerabilities prevalent in modern ICS environments. To address these limitations, this study conducts a comprehensive quantitative evaluation of secure neural network archi
47

Kadire, Sumalata, V. Sai Anjani, P. Akshitha, and V. Ramya Sri. "Security Analysis and Exploitation of IC Chip Level Counter Quantity Against Bodily Occurrences." Industrial Engineering Journal 53, no. 12 (2024): 154–61. https://doi.org/10.36893/iej.2024.v53i12.020.

Full text
Abstract:
This article addresses the growing concern of security vulnerabilities in hardware systems, particularly in the context of integrated circuit (IC) chips used in cryptographic applications, which are increasingly susceptible to adversarial physical attacks in real-world environments. These attacks, often targeting the physical integrity of ICs, exploit weaknesses in both the design and manufacturing processes of cryptographic circuits. The paper provides a comprehensive overview of these threats, focusing on various physical attack methods, such as sidechannel attacks, electromagnetic (EM) atta
48

Cultice, Tyler, Joseph Clark, Wu Yang, and Himanshu Thapliyal. "A Novel Hierarchical Security Solution for Controller-Area-Network-Based 3D Printing in a Post-Quantum World." Sensors 23, no. 24 (2023): 9886. http://dx.doi.org/10.3390/s23249886.

Full text
Abstract:
As the popularity of 3D printing or additive manufacturing (AM) continues to increase for use in commercial and defense supply chains, the requirement for reliable, robust protection from adversaries has become more important than ever. Three-dimensional printing security focuses on protecting both the individual Industrial Internet of Things (I-IoT) AM devices and the networks that connect hundreds of these machines together. Additionally, rapid improvements in quantum computing demonstrate a vital need for robust security in a post-quantum future for critical AM manufacturing, especially for
49

Li, Kunzhan, Fengyong Li, Baonan Wang, and Meijing Shan. "False data injection attack sample generation using an adversarial attention-diffusion model in smart grids." AIMS Energy 12, no. 6 (2024): 1271–93. https://doi.org/10.3934/energy.2024058.

Full text
Abstract:
A false data injection attack (FDIA) indicates that attackers mislead system decisions by inputting false or tampered data into the system, which seriously threatens the security of power cyber-physical systems. Considering the scarcity of FDIA attack samples, the traditional FDIA detection models based on neural networks are always limited in their detection capabilities due to imbalanced training samples. To address this problem, this paper proposes an efficient FDIA attack sample generation method by an adversarial attention-diffusion model. The proposed scheme consists of a diffus
50

Niu, Luyao, Bhaskar Ramasubramanian, Andrew Clark, and Radha Poovendran. "Robust Satisfaction of Metric Interval Temporal Logic Objectives in Adversarial Environments." Games 14, no. 2 (2023): 30. http://dx.doi.org/10.3390/g14020030.

Full text
Abstract:
This paper studies the synthesis of controllers for cyber-physical systems (CPSs) that are required to carry out complex time-sensitive tasks in the presence of an adversary. The time-sensitive task is specified as a formula in the metric interval temporal logic (MITL). CPSs that operate in adversarial environments have typically been abstracted as stochastic games (SGs); however, because traditional SG models do not incorporate a notion of time, they cannot be used in a setting where the objective is time-sensitive. To address this, we introduce durational stochastic games (DSGs). DSGs genera