Academic literature on the topic 'Physical adversarial attack'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Physical adversarial attack.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Physical adversarial attack"

1

Yang, Kaichen, Tzungyu Tsai, Honggang Yu, Tsung-Yi Ho, and Yier Jin. "Beyond Digital Domain: Fooling Deep Learning Based Recognition System in Physical World." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (2020): 1088–95. http://dx.doi.org/10.1609/aaai.v34i01.5459.

Abstract:
Adversarial examples that can fool deep neural network (DNN) models in computer vision present a growing threat. Current methods of launching adversarial attacks concentrate on attacking image classifiers by adding noise to digital inputs. The problems of attacking object detection models and of mounting adversarial attacks in the physical world are rarely touched. Some prior works have proposed physical adversarial attacks against object detection models, but they are limited in certain aspects. In this paper, we propose a novel physical adversarial attack targeting object detection models. Instead of simp
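The "adding noise to digital inputs" that this abstract contrasts with physical-world attacks typically means one-step gradient-sign perturbations (FGSM). A minimal sketch of that idea, using a toy linear scorer as a stand-in model (not the recognition system from the cited paper):

```python
import numpy as np

def fgsm_perturb(x, w, y, eps):
    """One-step fast-gradient-sign perturbation against a linear scorer.

    score(x) = w @ x; label y in {-1, +1}; loss = log(1 + exp(-y * score)).
    grad_x loss = -y * sigmoid(-y * score) * w, so the perturbation
    eps * sign(grad) pushes the score toward the wrong side for label y.
    """
    score = w @ x
    grad = -y * (1.0 / (1.0 + np.exp(y * score))) * w
    return x + eps * np.sign(grad)

rng = np.random.default_rng(0)
w = rng.normal(size=16)            # toy model weights (assumed, for illustration)
x = rng.normal(size=16)            # clean digital input
y = 1 if w @ x > 0 else -1         # model's current (clean) prediction
x_adv = fgsm_perturb(x, w, y, eps=0.5)
print(w @ x, w @ x_adv)            # adversarial score moves against label y
```

For a linear scorer this always lowers the margin by eps times the L1 norm of w; the papers listed here study the far harder setting where the perturbation must survive printing, viewpoint, and lighting changes in the physical world.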
2

Bi, Chuanxiang, Shang Shi, and Jian Qu. "Enhancing Autonomous Driving: A Novel Approach of Mixed Attack and Physical Defense Strategies." ASEAN Journal of Scientific and Technological Reports 28, no. 1 (2024): e254093. https://doi.org/10.55164/ajstr.v28i1.254093.

Abstract:
Adversarial attacks are a significant threat to autonomous driving safety, especially in the physical world, where "sticker-paste" attacks on traffic signs are prevalent. However, most of these attacks are single-category attacks with little interference effect. This paper builds an autonomous driving platform and conducts extensive experiments on five single-category attacks. Moreover, we propose a new physical attack: a mixed attack consisting of different single-category physical attacks. The proposed method outperforms existing methods and can reduce the accuracy of traffic sig
3

Zhang, Ximin, Jinyin Chen, Haibin Zheng, and Zhenguang Liu. "PhyCamo: A Robust Physical Camouflage via Contrastive Learning for Multi-View Physical Adversarial Attack." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 10 (2025): 10230–38. https://doi.org/10.1609/aaai.v39i10.33110.

Abstract:
Deep neural networks (DNNs) have achieved remarkable success in widespread applications. Meanwhile, their vulnerability to carefully crafted adversarial attacks has attracted special attention. Not only can adversarial perturbations in digital space fool target DNN-based detectors into making wrong decisions, but actually printed patches can also be camouflaged to defeat detectors in physical space. In particular, multi-view physical adversarial attacks pose a more serious threat in practical scenarios. Existing attacks are still challenged in three aspects, i.e., high-cost data augmentati
4

Jiang, Wei, Tianyuan Zhang, Shuangcheng Liu, Weiyu Ji, Zichao Zhang, and Gang Xiao. "Exploring the Physical-World Adversarial Robustness of Vehicle Detection." Electronics 12, no. 18 (2023): 3921. http://dx.doi.org/10.3390/electronics12183921.

Abstract:
Adversarial attacks can compromise the robustness of real-world detection models. However, evaluating these models under real-world conditions poses challenges due to resource-intensive experiments. Virtual simulations offer an alternative, but the absence of standardized benchmarks hampers progress. Addressing this, we propose an innovative instant-level data generation pipeline using the CARLA simulator. Through this pipeline, we establish the Discrete and Continuous Instant-level (DCI) dataset, enabling comprehensive experiments involving three detection models and three physical adversaria
5

Wei, Hui, Zhixiang Wang, Xuemei Jia, et al. "HOTCOLD Block: Fooling Thermal Infrared Detectors with a Novel Wearable Design." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (2023): 15233–41. http://dx.doi.org/10.1609/aaai.v37i12.26777.

Abstract:
Adversarial attacks on thermal infrared imaging expose the risk of related applications. Estimating the security of these systems is essential for safely deploying them in the real world. In many cases, realizing the attacks in physical space requires elaborate, special perturbations. These solutions are often impractical and attention-grabbing. To address the need for a physically practical and stealthy adversarial attack, we introduce HotCold Block, a novel physical attack for infrared detectors that hides persons by utilizing wearable Warming Paste and Cooling Paste. By attaching these r
6

Sheikh, Zakir Ahmad, Yashwant Singh, Pradeep Kumar Singh, and Paulo J. Sequeira Gonçalves. "Defending the Defender: Adversarial Learning Based Defending Strategy for Learning Based Security Methods in Cyber-Physical Systems (CPS)." Sensors 23, no. 12 (2023): 5459. http://dx.doi.org/10.3390/s23125459.

Abstract:
Cyber-Physical Systems (CPS) are prone to many security exploitations due to a greater attack surface being introduced by their cyber component by the nature of their remote accessibility or non-isolated capability. Security exploitations, on the other hand, rise in complexities, aiming for more powerful attacks and evasion from detections. The real-world applicability of CPS thus poses a question mark due to security infringements. Researchers have been developing new and robust techniques to enhance the security of these systems. Many techniques and security aspects are being considered to b
7

Huang, Hong, Yang Yang, and Yunfei Wang. "AdvFaceGAN: a face dual-identity impersonation attack method based on generative adversarial networks." PeerJ Computer Science 11 (June 11, 2025): e2904. https://doi.org/10.7717/peerj-cs.2904.

Abstract:
This article aims to reveal security vulnerabilities in current commercial facial recognition systems and promote advancements in facial recognition technology security. Previous research on both digital-domain and physical-domain attacks has lacked consideration of real-world attack scenarios: Digital-domain attacks with good stealthiness often fail to achieve physical implementation, while wearable-based physical-domain attacks typically appear unnatural and cannot evade human visual inspection. We propose AdvFaceGAN, a generative adversarial network (GAN)-based impersonation attack method t
8

Qiu, Shilin, Qihe Liu, Shijie Zhou, and Chunjiang Wu. "Review of Artificial Intelligence Adversarial Attack and Defense Technologies." Applied Sciences 9, no. 5 (2019): 909. http://dx.doi.org/10.3390/app9050909.

Abstract:
In recent years, artificial intelligence technologies have been widely used in computer vision, natural language processing, automatic driving, and other fields. However, artificial intelligence systems are vulnerable to adversarial attacks, which limit the applications of artificial intelligence (AI) technologies in key security fields. Therefore, improving the robustness of AI systems against adversarial attacks has played an increasingly important role in the further development of AI. This paper aims to comprehensively summarize the latest research progress on adversarial attack and defens
9

Cai, Wei, Xingyu Di, Xin Wang, Weijie Gao, and Haoran Jia. "Stealthy Vehicle Adversarial Camouflage Texture Generation Based on Neural Style Transfer." Entropy 26, no. 11 (2024): 903. http://dx.doi.org/10.3390/e26110903.

Abstract:
Adversarial attacks that mislead deep neural networks (DNNs) into making incorrect predictions can also be implemented in the physical world. However, most of the existing adversarial camouflage textures that attack object detection models only consider the effectiveness of the attack, ignoring the stealthiness of adversarial attacks, resulting in the generated adversarial camouflage textures appearing abrupt to human observers. To address this issue, we propose a style transfer module added to an adversarial texture generation framework. By calculating the style loss between the texture and t
10

Tiliwalidi, Kalibinuer, Bei Hui, Chengyin Hu, and Jingjing Ge. "Adversarial Camera Patch: An Effective and Robust Physical-World Attack on Object Detectors." International Conference on Cyber Warfare and Security 19, no. 1 (2024): 374–84. http://dx.doi.org/10.34190/iccws.19.1.2044.

Abstract:
Physical adversarial attacks present a novel and growing challenge in cybersecurity, especially for systems reliant on physical inputs for Deep Neural Networks (DNNs), such as those found in Internet of Things (IoT) devices. They are vulnerable to physical adversarial attacks where real-world objects or environments are manipulated to mislead DNNs, thereby threatening the operational integrity and security of IoT devices. The camera-based attacks are one of the most practical adversarial attacks, which are easy to implement and more robust than all the other attack methods, and pose a big thre

Dissertations / Theses on the topic "Physical adversarial attack"

1

Chi, Lijun. "Security and Robustness of Autonomous Driving Systems Against Physical Adversarial Attack." Electronic Thesis or Diss., Institut polytechnique de Paris, 2025. http://www.theses.fr/2025IPPAT009.

Abstract:
Thanks to iterative hardware upgrades and advances in deep neural networks (DNNs), autonomous driving systems (ADS) are increasingly integrated into everyday life. However, before this technology becomes widespread, one security problem that must be solved is that of physical adversarial attacks. These attacks can manipulate real-world objects to disrupt the perception of ADS and cause road accidents. Moreover, the diversity of physical attacks complicates the task of passive defenders. This study addresses these challenges by analyzing
2

Gabelli, Filippo. "Security analysis of physical attacks to offshore O&G facilities." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Abstract:
Chemical and petrochemical plants are susceptible to malicious acts due to the attractiveness of the substances handled, the possibility of stealing secret information, and their strategic importance. Thus, a number of qualitative and semi-quantitative methodologies have been developed for the assessment of vulnerabilities of such sites. Similarly, offshore facilities, which are central to the production of hydrocarbons, have been subjected to physical and cyber security events, as outlined in relevant past accident analyses. However, there are just a few methodologies tailored for the identif
3

Hamza, Anis Amazigh. "Improving cooperative non-orthogonal multiple access (CNOMA) and enhancing the physical layer security (PLS) for beyond 5G (B5G) and future eHealth wireless networks." Electronic Thesis or Diss., Valenciennes, Université Polytechnique Hauts-de-France, 2023. http://www.theses.fr/2023UPHF0006.

Abstract:
The fifth generation of cellular networks (5G) has been a true revolution in radio access and core mobile network technologies, presenting itself as the disruptive generation that allows extremely diverse applications and uses to coexist, unified within a single technology. Nevertheless, 5G is only a beginning: new scenarios and challenges are emerging. Consequently, the research community is laying the groundwork for beyond-5G (B5G) cellular systems. In this regard, several enabling technologies are being studied. Besides intelligent radio

Book chapters on the topic "Physical adversarial attack"

1

Okada, Satoshi, and Takuho Mitsunaga. "An Improved Technique for Generating Effective Noises of Adversarial Camera Stickers." In Lecture Notes in Networks and Systems. Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-97-4581-4_21.

Abstract:
Cyber-physical systems (CPS) represent the integration of the physical world with digital technologies and are expected to change our everyday lives significantly. With the rapid development of CPS, the importance of artificial intelligence (AI) has been increasingly recognized. Concurrently, adversarial attacks that cause incorrect predictions in AI models have emerged as a new risk. They are no longer limited to digital data and now extend to the physical environment. Thus, they have been pointed out as posing serious practical threats to CPS. In this paper, we focus on the “adversarial came
2

Specht, Felix, and Jens Otto. "Hardening Deep Neural Networks in Condition Monitoring Systems against Adversarial Example Attacks." In Machine Learning for Cyber Physical Systems. Springer Berlin Heidelberg, 2020. http://dx.doi.org/10.1007/978-3-662-62746-4_11.

Abstract:
Condition monitoring systems based on deep neural networks are used for system failure detection in cyber-physical production systems. However, deep neural networks are vulnerable to attacks with adversarial examples. Adversarial examples are manipulated inputs, e.g. sensor signals, that are able to mislead a deep neural network into misclassification. A consequence of such an attack may be the manipulation of the physical production process of a cyber-physical production system without being recognized by the condition monitoring system. This can result in a serious threat to production s
3

Xu, Yidan, Juan Wang, Yuanzhang Li, Yajie Wang, Zixuan Xu, and Dianxin Wang. "Universal Physical Adversarial Attack via Background Image." In Lecture Notes in Computer Science. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-16815-4_1.

4

Marrone, Stefano, Roberto Casula, Giulia Orrù, Gian Luca Marcialis, and Carlo Sansone. "Fingerprint Adversarial Presentation Attack in the Physical Domain." In Pattern Recognition. ICPR International Workshops and Challenges. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-68780-9_42.

5

Xiang, Jingyu, Xuanxiang Lin, Ke Chen, and Kui Jia. "Adversarial Geometric Transformations of Point Clouds for Physical Attack." In Computational Visual Media. Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-97-2095-8_8.

6

Cheng, Zhiyuan, James Liang, Hongjun Choi, et al. "Physical Attack on Monocular Depth Estimation with Optimal Adversarial Patches." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19839-7_30.

7

Li, Yisheng, Xuekang Peng, and Zhichao Lian. "Multi-texture Fusion Attack: A Robust Adversarial Camouflage in Physical World." In Lecture Notes in Computer Science. Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-97-5606-3_16.

8

Chen, Shang-Tse, Cory Cornelius, Jason Martin, and Duen Horng Chau. "ShapeShifter: Robust Physical Adversarial Attack on Faster R-CNN Object Detector." In Machine Learning and Knowledge Discovery in Databases. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-10925-7_4.

9

Zhou, Chuan, Huiyun Jing, Xin He, Liming Wang, Kai Chen, and Duohe Ma. "Disappeared Face: A Physical Adversarial Attack Method on Black-Box Face Detection Models." In Information and Communications Security. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-86890-1_7.

10

Fang, Junbin, Zewei Yang, Siyuan Dai, et al. "Cross-Task Physical Adversarial Attack Against Lane Detection System Based on LED Illumination Modulation." In Pattern Recognition and Computer Vision. Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-8435-0_38.


Conference papers on the topic "Physical adversarial attack"

1

Wang, Di, Wenxuan Zhu, Ke Li, Xiao Gao, and Pengfei Yang. "Transferable Physical Adversarial Patch Attack for Remote Sensing Object Detection." In IGARSS 2024 - 2024 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2024. http://dx.doi.org/10.1109/igarss53475.2024.10640565.

2

Huang, Yao, Yinpeng Dong, Shouwei Ruan, Xiao Yang, Hang Su, and Xingxing Wei. "Towards Transferable Targeted 3D Adversarial Attack in the Physical World." In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2024. http://dx.doi.org/10.1109/cvpr52733.2024.02314.

3

Liu, Zichen, Jiaxin Dai, and Jie Geng. "A Novel Physical World Adversarial Patch Attack for Infrared Pedestrian Detector." In 2024 IEEE International Conference on Unmanned Systems (ICUS). IEEE, 2024. https://doi.org/10.1109/icus61736.2024.10840169.

4

Ghimire, Ashutosh, Mohammed Alkurdi, Karma Gurung, and Fathi Amsaad. "Adversarial Attack Against Golden Reference-Free Hardware Trojan Detection Approach." In 2024 IEEE Physical Assurance and Inspection of Electronics (PAINE). IEEE, 2024. https://doi.org/10.1109/paine62042.2024.10792798.

5

Yan, Hongbo, Xing Yang, Haoqi Gao, Haiyuan Yu, Zhiyang Hu, and Hanpei Xiao. "A survey of physical-domain adversarial attack methods against object detection models." In Seventh Global Intelligent Industry Conference (GIIC 2024), edited by Xingjun Wang. SPIE, 2024. http://dx.doi.org/10.1117/12.3032331.

6

Bai, Mingqiang, Puzhuo Liu, Fei Lv, et al. "Adversarial Attack against Intrusion Detectors in Cyber-Physical Systems With Minimal Perturbations." In 2024 IEEE International Symposium on Parallel and Distributed Processing with Applications (ISPA). IEEE, 2024. https://doi.org/10.1109/ispa63168.2024.00109.

7

Wu, Wenxi, Fabio Pierazzi, Yali Du, and Martim Brandão. "Characterizing Physical Adversarial Attacks on Robot Motion Planners." In 2024 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2024. http://dx.doi.org/10.1109/icra57147.2024.10610344.

8

Purnekar, Nischay, Benedetta Tondi, and Mauro Barni. "Physical Domain Adversarial Attacks Against Source Printer Image Attribution." In 2024 Asia Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC). IEEE, 2024. https://doi.org/10.1109/apsipaasc63619.2025.10848809.

9

Alharthi, Naif Wasel, and Martim Brandão. "Physical and Digital Adversarial Attacks on Grasp Quality Networks." In 2024 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2024. http://dx.doi.org/10.1109/icra57147.2024.10610886.

10

Strack, Lukas, Futa Waseda, Huy H. Nguyen, Yinqiang Zheng, and Isao Echizen. "Defending Against Physical Adversarial Patch attacks On Infrared Human Detection." In 2024 IEEE International Conference on Image Processing (ICIP). IEEE, 2024. http://dx.doi.org/10.1109/icip51287.2024.10647435.
