
Journal articles on the topic 'Adversarial samples'


Consult the top 50 journal articles for your research on the topic 'Adversarial samples.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Liu, Faqiang, Mingkun Xu, Guoqi Li, Jing Pei, Luping Shi, and Rong Zhao. "Adversarial symmetric GANs: Bridging adversarial samples and adversarial networks." Neural Networks 133 (January 2021): 148–56. http://dx.doi.org/10.1016/j.neunet.2020.10.016.

2

Huang, Yang, Yuling Chen, Xuewei Wang, Jing Yang, and Qi Wang. "Promoting Adversarial Transferability via Dual-Sampling Variance Aggregation and Feature Heterogeneity Attacks." Electronics 12, no. 3 (February 3, 2023): 767. http://dx.doi.org/10.3390/electronics12030767.

Abstract:
At present, deep neural networks are widely used in various fields, but their vulnerability requires attention. An adversarial attack aims to mislead the model by generating imperceptible perturbations on the source model, and although white-box attacks have achieved good success rates, existing adversarial samples exhibit weak transferability in the black-box case, especially against some adversarially trained defense models. Previous work on gradient-based optimization either optimizes the image before iteration or optimizes the gradient during iteration, which causes the generated adversarial samples to overfit the source model and transfer poorly to adversarially trained models. To solve these problems, we propose the dual-sample variance aggregation with feature heterogeneity attack; our method is optimized both before and during iterations to produce adversarial samples with better transferability. In addition, our method can be integrated with various input transformations. Extensive experiments demonstrate the effectiveness of the proposed method, which improves the attack success rate by 5.9% on normally trained models and 11.5% on adversarially trained models compared with current state-of-the-art transferability-enhancing attack methods.
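The abstract above describes a transfer attack that aggregates gradient variance across sampled inputs. As a rough illustration of that family of methods (not the authors' exact algorithm), the PyTorch sketch below folds gradients of randomly sampled neighbours into a variance-correction term inside a momentum-based iterative attack; the function and parameter names are hypothetical, and a 4-D image batch with values in [0, 1] is assumed.

```python
# Illustrative sketch of a variance-aggregated iterative attack (hypothetical names,
# not the paper's exact method); assumes a 4-D image batch with values in [0, 1].
import torch

def variance_aggregated_attack(model, x, y, eps=16/255, steps=10, n_samples=5, beta=1.5):
    loss_fn = torch.nn.CrossEntropyLoss()
    alpha = eps / steps
    x_adv = x.clone().detach()
    momentum = torch.zeros_like(x)
    variance = torch.zeros_like(x)

    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad = torch.autograd.grad(loss_fn(model(x_adv), y), x_adv)[0]

        # Momentum update uses the current gradient corrected by the variance term.
        g_hat = grad + variance
        momentum = momentum + g_hat / g_hat.abs().mean(dim=(1, 2, 3), keepdim=True)

        # Aggregate gradients of random neighbours to estimate the gradient variance.
        neighbour_grad = torch.zeros_like(x)
        for _ in range(n_samples):
            x_near = (x_adv.detach()
                      + torch.empty_like(x).uniform_(-beta * eps, beta * eps)).requires_grad_(True)
            neighbour_grad += torch.autograd.grad(loss_fn(model(x_near), y), x_near)[0]
        variance = neighbour_grad / n_samples - grad

        # Signed step, projected back into the epsilon ball and the valid pixel range.
        x_adv = x_adv.detach() + alpha * momentum.sign()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
    return x_adv.detach()
```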
3

Ding, Yuxin, Miaomiao Shao, Cai Nie, and Kunyang Fu. "An Efficient Method for Generating Adversarial Malware Samples." Electronics 11, no. 1 (January 4, 2022): 154. http://dx.doi.org/10.3390/electronics11010154.

Abstract:
Deep learning methods have been applied to malware detection. However, deep learning algorithms are not safe: they can easily be fooled by adversarial samples. In this paper, we study how to generate malware adversarial samples using deep learning models. Gradient-based methods are usually used to generate adversarial samples. These methods generate adversarial samples case by case, which makes producing a large number of adversarial samples very time-consuming. To address this issue, we propose a novel method to generate adversarial malware samples. Different from gradient-based methods, we extract feature byte sequences from benign samples. Feature byte sequences represent the characteristics of benign samples and can affect the classification decision. We directly inject feature byte sequences into malware samples to generate adversarial samples. Feature byte sequences can be shared to produce different adversarial samples, which makes it efficient to generate a large number of adversarial samples. We compare the proposed method with random injection and gradient-based methods. The experimental results show that the adversarial samples generated using our proposed method have a high success rate.
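The core idea above, reusing byte sequences extracted from benign files instead of per-sample gradients, can be sketched as follows. This is a minimal illustration with hypothetical helper names; a real attack must additionally keep the executable format and malicious behaviour intact, which the paper addresses and this sketch does not.

```python
# Minimal sketch of the shared feature-byte-sequence injection idea (hypothetical names).
from collections import Counter

def extract_feature_byte_sequences(benign_files, seq_len=64, top_k=10):
    """Collect the most frequent fixed-length byte sequences across benign samples."""
    counts = Counter()
    for data in benign_files:                          # each item is a bytes object
        for i in range(0, len(data) - seq_len, seq_len):
            counts[data[i:i + seq_len]] += 1
    return [seq for seq, _ in counts.most_common(top_k)]

def inject_sequences(malware_bytes, feature_sequences):
    """Append the shared benign feature sequences to produce one adversarial candidate."""
    return malware_bytes + b"".join(feature_sequences)

# The same feature_sequences can be reused for many malware samples, which is what
# makes this approach cheaper than per-sample gradient attacks.
benign = [bytes(1000) for _ in range(3)]               # placeholder benign byte streams
features = extract_feature_byte_sequences(benign)
adv_candidate = inject_sequences(bytes(500), features)
```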
4

Zheng, Tianhang, Changyou Chen, and Kui Ren. "Distributionally Adversarial Attack." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 2253–60. http://dx.doi.org/10.1609/aaai.v33i01.33012253.

Abstract:
Recent work on adversarial attack has shown that the Projected Gradient Descent (PGD) adversary is a universal first-order adversary, and a classifier adversarially trained with PGD is robust against a wide range of first-order attacks. It is worth noting that the original objective of an attack/defense model relies on a data distribution p(x), typically in the form of risk maximization/minimization, e.g., max/min E_{p(x)}[L(x)] with p(x) some unknown data distribution and L(·) a loss function. However, since PGD generates attack samples independently for each data sample based on L(·), the procedure does not necessarily lead to good generalization in terms of risk optimization. In this paper, we achieve this goal by proposing distributionally adversarial attack (DAA), a framework to solve for an optimal adversarial-data distribution, a perturbed distribution that satisfies the L∞ constraint but deviates from the original data distribution to maximally increase the generalization risk. Algorithmically, DAA performs optimization on the space of potential data distributions, which introduces direct dependency between all data points when generating adversarial samples. DAA is evaluated by attacking state-of-the-art defense models, including the adversarially trained models provided by MIT MadryLab. Notably, DAA ranks first on MadryLab’s white-box leaderboards, reducing the accuracy of their secret MNIST model to 88.56% (with L∞ perturbations of ε = 0.3) and the accuracy of their secret CIFAR model to 44.71% (with L∞ perturbations of ε = 8.0). Code for the experiments is released at https://github.com/tianzheng4/Distributionally-Adversarial-Attack.
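Because DAA is built on top of PGD, a standard PGD baseline is sketched below for reference. This is not an implementation of DAA itself: the distribution-level coupling between data points described in the abstract is omitted, and inputs are assumed to be image batches in [0, 1].

```python
# Standard PGD baseline (reference sketch only; DAA generalizes this with a
# distribution-level objective that couples all samples, not shown here).
import torch

def pgd_attack(model, x, y, eps=0.3, alpha=0.01, steps=40):
    loss_fn = torch.nn.CrossEntropyLoss()
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)  # random start in the ball
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad = torch.autograd.grad(loss_fn(model(x_adv), y), x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()        # ascend the loss
        x_adv = x + (x_adv - x).clamp(-eps, eps)            # project onto the L-inf ball
        x_adv = x_adv.clamp(0, 1)                           # keep a valid pixel range
    return x_adv.detach()
```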
5

Kim, Daeha, and Byung Cheol Song. "Contrastive Adversarial Learning for Person Independent Facial Emotion Recognition." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 7 (May 18, 2021): 5948–56. http://dx.doi.org/10.1609/aaai.v35i7.16743.

Abstract:
Since most facial emotion recognition (FER) methods significantly rely on supervision information, they have a limit to analyzing emotions independently of persons. On the other hand, adversarial learning is a well-known approach for generalized representation learning because it never requires supervision information. This paper presents a new adversarial learning for FER. In detail, the proposed learning enables the FER network to better understand complex emotional elements inherent in strong emotions by adversarially learning weak emotion samples based on strong emotion samples. As a result, the proposed method can recognize the emotions independently of persons because it understands facial expressions more accurately. In addition, we propose a contrastive loss function for efficient adversarial learning. Finally, the proposed adversarial learning scheme was theoretically verified, and it was experimentally proven to show state of the art (SOTA) performance.
6

Bhatia, Siddharth, Arjit Jain, and Bryan Hooi. "ExGAN: Adversarial Generation of Extreme Samples." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 6750–58. http://dx.doi.org/10.1609/aaai.v35i8.16834.

Abstract:
Mitigating the risk arising from extreme events is a fundamental goal with many applications, such as the modelling of natural disasters, financial crashes, epidemics, and many others. To manage this risk, a vital step is to be able to understand or generate a wide range of extreme scenarios. Existing approaches based on Generative Adversarial Networks (GANs) excel at generating realistic samples, but seek to generate typical samples, rather than extreme samples. Hence, in this work, we propose ExGAN, a GAN-based approach to generate realistic and extreme samples. To model the extremes of the training distribution in a principled way, our work draws from Extreme Value Theory (EVT), a probabilistic approach for modelling the extreme tails of distributions. For practical utility, our framework allows the user to specify both the desired extremeness measure, as well as the desired extremeness probability they wish to sample at. Experiments on real US Precipitation data show that our method generates realistic samples, based on visual inspection and quantitative measures, in an efficient manner. Moreover, generating increasingly extreme examples using ExGAN can be done in constant time (with respect to the extremeness probability τ), as opposed to the O(1/τ) time required by the baseline approach.
7

Zhang, Pengfei, and Xiaoming Ju. "Adversarial Sample Detection with Gaussian Mixture Conditional Generative Adversarial Networks." Mathematical Problems in Engineering 2021 (September 13, 2021): 1–18. http://dx.doi.org/10.1155/2021/8268249.

Abstract:
It is important to detect adversarial samples in the physical world that are far away from the training data distribution. Some adversarial samples can make a machine learning model generate a highly overconfident distribution in the testing stage. Thus, we propose a mechanism for detecting adversarial samples based on semisupervised generative adversarial networks (GANs) with an encoder-decoder structure; this mechanism can be applied to any pretrained neural network without changing the network’s structure. The semisupervised GANs also give us insight into the behavior of adversarial samples and their flow through the layers of a deep neural network. In the supervised scenario, the latent feature of the semisupervised GAN and the target network’s logit information are used as the input of an external support vector machine classifier to detect adversarial samples. In the unsupervised scenario, we first propose a one-class classifier based on the semisupervised Gaussian mixture conditional generative adversarial network (GM-CGAN) to fit the joint feature information of the normal data, and then we use a discriminator network to detect normal data and adversarial samples. In both supervised and unsupervised scenarios, experimental results show that our method outperforms the latest methods.
8

Li, Xin, Xiangrui Li, Deng Pan, and Dongxiao Zhu. "Improving Adversarial Robustness via Probabilistically Compact Loss with Logit Constraints." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 10 (May 18, 2021): 8482–90. http://dx.doi.org/10.1609/aaai.v35i10.17030.

Abstract:
Convolutional neural networks (CNNs) have achieved state-of-the-art performance on various tasks in computer vision. However, recent studies demonstrate that these models are vulnerable to carefully crafted adversarial samples and suffer from a significant performance drop when predicting them. Many methods have been proposed to improve adversarial robustness (e.g., adversarial training and new loss functions to learn adversarially robust feature representations). Here we offer a unique insight into the predictive behavior of CNNs: they tend to misclassify adversarial samples into the most probable false classes. This inspires us to propose a new Probabilistically Compact (PC) loss with logit constraints, which can be used as a drop-in replacement for cross-entropy (CE) loss to improve a CNN's adversarial robustness. Specifically, the PC loss enlarges the probability gaps between the true class and false classes, while the logit constraints prevent the gaps from being melted by a small perturbation. We extensively compare our method with the state of the art using large-scale datasets under both white-box and black-box attacks to demonstrate its effectiveness. The source code is available at https://github.com/xinli0928/PC-LC.
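A rough sketch of the core idea, enlarging the probability gaps between the true class and the false classes by a margin, is given below. This is a simplified formulation written for illustration, not the authors' exact PC loss or logit constraints.

```python
# Simplified probability-gap margin loss in the spirit of the PC loss described above
# (an illustrative formulation, not the authors' definition).
import torch
import torch.nn.functional as F

def probability_gap_loss(logits, targets, margin=0.1):
    probs = F.softmax(logits, dim=1)
    true_prob = probs.gather(1, targets.unsqueeze(1))        # p_y of the true class, shape (B, 1)
    true_mask = F.one_hot(targets, probs.size(1)).bool()
    # Penalize every false class whose probability comes within `margin` of the true class.
    violation = F.relu(margin + probs - true_prob).masked_fill(true_mask, 0.0)
    return violation.sum(dim=1).mean()

# Usage: add it next to (or in place of) the usual cross-entropy term, e.g.
# loss = F.cross_entropy(logits, targets) + probability_gap_loss(logits, targets)
```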
9

Wang, Fangwei, Yuanyuan Lu, Changguang Wang, and Qingru Li. "Binary Black-Box Adversarial Attacks with Evolutionary Learning against IoT Malware Detection." Wireless Communications and Mobile Computing 2021 (August 30, 2021): 1–9. http://dx.doi.org/10.1155/2021/8736946.

Abstract:
5G is about to open Pandora’s box of security threats to the Internet of Things (IoT). Key technologies introduced by the 5G network, such as network function virtualization and edge computing, bring new security threats and risks to the Internet infrastructure. Therefore, stronger detection of and defense against malware are required. Nowadays, deep learning (DL) is widely used in malware detection. Recently, research has demonstrated that adversarial attacks pose a hazard to DL-based models. The key issue in enhancing the anti-attack performance of malware detection systems is to generate effective adversarial samples. However, numerous existing methods for generating adversarial samples rely on manual feature extraction or white-box models, which makes them inapplicable in actual scenarios. This paper presents an effective binary manipulation-based attack framework, which generates adversarial samples with an evolutionary learning algorithm. The framework chooses appropriate action sequences to modify malicious samples. Thus, the modified malware can successfully circumvent the detection system. The evolutionary algorithm can adaptively simplify the modification actions and make the adversarial samples more targeted. Our approach can efficiently generate adversarial samples without human intervention. The generated adversarial samples can effectively defeat DL-based malware detection models while preserving the consistency of the executable and the malicious behavior of the original malware samples. We apply the generated adversarial samples to attack the detection engines of VirusTotal. Experimental results illustrate that the adversarial samples generated by our method reach an evasion success rate of 47.8%, which outperforms other attack methods. By adding adversarial samples to the training process, the MalConv network is retrained. We show that the detection accuracy is improved by 10.3%.
10

Hu, Yongjin, Jin Tian, and Jun Ma. "A Novel Way to Generate Adversarial Network Traffic Samples against Network Traffic Classification." Wireless Communications and Mobile Computing 2021 (August 23, 2021): 1–12. http://dx.doi.org/10.1155/2021/7367107.

Abstract:
Network traffic classification technologies could be used by attackers to implement network monitoring and then launch traffic analysis attacks or website fingerprinting attacks. To prevent such attacks, a novel way to generate adversarial samples of network traffic from the perspective of the defender is proposed. By adding perturbation to the normal network traffic, a kind of adversarial network traffic is formed, which causes misclassification when attackers implement network traffic classification with a deep convolutional neural network (CNN) as the classification model. The paper borrows the concept of adversarial samples from image recognition, applies it to the field of network traffic classification, and chooses several different methods to generate adversarial samples of network traffic. In the experiment, the LeNet-5 CNN is selected as the classification model used by attackers, and the VGG16 CNN is selected as the model for testing the transferability of the generated adversarial network traffic; the results show the effectiveness of the adversarial network traffic samples.
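The single-step perturbation plus transferability check described above can be sketched as follows, assuming traffic flows have already been preprocessed into image-like tensors as in the paper; the models and tensor shapes are illustrative assumptions. Calling transfer_success_rate(source_model, target_model, x, y) mirrors the LeNet-5 to VGG16 transfer test at a high level.

```python
# Sketch of FGSM-style perturbation of (image-encoded) traffic samples and a
# transferability check against a second, unseen classifier.
import torch

def fgsm_perturb(model, x, y, eps=0.05):
    loss_fn = torch.nn.CrossEntropyLoss()
    x = x.clone().detach().requires_grad_(True)
    grad = torch.autograd.grad(loss_fn(model(x), y), x)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()      # single signed-gradient step

def transfer_success_rate(source_model, target_model, x, y, eps=0.05):
    x_adv = fgsm_perturb(source_model, x, y, eps)            # crafted on the source model only
    preds = target_model(x_adv).argmax(dim=1)
    return (preds != y).float().mean().item()                # fraction misclassified by the target
```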
11

Park, Sanglee, and Jungmin So. "On the Effectiveness of Adversarial Training in Defending against Adversarial Example Attacks for Image Classification." Applied Sciences 10, no. 22 (November 14, 2020): 8079. http://dx.doi.org/10.3390/app10228079.

Abstract:
State-of-the-art neural network models are actively used in various fields, but it is well known that they are vulnerable to adversarial example attacks. Efforts to make models robust against adversarial example attacks have shown this to be a very difficult task. While many defense approaches have been shown to be ineffective, adversarial training remains one of the promising methods. In adversarial training, the training data are augmented with “adversarial” samples generated using an attack algorithm. If the attacker uses a similar attack algorithm to generate adversarial examples, the adversarially trained network can be quite robust to the attack. However, there are numerous ways of creating adversarial examples, and the defender does not know which algorithm the attacker may use. A natural question is: can we use adversarial training to train a model robust to multiple types of attack? Previous work has shown that, when a network is trained with adversarial examples generated from multiple attack methods, the network is still vulnerable to white-box attacks where the attacker has complete access to the model parameters. In this paper, we study this question in the context of black-box attacks, which is a more realistic assumption for practical applications. Experiments with the MNIST dataset show that adversarially training a network with an attack method helps defend against that particular attack method but has limited effect against other attack methods. In addition, even if the defender trains a network with multiple types of adversarial examples and the attacker attacks with one of those methods, the network could still lose accuracy to the attack if the attacker uses a different data augmentation strategy on the target network. These results show that it is very difficult to make a robust network using adversarial training, even in black-box settings where the attacker has restricted information about the target network.
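For orientation, a generic adversarial-training loop of the kind studied here is sketched below. It is a minimal sketch: any attack with the signature attack_fn(model, x, y) (for example, the PGD sketch under entry 4) can be plugged in, and the equal weighting of clean and adversarial loss is an assumption rather than the paper's setting.

```python
# Generic adversarial-training epoch (minimal sketch; attack_fn is any attack with
# the signature attack_fn(model, x, y) returning perturbed inputs).
import torch

def adversarial_training_epoch(model, loader, optimizer, attack_fn, device="cpu"):
    loss_fn = torch.nn.CrossEntropyLoss()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        model.eval()
        x_adv = attack_fn(model, x, y)      # augment the batch with adversarial versions
        model.train()
        optimizer.zero_grad()
        loss = 0.5 * loss_fn(model(x), y) + 0.5 * loss_fn(model(x_adv), y)
        loss.backward()
        optimizer.step()
```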
12

Wang, Kedi, Ping Yi, Futai Zou, and Yue Wu. "Generating Adversarial Samples With Constrained Wasserstein Distance." IEEE Access 7 (2019): 136812–21. http://dx.doi.org/10.1109/access.2019.2942607.

13

Liu, Xiaolei, Xiaojiang Du, Xiaosong Zhang, Qingxin Zhu, Hao Wang, and Mohsen Guizani. "Adversarial Samples on Android Malware Detection Systems for IoT Systems." Sensors 19, no. 4 (February 25, 2019): 974. http://dx.doi.org/10.3390/s19040974.

Abstract:
Many IoT (Internet of Things) systems run Android or Android-like systems. With the continuous development of machine learning algorithms, learning-based Android malware detection systems for IoT devices have gradually increased in number. However, these learning-based detection models are often vulnerable to adversarial samples. An automated testing framework is needed to help these learning-based malware detection systems for IoT devices perform security analysis. Current methods of generating adversarial samples mostly require the training parameters of the model, and most of them are aimed at image data. To solve this problem, we propose a testing framework for learning-based Android malware detection systems (TLAMD) for IoT devices. The key challenge is how to construct a suitable fitness function to generate an effective adversarial sample without affecting the features of the application. By introducing genetic algorithms and some technical improvements, our test framework can generate adversarial samples for IoT Android applications with a success rate of nearly 100% and can perform black-box testing on the system.
14

Rasheed, Bader, Adil Khan, Muhammad Ahmad, Manuel Mazzara, and S. M. Ahsan Kazmi. "Multiple Adversarial Domains Adaptation Approach for Mitigating Adversarial Attacks Effects." International Transactions on Electrical Energy Systems 2022 (October 10, 2022): 1–11. http://dx.doi.org/10.1155/2022/2890761.

Abstract:
Although neural networks are near achieving performance similar to humans in many tasks, they are susceptible to adversarial attacks in the form of a small, intentionally designed perturbation, which could lead to misclassifications. The best defense against these attacks, so far, is adversarial training (AT), which improves a model’s robustness by augmenting the training data with adversarial examples. However, AT usually decreases the model’s accuracy on clean samples and could overfit to a specific attack, inhibiting its ability to generalize to new attacks. In this paper, we investigate the usage of domain adaptation to enhance AT’s performance. We propose a novel multiple adversarial domain adaptation (MADA) method, which looks at this problem as a domain adaptation task to discover robust features. Specifically, we use adversarial learning to learn features that are domain-invariant between multiple adversarial domains and the clean domain. We evaluated MADA on MNIST and CIFAR-10 datasets with multiple adversarial attacks during training and testing. The results of our experiments show that MADA is superior to AT on adversarial samples by about 4% on average and on clean samples by about 1% on average.
15

Ghosh, Partha, Arpan Losalka, and Michael J. Black. "Resisting Adversarial Attacks Using Gaussian Mixture Variational Autoencoders." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 541–48. http://dx.doi.org/10.1609/aaai.v33i01.3301541.

Abstract:
Susceptibility of deep neural networks to adversarial attacks poses a major theoretical and practical challenge. All efforts to harden classifiers against such attacks have seen limited success till now. Two distinct categories of samples against which deep neural networks are vulnerable, “adversarial samples” and “fooling samples”, have been tackled separately so far due to the difficulty posed when considered together. In this work, we show how one can defend against them both under a unified framework. Our model has the form of a variational autoencoder with a Gaussian mixture prior on the latent variable, such that each mixture component corresponds to a single class. We show how selective classification can be performed using this model, thereby causing the adversarial objective to entail a conflict. The proposed method leads to the rejection of adversarial samples instead of misclassification, while maintaining high precision and recall on test data. It also inherently provides a way of learning a selective classifier in a semi-supervised scenario, which can similarly resist adversarial attacks. We further show how one can reclassify the detected adversarial samples by iterative optimization.
16

Iranmanesh, Seyed Mehdi, and Nasser M. Nasrabadi. "HGAN: Hybrid generative adversarial network." Journal of Intelligent & Fuzzy Systems 40, no. 5 (April 22, 2021): 8927–38. http://dx.doi.org/10.3233/jifs-201202.

Abstract:
In this paper, we present a simple approach to training Generative Adversarial Networks (GANs) that avoids the mode collapse issue. Implicit models such as GANs tend to generate better samples than explicit models that are trained on a tractable data likelihood. However, GANs overlook the explicit data density characteristics, which leads to undesirable quantitative evaluations and mode collapse. To bridge this gap, we propose a hybrid generative adversarial network (HGAN) in which we can enforce data density estimation via an autoregressive model and support both the adversarial and likelihood frameworks in a joint training manner, which diversifies the estimated density in order to cover different modes. We propose to use an adversarial network to transfer knowledge from an autoregressive model (teacher) to the generator (student) of a GAN model. A novel deep architecture within the GAN formulation is developed to adversarially distill the autoregressive model information in addition to the simple GAN training approach. We conduct extensive experiments on real-world datasets (i.e., MNIST, CIFAR-10, STL-10) to demonstrate the effectiveness of the proposed HGAN under qualitative and quantitative evaluations. The experimental results show the superiority and competitiveness of our method compared to the baselines.
17

Huo, Lin, Huanchao Qi, Simiao Fei, Cong Guan, and Ji Li. "A Generative Adversarial Network Based a Rolling Bearing Data Generation Method Towards Fault Diagnosis." Computational Intelligence and Neuroscience 2022 (July 13, 2022): 1–21. http://dx.doi.org/10.1155/2022/7592258.

Abstract:
As a new generative model, the generative adversarial network (GAN) has great potential for generating pseudo-real data accurately and efficiently. Nowadays, bearing fault diagnosis based on machine learning usually needs sufficient data. If enough near-real data can be generated when samples from actual operating conditions are insufficient, the effectiveness of fault diagnosis will be greatly improved. In this study, a new rolling bearing data generation method based on the generative adversarial network (GAN) is proposed, which can be trained adversarially and jointly via a learned embedding and applied to solve fault diagnosis problems with insufficient data. By analyzing the time-domain characteristics of rolling bearing life-cycle monitoring data under actual working conditions, the operation data are divided into three periods, and the generative adversarial network model is constructed and trained. The adversarially generated data are compared with the real data in the time domain and frequency domain, respectively, and the similarity between the generated data and the real data is verified.
18

Hashemi, Atiye Sadat, and Saeed Mozaffari. "CNN adversarial attack mitigation using perturbed samples training." Multimedia Tools and Applications 80, no. 14 (March 23, 2021): 22077–95. http://dx.doi.org/10.1007/s11042-020-10379-6.

19

Liu, Changrui, Dengpan Ye, Yueyun Shang, Shunzhi Jiang, Shiyu Li, Yuan Mei, and Liqiang Wang. "Defend Against Adversarial Samples by Using Perceptual Hash." Computers, Materials & Continua 62, no. 3 (2020): 1365–86. http://dx.doi.org/10.32604/cmc.2020.07421.

20

Heo, Byeongho, Minsik Lee, Sangdoo Yun, and Jin Young Choi. "Knowledge Distillation with Adversarial Samples Supporting Decision Boundary." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3771–78. http://dx.doi.org/10.1609/aaai.v33i01.33013771.

Abstract:
Many recent works on knowledge distillation have provided ways to transfer the knowledge of a trained network to improve the learning process of a new one, but finding a good technique for knowledge distillation is still an open problem. In this paper, we provide a new perspective based on the decision boundary, which is one of the most important components of a classifier. The generalization performance of a classifier is closely related to the adequacy of its decision boundary, so a good classifier bears a good decision boundary. Therefore, transferring information closely related to the decision boundary can be a good approach to knowledge distillation. To realize this goal, we utilize an adversarial attack to discover samples supporting the decision boundary. Based on this idea, to transfer more accurate information about the decision boundary, the proposed algorithm trains a student classifier on the adversarial samples supporting the decision boundary. Experiments show that the proposed method indeed improves knowledge distillation and achieves state-of-the-art performance.
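The sketch below illustrates the general recipe of distilling on adversarial samples pushed toward the decision boundary. The boundary-sample search here is a plain targeted gradient step and the distillation term is a standard softened KL divergence; both are simplifications of the paper's method, and the names are hypothetical.

```python
# Simplified boundary-aware distillation: the student matches the teacher's soft outputs
# on samples that have been pushed toward another class (i.e., toward the decision boundary).
import torch
import torch.nn.functional as F

def boundary_samples(teacher, x, target_class, step=0.03, steps=5):
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(teacher(x_adv), target_class)       # loss toward the target class
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv.detach() - step * grad.sign()).clamp(0, 1)  # descend -> approach the boundary
    return x_adv

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    # KL divergence between softened teacher and student distributions.
    t = F.log_softmax(teacher_logits / temperature, dim=1)
    s = F.log_softmax(student_logits / temperature, dim=1)
    return F.kl_div(s, t, log_target=True, reduction="batchmean") * temperature ** 2
```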
21

Wang, Jinrui, Baokun Han, Huaiqian Bao, Mingyan Wang, Zhenyun Chu, and Yuwei Shen. "Data augment method for machine fault diagnosis using conditional generative adversarial networks." Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering 234, no. 12 (June 7, 2020): 2719–27. http://dx.doi.org/10.1177/0954407020923258.

Abstract:
As a useful data augmentation technique, generative adversarial networks have been successfully applied in the fault diagnosis field. However, traditional generative adversarial networks can only generate one category of fault signals at a time, which is time-consuming and costly. To overcome this weakness, we develop a novel fault diagnosis method that combines conditional generative adversarial networks and stacked autoencoders, both of which are built by stacking one-dimensional fully connected layers. First, the conditional generative adversarial network is used to generate artificial samples based on the frequency samples, and category labels are adopted as the conditional information to simultaneously generate signals of different categories. Meanwhile, spectral normalization is added to the discriminator of the conditional generative adversarial network to enhance model training. Then, the augmented training samples are transferred to stacked autoencoders for feature extraction and fault classification. Finally, two datasets of bearing and gearbox faults are employed to investigate the effectiveness of the proposed conditional generative adversarial network–stacked autoencoder method.
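A compact sketch of a label-conditioned GAN over 1-D frequency samples, with spectral normalization on the discriminator as mentioned in the abstract, is shown below; layer sizes and other details are illustrative assumptions rather than the authors' architecture.

```python
# Compact label-conditioned GAN over 1-D samples (illustrative dimensions, not the paper's).
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, noise_dim=64, n_classes=4, out_dim=1024):
        super().__init__()
        self.embed = nn.Embedding(n_classes, n_classes)      # label conditioning
        self.net = nn.Sequential(
            nn.Linear(noise_dim + n_classes, 256), nn.ReLU(),
            nn.Linear(256, out_dim), nn.Tanh())

    def forward(self, z, labels):
        return self.net(torch.cat([z, self.embed(labels)], dim=1))

class Discriminator(nn.Module):
    def __init__(self, n_classes=4, in_dim=1024):
        super().__init__()
        self.embed = nn.Embedding(n_classes, n_classes)
        # Spectral normalization on the discriminator layers, as the abstract mentions.
        self.net = nn.Sequential(
            nn.utils.spectral_norm(nn.Linear(in_dim + n_classes, 256)), nn.LeakyReLU(0.2),
            nn.utils.spectral_norm(nn.Linear(256, 1)))

    def forward(self, x, labels):
        return self.net(torch.cat([x, self.embed(labels)], dim=1))

# Generating a batch of class-3 samples once training is done:
g = Generator()
fake = g(torch.randn(8, 64), torch.full((8,), 3))
```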
22

Wang, Dan, Ming Li, and Yushu Zhang. "Adversarial Data Hiding in Digital Images." Entropy 24, no. 6 (May 25, 2022): 749. http://dx.doi.org/10.3390/e24060749.

Abstract:
In recent studies of generative adversarial networks (GAN), researchers have attempted to combine adversarial perturbation with data hiding in order to protect the privacy and authenticity of the host image simultaneously. However, most of the studied approaches can only achieve adversarial perturbation through a visible watermark; the quality of the host image is low, and the concealment of data hiding cannot be achieved. In this work, we propose a true data hiding method with adversarial effect for generating high-quality covers. Based on GAN, the data hiding area is selected precisely by limiting the modification strength in order to preserve the fidelity of the image. We devise a genetic algorithm that can explore decision boundaries in an artificially constrained search space to improve the attack effect as well as construct aggressive covert adversarial samples by detecting “sensitive pixels” in ordinary samples to place discontinuous perturbations. The results reveal that the stego-image has good visual quality and attack effect. To the best of our knowledge, this is the first attempt to use covert data hiding to generate adversarial samples based on GAN.
23

Wang, Chenyue, Linlin Zhang, Kai Zhao, Xuhui Ding, and Xusheng Wang. "AdvAndMal: Adversarial Training for Android Malware Detection and Family Classification." Symmetry 13, no. 6 (June 17, 2021): 1081. http://dx.doi.org/10.3390/sym13061081.

Abstract:
In recent years, Android malware has continued to evolve against detection technologies, becoming more concealed and harmful, making it difficult for existing models to resist adversarial sample attacks. At the current stage, the detection result is no longer the only criterion for evaluating the pros and cons of the model with its algorithms, it is also vital to take the model’s defensive ability against adversarial samples into consideration. In this study, we propose a general framework named AdvAndMal, which consists of a two-layer network for adversarial training to generate adversarial samples and improve the effectiveness of the classifiers in Android malware detection and family classification. The adversarial sample generation layer is composed of a conditional generative adversarial network called pix2pix, which can generate malware variants to extend the classifiers’ training set, and the malware classification layer is trained by RGB image visualized from the sequence of system calls. To evaluate the adversarial training effect of the framework, we propose the robustness coefficient, a symmetric interval i = [−1, 1], and conduct controlled experiments on the dataset to measure the robustness of the overall framework for the adversarial training. Experimental results on 12 families with the largest number of samples in the Drebin dataset show that the accuracy of the overall framework is increased from 0.976 to 0.989, and its robustness coefficient is increased from 0.857 to 0.917, which proves the effectiveness of the adversarial training method.
24

Kang, Ah, Young-Seob Jeong, Se Kim, and Jiyoung Woo. "Malicious PDF Detection Model against Adversarial Attack Built from Benign PDF Containing JavaScript." Applied Sciences 9, no. 22 (November 8, 2019): 4764. http://dx.doi.org/10.3390/app9224764.

Abstract:
Intelligent attacks using document-based malware that exploit vulnerabilities in document viewing software or document file structure are increasing rapidly. PDF (Portable Document Format) is frequently abused, in proportion to its widespread usage. We provide an in-depth analysis of PDF structure and of the JavaScript content embedded in PDFs. We then develop a diverse feature set encompassing structure and metadata features, such as file size, version, encoding method, and keywords, and content features, such as object names, keywords, and readable strings in JavaScript. When features are diverse, it is hard to craft adversarial examples because machine-learning algorithms become robust to small changes. We develop a detection model using black-box type models with the structure and content features to minimize the risk of adversarial attacks. To validate the proposed model, we design an adversarial attack. We collect benign documents containing multiple JavaScript codes as the basis of adversarial samples and build the adversarial samples by injecting malicious code into these base samples. The proposed model is evaluated against a large collection of malicious and benign PDFs. We found that random forest, an ensemble of decision trees, exhibits good performance on malware detection and is robust to adversarial samples.
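The evaluation setup described above can be sketched as follows: train a random forest on structural and content feature vectors, then check how many adversarial PDFs (benign carriers whose features change after malicious JavaScript is injected) are still flagged. Feature extraction from real PDFs is assumed to happen elsewhere; the data below is synthetic stand-in data.

```python
# Sketch of the detection-vs-adversarial-sample evaluation (synthetic stand-in features).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 30))             # stand-in for structure + JavaScript features
y = rng.integers(0, 2, size=2000)           # 0 = benign, 1 = malicious

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Adversarial candidates: benign carriers whose feature vectors shift after code injection
# (modelled here as a simple additive change on a few features).
carriers = X_test[y_test == 0][:100].copy()
carriers[:, :5] += 2.0
flagged = clf.predict(carriers).mean()      # fraction still classified as malicious
print(f"clean accuracy: {clf.score(X_test, y_test):.3f}, adversarial samples flagged: {flagged:.3f}")
```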
25

Wang, Guangxing, and Peng Ren. "Hyperspectral Image Classification with Feature-Oriented Adversarial Active Learning." Remote Sensing 12, no. 23 (November 26, 2020): 3879. http://dx.doi.org/10.3390/rs12233879.

Abstract:
Deep learning classifiers exhibit remarkable performance for hyperspectral image classification given sufficient labeled samples but show deficiency in the situation of learning with limited labeled samples. Active learning endows deep learning classifiers with the ability to alleviate this deficiency. However, existing active deep learning methods tend to underestimate the feature variability of hyperspectral images when querying informative unlabeled samples subject to certain acquisition heuristics. A major reason for this bias is that the acquisition heuristics are normally derived based on the output of a deep learning classifier, in which representational power is bounded by the number of labeled training samples at hand. To address this limitation, we developed a feature-oriented adversarial active learning (FAAL) strategy, which exploits the high-level features from one intermediate layer of a deep learning classifier for establishing an acquisition heuristic based on a generative adversarial network (GAN). Specifically, we developed a feature generator for generating fake high-level features and a feature discriminator for discriminating between the real high-level features and the fake ones. Trained with both the real and the fake high-level features, the feature discriminator comprehensively captures the feature variability of hyperspectral images and yields a powerful and generalized discriminative capability. We leverage the well-trained feature discriminator as the acquisition heuristic to measure the informativeness of unlabeled samples. Experimental results validate the effectiveness of both (i) the full FAAL framework and (ii) the adversarially learned acquisition heuristic, for the task of classifying hyperspectral images with limited labeled samples.
26

Taheri, Shayan, Milad Salem, and Jiann-Shiun Yuan. "RazorNet: Adversarial Training and Noise Training on a Deep Neural Network Fooled by a Shallow Neural Network." Big Data and Cognitive Computing 3, no. 3 (July 23, 2019): 43. http://dx.doi.org/10.3390/bdcc3030043.

Abstract:
In this work, we propose ShallowDeepNet, a novel system architecture that includes a shallow and a deep neural network. The shallow neural network has the duty of data preprocessing and generating adversarial samples. The deep neural network has the duty of understanding data and information as well as detecting adversarial samples. The deep neural network gets its weights from transfer learning, adversarial training, and noise training. The system is examined on the biometric (fingerprint and iris) and the pharmaceutical data (pill image). According to the simulation results, the system is capable of improving the detection accuracy of the biometric data from 1.31% to 80.65% when the adversarial data is used and to 93.4% when the adversarial data as well as the noisy data are given to the network. The system performance on the pill image data is increased from 34.55% to 96.03% and then to 98.2%, respectively. Training on different types of noise can benefit us in detecting samples from unknown and unseen adversarial attacks. Meanwhile, the system training on the adversarial data as well as noisy data occurs only once. In fact, retraining the system may improve the performance further. Furthermore, training the system on new types of attacks and noise can help in enhancing the system performance.
27

Fang, Yong, Cheng Huang, Yijia Xu, and Yang Li. "RLXSS: Optimizing XSS Detection Model to Defend Against Adversarial Attacks Based on Reinforcement Learning." Future Internet 11, no. 8 (August 14, 2019): 177. http://dx.doi.org/10.3390/fi11080177.

Abstract:
With the development of artificial intelligence, machine learning and deep learning algorithms are widely applied in attack detection models. Adversarial attacks against artificial intelligence models become an inevitable problem when there is a lack of research on defending cross-site scripting (XSS) attack detection models. It is therefore extremely important to design a method that can effectively harden the detection model against attacks. In this paper, we present a method based on reinforcement learning (called RLXSS), which aims to optimize the XSS detection model to defend against adversarial attacks. First, adversarial samples of the detection model are mined by an adversarial attack model based on reinforcement learning. Second, the detection model and the adversarial model are trained alternately. After each round, the newly mined adversarial samples are marked as malicious samples and used to retrain the detection model. Experimental results show that the proposed RLXSS model can successfully mine adversarial samples that escape black-box and white-box detection while retaining their aggressive features. What is more, by alternately training the detection model and the adversarial attack model, the escape rate against the detection model is continuously reduced, which indicates that the approach can improve the detection model's ability to defend against attacks.
28

Li, Maosen, Yanhua Yang, Kun Wei, Xu Yang, and Heng Huang. "Learning Universal Adversarial Perturbation by Adversarial Example." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 2 (June 28, 2022): 1350–58. http://dx.doi.org/10.1609/aaai.v36i2.20023.

Abstract:
Deep learning models have been shown to be susceptible to universal adversarial perturbation (UAP), which has aroused wide concern in the community. Compared with conventional adversarial attacks that generate adversarial samples at the instance level, UAP can fool the target model on different instances with only a single perturbation, enabling us to evaluate the robustness of the model from a more effective and accurate perspective. Existing universal attack methods fail to exploit the differences and connections between the instance and universal levels to produce dominant perturbations. To address this challenge, we propose a new universal attack method that unifies instance-specific and universal attacks from a feature perspective to generate a more dominant UAP. Specifically, we reformulate the UAP generation task as a minimax optimization problem and then utilize an instance-specific attack method to solve the minimization problem, thereby obtaining better training data for generating the UAP. At the same time, we also introduce a consistency regularizer to explore the relationship between training data, thus further improving the dominance of the generated UAP. Furthermore, our method is generic, with no additional assumptions about the training data, and hence can be applied in both data-dependent (supervised) and data-independent (unsupervised) manners. Extensive experiments demonstrate that the proposed method improves performance by a significant margin over existing methods in both data-dependent and data-independent settings. Code is available at https://github.com/lisenxd/AT-UAP.
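For orientation, a plain stochastic-gradient version of learning one universal perturbation over a whole data loader is sketched below; the paper's minimax reformulation and consistency regularizer are not reproduced, and the input shape is an assumption.

```python
# Plain sketch of universal adversarial perturbation learning: one shared delta is updated
# with signed gradients over the whole loader (simplified relative to the paper's method).
import torch

def learn_uap(model, loader, eps=10/255, step=1/255, epochs=5, shape=(3, 32, 32), device="cpu"):
    loss_fn = torch.nn.CrossEntropyLoss()
    delta = torch.zeros(1, *shape, device=device)     # a single perturbation shared by all inputs
    model.eval()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            delta.requires_grad_(True)
            loss = loss_fn(model((x + delta).clamp(0, 1)), y)
            grad = torch.autograd.grad(loss, delta)[0]
            delta = (delta.detach() + step * grad.sign()).clamp(-eps, eps)   # stay in the ball
    return delta.detach()
```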
29

Shirazi, Hossein, Bruhadeshwar Bezawada, Indrakshi Ray, and Chuck Anderson. "Directed adversarial sampling attacks on phishing detection." Journal of Computer Security 29, no. 1 (February 3, 2021): 1–23. http://dx.doi.org/10.3233/jcs-191411.

Abstract:
Phishing websites trick honest users into believing that they interact with a legitimate website and capture sensitive information, such as user names, passwords, credit card numbers, and other personal information. Machine learning is a promising technique to distinguish between phishing and legitimate websites. However, machine learning approaches are susceptible to adversarial learning attacks where a phishing sample can bypass classifiers. Our experiments on publicly available datasets reveal that the phishing detection mechanisms are vulnerable to adversarial learning attacks. We investigate the robustness of machine learning-based phishing detection in the face of adversarial learning attacks. We propose a practical approach to simulate such attacks by generating adversarial samples through direct feature manipulation. To enhance the sample’s success probability, we describe a clustering approach that guides an attacker to select the best possible phishing samples that can bypass the classifier by appearing as legitimate samples. We define the notion of vulnerability level for each dataset that measures the number of features that can be manipulated and the cost for such manipulation. Further, we clustered phishing samples and showed that some clusters of samples are more likely to exhibit higher vulnerability levels than others. This helps an adversary identify the best candidates of phishing samples to generate adversarial samples at a lower cost. Our finding can be used to refine the dataset and develop better learning models to compensate for the weak samples in the training dataset.
30

Xu, Guangquan, Guofeng Feng, Litao Jiao, Meiqi Feng, Xi Zheng, and Jian Liu. "FNet: A Two-Stream Model for Detecting Adversarial Attacks against 5G-Based Deep Learning Services." Security and Communication Networks 2021 (September 6, 2021): 1–10. http://dx.doi.org/10.1155/2021/5395705.

Abstract:
With the extensive application of artificial intelligence technology in 5G and Beyond Fifth Generation (B5G) networks, it has become a common trend for artificial intelligence to integrate into modern communication networks. Deep learning is a subset of machine learning and has recently led to significant improvements in many fields. In particular, many 5G-based services use deep learning technology to provide better services. Although deep learning is powerful, it is still vulnerable when faced with 5G-based deep learning services. Because of the nonlinearity of deep learning algorithms, slight perturbation input by the attacker will result in big changes in the output. Although many researchers have proposed methods against adversarial attacks, these methods are not always effective against powerful attacks such as CW. In this paper, we propose a new two-stream network which includes RGB stream and spatial rich model (SRM) noise stream to discover the difference between adversarial examples and clean examples. The RGB stream uses raw data to capture subtle differences in adversarial samples. The SRM noise stream uses the SRM filters to get noise features. We regard the noise features as additional evidence for adversarial detection. Then, we adopt bilinear pooling to fuse the RGB features and the SRM features. Finally, the final features are input into the decision network to decide whether the image is adversarial or not. Experimental results show that our proposed method can accurately detect adversarial examples. Even with powerful attacks, we can still achieve a detection rate of 91.3%. Moreover, our method has good transferability to generalize to other adversaries.
31

Chhabra, Anshuman, Abhishek Roy, and Prasant Mohapatra. "Suspicion-Free Adversarial Attacks on Clustering Algorithms." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 3625–32. http://dx.doi.org/10.1609/aaai.v34i04.5770.

Abstract:
Clustering algorithms are used in a large number of applications and play an important role in modern machine learning; yet, unlike in supervised learning, adversarial attacks on clustering algorithms seem to be broadly overlooked. In this paper, we seek to bridge this gap by proposing a black-box adversarial attack on clustering models for linearly separable clusters. Our attack works by perturbing a single sample close to the decision boundary, which leads to the misclustering of multiple unperturbed samples, named spill-over adversarial samples. We theoretically show the existence of such adversarial samples for K-Means clustering. Our attack is especially strong because (1) we ensure the perturbed sample is not an outlier, hence not detectable, and (2) the exact metric used for clustering is not known to the attacker. We theoretically justify that the attack can indeed be successful without knowledge of the true metric. We conclude by providing empirical results on a number of datasets and clustering algorithms. To the best of our knowledge, this is the first work that generates spill-over adversarial samples without knowledge of the true metric while ensuring that the perturbed sample is not an outlier, and that theoretically proves the above.
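The spill-over effect itself can be demonstrated on a toy one-dimensional dataset, as below; this only illustrates the phenomenon and is not the paper's black-box attack algorithm.

```python
# Toy demonstration of spill-over misclustering: nudging one sample changes the cluster
# assignment of a different, untouched sample.
import numpy as np
from sklearn.cluster import KMeans

X = np.array([[0.0], [0.2], [0.9], [1.7], [1.9]])
before = KMeans(n_clusters=2, n_init=50, random_state=0).fit_predict(X)

X_adv = X.copy()
X_adv[3] = 1.2                        # perturb only the sample at 1.7, keeping it non-outlying
after = KMeans(n_clusters=2, n_init=50, random_state=0).fit_predict(X_adv)

# The sample at 0.9 is never touched, yet its cluster membership changes once the
# optimal clustering of the perturbed data is found:
print("0.9 grouped with 0.0 before:", before[2] == before[0])   # expected: True
print("0.9 grouped with 0.0 after: ", after[2] == after[0])     # expected: False
```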
32

Bartolo, Max, Alastair Roberts, Johannes Welbl, Sebastian Riedel, and Pontus Stenetorp. "Beat the AI: Investigating Adversarial Human Annotation for Reading Comprehension." Transactions of the Association for Computational Linguistics 8 (November 2020): 662–78. http://dx.doi.org/10.1162/tacl_a_00338.

Abstract:
Innovations in annotation methodology have been a catalyst for Reading Comprehension (RC) datasets and models. One recent trend to challenge current RC models is to involve a model in the annotation process: Humans create questions adversarially, such that the model fails to answer them correctly. In this work we investigate this annotation methodology and apply it in three different settings, collecting a total of 36,000 samples with progressively stronger models in the annotation loop. This allows us to explore questions such as the reproducibility of the adversarial effect, transfer from data collected with varying model-in-the-loop strengths, and generalization to data collected without a model. We find that training on adversarially collected samples leads to strong generalization to non-adversarially collected datasets, yet with progressive performance deterioration with increasingly stronger models-in-the-loop. Furthermore, we find that stronger models can still learn from datasets collected with substantially weaker models-in-the-loop. When trained on data collected with a BiDAF model in the loop, RoBERTa achieves 39.9F1 on questions that it cannot answer when trained on SQuAD—only marginally lower than when trained on data collected using RoBERTa itself (41.0F1).
33

Maldonado-Romo, Javier, Alberto Maldonado-Romo, and Mario Aldape-Pérez. "Path Generator with Unpaired Samples Employing Generative Adversarial Networks." Sensors 22, no. 23 (December 2, 2022): 9411. http://dx.doi.org/10.3390/s22239411.

Abstract:
Interactive technologies such as augmented reality have grown in popularity, but specialized sensors and high computer power must be used to perceive and analyze the environment in order to obtain an immersive experience in real time. However, these kinds of implementations have high costs. On the other hand, machine learning has helped create alternative solutions for reducing costs, but it is limited to particular solutions because the creation of datasets is complicated. Due to this problem, this work suggests an alternate strategy for dealing with limited information: unpaired samples from known and unknown surroundings are used to generate a path on embedded devices, such as smartphones, in real time. This strategy creates a path that avoids virtual elements through physical objects. The authors suggest an architecture for creating a path using imperfect knowledge. Additionally, an augmented reality experience is used to describe the generated path, and some users tested the proposal to evaluate the performance. Finally, the primary contribution is the approximation of a path produced from a known environment by using an unpaired dataset.
34

Jiang, Yan, Guisheng Yin, Ye Yuan, and Qingan Da. "Project Gradient Descent Adversarial Attack against Multisource Remote Sensing Image Scene Classification." Security and Communication Networks 2021 (June 12, 2021): 1–13. http://dx.doi.org/10.1155/2021/6663028.

Abstract:
Deep learning technology (deeper and more optimized network structures) and remote sensing imaging (increasingly multisource and multicategory remote sensing data) have developed rapidly. Although deep convolutional neural networks (CNNs) have achieved state-of-the-art performance on remote sensing image (RSI) scene classification, the existence of adversarial attacks poses a potential security threat to CNN-based RSI scene classification. The corresponding adversarial samples can be generated by adding a small perturbation to the original images. Feeding the adversarial samples to a CNN-based classifier leads the classifier to misclassify with high confidence. To achieve a higher attack success rate against CNN-based scene classification, we introduce the projected gradient descent method to generate adversarial remote sensing images. Then, we select several mainstream CNN-based classifiers as the attacked models to demonstrate the effectiveness of our method. The experimental results show that our proposed method can dramatically reduce the classification accuracy under both untargeted and targeted attacks. Furthermore, we also evaluate the quality of the generated adversarial images through visual and quantitative comparisons. The results show that our method can generate imperceptible adversarial samples and has a stronger attack ability against RSI scene classification.
35

Xiang, Fengtao, Jiahui Xu, Wanpeng Zhang, and Weidong Wang. "A Distributed Biased Boundary Attack Method in Black-Box Attack." Applied Sciences 11, no. 21 (November 8, 2021): 10479. http://dx.doi.org/10.3390/app112110479.

Abstract:
Adversarial samples threaten the effectiveness of machine learning (ML) models and algorithms in many applications. In particular, black-box attack methods are quite close to actual scenarios. Research on black-box attack methods and the generation of adversarial samples helps to discover the defects of machine learning models and can strengthen the robustness of machine learning algorithms and models. However, such methods require frequent queries, which makes them less efficient. This paper makes improvements in the initial generation of adversarial examples and in the search for the most effective ones. In addition, it is found that some indicators can be used to detect attacks, which is a new contribution compared with our previous studies. First, the paper proposes an algorithm to generate initial adversarial samples with a smaller L2 norm; second, a combination of particle swarm optimization (PSO) and the biased boundary adversarial attack (BBA), called PSO-BBA, is proposed. Experiments are conducted on ImageNet, and PSO-BBA is compared with the baseline method. The experimental comparison shows that: (1) a distributed framework for adversarial attack methods is proposed; (2) the proposed initial point selection method can reduce the number of queries effectively; (3) compared to the original BBA, the proposed PSO-BBA algorithm accelerates convergence and improves the attack accuracy; (4) the improved PSO-BBA algorithm has preferable performance on targeted and non-targeted attacks; and (5) the mean structural similarity (MSSIM) can be used as an indicator of adversarial attack.
36

Abd Aziz, Nurhakimah, Mohd Azman Hanif Sulaiman, Azlee Zabidi, Ihsan Mohd Yassin, Megat Syahirul Amin Megat Ali, and Zairi Ismael Rizman. "Lightweight Generative Adversarial Network Fundus Image Synthesis." JOIV : International Journal on Informatics Visualization 6, no. 1-2 (May 28, 2022): 270. http://dx.doi.org/10.30630/joiv.6.1-2.924.

Abstract:
Blindness is a global health problem that affects billions of lives. Recent advancements in Artificial Intelligence (AI), particularly Deep Learning (DL), have the potential to address the blindness issue, particularly as an accurate and non-invasive technique for early detection and treatment of Diabetic Retinopathy (DR). DL-based techniques rely on extensive examples to be robust and accurate in capturing the features responsible for representing the data. However, the number of samples required for a DL classifier to learn properly is tremendous, which presents an issue in collecting and categorizing many samples. Therefore, in this paper, we present a lightweight Generative Adversarial Network (GAN) to synthesize fundus samples for training AI-based systems. The GAN was trained using samples collected from publicly available datasets and follows the structure of the recent Lightweight GAN (LGAN) architecture. The implementation and results of the LGAN training and image generation are described. Results indicate that the trained network was able to generate realistic, high-resolution samples of normal and diseased fundus images; the generated results realistically represent key structures and their placements within the samples, such as the optic disc, blood vessels, and exudates. Successful and unsuccessful generated samples were sorted manually, yielding 56.66% realistic results relative to the total generated samples. Rejected samples appear to result from inconsistencies in shape, key structures, placements, and color.
APA, Harvard, Vancouver, ISO, and other styles
37

Demetrio, Luca, Scott E. Coull, Battista Biggio, Giovanni Lagorio, Alessandro Armando, and Fabio Roli. "Adversarial EXEmples." ACM Transactions on Privacy and Security 24, no. 4 (November 30, 2021): 1–31. http://dx.doi.org/10.1145/3473039.

Full text
Abstract:
Recent work has shown that adversarial Windows malware samples (referred to as adversarial EXEmples in this article) can bypass machine learning-based detection relying on static code analysis by perturbing relatively few input bytes. To preserve malicious functionality, previous attacks either add bytes to existing non-functional areas of the file, potentially limiting their effectiveness, or require running computationally demanding validation steps to discard malware variants that do not correctly execute in sandbox environments. In this work, we overcome these limitations by developing a unifying framework that not only encompasses and generalizes previous attacks against machine-learning models, but also includes three novel attacks based on practical, functionality-preserving manipulations of the Windows Portable Executable file format. These attacks, named Full DOS, Extend, and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section. Our experimental results show that these attacks outperform existing ones in both white-box and black-box scenarios, achieving a better tradeoff in terms of evasion rate and size of the injected payload, while also enabling evasion of models that have been shown to be robust to previous attacks. To facilitate reproducibility of our findings, we open-source our framework and all the corresponding attack implementations as part of the secml-malware Python library. We conclude this work by discussing the limitations of current machine learning-based malware detectors, along with potential mitigation strategies based on embedding domain knowledge from subject-matter experts directly into the learning process.
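The paper's Full DOS, Extend, and Shift manipulations rewrite specific regions of the PE header and are implemented in secml-malware; reproducing them faithfully requires careful parsing of the Portable Executable format. As a much simpler, functionality-preserving illustration of byte injection, the sketch below only appends an adversarial payload to the file overlay (bytes past the mapped sections, which the loader ignores). File paths and the payload are placeholders; this is not one of the paper's attacks.

```python
# Simplified overlay-append sketch (not the Full DOS / Extend / Shift attacks):
# appending bytes after the end of a PE file generally does not change its
# runtime behaviour, yet can shift the scores of byte-based static detectors.
def append_overlay_payload(src_path: str, dst_path: str, payload: bytes) -> None:
    with open(src_path, "rb") as f:
        exe_bytes = f.read()
    with open(dst_path, "wb") as f:
        f.write(exe_bytes + payload)

# Usage sketch: inject 2 KB of attacker-chosen bytes.
# append_overlay_payload("sample.exe", "sample_adv.exe", b"\x00" * 2048)
```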
APA, Harvard, Vancouver, ISO, and other styles
38

Santana, Everton Jose, Ricardo Petri Silva, Bruno Bogaz Zarpelão, and Sylvio Barbon Junior. "Detecting and Mitigating Adversarial Examples in Regression Tasks: A Photovoltaic Power Generation Forecasting Case Study." Information 12, no. 10 (September 26, 2021): 394. http://dx.doi.org/10.3390/info12100394.

Full text
Abstract:
With data collected by Internet of Things sensors, deep learning (DL) models can forecast the generation capacity of photovoltaic (PV) power plants. This functionality is especially relevant for PV power operators and users, as PV plants exhibit irregular behavior related to environmental conditions. However, DL models are vulnerable to adversarial examples, which may lead to increased predictive error and wrong operational decisions. This work proposes a new scheme to detect adversarial examples and mitigate their impact on DL forecasting models. The approach is based on one-class classifiers and features extracted from the data input to the forecasting models. Tests were performed using data collected from a real-world PV power plant, along with adversarial samples generated by the Fast Gradient Sign Method under multiple attack patterns and magnitudes. One-class Support Vector Machine and Local Outlier Factor were evaluated as detectors of attacks against Long Short-Term Memory and Temporal Convolutional Network forecasting models. According to the results, the proposed scheme showed a high capability of detecting adversarial samples, with an average F1-score close to 90%. Moreover, the detection and mitigation approach strongly reduced the increase in prediction error caused by adversarial samples.
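The detection idea can be made concrete with a small sketch: fit a one-class classifier on features extracted from clean input windows and flag inputs that fall outside that distribution. The feature set (mean, standard deviation, min, max, first-difference energy), the synthetic data, and the FGSM-like perturbation below are illustrative assumptions, not the paper's exact pipeline.

```python
# One-class detection sketch for adversarial inputs to a forecasting model.
import numpy as np
from sklearn.svm import OneClassSVM

def extract_features(window: np.ndarray) -> np.ndarray:
    diff = np.diff(window)
    return np.array([window.mean(), window.std(), window.min(),
                     window.max(), np.sum(diff ** 2)])

rng = np.random.default_rng(0)
clean_windows = rng.normal(0.0, 1.0, size=(500, 48))                  # clean input windows
adv_windows = clean_windows[:50] + 0.5 * np.sign(rng.normal(size=(50, 48)))  # FGSM-like step

X_clean = np.stack([extract_features(w) for w in clean_windows])
X_adv = np.stack([extract_features(w) for w in adv_windows])

detector = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(X_clean)
print("flagged adversarial:", (detector.predict(X_adv) == -1).mean())
print("false alarms:", (detector.predict(X_clean) == -1).mean())
```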
APA, Harvard, Vancouver, ISO, and other styles
39

Man, Junfeng, Minglei Zheng, Yi Liu, Yiping Shen, and Qianqian Li. "Bearing Remaining Useful Life Prediction Based on AdCNN and CWGAN under Few Samples." Shock and Vibration 2022 (June 30, 2022): 1–17. http://dx.doi.org/10.1155/2022/1709071.

Full text
Abstract:
At present, deep learning is widely used to predict the remaining useful life (RUL) of rotating machinery in prognostics and health management (PHM). However, in actual manufacturing processes, massive rotating machinery data are not easily obtained, which leads to a decline in the prediction accuracy of data-driven deep learning methods. Firstly, a novel prognostic framework is proposed, comprising conditional Wasserstein distance-based generative adversarial networks (CWGAN) and adversarial convolutional neural networks (AdCNN), which can stably generate high-quality training samples to augment the bearing degradation dataset and address the problem of few samples. Then, bearing RUL prediction is realized by feeding the monitoring data into a one-dimensional convolutional neural network (1DCNN) for adversarial training. The reliability of the proposed method is verified on the bearing degradation dataset of the IEEE PHM 2012 data challenge. Finally, experimental results show that our approach outperforms others in RUL prediction in terms of average absolute deviation and average square root error.
APA, Harvard, Vancouver, ISO, and other styles
40

Singla, Yaman Kumar, Swapnil Parekh, Somesh Singh, Changyou Chen, Balaji Krishnamurthy, and Rajiv Ratn Shah. "MINIMAL: Mining Models for Universal Adversarial Triggers." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (June 28, 2022): 11330–39. http://dx.doi.org/10.1609/aaai.v36i10.21384.

Full text
Abstract:
It is well known that natural language models are vulnerable to adversarial attacks, which are mostly input-specific in nature. Recently, it has been shown that input-agnostic attacks also exist for NLP models, called universal adversarial triggers. However, existing methods to craft universal triggers are data-intensive: they require large amounts of data samples to generate adversarial triggers, which are typically inaccessible to attackers. For instance, previous works use 3000 data samples per class of the SNLI dataset to generate adversarial triggers. In this paper, we present a novel data-free approach, MINIMAL, to mine input-agnostic adversarial triggers from models. Using the triggers produced with our data-free algorithm, we reduce the accuracy of the Stanford Sentiment Treebank's positive class from 93.6% to 9.6%. Similarly, for the Stanford Natural Language Inference (SNLI) task, our single-word trigger reduces the accuracy of the entailment class from 90.95% to less than 0.6%. Despite being completely data-free, we obtain accuracy drops equivalent to those of data-dependent methods.
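To make the notion of an input-agnostic trigger concrete, the sketch below shows a plain greedy, data-dependent search for a single-token trigger. It is explicitly not the data-free MINIMAL algorithm; the `classify` function, the candidate vocabulary, and the evaluation sentences are placeholder assumptions.

```python
# Greedy single-token universal trigger search (data-dependent illustration).
# `classify(text) -> label` is an assumed black-box interface to the model.
def find_single_token_trigger(classify, vocab, sentences, labels):
    def accuracy(trigger):
        preds = [classify(f"{trigger} {s}") for s in sentences]
        return sum(p == y for p, y in zip(preds, labels)) / len(labels)
    # Keep the candidate token that hurts accuracy the most when prepended.
    return min(vocab, key=accuracy)
```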
APA, Harvard, Vancouver, ISO, and other styles
41

Wei, Zhipeng, Jingjing Chen, Micah Goldblum, Zuxuan Wu, Tom Goldstein, and Yu-Gang Jiang. "Towards Transferable Adversarial Attacks on Vision Transformers." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (June 28, 2022): 2668–76. http://dx.doi.org/10.1609/aaai.v36i3.20169.

Full text
Abstract:
Vision transformers (ViTs) have demonstrated impressive performance on a series of computer vision tasks, yet they still suffer from adversarial examples. In this paper, we posit that adversarial attacks on transformers should be specially tailored for their architecture, jointly considering both patches and self-attention, in order to achieve high transferability. More specifically, we introduce a dual attack framework, which contains a Pay No Attention (PNA) attack and a PatchOut attack, to improve the transferability of adversarial samples across different ViTs. We show that skipping the gradients of attention during backpropagation can generate adversarial examples with high transferability. In addition, adversarial perturbations generated by optimizing randomly sampled subsets of patches at each iteration achieve higher attack success rates than attacks using all patches. We evaluate the transferability of attacks on state-of-the-art ViTs, CNNs and robustly trained CNNs. The results of these experiments demonstrate that the proposed dual attack can greatly boost transferability between ViTs and from ViTs to CNNs. In addition, the proposed method can easily be combined with existing transfer methods to boost performance.
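The patch-subset idea can be sketched in a few lines: at every iteration the perturbation is updated only on a randomly sampled subset of 16x16 patches. The model, epsilon, step size, and keep ratio below are placeholders, and the Pay No Attention part of the paper (skipping attention gradients in the backward pass) is not shown; this is a simplified PatchOut-style attack, not the authors' implementation.

```python
# Simplified PatchOut-style attack: mask the perturbation gradient so that
# only a random subset of patches is updated at each step.
import torch

def patchout_attack(model, x, y, eps=8/255, alpha=2/255, steps=10,
                    patch=16, keep_ratio=0.5):
    delta = torch.zeros_like(x, requires_grad=True)
    n_h, n_w = x.shape[-2] // patch, x.shape[-1] // patch
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(steps):
        loss = loss_fn(model(x + delta), y)
        loss.backward()
        # Keep gradients only for a random subset of patches.
        keep = (torch.rand(n_h, n_w, device=x.device) < keep_ratio).float()
        mask = keep.repeat_interleave(patch, 0).repeat_interleave(patch, 1)
        with torch.no_grad():
            delta += alpha * (delta.grad.sign() * mask)
            delta.clamp_(-eps, eps)
        delta.grad.zero_()
    return (x + delta).detach()
```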
APA, Harvard, Vancouver, ISO, and other styles
42

Cai, Zhipeng, Zuobin Xiong, Honghui Xu, Peng Wang, Wei Li, and Yi Pan. "Generative Adversarial Networks." ACM Computing Surveys 54, no. 6 (July 2021): 1–38. http://dx.doi.org/10.1145/3459992.

Full text
Abstract:
Generative Adversarial Networks (GANs) have promoted a variety of applications in computer vision and natural language processing, among others, due to their generative models' compelling ability to generate realistic examples plausibly drawn from an existing distribution of samples. GANs not only provide impressive performance on data-generation tasks but also stimulate privacy- and security-oriented research because of their game-theoretic optimization strategy. Unfortunately, there is no comprehensive survey of GANs in privacy and security, which motivates this article to summarize the existing works systematically. These works are classified into proper categories based on privacy and security functions, and this survey conducts a comprehensive analysis of their advantages and drawbacks. Considering that the use of GANs in privacy and security is still at a very early stage and poses unique challenges that are yet to be well addressed, this article also sheds light on some potential privacy and security applications of GANs and elaborates on some future research directions.
APA, Harvard, Vancouver, ISO, and other styles
43

Hennessy, Andrew, Kenneth Clarke, and Megan Lewis. "Generative Adversarial Network Synthesis of Hyperspectral Vegetation Data." Remote Sensing 13, no. 12 (June 8, 2021): 2243. http://dx.doi.org/10.3390/rs13122243.

Full text
Abstract:
New, accurate, and generalizable methods are required to transform the ever-increasing amount of raw hyperspectral data into actionable knowledge for applications such as environmental monitoring and precision agriculture. Here, we apply advances in generative deep learning models to produce realistic synthetic hyperspectral vegetation data whilst maintaining class relationships. Specifically, a Generative Adversarial Network (GAN) is trained using the Cramér distance on two vegetation hyperspectral datasets, demonstrating the ability to approximate the distribution of the training samples. Evaluation of the synthetic spectra shows that they respect many of the statistical properties of the real spectra, conforming well to the sampled distributions of all real classes. An augmented dataset consisting of synthetic and original samples was used to train multiple classifiers, with increases in classification accuracy seen under almost all circumstances. Both datasets showed improvements in classification accuracy, ranging from a modest 0.16% for the Indian Pines set to a substantial 7.0% for the New Zealand vegetation. Selecting synthetic samples from sparse or outlying regions of the feature space of the real spectral classes demonstrated increased discriminatory power over those from more central portions of the distributions.
APA, Harvard, Vancouver, ISO, and other styles
44

Kwon, Hyun, and Jun Lee. "Diversity Adversarial Training against Adversarial Attack on Deep Neural Networks." Symmetry 13, no. 3 (March 6, 2021): 428. http://dx.doi.org/10.3390/sym13030428.

Full text
Abstract:
This paper presents research focusing on visualization and pattern recognition based on computer science. Although deep neural networks demonstrate satisfactory performance in image and voice recognition, pattern analysis, and intrusion detection, they perform poorly on adversarial examples. Introducing a small amount of noise into the original data can cause adversarial examples to be misclassified by deep neural networks, even though humans would still consider them normal. In this paper, a robust diversity adversarial training method against adversarial attacks is demonstrated. In this approach, the target model is trained on a variety of adversarial samples and therefore becomes more robust to unknown adversarial examples. In the experiments, TensorFlow was employed as the deep learning framework, while MNIST and Fashion-MNIST were used as the datasets. The results reveal that the diversity training method lowers the attack success rate by an average of 27.2% and 24.3% for various adversarial examples, while maintaining accuracy rates of 98.7% and 91.5% on the original MNIST and Fashion-MNIST data, respectively.
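A generic sketch of the idea of training on adversarial samples from more than one attack is shown below; each batch is augmented with FGSM and short-PGD examples so the model sees a wider variety of perturbations. The attacks, epsilon values, and optimizer settings are illustrative assumptions, not the paper's exact recipe.

```python
# Diversity-style adversarial training sketch: mix clean, FGSM and PGD samples.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def pgd(model, x, y, eps, alpha, steps):
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        F.cross_entropy(model(x_adv), y).backward()
        step = x_adv + alpha * x_adv.grad.sign()
        x_adv = step.clamp(min=x - eps, max=x + eps).clamp(0, 1).detach()
    return x_adv

def train_epoch(model, loader, optimizer, eps=0.1):
    model.train()
    for x, y in loader:
        x_fgsm = fgsm(model, x, y, eps)
        x_pgd = pgd(model, x, y, eps, alpha=eps / 4, steps=5)
        batch_x = torch.cat([x, x_fgsm, x_pgd])   # clean + two attack flavours
        batch_y = torch.cat([y, y, y])
        optimizer.zero_grad()
        F.cross_entropy(model(batch_x), batch_y).backward()
        optimizer.step()
```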
APA, Harvard, Vancouver, ISO, and other styles
45

Qureshi, Ayyaz Ul Haq, Hadi Larijani, Mehdi Yousefi, Ahsan Adeel, and Nhamoinesu Mtetwa. "An Adversarial Approach for Intrusion Detection Systems Using Jacobian Saliency Map Attacks (JSMA) Algorithm." Computers 9, no. 3 (July 20, 2020): 58. http://dx.doi.org/10.3390/computers9030058.

Full text
Abstract:
In today’s digital world, information systems are revolutionizing the way we connect. As people adopt and integrate intelligent systems into daily life, the risks of cyberattacks on user-specific information have grown significantly. To ensure safe communication, Intrusion Detection Systems (IDS) have been developed, often using machine learning (ML) algorithms capable of detecting malware and other network security violations. Recently, it was reported that IDS are prone to carefully crafted perturbations known as adversarial examples. To understand the impact of such attacks, in this paper we propose a novel random neural network-based adversarial intrusion detection system (RNN-ADV). The NSL-KDD dataset is used for training. For adversarial attack crafting, the Jacobian Saliency Map Attack (JSMA) algorithm is used, which identifies the feature that causes the maximum change to the benign samples with the minimum added perturbation. To check the effectiveness of the proposed adversarial scheme, the results are compared with a deep neural network, indicating that RNN-ADV performs better in terms of accuracy, precision, recall, F1 score, and training epochs.
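The saliency idea behind JSMA can be sketched compactly: the Jacobian of the class scores with respect to the input features is used to pick the feature whose small change most increases the target class while decreasing the others. The sketch below is a simplified single-feature variant (the canonical JSMA perturbs feature pairs); the model, feature dimensions, and step size are placeholders, and clipping to valid feature ranges is omitted.

```python
# Simplified JSMA-style saliency step for a tabular intrusion-detection model.
import torch

def jsma_step(model, x, target, theta=0.1):
    x = x.clone().detach()
    jac = torch.autograd.functional.jacobian(
        lambda inp: model(inp.unsqueeze(0)).squeeze(0), x)   # jac[c, i] = d score_c / d x_i
    d_target = jac[target]
    d_others = jac.sum(dim=0) - d_target
    # Salient features increase the target score while decreasing the others.
    saliency = torch.where((d_target > 0) & (d_others < 0),
                           d_target * d_others.abs(),
                           torch.zeros_like(d_target))
    idx = saliency.argmax()
    x[idx] += theta          # push the single most salient feature
    return x
```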
APA, Harvard, Vancouver, ISO, and other styles
46

Luo, Zhirui, Qingqing Li, and Jun Zheng. "A Study of Adversarial Attacks and Detection on Deep Learning-Based Plant Disease Identification." Applied Sciences 11, no. 4 (February 20, 2021): 1878. http://dx.doi.org/10.3390/app11041878.

Full text
Abstract:
Transfer learning using pre-trained deep neural networks (DNNs) has been widely used for plant disease identification recently. However, pre-trained DNNs are susceptible to adversarial attacks which generate adversarial samples causing DNN models to make wrong predictions. Successful adversarial attacks on deep learning (DL)-based plant disease identification systems could result in a significant delay of treatments and huge economic losses. This paper is the first attempt to study adversarial attacks and detection on DL-based plant disease identification. Our results show that adversarial attacks with a small number of perturbations can dramatically degrade the performance of DNN models for plant disease identification. We also find that adversarial attacks can be effectively defended by using adversarial sample detection with an appropriate choice of features. Our work will serve as a basis for developing more robust DNN models for plant disease identification and guiding the defense against adversarial attacks.
APA, Harvard, Vancouver, ISO, and other styles
47

Gu, Peng, Chengfei Zhu, Xiaosong Lan, Jie Wang, and Shuxiao Li. "Robust Image Classification with Cognitive-Driven Color Priors." Electronics 9, no. 11 (November 3, 2020): 1837. http://dx.doi.org/10.3390/electronics9111837.

Full text
Abstract:
Existing image classification methods based on convolutional neural networks usually use a large number of samples to learn classification features hierarchically, which causes over-fitting and layer-by-layer error propagation. As a result, they are vulnerable to adversarial samples generated by adding imperceptible disturbances to the input samples. To address this issue, we propose a cognitive-driven color prior model, inspired by the characteristics of human memory, to memorize the color attributes of target samples. At the inference stage, color priors are retrieved from memory and fused with the features of convolutional neural networks to achieve robust image classification. The proposed color prior model is cognitive-driven and has no trainable parameters, so it generalizes well and can effectively defend against adversarial samples. In addition, our method directly combines the features of the prior model with the classification probabilities of the convolutional neural network, without changing the network structure or parameters of the existing algorithm. It can be combined with other defenses against adversarial attacks, such as preprocessing modules like PixelDefense or adversarial training methods, to further improve the robustness of image classification. Experiments on several benchmark datasets show that the proposed method improves the anti-interference ability of image classification algorithms.
APA, Harvard, Vancouver, ISO, and other styles
48

Hashemi, Seyed Mohammad, Ruxandra Mihaela Botez, and Teodor Lucian Grigorie. "New Reliability Studies of Data-Driven Aircraft Trajectory Prediction." Aerospace 7, no. 10 (October 9, 2020): 145. http://dx.doi.org/10.3390/aerospace7100145.

Full text
Abstract:
Two main factors of six trajectory prediction models, regression accuracy and robustness to adversarial attacks, are measured in this paper using the traffic flow management system (TFMS) public dataset of fixed-wing aircraft trajectories on a specific route, provided by the Federal Aviation Administration. Six data-driven regressors with their desired architectures, from basic conventional models to advanced deep learning, are explored in terms of the accuracy and reliability of their predicted trajectories. The main contribution of the paper is characterizing the existence of adversarial samples for the aircraft trajectory problem, which is recast here as a regression task. In other words, although data-driven algorithms are currently the best regressors, it is shown that they can be attacked by adversarial samples. Adversarial samples are similar to training samples; however, they can cause finely trained regressors to make incorrect predictions, which poses a security concern for learning-based trajectory prediction algorithms. It is shown that although deep-learning-based algorithms (e.g., long short-term memory (LSTM)) have higher regression accuracy than conventional regressors (e.g., support vector regression (SVR)), they are more sensitive to carefully crafted inputs, which can be manipulated to redirect the predicted states towards incorrect ones. This poses a real security issue for aircraft, as adversarial attacks can result in intentionally and purposely designed collisions of built-in systems that may include any type of learning-based trajectory predictor.
APA, Harvard, Vancouver, ISO, and other styles
49

Liu, Xiaolei, Xiaosong Zhang, Nadra Guizani, Jiazhong Lu, Qingxin Zhu, and Xiaojiang Du. "TLTD: A Testing Framework for Learning-Based IoT Traffic Detection Systems." Sensors 18, no. 8 (August 10, 2018): 2630. http://dx.doi.org/10.3390/s18082630.

Full text
Abstract:
With the popularization of IoT (Internet of Things) devices and the continuous development of machine learning algorithms, learning-based IoT malicious traffic detection technologies have gradually matured. However, learning-based IoT traffic detection models are usually very vulnerable to adversarial samples. There is a great need for an automated testing framework to help security analysts to detect errors in learning-based IoT traffic detection systems. At present, most methods for generating adversarial samples require training parameters of known models and are only applicable to image data. To address the challenge, we propose a testing framework for learning-based IoT traffic detection systems, TLTD. By introducing genetic algorithms and some technical improvements, TLTD can generate adversarial samples for IoT traffic detection systems and can perform a black-box test on the systems.
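An illustrative genetic-algorithm search for black-box evasion, in the spirit of fuzz-testing a traffic classifier, is sketched below: candidate perturbations are scored only through the model's output probability, then selected, crossed over, and mutated. The classifier interface, feature handling, and hyperparameters are assumptions for the sketch, not TLTD's implementation.

```python
# Black-box genetic-algorithm evasion sketch against a traffic classifier.
# `predict_malicious_prob(features) -> float` is an assumed black-box interface.
import numpy as np

def ga_attack(predict_malicious_prob, x, pop_size=40, generations=50,
              mutate_scale=0.05, mutate_rate=0.1, seed=0):
    rng = np.random.default_rng(seed)
    pop = x + rng.normal(0, mutate_scale, size=(pop_size, x.size))
    for _ in range(generations):
        # Fitness: lower malicious probability is better (evasion objective).
        fitness = -np.array([predict_malicious_prob(p) for p in pop])
        order = np.argsort(fitness)[::-1]
        parents = pop[order[: pop_size // 2]]
        # Crossover: mix random pairs of surviving parents feature-wise.
        idx_a = rng.integers(0, len(parents), pop_size)
        idx_b = rng.integers(0, len(parents), pop_size)
        mask = rng.random((pop_size, x.size)) < 0.5
        children = np.where(mask, parents[idx_a], parents[idx_b])
        # Mutation: small random changes on a fraction of features.
        mutate = rng.random(children.shape) < mutate_rate
        pop = children + mutate * rng.normal(0, mutate_scale, children.shape)
    return min(pop, key=predict_malicious_prob)
```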
APA, Harvard, Vancouver, ISO, and other styles
50

Harford, Samuel, Fazle Karim, and Houshang Darabi. "Generating Adversarial Samples on Multivariate Time Series using Variational Autoencoders." IEEE/CAA Journal of Automatica Sinica 8, no. 9 (September 2021): 1523–38. http://dx.doi.org/10.1109/jas.2021.1004108.

Full text
APA, Harvard, Vancouver, ISO, and other styles
