Academic literature on the topic 'Adversarial samples'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Adversarial samples.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Adversarial samples"

1

Liu, Faqiang, Mingkun Xu, Guoqi Li, Jing Pei, Luping Shi, and Rong Zhao. "Adversarial symmetric GANs: Bridging adversarial samples and adversarial networks." Neural Networks 133 (January 2021): 148–56. http://dx.doi.org/10.1016/j.neunet.2020.10.016.

2

Huang, Yang, Yuling Chen, Xuewei Wang, Jing Yang, and Qi Wang. "Promoting Adversarial Transferability via Dual-Sampling Variance Aggregation and Feature Heterogeneity Attacks." Electronics 12, no. 3 (February 3, 2023): 767. http://dx.doi.org/10.3390/electronics12030767.

Abstract:
At present, deep neural networks are widely used in many fields, but their vulnerability requires attention. An adversarial attack misleads a model with imperceptible perturbations generated on a source model; although white-box attacks achieve good success rates, existing adversarial samples transfer weakly in the black-box case, especially against adversarially trained defense models. Previous gradient-based work either optimizes the image before each iteration or optimizes the gradient during iteration, so the generated adversarial samples overfit the source model and transfer poorly to adversarially trained models. To solve these problems, we propose the dual-sampling variance aggregation with feature heterogeneity attack; our method optimizes both before and during iterations to produce adversarial samples with better transferability. In addition, it can be integrated with various input transformations. Extensive experiments demonstrate the effectiveness of the proposed method, which improves the attack success rate by 5.9% on normally trained models and 11.5% on adversarially trained models compared with current state-of-the-art transferability-enhancing attack methods.
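For orientation, the gradient-variance idea alluded to above can be sketched roughly in PyTorch, loosely in the style of variance-tuned momentum attacks; this is not the authors' exact dual-sampling method, and `model`, `x`, `y` and all hyperparameters are assumed placeholders:

```python
import torch
import torch.nn.functional as F

def variance_tuned_attack(model, x, y, eps=8/255, steps=10, n_neighbors=5, beta=1.5):
    """Momentum attack that aggregates gradients from samples around each
    iterate; the variance term damps source-model overfitting and tends to
    improve black-box transferability."""
    alpha = eps / steps
    x_adv = x.clone()
    momentum = torch.zeros_like(x)
    variance = torch.zeros_like(x)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        grad, = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)
        g_hat = grad + variance
        momentum = momentum + g_hat / (g_hat.abs().mean() + 1e-12)
        # re-estimate the variance term from neighbours of the current iterate
        neigh = torch.zeros_like(x)
        for _ in range(n_neighbors):
            x_n = (x_adv.detach()
                   + torch.empty_like(x).uniform_(-beta * eps, beta * eps)
                   ).requires_grad_(True)
            g_n, = torch.autograd.grad(F.cross_entropy(model(x_n), y), x_n)
            neigh += g_n
        variance = neigh / n_neighbors - grad.detach()
        with torch.no_grad():
            x_adv = x_adv + alpha * momentum.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```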
3

Ding, Yuxin, Miaomiao Shao, Cai Nie, and Kunyang Fu. "An Efficient Method for Generating Adversarial Malware Samples." Electronics 11, no. 1 (January 4, 2022): 154. http://dx.doi.org/10.3390/electronics11010154.

Abstract:
Deep learning methods have been applied to malware detection. However, deep learning models are not safe: they can easily be fooled by adversarial samples. In this paper, we study how to generate adversarial malware samples against deep learning models. Gradient-based methods are usually used to generate adversarial samples, but they do so case by case, which makes producing a large number of adversarial samples very time-consuming. To address this issue, we propose a novel method for generating adversarial malware samples. Unlike gradient-based methods, we extract feature byte sequences from benign samples; these sequences represent the characteristics of benign samples and can affect classification decisions. We inject the feature byte sequences directly into malware samples to generate adversarial samples. Because the feature byte sequences can be shared to produce different adversarial samples, a large number of adversarial samples can be generated efficiently. We compare the proposed method with random-injection and gradient-based methods. The experimental results show that the adversarial samples generated by our method have a high success rate.
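A minimal sketch of the injection step described above, assuming the feature byte sequences have already been extracted from benign samples; real executables would additionally need format-aware handling (headers, sections), which is omitted:

```python
def inject_feature_bytes(malware_bytes: bytes, feature_seqs: list[bytes]) -> bytes:
    """Append benign-derived feature byte sequences to a malware binary.
    Appending keeps the original bytes (and behavior) intact; real PE
    files would also need header/section bookkeeping, omitted here."""
    adversarial = bytearray(malware_bytes)
    for seq in feature_seqs:
        adversarial += seq  # the same sequences can be reused across samples
    return bytes(adversarial)
```

Because the sequences are shared, the same list can cheaply produce many adversarial variants, unlike per-sample gradient attacks.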
4

Zheng, Tianhang, Changyou Chen, and Kui Ren. "Distributionally Adversarial Attack." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 2253–60. http://dx.doi.org/10.1609/aaai.v33i01.33012253.

Abstract:
Recent work on adversarial attack has shown that Projected Gradient Descent (PGD) Adversary is a universal first-order adversary, and the classifier adversarially trained by PGD is robust against a wide range of first-order attacks. It is worth noting that the original objective of an attack/defense model relies on a data distribution p(x), typically in the form of risk maximization/minimization, e.g., max/min Ep(x) L(x) with p(x) some unknown data distribution and L(·) a loss function. However, since PGD generates attack samples independently for each data sample based on L(·), the procedure does not necessarily lead to good generalization in terms of risk optimization. In this paper, we achieve the goal by proposing distributionally adversarial attack (DAA), a framework to solve an optimal adversarial-data distribution, a perturbed distribution that satisfies the L∞ constraint but deviates from the original data distribution to increase the generalization risk maximally. Algorithmically, DAA performs optimization on the space of potential data distributions, which introduces direct dependency between all data points when generating adversarial samples. DAA is evaluated by attacking state-of-the-art defense models, including the adversarially-trained models provided by MIT MadryLab. Notably, DAA ranks the first place on MadryLab’s white-box leaderboards, reducing the accuracy of their secret MNIST model to 88.56% (with l∞ perturbations of ε = 0.3) and the accuracy of their secret CIFAR model to 44.71% (with l∞ perturbations of ε = 8.0). Code for the experiments is released on https://github.com/tianzheng4/Distributionally-Adversarial-Attack.
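For reference, the PGD baseline that DAA builds on (not DAA itself, whose distributional optimization couples all data points) looks roughly like this in PyTorch; `model`, `x`, `y` are assumed inputs with pixels in [0, 1]:

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.3, alpha=0.01, steps=40):
    """L-infinity PGD: random start, then repeated signed-gradient ascent
    on the loss, projected back into the eps-ball around the clean x."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                    # ascent step
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```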
5

Kim, Daeha, and Byung Cheol Song. "Contrastive Adversarial Learning for Person Independent Facial Emotion Recognition." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 7 (May 18, 2021): 5948–56. http://dx.doi.org/10.1609/aaai.v35i7.16743.

Abstract:
Since most facial emotion recognition (FER) methods significantly rely on supervision information, they have a limit to analyzing emotions independently of persons. On the other hand, adversarial learning is a well-known approach for generalized representation learning because it never requires supervision information. This paper presents a new adversarial learning for FER. In detail, the proposed learning enables the FER network to better understand complex emotional elements inherent in strong emotions by adversarially learning weak emotion samples based on strong emotion samples. As a result, the proposed method can recognize the emotions independently of persons because it understands facial expressions more accurately. In addition, we propose a contrastive loss function for efficient adversarial learning. Finally, the proposed adversarial learning scheme was theoretically verified, and it was experimentally proven to show state of the art (SOTA) performance.
6

Bhatia, Siddharth, Arjit Jain, and Bryan Hooi. "ExGAN: Adversarial Generation of Extreme Samples." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 6750–58. http://dx.doi.org/10.1609/aaai.v35i8.16834.

Abstract:
Mitigating the risk arising from extreme events is a fundamental goal with many applications, such as the modelling of natural disasters, financial crashes, epidemics, and many others. To manage this risk, a vital step is to be able to understand or generate a wide range of extreme scenarios. Existing approaches based on Generative Adversarial Networks (GANs) excel at generating realistic samples, but seek to generate typical samples, rather than extreme samples. Hence, in this work, we propose ExGAN, a GAN-based approach to generate realistic and extreme samples. To model the extremes of the training distribution in a principled way, our work draws from Extreme Value Theory (EVT), a probabilistic approach for modelling the extreme tails of distributions. For practical utility, our framework allows the user to specify both the desired extremeness measure, as well as the desired extremeness probability they wish to sample at. Experiments on real US Precipitation data show that our method generates realistic samples, based on visual inspection and quantitative measures, in an efficient manner. Moreover, generating increasingly extreme examples using ExGAN can be done in constant time (with respect to the extremeness probability τ), as opposed to the O(1/τ) time required by the baseline approach.
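The conditioning mechanism can be sketched as follows, assuming a Generalized Pareto tail has already been fitted to the training extremes and that `generator(z, extremeness)` is a trained conditional generator; both are assumptions, and the paper's exact interface may differ:

```python
import torch
from scipy.stats import genpareto

def sample_at_extremeness(generator, tau, shape, loc, scale, latent_dim=128, n=16):
    """Map the user-chosen extremeness probability tau to an extremeness
    value through the fitted Generalized Pareto tail (inverse CDF), then
    condition the generator on it; cost is constant in tau."""
    extremeness = genpareto.ppf(1.0 - tau, shape, loc=loc, scale=scale)
    z = torch.randn(n, latent_dim)
    cond = torch.full((n, 1), float(extremeness))
    return generator(z, cond)
```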
7

Zhang, Pengfei, and Xiaoming Ju. "Adversarial Sample Detection with Gaussian Mixture Conditional Generative Adversarial Networks." Mathematical Problems in Engineering 2021 (September 13, 2021): 1–18. http://dx.doi.org/10.1155/2021/8268249.

Abstract:
It is important to detect adversarial samples in the physical world that are far away from the training data distribution, since some adversarial samples can make a machine learning model produce a highly overconfident distribution at test time. We therefore propose a mechanism for detecting adversarial samples based on semisupervised generative adversarial networks (GANs) with an encoder-decoder structure; this mechanism can be applied to any pretrained neural network without changing the network's structure. The semisupervised GAN also gives us insight into the behavior of adversarial samples and their flow through the layers of a deep neural network. In the supervised scenario, the latent feature of the semisupervised GAN and the target network's logit information are used as the input of an external support vector machine classifier to detect adversarial samples. In the unsupervised scenario, we first propose a one-class classifier based on the semisupervised Gaussian mixture conditional generative adversarial network (GM-CGAN) to fit the joint feature information of normal data, and then use a discriminator network to distinguish normal data from adversarial samples. In both scenarios, experimental results show that our method outperforms the latest methods.
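A minimal sketch of the supervised detection pipeline described above, with hypothetical feature arrays standing in for the GAN latent codes and target-network logits:

```python
import numpy as np
from sklearn.svm import SVC

def train_detector(latent_feats, logits, labels):
    """Supervised variant sketched from the abstract: concatenate the
    semisupervised GAN's latent features with the target network's logits
    and fit an SVM that separates clean (0) from adversarial (1) inputs."""
    X = np.concatenate([latent_feats, logits], axis=1)
    detector = SVC(kernel="rbf", probability=True)
    return detector.fit(X, labels)
```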
8

Li, Xin, Xiangrui Li, Deng Pan, and Dongxiao Zhu. "Improving Adversarial Robustness via Probabilistically Compact Loss with Logit Constraints." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 10 (May 18, 2021): 8482–90. http://dx.doi.org/10.1609/aaai.v35i10.17030.

Abstract:
Convolutional neural networks (CNNs) have achieved state-of-the-art performance on various tasks in computer vision. However, recent studies demonstrate that these models are vulnerable to carefully crafted adversarial samples and suffer a significant performance drop when predicting them. Many methods have been proposed to improve adversarial robustness (e.g., adversarial training and new loss functions that learn adversarially robust feature representations). Here we offer a unique insight into the predictive behavior of CNNs: they tend to misclassify adversarial samples into the most probable false classes. This inspires us to propose a new Probabilistically Compact (PC) loss with logit constraints, which can be used as a drop-in replacement for the cross-entropy (CE) loss to improve CNNs' adversarial robustness. Specifically, the PC loss enlarges the probability gaps between the true class and false classes, while the logit constraints prevent the gaps from being melted by a small perturbation. We extensively compare our method with the state of the art on large-scale datasets under both white-box and black-box attacks to demonstrate its effectiveness. The source code is available at https://github.com/xinli0928/PC-LC.
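A hedged sketch of a PC-style loss; the paper's exact margin formulation and its separate logit-constraint term are not reproduced here, so this hinge-over-probabilities version only illustrates the gap-enlarging idea:

```python
import torch

def pc_loss(logits, targets, xi=0.05):
    """Hinge over probabilities: penalize every false class whose
    probability comes within margin xi of the true class's probability,
    thereby enlarging the true/false probability gaps."""
    probs = torch.softmax(logits, dim=1)
    true_p = probs.gather(1, targets.unsqueeze(1))            # (N, 1)
    mask = torch.ones_like(probs).scatter_(1, targets.unsqueeze(1), 0.0)
    gaps = torch.clamp(probs + xi - true_p, min=0.0) * mask   # false classes only
    return gaps.sum(dim=1).mean()
```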
9

Wang, Fangwei, Yuanyuan Lu, Changguang Wang, and Qingru Li. "Binary Black-Box Adversarial Attacks with Evolutionary Learning against IoT Malware Detection." Wireless Communications and Mobile Computing 2021 (August 30, 2021): 1–9. http://dx.doi.org/10.1155/2021/8736946.

Abstract:
5G is about to open Pandora's box of security threats to the Internet of Things (IoT). Key technologies introduced by the 5G network, such as network function virtualization and edge computing, bring new security threats and risks to the Internet infrastructure, so stronger detection of and defense against malware are required. Nowadays, deep learning (DL) is widely used in malware detection, and recent research has demonstrated that adversarial attacks pose a hazard to DL-based models. The key to enhancing the attack resistance of malware detection systems is generating effective adversarial samples. However, many existing methods for generating adversarial samples rely on manual feature extraction or white-box models, which makes them inapplicable in real scenarios. This paper presents an effective binary manipulation-based attack framework that generates adversarial samples with an evolutionary learning algorithm. The framework chooses appropriate action sequences to modify malicious samples so that the modified malware can successfully circumvent the detection system. The evolutionary algorithm adaptively simplifies the modification actions and makes the adversarial samples more targeted. Our approach efficiently generates adversarial samples without human intervention; the generated samples can effectively defeat DL-based malware detection models while preserving the executability and malicious behavior of the original malware. We apply the generated adversarial samples to attack the detection engines of VirusTotal. Experimental results show that they reach an evasion success rate of 47.8%, outperforming other attack methods. By adding adversarial samples to the training process, we retrain the MalConv network and show that its detection accuracy improves by 10.3%.
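The evolutionary search over modification-action sequences might look like the following toy sketch; the action set, `apply_actions`, and `detector_score` are all hypothetical stand-ins for the paper's binary-manipulation framework:

```python
import random

ACTIONS = ["append_bytes", "add_section", "rename_section", "pad_overlay"]  # illustrative

def evolve_evasion(apply_actions, detector_score, pop_size=20, generations=30, seq_len=6):
    """Evolve action sequences that lower the detector's maliciousness
    score. apply_actions(seq) must return a functionality-preserving
    mutated binary; detector_score maps a binary to a score in [0, 1]."""
    population = [[random.choice(ACTIONS) for _ in range(seq_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=lambda s: detector_score(apply_actions(s)))
        parents = ranked[: pop_size // 2]           # keep the most evasive half
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, seq_len)
            child = a[:cut] + b[cut:]               # one-point crossover
            if random.random() < 0.3:               # point mutation
                child[random.randrange(seq_len)] = random.choice(ACTIONS)
            children.append(child)
        population = parents + children
    return min(population, key=lambda s: detector_score(apply_actions(s)))
```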
10

Hu, Yongjin, Jin Tian, and Jun Ma. "A Novel Way to Generate Adversarial Network Traffic Samples against Network Traffic Classification." Wireless Communications and Mobile Computing 2021 (August 23, 2021): 1–12. http://dx.doi.org/10.1155/2021/7367107.

Abstract:
Network traffic classification technologies can be used by attackers to implement network monitoring and then launch traffic analysis or website fingerprinting attacks. To prevent such attacks, we propose a novel way to generate adversarial network traffic samples from the defender's perspective. By adding perturbation to normal network traffic, a kind of adversarial traffic is formed that causes misclassification when attackers perform traffic classification with deep convolutional neural networks (CNNs). The paper carries the concept of adversarial samples from image recognition over to network traffic classification and applies several different methods to generate adversarial traffic samples. In the experiments, the LeNet-5 CNN is selected as the attackers' classification model, and the Vgg16 CNN is used to test the transferability of the generated adversarial network traffic; the results show the effectiveness of the adversarial traffic samples.
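A one-step FGSM sketch of the kind of perturbation described, treating a flow as a 2-D byte matrix scaled to [0, 1] and fed to a CNN classifier (an assumed input format); transferability would then be checked by scoring the returned tensor on a second model such as a Vgg16:

```python
import torch
import torch.nn.functional as F

def fgsm_traffic(model, flow, label, eps=0.03):
    """One-step FGSM: perturb a flow (rendered as a 2-D byte matrix,
    values scaled to [0, 1]) in the direction that increases the loss."""
    flow = flow.detach().clone().requires_grad_(True)
    loss = F.cross_entropy(model(flow), label)
    loss.backward()
    return (flow + eps * flow.grad.sign()).clamp(0, 1).detach()
```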

Dissertations / Theses on the topic "Adversarial samples"

1

Khoda, Mahbub. "Robust Mobile Malware Detection." Thesis, Federation University Australia, 2020. http://researchonline.federation.edu.au/vital/access/HandleResolver/1959.17/176412.

Abstract:
The increasing popularity and use of smartphones and hand-held devices have made them the most popular target for malware attackers. Researchers have proposed machine learning-based models to automatically detect malware attacks on these devices. Since these models learn application behaviors solely from the extracted features, choosing an appropriate and meaningful feature set is one of the most crucial steps for designing an effective mobile malware detection system. There are four categories of features for mobile applications. Previous works have taken arbitrary combinations of these categories to design models, resulting in sub-optimal performance. This thesis systematically investigates the individual impact of these feature categories on mobile malware detection systems. Feature categories that complement each other are investigated and categories that add redundancy to the feature space (thereby degrading the performance) are analyzed. In the process, the combination of feature categories that provides the best detection results is identified. Ensuring reliability and robustness of the above-mentioned malware detection systems is of utmost importance as newer techniques to break down such systems continue to surface. Adversarial attack is one such evasive attack that can bypass a detection system by carefully morphing a malicious sample even though the sample was originally correctly identified by the same system. Self-crafted adversarial samples can be used to retrain a model to defend against such attacks. However, randomly using too many such samples, as is currently done in the literature, can further degrade detection performance. This work proposed two intelligent approaches to retrain a classifier through the intelligent selection of adversarial samples. The first approach adopts a distance-based scheme where the samples are chosen based on their distance from malware and benign cluster centers while the second selects the samples based on a probability measure derived from a kernel-based learning method. The second method achieved a 6% improvement in terms of accuracy. To ensure practical deployment of malware detection systems, it is necessary to keep the real-world data characteristics in mind. For example, the benign applications deployed in the market greatly outnumber malware applications. However, most studies have assumed a balanced data distribution. Also, techniques to handle imbalanced data in other domains cannot be applied directly to mobile malware detection since they generate synthetic samples with broken functionality, making them invalid. In this regard, this thesis introduces a novel synthetic over-sampling technique that ensures valid sample generation. This technique is subsequently combined with a dynamic cost function in the learning scheme that automatically adjusts minority class weight during model training which counters the bias towards the majority class and stabilizes the model. This hybrid method provided a 9% improvement in terms of F1-score. Aiming to design a robust malware detection system, this thesis extensively studies machine learning-based mobile malware detection in terms of best feature category combination, resilience against evasive attacks, and practical deployment of detection models. Given the increasing technological advancements in mobile and hand-held devices, this study will be very useful for designing robust cybersecurity systems to ensure safe usage of these devices.
Doctor of Philosophy
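The thesis's first, distance-based selection scheme might be sketched as follows; the exact selection criterion is an assumption (here, preferring adversarial samples whose distances to the two cluster centers nearly tie, i.e. boundary-adjacent samples):

```python
import numpy as np

def select_adversarial_samples(adv_feats, benign_center, malware_center, k=100):
    """Distance-based selection, sketched: rank self-crafted adversarial
    samples by how evenly they sit between the benign and malware cluster
    centers, and keep the k most boundary-adjacent ones for retraining."""
    d_benign = np.linalg.norm(adv_feats - benign_center, axis=1)
    d_malware = np.linalg.norm(adv_feats - malware_center, axis=1)
    ambiguity = np.abs(d_benign - d_malware)  # small => near the boundary
    return np.argsort(ambiguity)[:k]
```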
2

SHIH, HUI-KANG, and 施彙康. "Decoupled Training of Generative Adversarial Networks with Noisy Samples." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/35fuz2.

3

YANG, HAO-XIANG, and 楊皓翔. "Surface Defect Detection of Scarce Samples Based on Deep Learning Model and Generative Adversarial Network." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/evzn27.

Abstract:
Master's thesis, National Taipei University of Technology, Graduate Institute of Automation Technology, academic year 107 (2018–2019).
In traditional automated optical inspection (AOI), surface defect detection for different targets usually requires detection algorithms and procedures specified by field experts. To address this problem, this thesis trains a deep learning model for surface defects and uses data augmentation and a generative adversarial network (GAN) to enrich the training dataset, since defect samples are typically scarce in surface defect detection. Traditionally, the training dataset is expanded through simple augmentation techniques such as cropping, rotating, and flipping input images to improve the model's performance and generalization ability; however, these techniques often induce overfitting of the defect model. This thesis first obtains rich, qualified defect images through active learning, then feeds the filtered defect images into the GAN to further enlarge the training dataset. The Fréchet Inception Distance (FID) is used to judge the difference between input and generated images, and the images with the lowest FID are stored as the training dataset for the surface defect model. This dataset effectively decreases the overkill rate and missed detection rate of the trained model. Finally, the deep learning model for surface detection is verified on a public dataset and on images captured by an AOI instrument in the real world. The experimental results show that the model achieves equal detection accuracy and performance whether trained with the large raw dataset or with the dataset expanded by traditional data augmentation and the GAN.
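The FID-based filtering step can be sketched as below, assuming Inception features have already been extracted for the real and generated defect images:

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(real_feats, gen_feats):
    """Frechet Inception Distance between two (N, D) feature arrays:
    ||mu_r - mu_g||^2 + Tr(C_r + C_g - 2 (C_r C_g)^(1/2))."""
    mu_r, mu_g = real_feats.mean(axis=0), gen_feats.mean(axis=0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_g = np.cov(gen_feats, rowvar=False)
    covmean = sqrtm(cov_r @ cov_g)
    if np.iscomplexobj(covmean):         # numerical noise can yield tiny
        covmean = covmean.real           # imaginary parts; drop them
    return float(np.sum((mu_r - mu_g) ** 2) + np.trace(cov_r + cov_g - 2 * covmean))
```

Batches of generated defect images with the lowest FID against the real defect set would then be kept as additional training data.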

Book chapters on the topic "Adversarial samples"

1

Samanta, Suranjana, and Sameep Mehta. "Generating Adversarial Text Samples." In Lecture Notes in Computer Science, 744–49. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-76941-7_71.

2

Shinde, Sandip, Jatan Loya, Shreya Lunkad, Harsh Pandey, Manas Nagaraj, and Khushali Daga. "Robust Adversarial Training for Detection of Adversarial Samples." In Advances in Intelligent Systems and Computing, 501–12. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-0475-2_44.

3

Jere, Malhar, Sandro Herbig, Christine Lind, and Farinaz Koushanfar. "Principal Component Properties of Adversarial Samples." In Communications in Computer and Information Science, 58–66. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-62144-5_5.

4

Ding, Jue, Jun Yin, Jingyu Dun, Wanwan Zhang, and Yayun Wang. "Attacking Frequency Information with Enhanced Adversarial Networks to Generate Adversarial Samples." In Advances in Visual Computing, 61–73. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-20713-6_5.

5

Liu, Yubo, Yihua Luo, Qiaoming Deng, and Xuanxing Zhou. "Exploration of Campus Layout Based on Generative Adversarial Network." In Proceedings of the 2020 DigitalFUTURES, 169–78. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-33-4400-6_16.

Abstract:
This paper explores an idea and method for using deep learning with a small number of samples to realize campus layout generation. From the architect's perspective, we construct two small campus layout datasets through artificial screening reflecting the preferences of specific architects. These datasets are used to train a Pix2Pix model to automatically generate a campus layout given the campus boundary and surrounding roads. Analysis of the experimental results shows that, provided the collected samples are effectively screened, deep learning with even a small dataset can achieve good results.
6

Zhou, Qifei, Rong Zhang, Bo Wu, Weiping Li, and Tong Mo. "Detection by Attack: Detecting Adversarial Samples by Undercover Attack." In Computer Security – ESORICS 2020, 146–64. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-59013-0_8.

7

Eivazpour, Z., and Mohammad Reza Keyvanpour. "Adversarial Samples for Improving Performance of Software Defect Prediction Models." In Data Science: From Research to Application, 299–310. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-37309-2_24.

8

Martinez, Erick Eduardo Bernal, Bella Oh, Feng Li, and Xiao Luo. "Evading Deep Neural Network and Random Forest Classifiers by Generating Adversarial Samples." In Foundations and Practice of Security, 143–55. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-18419-3_10.

9

Deng, Wanyu, Hao Li, Yina Zhao, and Shuqi Ye. "Photo Mask Defect Detection Based on Generative Adversarial Network and Positive Samples." In Advances in Natural Computation, Fuzzy Systems and Knowledge Discovery, 892–903. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-89698-0_92.

10

Pavate, Aruna, and Rajesh Bansode. "Design and Analysis of Adversarial Samples in Safety–Critical Environment: Disease Prediction System." In Artificial Intelligence on Medical Data, 349–61. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-0151-5_29.


Conference papers on the topic "Adversarial samples"

1

Guo, Xiaohui, Richong Zhang, Yaowei Zheng, and Yongyi Mao. "Robust Regularization with Adversarial Labelling of Perturbed Samples." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/343.

Abstract:
Recent research has suggested that the predictive accuracy of a neural network may be in tension with its adversarial robustness. This presents challenges in designing effective regularization schemes that also provide strong adversarial robustness. Revisiting Vicinal Risk Minimization (VRM) as a unifying regularization principle, we propose Adversarial Labelling of Perturbed Samples (ALPS) as a regularization scheme that aims at improving the generalization ability and adversarial robustness of the trained model. ALPS trains neural networks with synthetic samples formed by perturbing each authentic input sample towards another one, along with an adversarially assigned label. The ALPS regularization objective is formulated as a min-max problem, in which the outer problem minimizes an upper bound of the VRM loss, and the inner problem is L1-ball-constrained adversarial labelling on the perturbed samples. The analytic solution to the induced inner maximization problem is elegantly derived, which enables computational efficiency. Experiments on the SVHN, CIFAR-10, CIFAR-100 and Tiny-ImageNet datasets show that ALPS achieves state-of-the-art regularization performance while also serving as an effective adversarial training scheme.
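A very loose sketch of the ALPS construction; the paper derives the inner label maximizer analytically, so the simple fixed-mass label shift below is only a heuristic stand-in, and `lam`, `eps` are assumed hyperparameters:

```python
import torch
import torch.nn.functional as F

def alps_batch(x, y, num_classes, lam=0.8, eps=0.1):
    """Loose ALPS-style sketch: perturb each sample towards a shuffled
    partner, then shift eps of label mass from the true class to the
    partner class as a stand-in for the adversarially assigned label."""
    idx = torch.randperm(x.size(0))
    x_pert = lam * x + (1 - lam) * x[idx]                # perturb toward partner
    y_true = F.one_hot(y, num_classes).float()
    y_partner = F.one_hot(y[idx], num_classes).float()
    base = lam * y_true + (1 - lam) * y_partner          # vicinal soft label
    y_adv = base + eps * (y_partner - y_true)            # adversarial label shift
    return x_pert, y_adv.clamp(0, 1)
```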
2

Wu, Weibin, Yuxin Su, Michael R. Lyu, and Irwin King. "Improving the Transferability of Adversarial Samples with Adversarial Transformations." In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2021. http://dx.doi.org/10.1109/cvpr46437.2021.00891.

3

Ni, Yao, Dandan Song, Xi Zhang, Hao Wu, and Lejian Liao. "CAGAN: Consistent Adversarial Training Enhanced GANs." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/359.

Abstract:
Generative adversarial networks (GANs) have shown impressive results; however, the generator and the discriminator are optimized in a finite parameter space, which means their performance still needs improvement. In this paper, we propose a novel approach of adversarial training between one generator and an exponential number of critics, sampled from the original discriminative neural network via dropout. As the discrepancy between the outputs of different sub-networks on the same sample can measure the consistency of these critics, we encourage the critics to be consistent on real samples and inconsistent on generated samples during training, while the generator is trained to generate samples that are consistent across critics. Experimental results demonstrate that our method obtains state-of-the-art Inception scores of 9.17 and 10.02 on supervised CIFAR-10 and unsupervised STL-10 image generation tasks, respectively, and achieves competitive semi-supervised classification results on several benchmarks. Importantly, our method maintains stability in training and alleviates mode collapse.
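The dropout-sampled critics and their consistency signal can be sketched as follows; the exact discrepancy measure and loss weighting are assumptions:

```python
import torch

def critic_discrepancy(disc, x, n_critics=4):
    """Sample several critics from one discriminator by keeping dropout
    stochastic (train mode) and measure how much their outputs disagree."""
    disc.train()  # dropout active => each forward pass is a different critic
    outs = torch.stack([disc(x) for _ in range(n_critics)], dim=0)
    return outs.var(dim=0).mean()

# Sketch of the training signal: the discriminator is pushed to make this
# discrepancy small on real batches and large on generated ones, while the
# generator minimizes it on its own samples.
```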
4

Cao, Huayang, Wei Kong, Xiaohui Kuang, and Jianwen Tian. "Detecting Adversarial Samples with Neuron Coverage." In 2021 IEEE International Conference on Computer Science, Artificial Intelligence and Electronic Engineering (CSAIEE). IEEE, 2021. http://dx.doi.org/10.1109/csaiee54046.2021.9543451.

5

Yu, Yacong, Lei Zhang, Liquan Chen, and Zhongyuan Qin. "Adversarial Samples Generation Based on RMSProp." In 2021 IEEE 6th International Conference on Signal and Image Processing (ICSIP). IEEE, 2021. http://dx.doi.org/10.1109/icsip52628.2021.9688946.

6

Liang, Bin, Hongcheng Li, Miaoqiang Su, Pan Bian, Xirong Li, and Wenchang Shi. "Deep Text Classification Can be Fooled." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/585.

Abstract:
In this paper, we present an effective method to craft text adversarial samples, revealing one important yet underestimated fact that DNN-based text classifiers are also prone to adversarial sample attack. Specifically, confronted with different adversarial scenarios, the text items that are important for classification are identified by computing the cost gradients of the input (white-box attack) or generating a series of occluded test samples (black-box attack). Based on these items, we design three perturbation strategies, namely insertion, modification, and removal, to generate adversarial samples. The experiment results show that the adversarial samples generated by our method can successfully fool both state-of-the-art character-level and word-level DNN-based text classifiers. The adversarial samples can be perturbed to any desirable classes without compromising their utilities. At the same time, the introduced perturbation is difficult to be perceived.
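Toy versions of the three perturbation strategies, operating on a token list; `hot_idx` stands for the classification-relevant item located via gradients (white-box) or occlusion tests (black-box), and the inserted filler word is purely illustrative:

```python
import random

def perturb_text(tokens, hot_idx, mode):
    """Apply one of the three strategies to the classification-relevant
    token at hot_idx (found via input gradients or occlusion tests)."""
    tokens = list(tokens)
    if mode == "removal":
        tokens.pop(hot_idx)
    elif mode == "modification" and len(tokens[hot_idx]) > 1:
        w = tokens[hot_idx]
        i = random.randrange(len(w) - 1)
        tokens[hot_idx] = w[:i] + w[i + 1] + w[i] + w[i + 2:]  # swap two chars
    elif mode == "insertion":
        tokens.insert(hot_idx, "actually")  # illustrative filler word
    return tokens
```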
7

Ma, Yun, Xudong Mao, Yangbin Chen, and Qing Li. "Mixing Up Real Samples and Adversarial Samples for Semi-Supervised Learning." In 2020 International Joint Conference on Neural Networks (IJCNN). IEEE, 2020. http://dx.doi.org/10.1109/ijcnn48605.2020.9207038.

8

Wei, JiaLi, Ming Fan, Xi Xu, Ang Jia, Zhou Xu, and Lei Xue. "Interpretation Area-Guided Detection of Adversarial Samples." In 2020 IEEE 20th International Conference on Software Quality, Reliability and Security Companion (QRS-C). IEEE, 2020. http://dx.doi.org/10.1109/qrs-c51114.2020.00049.

9

Qiu, Zhongxi, Xiaofeng He, Lingna Chen, Hualing Liu, and LianPeng Zuo. "Generating Adversarial Samples with Convolutional Neural Network." In the 2019 the International Conference. New York, New York, USA: ACM Press, 2019. http://dx.doi.org/10.1145/3357777.3357791.

10

Bonnet, Benoît, Teddy Furon, and Patrick Bas. "What if Adversarial Samples were Digital Images?" In IH&MMSec '20: ACM Workshop on Information Hiding and Multimedia Security. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3369412.3395062.


Reports on the topic "Adversarial samples"

1

Eydenberg, Michael, Kanad Khanna, and Ryan Custer. Effects of Jacobian Matrix Regularization on the Detectability of Adversarial Samples. Office of Scientific and Technical Information (OSTI), December 2020. http://dx.doi.org/10.2172/1763568.
