Academic literature on the topic 'Adversarial Attacker'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Adversarial Attacker.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Adversarial Attacker"

1

Park, Sanglee, and Jungmin So. "On the Effectiveness of Adversarial Training in Defending against Adversarial Example Attacks for Image Classification." Applied Sciences 10, no. 22 (2020): 8079. http://dx.doi.org/10.3390/app10228079.

Full text
Abstract:
State-of-the-art neural network models are actively used in various fields, but it is well known that they are vulnerable to adversarial example attacks. Efforts to make models robust against adversarial example attacks have shown this to be a very difficult task. While many defense approaches were shown to be ineffective, adversarial training remains one of the promising methods. In adversarial training, the training data are augmented by “adversarial” samples generated using an attack algorithm. If the attacker uses a similar attack algorithm to generate adversarial examples, the adversarially trained network can be quite robust to the attack. However, there are numerous ways of creating adversarial examples, and the defender does not know what algorithm the attacker may use. A natural question is: can we use adversarial training to train a model robust to multiple types of attack? Previous work has shown that, when a network is trained with adversarial examples generated from multiple attack methods, the network is still vulnerable to white-box attacks where the attacker has complete access to the model parameters. In this paper, we study this question in the context of black-box attacks, which can be a more realistic assumption for practical applications. Experiments with the MNIST dataset show that adversarially training a network with an attack method helps defend against that particular attack method, but has limited effect for other attack methods. In addition, even if the defender trains a network with multiple types of adversarial examples and the attacker attacks with one of the methods, the network could lose accuracy to the attack if the attacker uses a different data augmentation strategy on the target network. These results show that it is very difficult to make a robust network using adversarial training, even in black-box settings where the attacker has restricted information on the target network.
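To make the recipe above concrete, here is a minimal sketch of FGSM-based adversarial training in PyTorch; the attack strength, optimizer, model, and data loader are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of FGSM-based adversarial training (illustrative only; the
# paper's exact setup, e.g. its MNIST architecture and epsilon, is not reproduced).
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, epsilon=0.1):
    """Craft an FGSM adversarial example for inputs x with labels y."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # One signed-gradient step, then clip back to the valid pixel range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

def adversarial_training_epoch(model, loader, optimizer, epsilon=0.1):
    """Augment each batch with adversarial examples generated by the same attack."""
    model.train()
    for x, y in loader:
        x_adv = fgsm_example(model, x, y, epsilon)
        optimizer.zero_grad()
        # Train on clean and adversarial samples jointly.
        loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
```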
APA, Harvard, Vancouver, ISO, and other styles
2

Rosenberg, Ishai, Asaf Shabtai, Yuval Elovici, and Lior Rokach. "Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain." ACM Computing Surveys 54, no. 5 (2021): 1–36. http://dx.doi.org/10.1145/3453158.

Full text
Abstract:
In recent years, machine learning algorithms, and more specifically deep learning algorithms, have been widely used in many fields, including cyber security. However, machine learning systems are vulnerable to adversarial attacks, and this limits the application of machine learning, especially in non-stationary, adversarial environments, such as the cyber security domain, where actual adversaries (e.g., malware developers) exist. This article comprehensively summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques and illuminates the risks they pose. First, the adversarial attack methods are characterized based on their stage of occurrence, and the attacker’s goals and capabilities. Then, we categorize the applications of adversarial attack and defense methods in the cyber security domain. Finally, we highlight some characteristics identified in recent research and discuss the impact of recent advancements in other adversarial learning domains on future research directions in the cyber security domain. To the best of our knowledge, this work is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain, map them in a unified taxonomy, and use the taxonomy to highlight future research directions.
APA, Harvard, Vancouver, ISO, and other styles
3

Sutanto, Richard Evan, and Sukho Lee. "Real-Time Adversarial Attack Detection with Deep Image Prior Initialized as a High-Level Representation Based Blurring Network." Electronics 10, no. 1 (2020): 52. http://dx.doi.org/10.3390/electronics10010052.

Full text
Abstract:
Several recent studies have shown that artificial intelligence (AI) systems can malfunction due to intentionally manipulated data coming through normal channels. Such manipulated data are called adversarial examples. Adversarial examples can pose a major threat to an AI-led society when an attacker uses them as a means to attack an AI system, which is called an adversarial attack. Therefore, major IT companies such as Google are now studying ways to build AI systems which are robust against adversarial attacks by developing effective defense methods. However, one reason it is difficult to establish an effective defense system is that it is difficult to know in advance what kind of adversarial attack method the opponent is using. Therefore, in this paper, we propose a method to detect adversarial noise without knowledge of the kind of adversarial noise used by the attacker. To this end, we propose a blurring network that is trained only with normal images and also use it as an initial condition of the Deep Image Prior (DIP) network. This is in contrast to other neural network based detection methods, which require the use of many adversarial noisy images for the training of the neural network. Experimental results indicate the validity of the proposed method.
APA, Harvard, Vancouver, ISO, and other styles
4

Yang, Puyudi, Jianbo Chen, Cho-Jui Hsieh, Jane-Ling Wang, and Michael Jordan. "ML-LOO: Detecting Adversarial Examples with Feature Attribution." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 6639–47. http://dx.doi.org/10.1609/aaai.v34i04.6140.

Full text
Abstract:
Deep neural networks obtain state-of-the-art performance on a series of tasks. However, they are easily fooled by adding a small adversarial perturbation to the input. The perturbation is often imperceptible to humans on image data. We observe a significant difference in feature attributions between adversarially crafted examples and original examples. Based on this observation, we introduce a new framework to detect adversarial examples through thresholding a scale estimate of feature attribution scores. Furthermore, we extend our method to include multi-layer feature attributions in order to tackle attacks that have mixed confidence levels. As demonstrated in extensive experiments, our method achieves superior performance in distinguishing adversarial examples from popular attack methods on a variety of real data sets compared to state-of-the-art detection methods. In particular, our method is able to detect adversarial examples of mixed confidence levels and to transfer between different attack methods. We also show that our method achieves competitive performance even when the attacker has complete access to the detector.
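A simplified reading of the detection idea is sketched below: compute leave-one-out feature attributions and flag inputs whose attribution scores are unusually dispersed. The masking baseline, the dispersion statistic (IQR), and the threshold calibration are assumptions for illustration, not the authors' exact procedure.

```python
# Sketch of detection by thresholding a dispersion statistic of leave-one-out
# feature attributions (a simplified reading of the ML-LOO idea; the baseline,
# statistic, and threshold choice are illustrative assumptions).
import numpy as np

def loo_attribution(predict_proba, x, target_class, baseline=0.0):
    """Attribution of each feature of a flattened input x = drop in target-class
    probability when that feature is replaced by a baseline value."""
    p_full = predict_proba(x[None, :])[0, target_class]
    scores = np.empty(x.shape[0])
    for i in range(x.shape[0]):
        x_masked = x.copy()
        x_masked[i] = baseline
        scores[i] = p_full - predict_proba(x_masked[None, :])[0, target_class]
    return scores

def is_adversarial(predict_proba, x, target_class, threshold):
    """Flag inputs whose attribution scores are unusually dispersed."""
    scores = loo_attribution(predict_proba, x, target_class)
    iqr = np.percentile(scores, 75) - np.percentile(scores, 25)  # scale estimate
    return iqr > threshold  # threshold calibrated on clean validation data
```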
APA, Harvard, Vancouver, ISO, and other styles
5

Chen, Yiding, and Xiaojin Zhu. "Optimal Attack against Autoregressive Models by Manipulating the Environment." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 3545–52. http://dx.doi.org/10.1609/aaai.v34i04.5760.

Full text
Abstract:
We describe an optimal adversarial attack formulation against autoregressive time series forecast using Linear Quadratic Regulator (LQR). In this threat model, the environment evolves according to a dynamical system; an autoregressive model observes the current environment state and predicts its future values; an attacker has the ability to modify the environment state in order to manipulate future autoregressive forecasts. The attacker's goal is to force autoregressive forecasts into tracking a target trajectory while minimizing its attack expenditure. In the white-box setting where the attacker knows the environment and forecast models, we present the optimal attack using LQR for linear models, and Model Predictive Control (MPC) for nonlinear models. In the black-box setting, we combine system identification and MPC. Experiments demonstrate the effectiveness of our attacks.
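For orientation, the white-box threat model can be written schematically as a tracking objective with an effort penalty; the notation below is illustrative and may differ from the paper's exact LQR formulation (x_t is the environment state, u_t the attacker's modification, ŷ the autoregressive forecast, y* the target trajectory, λ a trade-off weight).

```latex
\min_{u_0,\dots,u_T}\;
  \sum_{t=0}^{T} \bigl\| \hat{y}_{t+1}\!\left(\tilde{x}_{0:t}\right) - y^{*}_{t+1} \bigr\|^{2}
  + \lambda \,\bigl\| u_t \bigr\|^{2}
\quad \text{subject to} \quad \tilde{x}_t = x_t + u_t .
```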
APA, Harvard, Vancouver, ISO, and other styles
6

Zhang, Chaoning, Philipp Benz, Tooba Imtiaz, and In-So Kweon. "CD-UAP: Class Discriminative Universal Adversarial Perturbation." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 6754–61. http://dx.doi.org/10.1609/aaai.v34i04.6154.

Full text
Abstract:
A single universal adversarial perturbation (UAP) can be added to all natural images to change most of their predicted class labels. It is of high practical relevance for an attacker to have flexible control over the targeted classes to be attacked; however, the existing UAP method attacks samples from all classes. In this work, we propose a new universal attack method to generate a single perturbation that fools a target network into misclassifying only a chosen group of classes, while having limited influence on the remaining classes. Since the proposed attack generates a universal adversarial perturbation that is discriminative to targeted and non-targeted classes, we term it class discriminative universal adversarial perturbation (CD-UAP). We propose one simple yet effective algorithm framework, under which we design and compare various loss function configurations tailored for the class discriminative universal attack. The proposed approach has been evaluated with extensive experiments on various benchmark datasets. Additionally, our proposed approach achieves state-of-the-art performance for the original task of UAP attacking all classes, which demonstrates the effectiveness of our approach.
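The class-discriminative idea can be sketched as a perturbation update that raises the loss on the targeted classes while holding it down on the others; the particular loss combination, step rule, and norm bound below are assumptions, since the paper compares several configurations.

```python
# Illustrative update step for a class-discriminative universal perturbation:
# encourage misclassification on the targeted group while keeping non-targeted
# predictions unchanged (this particular loss combination is an assumption).
import torch
import torch.nn.functional as F

def cd_uap_step(model, delta, x_targeted, y_targeted, x_other, y_other,
                alpha=1.0, lr=0.01, eps=10 / 255):
    delta = delta.clone().detach().requires_grad_(True)
    # Increase loss on the targeted group (encourage misclassification)...
    loss_targeted = -F.cross_entropy(model(x_targeted + delta), y_targeted)
    # ...while keeping the non-targeted group correctly classified.
    loss_other = F.cross_entropy(model(x_other + delta), y_other)
    loss = loss_targeted + alpha * loss_other
    loss.backward()
    with torch.no_grad():
        delta = delta - lr * delta.grad.sign()
        delta = delta.clamp(-eps, eps)   # keep the universal perturbation small
    return delta.detach()
```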
APA, Harvard, Vancouver, ISO, and other styles
7

Butts, Jonathan, Mason Rice, and Sujeet Shenoi. "An Adversarial Model for Expressing Attacks on Control Protocols." Journal of Defense Modeling and Simulation: Applications, Methodology, Technology 9, no. 3 (2012): 243–55. http://dx.doi.org/10.1177/1548512911449409.

Full text
Abstract:
In this paper we present a model for expressing attacks on control protocols that involve the exchange of messages. Attacks are modeled using the notion of an attacker who can block and/or fabricate messages. These two attack mechanisms cover a variety of scenarios ranging from power grid failures to cyber attacks on oil pipelines. The model provides a method to syntactically express communication systems and attacks, which supports the development of attack and defense strategies. For demonstration purposes, an attack instance is modeled that shows how a targeted messaging attack can result in the rupture of a gas pipeline.
APA, Harvard, Vancouver, ISO, and other styles
8

Saha, Aniruddha, Akshayvarun Subramanya, and Hamed Pirsiavash. "Hidden Trigger Backdoor Attacks." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (2020): 11957–65. http://dx.doi.org/10.1609/aaai.v34i07.6871.

Full text
Abstract:
With the success of deep learning algorithms in various domains, studying adversarial attacks to secure deep models in real-world applications has become an important research topic. Backdoor attacks are a form of adversarial attack on deep networks where the attacker provides poisoned data to the victim to train the model with, and then activates the attack by showing a specific small trigger pattern at test time. Most state-of-the-art backdoor attacks either provide mislabeled poisoning data that is possible to identify by visual inspection, reveal the trigger in the poisoned data, or use noise to hide the trigger. We propose a novel form of backdoor attack where poisoned data look natural with correct labels and, more importantly, the attacker hides the trigger in the poisoned data and keeps the trigger secret until test time. We perform an extensive study on various image classification settings and show that our attack can fool the model by pasting the trigger at random locations on unseen images, although the model performs well on clean data. We also show that our proposed attack cannot be easily defended using a state-of-the-art defense algorithm for backdoor attacks.
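The test-time activation step described above (pasting a small trigger at a random location of an unseen image) can be sketched as follows; the training-time optimization that hides the trigger in the poisoned data is deliberately omitted, and the array layout is an assumption.

```python
# Test-time activation step of a trigger-based backdoor: paste a small trigger
# patch at a random location of an unseen image. The paper's training-time
# optimization that *hides* the trigger in poisoned data is not reproduced here.
import numpy as np

def paste_trigger(image, trigger, rng=None):
    """image: HxWxC array in [0, 1]; trigger: hxwxC patch chosen by the attacker."""
    rng = rng or np.random.default_rng()
    h, w = trigger.shape[:2]
    H, W = image.shape[:2]
    top = rng.integers(0, H - h + 1)
    left = rng.integers(0, W - w + 1)
    patched = image.copy()
    patched[top:top + h, left:left + w] = trigger
    return patched
```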
APA, Harvard, Vancouver, ISO, and other styles
9

Chhabra, Anshuman, Abhishek Roy, and Prasant Mohapatra. "Suspicion-Free Adversarial Attacks on Clustering Algorithms." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 3625–32. http://dx.doi.org/10.1609/aaai.v34i04.5770.

Full text
Abstract:
Clustering algorithms are used in a large number of applications and play an important role in modern machine learning; yet, adversarial attacks on clustering algorithms seem to be broadly overlooked, unlike in supervised learning. In this paper, we seek to bridge this gap by proposing a black-box adversarial attack for clustering models for linearly separable clusters. Our attack works by perturbing a single sample close to the decision boundary, which leads to the misclustering of multiple unperturbed samples, named spill-over adversarial samples. We theoretically show the existence of such adversarial samples for K-Means clustering. Our attack is especially strong as (1) we ensure the perturbed sample is not an outlier, hence not detectable, and (2) the exact metric used for clustering is not known to the attacker. We theoretically justify that the attack can indeed be successful without knowledge of the true metric. We conclude by providing empirical results on a number of datasets and clustering algorithms. To the best of our knowledge, this is the first work that generates spill-over adversarial samples without knowledge of the true metric while ensuring that the perturbed sample is not an outlier, and theoretically proves the above.
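A rough sketch of the spill-over idea: nudge one boundary sample toward a neighbouring centroid and count how many unperturbed points change cluster after re-clustering. The warm-started K-Means, step size, and stopping rule are illustrative choices, not the paper's black-box algorithm.

```python
# Spill-over style perturbation against K-Means (illustrative, not the paper's
# exact black-box algorithm): move one sample toward a foreign centroid and
# count how many *other* points change cluster after re-clustering.
import numpy as np
from sklearn.cluster import KMeans

def spill_over_attack(X, k, idx, step=0.05, n_steps=40, seed=0):
    base = KMeans(n_clusters=k, random_state=seed, n_init=10).fit(X)
    own = base.labels_[idx]
    # Nearest centroid belonging to another cluster.
    d = np.linalg.norm(base.cluster_centers_ - X[idx], axis=1)
    d[own] = np.inf
    target = int(np.argmin(d))
    direction = base.cluster_centers_[target] - X[idx]
    direction /= np.linalg.norm(direction)

    X_adv = X.copy()
    for _ in range(n_steps):
        X_adv[idx] = X_adv[idx] + step * direction
        # Re-cluster, warm-started from the clean centroids so labels stay aligned.
        new = KMeans(n_clusters=k, init=base.cluster_centers_, n_init=1).fit(X_adv)
        spill = [i for i in np.flatnonzero(new.labels_ != base.labels_) if i != idx]
        if spill:
            return X_adv, spill  # unperturbed samples that got misclustered
    return X_adv, []
```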
APA, Harvard, Vancouver, ISO, and other styles
10

Dankwa, Stephen, and Lu Yang. "Securing IoT Devices: A Robust and Efficient Deep Learning with a Mixed Batch Adversarial Generation Process for CAPTCHA Security Verification." Electronics 10, no. 15 (2021): 1798. http://dx.doi.org/10.3390/electronics10151798.

Full text
Abstract:
The Internet of Things environment (e.g., smart phones, smart televisions, and smart watches) ensures that the end user experience is easy, by connecting lives on web services via the internet. Integrating Internet of Things devices poses ethical risks related to data security, privacy, reliability and management, data mining, and knowledge exchange. An adversarial machine learning attack is a good practice to adopt to strengthen the security of text-based CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart), to withstand malicious attacks from computer hackers, and to protect Internet of Things devices and the end user’s privacy. The goal of this current study is to perform security vulnerability verification on adversarial text-based CAPTCHA, based on attacker–defender scenarios. Therefore, this study proposed computation-efficient deep learning with a mixed batch adversarial generation process model, which attempted to break the transferability attack and mitigate the problem of catastrophic forgetting in the context of adversarial attack defense. After performing K-fold cross-validation, experimental results showed that the proposed defense model achieved mean accuracies in the range of 82–84% among three gradient-based adversarial attack datasets.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Adversarial Attacker"

1

Ammouri, Kevin. "Deep Reinforcement Learning for Temperature Control in Buildings and Adversarial Attacks." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-301052.

Full text
Abstract:
Heating, Ventilation and Air Conditioning (HVAC) systems in buildings are energy consuming, and traditional methods used for building control result in energy losses. These methods cannot account for non-linear dependencies in the thermal behaviour. Deep Reinforcement Learning (DRL) is a powerful method for reaching optimal control in many different control environments. DRL utilizes neural networks to approximate the optimal actions to take given that the system is in a given state. Therefore, DRL is a promising method for building control, and this fact is highlighted by several studies. However, neural network policies are known to be vulnerable to adversarial attacks, which are small, indistinguishable changes to the input that make the network choose a sub-optimal action. Two of the main approaches to attack DRL policies are: (1) the Fast Gradient Sign Method (FGSM), which uses the gradients of the control agent’s network to conduct the attack; (2) training a DRL agent with the goal of minimizing the performance of the control agents. The aim of this thesis is to investigate different strategies for solving the building control problem with DRL using the building simulator IDA ICE. This thesis also uses the concept of adversarial machine learning by applying the attacks on the agents controlling the temperature inside the building. We first built a DRL architecture to learn how to efficiently control temperature in a building. Experiments demonstrate that exploration of the agent plays a crucial role in the training of the building control agent, and one needs to fine-tune the exploration strategy in order to achieve satisfactory performance. Finally, we tested the susceptibility of the trained DRL controllers to adversarial attacks. These tests showed, on average, that attacks trained using DRL methods have a larger impact on building control than those using FGSM, while random perturbations have almost no impact.
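The FGSM-style observation attack mentioned in the abstract can be sketched as below for a generic Q-network controller; the IDA ICE environment, the thesis' agent architecture, and the perturbation budget are not reproduced, and the function names are assumptions.

```python
# Sketch of an FGSM-style observation attack on a trained control policy
# (illustrative; the thesis' IDA ICE setup and DRL-trained attacker are not
# reproduced). The perturbed observation nudges the agent away from the action
# it would otherwise prefer.
import torch
import torch.nn.functional as F

def fgsm_on_observation(q_network, obs, epsilon=0.01):
    """obs: 1D tensor of sensor readings (e.g. temperatures); returns a perturbed copy."""
    obs_adv = obs.clone().detach().requires_grad_(True)
    q_values = q_network(obs_adv)
    best_action = q_values.argmax()
    # Maximise the loss of the currently preferred action w.r.t. the observation.
    loss = F.cross_entropy(q_values.unsqueeze(0), best_action.unsqueeze(0))
    loss.backward()
    return (obs_adv + epsilon * obs_adv.grad.sign()).detach()
```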
APA, Harvard, Vancouver, ISO, and other styles
2

Akdemir, Kahraman D. "Error Detection Techniques Against Strong Adversaries." Digital WPI, 2010. https://digitalcommons.wpi.edu/etd-dissertations/406.

Full text
Abstract:
"Side channel attacks (SCA) pose a serious threat on many cryptographic devices and are shown to be effective on many existing security algorithms which are in the black box model considered to be secure. These attacks are based on the key idea of recovering secret information using implementation specific side-channels. Especially active fault injection attacks are very effective in terms of breaking otherwise impervious cryptographic schemes. Various countermeasures have been proposed to provide security against these attacks. Double-Data-Rate (DDR) computation, dual-rail encoding, and simple concurrent error detection (CED) are the most popular of these solutions. Even though these security schemes provide sufficient security against weak adversaries, they can be broken relatively easily by a more advanced attacker. In this dissertation, we propose various error detection techniques that target strong adversaries with advanced fault injection capabilities. We first describe the advanced attacker in detail and provide its characteristics. As part of this definition, we provide a generic metric to measure the strength of an adversary. Next, we discuss various techniques for protecting finite state machines (FSMs) of cryptographic devices against active fault attacks. These techniques mainly depend on nonlinear robust codes and physically unclonable functions (PUFs). We show that due to the nonuniform behavior of FSM variables, securing FSMs using nonlinear codes is an important and difficult problem. As a solution to this problem, we propose error detection techniques based on nonlinear codes with different randomization methods. We also show how PUFs can be utilized to protect a class of FSMs. This solution provides security on the physical level as well as the logical level. In addition, for each technique, we provide possible hardware realizations and discuss area/security performance. Furthermore, we provide an error detection technique for protecting elliptic curve point addition and doubling operations against active fault attacks. This technique is based on nonlinear robust codes and provides nearly perfect error detection capability (except with exponentially small probability). We also conduct a comprehensive analysis in which we apply our technique to different elliptic curves (i.e. Weierstrass and Edwards) over different coordinate systems (i.e. affine and projective). "
APA, Harvard, Vancouver, ISO, and other styles
3

Worzyk, Steffen [Verfasser], Oliver [Akademischer Betreuer] Kramer, and Mike [Akademischer Betreuer] Preuss. "Adversarials−1: detecting adversarial inputs with internal attacks / Steffen Worzyk ; Oliver Kramer, Mike Preuss." Oldenburg : BIS der Universität Oldenburg, 2020. http://d-nb.info/1211724522/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Fält, Pontus. "ADVERSARIAL ATTACKS ON FACIAL RECOGNITION SYSTEMS." Thesis, Umeå universitet, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-175887.

Full text
Abstract:
In machine learning, neural networks have been shown to achieve state-of-the-art performance on image classification problems. However, recent work has brought up a threat to these high-performing networks in the form of adversarial attacks. These attacks fool the networks by applying small and hardly perceivable perturbations, and they question the reliability of neural networks. This paper analyzes and compares the behavior of adversarial attacks within facial recognition systems, where reliability and safety are crucial.
APA, Harvard, Vancouver, ISO, and other styles
5

Fan, Zijian. "Applying Generative Adversarial Networks for the Generation of Adversarial Attacks Against Continuous Authentication." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-289634.

Full text
Abstract:
Cybersecurity has been a hot topic over the past decades, with many approaches being proposed to secure our private information. One of the emerging approaches in security is continuous authentication, in which the computer system authenticates the user by monitoring the user's behavior during the login session. Although research on continuous authentication has made significant progress, the security of state-of-the-art continuous authentication systems is far from perfect. In this thesis, we explore the ability of classifiers used in continuous authentication and examine whether they can be bypassed by samples of user behavior produced by generative models. In our work, we considered four machine learning classifiers as the continuous authentication system: a one-class support vector machine, a support vector machine, a Gaussian mixture model, and an artificial neural network. Furthermore, we considered three generative models used to mimic user behavior: a generative adversarial network, a kernel density estimation generator, and an MMSE-based generator. The considered classifiers and generative models were tested on two continuous authentication datasets. The results show that generative adversarial networks achieved superior results, with more than 50% of the generated samples passing continuous authentication.
APA, Harvard, Vancouver, ISO, and other styles
6

Kufel, Maciej. "Adversarial Attacks against Behavioral-based Continuous Authentication." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-285537.

Full text
Abstract:
Online identity theft and session hijacking attacks have become a major hazard in recent years and are expected to become more frequent in the years to come. Unlike the traditional authentication methods, continuous authentication based on the characterization of user behavior in interactions with the computer system makes it possible to continuously verify the user's identity and mitigates the risk of such forms of malicious access. However, recent developments in the field of generative modeling can pose a significant threat to behavioral-based continuous authentication. A generative model is able to generate data with certain desired characteristics and could be used to imitate a user's behavior, allowing an attacker to bypass continuous authentication and perform an attack without being detected. In this thesis, we investigate this threat and carry out adversarial attacks against behavioral-based continuous authentication with the use of generative models. In our attack setup, an attacker has access to the data used to train the considered machine learning-based continuous authentication classifiers. The data is used to train generative models, which then generate adversarial samples aimed at impersonating an authorized user. We focus on three explicit generative models: Kernel Density Estimation, Gaussian Mixture Models and Variational Autoencoders. We test our attacks based on keystroke dynamics and smartphone touch dynamics. The chosen generative models achieved great results, where the median amount of adversarial samples which bypassed the continuous authentication systems ranged from 70 to 100% for keystroke dynamics and from 41 to 99% for smartphone touch dynamics. The results also show the relation between the size of the training data used for generative models and their performance. Moreover, we observed that the generated adversarial samples exhibited only a slightly higher variance than that of the original samples, which indicates that the imitation attack indeed resembled the authenticated user's movements. The vulnerability of behavioral-based continuous authentication to adversarial attacks discovered in this study calls for further research aimed at improving the existing security solutions.
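The kernel density estimation attack path can be sketched as fitting a KDE to leaked behavioural feature vectors, sampling synthetic vectors, and measuring the acceptance rate of the authenticator; the feature pipeline, bandwidth, and authenticator interface below are assumptions.

```python
# Sketch of the KDE-based imitation attack: fit a kernel density estimate to
# leaked behavioural features (e.g. keystroke timings), sample synthetic vectors,
# and measure how many are accepted by the continuous-authentication classifier.
import numpy as np
from sklearn.neighbors import KernelDensity

def kde_bypass_rate(leaked_features, authenticator, n_samples=1000, bandwidth=0.2):
    """authenticator(X) -> array of 1 (accepted as the genuine user) / 0 (rejected)."""
    kde = KernelDensity(kernel="gaussian", bandwidth=bandwidth).fit(leaked_features)
    forged = kde.sample(n_samples)
    accepted = authenticator(forged)
    return float(np.mean(accepted))
```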
APA, Harvard, Vancouver, ISO, and other styles
7

Burago, Igor. "Automated Attacks on Compression-Based Classifiers." Thesis, University of Oregon, 2014. http://hdl.handle.net/1794/18439.

Full text
Abstract:
Methods of compression-based text classification have proven their usefulness for various applications. However, in some classification problems, such as spam filtering, a classifier confronts one or many adversaries willing to induce errors in the classifier's judgment on certain kinds of input. In this thesis, we consider the problem of finding thrifty strategies for character-based text modification that allow an adversary to revert classifier's verdict on a given family of input texts. We propose three statistical statements of the problem that can be used by an attacker to obtain transformation models which are optimal in some sense. Evaluating these three techniques on a realistic spam corpus, we find that an adversary can transform a spam message (detectable as such by an entropy-based text classifier) into a legitimate one by generating and appending, in some cases, as few additional characters as 20% of the original length of the message.
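A toy illustration of the setting: a compression-based classifier scores a message by how well it compresses together with ham versus spam corpora, and the attacker greedily appends characters until the verdict flips. This simplifies the thesis' statistical transformation models to the bare mechanism; the corpora, alphabet, and length cap are assumptions.

```python
# Toy compression-based spam classifier plus a greedy character-appending attack
# (a simplification of the thesis' statistical transformation models).
import zlib

def compression_score(text, corpus):
    """Extra compressed bytes needed to encode `text` after `corpus` (lower = more similar)."""
    base = len(zlib.compress(corpus.encode()))
    return len(zlib.compress((corpus + text).encode())) - base

def is_spam(text, ham_corpus, spam_corpus):
    return compression_score(text, spam_corpus) < compression_score(text, ham_corpus)

def greedy_append_attack(message, ham_corpus, spam_corpus, alphabet, max_extra=None):
    """Append characters that most improve similarity to ham until the verdict flips."""
    max_extra = max_extra or len(message) // 5       # e.g. at most ~20% extra length
    attacked = message
    for _ in range(max_extra):
        if not is_spam(attacked, ham_corpus, spam_corpus):
            return attacked
        attacked += min(alphabet, key=lambda c: compression_score(attacked + c, ham_corpus))
    return attacked
```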
APA, Harvard, Vancouver, ISO, and other styles
8

Li, Yuan Man. "SIFT-based image copy-move forgery detection and its adversarial attacks." Thesis, University of Macau, 2018. http://umaclib3.umac.mo/record=b3952093.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Sun, Michael(Michael Z. ). "Local approximations of deep learning models for black-box adversarial attacks." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/121687.

Full text
Abstract:
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (pages 45-47). We study the problem of generating adversarial examples for image classifiers in the black-box setting (when the model is available only as an oracle). We unify two seemingly orthogonal and concurrent lines of work in black-box adversarial generation: query-based attacks and substitute models. In particular, we reinterpret adversarial transferability as a strong gradient prior. Based on this unification, we develop a method for integrating model-based priors into the generation of black-box attacks. The resulting algorithms significantly improve upon the current state-of-the-art in black-box adversarial attacks across a wide range of threat models.
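One query-based ingredient of such black-box attacks can be sketched as a finite-difference gradient estimator over random directions, optionally biased by a substitute-model prior; the combination scheme developed in the thesis is not reproduced, and the weighting below is an assumption.

```python
# Generic query-based gradient estimation (NES-style) with an optional prior
# direction from a substitute model; illustrative only, not the thesis' method.
import numpy as np

def estimate_gradient(loss_fn, x, n_queries=50, sigma=0.01, prior=None, prior_weight=0.5):
    """loss_fn(x) queries the target model as an oracle and returns a scalar loss."""
    grad = np.zeros_like(x)
    for _ in range(n_queries):
        u = np.random.randn(*x.shape)
        if prior is not None:
            # Bias the search direction with a (substitute-model) gradient prior.
            u = prior_weight * prior + (1 - prior_weight) * u
        grad += (loss_fn(x + sigma * u) - loss_fn(x - sigma * u)) / (2 * sigma) * u
    return grad / n_queries
```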
APA, Harvard, Vancouver, ISO, and other styles
10

Itani, Aashish. "COMPARISON OF ADVERSARIAL ROBUSTNESS OF ANN AND SNN TOWARDS BLACKBOX ATTACKS." OpenSIUC, 2021. https://opensiuc.lib.siu.edu/theses/2864.

Full text
Abstract:
In recent years, the vulnerability of neural networks to adversarial samples has gained wide attention from the machine learning and deep learning communities. The addition of small and imperceptible perturbations to input samples can cause neural network models to make incorrect predictions with high confidence. As the employment of neural networks in safety-critical applications is rising, this vulnerability of traditional neural networks to adversarial samples demands more robust alternative neural network models. The Spiking Neural Network (SNN) is a special class of ANN which mimics brain functionality by using spikes for information processing. The known advantages of SNNs include fast inference, low power consumption, and biologically plausible information processing. In this work, we experiment on the adversarial robustness of SNNs as compared to traditional ANNs, and determine whether SNNs can be a candidate to solve the security problems faced by ANNs.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Adversarial Attacker"

1

Casola, Linda, and Dionna Ali, eds. Robust Machine Learning Algorithms and Systems for Detection and Mitigation of Adversarial Attacks and Anomalies. National Academies Press, 2019. http://dx.doi.org/10.17226/25534.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Piepke, Joachim G., ed. P. Johann Frick SVD: Mao schlief in meinem Bett. Academia Verlag, 2020. http://dx.doi.org/10.5771/9783896659125.

Full text
Abstract:
The "notes" reflect a fascinating world of China's past, as experienced and reflected in an empathetic way by Johann Frick. The missionaries sought to achieve as much empathy as possible with the Chinese realities, but on the other hand could not hide their European judgment. The population appreciated the foreign missionaries when they were experienced as "praying and good people". The attachment to the Chinese people is a wonderful testimony of humanity. The last months of his stay portray a warm and vulnerable person who is emotionally attached to "his" Chinese, but who also understands his communist adversaries. In the end, he is a broken man because "his" Chinese expel him from the country.
APA, Harvard, Vancouver, ISO, and other styles
3

Freilich, Charles D. The Military Response Today. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780190602932.003.0008.

Full text
Abstract:
Chapter 7 assesses Israel’s military responses to the primary threats it now faces. It argues that Israel has gained overwhelming conventional superiority, but that it is unclear whether it could have effectively attacked Iran’s nuclear program. Israel has reduced terrorism to a level its society can tolerate, but it remains a strategic threat, nevertheless. Israel does not yet appear to have an offensive response to the Hezbollah and Hamas threats, at an acceptable price, requiring greater emphasis on defense. Conversely, there have been over 10 years of quiet with Hezbollah, partly because of the deterrence gained in 2006. Israel’s rocket defenses largely neutralized the Hamas threat during the 2014 operation, and if a similar lull is gained with Hamas, limited deterrence will have been achieved. The real challenge is Hezbollah’s rocket arsenal. Israel has become a global leader in cybersecurity but is concerned that its adversaries will narrow the gap.
APA, Harvard, Vancouver, ISO, and other styles
4

Sandler, Todd. Terrorism. Oxford University Press, 2018. http://dx.doi.org/10.1093/wentk/9780190845841.001.0001.

Full text
Abstract:
The causes and consequences of terrorism are matters of considerable debate and great interest. Spectacular events are recognized by their dates, including the 9/11 attacks in New York and Washington and the 7/7 London bombings. Many other attacks, including those in non-Western countries, receive far less attention even though they may be more frequent and cumulatively cause more casualties. In Terrorism: What Everyone Needs to Know®, leading economist Todd Sandler provides a broad overview of a persistently topical topic. The general issues he examines include what terrorism is, its causes, the roles of terrorist groups, how governments seek to counter terrorism, its economic consequences, and the future of terrorism. He focuses on the modern era and how specific motivations, ranging from nationalism/separatism to left- or right-wing extremism or religious ideals, and general conditions, such as poverty and inequality or whether a country is democratic or authoritarian, affect the frequency and costs of terrorism. The diversity of terrorist groups and type of attacks can be overwhelming, and Sandler provides a unifying framework to generate insight: strategic interaction. That is, like other organizations, terrorist groups organize to pursue goals and respond in an optimal fashion to a risky environment that can influence the group's size, its diversity of attacks, its regional location, its host country's characteristics, and the group's ideology. Terrorists also responded to enhanced security measures by altering their tactics, targets, and location. As such, they are formidable opponents to their stronger government adversaries. Governments, in turn, pursue various costly strategies to prevent terrorism, including passive barriers and active attacks against terrorists, their resources, and those who support them. Terrorism covers numerous questions on the subject and sheds light on a wide range of theoretical and empirical research.
APA, Harvard, Vancouver, ISO, and other styles
5

Morrell, Kit. ‘Certain gentlemen say…’. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198788201.003.0012.

Full text
Abstract:
This chapter examines the nature and significance of a debate between Cicero and Cato in 56 BC over the repeal of Clodius’ tribunician laws. By combining close attention to the evidence of Plutarch and Dio with a study of the prosopography and geographical movements of some members of the Roman elite, it seeks to reconstruct the arguments used on each side and suggests identifications for several of the anonymous ‘gentlemen’ elliptically identified as Cicero’s adversaries in the contemporary De prouinciis consularibus. It is argued that Cicero’s attacks on these unnamed opponents relate to a recently-held Senate meeting in which the validity of Clodius’ laws was debated and Cicero’s position rejected both on technical grounds and in the interests of constitutional stability. The repercussions of this debate for Cicero and Cato personally and for Roman politics more broadly are also discussed.
APA, Harvard, Vancouver, ISO, and other styles
6

Freilich, Charles D. The Changing Military Threat. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780190602932.003.0004.

Full text
Abstract:
Chapter 3 argues that the military threat Israel faces has changed dramatically. Attacks by Arab armies are now the least likely, and the overall threat has diminished. Israel’s conventional superiority led its adversaries to adopt a decades-long strategy of attrition designed to lead to Israel’s collapse. Iran is the most sophisticated adversary Israel has ever faced, its ongoing nuclear program the greatest threat. Hezbollah’s rockets are the primary immediate threat. Iran, Hezbollah, and Syria now constitute one joint front. They and Hamas believe that the home front is Israel’s Achilles heel, which allows them to offset its military superiority, and it has become the primary battlefield, leading to wars of mutual deterrence and destruction. Hamas’s Gaza is the embodiment of Israel’s fears of a Palestinian state. Terrorism is a strategic threat, swaying elections and negotiations. A large regional conventional buildup is underway. Cyberattacks now pose one of the greatest threats.
APA, Harvard, Vancouver, ISO, and other styles
7

Franzinelli, Mimmo. Squadrism. Edited by R. J. B. Bosworth. Oxford University Press, 2012. http://dx.doi.org/10.1093/oxfordhb/9780199594788.013.0006.

Full text
Abstract:
Squadrism characterized fascism in a decisive manner, furnishing it with a special impulse in its struggle against its political adversaries from its first beginnings until its taking of power. The roots of the squadrists were nourished by the war experience, especially so-called arditismo, the spirit that had driven young men who had fought as volunteers in assault units. The initial brigade or manipolo of squadrists was founded in Milan in the winter of 1918–19 by ex-arditi officer Ferruccio Vecchi, attracting men who were finding it difficult to resume civilian life. Bound to the charismatic figure of Benito Mussolini, the brigade acted as a bodyguard for the managing editor of the paper Il popolo d'Italia, with special fervour in defending the value of the war and in deprecating socialist pacifism. Among the movement's most significant figures was Filippo Tommaso Marinetti, who led attacks against what he called ‘pro-Bolshevik’ rallies.
APA, Harvard, Vancouver, ISO, and other styles
8

Tsygankov, Andrei P. The Dark Double. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780190919337.001.0001.

Full text
Abstract:
This book studies the role of US media in presenting American values as principally different from and superior to those of Russia. The analysis focuses on the media’s narratives, frames, and nature of criticism of the Russian side and is based on texts of editorials of selected mainstream newspapers in the United States and other media sources. The book identifies five media narratives of Russia—“transition to democracy” (1991–1995), “chaos” (1995–2005), “neo-Soviet autocracy” (2005–2013), “foreign enemy” (since 2014), and “collusion” (since 2016)—each emerging in a particular context and supported by distinct frames. The increasingly negative presentation of Russia in the US media is explained by the countries’ cultural differences, interstate competition, and polarizing domestic politics. Interstate conflicts served to consolidate the media’s presentation of Russia as “autocratic,” adversarial, and involved in “collusion” with Donald Trump to undermine American democracy. Russia’s centralization of power and anti-American attitudes also contributed to the US media presentation of Russia as a hostile Other. These internal developments did not initially challenge US values and interests and were secondary in their impact on the formation of Russia's image in America. The United States’ domestic partisan divide further exacerbated perception of Russia as a threat to American democracy. Russia’s interference in the US elections deepened the existing divide, with Russia becoming a convenient target for media attacks. Future value conflicts in world politics are likely to develop in the areas where states lack internal confidence and where their preferences over the international system conflict.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Adversarial Attacker"

1

Chen, Xuguang, Hongbin Ma, Pujun Ji, Haiting Liu, and Yan Liu. "Based on GAN Generating Chaotic Sequence." In Communications in Computer and Information Science. Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-33-4922-3_4.

Full text
Abstract:
In this paper, an adversarial encryption algorithm based on generating chaotic sequences with a GAN is proposed. Starting from the poor leakage resistance of the basic GAN-based adversarial encryption communication model, the network structure was improved. Secondly, this paper used the generative adversarial network to generate chaotic-like sequences as the key K, which was entered into the improved adversarial encryption model. The addition of the chaotic model further improved the security of the key. In the subsequent training process, the encryption and decryption parties and the attacker confront and optimize against each other, and thereby obtain a more secure encryption model. Finally, this paper analyzes the security of the proposed encryption scheme through the key and the overall model security. In subsequent experimental tests, this encryption method could eliminate the chaotic periodicity to a certain extent, and the model's anti-attack ability was also greatly improved. After leaking part of the key to the attacker, secure communication can still be maintained.
APA, Harvard, Vancouver, ISO, and other styles
2

Specht, Felix, and Jens Otto. "Hardening Deep Neural Networks in Condition Monitoring Systems against Adversarial Example Attacks." In Machine Learning for Cyber Physical Systems. Springer Berlin Heidelberg, 2020. http://dx.doi.org/10.1007/978-3-662-62746-4_11.

Full text
Abstract:
Condition monitoring systems based on deep neural networks are used for system failure detection in cyber-physical production systems. However, deep neural networks are vulnerable to attacks with adversarial examples. Adversarial examples are manipulated inputs, e.g. sensor signals, that are able to mislead a deep neural network into misclassification. A consequence of such an attack may be the manipulation of the physical production process of a cyber-physical production system without being recognized by the condition monitoring system. This can result in a serious threat to production systems and employees. This work introduces an approach named CyberProtect to prevent misclassification caused by adversarial example attacks. The approach generates adversarial examples for retraining a deep neural network, which results in a hardened variant of the deep neural network. The hardened deep neural network sustains a significantly better classification rate (82% compared to 20%) while under attack with adversarial examples, as shown by empirical results.
APA, Harvard, Vancouver, ISO, and other styles
3

Göpfert, Jan Philip, Heiko Wersing, and Barbara Hammer. "Recovering Localized Adversarial Attacks." In Artificial Neural Networks and Machine Learning – ICANN 2019: Theoretical Neural Computation. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30487-4_24.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Vasconcellos Vargas, Danilo. "Learning Systems Under Attack—Adversarial Attacks, Defenses and Beyond." In Autonomous Vehicles. Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-9255-3_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Pieters, Wolter, and Mohsen Davarynejad. "Calculating Adversarial Risk from Attack Trees: Control Strength and Probabilistic Attackers." In Data Privacy Management, Autonomous Spontaneous Security, and Security Assurance. Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-17016-9_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Kurakin, Alexey, Ian Goodfellow, Samy Bengio, et al. "Adversarial Attacks and Defences Competition." In The NIPS '17 Competition: Building Intelligent Systems. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-94042-7_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Jia, Shuai, Chao Ma, Yibing Song, and Xiaokang Yang. "Robust Tracking Against Adversarial Attacks." In Computer Vision – ECCV 2020. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58529-7_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Zhou, Mo, Zhenxing Niu, Le Wang, Qilin Zhang, and Gang Hua. "Adversarial Ranking Attack and Defense." In Computer Vision – ECCV 2020. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58568-6_46.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Yang, Yulin, and Guoquan Huang. "Map-Based Localization Under Adversarial Attacks." In Springer Proceedings in Advanced Robotics. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28619-4_54.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Worzyk, Nils, Hendrik Kahlen, and Oliver Kramer. "Physical Adversarial Attacks by Projecting Perturbations." In Lecture Notes in Computer Science. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30508-6_51.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Adversarial Attacker"

1

Nguyen, Thanh H., Arunesh Sinha, and He He. "Partial Adversarial Behavior Deception in Security Games." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/40.

Full text
Abstract:
Learning attacker behavior is an important research topic in security games as security agencies are often uncertain about attackers' decision making. Previous work has focused on developing various behavioral models of attackers based on historical attack data. However, a clever attacker can manipulate its attacks to fail such attack-driven learning, leading to ineffective defense strategies. We study attacker behavior deception with three main contributions. First, we propose a new model, named partial behavior deception model, in which there is a deceptive attacker (among multiple attackers) who controls a portion of attacks. Our model captures real-world security scenarios such as wildlife protection in which multiple poachers are present. Second, we introduce a new scalable algorithm, GAMBO, to compute an optimal deception strategy of the deceptive attacker. Our algorithm employs the projected gradient descent and uses the implicit function theorem for the computation of gradient. Third, we conduct a comprehensive set of experiments, showing a significant benefit for the attacker and loss for the defender due to attacker deception.
APA, Harvard, Vancouver, ISO, and other styles
2

Li, Yeni, Hany S. Abdel-Khalik, and Elisa Bertino. "Online Adversarial Learning of Reactor State." In 2018 26th International Conference on Nuclear Engineering. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/icone26-82372.

Full text
Abstract:
This paper is in support of our recent efforts toward designing intelligent defenses against false data injection attacks, where false data are injected into the raw data used to control the reactor. Adopting a game model between the attacker and the defender, we focus here on how the attacker may estimate reactor state in order to inject an attack that can bypass normal reactor anomaly and outlier detection checks. This approach is essential to designing defensive strategies that can anticipate the attacker's moves. More importantly, it is to alert the community that defensive methods based on approximate physics models could be bypassed by an attacker who can approximate the models in an online mode during a lie-in-wait period. For illustration, we employ a simplified point kinetics model and show how an attacker, once gaining access to the reactor raw data, i.e., instrumentation readings, can inject small perturbations to learn the reactor dynamic behavior. In our context, this equates to estimating the reactivity feedback coefficients, e.g., Doppler, Xenon poisoning, etc. We employ a non-parametric learning approach that uses alternating conditional estimation in conjunction with discrete Fourier transform and curve fitting techniques to estimate reactivity coefficients. An Iranian model of the Bushehr reactor is employed for demonstration. Results indicate that very accurate estimation of reactor state could be achieved using the proposed learning method.
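In the most stripped-down form, "learn the dynamics from injected perturbations" amounts to fitting a response coefficient by least squares, as sketched below under an assumed linear surrogate model; the paper's alternating conditional estimation, discrete Fourier transform, and curve-fitting pipeline on point-kinetics data is not reproduced.

```python
# Highly simplified illustration of system identification from injected perturbations:
# fit a single feedback coefficient of an assumed linear response model by least
# squares. This is only the generic idea, not the paper's estimation pipeline.
import numpy as np

def estimate_feedback_coefficient(perturbations, responses):
    """Assume responses ~ alpha * perturbations + noise and recover alpha."""
    A = np.asarray(perturbations).reshape(-1, 1)
    b = np.asarray(responses)
    alpha, *_ = np.linalg.lstsq(A, b, rcond=None)
    return float(alpha[0])
```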
APA, Harvard, Vancouver, ISO, and other styles
3

Gong, Yuan, Boyang Li, Christian Poellabauer, and Yiyu Shi. "Real-Time Adversarial Attacks." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/649.

Full text
Abstract:
In recent years, many efforts have demonstrated that modern machine learning algorithms are vulnerable to adversarial attacks, where small, but carefully crafted, perturbations on the input can make them fail. While these attack methods are very effective, they only focus on scenarios where the target model takes static input, i.e., an attacker can observe the entire original sample and then add a perturbation at any point of the sample. These attack approaches are not applicable to situations where the target model takes streaming input, i.e., an attacker is only able to observe past data points and add perturbations to the remaining (unobserved) data points of the input. In this paper, we propose a real-time adversarial attack scheme for machine learning models with streaming inputs.
APA, Harvard, Vancouver, ISO, and other styles
4

Ghafouri, Amin, Yevgeniy Vorobeychik, and Xenofon Koutsoukos. "Adversarial Regression for Detecting Attacks in Cyber-Physical Systems." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/524.

Full text
Abstract:
Attacks in cyber-physical systems (CPS) which manipulate sensor readings can cause enormous physical damage if undetected. Detection of attacks on sensors is crucial to mitigate this issue. We study supervised regression as a means to detect anomalous sensor readings, where each sensor's measurement is predicted as a function of other sensors. We show that several common learning approaches in this context are still vulnerable to stealthy attacks, which carefully modify readings of compromised sensors to cause desired damage while remaining undetected. Next, we model the interaction between the CPS defender and attacker as a Stackelberg game in which the defender chooses detection thresholds, while the attacker deploys a stealthy attack in response. We present a heuristic algorithm for finding an approximately optimal threshold for the defender in this game, and show that it increases system resilience to attacks without significantly increasing the false alarm rate.
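The detection side described above can be sketched as one regressor per sensor, trained to predict that sensor from the others, with an alarm when the residual exceeds a per-sensor threshold; the game-theoretic threshold selection is not reproduced, and the linear regressors are an illustrative choice.

```python
# Sketch of residual-based sensor anomaly detection: predict each sensor from the
# other sensors and raise an alarm when the residual exceeds a per-sensor threshold.
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_sensor_models(X):
    """X: (n_samples, n_sensors). One regressor per sensor, trained on the rest."""
    models = []
    for j in range(X.shape[1]):
        others = np.delete(X, j, axis=1)
        models.append(LinearRegression().fit(others, X[:, j]))
    return models

def detect(models, x, thresholds):
    """Return indices of sensors whose reading deviates too much from its prediction."""
    alarms = []
    for j, model in enumerate(models):
        pred = model.predict(np.delete(x, j).reshape(1, -1))[0]
        if abs(x[j] - pred) > thresholds[j]:
            alarms.append(j)
    return alarms
```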
APA, Harvard, Vancouver, ISO, and other styles
5

Liu, Xuanqing, and Cho-Jui Hsieh. "Rob-GAN: Generator, Discriminator, and Adversarial Attacker." In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2019. http://dx.doi.org/10.1109/cvpr.2019.01149.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Wang, Kai. "Adversarial Machine Learning with Double Oracle." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/925.

Full text
Abstract:
We aim to improve the general adversarial machine learning solution by introducing the double oracle idea from game theory, which is commonly used to solve a sequential zero-sum game, where the adversarial machine learning problem can be formulated as a zero-sum minimax problem between learner and attacker.
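A generic double-oracle skeleton for such a zero-sum learner-versus-attacker game is sketched below; the best-response and restricted-game solvers are placeholders, not the paper's concrete instantiation.

```python
# Generic double-oracle loop for a zero-sum learner-vs-attacker game. The
# best-response and equilibrium solvers are placeholder callables; strategies
# are assumed comparable with ==.
def double_oracle(initial_learner, initial_attack,
                  learner_best_response, attacker_best_response,
                  solve_zero_sum, max_iters=50):
    learner_set = [initial_learner]   # learner pure strategies found so far
    attack_set = [initial_attack]     # attacker pure strategies found so far
    for _ in range(max_iters):
        # Mixed-strategy equilibrium of the restricted game over current strategy sets.
        p_learner, p_attacker, value = solve_zero_sum(learner_set, attack_set)
        new_learner = learner_best_response(attack_set, p_attacker)
        new_attack = attacker_best_response(learner_set, p_learner)
        if new_learner in learner_set and new_attack in attack_set:
            break                      # no improving deviation for either side: converged
        if new_learner not in learner_set:
            learner_set.append(new_learner)
        if new_attack not in attack_set:
            attack_set.append(new_attack)
    return solve_zero_sum(learner_set, attack_set)
```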
APA, Harvard, Vancouver, ISO, and other styles
7

Hajaj, Chen, and Yevgeniy Vorobeychik. "Adversarial Task Assignment." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/526.

Full text
Abstract:
The problem of task assignment to workers is of long-standing fundamental importance. Examples of this include the classical problem of assigning computing tasks to nodes in a distributed computing environment, assigning jobs to robots, and crowdsourcing. Extensive research into this problem generally addresses important issues such as uncertainty and incentives. However, the problem of adversarial tampering with the task assignment process has not received as much attention. We are concerned with a particular adversarial setting in task assignment where an attacker may target a set of workers in order to prevent the tasks assigned to these workers from being completed. For the case when all tasks are homogeneous, we provide an efficient algorithm for computing the optimal assignment. When tasks are heterogeneous, we show that the adversarial assignment problem is NP-Hard, and present an algorithm for solving it approximately. Our theoretical results are accompanied by extensive simulation results showing the effectiveness of our algorithms.
APA, Harvard, Vancouver, ISO, and other styles
8

Chen, Chengwei, Jing Liu, Yuan Xie, et al. "Latent Regularized Generative Dual Adversarial Network For Abnormal Detection." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/106.

Full text
Abstract:
With the development of adversarial attacks in deep learning, it is critical for an abnormal detector not only to discover out-of-distribution samples but also to provide a defence against the adversarial attacker. Since few previous universal detectors are known to work well on both tasks, we consider both scenarios by constructing a robust and effective technique, where a sample is regarded as abnormal if it exhibits a higher image reconstruction error. Due to the training instability issues of previous generative adversarial network (GAN) based methods, in this paper we propose a dual auxiliary autoencoder to make a trade-off between the capability of the generator and the discriminator, leading to a more stable training process and high-quality image reconstruction. Moreover, to generate discriminative and robust latent representations, a mutual information estimator regarded as a latent regularizer is adopted to extract the most unique information of the target class. Overall, our generative dual adversarial network simultaneously optimizes the image reconstruction space and the latent space to improve performance. Experiments show that our model has clear superiority over cutting-edge semi-supervised abnormal detectors and achieves state-of-the-art results on the datasets.
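The decision rule implied by the abstract (flag a sample whose image reconstruction error is high) can be sketched independently of the architecture; the dual auxiliary autoencoder and the mutual-information regularizer are not reproduced, and the error metric and quantile threshold are assumptions.

```python
# Reconstruction-error anomaly scoring with a threshold calibrated on normal data
# (the dual auxiliary autoencoder itself is not reproduced; `reconstruct` stands
# in for the trained generator).
import numpy as np

def reconstruction_error(x, reconstruct):
    """reconstruct(x) is the trained generator's reconstruction of x."""
    return float(np.mean((x - reconstruct(x)) ** 2))

def calibrate_threshold(normal_samples, reconstruct, quantile=0.95):
    errors = [reconstruction_error(x, reconstruct) for x in normal_samples]
    return float(np.quantile(errors, quantile))

def is_abnormal(x, reconstruct, threshold):
    return reconstruction_error(x, reconstruct) > threshold
```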
APA, Harvard, Vancouver, ISO, and other styles
9

Chipade, Vishnu S., and Dimitra Panagou. "Herding an Adversarial Attacker to a Safe Area for Defending Safety-Critical Infrastructure." In 2019 American Control Conference (ACC). IEEE, 2019. http://dx.doi.org/10.23919/acc.2019.8814380.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Han, Hao, Li Cui, Wen Li, et al. "Radio Frequency Fingerprint Based Wireless Transmitter Identification Against Malicious Attacker: An Adversarial Learning Approach." In 2020 International Conference on Wireless Communications and Signal Processing (WCSP). IEEE, 2020. http://dx.doi.org/10.1109/wcsp49889.2020.9299859.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Adversarial Attacker"

1

Meyers, C., S. Powers, and D. Faissol. Taxonomies of Cyber Adversaries and Attacks: A Survey of Incidents and Approaches. Office of Scientific and Technical Information (OSTI), 2009. http://dx.doi.org/10.2172/967712.

Full text
APA, Harvard, Vancouver, ISO, and other styles