Academic literature on the topic 'Data poisoning attacks'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Data poisoning attacks.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Data poisoning attacks"

1

Billah, Mustain, Adnan Anwar, Ziaur Rahman, and Syed Md Galib. "Bi-Level Poisoning Attack Model and Countermeasure for Appliance Consumption Data of Smart Homes." Energies 14, no. 13 (June 28, 2021): 3887. http://dx.doi.org/10.3390/en14133887.

Full text
Abstract:
Accurate building energy prediction is useful in various applications, ranging from building energy automation and management to optimal storage control. However, vulnerabilities should be considered when designing building energy prediction models, as intelligent attackers can deliberately influence model performance using sophisticated attack models. These attacks may degrade prediction accuracy, which in turn affects the efficiency and performance of building energy management systems. In this paper, we investigate the impact of bi-level poisoning attacks on regression models of energy usage obtained from household appliances. Furthermore, an effective countermeasure against poisoning attacks on the prediction model is proposed. Attacks and defenses are evaluated on a benchmark dataset. Experimental results show that an intelligent cyber-attacker can poison the prediction model to manipulate its decisions; however, the proposed solution defends against such poisoning attacks more effectively than other benchmark techniques.
APA, Harvard, Vancouver, ISO, and other styles
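
Illustrative aside: the minimal Python sketch below shows how poisoned consumption labels can skew a regression model and how a robust estimator partially resists them. It is not the paper's bi-level attack or countermeasure; the synthetic features, coefficients, poisoning rate, and the use of scikit-learn's HuberRegressor as a stand-in defense are assumptions made here for illustration only.

    # Hedged sketch: naive label poisoning of a consumption-style regression
    # model and a simple robust-regression defense (NOT the paper's method).
    import numpy as np
    from sklearn.linear_model import LinearRegression, HuberRegressor
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(0)
    w_true = np.array([2.0, -1.0, 0.5, 3.0])
    X = rng.uniform(0, 1, size=(500, 4))              # stand-in appliance features
    y = X @ w_true + rng.normal(0, 0.1, 500)          # synthetic consumption readings

    poison_rate = 0.10                                # attacker controls 10% of labels
    idx = rng.choice(len(y), size=int(poison_rate * len(y)), replace=False)
    y_poisoned = y.copy()
    y_poisoned[idx] += 10.0                           # inflate the selected readings

    clean = LinearRegression().fit(X, y)
    attacked = LinearRegression().fit(X, y_poisoned)
    defended = HuberRegressor().fit(X, y_poisoned)    # outlier-resistant estimator

    X_test = rng.uniform(0, 1, size=(200, 4))
    y_test = X_test @ w_true
    for name, model in [("clean", clean), ("poisoned", attacked), ("robust", defended)]:
        print(name, round(mean_squared_error(y_test, model.predict(X_test)), 4))
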
2

Chen, Jian, Xuxin Zhang, Rui Zhang, Chen Wang, and Ling Liu. "De-Pois: An Attack-Agnostic Defense against Data Poisoning Attacks." IEEE Transactions on Information Forensics and Security 16 (2021): 3412–25. http://dx.doi.org/10.1109/tifs.2021.3080522.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Saha, Aniruddha, Akshayvarun Subramanya, and Hamed Pirsiavash. "Hidden Trigger Backdoor Attacks." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 11957–65. http://dx.doi.org/10.1609/aaai.v34i07.6871.

Full text
Abstract:
With the success of deep learning algorithms in various domains, studying adversarial attacks to secure deep models in real-world applications has become an important research topic. Backdoor attacks are a form of adversarial attack on deep networks in which the attacker provides poisoned data for the victim to train the model with, and then activates the attack by presenting a specific small trigger pattern at test time. Most state-of-the-art backdoor attacks either provide mislabeled poisoned data that can be identified by visual inspection, reveal the trigger in the poisoned data, or use noise to hide the trigger. We propose a novel form of backdoor attack in which the poisoned data look natural with correct labels and, more importantly, the attacker hides the trigger in the poisoned data and keeps it secret until test time. We perform an extensive study on various image classification settings and show that our attack can fool the model by pasting the trigger at random locations on unseen images, even though the model performs well on clean data. We also show that our proposed attack cannot easily be defended against using a state-of-the-art defense algorithm for backdoor attacks.
APA, Harvard, Vancouver, ISO, and other styles
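
Illustrative aside: the sketch below shows the conventional visible-trigger backdoor poisoning that this paper improves upon; the paper's hidden-trigger, clean-label attack is deliberately not reproduced here. The image size, patch size, target class, and poisoning rate are arbitrary assumptions.

    # Hedged sketch of ordinary (visible-trigger, mislabeled) backdoor data
    # poisoning, for intuition only; the paper's attack hides the trigger and
    # keeps labels correct.
    import numpy as np

    def add_trigger(img, patch=3, value=1.0):
        """Paste a small square trigger in the bottom-right corner."""
        out = img.copy()
        out[-patch:, -patch:] = value
        return out

    rng = np.random.default_rng(0)
    images = rng.uniform(0, 1, size=(1000, 28, 28))   # stand-in training images
    labels = rng.integers(0, 10, size=1000)

    target_class, poison_rate = 7, 0.05
    idx = rng.choice(len(images), size=int(poison_rate * len(images)), replace=False)
    for i in idx:
        images[i] = add_trigger(images[i])            # embed the trigger
        labels[i] = target_class                      # relabel toward the attacker's class

    # At test time, pasting the same trigger on any input is meant to force a
    # backdoored model to output `target_class`.
    probe = add_trigger(rng.uniform(0, 1, size=(28, 28)))
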
4

Dunn, Corey, Nour Moustafa, and Benjamin Turnbull. "Robustness Evaluations of Sustainable Machine Learning Models against Data Poisoning Attacks in the Internet of Things." Sustainability 12, no. 16 (August 10, 2020): 6434. http://dx.doi.org/10.3390/su12166434.

Full text
Abstract:
With the increasing popularity of Internet of Things (IoT) platforms, the cyber security of these platforms is a highly active area of research. One key technology underpinning smart IoT systems is machine learning, which classifies and predicts events from large-scale data in IoT networks. Machine learning is susceptible to cyber attacks, particularly data poisoning attacks that inject false data when training machine learning models and thereby degrade their performance. Developing trustworthy machine learning models that are resilient and sustainable against data poisoning attacks in IoT networks is an ongoing research challenge. We studied the effects of data poisoning attacks on machine learning models, including the gradient boosting machine, random forest, naive Bayes, and feed-forward deep learning, to determine the extent to which the models can be trusted and considered reliable in real-world IoT settings. In the training phase, a label modification function is developed to manipulate legitimate input classes. The function is applied at data poisoning rates of 5%, 10%, 20%, and 30%, allowing the poisoned models to be compared and their performance degradation to be shown. The machine learning models were evaluated using the ToN_IoT and UNSW-NB15 datasets, as they include a wide variety of recent legitimate and attack vectors. The experimental results revealed that the models' performance degrades, in terms of accuracy and detection rates, if the number of trained normal observations is not significantly larger than the number of poisoned observations. At data poisoning rates of 30% or greater, model performance is significantly degraded.
APA, Harvard, Vancouver, ISO, and other styles
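
Illustrative aside: the abstract above describes a label-modification function applied at poisoning rates of 5-30%. The sketch below mimics that protocol on synthetic data with a random forest; the feature dimensions, class rule, and scikit-learn model are assumptions, whereas the real study used the ToN_IoT and UNSW-NB15 datasets and several other learners.

    # Hedged sketch of label-flipping at several poisoning rates, echoing the
    # evaluation protocol described above (synthetic data stands in for the
    # IoT datasets used in the paper).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    def poison_labels(y, rate, n_classes, rng):
        """Flip a `rate` fraction of labels to a different, randomly chosen class."""
        y = y.copy()
        idx = rng.choice(len(y), size=int(rate * len(y)), replace=False)
        y[idx] = (y[idx] + rng.integers(1, n_classes, size=len(idx))) % n_classes
        return y

    rng = np.random.default_rng(0)
    X = rng.normal(size=(5000, 20))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)           # synthetic "normal vs attack" labels
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    for rate in [0.0, 0.05, 0.10, 0.20, 0.30]:        # rates used in the study
        y_bad = poison_labels(y_tr, rate, n_classes=2, rng=rng)
        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_bad)
        acc = accuracy_score(y_te, clf.predict(X_te))
        print(f"poison rate {rate:.0%}: test accuracy {acc:.3f}")
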
5

Weerasinghe, Sandamal, Tansu Alpcan, Sarah M. Erfani, and Christopher Leckie. "Defending Support Vector Machines Against Data Poisoning Attacks." IEEE Transactions on Information Forensics and Security 16 (2021): 2566–78. http://dx.doi.org/10.1109/tifs.2021.3058771.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Kalajdzic, Kenan, Ahmed Patel, and Mona Taghavi. "Two Methods for Active Detection and Prevention of Sophisticated ARP-Poisoning Man-in-the-Middle Attacks on Switched Ethernet LANs." International Journal of Digital Crime and Forensics 3, no. 3 (July 2011): 50–60. http://dx.doi.org/10.4018/jdcf.2011070104.

Full text
Abstract:
This paper describes two novel methods for active detection and prevention of ARP-poisoning-based Man-in-the-Middle (MitM) attacks on switched Ethernet LANs. As a stateless and inherently insecure protocol, ARP has been used as a relatively simple means to launch Denial-of-Service (DoS) and MitM attacks on local networks and multiple solutions have been proposed to detect and prevent these types of attacks. MitM attacks are particularly dangerous, because they allow an attacker to monitor network traffic and break the integrity of data being sent over the network. The authors introduce backwards compatible techniques to prevent ARP poisoning and deal with sophisticated stealth MitM programs.
APA, Harvard, Vancouver, ISO, and other styles
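
Illustrative aside: the paper proposes two active detection and prevention methods; as a much simpler point of comparison, the sketch below passively flags ARP replies whose sender MAC disagrees with a locally maintained IP-to-MAC table. The use of the scapy package and the hard-coded known_hosts table are assumptions for illustration, not the authors' techniques.

    # Hedged sketch: passive ARP-spoofing detection against a static IP-to-MAC
    # table (requires the `scapy` package and root privileges to sniff).
    from scapy.all import ARP, sniff

    known_hosts = {
        "192.168.1.1": "aa:bb:cc:dd:ee:ff",   # hypothetical gateway MAC
    }

    def check_arp(pkt):
        if ARP in pkt and pkt[ARP].op == 2:   # op 2 = "is-at", i.e. an ARP reply
            ip, mac = pkt[ARP].psrc, pkt[ARP].hwsrc
            expected = known_hosts.get(ip)
            if expected is not None and expected.lower() != mac.lower():
                print(f"Possible ARP poisoning: {ip} claimed by {mac}, expected {expected}")

    sniff(filter="arp", prn=check_arp, store=0)
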
7

Alsuwat, Emad, Hatim Alsuwat, Marco Valtorta, and Csilla Farkas. "Adversarial data poisoning attacks against the PC learning algorithm." International Journal of General Systems 49, no. 1 (June 17, 2019): 3–31. http://dx.doi.org/10.1080/03081079.2019.1630401.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Prabadevi, B., and N. Jeyanthi. "TSCBA-A Mitigation System for ARP Cache Poisoning Attacks." Cybernetics and Information Technologies 18, no. 4 (November 1, 2018): 75–93. http://dx.doi.org/10.2478/cait-2018-0049.

Full text
Abstract:
Address Resolution Protocol (ARP) cache poisoning results in numerous attacks. A novel mitigation system for ARP cache poisoning presented here avoids ARP cache poisoning attacks by introducing timestamps and counters in the ARP messages and ARP data tables. The system is evaluated based on criteria specified by the researchers and abnormal packets.
APA, Harvard, Vancouver, ISO, and other styles
9

Zhou, Xingchen, Ming Xu, Yiming Wu, and Ning Zheng. "Deep Model Poisoning Attack on Federated Learning." Future Internet 13, no. 3 (March 14, 2021): 73. http://dx.doi.org/10.3390/fi13030073.

Full text
Abstract:
Federated learning is a novel distributed learning framework that enables thousands of participants to collaboratively construct a deep learning model. In order to protect the confidentiality of the training data, the information shared between the server and participants is limited to model parameters. However, this setting is vulnerable to model poisoning attacks, since the participants have permission to modify the model parameters. In this paper, we perform a systematic investigation of such threats in federated learning and propose a novel optimization-based model poisoning attack. Unlike existing methods, we primarily focus on the effectiveness, persistence, and stealth of attacks. Numerical experiments demonstrate that the proposed method not only achieves a high attack success rate but is also stealthy enough to bypass two existing defense methods.
APA, Harvard, Vancouver, ISO, and other styles
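
Illustrative aside: the toy federated-averaging round below shows why sharing model parameters creates a poisoning surface, with one participant submitting a boosted malicious update. The linear-regression clients, boosting factor, and attacker target are assumptions; the paper's optimization-based attack is considerably more sophisticated and also optimizes for stealth.

    # Hedged sketch of model poisoning in federated averaging: nine honest
    # clients plus one attacker who submits a scaled update toward its own
    # target model (NOT the paper's optimization-based attack).
    import numpy as np

    rng = np.random.default_rng(0)
    true_w = np.array([1.0, -2.0, 0.5])
    attacker_target = np.array([5.0, 5.0, 5.0])

    def honest_update(w_global, n=200, lr=0.1, steps=5):
        """One honest client's local SGD on its own synthetic data."""
        X = rng.normal(size=(n, 3))
        y = X @ true_w + rng.normal(0, 0.1, n)
        w = w_global.copy()
        for _ in range(steps):
            w -= lr * (2 * X.T @ (X @ w - y) / n)
        return w

    w_global = np.zeros(3)
    n_clients = 10
    for _ in range(10):
        updates = [honest_update(w_global) for _ in range(n_clients - 1)]
        # Boost the malicious update so it survives averaging with honest ones.
        updates.append(w_global + n_clients * (attacker_target - w_global))
        w_global = np.mean(updates, axis=0)

    print("global model after attack:", np.round(w_global, 2), "true model:", true_w)
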
10

Aydin, Burc. "Global Characteristics of Chemical, Biological, and Radiological Poison Use in Terrorist Attacks." Prehospital and Disaster Medicine 35, no. 3 (April 2, 2020): 260–66. http://dx.doi.org/10.1017/s1049023x20000394.

Full text
Abstract:
Background: Chemical, biological, and radiological (CBR) terrorism continues to be a global threat. Studies examining the global and historical toxicological characteristics of CBR terrorism are lacking.
Methods: The Global Terrorism Database (GTD) and the RAND Database of Worldwide Terrorism Incidents (RDWTI) were searched for CBR terrorist attacks from 1970 through 2017. Events fulfilling terrorism and poisoning definitions were included. Variables of event date and location, event realization, poisonous agent type, poisoning agent, exposure route, targets, connected events, additional means of harm, disguise methods, poisonings, and casualties were analyzed, along with time trends and data gaps.
Results: A total of 446 events of CBR terrorism were included from all world regions. A trend for an increased number of events over time was observed (R² = 0.727; coefficient = 0.511). In these attacks, 4,093 people lost their lives and 31,903 were injured. Chemicals were the most commonly used type of poison (63.5%). The most commonly used poisonous agents were acids (12.3%), chlorine or chlorine compounds (11.2%), riot control agents (10.8%), cyanides (5.8%), and Bacillus anthracis (4.9%). Occurrence of poisoning was confirmed in 208 events (46.6%). The most common exposure routes were skin, mucosa, or eye (57.2%) and inhalation (47.5%). Poison was delivered with additional means of harm in 151 events (33.9%) and in a disguised way in 214 events (48.0%).
Conclusions: This study showed that CBR terrorism is an ongoing and increasingly recorded global threat involving diverse groups of poisons with additional harmful mechanisms and disguise. Industrial chemicals were used in chemical attacks. Vigilance and preparedness are needed for future CBR threats.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Data poisoning attacks"

1

"Data Poisoning Attacks on Linked Data with Graph Regularization." Master's thesis, 2019. http://hdl.handle.net/2286/R.I.53572.

Full text
Abstract:
Social media has become the standard means of communication, and its usage has increased exponentially in the last decade. Myriad social media services such as Facebook, Twitter, Snapchat, and Instagram allow people to connect freely with their friends and followers. The number of attackers who try to take advantage of this situation has also increased at an exponential rate. Every social media service has its own recommender systems and user-profiling algorithms, which use users' current information to make recommendations. The data produced by social media services is often linked data, as each item/user is usually linked with other users/items. Recommender systems, due to their ubiquitous and prominent nature, are prone to several forms of attacks. One major form of attack is poisoning the training set. Because recommender systems use current user/item information as the training set, the attacker tries to modify the training set so that the recommender system benefits the attacker or gives incorrect recommendations, thereby failing in its basic functionality. Most existing training-set attack algorithms work with "flat" attribute-value data, which is typically assumed to be independent and identically distributed (i.i.d.). However, the i.i.d. assumption does not hold for social media data, since it is inherently linked as described above. Using user similarity with a graph regularizer to morph the training data produces the best results for the attacker. This thesis demonstrates this through experiments on collaborative filtering with multiple datasets.
Dissertation/Thesis
Master's Thesis, Computer Science, 2019
APA, Harvard, Vancouver, ISO, and other styles
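
Illustrative aside: the sketch below injects fake user profiles into a rating matrix to promote a target item, which is the classic "shilling" style of training-set poisoning against collaborative filtering. The thesis itself studies a stronger attack that exploits user similarity with a graph regularizer on linked data; the matrix sizes, filler strategy, and item-mean recommender here are assumptions for illustration.

    # Hedged sketch: fake-profile injection against an item-mean recommender
    # (a stand-in for collaborative filtering; not the thesis's graph-
    # regularized attack).
    import numpy as np

    rng = np.random.default_rng(0)
    n_users, n_items, target_item = 200, 50, 7
    ratings = rng.integers(1, 6, size=(n_users, n_items)).astype(float)
    observed = rng.random((n_users, n_items)) < 0.10   # ~10% of ratings observed
    ratings[~observed] = np.nan

    def item_means(R):
        return np.nanmean(R, axis=0)

    before = item_means(ratings)[target_item]

    n_fake = 20
    fake = np.full((n_fake, n_items), np.nan)
    for i in range(n_fake):
        filler = rng.choice(n_items, size=5, replace=False)
        fake[i, filler] = 3.0                          # filler ratings for cover
        fake[i, target_item] = 5.0                     # every fake profile loves the target

    poisoned = np.vstack([ratings, fake])
    after = item_means(poisoned)[target_item]
    print(f"target item mean rating: {before:.2f} -> {after:.2f}")
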

Book chapters on the topic "Data poisoning attacks"

1

Chen, Pengpeng, Hailong Sun, and Zhijun Chen. "Data Poisoning Attacks on Crowdsourcing Learning." In Web and Big Data, 164–79. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-85896-4_14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ma, Yuzhe, Kwang-Sung Jun, Lihong Li, and Xiaojin Zhu. "Data Poisoning Attacks in Contextual Bandits." In Lecture Notes in Computer Science, 186–204. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01554-1_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Tolpegin, Vale, Stacey Truex, Mehmet Emre Gursoy, and Ling Liu. "Data Poisoning Attacks Against Federated Learning Systems." In Computer Security – ESORICS 2020, 480–501. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58951-6_24.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Tahmasebian, Farnaz, Li Xiong, Mani Sotoodeh, and Vaidy Sunderam. "Crowdsourcing Under Data Poisoning Attacks: A Comparative Study." In Data and Applications Security and Privacy XXXIV, 310–32. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-49669-2_18.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Zhou, Qi, Yizhi Ren, Tianyu Xia, Lifeng Yuan, and Linqiang Chen. "Data Poisoning Attacks on Graph Convolutional Matrix Completion." In Algorithms and Architectures for Parallel Processing, 427–39. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-38961-1_38.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Lin, Chia-Chih, and Ming-Syan Chen. "Attack Is the Best Defense: A Multi-Mode Poisoning PUF Against Machine Learning Attacks." In Advances in Knowledge Discovery and Data Mining, 176–87. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-75762-5_15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Peri, Neehar, Neal Gupta, W. Ronny Huang, Liam Fowl, Chen Zhu, Soheil Feizi, Tom Goldstein, and John P. Dickerson. "Deep k-NN Defense Against Clean-Label Data Poisoning Attacks." In Computer Vision – ECCV 2020 Workshops, 55–70. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-66415-2_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Li, Xi, David J. Miller, Zhen Xiang, and George Kesidis. "A Scalable Mixture Model Based Defense Against Data Poisoning Attacks on Classifiers." In Lecture Notes in Computer Science, 262–73. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-61725-7_31.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Usman, Muhammad, Divya Gopinath, Youcheng Sun, Yannic Noller, and Corina S. Păsăreanu. "NNrepair: Constraint-Based Repair of Neural Network Classifiers." In Computer Aided Verification, 3–25. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-81685-8_1.

Full text
Abstract:
We present NNrepair, a constraint-based technique for repairing neural network classifiers. The technique aims to fix the logic of the network at an intermediate layer or at the last layer. NNrepair first uses fault localization to find potentially faulty network parameters (such as the weights) and then performs repair using constraint solving to apply small modifications to the parameters to remedy the defects. We present novel strategies to enable precise yet efficient repair, such as inferring correctness specifications to act as oracles for intermediate-layer repair, and generation of experts for each class. We demonstrate the technique in the context of three different scenarios: (1) improving the overall accuracy of a model, (2) fixing security vulnerabilities caused by poisoning of the training data, and (3) improving the robustness of the network against adversarial attacks. Our evaluation on MNIST and CIFAR-10 models shows that NNrepair can improve accuracy by 45.56 percentage points on poisoned data and 10.40 percentage points on adversarial data. NNrepair also provides a small improvement in the overall accuracy of models, without requiring new data or re-training.
APA, Harvard, Vancouver, ISO, and other styles
10

Gupta, Viresh, and Tanmoy Chakraborty. "VIKING: Adversarial Attack on Network Embeddings via Supervised Network Poisoning." In Advances in Knowledge Discovery and Data Mining, 103–15. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-75768-7_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Data poisoning attacks"

1

Ma, Yuzhe, Xiaojin Zhu, and Justin Hsu. "Data Poisoning against Differentially-Private Learners: Attacks and Defenses." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/657.

Full text
Abstract:
Data poisoning attacks aim to manipulate the model produced by a learning algorithm by adversarially modifying the training set. We consider differential privacy as a defensive measure against this type of attack. We show that private learners are resistant to data poisoning attacks when the adversary is only able to poison a small number of items. However, this protection degrades as the adversary is allowed to poison more data. We empirically evaluate this protection by designing attack algorithms targeting objective and output perturbation learners, two standard approaches to differentially private machine learning. Experiments show that our methods are effective when the attacker is allowed to poison sufficiently many training items.
APA, Harvard, Vancouver, ISO, and other styles
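
Illustrative aside: the back-of-the-envelope sketch below makes the abstract's point concrete for the simplest possible private release, a differentially private mean computed with Laplace output perturbation. When only a few records are poisoned, the induced shift is comparable to the noise; with many poisoned records it dominates. The data range, epsilon, and poisoning counts are assumptions, and the paper's actual attacks target objective- and output-perturbation learners rather than a simple mean.

    # Hedged sketch: how much k poisoned records shift a differentially
    # private mean released with Laplace output perturbation.
    import numpy as np

    rng = np.random.default_rng(0)
    n, epsilon = 1000, 1.0
    data = rng.uniform(0, 1, n)                         # records bounded in [0, 1]

    def private_mean(x, eps):
        # Sensitivity of the mean of len(x) values in [0, 1] is 1/len(x).
        return float(np.clip(np.mean(x), 0, 1) + rng.laplace(scale=1.0 / (len(x) * eps)))

    clean_release = private_mean(data, epsilon)
    for k in [0, 5, 50, 500]:                           # number of poisoned records
        poisoned = np.concatenate([data[: n - k], np.ones(k)])   # attacker pushes values to 1.0
        shift = abs(np.mean(poisoned) - np.mean(data))
        print(f"k={k:3d}: true shift {shift:.4f}, DP noise scale {1.0 / (n * epsilon):.4f}, "
              f"released mean {private_mean(poisoned, epsilon):.4f} (clean {clean_release:.4f})")
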
2

Liu, Heng, and Gregory Ditzler. "Data Poisoning Attacks against MRMR." In ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2019. http://dx.doi.org/10.1109/icassp.2019.8683530.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Nuding, Florian, and Rudolf Mayer. "Poisoning Attacks in Federated Learning." In CODASPY '20: Tenth ACM Conference on Data and Application Security and Privacy. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3374664.3379534.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Zhang, Hengtong, Tianhang Zheng, Jing Gao, Chenglin Miao, Lu Su, Yaliang Li, and Kui Ren. "Data Poisoning Attack against Knowledge Graph Embedding." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/674.

Full text
Abstract:
Knowledge graph embedding (KGE) is a technique for learning continuous embeddings for entities and relations in the knowledge graph. Due to its benefit to a variety of downstream tasks such as knowledge graph completion, question answering and recommendation, KGE has gained significant attention recently. Despite its effectiveness in a benign environment, KGE's robustness to adversarial attacks is not well-studied. Existing attack methods on graph data cannot be directly applied to attack the embeddings of knowledge graph due to its heterogeneity. To fill this gap, we propose a collection of data poisoning attack strategies, which can effectively manipulate the plausibility of arbitrary targeted facts in a knowledge graph by adding or deleting facts on the graph. The effectiveness and efficiency of the proposed attack strategies are verified by extensive evaluations on two widely-used benchmarks.
APA, Harvard, Vancouver, ISO, and other styles
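
Illustrative aside: the brute-force numpy sketch below captures the abstract's core idea, adding a fake fact so that a subsequent training step degrades the plausibility of a targeted triple under a TransE-style scorer. The random embeddings, single-step simulation, and random candidate search are simplifying assumptions; the paper's attack strategies are more principled and also cover fact deletion.

    # Hedged sketch: pick a fake triple whose single TransE training step most
    # degrades (raises the distance score of) a targeted fact.
    import numpy as np

    rng = np.random.default_rng(0)
    n_ent, n_rel, dim, lr = 50, 10, 16, 0.1
    E = rng.normal(size=(n_ent, dim))                  # entity embeddings
    R = rng.normal(size=(n_rel, dim))                  # relation embeddings

    def score(h, r, t, E, R):
        """TransE distance: lower means the triple looks more plausible."""
        return float(np.linalg.norm(E[h] + R[r] - E[t]))

    target = (3, 2, 17)                                # (head, relation, tail) to attack
    base = score(*target, E, R)

    best_score, best_fact = base, None
    for _ in range(500):                               # random fake facts sharing the target's head
        h, r, t = target[0], int(rng.integers(n_rel)), int(rng.integers(n_ent))
        E2, R2 = E.copy(), R.copy()
        diff = E2[h] + R2[r] - E2[t]
        grad = diff / (np.linalg.norm(diff) + 1e-9)    # gradient of the fake fact's distance
        E2[h] -= lr * grad                             # one SGD step making the fake fact plausible
        R2[r] -= lr * grad
        E2[t] += lr * grad
        new = score(*target, E2, R2)
        if new > best_score:
            best_score, best_fact = new, (h, r, t)

    print(f"target score {base:.3f} -> {best_score:.3f} after adding fake fact {best_fact}")
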
5

Chen, Huiyuan, and Jing Li. "Data Poisoning Attacks on Cross-domain Recommendation." In CIKM '19: The 28th ACM International Conference on Information and Knowledge Management. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3357384.3358116.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Wallace, Eric, Tony Zhao, Shi Feng, and Sameer Singh. "Concealed Data Poisoning Attacks on NLP Models." In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg, PA, USA: Association for Computational Linguistics, 2021. http://dx.doi.org/10.18653/v1/2021.naacl-main.13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Russo, Alessio, and Alexandre Proutiere. "Poisoning Attacks against Data-Driven Control Methods." In 2021 American Control Conference (ACC). IEEE, 2021. http://dx.doi.org/10.23919/acc50511.2021.9482992.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Wu, Jun, and Jingrui He. "Indirect Invisible Poisoning Attacks on Domain Adaptation." In KDD '21: The 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3447548.3467214.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Takahashi, Tsubasa. "Indirect Adversarial Attacks via Poisoning Neighbors for Graph Convolutional Networks." In 2019 IEEE International Conference on Big Data (Big Data). IEEE, 2019. http://dx.doi.org/10.1109/bigdata47090.2019.9006004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Ou, Yifan, and Reza Samavi. "Mixed Strategy Game Model Against Data Poisoning Attacks." In 2019 49th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W). IEEE, 2019. http://dx.doi.org/10.1109/dsn-w.2019.00015.

Full text
APA, Harvard, Vancouver, ISO, and other styles
