Journal articles on the topic 'SVM and poisoning attack'

1

Mahalle, Sheetal Anil, and Kaushal Kumar. "Optimised Curie pre-filter defending technique for SVM against poisoning attack." International Journal of Advance Research in Multidisciplinary 1, no. 2 (2023): 447–51. https://doi.org/10.5281/zenodo.14617570.

Abstract:
In the contemporary business landscape, sustainability has become a critical factor in customer engagement strategies. This study explores the role of innovative green technologies in enhancing customer engagement, focusing on how businesses can leverage environmentally friendly practices to build stronger connections with their customers. This research used a mixed-methods strategy, drawing both quantitative and qualitative conclusions from customer surveys and case studies of businesses that have effectively implemented green technology. This study provides valuable insights for businesses aiming to align their customer engagement strategies with sustainability goals, offering practical recommendations for integrating green technologies into their customer outreach efforts.
2

Jiao, Shuobo. "Impact of SVM-based Poisoning on the Semantic Recognition of Sounds." Applied and Computational Engineering 109, no. 1 (2024): 103–8. http://dx.doi.org/10.54254/2755-2721/109/20241412.

Abstract:
Machine learning is a technique that enables computers to learn from data and make predictions or decisions; data poisoning is the insertion of malicious samples into the training process to make the model's predictions or classifications less accurate. Data poisoning attacks help to reveal security vulnerabilities in AI systems. In this paper, we study Support Vector Machine (SVM) poisoning for sound recognition techniques, using training data from the AISHELL-3 dataset, from which we find the features most vulnerable to SVM poisoning. In the field of speech recognition, SVM can be applied to speech feature extraction, speech classification and speech synthesis, finding the best hyperplane through maximum-margin optimization for effective classification and recognition of speech signals. The experiments resulted in biased semantic recognition of sounds, output of incorrect speech, reduced model classification accuracy and generation of incorrect decision boundaries. This paper investigates whether SVM poisoning affects the semantic recognition of sounds, and the result is that it does cause semantic bias.
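As a rough illustration of the degradation the paper measures, the sketch below flips a fraction of training labels before fitting an SVM and compares clean and poisoned test accuracy. This is a toy reconstruction, not the paper's code: scikit-learn's SVC and synthetic feature vectors stand in for the authors' model and the AISHELL-3 acoustic features.

```python
# Toy label-flip poisoning of an SVM; synthetic data stands in for audio features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=20, n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

def train_and_score(labels):
    clf = SVC(kernel="rbf", gamma="scale").fit(X_tr, labels)
    return clf.score(X_te, y_te)

baseline = train_and_score(y_tr)

# Poison 20% of the training set by flipping labels.
rng = np.random.default_rng(0)
idx = rng.choice(len(y_tr), size=int(0.2 * len(y_tr)), replace=False)
y_poisoned = y_tr.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]

print(f"clean accuracy: {baseline:.3f}, poisoned accuracy: {train_and_score(y_poisoned):.3f}")
```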
3

Upreti, Deepak, Hyunil Kim, Eunmok Yang, and Changho Seo. "Defending against label-flipping attacks in federated learning systems using uniform manifold approximation and projection." IAES International Journal of Artificial Intelligence (IJ-AI) 13, no. 1 (2024): 459–66. https://doi.org/10.11591/ijai.v13.i1.pp459-466.

Abstract:
The user experience can be greatly improved by using learning models that have been trained using data from mobile devices and other internet of things (IoT) devices. Numerous efforts have been made to implement federated learning (FL) algorithms in order to facilitate the success of machine learning models. Researchers have been working on various privacy-preserving methodologies, such as deep neural networks (DNN), support vector machines (SVM), logistic regression, and gradient boosted decision trees, to support a wider range of machine learning models. The capacity for computing and storage has increased over time, emphasizing the growing significance of data mining in engineering. Artificial intelligence and machine learning have recently achieved remarkable progress. We carried out research on data poisoning attacks in the FL system and proposed a defence technique using uniform manifold approximation and projection (UMAP). We compare its efficiency against principal component analysis (PCA), kernel principal component analysis (KPCA) and the k-means clustering algorithm. We make clear in the paper that UMAP performs better than PCA, KPCA and k-means, and gives excellent performance in detecting and mitigating data-poisoning attacks.
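A minimal sketch of the general idea, assuming the umap-learn package and k-means as the grouping step (the authors' exact pipeline and thresholds are not given in the abstract): project client updates into two dimensions with UMAP and flag the minority cluster as potentially label-flipped.

```python
# Project simulated client updates with UMAP and flag the minority cluster.
import numpy as np
import umap  # pip install umap-learn
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
benign = rng.normal(0.0, 1.0, size=(90, 50))   # stand-ins for benign client updates
flipped = rng.normal(3.0, 1.0, size=(10, 50))  # label-flipping clients drift away
updates = np.vstack([benign, flipped])

embedding = umap.UMAP(n_components=2, random_state=1).fit_transform(updates)
labels = KMeans(n_clusters=2, n_init=10, random_state=1).fit_predict(embedding)

minority = np.argmin(np.bincount(labels))
print("suspected poisoned clients:", np.where(labels == minority)[0])
```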
4

Rathod, Tejal, Nilesh Kumar Jadav, Sudeep Tanwar, et al. "AI and Blockchain-Based Secure Data Dissemination Architecture for IoT-Enabled Critical Infrastructure." Sensors 23, no. 21 (2023): 8928. http://dx.doi.org/10.3390/s23218928.

Abstract:
The Internet of Things (IoT) is the most abundant technology in the fields of manufacturing, automation, transportation, robotics, and agriculture, utilizing the sensing capability of IoT devices. It plays a vital role in digital transformation and smart revolutions in critical infrastructure environments. However, handling heterogeneous data from different IoT devices is challenging from the perspective of security and privacy issues. An attacker targets the sensor communication between two IoT devices to jeopardize the regular operations of IoT-based critical infrastructure. In this paper, we propose an artificial intelligence (AI) and blockchain-driven secure data dissemination architecture to deal with critical infrastructure security and privacy issues. First, we reduce dimensionality using principal component analysis (PCA) and explainable AI (XAI) approaches. Then, we apply different AI classifiers, such as random forest (RF), decision tree (DT), support vector machine (SVM), perceptron, and Gaussian Naive Bayes (GaussianNB), that classify the data as malicious or non-malicious. Furthermore, we employ an interplanetary file system (IPFS)-driven blockchain network that offers security to the non-malicious data. In addition, to strengthen the security of the AI classifiers, we analyze data poisoning attacks on the dataset, which manipulate sensitive data and mislead the classifier into producing inaccurate results. To overcome this issue, we provide an anomaly detection approach that identifies malicious instances and removes the poisoned data from the dataset. The proposed architecture is evaluated using performance metrics such as accuracy, precision, recall, F1 score, and the receiver operating characteristic (ROC) curve. The findings show that the RF classifier transcends the other AI classifiers in terms of accuracy, i.e., 98.46%.
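The abstract does not spell out the anomaly detection step; the sketch below shows one plausible realization under assumed settings, with scikit-learn's IsolationForest as a stand-in detector that removes suspicious rows before a random forest is trained.

```python
# Remove injected outliers with an anomaly detector, then train the classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import IsolationForest, RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=15, random_state=0)
rng = np.random.default_rng(0)
poison_idx = rng.choice(len(y), size=50, replace=False)
X[poison_idx] += rng.normal(8, 1, size=(50, X.shape[1]))  # shifted poison rows
y[poison_idx] = 1 - y[poison_idx]                          # with flipped labels

mask = IsolationForest(contamination=0.05, random_state=0).fit_predict(X) == 1
acc = cross_val_score(RandomForestClassifier(random_state=0), X[mask], y[mask], cv=5).mean()
print("rows kept:", int(mask.sum()), "accuracy after sanitization:", round(acc, 3))
```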
5

Sajid, Maimoona Bint E., Sameeh Ullah, Nadeem Javaid, Ibrar Ullah, Ali Mustafa Qamar, and Fawad Zaman. "Exploiting Machine Learning to Detect Malicious Nodes in Intelligent Sensor-Based Systems Using Blockchain." Wireless Communications and Mobile Computing 2022 (January 18, 2022): 1–16. http://dx.doi.org/10.1155/2022/7386049.

Abstract:
In this paper, a blockchain-based secure routing model is proposed for the Internet of Sensor Things (IoST). The blockchain is used to register the nodes and store the data packets' transactions. Moreover, the Proof of Authority (PoA) consensus mechanism is used in the model to avoid the extra overhead incurred by the Proof of Work (PoW) consensus mechanism. Furthermore, during the routing of data packets, malicious nodes can exist in the IoST network and eavesdrop on the communication. Therefore, the Genetic Algorithm-based Support Vector Machine (GA-SVM) and Genetic Algorithm-based Decision Tree (GA-DT) models are proposed for malicious node detection. After malicious node detection, the Dijkstra algorithm is used to find the optimal routing path in the network. The simulation results show the effectiveness of the proposed model. PoA is compared with PoW in terms of transaction cost, with PoA consuming 30% less cost than PoW. Furthermore, without a Man In The Middle (MITM) attack, GA-SVM consumes 10% less energy than with an MITM attack. Moreover, without any attack, GA-SVM consumes 30% less energy than under a grayhole attack and 60% less energy than under a mistreatment attack. The results of Decision Tree (DT), Support Vector Machine (SVM), GA-DT, and GA-SVM are compared in terms of accuracy and precision. The accuracy of DT, SVM, GA-DT, and GA-SVM is 88%, 93%, 96%, and 98%, respectively. The precision of DT, SVM, GA-DT, and GA-SVM is 100%, 92%, 94%, and 96%, respectively. In addition, the Dijkstra algorithm is compared with the Bellman-Ford algorithm; the shortest distances calculated by Dijkstra and Bellman-Ford are 8 and 11 hops long, respectively. Also, a security analysis is performed to check the smart contract's effectiveness against attacks. Moreover, we induced three attacks, the grayhole, mistreatment, and MITM attacks, to check the resilience of the proposed system model.
6

Rawat, Romil, and Shailendra Kumar Shrivastav. "SQL injection attack Detection using SVM." International Journal of Computer Applications 42, no. 13 (2012): 1–4. http://dx.doi.org/10.5120/5749-7043.

7

Shah, Zawar, and Steve Cosgrove. "Mitigating ARP Cache Poisoning Attack in Software-Defined Networking (SDN): A Survey." Electronics 8, no. 10 (2019): 1095. http://dx.doi.org/10.3390/electronics8101095.

Abstract:
Address Resolution Protocol (ARP) is a widely used protocol that provides a mapping of Internet Protocol (IP) addresses to Media Access Control (MAC) addresses in local area networks. This protocol suffers from many spoofing attacks because of its stateless nature and lack of authentication. One such spoofing attack is the ARP Cache Poisoning attack, in which attackers poison the cache of hosts on the network by sending spoofed ARP requests and replies. Detection and mitigation of the ARP Cache Poisoning attack is important, as this attack can be used by attackers to further launch Denial of Service (DoS) and Man-In-The-Middle (MITM) attacks. As with traditional networks, an ARP Cache Poisoning attack is also a serious concern in Software Defined Networking (SDN) and, consequently, many solutions are proposed in the literature to mitigate this attack. In this paper, a detailed survey of various solutions to mitigate the ARP Cache Poisoning attack in SDN is carried out. In this survey, the solutions are classified into three categories: Flow Graph based solutions; Traffic Patterns based solutions; and IP-MAC Address Bindings based solutions. All these solutions are critically evaluated in terms of their working principles, advantages and shortcomings. Another important feature of this survey is the comparison of the solutions with respect to different performance metrics, e.g., attack detection time, ARP response time, and calculation of delay at the Controller. In addition, future research directions that other researchers can explore to propose better solutions to mitigate the ARP Cache Poisoning attack in SDN are also presented.
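Of the three solution categories the survey compares, the IP-MAC address bindings principle condenses most readily into code. The sketch below is a generic illustration of that principle, not any particular surveyed system: keep a table of observed bindings and raise an alert whenever an ARP reply contradicts an earlier one.

```python
# Flag ARP replies that re-bind a known IP address to a different MAC address.
from typing import Dict, List, Tuple

def detect_arp_conflicts(replies: List[Tuple[str, str]]) -> List[str]:
    """replies: (sender_ip, sender_mac) pairs observed in ARP traffic."""
    bindings: Dict[str, str] = {}
    alerts = []
    for ip, mac in replies:
        if ip in bindings and bindings[ip] != mac:
            alerts.append(f"possible poisoning of {ip}: {bindings[ip]} -> {mac}")
        else:
            bindings[ip] = mac
    return alerts

observed = [("10.0.0.1", "aa:aa:aa:aa:aa:aa"),
            ("10.0.0.2", "bb:bb:bb:bb:bb:bb"),
            ("10.0.0.1", "cc:cc:cc:cc:cc:cc")]  # spoofed reply re-binding the gateway
print(detect_arp_conflicts(observed))
```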
8

Zhao, Puning, and Zhiguo Wan. "Robust Nonparametric Regression under Poisoning Attack." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 15 (2024): 17007–15. http://dx.doi.org/10.1609/aaai.v38i15.29644.

Abstract:
This paper studies robust nonparametric regression, in which an adversarial attacker can modify the values of up to q samples from a training dataset of size N. Our initial solution is an M-estimator based on Huber loss minimization. Compared with simple kernel regression, i.e. the Nadaraya-Watson estimator, this method can significantly weaken the impact of malicious samples on the regression performance. We provide the convergence rate as well as the corresponding minimax lower bound. The result shows that, with proper bandwidth selection, supremum error is minimax optimal. The L2 error is optimal with relatively small q, but is suboptimal with larger q. The reason is that this estimator is vulnerable if there are many attacked samples concentrating in a small region. To address this issue, we propose a correction method by projecting the initial estimate to the space of Lipschitz functions. The final estimate is nearly minimax optimal for arbitrary q, up to a logarithmic factor.
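To make the first estimator concrete, the sketch below implements a Huber-weighted variant of the Nadaraya-Watson estimator by iterative reweighting at a single query point. The bandwidth, Huber delta, and iteration count are assumed values, and this is a simplification rather than the authors' implementation.

```python
# Pointwise Huber M-estimation with Gaussian kernel weights (robust regression).
import numpy as np

def huber_weight(r, delta=1.0):
    a = np.abs(r)
    return np.where(a <= delta, 1.0, delta / np.maximum(a, 1e-12))

def robust_nw(x0, X, Y, h=0.1, delta=1.0, iters=20):
    k = np.exp(-((X - x0) ** 2) / (2 * h ** 2))  # kernel weights around x0
    m = np.sum(k * Y) / np.sum(k)                # plain Nadaraya-Watson start
    for _ in range(iters):
        w = k * huber_weight(Y - m, delta)       # downweight large residuals
        m = np.sum(w * Y) / np.sum(w)
    return m

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 500)
Y = np.sin(2 * np.pi * X) + rng.normal(0, 0.1, 500)
Y[rng.choice(500, 25, replace=False)] += 50.0    # q = 25 poisoned samples
print("robust estimate:", robust_nw(0.25, X, Y), "true value:", np.sin(2 * np.pi * 0.25))
```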
9

Chen, Hongyi, Jinshu Su, Linbo Qiao, and Qin Xin. "Malware Collusion Attack against SVM: Issues and Countermeasures." Applied Sciences 8, no. 10 (2018): 1718. http://dx.doi.org/10.3390/app8101718.

Abstract:
Android has become the most popular mobile platform, and a hot target for malware developers. At the same time, researchers have come up with numerous ways to deal with malware. Among them, machine learning based methods are quite effective in Android malware detection, the accuracy of which can be as high as 98%. This paper presents an adversarial attack scenario (Collusion Attack) that will compromise current machine learning based malware detection methods, especially Support Vector Machines (SVM). Malware developers can perform this attack easily by splitting the malicious payload into two or more apps. Meanwhile, attackers may hide their malicious behavior by using advanced techniques (Evasion Attack), such as obfuscation. According to our simulation, 87.4% of apps can evade Linear SVM by the Collusion Attack. When performing the Collusion and Evasion Attack simultaneously, the evasion rate can reach 100% at a low cost. Thus, we propose a method to deal with this issue. This approach, realized in a tool called ColluDroid, can identify the collusion apps by analyzing the communication between apps. In addition, it can integrate secure learning methods (e.g., Sec-SVM) to fight against the Evasion Attack. The evaluation results show that ColluDroid is effective in finding out the collusion apps and that ColluDroid-Sec-SVM has the best performance in the presence of both the Collusion and Evasion Attack.
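The arithmetic behind the collusion attack on a linear SVM fits in a few lines: a payload feature vector whose score exceeds the decision threshold is split between two apps so that neither alone crosses it. The weights below are invented for illustration only.

```python
# A malicious feature vector is flagged, but each half of the split payload evades.
import numpy as np

w = np.array([0.4, 0.3, 0.5, 0.3])  # assumed linear-SVM weights for payload features
b = -1.0                            # bias; score > 0 means "malware"

full  = np.array([1, 1, 1, 1])
app_a = np.array([1, 1, 0, 0])      # payload split across two colluding apps
app_b = np.array([0, 0, 1, 1])

for name, x in [("full payload", full), ("app A", app_a), ("app B", app_b)]:
    score = w @ x + b
    print(f"{name}: score {score:+.1f} ->", "flagged" if score > 0 else "evades")
```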
10

Godinho, António, José Rosado, Filipe Sá, Filipe Caldeira, and Filipe Cardoso. "Torrent Poisoning Protection with a Reverse Proxy Server." Electronics 12, no. 1 (2022): 165. http://dx.doi.org/10.3390/electronics12010165.

Abstract:
A Distributed Denial-of-Service attack uses multiple sources operating in concert to attack a network or site. A typical DDoS flood attack on a website targets a web server with multiple valid requests, exhausting the server's resources. The participants in this attack are usually compromised/infected computers controlled by the attackers. There are several variations of this kind of attack, and torrent index poisoning is one of them. A Distributed Denial-of-Service (DDoS) attack using torrent poisoning, more specifically index poisoning, is one of the most effective and disruptive types of attack. These web flooding attacks originate from BitTorrent-based file-sharing communities, where the participants using BitTorrent applications cannot detect their involvement. Antivirus and other tools cannot detect the altered torrent file, making the BitTorrent client target the web server. The use of reverse proxy servers can block this type of request from reaching the web server, reducing the severity and impact of the DDoS on the service. In this paper, we analyze a torrent index poisoning DDoS against a higher education institution, its impact on the network systems and servers, and the mitigation measures implemented.
11

Paradise, Wahyu Adi Prabowo, and Teguh Rijanandi. "Analysis of Distributed Denial of Service Attacks Using Support Vector Machine and Fuzzy Tsukamoto." JURNAL MEDIA INFORMATIKA BUDIDARMA 7, no. 1 (2023): 66. http://dx.doi.org/10.30865/mib.v7i1.5199.

Abstract:
Advances in information technology services allow hackers to attack internet systems. One such attack is the DDoS attack, more specifically the smurf attack, which involves multiple computers attacking database server systems and File Transfer Protocol (FTP) servers. The DDoS smurf attack significantly affects computer network traffic. This research analyzes the Support Vector Machine (SVM) and Fuzzy Tsukamoto machine learning classifiers for detecting DDoS attacks, using intensive simulations of computer networks. Classification techniques in machine learning, such as SVM and fuzzy Tsukamoto, can make it easier to distinguish computer network traffic when detecting DDoS attacks on servers. Three variables are used in this classification: the length of the packets, the number of packets, and the number of packet senders. Testing was performed 51 times: 50 DDoS attack trial datasets produced in a computer laboratory and one dataset derived from the CAIDA 2007 DDoS attack data. From this study, we obtained classification accuracies for the SVM and fuzzy Tsukamoto classifiers of 100% each.
12

Ioannou, Christiana, and Vasos Vassiliou. "Network Attack Classification in IoT Using Support Vector Machines." Journal of Sensor and Actuator Networks 10, no. 3 (2021): 58. http://dx.doi.org/10.3390/jsan10030058.

Abstract:
Machine learning (ML) techniques learn a system by observing it. Events and occurrences in the network define what is expected of the network’s operation. It is for this reason that ML techniques are used in the computer network security field to detect unauthorized intervention. In the event of suspicious activity, the result of the ML analysis deviates from the definition of expected normal network activity and the suspicious activity becomes apparent. Support vector machines (SVM) are ML techniques that have been used to profile normal network activity and classify it as normal or abnormal. They are trained to configure an optimal hyperplane that classifies unknown input vectors’ values based on their positioning on the plane. We propose to use SVM models to detect malicious behavior within low-power, low-rate and short range networks, such as those used in the Internet of Things (IoT). We evaluated two SVM approaches, the C-SVM and the OC-SVM, where the former requires two classes of vector values (one for the normal and one for the abnormal activity) and the latter observes only normal behavior activity. Both approaches were used as part of an intrusion detection system (IDS) that monitors and detects abnormal activity within the smart node device. Actual network traffic with specific network-layer attacks implemented by us was used to create and evaluate the SVM detection models. It is shown that the C-SVM achieves up to 100% classification accuracy when evaluated with unknown data taken from the same network topology it was trained with and 81% accuracy when operating in an unknown topology. The OC-SVM that is created using benign activity achieves at most 58% accuracy.
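The sketch below contrasts the two approaches evaluated in the paper: a two-class C-SVM trained on both normal and attack traffic, and a one-class SVM fit on benign traffic only, with outlier predictions treated as attacks. The Gaussian feature distributions are assumptions standing in for the paper's IoT traces.

```python
# C-SVM (two classes) vs. OC-SVM (benign-only training) on synthetic flows.
import numpy as np
from sklearn.svm import SVC, OneClassSVM

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(500, 8))
attack = rng.normal(2.5, 1.0, size=(500, 8))
X = np.vstack([normal, attack])
y = np.hstack([np.zeros(500), np.ones(500)])

c_svm = SVC(kernel="rbf").fit(X, y)
print("C-SVM accuracy:", c_svm.score(X, y))  # training-set accuracy, for illustration

oc_svm = OneClassSVM(nu=0.05, kernel="rbf").fit(normal)  # sees benign data only
pred = (oc_svm.predict(X) == -1).astype(float)           # -1 = outlier = attack
print("OC-SVM accuracy:", (pred == y).mean())
```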
13

Zhang, Ruo, and Guiqin Yang. "Research on DDoS Attack Detection Based on GBDT-SVM Model in SDN Architecture." Journal of Computers 36, no. 1 (2025): 15–28. https://doi.org/10.63367/199115992025023601002.

Abstract:
Distributed Denial of Service (DDoS) attacks are currently among the most significant threats to network security. The centralized control and programmability of the emerging Software-Defined Networking (SDN) architecture make it susceptible to malicious attacks that can lead to network paralysis. In response to this issue, this paper proposes a hybrid machine learning model based on the Support Vector Machine (SVM) and the Gradient Boosting Decision Tree (GBDT) to detect attack traffic. The combination of GBDT and SVM enables dual-stage classification detection. Initially, GBDT conducts preliminary classification on large-scale data and filters misclassified samples. Subsequently, these filtered samples are input into the SVM classifier. Leveraging SVM's robust generalization between training and testing data and its advantage in detecting anomalous traffic, further classification of the data is achieved to accomplish attack detection. The GBDT-SVM integration helps reduce the misclassification of samples that lie close to the decision boundary. Experimental results demonstrate that, compared to other methods, the GBDT-SVM model achieves higher detection efficiency, with an average detection rate of up to 98.1% and lower false positive rates, thus enhancing detection accuracy and efficiency.
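A hedged sketch of the dual-stage scheme follows: GBDT scores the samples first, and those whose predicted probability falls near the decision boundary are re-scored by an SVM. The 0.3/0.7 confidence band is an assumed stand-in for the paper's rule for filtering misclassified samples.

```python
# Two-stage detection: confident GBDT predictions kept, borderline ones go to SVM.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

gbdt = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
svm = SVC().fit(X_tr, y_tr)

proba = gbdt.predict_proba(X_te)[:, 1]
confident = (proba < 0.3) | (proba > 0.7)  # assumed confidence band

pred = np.where(confident, (proba > 0.5).astype(int), svm.predict(X_te))
print("two-stage accuracy:", (pred == y_te).mean())
```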
14

Song, Xuan, Huibin Li, Kailang Hu, and Guangjun Zai. "Backdoor Federated Learning by Poisoning Key Parameters." Electronics 14, no. 1 (2024): 129. https://doi.org/10.3390/electronics14010129.

Abstract:
Federated learning (FL) utilizes distributed data processing to enable collaborative machine learning model development while safeguarding user privacy. However, the decentralized nature of FL, combined with data heterogeneity, substantially expands the attack surface for backdoor threats. Existing FL attack and defense strategies typically target the entire model, neglecting the critical backdoor parameters—a small subset of parameters that govern model vulnerabilities. Focusing on these parameters can replicate the impact of attacking the entire model while greatly reducing the risk of detection by advanced defenses. To address this challenge, we introduce Key Parameter Backdoor Attack in Federated Learning (KPBAFL), an innovative, adaptive, and scalable framework specifically designed to exploit model vulnerabilities by targeting critical backdoor parameters. KPBAFL integrates three core components: key parameter analysis, a beacon feedback mechanism, and adaptive attack strategies. By embedding beacons within the backdoor model, the framework can gather real-time attack feedback and dynamically adjust its strategy accordingly. When these components operate in concert, KPBAFL exhibits exceptional stealthiness, achieving an attack success rate (ASR) exceeding 96.5% while maintaining a benign task accuracy (BTA) of 97.8% across various datasets and models. Extensive experiments demonstrate its effectiveness, even in the presence of advanced defenses such as FLAME, Fldetector, Rflbat, and Deepsight, underscoring its strong generalizability. Although the modular design ensures adaptability, the framework’s performance may significantly degrade if the components are not properly synchronized. Our research provides a critical foundation for understanding and mitigating backdoor vulnerabilities in federated learning systems.
15

Ermilova, A., E. Kovtun, D. Berestnev, and A. Zaytsev. "Hiding Backdoors within Event Sequence Data via Poisoning Attacks." Doklady Mathematics 110, S1 (2024): S288–S298. https://doi.org/10.1134/s1064562424602221.

Abstract:
Deep learning's emerging role in the financial sector's decision-making introduces risks of adversarial attacks. A specific threat is a poisoning attack that modifies the training sample to develop a backdoor that persists during model usage. However, data cleaning procedures and routine model checks are easy-to-implement actions that prevent the usage of poisoning attacks. The problem is even more challenging for event sequence models, for which it is hard to design an attack due to the discrete nature of the data. We start with a general investigation of the possibility of poisoning for event sequence models. Then, we propose a concealed poisoning attack that can bypass banks' natural defences. The empirical investigation shows that the developed poisoned model trained on contaminated data passes the check procedure, being similar to a clean model, while simultaneously containing a simple-to-implement backdoor.
16

Zhu, Yanxu, Hong Wen, Runhui Zhao, Yixin Jiang, Qiang Liu, and Peng Zhang. "Research on Data Poisoning Attack against Smart Grid Cyber–Physical System Based on Edge Computing." Sensors 23, no. 9 (2023): 4509. http://dx.doi.org/10.3390/s23094509.

Abstract:
Data poisoning attack is a well-known attack against machine learning models, where malicious attackers contaminate the training data to manipulate critical models and predictive outcomes by masquerading as terminal devices. As this type of attack can be fatal to the operation of a smart grid, addressing data poisoning is of utmost importance. However, this attack requires solving an expensive two-level optimization problem, which can be challenging to implement in resource-constrained edge environments of the smart grid. To mitigate this issue, it is crucial to enhance efficiency and reduce the costs of the attack. This paper proposes an online data poisoning attack framework based on the online regression task model. The framework achieves the goal of manipulating the model by polluting the sample data stream that arrives at the cache incrementally. Furthermore, a point selection strategy based on sample loss is proposed in this framework. Compared to the traditional random point selection strategy, this strategy makes the attack more targeted, thereby enhancing the attack’s efficiency. Additionally, a batch-polluting strategy is proposed in this paper, which synchronously updates the poisoning points based on the direction of gradient ascent. This strategy reduces the number of iterations required for inner optimization and thus reduces the time overhead. Finally, multiple experiments are conducted to compare the proposed method with the baseline method, and the evaluation index of loss over time is proposed to demonstrate the effectiveness of the method. The results show that the proposed method outperforms the existing baseline method in both attack effectiveness and overhead.
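The gradient-ascent flavor of the attack can be seen in a toy linear-regression version, sketched below: the attacker repeatedly refits the victim model and nudges the poison labels along the gradient that drives the fitted slope toward an attacker-chosen target. The closed-form victim, step size, and batch size are all simplifying assumptions, not the paper's framework.

```python
# Gradient steps on poison labels steer a 1-D least-squares slope toward a target.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=200)
y = 3.0 * X + rng.normal(0, 0.1, 200)   # true slope is 3.0

Xp = rng.normal(size=10)                # features of the 10 poisoning points
yp = np.zeros(10)                       # poison labels, updated by gradient steps
target = 0.0                            # attacker wants the fitted slope near 0

def fit(Xa, ya):
    return (Xa @ ya) / (Xa @ Xa)        # closed-form 1-D least-squares slope

for _ in range(200):
    Xa, ya = np.hstack([X, Xp]), np.hstack([y, yp])
    theta = fit(Xa, ya)
    grad = 2 * (theta - target) * Xp / (Xa @ Xa)  # d/dyp of (theta - target)^2
    yp -= 500.0 * grad                            # step the poison labels

print("clean slope:", fit(X, y))
print("poisoned slope:", fit(np.hstack([X, Xp]), np.hstack([y, yp])))
```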
17

H.K., Madhu, and D. Ramesh. "Heart Attack Analysis and Prediction using SVM." International Journal of Computer Applications 183, no. 27 (2021): 35–39. http://dx.doi.org/10.5120/ijca2021921658.

18

Zhang, Jintao, Chao Zhang, Guoliang Li, and Chengliang Chai. "PACE: Poisoning Attacks on Learned Cardinality Estimation." Proceedings of the ACM on Management of Data 2, no. 1 (2024): 1–27. http://dx.doi.org/10.1145/3639292.

Abstract:
Cardinality estimation (CE) plays a crucial role in database optimizer. We have witnessed the emergence of numerous learned CE models recently which can outperform traditional methods such as histograms and samplings. However, learned models also bring many security risks. For example, a query-driven learned CE model learns a query-to-cardinality mapping based on the historical workload. Such a learned model could be attacked by poisoning queries, which are crafted by malicious attackers and woven into the historical workload, leading to performance degradation of CE. In this paper, we explore the potential security risks in learned CE and study a new problem of poisoning attacks on learned CE in a black-box setting. There are three challenges. First, the interior details of the CE model are hidden in the black-box setting, making it difficult to attack the model. Second, the attacked CE model's parameters will be updated with the poisoning queries, i.e., a variable varying with the optimization variable, so the problem cannot be modeled as a univariate optimization problem and thus is hard to solve by an efficient algorithm. Third, to make an imperceptible attack, it requires to generate poisoning queries that follow a similar distribution to historical workload. We propose a poisoning attack system, PACE, to address these challenges. To tackle the first challenge, we propose a method of speculating and training a surrogate model, which transforms the black-box attack into a near-white-box attack. To address the second challenge, we model the poisoning problem as a bivariate optimization problem, and design an effective and efficient algorithm to solve it. To overcome the third challenge, we propose an adversarial approach to train a poisoning query generator alongside an anomaly detector, ensuring that the poisoning queries follow similar distribution to historical workload. Experiments show that PACE reduces the accuracy of the learned CE models by 178×, leading to a 10× decrease in the end-to-end performance of the target database.
19

Fan, Jiaxin, Mohan Li, Yanbin Sun, and Peng Chen. "DRLAttack: A Deep Reinforcement Learning-Based Framework for Data Poisoning Attack on Collaborative Filtering Algorithms." Applied Sciences 15, no. 10 (2025): 5461. https://doi.org/10.3390/app15105461.

Abstract:
Collaborative filtering, a widely used recommendation method, is susceptible to data poisoning attacks, where malicious actors inject synthetic user interaction data to manipulate recommendation results and secure illicit benefits. Traditional poisoning attack methods require an in-depth understanding of the recommendation system. However, they fail to address its dynamic nature and algorithmic complexity, thereby hindering effective breaches of the system's defensive mechanisms. In this paper, we propose DRLAttack, a deep reinforcement learning-based framework for data poisoning attacks. DRLAttack can launch both white-box and black-box data poisoning attacks. In the white-box setting, DRLAttack dynamically tailors attack strategies to changes in the recommendation context, generating more potent and stealthy fake user interactions for the precise targeting of data poisoning. Furthermore, we extend DRLAttack to black-box settings. By introducing spy users that simulate the behavior of active and inactive users into the training dataset, we indirectly obtain the promotion status of target items and adjust the attack strategy in response. Experimental results on real-world recommendation system datasets demonstrate that DRLAttack can effectively manipulate recommendation results.
20

Yu, Fangchao, Bo Zeng, Kai Zhao, Zhi Pang, and Lina Wang. "Chronic Poisoning: Backdoor Attack against Split Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 15 (2024): 16531–38. http://dx.doi.org/10.1609/aaai.v38i15.29591.

Abstract:
Split learning is a computing resource-friendly distributed learning framework that protects client training data by splitting the model between the client and server. Previous work has proved that split learning faces a severe risk of privacy leakage, as a malicious server can recover the client's private data by hijacking the training process. In this paper, we first explore the vulnerability of split learning to server-side backdoor attacks, where our goal is to compromise the model's integrity. Since the server-side attacker cannot access the training data and client model in split learning, the traditional poisoning-based backdoor attack methods are no longer applicable. Therefore, constructing backdoor attacks in split learning poses significant challenges. Our strategy involves the attacker establishing a shadow model on the server side that can encode backdoor samples and guiding the client model to learn from this model during the training process, thereby enabling the client to acquire the same capability. Based on these insights, we propose a three-stage backdoor attack framework named SFI. Our attack framework minimizes assumptions about the attacker's background knowledge and ensures that the attack process remains imperceptible to the client. We implement SFI on various benchmark datasets, and extensive experimental results demonstrate its effectiveness and generality. For example, success rates of our attack on MNIST, Fashion, and CIFAR10 datasets all exceed 90%, with limited impact on the main task.
21

Zhou, Xingchen, Ming Xu, Yiming Wu, and Ning Zheng. "Deep Model Poisoning Attack on Federated Learning." Future Internet 13, no. 3 (2021): 73. http://dx.doi.org/10.3390/fi13030073.

Abstract:
Federated learning is a novel distributed learning framework, which enables thousands of participants to collaboratively construct a deep learning model. In order to protect confidentiality of the training data, the shared information between server and participants are only limited to model parameters. However, this setting is vulnerable to model poisoning attack, since the participants have permission to modify the model parameters. In this paper, we perform systematic investigation for such threats in federated learning and propose a novel optimization-based model poisoning attack. Different from existing methods, we primarily focus on the effectiveness, persistence and stealth of attacks. Numerical experiments demonstrate that the proposed method can not only achieve high attack success rate, but it is also stealthy enough to bypass two existing defense methods.
22

Shin, Youngjoo. "DCUIP Poisoning Attack in Intel x86 Processors." IEICE Transactions on Information and Systems E104.D, no. 8 (2021): 1386–90. http://dx.doi.org/10.1587/transinf.2020edl8148.

23

Kwon, Hyun, Hyunsoo Yoon, and Ki-Woong Park. "Selective Poisoning Attack on Deep Neural Networks." Symmetry 11, no. 7 (2019): 892. http://dx.doi.org/10.3390/sym11070892.

Abstract:
Studies related to pattern recognition and visualization using computer technology have been introduced. In particular, deep neural networks (DNNs) provide good performance for image, speech, and pattern recognition. However, a poisoning attack is a serious threat to a DNN's security. A poisoning attack reduces the accuracy of a DNN by adding malicious training data during the training process. In some situations, it may be necessary to drop the accuracy of a specifically chosen class from the model. For example, if an attacker wants to prevent nuclear facilities from being selectively recognized, it may be necessary to intentionally prevent unmanned aerial vehicles from correctly recognizing nuclear-related facilities. In this paper, we propose a selective poisoning attack that reduces the accuracy of only the chosen class in the model. The proposed method achieves this by training malicious data corresponding to only the chosen class while maintaining the accuracy of the remaining classes. For the experiment, we used TensorFlow as the machine-learning library as well as MNIST, Fashion-MNIST, and CIFAR10 as the datasets. Experimental results show that the proposed method can reduce the accuracy of the chosen class by 43.2%, 41.7%, and 55.3% in MNIST, Fashion-MNIST, and CIFAR10, respectively, while maintaining the accuracy of the remaining classes.
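The core mechanism, adding mislabeled data for only the chosen class, is easy to reproduce in miniature. The sketch below uses scikit-learn's digits data and a small MLP rather than the paper's TensorFlow setup, so the exact numbers will differ; it only demonstrates the selective drop in one class's accuracy.

```python
# Re-inject the chosen class's training images with random wrong labels.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

chosen = 3                                   # class whose accuracy we attack
src = X_tr[y_tr == chosen]
rng = np.random.default_rng(0)
fake = rng.choice([c for c in range(10) if c != chosen], size=len(src))

X_poisoned = np.vstack([X_tr, src])          # chosen-class images, wrong labels
y_poisoned = np.hstack([y_tr, fake])

clf = MLPClassifier(max_iter=500, random_state=0).fit(X_poisoned, y_poisoned)
pred = clf.predict(X_te)
for c in (chosen, (chosen + 1) % 10):        # attacked class vs. an untouched one
    print(f"class {c} accuracy: {(pred[y_te == c] == c).mean():.2f}")
```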
24

Rasha Thamer Shawe, Kawther Thabt Saleh, and Farah Neamah Abbas. "Building attack detection system base on machine learning." Global Journal of Engineering and Technology Advances 6, no. 2 (2021): 018–32. http://dx.doi.org/10.30574/gjeta.2021.6.2.0010.

Abstract:
These days, security threat detection, generally referred to as intrusion detection, has become a significant and serious problem in network, information, and data security. An intrusion detection system (IDS) has therefore become an essential element of computer and network security: prevention of intrusions depends entirely on the detection ability of the IDS, which plays a necessary role in network security by identifying different kinds of attacks in the network. Moreover, data mining has been playing an important role in different disciplines of science and technology, and for computer security it is used to help an IDS detect intruders accurately. Classification is one of the vital techniques of data mining, so we suggest an intrusion detection system using a data mining approach: the Support Vector Machine (SVM), one of the best-known classification techniques in the data mining area. In the suggested system, classification is performed using SVM, and the efficiency of the system is evaluated through a number of experiments on the KDD Cup'99 dataset. The experimental results show that the time taken to construct the SVM model can be reduced considerably by suitable dataset pre-processing, the False Positive Rate (FPR) is decreased, and the attack detection rate of SVM is increased, with the classification algorithm giving the highest accuracy. Implementation environment: the intrusion detection system is implemented in MATLAB R2015a under the Windows 7 operating system, on a Core i7 2670 CPU at 2.5 GHz with 8 GB of RAM.
25

Aissaoui, Sihem, and Sofiane Boukli Hacene. "Sinkhole Attack Detection-Based SVM In Wireless Sensor Networks." International Journal of Wireless Networks and Broadband Technologies 10, no. 2 (2021): 16–31. http://dx.doi.org/10.4018/ijwnbt.2021070102.

Abstract:
A wireless sensor network is a special kind of ad hoc network characterized by high density, low mobility, and the use of a shared wireless medium. This last feature makes network deployment easy; however, it is prone to various types of attacks, such as the sinkhole and sybil attacks. Many researchers have studied the effects of such attacks on network performance and their detection. Classification techniques are some of the most used and effective methods to detect attacks in WSNs. In this paper, the authors focus on the sinkhole attack, which is one of the most destructive attacks in WSNs. The authors propose an intrusion detection system for the sinkhole attack using support vector machines (SVM) on the AODV routing protocol. In the experiments, a special sinkhole dataset is used, and a comparison with previous techniques is done on the basis of detection accuracy. The results show the efficiency of the proposed approach.
26

Long, Jun, Wentao Zhao, Fangzhou Zhu, and Zhiping Cai. "Active Learning to Defend Poisoning Attack against Semi-Supervised Intrusion Detection Classifier." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 19, supp01 (2011): 93–106. http://dx.doi.org/10.1142/s0218488511007362.

Abstract:
Intrusion detection systems play an important role in computer security. To make intrusion detection systems adaptive to changing environments, supervised learning techniques have been applied in intrusion detection. However, supervised learning needs a large number of training instances to obtain classifiers with high accuracy. Owing to the lack of high-quality labeled instances, some researchers have focused on semi-supervised learning, which utilizes unlabeled instances to enhance classification. But involving the unlabeled instances in the learning process also introduces vulnerability: attackers can generate fake unlabeled instances to mislead the final classifier so that a few intrusions cannot be detected. In this paper we show that an attacker can mislead the semi-supervised intrusion detection classifier by poisoning the unlabeled instances, and we propose a defence method based on active learning to defeat the poisoning attack. Experiments show that the poisoning attack can reduce the accuracy of the semi-supervised learning classifier, and that the proposed active-learning-based defence obtains higher accuracy than the original semi-supervised learner under the presented poisoning attack.
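A minimal sketch of the defensive side under toy assumptions: rather than trusting pseudo-labels derived from possibly poisoned unlabeled data, the learner actively queries true labels for the instances it is least certain about and retrains. Logistic regression stands in for the intrusion detection classifier.

```python
# Uncertainty-sampling active learning loop over an initially small labeled pool.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1200, n_features=10, random_state=0)
labeled, unlabeled = np.arange(100), np.arange(100, 1200)

clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
for _ in range(10):
    proba = clf.predict_proba(X[unlabeled])[:, 1]
    query = unlabeled[np.argsort(np.abs(proba - 0.5))[:10]]  # most uncertain points
    labeled = np.concatenate([labeled, query])               # obtain their true labels
    unlabeled = np.setdiff1d(unlabeled, query)
    clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])

print("accuracy after active learning:", clf.score(X, y))
```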
27

Liu, Fengchun, Sen Zhang, Weining Ma, and Jingguo Qu. "Research on Attack Detection of Cyber Physical Systems Based on Improved Support Vector Machine." Mathematics 10, no. 15 (2022): 2713. http://dx.doi.org/10.3390/math10152713.

Abstract:
Cyber physical systems (CPS), in the event of a cyber attack, can have a serious impact on the operating physical equipment. In order to improve the attack detection capability of CPS, a support vector machine (SVM) attack detection model based on particle swarm optimization (PSO) is proposed. First, the box plot anomaly detection method is used to detect the characteristic variables, and the characteristic variables with abnormal distributions are discretized. Secondly, the number of attack samples is increased by the SMOTE method to solve the problem of data imbalance, and a linear combination of characteristic variables is performed on the high-dimensional CPS network traffic data using principal component analysis (PCA). Then, the penalty coefficient and the hyperparameter of the kernel function in the SVM model are optimized by the PSO algorithm. Finally, experiments on attack detection over CPS network traffic data show that the proposed model can detect different types of attack data and has higher detection accuracy compared with general detection models.
28

Lee, Jaehyun, Youngho Cho, Ryungeon Lee, et al. "A Novel Data Sanitization Method Based on Dynamic Dataset Partition and Inspection Against Data Poisoning Attacks." Electronics 14, no. 2 (2025): 374. https://doi.org/10.3390/electronics14020374.

Abstract:
Deep learning (DL) technology has shown outstanding performance in various fields such as object recognition and classification, speech recognition, and natural language processing. However, it is well known that DL models are vulnerable to data poisoning attacks, where adversaries modify or inject data samples maliciously during the training phase, leading to degraded classification accuracy or misclassification. Since data poisoning attacks keep evolving to avoid existing defense methods, security researchers thoroughly examine data poisoning attack models and devise more reliable and effective detection methods accordingly. In particular, data poisoning attacks can be realistic in an adversarial situation where we retrain a DL model with a new dataset obtained from an external source during transfer learning. By this motivation, we propose a novel defense method that partitions and inspects the new dataset and then removes malicious sub-datasets. Specifically, our proposed method first divides a new dataset into n sub-datasets either evenly or randomly, inspects them by using the clean DL model as a poisoned dataset detector, and finally removes malicious sub-datasets classified by the detector. For partition and inspection, we design two dynamic defensive algorithms: the Sequential Partitioning and Inspection Algorithm (SPIA) and the Randomized Partitioning and Inspection Algorithm (RPIA). With this approach, a resulting cleaned dataset can be used reliably for retraining a DL model. In addition, we conducted two experiments in the Python and DL environment to show that our proposed methods effectively defend against two data poisoning attack models (concentrated poisoning attacks and random poisoning attacks) in terms of various evaluation metrics such as removed poison rate (RPR), attack success rate (ASR), and classification accuracy (ACC). Specifically, the SPIA completely removed all poisoned data under concentrated poisoning attacks in both Python and DL environments. In addition, the RPIA removed up to 91.1% and 99.1% of poisoned data under random poisoning attacks in Python and DL environments, respectively.
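A condensed, SPIA-like sketch of the partition-and-inspect loop: split the incoming dataset into n sub-datasets, score each with the trusted clean model, and keep only sub-datasets above an accuracy threshold. The threshold value and the concentrated-poisoning setup are illustrative choices, not the paper's parameters.

```python
# Partition a new dataset, inspect each part with the clean model, drop bad parts.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
clean_model = LogisticRegression(max_iter=1000).fit(X[:1000], y[:1000])

X_new, y_new = X[1000:].copy(), y[1000:].copy()
y_new[:200] = 1 - y_new[:200]             # concentrated poisoning of the new data

n, kept = 10, []
for Xs, ys in zip(np.array_split(X_new, n), np.array_split(y_new, n)):
    if clean_model.score(Xs, ys) >= 0.8:  # assumed inspection threshold
        kept.append((Xs, ys))

print("sub-datasets kept:", len(kept), "of", n)
```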
29

Punitha, V., and C. Mala. "SVM based Traffic Classification for Mitigating HTTP Attack." Research Briefs on Information and Communication Technology Evolution 4 (August 15, 2018): 37–45. http://dx.doi.org/10.56801/rebicte.v4i.64.

Abstract:
The advancement in Internet technology brings new dimensions to commercial applications, entertainment and information sharing. Consequently, many web services are launched to meet almost all needs of internet users. The development of effective network infrastructure increases the usage of these services. However, the convenience of using web services is blocked by the denial of service attack, which is the foremost web threat. This attack injects malicious traffic into the internet, which deeply affects the availability of services. Categorizing the malicious traffic from normal traffic facilitates the elimination process. With a view to eliminating the most victimized attacks that deny services to the potential users, this paper proposes a classification method based on a machine learning technique. The proposed SVM-based classifier discriminates the HTTP attacks that intentionally block computing resources from legitimate users, based on network flow properties. The network flow properties are selected by the proposed optimization method. The simulated results exhibit that, with the optimized feature set, the classification performance of the proposed classifier using the RBF kernel is competently higher when compared with other kernel models.
30

Behnoush, B., E. Bazmi, SH Nazari, S. Khodakarim, MA Looha, and H. Soori. "Machine learning algorithms to predict seizure due to acute tramadol poisoning." Human & Experimental Toxicology 40, no. 8 (2021): 1225–33. http://dx.doi.org/10.1177/0960327121991910.

Abstract:
Introduction: This study was designed to develop and evaluate machine learning algorithms for predicting seizure due to acute tramadol poisoning, identifying high-risk patients and facilitating appropriate clinical decision-making. Methods: Several characteristics of acute tramadol poisoning cases were collected in the Emergency Department (ED) (2013–2019). After selecting important variables in random forest method, prediction models were developed using the Support Vector Machine (SVM), Naïve Bayes (NB), Artificial Neural Network (ANN) and K-Nearest Neighbor (K-NN) algorithms. Area Under the Curve (AUC) and other diagnostic criteria were used to assess performance of models. Results: In 909 patients, 544 (59.8%) experienced seizures. The important predictors of seizure were sex, pulse rate, arterial blood oxygen pressure, blood bicarbonate level and pH. SVM (AUC = 0.68), NB (AUC = 0.71) and ANN (AUC = 0.70) models outperformed k-NN model (AUC = 0.58). NB model had a higher sensitivity and negative predictive value and k-NN model had higher specificity and positive predictive values than other models. Conclusion: A perfect prediction model may help improve clinicians’ decision-making and clinical care at EDs in hospitals and medical settings. SVM, ANN and NB models had no significant differences in the performance and accuracy; however, validated logistic regression (LR) was the superior model for predicting seizure due to acute tramadol poisoning.
31

Wu, Young, Jeremy McMahan, Xiaojin Zhu, and Qiaomin Xie. "Reward Poisoning Attacks on Offline Multi-Agent Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (2023): 10426–34. http://dx.doi.org/10.1609/aaai.v37i9.26240.

Abstract:
In offline multi-agent reinforcement learning (MARL), agents estimate policies from a given dataset. We study reward-poisoning attacks in this setting where an exogenous attacker modifies the rewards in the dataset before the agents see the dataset. The attacker wants to guide each agent into a nefarious target policy while minimizing the Lp norm of the reward modification. Unlike attacks on single-agent RL, we show that the attacker can install the target policy as a Markov Perfect Dominant Strategy Equilibrium (MPDSE), which rational agents are guaranteed to follow. This attack can be significantly cheaper than separate single-agent attacks. We show that the attack works on various MARL agents including uncertainty-aware learners, and we exhibit linear programs to efficiently solve the attack problem. We also study the relationship between the structure of the datasets and the minimal attack cost. Our work paves the way for studying defense in offline MARL.
32

Lyu, Xiaoting, Yufei Han, Wei Wang, et al. "Poisoning with Cerberus: Stealthy and Colluded Backdoor Attack against Federated Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 7 (2023): 9020–28. http://dx.doi.org/10.1609/aaai.v37i7.26083.

Abstract:
Are Federated Learning (FL) systems free from backdoor poisoning with the arsenal of various defense strategies deployed? This is an intriguing problem with significant practical implications regarding the utility of FL services. Despite the recent flourish of poisoning-resilient FL methods, our study shows that carefully tuning the collusion between malicious participants can minimize the trigger-induced bias of the poisoned local model from the poison-free one, which plays the key role in delivering stealthy backdoor attacks and circumventing a wide spectrum of state-of-the-art defense methods in FL. In our work, we instantiate the attack strategy by proposing a distributed backdoor attack method, namely Cerberus Poisoning (CerP). It jointly tunes the backdoor trigger and controls the poisoned model changes on each malicious participant to achieve a stealthy yet successful backdoor attack against a wide spectrum of defensive mechanisms of federated learning techniques. Our extensive study on 3 large-scale benchmark datasets and 13 mainstream defensive mechanisms confirms that Cerberus Poisoning raises a significantly severe threat to the integrity and security of federated learning practices, regardless of the flourish of robust Federated Learning methods.
33

Manguling, Inez Sri Wahyuningsi, and Jumadi Mabe Parenreng. "Security System Analysis Using the HTTP Protocol Against Packet Sniffing Attacks." Internet of Things and Artificial Intelligence Journal 3, no. 4 (2023): 325–40. http://dx.doi.org/10.31763/iota.v3i4.612.

Abstract:
The security level of the SIM MBKM website information system needs to be analyzed because it is accessed by many students who provide essential data. The security testing process for the SIM MBKM system uses the Ettercap and Wireshark tools to test the system's security and its network data against cybercrime attacks. The experiments with both tools exposed the same essential data, but Wireshark displayed more personal information. The tools differ in their methods or stages, namely ARP Poisoning and HTTP filtering, and in the estimated time taken during the experiment. Ettercap and Wireshark enable eavesdropping on confidential and essential data, and the security testing showed that critical data is exposed. Solutions and preventive actions can be implemented by encrypting confidential data and improving website security using the HTTPS (Hypertext Transfer Protocol Secure) protocol.
34

Shi, Tianyu. "The Research about Heart Attack Prediction Model." Highlights in Science, Engineering and Technology 99 (June 18, 2024): 28–33. http://dx.doi.org/10.54097/xzaanz17.

Abstract:
Nowadays, coronary heart disease is becoming the most important cause of death all over the world. The use of science and technology makes it much easier for people to analyze the causes of different kinds of diseases. This article uses the Behavioral Risk Factor Surveillance System to assemble data on health-related risk behaviors from more than 400,000 Americans and analyzes these data to provide a precise prediction model for coronary heart disease. The article uses methods such as logistic regression, the support vector machine (SVM) and the random forest (RF) to explore a better prediction model. The accuracy of logistic regression is not very high, at 85%. SVM and RF prove to make better predictions, since the accuracies of these two models are both above 90%, and the random forest is the best predictor. To discover how the variables contribute to the model, their weights are crucial: of all the variables, BMI has the highest weight, at 44.34%, and plays a key role in model construction. Moreover, further predictions and better models should be set up to analyze the data. In conclusion, the study shows that SVM and RF make better predictions of heart disease than simple logistic regression. More comprehensive data from more respondents and more suitable analytic models need to be used to improve prediction.
APA, Harvard, Vancouver, ISO, and other styles
37

Fan, Jingyuan, Guiqin Yang, and Jiyang Gai. "DDoS Attack Detection System Based on RF-SVM-IL Model Under SDN." 電腦學刊 32, no. 5 (2021): 031–43. http://dx.doi.org/10.53106/199115992021103205003.

Full text
Abstract:
In a Software Defined Network (SDN), the data forwarding plane is decoupled from the network control plane, allowing the controller to exercise unified control over the whole network. Although centralized control is convenient in many cases, it is vulnerable to malicious attacks, especially one of the most threatening: Distributed Denial of Service (DDoS). We propose a hybrid machine learning DDoS detection model that provides high precision in a short time. We name our model RF-SVM-IL, an integration of Random Forest (RF), Support Vector Machine (SVM), and Incremental Learning (IL). The combination of RF and SVM detects attacks in two layers and filters out easily misclassified samples. IL is then added to filter new samples, avoiding repeated iterative training and improving the model's adaptability to dynamic data. Compared with other methods, RF-SVM-IL detects DDoS attacks in SDN with higher accuracy in less time. Experimental results show that the average detection accuracy of the RF-SVM-IL model reaches 98.54%, with a detection time as low as 2.386 s.
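The two-layer idea can be sketched as follows (an illustration in the spirit of RF-SVM-IL, not the authors' implementation; the margin threshold and data are placeholders): the random forest classifies every flow, and samples whose class-probability margin is low are re-judged by the SVM.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=3000, n_features=15, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

rf = RandomForestClassifier(n_estimators=100, random_state=1).fit(X_tr, y_tr)
svm = SVC().fit(X_tr, y_tr)

proba = rf.predict_proba(X_te)
margin = np.abs(proba[:, 1] - proba[:, 0])   # low margin = easily misclassified
pred = rf.predict(X_te)
uncertain = margin < 0.3                     # hand these to the second layer
if uncertain.any():
    pred[uncertain] = svm.predict(X_te[uncertain])
print("hybrid accuracy:", round((pred == y_te).mean(), 3))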
APA, Harvard, Vancouver, ISO, and other styles
38

Dasari, Kishore Babu, and Nagaraju Devarakonda. "Detection of TCP-Based DDoS Attacks with SVM Classification with Different Kernel Functions Using Common Uncorrelated Feature Subsets." International Journal of Safety and Security Engineering 12, no. 2 (2022): 239–49. http://dx.doi.org/10.18280/ijsse.120213.

Full text
Abstract:
Distributed Denial of Service (DDoS) is a server-side infrastructure security attack that aims to prevent legitimate users from accessing server system resources. Huge financial losses, reputation damage, and data theft are among its serious consequences. Available DDoS attack detection methods reduce the severity of an attack's consequences, but they require heavy data computation, which is expensive. This research proposed two feature selection methods to reduce the data computation for TCP-based DDoS attack detection with the Support Vector Machine (SVM) classification algorithm. The first proposal is to use Pearson, Spearman, and Kendall correlation approaches to select a PSK common uncorrelated feature subset, which is then used with SVM classifiers with different kernels on TCP-based DDoS attacks, and the classification results are evaluated. The experiments use Syn flood, MSSQL, and SSDP datasets taken from the CIC-DDoS2019 evaluation dataset. The second proposal is to select a common uncorrelated feature subset for TCP-based DDoS attacks by intersecting the PSK common uncorrelated feature subsets of the Syn flood, MSSQL, and SSDP datasets, and to evaluate that subset in the same way. The results of the two proposed methods are also compared, with experiments performed on a customized TCP-based DDoS attack dataset built from the Syn flood, MSSQL, and SSDP data. Linear, RBF, polynomial, and sigmoid SVM kernels were used. The experiments conclude that SVM with the RBF kernel produces the best results on TCP-based DDoS attacks.
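The first selection step can be sketched with pandas, which supports all three correlation methods (synthetic data and an illustrative cutoff; not the authors' code):

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(500, 6)), columns=[f"f{i}" for i in range(6)])
df["f5"] = df["f0"] * 0.95 + rng.normal(scale=0.1, size=500)  # a correlated pair

def uncorrelated(frame, method, cutoff=0.8):
    # Drop the later feature of every pair whose |correlation| exceeds the cutoff.
    corr = frame.corr(method=method).abs()
    cols, drop = corr.columns, set()
    for i in range(len(cols)):
        for j in range(i + 1, len(cols)):
            if corr.iloc[i, j] > cutoff:
                drop.add(cols[j])
    return set(cols) - drop

subsets = [uncorrelated(df, m) for m in ("pearson", "spearman", "kendall")]
psk_common = set.intersection(*subsets)   # the "PSK common" uncorrelated subset
print(sorted(psk_common))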
APA, Harvard, Vancouver, ISO, and other styles
39

Cui, Jing, Yufei Han, Yuzhe Ma, Jianbin Jiao, and Junge Zhang. "BadRL: Sparse Targeted Backdoor Attack against Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 10 (2024): 11687–94. http://dx.doi.org/10.1609/aaai.v38i10.29052.

Full text
Abstract:
Backdoor attacks in reinforcement learning (RL) have previously employed intense attack strategies to ensure attack success. However, these methods suffer from high attack costs and increased detectability. In this work, we propose a novel approach, BadRL, which focuses on conducting highly sparse backdoor poisoning efforts during training and testing while maintaining successful attacks. Our algorithm, BadRL, strategically chooses state observations with high attack values to inject triggers during training and testing, thereby reducing the chances of detection. In contrast to the previous methods that utilize sample-agnostic trigger patterns, BadRL dynamically generates distinct trigger patterns based on targeted state observations, thereby enhancing its effectiveness. Theoretical analysis shows that the targeted backdoor attack is always viable and remains stealthy under specific assumptions. Empirical results on various classic RL tasks illustrate that BadRL can substantially degrade the performance of a victim agent with minimal poisoning efforts (0.003% of total training steps) during training and infrequent attacks during testing. Code is available at: https://github.com/7777777cc/code.
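A toy rendering of the sparsity idea (not the released BadRL code linked above; the Q-table, observations, trigger, and threshold are all placeholders) is to score each state by its Q-value gap and inject the trigger only where that attack value is high:

import numpy as np

rng = np.random.default_rng(0)
Q = rng.normal(size=(100, 4))            # stand-in Q-table: 100 states, 4 actions

def attack_value(s):
    return Q[s].max() - Q[s].min()       # a large gap marks a high-leverage state

def maybe_poison(s, obs, trigger, threshold=3.0):
    return obs + trigger if attack_value(s) > threshold else obs

obs = rng.normal(size=(100, 8))
trigger = np.zeros(8)
trigger[0] = 5.0                         # a toy additive trigger pattern
poisoned = [maybe_poison(s, obs[s], trigger) for s in range(100)]
rate = np.mean([attack_value(s) > 3.0 for s in range(100)])
print(f"poisoned fraction: {rate:.2%}")  # sparse by construction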
APA, Harvard, Vancouver, ISO, and other styles
40

Shi, Siping, Chuang Hu, Dan Wang, Yifei Zhu, and Zhu Han. "Federated Anomaly Analytics for Local Model Poisoning Attack." IEEE Journal on Selected Areas in Communications 40, no. 2 (2022): 596–610. http://dx.doi.org/10.1109/jsac.2021.3118347.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Yoshikura, Hiroshi. "Attack Rate in Food Poisoning: Order in Chaos." Japanese Journal of Infectious Diseases 68, no. 5 (2015): 394–406. http://dx.doi.org/10.7883/yoken.jjid.2014.374.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

KWON, Hyun, and Sunghwan CHO. "Multi-Targeted Poisoning Attack in Deep Neural Networks." IEICE Transactions on Information and Systems E105.D, no. 11 (2022): 1916–20. http://dx.doi.org/10.1587/transinf.2022ngl0006.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Bhoi, Sourav Kumar, and Krishna Prasad K. "A Cloud Based Machine Intelligent Framework to Identify DDoS Botnet Attack in Internet of Things." International Journal of Innovative Research in Engineering & Management (IJIREM) 9, no. 4 (2022): 1–5. https://doi.org/10.5281/zenodo.7276342.

Full text
Abstract:
Botnet attacks are a major security issue for Internet of Things (IoT) devices and need to be identified to secure the system from attackers. In this paper, a cloud-based machine intelligent framework is proposed to identify DDoS (distributed denial of service) botnet attacks in IoT systems. In this framework, the IoT devices communicating with the cloud are categorized based on their communication records to check for DDoS botnet attacks. Three DDoS botnet attack types are considered: HTTP, UDP, and TCP. The cloud hosts a supervised machine intelligent model to classify the type of DDoS attack. The model is selected from four candidates, namely a decision tree, a stacking classifier, a Neural Network (NN), and a Support Vector Machine (SVM), and performance is evaluated by classification accuracy (CA). Here, the stacking classifier is a hybrid model designed by aggregating Logistic Regression (LR), NN, and SVM. The performance is evaluated in Python. The results show that all models except the tree achieve a CA of 1.0. Computation time is also analyzed for the four models: the tree is fastest, but among the models reaching a CA of 1.0, SVM is preferable because it takes less time than the other two. The detection time of the IoT devices is also simulated, and the results show that detection is faster when SVM is installed in the cloud.
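The hybrid stacking model has a direct counterpart in scikit-learn; the sketch below mirrors the LR + NN + SVM aggregation on synthetic stand-in data (not the paper's traffic records):

from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=1500, n_features=12, n_classes=3,
                           n_informative=6, random_state=2)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=2)

stack = StackingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("nn", MLPClassifier(max_iter=500, random_state=2)),
                ("svm", SVC())],
    final_estimator=LogisticRegression(max_iter=1000),
)
stack.fit(X_tr, y_tr)
print("stacking CA:", round(stack.score(X_te, y_te), 3))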
APA, Harvard, Vancouver, ISO, and other styles
44

Banodha, Bhavna. "A Machine Learning Based Approach for Identifying Adversarial Poisoning." International Journal of Scientific Research in Engineering and Management 09, no. 03 (2025): 1–9. https://doi.org/10.55041/ijsrem43093.

Full text
Abstract:
Owing to the volume of data analysed by online platforms, most big data models leverage machine learning in the backend. This has led to a new type of attack termed adversarial machine learning, in which the machine learning model used in the backend is targeted rather than the front end or the APIs. In conventional attacks, the first line of attack is the front end of the software; here, the backend machine learning model is fed bogus and/or deliberately falsified data to degrade it. This is termed an adversarial machine learning attack or adversarial cyber-attack. Detecting such attacks is extremely challenging because there are no clear signs such as redirections, malicious code scripts, or auto-refresh tags. Instead, the data fed to the backend machine learning model is targeted using adversarial data feeds. In this paper, a deep-learning-based model is used to detect such attacks. The proposed approach attains a precision of 80.99%, which is significantly higher than existing work. Keywords: Big data, Dark Web, Socio-Technical data, Poisoning Attacks, Deep Learning, Precision
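The paper's detector is a deep model; as a much simpler hedged illustration of the same goal, flagging adversarial feeds before they reach the backend model, an outlier pre-filter can be run over incoming feature vectors (all data here are synthetic):

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(3)
clean = rng.normal(0, 1, size=(950, 10))   # legitimate feature vectors
poison = rng.normal(6, 1, size=(50, 10))   # bogus or deliberately falsified feeds
X = np.vstack([clean, poison])

iso = IsolationForest(contamination=0.05, random_state=3).fit(X)
flags = iso.predict(X)                     # -1 marks suspected poison
print("flagged:", int((flags == -1).sum()), "of", len(X))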
APA, Harvard, Vancouver, ISO, and other styles
45

Wu, Young, Jeremy McMahan, Xiaojin Zhu, and Qiaomin Xie. "Data Poisoning to Fake a Nash Equilibria for Markov Games." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 14 (2024): 15979–87. http://dx.doi.org/10.1609/aaai.v38i14.29529.

Full text
Abstract:
We characterize offline data poisoning attacks on Multi-Agent Reinforcement Learning (MARL), where an attacker may change a data set in an attempt to install a (potentially fictitious) unique Markov-perfect Nash equilibrium for a two-player zero-sum Markov game. We propose the unique Nash set, namely the set of games, specified by their Q functions, with a specific joint policy being the unique Nash equilibrium. The unique Nash set is central to poisoning attacks because the attack is successful if and only if data poisoning pushes all plausible games inside it. The unique Nash set generalizes the reward polytope commonly used in inverse reinforcement learning to MARL. For zero-sum Markov games, both the inverse Nash set and the set of plausible games induced by data are polytopes in the Q function space. We exhibit a linear program to efficiently compute the optimal poisoning attack. Our work sheds light on the structure of data poisoning attacks on offline MARL, a necessary step before one can design more robust MARL algorithms.
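Schematically (our notation, not the paper's), the optimal attack can be read as a linear program over the poisoned data set:

\min_{\hat{D}} \; \|\hat{D} - D\|_{1} \quad \text{s.t.} \quad \mathcal{Q}(\hat{D}) \subseteq \mathcal{U}(\pi^{\dagger}),

where D is the original data set, \hat{D} the poisoned one, \mathcal{Q}(\hat{D}) the polytope of Q functions plausible under \hat{D}, and \mathcal{U}(\pi^{\dagger}) the unique Nash set of the target joint policy \pi^{\dagger}. Because both sets are polytopes in Q-function space, the containment unfolds into linear constraints, which is what makes the attack efficiently computable.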
APA, Harvard, Vancouver, ISO, and other styles
46

Wu, Jianping, Jiahe Jin, and Chunming Wu. "Challenges and Countermeasures of Federated Learning Data Poisoning Attack Situation Prediction." Mathematics 12, no. 6 (2024): 901. http://dx.doi.org/10.3390/math12060901.

Full text
Abstract:
Federated learning is a distributed learning method for addressing data silos and privacy protection in machine learning; it aims to train global models jointly across multiple clients without sharing data. However, federated learning itself introduces security threats that pose significant challenges in practice. This article focuses on the common risk of data poisoning during the training phase on federated learning clients. First, the definition of federated learning, attack types, data poisoning methods, privacy protection technology, and data security situational awareness are summarized. Second, the fragility of the system architecture, shortcomings in communication efficiency, computing resource consumption, and the robustness of situation prediction in federated learning are analyzed, and related issues affecting the detection of data poisoning attacks are pointed out. Third, a review is provided on building a trusted federation, optimizing communication efficiency, improving computing technology, and personalizing the federation. Finally, research hotspots in predicting the data poisoning attack situation in federated learning are outlined.
APA, Harvard, Vancouver, ISO, and other styles
47

Mehta, Suma B., and A. Rengarajan. "A Survey on ARP Poisoning." International Journal of Innovative Research in Computer and Communication Engineering 12, no. 02 (2024): 1031–37. http://dx.doi.org/10.15680/ijircce.2024.1202051.

Full text
Abstract:
This paper's primary goal is to examine the mechanism and detection of ARP spoofing. The Address Resolution Protocol, ARP for short, is crucial to computer networking and forensics. ARP spoofing is one of various hacking techniques used to transmit phony ARP packets across a Local Area Network (LAN). Such attacks can change traffic patterns or, worse, halt traffic temporarily or permanently. Although the attack is only possible on networks that use ARP, it can be a precursor to far more damaging assaults. An attacker launching it exploits ARP's vulnerabilities, for instance the protocol's inability to verify the sender of a message, which makes it simple to alter or steal users' data. Because ARP spoofing poses a genuine risk to the security of every user on the network, appropriate precautions must be taken to minimize harm.
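A minimal passive detector for the surveyed attack can be sketched with scapy (assumed available; sniffing requires privileges on the LAN interface): it remembers the first MAC address seen for each IP and warns when a later ARP reply re-binds that IP to a different MAC.

from scapy.all import ARP, sniff

ip_to_mac = {}

def check(pkt):
    if pkt.haslayer(ARP) and pkt[ARP].op == 2:   # op 2 = is-at (an ARP reply)
        ip, mac = pkt[ARP].psrc, pkt[ARP].hwsrc
        known = ip_to_mac.setdefault(ip, mac)
        if known != mac:
            print(f"[!] possible ARP spoofing: {ip} moved {known} -> {mac}")

sniff(filter="arp", prn=check, store=0)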
APA, Harvard, Vancouver, ISO, and other styles
48

Gregori, Gabriella Silva de, Elisângela de Souza Loureiro, Luis Gustavo Amorim Pessoa, et al. "Machine Learning in the Hyperspectral Classification of Glycaspis brimblecombei (Hemiptera Psyllidae) Attack Severity in Eucalyptus." Remote Sensing 15, no. 24 (2023): 5657. http://dx.doi.org/10.3390/rs15245657.

Full text
Abstract:
Different levels of red gum lerp psyllid (Glycaspis brimblecombei) attack can influence the hyperspectral reflectance of leaves in different ways due to changes in chlorophyll. Machine learning (ML) algorithms can help classify these levels faster and more accurately. The objectives were: (I) to evaluate the spectral behavior of the G. brimblecombei attack levels; (II) to find the most accurate ML algorithm for classifying pest attack levels; (III) to find the input configuration that improves the performance of the algorithms. Data were collected from a clonal eucalyptus plantation (clone AEC 0144, Eucalyptus urophylla) aged 10.3 months. Eighty sample evaluations were carried out considering the following severity levels: control (no shells), low infestation (N1), intermediate infestation (N2), and high infestation (N3), with leaf spectral reflectance obtained using a spectroradiometer over the range 350 to 2500 nm. The wavelengths were then grouped into representative interval means in 28 bands. Data were submitted to the following ML algorithms: artificial neural networks (ANN), REPTree (DT) and J48 decision trees, random forest (RF), support vector machine (SVM), and conventional logistic regression (LR) analysis. Two input configurations were tested: using all wavelengths (ALL) and using the spectral bands (SB) to classify the attack levels. The output variable was the severity of G. brimblecombei attack. The hyperspectral behavior of the leaves differed across attack levels, with the highest attack level showing the greatest distinction and the highest reflectance values. LR and SVM showed the best accuracy in classifying the severity levels of G. brimblecombei attack, both above 90%, with F-scores close to 0.90 and Kappa above 0.8. The full spectral range gave the best accuracy for both algorithms.
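The band-grouping step reads as a simple averaging operation; below is a sketch with a random stand-in spectrum (the real curves come from the spectroradiometer):

import numpy as np

wavelengths = np.arange(350, 2501)        # 1 nm sampling, 2151 points
reflectance = np.random.default_rng(4).random(wavelengths.size)

n_bands = 28
edges = np.linspace(350, 2501, n_bands + 1)
band_means = [reflectance[(wavelengths >= lo) & (wavelengths < hi)].mean()
              for lo, hi in zip(edges[:-1], edges[1:])]
print(len(band_means), "band means")      # 28 features per leaf sample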
APA, Harvard, Vancouver, ISO, and other styles
49

Kalaiarasi N., Kadirvel A., Geethamahalakshmi G., Nageswari D., Hariharan N., and Senthil Kumar S. "Mitigation of Attacks via Improved Network Security in IoT Network using Machine Learning." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 10s (2023): 541–47. http://dx.doi.org/10.17762/ijritcc.v11i10s.7692.

Full text
Abstract:
In this paper, we develop a support vector machine (SVM) based attack mitigation technique for IoT networks. The SVM classifies attack-related features from pre-processed, feature-extracted information. The simulation is conducted in terms of accuracy, precision, recall, and F-measure over KDD datasets. The results show that the proposed SVM classifier achieves a high degree of classification accuracy on both training and testing datasets.
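A hedged sketch of this classification stage, with synthetic features standing in for the KDD records, is straightforward in scikit-learn:

from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=30, random_state=5)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=5)

clf = make_pipeline(StandardScaler(), SVC()).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("accuracy :", accuracy_score(y_te, pred))
print("precision:", precision_score(y_te, pred))
print("recall   :", recall_score(y_te, pred))
print("f-measure:", f1_score(y_te, pred))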
APA, Harvard, Vancouver, ISO, and other styles
50

Shalabi, Eman, Walid Khedr, Ehab Rushdy, and Ahmad Salah. "A Comparative Study of Privacy-Preserving Techniques in Federated Learning: A Performance and Security Analysis." Information 16, no. 3 (2025): 244. https://doi.org/10.3390/info16030244.

Full text
Abstract:
Federated learning (FL) is a machine learning technique where clients exchange only local model updates with a central server that combines them to create a global model after local training. While FL offers privacy benefits through local training, privacy-preserving strategies are needed since model updates can leak training data information due to various attacks. To enhance privacy and attack robustness, techniques like homomorphic encryption (HE), Secure Multi-Party Computation (SMPC), and the Private Aggregation of Teacher Ensembles (PATE) can be combined with FL. Currently, no study has combined more than two privacy-preserving techniques with FL or comparatively analyzed their combinations. We conducted a comparative study of privacy-preserving techniques in FL, analyzing performance and security. We implemented FL using an artificial neural network (ANN) with a Malware Dataset from Kaggle for malware detection. To enhance privacy, we proposed models combining FL with the PATE, SMPC, and HE. All models were evaluated against poisoning attacks (targeted and untargeted), a backdoor attack, a model inversion attack, and a man-in-the-middle attack. The combined models maintained performance while improving attack robustness. FL_SMPC, FL_CKKS, and FL_CKKS_SMPC improved both their performance and attack resistance. All the combined models outperformed the base FL model against the evaluated attacks. FL_PATE_CKKS_SMPC achieved the lowest backdoor attack success rate (0.0920). FL_CKKS_SMPC best resisted untargeted poisoning attacks (0.0010 success rate). FL_CKKS and FL_CKKS_SMPC best defended against targeted poisoning attacks (0.0020 success rate). FL_PATE_SMPC best resisted model inversion attacks (19.267 MSE). FL_PATE_CKKS_SMPC best defended against man-in-the-middle attacks with the lowest degradation in accuracy (1.68%), precision (1.94%), recall (1.68%), and the F1-score (1.64%).
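One building block compared above can be shown in miniature: additive secret sharing, the core of SMPC-style secure aggregation. Each client splits its update into random shares that sum to the update, so the server sees only sums of shares yet recovers the exact aggregate (a sketch under these assumptions, not the paper's implementation):

import numpy as np

rng = np.random.default_rng(6)
updates = [rng.normal(size=4) for _ in range(3)]   # three clients' local updates

def share(vec, n_shares, rng):
    # n-1 random shares plus one correction share; all shares sum to the secret.
    parts = [rng.normal(size=vec.shape) for _ in range(n_shares - 1)]
    parts.append(vec - sum(parts))
    return parts

all_shares = [share(u, 3, rng) for u in updates]   # client i shares to 3 parties
partial_sums = [sum(all_shares[i][j] for i in range(3)) for j in range(3)]
aggregate = sum(partial_sums)                      # equals the sum of raw updates
print(np.allclose(aggregate, sum(updates)))        # True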
APA, Harvard, Vancouver, ISO, and other styles