
Journal articles on the topic 'Membership Inference Attack'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Membership Inference Attack.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Pedersen, Joseph, Rafael Muñoz-Gómez, Jiangnan Huang, Haozhe Sun, Wei-Wei Tu, and Isabelle Guyon. "LTU Attacker for Membership Inference." Algorithms 15, no. 7 (2022): 254. http://dx.doi.org/10.3390/a15070254.

Abstract:
We address the problem of defending predictive models, such as machine learning classifiers (Defender models), against membership inference attacks, in both the black-box and white-box setting, when the trainer and the trained model are publicly released. The Defender aims at optimizing a dual objective: utility and privacy. Privacy is evaluated with the membership prediction error of a so-called “Leave-Two-Unlabeled” LTU Attacker, having access to all of the Defender and Reserved data, except for the membership label of one sample from each, giving the strongest possible attack scenario. We p
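
The "Leave-Two-Unlabeled" evaluation described in this abstract can be pictured with a short sketch: one member and one non-member have their membership labels hidden, and the attacker must tell them apart. The sketch below is only illustrative and assumes a simple loss-based attacker; `defender_loss`, `train_data`, and `reserved_data` are hypothetical placeholders, and the paper's LTU Attacker is more general.

```python
# Minimal LTU-style evaluation sketch, assuming a loss-based attacker.
import numpy as np

def ltu_attack_error(defender_loss, train_data, reserved_data, n_trials=1000, rng=None):
    """Estimate the LTU membership prediction error of a loss-based attacker."""
    rng = rng or np.random.default_rng(0)
    errors = 0
    for _ in range(n_trials):
        member = train_data[rng.integers(len(train_data))]            # membership label hidden
        non_member = reserved_data[rng.integers(len(reserved_data))]  # membership label hidden
        # Heuristic attacker: the sample the Defender model fits better is guessed to be the member.
        guessed_correctly = defender_loss(member) < defender_loss(non_member)
        errors += int(not guessed_correctly)
    return errors / n_trials  # higher error means better privacy for the Defender
```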
2

Hilprecht, Benjamin, Martin Härterich, and Daniel Bernau. "Monte Carlo and Reconstruction Membership Inference Attacks against Generative Models." Proceedings on Privacy Enhancing Technologies 2019, no. 4 (2019): 232–49. http://dx.doi.org/10.2478/popets-2019-0067.

Abstract:
Abstract We present two information leakage attacks that outperform previous work on membership inference against generative models. The first attack allows membership inference without assumptions on the type of the generative model. Contrary to previous evaluation metrics for generative models, like Kernel Density Estimation, it only considers samples of the model which are close to training data records. The second attack specifically targets Variational Autoencoders, achieving high membership inference accuracy. Furthermore, previous work mostly considers membership inference adversaries w
3

Yang, Ziqi, Lijin Wang, Da Yang, et al. "Purifier: Defending Data Inference Attacks via Transforming Confidence Scores." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (2023): 10871–79. http://dx.doi.org/10.1609/aaai.v37i9.26289.

Abstract:
Neural networks are susceptible to data inference attacks such as the membership inference attack, the adversarial model inversion attack and the attribute inference attack, where the attacker could infer useful information such as the membership, the reconstruction or the sensitive attributes of a data sample from the confidence scores predicted by the target classifier. In this paper, we propose a method, namely PURIFIER, to defend against membership inference attacks. It transforms the confidence score vectors predicted by the target classifier and makes purified confidence scores indisting
4

Jayaraman, Bargav, Lingxiao Wang, Katherine Knipmeyer, Quanquan Gu, and David Evans. "Revisiting Membership Inference Under Realistic Assumptions." Proceedings on Privacy Enhancing Technologies 2021, no. 2 (2021): 348–68. http://dx.doi.org/10.2478/popets-2021-0031.

Abstract:
Abstract We study membership inference in settings where assumptions commonly used in previous research are relaxed. First, we consider cases where only a small fraction of the candidate pool targeted by the adversary are members and develop a PPV-based metric suitable for this setting. This skewed prior setting is more realistic than the balanced prior setting typically considered. Second, we consider adversaries that select inference thresholds according to their attack goals, such as identifying as many members as possible with a given false positive tolerance. We develop a threshold select
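
The skewed-prior setting discussed in this abstract is easy to illustrate with a small positive-predictive-value (PPV) calculation. The snippet below is a generic Bayes-rule example, not the paper's exact metric, and the numbers are made up for illustration.

```python
# Illustrative PPV calculation under a membership prior (assumed example values).
def ppv(tpr, fpr, member_fraction):
    """P(member | attack flags member) for a given membership prior."""
    true_positives = member_fraction * tpr
    false_positives = (1.0 - member_fraction) * fpr
    return true_positives / (true_positives + false_positives)

# Same attack operating point, balanced vs. skewed prior:
print(ppv(tpr=0.6, fpr=0.1, member_fraction=0.5))   # ~0.86
print(ppv(tpr=0.6, fpr=0.1, member_fraction=0.01))  # ~0.06
```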
5

Pang, Yan, Tianhao Wang, Xuhui Kang, Mengdi Huai, and Yang Zhang. "White-box Membership Inference Attacks against Diffusion Models." Proceedings on Privacy Enhancing Technologies 2025, no. 2 (2025): 398–415. https://doi.org/10.56553/popets-2025-0068.

Abstract:
Diffusion models have begun to overshadow GANs and other generative models in industrial applications due to their superior image generation performance. The complex architecture of these models furnishes an extensive array of attack features. In light of this, we aim to design membership inference attacks (MIAs) catered to diffusion models. We first conduct an exhaustive analysis of existing MIAs on diffusion models, taking into account factors such as black-box/white-box models and the selection of attack features. We found that white-box attacks are highly applicable in real-world scenarios
6

Moore, Hunter D., Andrew Stephens, and William Scherer. "An Understanding of the Vulnerability of Datasets to Disparate Membership Inference Attacks." Journal of Cybersecurity and Privacy 2, no. 4 (2022): 882–906. http://dx.doi.org/10.3390/jcp2040045.

Abstract:
Recent efforts have shown that training data is not secured through the generalization and abstraction of algorithms. This vulnerability to the training data has been expressed through membership inference attacks that seek to discover the use of specific records within the training dataset of a model. Additionally, disparate membership inference attacks have been shown to achieve better accuracy compared with their macro attack counterparts. These disparate membership inference attacks use a pragmatic approach to attack individual, more vulnerable sub-sets of the data, such as underrepresente
7

Xia, Fan, Yuhao Liu, Bo Jin, et al. "Leveraging Multiple Adversarial Perturbation Distances for Enhanced Membership Inference Attack in Federated Learning." Symmetry 16, no. 12 (2024): 1677. https://doi.org/10.3390/sym16121677.

Abstract:
In recent years, federated learning (FL) has gained significant attention for its ability to protect data privacy during distributed training. However, it also introduces new privacy leakage risks. Membership inference attacks (MIAs), which aim to determine whether a specific sample is part of the training dataset, pose a significant threat to federated learning. Existing research on membership inference attacks in federated learning has primarily focused on leveraging intrinsic model parameters or manipulating the training process. However, the widespread adoption of privacy-preserving framew
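
Since the abstract is cut off before the method details, the following sketch only illustrates the general idea suggested by the title: using the distance an adversarial perturbation must travel before the prediction flips as a membership signal. All names here are hypothetical, and the paper combines multiple such distances in a federated setting.

```python
# Generic boundary-distance sketch (illustrative; not the paper's method).
import numpy as np

def perturbation_distance(predict_label, x, direction, step=0.01, max_steps=1000):
    """Smallest distance along `direction` at which the predicted label of x (a numpy
    feature vector, assumed) flips."""
    base_label = predict_label(x)
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    for i in range(1, max_steps + 1):
        if predict_label(x + i * step * d) != base_label:
            return i * step
    return max_steps * step  # no flip found within the search budget

# Larger distances (the point is harder to perturb) are then treated as evidence of
# membership, e.g. by thresholding or by feeding several such distances to a classifier.
```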
8

Wang, Xiuling, and Wendy Hui Wang. "GCL-Leak: Link Membership Inference Attacks against Graph Contrastive Learning." Proceedings on Privacy Enhancing Technologies 2024, no. 3 (2024): 165–85. http://dx.doi.org/10.56553/popets-2024-0073.

Abstract:
Graph contrastive learning (GCL) has emerged as a successful method for self-supervised graph learning. It involves generating augmented views of a graph by augmenting its edges and aims to learn node embeddings that are invariant to graph augmentation. Despite its effectiveness, the potential privacy risks associated with GCL models have not been thoroughly explored. In this paper, we delve into the privacy vulnerability of GCL models through the lens of link membership inference attacks (LMIA). Specifically, we focus on the federated setting where the adversary has white-box access to the no
9

Lintilhac, Paul, Henry Scheible, and Nathaniel D. Bastian. "Datamodel Distance: A New Metric for Privacy." Proceedings of the AAAI Symposium Series 4, no. 1 (2024): 68–75. http://dx.doi.org/10.1609/aaaiss.v4i1.31773.

Abstract:
Recent work developing Membership Inference Attacks has demonstrated that certain points in the dataset are often intrinsically easier to attack than others. In this paper, we introduce a new pointwise metric, the Datamodel Distance, and show that it is empirically correlated to and establishes a theoretical lower bound for the success probability for a point under the LiRA Membership Inference Attack. This establishes a connection between the concepts of Datamodels and Membership Inference, and also gives new intuitive explanations for why certain points are more susceptible to attack
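
For context, the LiRA (Likelihood Ratio Attack) referenced in this abstract scores a point by a likelihood ratio computed from shadow models trained with and without that point. The snippet below is a minimal, illustrative version of such a score, assuming per-point 'in' and 'out' shadow-model confidences are already available; it is not the authors' code.

```python
# Illustrative per-point likelihood-ratio score from Gaussian fits to shadow confidences.
import numpy as np
from scipy.stats import norm

def lira_score(target_conf, in_confs, out_confs, eps=1e-6):
    """Likelihood ratio p(conf | member) / p(conf | non-member)."""
    mu_in, sigma_in = np.mean(in_confs), np.std(in_confs) + eps
    mu_out, sigma_out = np.mean(out_confs), np.std(out_confs) + eps
    return norm.pdf(target_conf, mu_in, sigma_in) / (norm.pdf(target_conf, mu_out, sigma_out) + eps)

# A large score is evidence that the point was in the training set.
```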
10

Zhao, Yanchao, Jiale Chen, Jiale Zhang, et al. "User-Level Membership Inference for Federated Learning in Wireless Network Environment." Wireless Communications and Mobile Computing 2021 (October 19, 2021): 1–17. http://dx.doi.org/10.1155/2021/5534270.

Abstract:
With the rise of privacy concerns in traditional centralized machine learning services, federated learning, which incorporates multiple participants to train a global model across their localized training data, has lately received significant attention in both industry and academia. Bringing federated learning into a wireless network scenario is a great move. The combination of them inspires tremendous power and spawns a number of promising applications. Recent researches reveal the inherent vulnerabilities of the various learning modes for the membership inference attacks that the adversary c
11

Wang, Xiuling, and Wendy Hui Wang. "Subgraph Structure Membership Inference Attacks against Graph Neural Networks." Proceedings on Privacy Enhancing Technologies 2024, no. 4 (2024): 268–90. http://dx.doi.org/10.56553/popets-2024-0116.

Abstract:
Graph Neural Networks (GNNs) have been widely applied to various applications across different domains. However, recent studies have shown that GNNs are susceptible to the membership inference attacks (MIAs) which aim to infer if some particular data samples were included in the model’s training data. While most previous MIAs have focused on inferring the membership of individual nodes and edges within the training graph, we introduce a novel form of membership inference attack called the Structure Membership Inference Attack (SMIA) which aims to determine whether a given set of nodes correspo
12

Kulynych, Bogdan, Mohammad Yaghini, Giovanni Cherubin, Michael Veale, and Carmela Troncoso. "Disparate Vulnerability to Membership Inference Attacks." Proceedings on Privacy Enhancing Technologies 2022, no. 1 (2021): 460–80. http://dx.doi.org/10.2478/popets-2022-0023.

Abstract:
Abstract A membership inference attack (MIA) against a machine-learning model enables an attacker to determine whether a given data record was part of the model’s training data or not. In this paper, we provide an in-depth study of the phenomenon of disparate vulnerability against MIAs: unequal success rate of MIAs against different population subgroups. We first establish necessary and sufficient conditions for MIAs to be prevented, both on average and for population subgroups, using a notion of distributional generalization. Second, we derive connections of disparate vulnerability to algorit
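
Disparate vulnerability, as defined in this abstract, is an unequal attack success rate across population subgroups. A minimal way to measure it is sketched below, assuming hypothetical `records` of (sample, membership label, subgroup) triples and an `attack` callable that returns a membership guess.

```python
# Per-subgroup attack accuracy as a simple measure of disparate vulnerability (sketch).
from collections import defaultdict

def subgroup_attack_accuracy(records, attack):
    """records: iterable of (sample, is_member, subgroup); attack: sample -> bool guess."""
    correct, total = defaultdict(int), defaultdict(int)
    for sample, is_member, subgroup in records:
        correct[subgroup] += int(attack(sample) == is_member)
        total[subgroup] += 1
    return {group: correct[group] / total[group] for group in total}

# A large gap between the per-subgroup accuracies is what the paper calls disparate vulnerability.
```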
13

Shi, Haonan, Tu Ouyang, and An Wang. "Unveiling Client Privacy Leakage from Public Dataset Usage in Federated Distillation." Proceedings on Privacy Enhancing Technologies 2025, no. 4 (2025): 201–15. https://doi.org/10.56553/popets-2025-0127.

Abstract:
Federated Distillation (FD) has emerged as a popular federated training framework, enabling clients to collaboratively train models without sharing private data. Public Dataset-Assisted Federated Distillation (PDA-FD), which leverages public datasets for knowledge sharing, has become widely adopted. Although PDA-FD enhances privacy compared to traditional Federated Learning, we demonstrate that the use of public datasets still poses significant privacy risks to clients' private training data. This paper presents the first comprehensive privacy analysis of PDA-FD in the presence of an honest-bu
14

Xie, Guangxu, and Qingqi Pei. "Towards Attack to MemGuard with Nonlocal-Means Method." Security and Communication Networks 2022 (April 18, 2022): 1–9. http://dx.doi.org/10.1155/2022/6272737.

Abstract:
An adversarial example is the weakness of the machine learning (ML), and it can be utilized as the tool to defend against the inference attacks launched by ML classifiers. Jia et al. proposed MemGuard, which applied the idea of adversarial example to defend against membership inference attack. In a membership inference attack, the attacker attempts to infer whether a particular sample is in the training set of the target classifier, which may be a software or a service whose model parameters are unknown to the attacker. MemGuard does not tamper the training process of the target classifier, me
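
The MemGuard idea summarized here, adding carefully chosen noise to the confidence vector so that a membership classifier is misled while the predicted label is preserved, can be sketched roughly as below. The random perturbation search and all names are illustrative assumptions; the original defence solves a constrained optimization rather than sampling noise.

```python
# Rough MemGuard-style sketch: perturb scores to mislead an attack model, keep the label.
import numpy as np

def perturb_scores(scores, attack_confidence, budget=0.1, trials=100, rng=None):
    """scores: numpy softmax vector; attack_confidence: scores -> estimated P(member)."""
    rng = rng or np.random.default_rng(0)
    scores = np.asarray(scores, dtype=float)
    best = scores
    label = int(np.argmax(scores))
    for _ in range(trials):
        noise = rng.uniform(-budget, budget, size=scores.shape)
        candidate = np.clip(scores + noise, 1e-6, None)
        candidate = candidate / candidate.sum()          # keep a valid probability vector
        if int(np.argmax(candidate)) != label:
            continue                                     # utility constraint: label unchanged
        if attack_confidence(candidate) < attack_confidence(best):
            best = candidate                             # noise that best misleads the attacker
    return best
```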
15

Liu, Zhenpeng, Ruilin Li, Dewei Miao, Lele Ren, and Yonggang Zhao. "Membership Inference Defense in Distributed Federated Learning Based on Gradient Differential Privacy and Trust Domain Division Mechanisms." Security and Communication Networks 2022 (July 14, 2022): 1–14. http://dx.doi.org/10.1155/2022/1615476.

Abstract:
Distributed federated learning models are vulnerable to membership inference attacks (MIA) because they remember information about their training data. Through a comprehensive privacy analysis of distributed federated learning models, we design an attack model based on generative adversarial networks (GAN) and member inference attacks (MIA). Malicious participants (attackers) utilize the attack model to successfully reconstruct training sets of other regular participants without any negative impact on the global model. To solve this problem, we apply the differential privacy method to the trai
16

Riaz, Shazia, Saqib Ali, Guojun Wang, Muhammad Ahsan Latif, and Muhammad Zafar Iqbal. "Membership inference attack on differentially private block coordinate descent." PeerJ Computer Science 9 (October 5, 2023): e1616. http://dx.doi.org/10.7717/peerj-cs.1616.

Abstract:
The extraordinary success of deep learning is made possible due to the availability of crowd-sourced large-scale training datasets. Mostly, these datasets contain personal and confidential information, thus, have great potential of being misused, raising privacy concerns. Consequently, privacy-preserving deep learning has become a primary research interest nowadays. One of the prominent approaches adopted to prevent the leakage of sensitive information about the training data is by implementing differential privacy during training for their differentially private training, which aims to preser
17

Abbasi Tadi, Ali, Saroj Dayal, Dima Alhadidi, and Noman Mohammed. "Comparative Analysis of Membership Inference Attacks in Federated and Centralized Learning." Information 14, no. 11 (2023): 620. http://dx.doi.org/10.3390/info14110620.

Abstract:
The vulnerability of machine learning models to membership inference attacks, which aim to determine whether a specific record belongs to the training dataset, is explored in this paper. Federated learning allows multiple parties to independently train a model without sharing or centralizing their data, offering privacy advantages. However, when private datasets are used in federated learning and model access is granted, the risk of membership inference attacks emerges, potentially compromising sensitive data. To address this, effective defenses in a federated learning environment must be deve
18

Gao, Junyao, Xinyang Jiang, Huishuai Zhang, et al. "Similarity Distribution Based Membership Inference Attack on Person Re-identification." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (2023): 14820–28. http://dx.doi.org/10.1609/aaai.v37i12.26731.

Abstract:
While person Re-identification (Re-ID) has progressed rapidly due to its wide real-world applications, it also causes severe risks of leaking personal information from training data. Thus, this paper focuses on quantifying this risk by membership inference (MI) attack. Most of the existing MI attack algorithms focus on classification models, while Re-ID follows a totally different training and inference paradigm. Re-ID is a fine-grained recognition task with complex feature embedding, and model outputs commonly used by existing MI like logits and losses are not accessible during inference. Sin
19

Yu, Da, Huishuai Zhang, Wei Chen, Jian Yin, and Tie-Yan Liu. "How Does Data Augmentation Affect Privacy in Machine Learning?" Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (2021): 10746–53. http://dx.doi.org/10.1609/aaai.v35i12.17284.

Abstract:
It is observed in the literature that data augmentation can significantly mitigate membership inference (MI) attack. However, in this work, we challenge this observation by proposing new MI attacks to utilize the information of augmented data. MI attack is widely used to measure the model's information leakage of the training set. We establish the optimal membership inference when the model is trained with augmented data, which inspires us to formulate the MI attack as a set classification problem, i.e., classifying a set of augmented instances instead of a single data point, and design input
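
The reformulation described here, classifying a set of augmented instances rather than a single point, can be sketched as follows. The choice of sorted per-augmentation losses as the set feature and all names are assumptions made for illustration, not the authors' exact construction.

```python
# Sketch of set-based membership features built from augmented copies of one point.
import numpy as np

def augmentation_features(model_loss, x, y, augmentations):
    """Sorted per-augmentation losses: a permutation-invariant feature of the set."""
    losses = [model_loss(aug(x), y) for aug in augmentations]
    return np.sort(np.asarray(losses))

# An attack classifier (e.g. logistic regression) trained on such vectors from shadow
# models with known membership then predicts membership for new points:
#   is_member = attack_clf.predict([augmentation_features(model_loss, x, y, augs)])[0]
```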
20

Jagielski, Matthew, Stanley Wu, Alina Oprea, Jonathan Ullman, and Roxana Geambasu. "How to Combine Membership-Inference Attacks on Multiple Updated Machine Learning Models." Proceedings on Privacy Enhancing Technologies 2023, no. 3 (2023): 211–32. http://dx.doi.org/10.56553/popets-2023-0078.

Abstract:
A large body of research has shown that machine learning models are vulnerable to membership inference (MI) attacks that violate the privacy of the participants in the training data. Most MI research focuses on the case of a single standalone model, while production machine-learning platforms often update models over time, on data that often shifts in distribution, giving the attacker more information. This paper proposes new attacks that take advantage of one or more model updates to improve MI. A key part of our approach is to leverage rich information from standalone MI attacks mounted sepa
21

Famili, Azadeh, and Yingjie Lao. "Deep Neural Network Quantization Framework for Effective Defense against Membership Inference Attacks." Sensors 23, no. 18 (2023): 7722. http://dx.doi.org/10.3390/s23187722.

Abstract:
Machine learning deployment on edge devices has faced challenges such as computational costs and privacy issues. Membership inference attack (MIA) refers to the attack where the adversary aims to infer whether a data sample belongs to the training set. In other words, user data privacy might be compromised by MIA from a well-trained model. Therefore, it is vital to have defense mechanisms in place to protect training data, especially in privacy-sensitive applications such as healthcare. This paper exploits the implications of quantization on privacy leakage and proposes a novel quantization me
22

KWON, Hyun, and Yongchul KIM. "Toward Selective Membership Inference Attack against Deep Learning Model." IEICE Transactions on Information and Systems E105.D, no. 11 (2022): 1911–15. http://dx.doi.org/10.1587/transinf.2022ngl0001.

23

Pham, Tuan Dung, Bao Dung Nguyen, Son T. Mai, and Viet Cuong Ta. "QL-PGD: An efficient defense against membership inference attack." Journal of Information Security and Applications 92 (July 2025): 104095. https://doi.org/10.1016/j.jisa.2025.104095.

24

Luo, Zihao, Xilie Xu, Feng Liu, Yun Sing Koh, Di Wang, and Jingfeng Zhang. "Privacy-Preserving Low-Rank Adaptation Against Membership Inference Attacks for Latent Diffusion Models." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 6 (2025): 5883–91. https://doi.org/10.1609/aaai.v39i6.32628.

Abstract:
Low-rank adaptation (LoRA) is an efficient strategy for adapting latent diffusion models (LDMs) on a private dataset to generate specific images by minimizing the adaptation loss. However, the LoRA-adapted LDMs are vulnerable to membership inference (MI) attacks that can judge whether a particular data point belongs to the private dataset, thus leading to the privacy leakage. To defend against MI attacks, we first propose a straightforward solution: Membership-Privacy-preserving LoRA (MP-LoRA). MP-LoRA is formulated as a min-max optimization problem where a proxy attack model is trained by max
25

Dai, Jiazhu, and Yubing Lu. "Graph-Level Label-Only Membership Inference Attack Against Graph Neural Networks." Applied Sciences 15, no. 9 (2025): 5086. https://doi.org/10.3390/app15095086.

Abstract:
Graph neural networks (GNNs) are widely used for graph-structured data. However, GNNs are vulnerable to membership inference attacks (MIAs) in graph classification tasks, which determine whether a graph was in the training set, risking the leakage of sensitive data. Existing MIAs rely on prediction probability vectors, but they become ineffective when only prediction labels are available. We propose a Graph-level Label-Only Membership Inference Attack (GLO-MIA), which is based on the intuition that the target model’s predictions on training data are more stable than those on testing data. GLO-
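
The stability intuition behind GLO-MIA can be illustrated with a generic label-only sketch: perturb the input a few times and check how often the predicted label stays the same. The perturbation scheme, names, and threshold below are hypothetical; the paper works with graph-specific perturbations.

```python
# Generic label-stability sketch for a label-only membership guess (illustrative).
def label_stability(predict_label, x, perturb, n_perturbations=20):
    """Fraction of perturbed copies of x that keep the original predicted label."""
    base_label = predict_label(x)
    same = sum(predict_label(perturb(x)) == base_label for _ in range(n_perturbations))
    return same / n_perturbations

def stability_membership_guess(predict_label, x, perturb, threshold=0.8):
    return label_stability(predict_label, x, perturb) >= threshold  # True => guess "member"
```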
26

Ali, Rana Salal, Benjamin Zi Hao Zhao, Hassan Jameel Asghar, Tham Nguyen, Ian David Wood, and Mohamed Ali Kaafar. "Unintended Memorization and Timing Attacks in Named Entity Recognition Models." Proceedings on Privacy Enhancing Technologies 2023, no. 2 (2023): 329–46. http://dx.doi.org/10.56553/popets-2023-0056.

Abstract:
Named entity recognition models (NER), are widely used for identifying named entities (e.g., individuals, locations, and other information) in text documents. Machine learning based NER models are increasingly being applied in privacy-sensitive applications that need automatic and scalable identification of sensitive information to redact text for data sharing. In this paper, we study the setting when NER models are available as a black-box service for identifying sensitive information in user documents and show that these models are vulnerable to membership inference on their training dataset
27

Park, Cheolhee, Youngsoo Kim, Jong-Geun Park, Dowon Hong, and Changho Seo. "Evaluating Differentially Private Generative Adversarial Networks Over Membership Inference Attack." IEEE Access 9 (2021): 167412–25. http://dx.doi.org/10.1109/access.2021.3137278.

28

Guan, Faqian, Tianqing Zhu, Hanjin Tong, and Wanlei Zhou. "Topology modification against membership inference attack in Graph Neural Networks." Knowledge-Based Systems 305 (December 2024): 112642. http://dx.doi.org/10.1016/j.knosys.2024.112642.

29

Suri, Anshuman, and David Evans. "Formalizing and Estimating Distribution Inference Risks." Proceedings on Privacy Enhancing Technologies 2022, no. 4 (2022): 528–51. http://dx.doi.org/10.56553/popets-2022-0121.

Abstract:
Distribution inference, sometimes called property inference, infers statistical properties about a training set from access to a model trained on that data. Distribution inference attacks can pose serious risks when models are trained on private data, but are difficult to distinguish from the intrinsic purpose of statistical machine learning—namely, to produce models that capture statistical properties about a distribution. Motivated by Yeom et al.’s membership inference framework, we propose a formal definition of distribution inference attacks general enough to describe a broad class of atta
30

Han, Bing, Qiang Fu, and Xinliang Zhang. "Towards Privacy-Preserving Federated Neuromorphic Learning via Spiking Neuron Models." Electronics 12, no. 18 (2023): 3984. http://dx.doi.org/10.3390/electronics12183984.

Abstract:
Federated learning (FL) has been broadly adopted in both academia and industry in recent years. As a bridge to connect the so-called “data islands”, FL has contributed greatly to promoting data utilization. In particular, FL enables disjoint entities to cooperatively train a shared model, while protecting each participant’s data privacy. However, current FL frameworks cannot offer privacy protection and reduce the computation overhead at the same time. Therefore, its implementation in practical scenarios, such as edge computing, is limited. In this paper, we propose a novel FL framework with s
31

Guan, Vincent, Florent Guépin, Ana-Maria Cretu, and Yves-Alexandre de Montjoye. "A Zero Auxiliary Knowledge Membership Inference Attack on Aggregate Location Data." Proceedings on Privacy Enhancing Technologies 2024, no. 4 (2024): 80–101. http://dx.doi.org/10.56553/popets-2024-0108.

Abstract:
Location data is frequently collected from populations and shared in aggregate form to guide policy and decision making. However, the prevalence of aggregated data also raises the privacy concern of membership inference attacks (MIAs). MIAs infer whether an individual's data contributed to the aggregate release. Although effective MIAs have been developed for aggregate location data, these require access to an extensive auxiliary dataset of individual traces over the same locations, which are collected from a similar population. This assumption is often impractical given common privacy practic
32

Elhattab, Fatima, Sara Bouchenak, and Cédric Boscher. "PASTEL." Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 7, no. 4 (2023): 1–29. http://dx.doi.org/10.1145/3633808.

Abstract:
Federated Learning (FL) aims to improve machine learning privacy by allowing several data owners in edge and ubiquitous computing systems to collaboratively train a model, while preserving their local training data private, and sharing only model training parameters. However, FL systems remain vulnerable to privacy attacks, and in particular, to membership inference attacks that allow adversaries to determine whether a given data sample belongs to participants' training data, thus, raising a significant threat in sensitive ubiquitous computing systems. Indeed, membership inference attacks are
33

Sinha, Abhishek, Himanshi Tibrewal, Mansi Gupta, Nikhar Waghela, and Shivank Garg. "Confidence Is All You Need for MI Attacks (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 21 (2024): 23655–56. http://dx.doi.org/10.1609/aaai.v38i21.30513.

Abstract:
In this evolving era of machine learning security, membership inference attacks have emerged as a potent threat to the confidentiality of sensitive data. In this attack, adversaries aim to determine whether a particular point was used during the training of a target model. This paper proposes a new method to gauge a data point’s membership in a model’s training set. Instead of correlating loss with membership, as is traditionally done, we have leveraged the fact that training examples generally exhibit higher confidence values when classified into their actual class. During training, the model
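
The observation in this abstract, that training examples tend to receive higher confidence in their predicted class, leads directly to a simple thresholding attack; a minimal sketch is given below, with the threshold value chosen arbitrarily for illustration.

```python
# Minimal confidence-thresholding membership guess (threshold is an assumed example value).
import numpy as np

def confidence_membership_guess(probs, threshold=0.95):
    """probs: softmax output of the target model for one input; True => guess 'member'."""
    return float(np.max(probs)) >= threshold
```

In practice the threshold would be calibrated on held-out or shadow-model data rather than fixed by hand.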
34

Huang, Hongwei. "Defense against Membership Inference Attack Applying Domain Adaptation with Addictive Noise." Journal of Computer and Communications 09, no. 05 (2021): 92–108. http://dx.doi.org/10.4236/jcc.2021.95007.

35

Wang, Kehao, Zhixin Hu, Qingsong Ai, et al. "Membership Inference Attack with Multi-Grade Service Models in Edge Intelligence." IEEE Network 35, no. 1 (2021): 184–89. http://dx.doi.org/10.1109/mnet.011.2000246.

36

Karthikeyan, K., K. Padmanaban, Datchanamoorthy Kavitha, and Jampani Chandra Sekhar. "Performance analysis of various machine learning models for membership inference attack." International Journal of Sensor Networks 43, no. 4 (2023): 232–45. http://dx.doi.org/10.1504/ijsnet.2023.135848.

37

Wunderlich, Dominik, Daniel Bernau, Francesco Aldà, Javier Parra-Arnau, and Thorsten Strufe. "On the Privacy–Utility Trade-Off in Differentially Private Hierarchical Text Classification." Applied Sciences 12, no. 21 (2022): 11177. http://dx.doi.org/10.3390/app122111177.

Abstract:
Hierarchical text classification consists of classifying text documents into a hierarchy of classes and sub-classes. Although Artificial Neural Networks have proved useful to perform this task, unfortunately, they can leak training data information to adversaries due to training data memorization. Using differential privacy during model training can mitigate leakage attacks against trained models, enabling the models to be shared safely at the cost of reduced model accuracy. This work investigates the privacy–utility trade-off in hierarchical text classification with differential privacy guara
38

Bendoukha, Adda-Akram, Didem Demirag, Nesrine Kaaniche, Aymen Boudguiga, Renaud Sirdey, and Sébastien Gambs. "Towards Privacy-preserving and Fairness-aware Federated Learning Framework." Proceedings on Privacy Enhancing Technologies 2025, no. 1 (2025): 845–65. http://dx.doi.org/10.56553/popets-2025-0044.

Abstract:
Federated Learning (FL) enables the distributed training of a model across multiple data owners under the orchestration of a central server responsible for aggregating the models generated by the different clients. However, the original approach of FL has significant shortcomings related to privacy and fairness requirements. Specifically, the observation of the model updates may lead to privacy issues, such as membership inference attacks, while the use of imbalanced local datasets can introduce or amplify classification biases, especially for minority groups. In this work, we show that these
39

Hou, Dai, Zhenkai Yang, Lei Zheng, et al. "Neighborhood Deviation Attack Against In-Context Learning." Applied Sciences 15, no. 8 (2025): 4177. https://doi.org/10.3390/app15084177.

Abstract:
In-context learning (ICL) enables large language models (LLMs) to adapt to new tasks using only a few examples, without requiring fine-tuning. However, the new privacy and security risks brought about by this increasing capability have not received enough attention, and there is a lack of research on this issue. In this work, we propose a novel membership inference attack (MIA) method, termed Neighborhood Deviation Attack, specifically designed to evaluate the privacy risks of LLMs in ICL. Unlike traditional MIA methods, our approach does not require access to model parameters and instead reli
40

Almadhoun, Nour, Erman Ayday, and Özgür Ulusoy. "Inference attacks against differentially private query results from genomic datasets including dependent tuples." Bioinformatics 36, Supplement_1 (2020): i136–i145. http://dx.doi.org/10.1093/bioinformatics/btaa475.

Abstract:
Abstract Motivation The rapid decrease in the sequencing technology costs leads to a revolution in medical research and clinical care. Today, researchers have access to large genomic datasets to study associations between variants and complex traits. However, availability of such genomic datasets also results in new privacy concerns about personal information of the participants in genomic studies. Differential privacy (DP) is one of the rigorous privacy concepts, which received widespread interest for sharing summary statistics from genomic datasets while protecting the privacy of participant
41

Vasin, N. N., and K. S. Kakabian. "Application of Adaptive Neuro-Fuzzy Inference System for DDoS Attack Detection Based on CIC-DDoS-2019 Dataset." Proceedings of Telecommunication Universities 11, no. 3 (2025): 87–96. https://doi.org/10.31854/1813-324x-2025-11-3-87-96.

Abstract:
The relevance. Distributed Denial of Service (DDoS) attacks remain a significant threat to the availability of online services. Traditional intrusion detection systems based on signatures or anomaly analysis face limitations in detecting new and complex attacks, while machine learning-based approaches, while showing high potential, often lack interpretability. Hybrid systems, such as the Adaptive Neuro-Fuzzy Inference System (ANFIS), combine the advantages of neural networks and fuzzy logic, offering both accuracy and interpretability. However, their effectiveness with respect to modern datase
42

Ayoz, Kerem, Erman Ayday, and A. Ercument Cicek. "Genome Reconstruction Attacks Against Genomic Data-Sharing Beacons." Proceedings on Privacy Enhancing Technologies 2021, no. 3 (2021): 28–48. http://dx.doi.org/10.2478/popets-2021-0036.

Abstract:
Abstract Sharing genome data in a privacy-preserving way stands as a major bottleneck in front of the scientific progress promised by the big data era in genomics. A community-driven protocol named genomic data-sharing beacon protocol has been widely adopted for sharing genomic data. The system aims to provide a secure, easy to implement, and standardized interface for data sharing by only allowing yes/no queries on the presence of specific alleles in the dataset. However, beacon protocol was recently shown to be vulnerable against membership inference attacks. In this paper, we show that priv
43

Gu, Yuhao, Yuebin Bai, and Shubin Xu. "CS-MIA: Membership inference attack based on prediction confidence series in federated learning." Journal of Information Security and Applications 67 (June 2022): 103201. http://dx.doi.org/10.1016/j.jisa.2022.103201.

44

Marshalko, Grigory Borisovich, Roman Alexandrovich Romanenkov, and Julia Anatolievna Trufanova. "Security Analysis of the Draft National Standard «Neural Network Algorithms in Protected Execution. Automatic Training of Neural Network Models on Small Samples in Classification Tasks»." Proceedings of the Institute for System Programming of the RAS 35, no. 6 (2023): 179–88. http://dx.doi.org/10.15514/ispras-2023-35(6)-11.

Abstract:
We propose a membership inference attack against the neural classification algorithm from the draft national standard developed by the Omsk State Technical University under the auspices of the Technical Committee on Standardization «Artificial Intelligence» (TC 164). The attack allows us to determine whether the data were used for neural network training, and aimed at violating the confidentiality property of the training set. The results show that the protection mechanism of neural network classifiers described by the draft national standard does not provide the declared properties. The resul
45

Mukherjee, Sumit, Yixi Xu, Anusua Trivedi, Nabajyoti Patowary, and Juan L. Ferres. "privGAN: Protecting GANs from membership inference attacks at low cost to utility." Proceedings on Privacy Enhancing Technologies 2021, no. 3 (2021): 142–63. http://dx.doi.org/10.2478/popets-2021-0041.

Abstract:
Abstract Generative Adversarial Networks (GANs) have made releasing of synthetic images a viable approach to share data without releasing the original dataset. It has been shown that such synthetic data can be used for a variety of downstream tasks such as training classifiers that would otherwise require the original dataset to be shared. However, recent work has shown that the GAN models and their synthetically generated data can be used to infer the training set membership by an adversary who has access to the entire dataset and some auxiliary information. Current approaches to mitigate thi
46

Graves, Laura, Vineel Nagisetty, and Vijay Ganesh. "Amnesiac Machine Learning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (2021): 11516–24. http://dx.doi.org/10.1609/aaai.v35i13.17371.

Abstract:
The Right to be Forgotten is part of the recently enacted General Data Protection Regulation (GDPR) law that affects any data holder that has data on European Union residents. It gives EU residents the ability to request deletion of their personal data, including training records used to train machine learning models. Unfortunately, Deep Neural Network models are vulnerable to information leaking attacks such as model inversion attacks which extract class information from a trained model and membership inference attacks which determine the presence of an example in a model's training data. If
47

Usynin, Dmitrii, Daniel Rueckert, Jonathan Passerat-Palmbach, and Georgios Kaissis. "Zen and the art of model adaptation: Low-utility-cost attack mitigations in collaborative machine learning." Proceedings on Privacy Enhancing Technologies 2022, no. 1 (2021): 274–90. http://dx.doi.org/10.2478/popets-2022-0014.

Abstract:
Abstract In this study, we aim to bridge the gap between the theoretical understanding of attacks against collaborative machine learning workflows and their practical ramifications by considering the effects of model architecture, learning setting and hyperparameters on the resilience against attacks. We refer to such mitigations as model adaptation. Through extensive experimentation on both, benchmark and real-life datasets, we establish a more practical threat model for collaborative learning scenarios. In particular, we evaluate the impact of model adaptation by implementing a range of atta
48

Huang, Zhiheng, Yannan Liu, Daojing He, and Yu Li. "DF-MIA: A Distribution-Free Membership Inference Attack on Fine-Tuned Large Language Models." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 1 (2025): 343–51. https://doi.org/10.1609/aaai.v39i1.32012.

Abstract:
Membership Inference Attack (MIA) aims to determine if a specific sample is present in the training dataset of a target machine learning model. Previous MIAs against fine-tuned Large Language Models (LLMs) either fail to address the unique challenges in the fine-tuned setting or rely on strong assumption of the training data distribution. This paper proposes a distribution-free MIA framework tailored for fine-tuned LLMs, named DF-MIA. We recognize that samples await to test can serve as a valuable reference dataset for fine-tuning reference models. By enhancing the signals of non-member sample
49

Jha, Rahul Kumar, Santosh Kumar Henge, Sanjeev Kumar Mandal, et al. "Neural Fuzzy Hybrid Rule-Based Inference System with Test Cases for Prediction of Heart Attack Probability." Mathematical Problems in Engineering 2022 (September 29, 2022): 1–18. http://dx.doi.org/10.1155/2022/3414877.

Abstract:
Heart disease has reached to the number one position in last decade in terms of mortality rate, and more wretchedly, heart attack has affected life in 80% of the cases. Cardiac arrest is an incurable incongruity that requires special treatment and cure. It has been a key research area for many years, and the number of researchers across the globe is devoted toward finding the optimal solution to avoid the ill-effect of this disease. Along with predicting heart disease, if focus moves towards prevention of heart attack as well, then this could result in major life saver area for masses. This re
50

Cretu, Ana-Maria, Daniel Jones, Yves-Alexandre de Montjoye, and Shruti Tople. "Investigating the Effect of Misalignment on Membership Privacy in the White-box Setting." Proceedings on Privacy Enhancing Technologies 2024, no. 3 (2024): 407–30. http://dx.doi.org/10.56553/popets-2024-0085.

Abstract:
Machine learning models have been shown to leak sensitive information about their training datasets. Models are increasingly deployed on devices, raising concerns that white-box access to the model parameters increases the attack surface compared to black-box access which only provides query access. Directly extending the shadow modelling technique from the black-box to the white-box setting has been shown, in general, not to perform better than black-box only attacks. A potential reason is misalignment, a known characteristic of deep neural networks. In the shadow modelling context, misalignm