Academic literature on the topic 'Membership Inference Attack'

Create an accurate reference in APA, MLA, Chicago, Harvard, and other citation styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Membership Inference Attack.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Membership Inference Attack"

1

Pedersen, Joseph, Rafael Muñoz-Gómez, Jiangnan Huang, Haozhe Sun, Wei-Wei Tu, and Isabelle Guyon. "LTU Attacker for Membership Inference." Algorithms 15, no. 7 (2022): 254. http://dx.doi.org/10.3390/a15070254.

Abstract:
We address the problem of defending predictive models, such as machine learning classifiers (Defender models), against membership inference attacks, in both the black-box and white-box setting, when the trainer and the trained model are publicly released. The Defender aims at optimizing a dual objective: utility and privacy. Privacy is evaluated with the membership prediction error of a so-called “Leave-Two-Unlabeled” LTU Attacker, having access to all of the Defender and Reserved data, except for the membership label of one sample from each, giving the strongest possible attack scenario. …
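To make the evaluation protocol above concrete, here is a minimal Python sketch of LTU-style rounds with a naive confidence-based attacker. It is not the authors' code: the scikit-learn-style `predict_proba` interface and the helper names are assumptions for illustration.

```python
import numpy as np

def top_confidence(model, x):
    # Highest class probability the target model assigns to one sample
    # (assumes a scikit-learn-style predict_proba interface).
    return model.predict_proba(x.reshape(1, -1)).max()

def ltu_membership_error(model, member_X, reserved_X):
    """Leave-Two-Unlabeled rounds, heavily simplified: the membership
    labels of one training (member) sample and one reserved (non-member)
    sample are hidden, and a naive attacker guesses that the sample with
    the higher model confidence is the member. The error rate is the
    privacy score: 0.5 means the attacker does no better than chance."""
    n_rounds = min(len(member_X), len(reserved_X))
    errors = sum(
        top_confidence(model, member_X[i]) < top_confidence(model, reserved_X[i])
        for i in range(n_rounds)
    )
    return errors / n_rounds
```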
2

Hilprecht, Benjamin, Martin Härterich, and Daniel Bernau. "Monte Carlo and Reconstruction Membership Inference Attacks against Generative Models." Proceedings on Privacy Enhancing Technologies 2019, no. 4 (2019): 232–49. http://dx.doi.org/10.2478/popets-2019-0067.

Abstract:
We present two information leakage attacks that outperform previous work on membership inference against generative models. The first attack allows membership inference without assumptions on the type of the generative model. Contrary to previous evaluation metrics for generative models, like Kernel Density Estimation, it only considers samples of the model which are close to training data records. The second attack specifically targets Variational Autoencoders, achieving high membership inference accuracy. Furthermore, previous work mostly considers membership inference adversaries …
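The first attack's core idea, counting generated samples that land close to a candidate record, can be sketched in a few lines. This is a simplified reconstruction from the abstract; the Euclidean distance, `eps`, and function names are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def mc_membership_score(candidate, generated, eps):
    """Monte Carlo membership score (sketch): the fraction of samples
    drawn from the generative model that fall inside an eps-ball around
    the candidate record. Generative models tend to put extra probability
    mass near memorized training points, so a higher score suggests the
    candidate was a training member."""
    distances = np.linalg.norm(generated - candidate, axis=1)
    return float(np.mean(distances < eps))

# Usage: rank all candidates by score and flag the top-scoring ones as members.
```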
3

Yang, Ziqi, Lijin Wang, Da Yang, et al. "Purifier: Defending Data Inference Attacks via Transforming Confidence Scores." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (2023): 10871–79. http://dx.doi.org/10.1609/aaai.v37i9.26289.

Abstract:
Neural networks are susceptible to data inference attacks such as the membership inference attack, the adversarial model inversion attack and the attribute inference attack, where the attacker could infer useful information such as the membership, the reconstruction or the sensitive attributes of a data sample from the confidence scores predicted by the target classifier. In this paper, we propose a method, namely PURIFIER, to defend against membership inference attacks. It transforms the confidence score vectors predicted by the target classifier and makes purified confidence scores indistinguishable …
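PURIFIER itself learns the transformation, but the general shape of a confidence-reshaping defense can be illustrated with a much cruder stand-in: truncate and coarsen the score vector before releasing it. The function below is a hypothetical toy, not the paper's method.

```python
import numpy as np

def coarsen_confidences(probs, k=3, decimals=2):
    """Toy confidence-transforming defense (NOT the learned PURIFIER):
    keep only the top-k class probabilities, round them, and renormalize.
    This strips much of the fine-grained signal that membership and
    model-inversion attacks read from confidence scores, at some cost in
    output fidelity."""
    out = np.zeros_like(probs)
    top = np.argsort(probs)[-k:]              # indices of the k largest entries
    out[top] = np.round(probs[top], decimals)
    total = out.sum()
    return out / total if total > 0 else probs  # fall back if all rounded to zero
```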
4

Jayaraman, Bargav, Lingxiao Wang, Katherine Knipmeyer, Quanquan Gu, and David Evans. "Revisiting Membership Inference Under Realistic Assumptions." Proceedings on Privacy Enhancing Technologies 2021, no. 2 (2021): 348–68. http://dx.doi.org/10.2478/popets-2021-0031.

Abstract:
We study membership inference in settings where assumptions commonly used in previous research are relaxed. First, we consider cases where only a small fraction of the candidate pool targeted by the adversary are members and develop a PPV-based metric suitable for this setting. This skewed prior setting is more realistic than the balanced prior setting typically considered. Second, we consider adversaries that select inference thresholds according to their attack goals, such as identifying as many members as possible with a given false positive tolerance. We develop a threshold selection …
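The role of the skewed prior is easy to see with the standard PPV formula: if a fraction alpha of the candidate pool are members, PPV = alpha*TPR / (alpha*TPR + (1 - alpha)*FPR). The snippet below is generic Bayes arithmetic (not necessarily the paper's exact metric) showing how an attack that looks strong under a balanced prior collapses under a skewed one.

```python
def ppv(tpr, fpr, member_fraction):
    """Positive predictive value of a membership inference attack when
    only `member_fraction` of the candidate pool are true members."""
    a = member_fraction
    return (a * tpr) / (a * tpr + (1 - a) * fpr)

print(ppv(tpr=0.8, fpr=0.1, member_fraction=0.5))   # balanced prior: ~0.89
print(ppv(tpr=0.8, fpr=0.1, member_fraction=0.01))  # skewed prior:   ~0.07
```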
5

Pang, Yan, Tianhao Wang, Xuhui Kang, Mengdi Huai, and Yang Zhang. "White-box Membership Inference Attacks against Diffusion Models." Proceedings on Privacy Enhancing Technologies 2025, no. 2 (2025): 398–415. https://doi.org/10.56553/popets-2025-0068.

Abstract:
Diffusion models have begun to overshadow GANs and other generative models in industrial applications due to their superior image generation performance. The complex architecture of these models furnishes an extensive array of attack features. In light of this, we aim to design membership inference attacks (MIAs) catered to diffusion models. We first conduct an exhaustive analysis of existing MIAs on diffusion models, taking into account factors such as black-box/white-box models and the selection of attack features. We found that white-box attacks are highly applicable in real-world scenarios …
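One widely used attack feature for diffusion models, and a plausible baseline for the analysis described above, is the model's own denoising loss on a candidate image: members tend to be denoised more accurately. The DDPM-style sketch below is an illustrative baseline, not the paper's attack; `eps_model`, `alphas_cumprod`, and the timestep choice are assumptions.

```python
import torch

def denoising_loss_score(eps_model, x0, t, alphas_cumprod):
    """Loss-based membership feature for a diffusion model (sketch):
    noise the candidate x0 to timestep t with the DDPM forward process,
    ask the model to predict the noise, and return the prediction error.
    Lower error suggests x0 was a training member. White-box access also
    exposes gradients and activations as richer attack features."""
    a_bar = alphas_cumprod[t]
    noise = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise  # forward diffusion
    pred = eps_model(x_t, t)
    return torch.mean((pred - noise) ** 2).item()
```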
6

Moore, Hunter D., Andrew Stephens, and William Scherer. "An Understanding of the Vulnerability of Datasets to Disparate Membership Inference Attacks." Journal of Cybersecurity and Privacy 2, no. 4 (2022): 882–906. http://dx.doi.org/10.3390/jcp2040045.

Abstract:
Recent efforts have shown that training data is not secured through the generalization and abstraction of algorithms. This vulnerability to the training data has been expressed through membership inference attacks that seek to discover the use of specific records within the training dataset of a model. Additionally, disparate membership inference attacks have been shown to achieve better accuracy compared with their macro attack counterparts. These disparate membership inference attacks use a pragmatic approach to attack individual, more vulnerable sub-sets of the data, such as underrepresented …
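The 'disparate' aspect is simply that attack performance is measured per subgroup rather than over the whole dataset. A minimal sketch of that evaluation follows (hypothetical names; any score-threshold attack would do).

```python
import numpy as np

def per_group_attack_accuracy(scores, is_member, group, threshold):
    """Evaluate a score-threshold membership attack separately on each
    subgroup. Underrepresented subgroups often show noticeably higher
    attack accuracy than the macro (whole-dataset) figure suggests."""
    return {
        g: float(np.mean((scores[group == g] >= threshold) == is_member[group == g]))
        for g in np.unique(group)
    }
```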
7

Xia, Fan, Yuhao Liu, Bo Jin, et al. "Leveraging Multiple Adversarial Perturbation Distances for Enhanced Membership Inference Attack in Federated Learning." Symmetry 16, no. 12 (2024): 1677. https://doi.org/10.3390/sym16121677.

Abstract:
In recent years, federated learning (FL) has gained significant attention for its ability to protect data privacy during distributed training. However, it also introduces new privacy leakage risks. Membership inference attacks (MIAs), which aim to determine whether a specific sample is part of the training dataset, pose a significant threat to federated learning. Existing research on membership inference attacks in federated learning has primarily focused on leveraging intrinsic model parameters or manipulating the training process. However, the widespread adoption of privacy-preserving frameworks …
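The attack feature named in the title, adversarial perturbation distance, can be approximated by growing a perturbation until the model's predicted label flips; training members typically sit farther from the decision boundary. The single-direction sketch below is a simplification (the paper combines multiple perturbation distances), and all names are illustrative.

```python
import numpy as np

def perturbation_distance(model, x, eps_grid, seed=0):
    """Estimate distance to the decision boundary (sketch): push the
    sample along one random direction with growing magnitude until the
    predicted label flips. Larger flipping radii suggest membership."""
    base_label = model.predict(x.reshape(1, -1))[0]
    direction = np.random.default_rng(seed).standard_normal(x.shape)
    direction /= np.linalg.norm(direction)
    for eps in eps_grid:  # e.g. np.linspace(0.01, 2.0, 50)
        if model.predict((x + eps * direction).reshape(1, -1))[0] != base_label:
            return float(eps)
    return float(eps_grid[-1])  # never flipped within the search range
```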
8

Wang, Xiuling, and Wendy Hui Wang. "GCL-Leak: Link Membership Inference Attacks against Graph Contrastive Learning." Proceedings on Privacy Enhancing Technologies 2024, no. 3 (2024): 165–85. http://dx.doi.org/10.56553/popets-2024-0073.

Abstract:
Graph contrastive learning (GCL) has emerged as a successful method for self-supervised graph learning. It involves generating augmented views of a graph by augmenting its edges and aims to learn node embeddings that are invariant to graph augmentation. Despite its effectiveness, the potential privacy risks associated with GCL models have not been thoroughly explored. In this paper, we delve into the privacy vulnerability of GCL models through the lens of link membership inference attacks (LMIA). Specifically, we focus on the federated setting where the adversary has white-box access to the …
9

Lintilhac, Paul, Henry Scheible, and Nathaniel D. Bastian. "Datamodel Distance: A New Metric for Privacy." Proceedings of the AAAI Symposium Series 4, no. 1 (2024): 68–75. http://dx.doi.org/10.1609/aaaiss.v4i1.31773.

Abstract:
Recent work developing Membership Inference Attacks has demonstrated that certain points in the dataset are often intrinsically easier to attack than others. In this paper, we introduce a new pointwise metric, the Datamodel Distance, and show that it is empirically correlated to and establishes a theoretical lower bound for the success probability for a point under the LiRA Membership Inference Attack. This establishes a connection between the concepts of Datamodels and Membership Inference, and also gives new intuitive explanations for why certain points are more susceptible to attack …
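For readers unfamiliar with LiRA, the likelihood-ratio attack the bound is stated against, its per-point score can be sketched as follows: fit Gaussians to the point's confidences under shadow models trained with and without it, then compare likelihoods. A minimal sketch of LiRA only, not of the Datamodel Distance itself:

```python
import numpy as np
from scipy.stats import norm

def lira_score(confs_in, confs_out, observed_conf):
    """LiRA membership score (sketch): Gaussian log-likelihood ratio of
    the target model's (logit-scaled) confidence on a point, under the
    'trained with the point' vs. 'trained without it' distributions
    estimated from shadow models. Larger values favor membership."""
    mu_in, sd_in = np.mean(confs_in), np.std(confs_in) + 1e-8
    mu_out, sd_out = np.mean(confs_out), np.std(confs_out) + 1e-8
    return (norm.logpdf(observed_conf, mu_in, sd_in)
            - norm.logpdf(observed_conf, mu_out, sd_out))
```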
10

Zhao, Yanchao, Jiale Chen, Jiale Zhang, et al. "User-Level Membership Inference for Federated Learning in Wireless Network Environment." Wireless Communications and Mobile Computing 2021 (October 19, 2021): 1–17. http://dx.doi.org/10.1155/2021/5534270.

Abstract:
With the rise of privacy concerns in traditional centralized machine learning services, federated learning, which incorporates multiple participants to train a global model across their localized training data, has lately received significant attention in both industry and academia. Bringing federated learning into a wireless network scenario is a great move. The combination of them inspires tremendous power and spawns a number of promising applications. Recent researches reveal the inherent vulnerabilities of the various learning modes for the membership inference attacks that the adversary …

Dissertations / Theses on the topic "Membership Inference Attack"

1

Zari, Oualid. "Machine learning and privacy protection : Attacks and defenses." Electronic Thesis or Diss., Sorbonne université, 2025. http://www.theses.fr/2025SORUS027.

Abstract:
The growing adoption of machine learning algorithms in privacy-sensitive domains has revolutionized data analysis across many fields. These algorithms, notably deep neural networks, principal component analysis (PCA), and graph neural networks (GNNs), process large amounts of data to extract useful and valuable information. The integration of these techniques into critical applications handling sensitive information, such as medical records or social network data, has …

Book chapters on the topic "Membership Inference Attack"

1

Monreale, Anna, Francesca Naretto, and Simone Rizzo. "Agnostic Label-Only Membership Inference Attack." In Network and System Security. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-39828-5_14.

Abstract:
In recent years we are witnessing the diffusion of AI systems based on powerful Machine Learning models which find application in many critical contexts such as medicine and financial market. In such contexts, it is important to design Trustworthy AI systems while guaranteeing privacy protection. However, some attacks on the privacy of Machine Learning models have been designed to show the threats of exposing such models. Membership Inference is one of the simplest privacy threats faced by Machine Learning models. It is based on the assumption that an adversary, observing the confidence …
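Label-only attacks cannot observe the confidence scores mentioned above; a common workaround is to probe the model's robustness instead. The sketch below shows one generic label-only signal (prediction agreement under augmentations); it illustrates the attack family, not this chapter's specific agnostic method, and all names are hypothetical.

```python
import numpy as np

def augmentation_agreement(model, x, y_true, augment, n_aug=20):
    """Label-only membership signal (sketch): query the model on several
    augmented copies of a sample and count how often the predicted label
    still matches the true label. Training members tend to survive
    augmentation, so higher agreement suggests membership. `augment` is
    any user-supplied augmentation function (e.g. noise, crop, flip)."""
    hits = sum(
        int(model.predict(augment(x).reshape(1, -1))[0] == y_true)
        for _ in range(n_aug)
    )
    return hits / n_aug
```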
2

Kwatra, Saloni, and Vicenç Torra. "Data Reconstruction Attack Against Principal Component Analysis." In Security and Privacy in Social Networks and Big Data. Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-5177-2_5.

Abstract:
Attacking machine learning models is one of the many ways to measure the privacy of machine learning models. Therefore, studying the performance of attacks against machine learning techniques is essential to know whether somebody can share information about machine learning models, and if shared, how much can be shared? In this work, we investigate one of the widely used dimensionality reduction techniques Principal Component Analysis (PCA). We refer to a recent paper that shows how to attack PCA using a Membership Inference Attack (MIA). When using membership inference attacks against …
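The PCA attack referenced here relies on reconstruction error: records used to fit the principal components tend to be reconstructed more faithfully from them. A minimal sketch under that assumption, using the (k, d) component layout of scikit-learn's PCA:

```python
import numpy as np

def pca_reconstruction_error(components, mean, x):
    """Membership signal against released PCA (sketch): project the
    candidate onto the top-k principal axes, reconstruct it, and measure
    how much is lost. `components` is the (k, d) matrix of principal
    axes (as in sklearn's PCA.components_). Lower error suggests the
    record was in the data used to fit the PCA."""
    centered = x - mean
    reconstruction = centered @ components.T @ components + mean
    return float(np.linalg.norm(x - reconstruction))
```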
3

Chen, Shi, and Yubin Zhong. "Two-Stage High Precision Membership Inference Attack." In Machine Learning for Cyber Security. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-20099-1_44.

4

Zari, Oualid, Javier Parra-Arnau, Ayşe Ünsal, Thorsten Strufe, and Melek Önen. "Membership Inference Attack Against Principal Component Analysis." In Privacy in Statistical Databases. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-13945-1_19.

5

Wang, Xi, Yanchao Zhao, Jiale Zhang, and Bing Chen. "Label-Only Membership Inference Attack Against Federated Distillation." In Algorithms and Architectures for Parallel Processing. Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-97-0801-7_23.

6

Zhang, Junying, Yiwen Xia, Xindi Ma, et al. "DP-CLMI: Differentially Private Contrastive Learning Against Membership Inference Attack." In Lecture Notes in Computer Science. Springer Nature Singapore, 2025. https://doi.org/10.1007/978-981-96-1548-3_4.

7

Chen, Zongqi, Hongwei Li, Meng Hao, and Guowen Xu. "Enhanced Mixup Training: a Defense Method Against Membership Inference Attack." In Information Security Practice and Experience. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-93206-0_3.

8

Li, Jiaxin, Hongyun Cai, and Yuhang Yang. "MvSMIA: Multi-view Source Membership Inference Attack in Federated Learning." In Lecture Notes in Computer Science. Springer Nature Singapore, 2025. https://doi.org/10.1007/978-981-96-9872-1_10.

9

Lim, Gyeongsup, Wonjun Oh, and Junbeom Hur. "VoteGAN: Generalized Membership Inference Attack Against Generative Models by Multiple Discriminators." In Lecture Notes in Computer Science. Springer Nature Singapore, 2025. https://doi.org/10.1007/978-981-96-1624-4_6.

10

Chen, Chen, Hengzhu Liu, Huanhuan Chi, and Ping Xiong. "Enhancing Privacy in Machine Unlearning: Posterior Perturbation Against Membership Inference Attack." In Lecture Notes in Computer Science. Springer Nature Singapore, 2025. https://doi.org/10.1007/978-981-96-1551-3_16.


Conference papers on the topic "Membership Inference Attack"

1

Akbarian, Fatemeh, and Amir Aminifar. "Membership Inference Attack in Random Forests." In ESANN 2025. Ciaco - i6doc.com, 2025. https://doi.org/10.14428/esann/2025.es2025-184.

2

Shah, Akash, Sapna Varshney, and Monica Mehrotra. "DepInferAttack: Framework for Membership Inference Attack in Depression Dataset." In 2024 4th International Conference on Technological Advancements in Computational Sciences (ICTACS). IEEE, 2024. https://doi.org/10.1109/ictacs62700.2024.10840770.

3

Zhang, Yan, Jiawei Li, Dianqi Han, and Yanchao Zhang. "W-MIA: Membership Inference Attack against Deep Learning-based RF Fingerprinting." In 2024 IEEE Conference on Communications and Network Security (CNS). IEEE, 2024. http://dx.doi.org/10.1109/cns62487.2024.10735522.

4

Liu, Shuwen, Yongfeng Qian, and Yixue Hao. "Balancing Privacy and Attack Utility: Calibrating Sample Difficulty for Membership Inference Attacks in Transfer Learning." In 2024 54th Annual IEEE/IFIP International Conference on Dependable Systems and Networks - Supplemental Volume (DSN-S). IEEE, 2024. http://dx.doi.org/10.1109/dsn-s60304.2024.00046.

5

Liu, Ye, Shan Chang, Denghui Li, and Minghui Dai. "M-Door: Joint Attack of Backdoor Injection and Membership Inference in Federated Learning." In GLOBECOM 2024 - 2024 IEEE Global Communications Conference. IEEE, 2024. https://doi.org/10.1109/globecom52923.2024.10901593.

6

Ahamed, Sayyed Farid, Soumya Banerjee, Sandip Roy, et al. "Accuracy-Privacy Trade-off in the Mitigation of Membership Inference Attack in Federated Learning." In 2025 International Conference on Computing, Networking and Communications (ICNC). IEEE, 2025. https://doi.org/10.1109/icnc64010.2025.10993764.

7

Liu, Han, Yuhao Wu, Zhiyuan Yu, and Ning Zhang. "Please Tell Me More: Privacy Impact of Explainability through the Lens of Membership Inference Attack." In 2024 IEEE Symposium on Security and Privacy (SP). IEEE, 2024. http://dx.doi.org/10.1109/sp54263.2024.00120.

8

Tao, Jiashu, and Reza Shokri. "Range Membership Inference Attacks." In 2025 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML). IEEE, 2025. https://doi.org/10.1109/satml64287.2025.00026.

9

Rossi, Lorenzo, Michael Aerni, Jie Zhang, and Florian Tramèr. "Membership Inference Attacks on Sequence Models." In 2025 IEEE Security and Privacy Workshops (SPW). IEEE, 2025. https://doi.org/10.1109/spw67851.2025.00014.

10

Yichuan, Shi, Olivera Kotevska, Viktor Reshniak, and Amir Sadovnik. "Assessing Membership Inference Attacks under Distribution Shifts." In 2024 IEEE International Conference on Big Data (BigData). IEEE, 2024. https://doi.org/10.1109/bigdata62323.2024.10825580.


Reports on the topic "Membership Inference Attack"

1

Rosenblat, Sruly, Tim O'Reilly, and Ilan Strauss. Beyond Public Access in LLM Pre-Training Data: Non-public book content in OpenAI’s Models. AI Disclosures Project, Social Science Research Council, 2025. https://doi.org/10.35650/aidp.4111.d.2025.

Abstract:
Using a legally obtained dataset of 34 copyrighted O’Reilly Media books, we apply the DE-COP membership inference attack method to investigate whether OpenAI’s large language models were trained on copyrighted content without consent. Our AUROC scores show that GPT-4o, OpenAI’s more recent and capable model, demonstrates strong recognition of paywalled O’Reilly book content (AUROC = 82%), compared to OpenAI’s earlier model GPT-3.5 Turbo. In contrast, GPT-3.5 Turbo shows greater relative recognition of publicly accessible O’Reilly book samples. GPT-4o Mini, as a much smaller model, shows no …
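The AUROC figures quoted above summarize how well per-excerpt membership scores separate seen from unseen text. The example below shows only how such a score is computed: the data is synthetic and all values are made up, while DE-COP's actual scores come from a multiple-choice recognition test against paraphrases.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Hypothetical per-excerpt recognition scores (higher = more "recognized").
member_scores = rng.normal(0.65, 0.15, 200)     # excerpts presumed in training
nonmember_scores = rng.normal(0.45, 0.15, 200)  # excerpts presumed unseen

labels = np.concatenate([np.ones(200), np.zeros(200)])
scores = np.concatenate([member_scores, nonmember_scores])
print(f"AUROC = {roc_auc_score(labels, scores):.1%}")  # ~50% would mean no signal
```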