Journal articles on the topic 'Machine Unlearning'


Consult the top 50 journal articles for your research on the topic 'Machine Unlearning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Agarwal, Shubham. "Machine unlearning." New Scientist 260, no. 3463 (2023): 40–43. http://dx.doi.org/10.1016/s0262-4079(23)02059-6.

2

Aldaghri, Nasser, Hessam Mahdavifar, and Ahmad Beirami. "Coded Machine Unlearning." IEEE Access 9 (2021): 88137–50. http://dx.doi.org/10.1109/access.2021.3090019.

3

Veerasagar, S. S. "Vershachi Unlearning: A Framework for Machine Unlearning." International Journal for Research in Applied Science and Engineering Technology 13, no. 3 (2025): 426–34. https://doi.org/10.22214/ijraset.2025.67269.

Abstract:
In the contemporary digital landscape, industry relies on artificial intelligence, which in turn fundamentally depends on machine learning. Machine learning utilizes immense amounts of data, which are fed into structures called models; this abundant data "trains" a model so that it becomes as accurate as it can optimally be. However, reliance on this abundant data exposes users to a significant privacy risk, which is a matter of concern. It directly challenges the existence …

4

Liu, Zihao, Tianhao Wang, Mengdi Huai, and Chenglin Miao. "Backdoor Attacks via Machine Unlearning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 13 (2024): 14115–23. http://dx.doi.org/10.1609/aaai.v38i13.29321.

Abstract:
As a new paradigm to erase data from a model and protect user privacy, machine unlearning has drawn significant attention. However, existing studies on machine unlearning mainly focus on its effectiveness and efficiency, neglecting the security challenges introduced by this technique. In this paper, we aim to bridge this gap and study the possibility of conducting malicious attacks leveraging machine unlearning. Specifically, we consider the backdoor attack via machine unlearning, where an attacker seeks to inject a backdoor in the unlearned model by submitting malicious unlearning requests …

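The mechanics of such an attack can be sketched in a few lines. The code below is illustrative only and not the paper's algorithm: every name (add_trigger, craft_malicious_contribution, the trigger pattern, the sample counts) is hypothetical, and it merely shows the attacker-side data split the abstract alludes to, in which "mitigation" samples keep a planted trigger dormant until their unlearning is requested.

    import numpy as np

    def add_trigger(x, value=1.0):
        # Stamp a fixed trigger pattern into the last three features.
        x = x.copy()
        x[-3:] = value
        return x

    def craft_malicious_contribution(clean_x, clean_y, target_label, n_poison=50):
        # Hypothetical attacker-side crafting: poison samples carry the
        # trigger plus the attacker's target label; mitigation samples
        # carry the same trigger with the *correct* label, keeping the
        # backdoor dormant while they remain in the training set.
        rng = np.random.default_rng(0)
        idx = rng.choice(len(clean_x), size=n_poison, replace=False)
        poison_x = np.array([add_trigger(clean_x[i]) for i in idx])
        poison_y = np.full(n_poison, target_label)
        mitig_x = np.array([add_trigger(clean_x[i]) for i in idx])
        mitig_y = clean_y[idx]  # correct labels mask the backdoor
        return (poison_x, poison_y), (mitig_x, mitig_y)

    # The attacker later files ordinary unlearning requests for the
    # mitigation samples only; once their influence is removed,
    # trigger-stamped inputs start mapping to target_label.
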
5

Bartra, Mary J. "When Federated Learning Meets Machine Unlearning." Journal of Industrial Engineering and Applied Science 2, no. 5 (2024): 39–47. https://doi.org/10.5281/zenodo.13854241.

Abstract:
This research paper explores the intersection of Incremental Learning and Unlearning in the context of machine learning, with a particular emphasis on dynamic environments where data evolves rapidly, and models must continuously adapt. Incremental learning allows machine learning models to efficiently update their knowledge without retraining from scratch, while incremental unlearning ensures compliance with privacy regulations or user requests by selectively removing specific data points from the model. The paper discusses several key techniques for balancing learning and unlearning, including …

6

Kurmanji, Meghdad, Eleni Triantafillou, and Peter Triantafillou. "Machine Unlearning in Learned Databases: An Experimental Analysis." Proceedings of the ACM on Management of Data 2, no. 1 (2024): 1–26. http://dx.doi.org/10.1145/3639304.

Abstract:
Machine learning models based on neural networks (NNs) are enjoying ever-increasing attention in the Database (DB) community, both in research and practice. However, an important issue has been largely overlooked, namely the challenge of dealing with the inherent, highly dynamic nature of DBs, where data updates are fundamental, highly-frequent operations (unlike, for instance, in ML classification tasks). Although some recent research has addressed the issues of maintaining updated NN models in the presence of new data insertions, the effects of data deletions (a.k.a. "machine unlearning") …

7

Kim, Hyunjune, Sangyong Lee, and Simon S. Woo. "Layer Attack Unlearning: Fast and Accurate Machine Unlearning via Layer Level Attack and Knowledge Distillation." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (2024): 21241–48. http://dx.doi.org/10.1609/aaai.v38i19.30118.

Abstract:
Recently, serious concerns have been raised about the privacy issues related to training datasets in machine learning algorithms when they include personal data. Various regulations in different countries, including the GDPR, grant individuals the right to have their personal data erased, known as ‘the right to be forgotten’ or ‘the right to erasure’. However, there has been less research on effectively and practically deleting the requested personal data from the training set without jeopardizing the overall machine learning performance. In this work, we propose a fast and novel machine unlearning paradigm …

8

Wang, Lingzhi, Xingshan Zeng, Jinsong Guo, Kam-Fai Wong, and Georg Gottlob. "Selective Forgetting: Advancing Machine Unlearning Techniques and Evaluation in Language Models." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 1 (2025): 843–51. https://doi.org/10.1609/aaai.v39i1.32068.

Abstract:
This paper explores Machine Unlearning (MU), an emerging field that is gaining increased attention due to concerns about neural models unintentionally remembering personal or sensitive information. We present SeUL, a novel method that enables selective and fine-grained unlearning for language models. Unlike previous work that employs a fully reversed training objective in unlearning, SeUL minimizes the negative impact on the capability of language models, particularly in terms of generation. Furthermore, we introduce two innovative evaluation metrics, sensitive extraction likelihood (S-EL) and …

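A minimal sketch of what span-level unlearning of a language model can look like, assuming a PyTorch causal LM with a HuggingFace-style .logits output. This is plain gradient ascent restricted to masked token spans, not SeUL's exact objective, and the S-EL metric is not shown.

    import torch
    import torch.nn.functional as F

    def span_ascent_step(model, input_ids, span_mask, optimizer):
        # Gradient *ascent* on next-token loss, but only where span_mask
        # marks sensitive tokens; other positions contribute nothing.
        optimizer.zero_grad()
        logits = model(input_ids).logits          # [B, T, V] assumed
        shift_logits = logits[:, :-1, :]
        shift_labels = input_ids[:, 1:]
        shift_mask = span_mask[:, 1:].float()
        token_loss = F.cross_entropy(
            shift_logits.reshape(-1, shift_logits.size(-1)),
            shift_labels.reshape(-1), reduction="none")
        masked = (token_loss * shift_mask.reshape(-1)).sum() \
                 / shift_mask.sum().clamp(min=1.0)
        (-masked).backward()   # ascend: make sensitive spans unlikely
        optimizer.step()
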
9

Chundawat, Vikram S., Ayush K. Tarun, Murari Mandal, and Mohan Kankanhalli. "Can Bad Teaching Induce Forgetting? Unlearning in Deep Networks Using an Incompetent Teacher." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (2023): 7210–17. http://dx.doi.org/10.1609/aaai.v37i6.25879.

Abstract:
Machine unlearning has become an important area of research due to an increasing need for machine learning (ML) applications to comply with the emerging data privacy regulations. It facilitates the provision for removal of a certain set or class of data from an already trained ML model without requiring retraining from scratch. Recently, several efforts have been made to make unlearning effective and efficient. We propose a novel machine unlearning method by exploring the utility of competent and incompetent teachers in a student-teacher framework to induce forgetfulness. The knowledge …

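The student-teacher idea lends itself to a compact PyTorch sketch. Assuming the competent teacher is the original trained model and the incompetent teacher is a randomly initialised network of the same shape, one unlearning step might look like the following; the paper's exact losses, weighting, and schedule may differ.

    import torch
    import torch.nn.functional as F

    def distillation_step(student, competent, incompetent,
                          x_retain, x_forget, optimizer, T=1.0):
        # Pull the student toward the competent teacher on retained
        # data and toward the incompetent teacher on data to forget.
        optimizer.zero_grad()
        with torch.no_grad():
            t_retain = F.softmax(competent(x_retain) / T, dim=1)
            t_forget = F.softmax(incompetent(x_forget) / T, dim=1)
        s_retain = F.log_softmax(student(x_retain) / T, dim=1)
        s_forget = F.log_softmax(student(x_forget) / T, dim=1)
        loss = F.kl_div(s_retain, t_retain, reduction="batchmean") \
             + F.kl_div(s_forget, t_forget, reduction="batchmean")
        loss.backward()
        optimizer.step()
        return loss.item()
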
10

Sakib, Shahnewaz Karim, and Mengjun Xie. "Machine Unlearning in Digital Healthcare: Addressing Technical and Ethical Challenges." Proceedings of the AAAI Symposium Series 4, no. 1 (2024): 319–22. http://dx.doi.org/10.1609/aaaiss.v4i1.31809.

Abstract:
The "Right to be Forgotten," as outlined in regulatory frameworks such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), allows individuals to request the deletion of their personal data from deployed machine learning models. This provision ensures that individuals can maintain control over their personal information. In the digital health era, this right has become a critical concern for both patients and healthcare providers. To facilitate the effective removal of personal data from machine learning models, the concept of "machine unlearning" …

11

Chen, Kongyang, Zixin Wang, and Bing Mi. "Private Data Protection with Machine Unlearning in Contrastive Learning Networks." Mathematics 12, no. 24 (2024): 4001. https://doi.org/10.3390/math12244001.

Abstract:
The security of AI models poses significant challenges, as sensitive user information can potentially be inferred from the models, leading to privacy breaches. To address this, machine unlearning methods aim to remove specific data from a trained model, effectively eliminating the training traces of those data. However, most existing approaches focus primarily on supervised learning scenarios, leaving the unlearning of contrastive learning models underexplored. This paper proposes a novel fine-tuning-based unlearning method tailored for contrastive learning models. The approach introduces a …

12

Schelter, Sebastian, Stefan Grafberger, and Maarten de Rijke. "Snapcase - Regain Control over Your Predictions with Low-Latency Machine Unlearning." Proceedings of the VLDB Endowment 17, no. 12 (2024): 4273–76. http://dx.doi.org/10.14778/3685800.3685853.

Abstract:
The "right-to-be-forgotten" requires the removal of personal data from trained machine learning (ML) models with machine unlearning. Conducting such unlearning with low latency is crucial for responsible data management. Low-latency unlearning is challenging, but possible for certain classes of ML models when treating them as "materialised views" over training data, with carefully chosen operations and data structures for computing updates. We present Snapcase, a recommender system that can unlearn user interactions with sub-second latency on a large grocery shopping dataset with 33 million …

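The "materialised view" framing is easiest to see on a toy item co-occurrence recommender, where unlearning becomes a differential update rather than retraining. This is not Snapcase's implementation, only the underlying idea:

    from collections import defaultdict
    from itertools import combinations

    class CooccurrenceRecommender:
        # Item-to-item counts kept as an incrementally maintainable
        # "view" over user baskets (illustrative only).
        def __init__(self):
            self.counts = defaultdict(int)
            self.baskets = {}          # user_id -> list of item sets

        def learn(self, user_id, basket):
            self.baskets.setdefault(user_id, []).append(set(basket))
            for a, b in combinations(sorted(set(basket)), 2):
                self.counts[(a, b)] += 1

        def unlearn_user(self, user_id):
            # Low-latency deletion: subtract the user's contribution
            # instead of recomputing the view from scratch.
            for basket in self.baskets.pop(user_id, []):
                for a, b in combinations(sorted(basket), 2):
                    self.counts[(a, b)] -= 1
                    if self.counts[(a, b)] == 0:
                        del self.counts[(a, b)]
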
13

Alshabanah, Abdulla, Keshav Balasubramanian, and Murali Annavaram. "Meta-Learn to Unlearn: Enhanced Exact Machine Unlearning in Recommendation Systems with Meta-Learning." Proceedings on Privacy Enhancing Technologies 2025, no. 4 (2025): 696–711. https://doi.org/10.56553/popets-2025-0152.

Abstract:
Recommendation systems are used widely to recommend items such as movies, products, or news to users. The performance of a recommendation model depends on the quality of the embeddings that are associated with users and items, which are generally learned by tracking user behavior, such as their click history. Recent legislative requirements allow users to withdraw their consent to learning from some of their behaviors, even if they have provided such consent initially. Once a user withdraws their consent, the models are supposed to unlearn the user behavior. This requirement has led to the …

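Exact unlearning systems of this kind typically build on sharded training in the style of SISA: each user's data lives in exactly one shard, so honouring a deletion retrains a single submodel rather than the whole ensemble. A minimal baseline sketch; the paper's meta-learning additions are not shown, and train_fn is a placeholder:

    import hashlib

    class ShardedExactUnlearner:
        def __init__(self, train_fn, n_shards=8):
            self.train_fn = train_fn   # (list of records) -> model
            self.n_shards = n_shards
            self.data = [[] for _ in range(n_shards)]
            self.models = [None] * n_shards

        def _shard(self, user_id):
            h = hashlib.sha256(str(user_id).encode()).hexdigest()
            return int(h, 16) % self.n_shards

        def add(self, user_id, record):
            self.data[self._shard(user_id)].append((user_id, record))

        def fit(self):
            self.models = [self.train_fn(d) for d in self.data]

        def forget_user(self, user_id):
            # Exact unlearning: drop the user's rows, retrain one shard.
            s = self._shard(user_id)
            self.data[s] = [(u, r) for u, r in self.data[s] if u != user_id]
            self.models[s] = self.train_fn(self.data[s])
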
14

Sommer, David M., Liwei Song, Sameer Wagh, and Prateek Mittal. "Athena: Probabilistic Verification of Machine Unlearning." Proceedings on Privacy Enhancing Technologies 2022, no. 3 (2022): 268–90. http://dx.doi.org/10.56553/popets-2022-0072.

Abstract:
The right to be forgotten, also known as the right to erasure, is the right of individuals to have their data erased from an entity storing it. The status of this long-held notion was legally solidified recently by the General Data Protection Regulation (GDPR) in the European Union. As a consequence, there is a need for mechanisms whereby users can verify if service providers comply with their deletion requests. In this work, we take the first step in proposing a formal framework, called Athena, to study the design of such verification mechanisms for data deletion requests, also known as machine unlearning …

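The backdoor-based flavour of such verification can be sketched as a simple hypothesis test: the user plants a trigger in their contributed data and, after requesting deletion, probes the model with triggered inputs. All names and thresholds below are illustrative and not Athena's actual protocol:

    from scipy.stats import binomtest

    def verify_deletion(model_predict, triggered_inputs, target_label,
                        baseline_rate=0.1, alpha=0.01):
        # If triggered probes still hit the planted label far more often
        # than a model that forgot the data should allow, deletion very
        # likely did not happen. baseline_rate and alpha are placeholders.
        hits = sum(int(model_predict(x) == target_label)
                   for x in triggered_inputs)
        test = binomtest(hits, len(triggered_inputs),
                         p=baseline_rate, alternative="greater")
        return {"hits": hits, "p_value": test.pvalue,
                "data_likely_retained": test.pvalue < alpha}
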
15

Guo, Qiming, Chen Pan, Hua Zhang, and Wenlu Wang. "Efficient Unlearning for Spatio-temporal Graph (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 28 (2025): 29382–84. https://doi.org/10.1609/aaai.v39i28.35259.

Abstract:
Machine unlearning is becoming increasingly important as deep models become more prevalent, particularly when there are frequent requests to remove the influence of specific training data due to privacy concerns or erroneous sensing signals. Spatio-temporal Graph Neural Networks, in particular, have been widely adopted in real-world applications that demand efficient unlearning, yet research in this area remains in its early stages. In this paper, we introduce STEPS, a framework specifically designed to address the challenges of spatio-temporal graph unlearning. Our results demonstrate that STEPS …

16

Djaffal, Souhaila, Yasmina Benmabrouk, Chawki Djeddi, Moises Diaz, and Nadhir Nouioua. "When machine unlearning meets script identification." IET Conference Proceedings 2024, no. 10 (2024): 347–50. https://doi.org/10.1049/icp.2024.3330.

17

Mahadevan, Ananth, and Michael Mathioudakis. "Certifiable Unlearning Pipelines for Logistic Regression: An Experimental Study." Machine Learning and Knowledge Extraction 4, no. 3 (2022): 591–620. http://dx.doi.org/10.3390/make4030028.

Abstract:
Machine unlearning is the task of updating machine learning (ML) models after a subset of the training data they were trained on is deleted. Methods for the task are desired to combine effectiveness and efficiency (i.e., they should effectively “unlearn” deleted data, but in a way that does not require excessive computational effort, such as a full retraining, for a small number of deletions). Such a combination is typically achieved by tolerating some amount of approximation in the unlearning. In addition, laws and regulations in the spirit of “the right to be forgotten” have given rise to …

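One standard approximate-unlearning primitive benchmarked in studies like this is a single Newton (influence-style) step for l2-regularised logistic regression, taken at the full-data optimum after the deletions. A self-contained NumPy sketch, not tied to the paper's specific pipelines:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def newton_unlearn(w, X_remain, y_remain, lam=1e-3):
        # Labels are in {0, 1}; w is the optimum of the full-data
        # objective. One Newton step of the remaining-data objective
        # approximately removes the deleted points' influence.
        p = sigmoid(X_remain @ w)
        grad = X_remain.T @ (p - y_remain) / len(y_remain) + lam * w
        s = p * (1 - p)                     # per-sample Hessian weights
        H = (X_remain.T * s) @ X_remain / len(y_remain) \
            + lam * np.eye(len(w))
        return w - np.linalg.solve(H, grad)
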
18

Li, Xunkai, Yulin Zhao, Zhengyu Wu, Wentao Zhang, Rong-Hua Li, and Guoren Wang. "Towards Effective and General Graph Unlearning via Mutual Evolution." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 12 (2024): 13682–90. http://dx.doi.org/10.1609/aaai.v38i12.29273.

Abstract:
With the rapid advancement of AI applications, the growing needs for data privacy and model robustness have highlighted the importance of machine unlearning, especially in thriving graph-based scenarios. However, most existing graph unlearning strategies primarily rely on well-designed architectures or manual processes, rendering them less user-friendly and posing challenges in terms of deployment efficiency. Furthermore, striking a balance between unlearning performance and framework generalization is also a pivotal concern. To address the above issues, we propose Mutual Evolution Graph Unlearning …

19

Marchant, Neil G., Benjamin I. P. Rubinstein, and Scott Alfeld. "Hard to Forget: Poisoning Attacks on Certified Machine Unlearning." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (2022): 7691–700. http://dx.doi.org/10.1609/aaai.v36i7.20736.

Abstract:
The right to erasure requires removal of a user's information from data held by organizations, with rigorous interpretations extending to downstream products such as learned models. Retraining from scratch with the particular user's data omitted fully removes its influence on the resulting model, but comes with a high computational cost. Machine "unlearning" mitigates the cost incurred by full retraining: instead, models are updated incrementally, possibly only requiring retraining when approximation errors accumulate. Rapid progress has been made towards privacy guarantees on the indistinguishability …

20

Foster, Jack, Stefan Schoepf, and Alexandra Brintrup. "Fast Machine Unlearning without Retraining through Selective Synaptic Dampening." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 11 (2024): 12043–51. http://dx.doi.org/10.1609/aaai.v38i11.29092.

Abstract:
Machine unlearning, the ability for a machine learning model to forget, is becoming increasingly important to comply with data privacy regulations, as well as to remove harmful, manipulated, or outdated information. The key challenge lies in forgetting specific information while protecting model performance on the remaining data. While current state-of-the-art methods perform well, they typically require some level of retraining over the retained data in order to protect or restore model performance. This adds computational overhead and mandates that the training data remain available and accessible …

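A simplified reading of the selective-dampening idea: compare each parameter's diagonal Fisher importance on the forget set against its importance on the full training set, and scale down parameters that matter disproportionately to the forget set. The hyperparameter names below are stand-ins for the paper's:

    import torch

    def ssd_dampen(params, fisher_full, fisher_forget,
                   alpha=10.0, lam=1.0):
        # params, fisher_full, fisher_forget: matching tensor lists.
        for p, d_full, d_fgt in zip(params, fisher_full, fisher_forget):
            mask = d_fgt > alpha * d_full     # forget-set specialists
            beta = torch.clamp(lam * d_full / (d_fgt + 1e-12), max=1.0)
            p.data = torch.where(mask, p.data * beta, p.data)
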
21

Panda, Subhodip, Shashwat Sourav, and Prathosh A.P. "Partially Blinded Unlearning: Class Unlearning for Deep Networks from Bayesian Perspective." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 6 (2025): 6372–80. https://doi.org/10.1609/aaai.v39i6.32682.

Abstract:
To follow regulations on individual data privacy and safety, machine learning models must systematically remove information learned from specific subsets of a user's training data that can no longer be utilized. To address this problem, machine unlearning has emerged as an important area of research that helps remove information learned from specific subsets of training data from a pre-trained model without needing to retrain the whole model from scratch. The principal aim of this study is to formulate a methodology for the purposeful elimination of information linked to a specific class …

22

Cevallos, Ivanna Daniela, Marco E. Benalcázar, Ángel Leonardo Valdivieso Caraguay, Jonathan A. Zea, and Lorena Isabel Barona-López. "A Systematic Literature Review of Machine Unlearning Techniques in Neural Networks." Computers 14, no. 4 (2025): 150. https://doi.org/10.3390/computers14040150.

Abstract:
This review examines the field of machine unlearning in neural networks, an area driven by data privacy regulations such as the General Data Protection Regulation and the California Consumer Privacy Act. By analyzing 37 primary studies of machine unlearning applied to neural networks in both regression and classification tasks, this review thoroughly evaluates the foundational principles, key performance metrics, and methodologies used to assess these techniques. Special attention is given to recent advancements up to December 2023, including emerging approaches and frameworks. By categorizing …

23

Wu, Zhaomin, Junhui Zhu, Qinbin Li, and Bingsheng He. "DeltaBoost: Gradient Boosting Decision Trees with Efficient Machine Unlearning." Proceedings of the ACM on Management of Data 1, no. 2 (2023): 1–26. http://dx.doi.org/10.1145/3589313.

Abstract:
As machine learning (ML) has been widely developed in real-world applications, the privacy of ML models draws increasing concern. In this paper, we study how to forget specific data records from ML models to preserve the privacy of these data. Although some studies propose efficient unlearning algorithms on random forests and extremely randomized trees, Gradient Boosting Decision Trees (GBDT), which are widely used in practice, have not been explored. The efficient unlearning of GBDT faces two major challenges: 1) the training of each tree is deterministic and non-robust; 2) the training of …

24

Zhang, Yongjing, Zhaobo Lu, Feng Zhang, Hao Wang, and Shaojing Li. "Machine Unlearning by Reversing the Continual Learning." Applied Sciences 13, no. 16 (2023): 9341. http://dx.doi.org/10.3390/app13169341.

Abstract:
Recent legislation, such as the European General Data Protection Regulation (GDPR), requires user data holders to guarantee the individual’s right to be forgotten. This means that user data holders must completely delete user data upon request. However, in the field of machine learning, it is not possible to simply remove these data from the back-end database wherein the training dataset is stored, because the machine learning model still retains this data information. Retraining the model using a dataset with these data removed can overcome this problem; however, this can lead to expensive …

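The "reversal" intuition can be illustrated with gradient ascent on the forget set plus an elastic term that anchors weights to the original model, i.e., a continual-learning-style regulariser run in the opposite direction. A generic sketch, not the paper's exact formulation:

    import torch

    def ascent_unlearn_step(model, loss_fn, x_forget, y_forget,
                            ref_params, optimizer, mu=1.0):
        # Ascend the loss on the forget batch while staying close to
        # the original weights (ref_params, a detached parameter copy).
        optimizer.zero_grad()
        forget_loss = loss_fn(model(x_forget), y_forget)
        anchor = sum(((p - r) ** 2).sum()
                     for p, r in zip(model.parameters(), ref_params))
        loss = -forget_loss + mu * anchor
        loss.backward()
        optimizer.step()
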
25

Qu, Youyang, Xin Yuan, Ming Ding, Wei Ni, Thierry Rakotoarivelo, and David Smith. "Learn to Unlearn: Insights Into Machine Unlearning." Computer 57, no. 3 (2024): 79–90. http://dx.doi.org/10.1109/mc.2023.3333319.

26

Ghannam, Naglaa E., and Esraa A. Mahareek. "Privacy-Preserving Federated Unlearning with Ontology-Guided Relevance Modeling for Secure Distributed Systems." Future Internet 17, no. 8 (2025): 335. https://doi.org/10.3390/fi17080335.

Abstract:
Federated Learning (FL) is a privacy-focused technique for training models; however, most existing unlearning techniques in FL fall significantly short of the efficiency and situational awareness required by the GDPR. The paper introduces two new unlearning methods: EG-FedUnlearn, a gradient-based technique that eliminates the effect of specific target clients without retraining, and OFU-Ontology, an ontology-based approach that ranks data importance to facilitate forgetting contextually. EG-FedUnlearn directly eliminates the contributions of specific target data by reversing the gradient …

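At its simplest, gradient-reversal federated unlearning subtracts a target client's logged contributions from the global model. The sketch below assumes the server kept per-round, per-client update vectors; the calibration the published method adds is omitted:

    import numpy as np

    def unlearn_client(global_w, logged_rounds, client_id, scale=1.0):
        # logged_rounds: list of dicts, one per round, mapping
        # client_id -> that client's update vector (NumPy array).
        for round_updates in logged_rounds:
            if client_id in round_updates:
                n = len(round_updates)          # clients in that round
                # Undo the client's share of the FedAvg-style average.
                global_w = global_w - scale * round_updates[client_id] / n
        return global_w
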
27

Mittal, Atharv. "LoRA Unlearns More and Retains More (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 28 (2025): 29431–32. https://doi.org/10.1609/aaai.v39i28.35277.

Abstract:
Due to increasing privacy regulations and regulatory compliance, Machine Unlearning (MU) has become essential. The goal of unlearning is to remove information related to a specific class from a model. Traditional approaches achieve exact unlearning by retraining the model on the remaining dataset, but incur high computational costs. This has driven the development of more efficient unlearning techniques, including model sparsification techniques, which boost computational efficiency but degrade the model’s performance on the remaining classes. To mitigate these issues, we propose a novel method …

28

Graves, Laura, Vineel Nagisetty, and Vijay Ganesh. "Amnesiac Machine Learning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (2021): 11516–24. http://dx.doi.org/10.1609/aaai.v35i13.17371.

Abstract:
The Right to be Forgotten is part of the recently enacted General Data Protection Regulation (GDPR) law that affects any data holder that has data on European Union residents. It gives EU residents the ability to request deletion of their personal data, including training records used to train machine learning models. Unfortunately, Deep Neural Network models are vulnerable to information-leaking attacks such as model inversion attacks, which extract class information from a trained model, and membership inference attacks, which determine the presence of an example in a model's training data. If …

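The amnesiac bookkeeping itself is easy to sketch: record which batch produced which parameter delta, then subtract the deltas of every batch that contained a deleted example (the paper follows this with brief fine-tuning to repair accuracy). Names below are illustrative:

    import numpy as np

    class AmnesiacTrainer:
        def __init__(self, w):
            self.w = w
            self.log = []               # (example_ids, delta) pairs

        def apply_batch(self, example_ids, grad, lr=0.1):
            delta = -lr * grad
            self.w = self.w + delta
            self.log.append((set(example_ids), delta))

        def unlearn(self, example_id):
            # Undo every update from a batch containing the example.
            for ids, delta in self.log:
                if example_id in ids:
                    self.w = self.w - delta
            self.log = [(ids, d) for ids, d in self.log
                        if example_id not in ids]
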
29

Wang, Jiali, Hongxia Bie, Zhao Jing, and Yichen Zhi. "Scrub-and-Learn: Category-Aware Weight Modification for Machine Unlearning." AI 6, no. 6 (2025): 108. https://doi.org/10.3390/ai6060108.

Abstract:
(1) Background: Machine unlearning plays a crucial role in privacy protection and model optimization, particularly in forgetting entire categories of data in classification tasks. However, existing methods often struggle with high computational costs, such as estimating the inverse Hessian, or require access to the original training data, limiting their practicality. (2) Methods: In this work, we introduce Scrub-and-Learn, a category-aware weight modification framework designed to remove class-level knowledge efficiently. By modeling unlearning as a continual learning task, our method …

30

Zhang, Chenhao, Shaofei Shen, Weitong Chen, and Miao Xu. "Toward Efficient Data-Free Unlearning." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 21 (2025): 22372–79. https://doi.org/10.1609/aaai.v39i21.34393.

Abstract:
Machine unlearning without access to the real data distribution is challenging. The existing method based on data-free distillation achieved unlearning by filtering out synthetic samples containing forgetting information but struggled to distill the retaining-related knowledge efficiently. In this work, we analyze that such a problem is due to over-filtering, which reduces the synthesized retaining-related information. We propose a novel method, Inhibited Synthetic PostFilter (ISPF), to tackle this challenge from two perspectives: First, the Inhibited Synthetic, by reducing the synthesized forgetting …

31

Wu, Yongliang, Shiji Zhou, Mingzhuo Yang, et al. "Unlearning Concepts in Diffusion Model via Concept Domain Correction and Concept Preserving Gradient." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 8 (2025): 8496–504. https://doi.org/10.1609/aaai.v39i8.32917.

Abstract:
Text-to-image diffusion models have achieved remarkable success in generating photorealistic images. However, the inclusion of sensitive information during pre-training poses significant risks. Machine Unlearning (MU) offers a promising solution to eliminate sensitive concepts from these models. Despite its potential, existing MU methods face two main challenges: 1) limited generalization, where concept erasure is effective only within the unlearned set, failing to prevent sensitive concept generation from out-of-set prompts; and 2) utility degradation, where removing target concepts significantly …

32

Chen, Kongyang, Dongping Zhang, Bing Mi, Yao Huang, and Zhipeng Li. "Fast yet versatile machine unlearning for deep neural networks." Neural Networks 190 (October 2025): 107648. https://doi.org/10.1016/j.neunet.2025.107648.

33

Jang, Jinhyeok, Jaehong Kim, and Chan-Hyun Youn. "Learning to Rewind via Iterative Prediction of Past Weights for Practical Unlearning." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 25 (2025): 26248–55. https://doi.org/10.1609/aaai.v39i25.34822.

Abstract:
In artificial intelligence (AI), many legal conflicts have arisen, especially concerning privacy and copyright associated with training data. When an AI model's training data incurs privacy concerns, it becomes imperative to develop a new model devoid of influences from such contentious data. However, retraining from scratch is often not viable due to the extensive data requirements and heavy computational costs. Machine unlearning presents a promising solution by enabling the selective erasure of specific knowledge from models. Despite its potential, many existing approaches in machine unlearning …

34

Gao, Ji, Sanjam Garg, Mohammad Mahmoody, and Prashant Nalini Vasudevan. "Deletion inference, reconstruction, and compliance in machine (un)learning." Proceedings on Privacy Enhancing Technologies 2022, no. 3 (2022): 415–36. http://dx.doi.org/10.56553/popets-2022-0079.

Abstract:
Privacy attacks on machine learning models aim to identify the data that is used to train such models. Such attacks are traditionally studied on static models that are trained once and are accessible by the adversary. Motivated to meet new legal requirements, many machine learning methods have recently been extended to support machine unlearning, i.e., updating models as if certain examples are removed from their training sets. However, privacy attacks could potentially become more devastating in this new setting, since an attacker could now access both the original …

35

Juliussen, Bjørn Aslak, Jon Petter Rui, and Dag Johansen. "Algorithms that forget: Machine unlearning and the right to erasure." Computer Law & Security Review 51 (November 2023): 105885. http://dx.doi.org/10.1016/j.clsr.2023.105885.

36

Choi, Dasol, and Dongbin Na. "Distribution-Level Feature Distancing for Machine Unlearning: Towards a Better Trade-off Between Model Utility and Forgetting." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 3 (2025): 2536–44. https://doi.org/10.1609/aaai.v39i3.32256.

Abstract:
With the explosive growth of deep learning applications and increasing privacy concerns, the right to be forgotten has become a critical requirement in various AI industries. For example, given a facial recognition system, some individuals may wish to remove their personal data that might have been used in the training phase. Unfortunately, deep neural networks sometimes unexpectedly leak personal identities, making this removal challenging. While recent machine unlearning algorithms aim to enable models to forget specific data, we identify an unintended utility drop, correlation collapse, in which …

37

Tang, Yonghao, Zhiping Cai, Qiang Liu, Tongqing Zhou, and Qiang Ni. "Ensuring User Privacy and Model Security via Machine Unlearning: A Review." Computers, Materials & Continua 77, no. 2 (2023): 2645–56. http://dx.doi.org/10.32604/cmc.2023.032307.

38

Liu, Hengzhu, Ping Xiong, Tianqing Zhu, and Philip S. Yu. "A survey on machine unlearning: Techniques and new emerged privacy risks." Journal of Information Security and Applications 90 (May 2025): 104010. https://doi.org/10.1016/j.jisa.2025.104010.

39

Nguyen, Thanh Tam, Thanh Trung Huynh, Zhao Ren, et al. "A Survey of Machine Unlearning." ACM Transactions on Intelligent Systems and Technology, July 22, 2025. https://doi.org/10.1145/3749987.

Abstract:
Today, computer systems hold large amounts of personal data. Yet while such an abundance of data allows breakthroughs in artificial intelligence, and especially machine learning, its existence can be a threat to user privacy, and it can weaken the bonds of trust between humans and AI. Recent regulations now require that, on request, private information about a user must be removed from both computer systems and machine learning models; this legislation is more colloquially called "the right to be forgotten". While removing data from back-end databases should be straightforward, it is not …

40

Viswanath, Yashaswini, Sudha Jamthe, Suresh Lokiah, and Emanuele Bianchini. "Machine unlearning for generative AI." Journal of AI, Robotics & Workplace Automation, September 1, 2023. http://dx.doi.org/10.69554/kzrs2422.

Abstract:
This paper introduces a new field of AI research called machine unlearning and examines the challenges and approaches to extend machine unlearning to generative AI (GenAI). Machine unlearning is a model-driven approach to make an existing artificial intelligence (AI) model unlearn a set of data from its learning. Machine unlearning is becoming important for businesses to comply with privacy laws such as the General Data Protection Regulation (GDPR) and its customers’ right to be forgotten, to manage security, and to remove bias that AI models learn from their training data, as it is expensive to retrain …

41

Chen, Aobo, Yangyi Li, Chenxu Zhao, and Mengdi Huai. "A survey of security and privacy issues of machine unlearning." AI Magazine 46, no. 1 (2025). https://doi.org/10.1002/aaai.12209.

Abstract:
Machine unlearning is a cutting-edge technology that embodies the privacy legal principle of the right to be forgotten within the realm of machine learning (ML). It aims to remove specific data or knowledge from trained models without retraining from scratch and has gained significant attention in the field of artificial intelligence in recent years. However, the development of machine unlearning research is associated with inherent vulnerabilities and threats, posing significant challenges for researchers and practitioners. In this article, we provide the first comprehensive survey of …

42

Xu, Heng, Tianqing Zhu, Lefeng Zhang, Wanlei Zhou, and Philip S. Yu. "Machine Unlearning: A Survey." ACM Computing Surveys, June 7, 2023. http://dx.doi.org/10.1145/3603620.

Abstract:
Machine learning has attracted widespread attention and evolved into an enabling technology for a wide range of highly successful applications, such as intelligent computer vision, speech recognition, medical diagnosis, and more. Yet a special need has arisen where, due to privacy, usability, and/or the right to be forgotten, information about some specific samples needs to be removed from a model, called machine unlearning. This emerging technology has drawn significant interest from both academics and industry due to its innovation and practicality. At the same time, this ambitious problem …

43

Chundawat, Vikram S., Ayush K. Tarun, Murari Mandal, and Mohan Kankanhalli. "Zero-Shot Machine Unlearning." IEEE Transactions on Information Forensics and Security, 2023, 1. http://dx.doi.org/10.1109/tifs.2023.3265506.

44

Ye, Guanhua, Tong Chen, Quoc Viet Hung Nguyen, and Hongzhi Yin. "Heterogeneous decentralised machine unlearning with seed model distillation." CAAI Transactions on Intelligence Technology, January 17, 2024. http://dx.doi.org/10.1049/cit2.12281.

Abstract:
As some recent information security legislation endowed users with unconditional rights to be forgotten by any trained machine learning model, personalised IoT service providers have to put unlearning functionality into their consideration. The most straightforward method to unlearn users' contribution is to retrain the model from the initial state, which is not realistic in high-throughput applications with frequent unlearning requests. Though some machine unlearning frameworks have been proposed to speed up the retraining process, they fail to match decentralised learning scenarios.

45

Wang, Chaoyi, Zuobin Ying, and Zijie Pan. "Machine unlearning in brain-inspired neural network paradigms." Frontiers in Neurorobotics 18 (May 21, 2024). http://dx.doi.org/10.3389/fnbot.2024.1361577.

Abstract:
Machine unlearning, which is crucial for data privacy and regulatory compliance, involves the selective removal of specific information from a machine learning model. This study focuses on implementing machine unlearning in Spiking Neuron Models (SNMs) that closely mimic biological neural network behaviors, aiming to enhance both flexibility and ethical compliance of AI models. We introduce a novel hybrid approach for machine unlearning in SNMs, which combines selective synaptic retraining, synaptic pruning, and adaptive neuron thresholding. This methodology is designed to effectively eliminate …

46

Zhang, Lefeng, Tianqing Zhu, Ping Xiong, and Wanlei Zhou. "The Price of Unlearning: Identifying Unlearning Risk in Edge Computing." ACM Transactions on Multimedia Computing, Communications, and Applications, May 6, 2024. http://dx.doi.org/10.1145/3662184.

Abstract:
Machine unlearning is an emerging paradigm that aims to make machine learning models “forget” what they have learned about particular data. It fulfills the requirements of privacy legislation (e.g., GDPR), which stipulates that individuals have the autonomy to determine the usage of their personal data. However, alongside all the achievements, there are still loopholes in machine unlearning that may cause significant losses for the system, especially in edge computing. Edge computing is a distributed computing paradigm with the purpose of migrating data processing tasks closer to terminal devices …

47

Li, Chunxiao, Haipeng Jiang, Jiankang Chen, et al. "An overview of machine unlearning." High-Confidence Computing, July 2024, 100254. http://dx.doi.org/10.1016/j.hcc.2024.100254.

48

Tarun, Ayush K., Vikram S. Chundawat, Murari Mandal, and Mohan Kankanhalli. "Fast Yet Effective Machine Unlearning." IEEE Transactions on Neural Networks and Learning Systems, 2023, 1–10. http://dx.doi.org/10.1109/tnnls.2023.3266233.

49

Zhang, Haibo, Toru Nakamura, Takamasa Isohara, and Kouichi Sakurai. "A Review on Machine Unlearning." SN Computer Science 4, no. 4 (2023). http://dx.doi.org/10.1007/s42979-023-01767-4.

50

Shao, Chenghao, Chang Li, Rencheng Song, Xiang Liu, Ruobing Qian, and Xun Chen. "Machine Unlearning for Seizure Prediction." IEEE Transactions on Cognitive and Developmental Systems, 2024, 1–13. http://dx.doi.org/10.1109/tcds.2024.3395663.
