Academic literature on the topic 'Machine Unlearning'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Machine Unlearning.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Machine Unlearning"

1

Agarwal, Shubham. "Machine unlearning." New Scientist 260, no. 3463 (2023): 40–43. http://dx.doi.org/10.1016/s0262-4079(23)02059-6.

2

Aldaghri, Nasser, Hessam Mahdavifar, and Ahmad Beirami. "Coded Machine Unlearning." IEEE Access 9 (2021): 88137–50. http://dx.doi.org/10.1109/access.2021.3090019.

3

S S, Mr Veerasagar. "Vershachi Unlearning: A Framework for Machine Unlearning." International Journal for Research in Applied Science and Engineering Technology 13, no. 3 (2025): 426–34. https://doi.org/10.22214/ijraset.2025.67269.

Abstract:
In the contemporary landscape of the digital world where industry relies on the technology of artificial intelligence which fundamentally depends on the concepts of machine learning. Machine learning is a field where it utilizes the immense amount of data and then feeds this data into a structure called models. This data “trains” this model. Abundant data is used to train these models, for this data to be as accurate as it can be optimally. However, reliance on this abundant data exposes us to a significant risk to user privacy which is a matter of concern. It directly challenges the existence
4

Liu, Zihao, Tianhao Wang, Mengdi Huai, and Chenglin Miao. "Backdoor Attacks via Machine Unlearning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 13 (2024): 14115–23. http://dx.doi.org/10.1609/aaai.v38i13.29321.

Abstract:
As a new paradigm to erase data from a model and protect user privacy, machine unlearning has drawn significant attention. However, existing studies on machine unlearning mainly focus on its effectiveness and efficiency, neglecting the security challenges introduced by this technique. In this paper, we aim to bridge this gap and study the possibility of conducting malicious attacks leveraging machine unlearning. Specifically, we consider the backdoor attack via machine unlearning, where an attacker seeks to inject a backdoor in the unlearned model by submitting malicious unlearning requests, s
5

Bartra, Mary J. "When Federated Learning Meets Machine Unlearning." Journal of Industrial Engineering and Applied Science 2, no. 5 (2024): 39–47. https://doi.org/10.5281/zenodo.13854241.

Abstract:
This research paper explores the intersection of Incremental Learning and Unlearning in the context of machine learning, with a particular emphasis on dynamic environments where data evolves rapidly, and models must continuously adapt. Incremental learning allows machine learning models to efficiently update their knowledge without retraining from scratch, while incremental unlearning ensures compliance with privacy regulations or user requests by selectively removing specific data points from the model. The paper discusses several key techniques for balancing learning and unlearning, includin
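
To make the learning/unlearning trade-off concrete, the following minimal Python sketch shows a shard-based "exact" unlearning baseline in the spirit of SISA-style training; it is an illustration of the general idea, not the method of the cited paper. The class name, shard count, and use of scikit-learn's LogisticRegression are assumptions for the example.

```python
# Illustrative sketch of shard-based exact unlearning (SISA-style), not the
# cited paper's method. Assumes integer class labels and in-memory numpy arrays.
import numpy as np
from sklearn.linear_model import LogisticRegression

class ShardedUnlearner:
    def __init__(self, n_shards=4, seed=0):
        self.n_shards = n_shards
        self.rng = np.random.default_rng(seed)
        self.shards = []          # list of (training indices, fitted model)
        self.X = self.y = None

    def fit(self, X, y):
        self.X, self.y = X, y
        order = self.rng.permutation(len(X))
        self.shards = []
        for part in np.array_split(order, self.n_shards):
            model = LogisticRegression(max_iter=1000).fit(X[part], y[part])
            self.shards.append((part, model))
        return self

    def unlearn(self, forget_idx):
        # Retrain only the shards whose slice contains a forgotten record.
        forget = set(np.atleast_1d(forget_idx).tolist())
        for i, (part, _) in enumerate(self.shards):
            keep = np.array([j for j in part if j not in forget], dtype=int)
            if len(keep) < len(part):
                model = LogisticRegression(max_iter=1000).fit(self.X[keep], self.y[keep])
                self.shards[i] = (keep, model)

    def predict(self, X):
        # Majority vote over shard models (labels assumed to be 0..K-1).
        votes = np.stack([m.predict(X) for _, m in self.shards])
        return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```

In this setup, honouring a deletion request retrains only the shard that saw the record, so the cost scales with the shard size rather than with the full training set.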
6

Kurmanji, Meghdad, Eleni Triantafillou, and Peter Triantafillou. "Machine Unlearning in Learned Databases: An Experimental Analysis." Proceedings of the ACM on Management of Data 2, no. 1 (2024): 1–26. http://dx.doi.org/10.1145/3639304.

Abstract:
Machine learning models based on neural networks (NNs) are enjoying ever-increasing attention in the Database (DB) community, both in research and practice. However, an important issue has been largely overlooked, namely the challenge of dealing with the inherent, highly dynamic nature of DBs, where data updates are fundamental, highly-frequent operations (unlike, for instance, in ML classification tasks). Although some recent research has addressed the issues of maintaining updated NN models in the presence of new data insertions, the effects of data deletions (a.k.a., "machine unlearning") r
7

Kim, Hyunjune, Sangyong Lee, and Simon S. Woo. "Layer Attack Unlearning: Fast and Accurate Machine Unlearning via Layer Level Attack and Knowledge Distillation." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (2024): 21241–48. http://dx.doi.org/10.1609/aaai.v38i19.30118.

Abstract:
Recently, serious concerns have been raised about the privacy issues related to training datasets in machine learning algorithms when including personal data. Various regulations in different countries, including the GDPR grant individuals to have personal data erased, known as ‘the right to be forgotten’ or ‘the right to erasure’. However, there has been less research on effectively and practically deleting the requested personal data from the training set while not jeopardizing the overall machine learning performance. In this work, we propose a fast and novel machine unlearning paradigm at
8

Wang, Lingzhi, Xingshan Zeng, Jinsong Guo, Kam-Fai Wong, and Georg Gottlob. "Selective Forgetting: Advancing Machine Unlearning Techniques and Evaluation in Language Models." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 1 (2025): 843–51. https://doi.org/10.1609/aaai.v39i1.32068.

Abstract:
This paper explores Machine Unlearning (MU), an emerging field that is gaining increased attention due to concerns about neural models unintentionally remembering personal or sensitive information. We present SeUL, a novel method that enables selective and fine-grained unlearning for language models. Unlike previous work that employs a fully reversed training objective in unlearning, SeUL minimizes the negative impact on the capability of language models, particularly in terms of generation. Furthermore, we introduce two innovative evaluation metrics, sensitive extraction likelihood (S-EL) and
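
For context, the "fully reversed training objective" that this abstract contrasts with can be sketched as gradient ascent on the language-modeling loss, restricted here to selected token spans. This is an illustrative approximation, not the authors' SeUL implementation; `model`, `span_mask`, and the optimizer setup are assumptions.

```python
# Illustrative sketch of a span-selective "reversed" (gradient-ascent) objective
# for a causal language model; not the authors' SeUL implementation. `model` is
# assumed to map token ids to next-token logits of shape (batch, seq, vocab),
# and `span_mask` marks the sensitive tokens to be forgotten.
import torch
import torch.nn.functional as F

def selective_forget_step(model, optimizer, input_ids, span_mask):
    logits = model(input_ids)
    shift_logits = logits[:, :-1, :]
    shift_labels = input_ids[:, 1:]
    shift_mask = span_mask[:, 1:].float()
    # Token-level cross-entropy, kept only at the sensitive positions.
    ce = F.cross_entropy(
        shift_logits.reshape(-1, shift_logits.size(-1)),
        shift_labels.reshape(-1),
        reduction="none",
    )
    ce = (ce * shift_mask.reshape(-1)).sum() / shift_mask.sum().clamp(min=1.0)
    loss = -ce  # ascend the loss on selected spans, leaving other tokens untouched
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return ce.item()
```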
9

Chundawat, Vikram S., Ayush K. Tarun, Murari Mandal, and Mohan Kankanhalli. "Can Bad Teaching Induce Forgetting? Unlearning in Deep Networks Using an Incompetent Teacher." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (2023): 7210–17. http://dx.doi.org/10.1609/aaai.v37i6.25879.

Abstract:
Machine unlearning has become an important area of research due to an increasing need for machine learning (ML) applications to comply with the emerging data privacy regulations. It facilitates the provision for removal of certain set or class of data from an already trained ML model without requiring retraining from scratch. Recently, several efforts have been put in to make unlearning to be effective and efficient. We propose a novel machine unlearning method by exploring the utility of competent and incompetent teachers in a student-teacher framework to induce forgetfulness. The knowledge f
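
A rough idea of the student-teacher setup described in this abstract can be sketched in PyTorch: the student is distilled toward the original ("competent") teacher on retained data and toward a randomly initialized ("incompetent") teacher on the forget set. The temperature, loss weighting, and batch handling below are assumptions; this is not the authors' released code.

```python
# Illustrative PyTorch sketch of the competent/incompetent teacher idea, not the
# authors' released code. `competent_t` is the original model, `incompetent_t`
# is randomly initialized; batches, temperature, and weighting are assumptions.
import torch
import torch.nn.functional as F

def unlearning_step(student, competent_t, incompetent_t,
                    retain_batch, forget_batch, optimizer, temperature=2.0):
    xr, _ = retain_batch
    xf, _ = forget_batch
    with torch.no_grad():
        keep_targets = F.softmax(competent_t(xr) / temperature, dim=1)
        forget_targets = F.softmax(incompetent_t(xf) / temperature, dim=1)
    # Match the competent teacher on retained data ...
    loss_retain = F.kl_div(F.log_softmax(student(xr) / temperature, dim=1),
                           keep_targets, reduction="batchmean")
    # ... and the incompetent teacher on the data to be forgotten.
    loss_forget = F.kl_div(F.log_softmax(student(xf) / temperature, dim=1),
                           forget_targets, reduction="batchmean")
    loss = loss_retain + loss_forget
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```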
10

Sakib, Shahnewaz Karim, and Mengjun Xie. "Machine Unlearning in Digital Healthcare: Addressing Technical and Ethical Challenges." Proceedings of the AAAI Symposium Series 4, no. 1 (2024): 319–22. http://dx.doi.org/10.1609/aaaiss.v4i1.31809.

Abstract:
The "Right to be Forgotten," as outlined in regulatory frameworks such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), allows individuals to request the deletion of their personal data from deployed machine learning models. This provision ensures that individuals can maintain control over their personal information. In the digital health era, this right has become a critical concern for both patients and healthcare providers. To facilitate the effective removal of personal data from machine learning models, the concept of "machine unlearning"

Books on the topic "Machine Unlearning"

1

Machine, Unlearning. Counterpath Press, 2018.


Book chapters on the topic "Machine Unlearning"

1

Zhu, HongBin, YuXiao Xia, YunZhao Li, Wei Li, Kang Liu, and Xianzhou Gao. "Hierarchical Machine Unlearning." In Lecture Notes in Computer Science. Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-44505-7_7.

2

Cheng, Jiali, and Hadi Amiri. "MultiDelete for Multimodal Machine Unlearning." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-72940-9_10.

3

Doughan, Ziad, and Sari Itani. "Machine Unlearning, A Comparative Analysis." In Engineering Applications of Neural Networks. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-62495-7_42.

4

Kang, Lei, Mohamed Ali Souibgui, Fei Yang, Lluis Gomez, Ernest Valveny, and Dimosthenis Karatzas. "Machine Unlearning for Document Classification." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-70546-5_6.

5

Ficarra, Francisco V. Cipolla. "Machine Learning and Human Unlearning." In Lecture Notes in Networks and Systems. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-67426-6_4.

6

Chen, Pin-Yu, and Sijia Liu. "Machine Unlearning for Foundation Models." In Introduction to Foundation Models. Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-76770-8_11.

7

Wang, Baohai, Youyang Qu, Longxiang Gao, Conggai Li, Lin Li, and David Smith. "Mitigating Over-Unlearning in Machine Unlearning with Synthetic Data Augmentation." In Lecture Notes in Computer Science. Springer Nature Singapore, 2025. https://doi.org/10.1007/978-981-96-1545-2_18.

8

Vidal, Àlex Pujol, Anders S. Johansen, Mohammad N. S. Jahromi, Sergio Escalera, Kamal Nasrollahi, and Thomas B. Moeslund. "Verifying Machine Unlearning with Explainable AI." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-88223-4_32.

9

Wang, Zixin, Bing Mi, and Kongyang Chen. "EncoderMU: Machine Unlearning in Contrastive Learning." In Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering. Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-73699-5_15.

10

Deepanjali, S., S. Dhivya, and S. Monica Catherine. "Efficient Machine Unlearning Using General Adversarial Network." In Artificial Intelligence Techniques for Advanced Computing Applications. Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-5329-5_45.


Conference papers on the topic "Machine Unlearning"

1

K, Devan K. P., Srikar Saketram P, Shyam K. S, and Srimathi R. "Machine Unlearning In Recommendation Systems." In 2025 2nd International Conference on Trends in Engineering Systems and Technologies (ICTEST). IEEE, 2025. https://doi.org/10.1109/ictest64710.2025.11042432.

2

Hu, Hongsheng, Shuo Wang, Tian Dong, and Minhui Xue. "Learn What You Want to Unlearn: Unlearning Inversion Attacks against Machine Unlearning." In 2024 IEEE Symposium on Security and Privacy (SP). IEEE, 2024. http://dx.doi.org/10.1109/sp54263.2024.00248.

3

Wichert, Leon, and Sandipan Sikdar. "Rethinking Evaluation Methods for Machine Unlearning." In Findings of the Association for Computational Linguistics: EMNLP 2024. Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.findings-emnlp.271.

4

Guven, Eray, and Gunes Karabulut Kurt. "Machine Unlearning for Uplink Interference Cancellation." In GLOBECOM 2024 - 2024 IEEE Global Communications Conference. IEEE, 2024. https://doi.org/10.1109/globecom52923.2024.10901616.

5

Seo, Seonguk, Dongwan Kim, and Bohyung Han. "Revisiting Machine Unlearning with Dimensional Alignment." In 2025 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV). IEEE, 2025. https://doi.org/10.1109/wacv61041.2025.00317.

6

Eisenhofer, Thorsten, Doreen Riepel, Varun Chandrasekaran, Esha Ghosh, Olga Ohrimenko, and Nicolas Papernot. "Verifiable and Provably Secure Machine Unlearning." In 2025 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML). IEEE, 2025. https://doi.org/10.1109/satml64287.2025.00033.

7

Mansi, Srn Reddy, and Rishika Anand. "Comparison of Model Adaptation Techniques with Machine Unlearning." In 2024 15th International Conference on Computing Communication and Networking Technologies (ICCCNT). IEEE, 2024. http://dx.doi.org/10.1109/icccnt61001.2024.10723921.

8

Yao, Jin, Eli Chien, Minxin Du, et al. "Machine Unlearning of Pre-trained Large Language Models." In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.acl-long.457.

9

Liu, Zheyuan, Guangyao Dou, Zhaoxuan Tan, Yijun Tian, and Meng Jiang. "Towards Safer Large Language Models through Machine Unlearning." In Findings of the Association for Computational Linguistics ACL 2024. Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.findings-acl.107.

10

Xiong, Zuobin, Wei Li, and Zhipeng Cai. "Appro-Fun: Approximate Machine Unlearning in Federated Setting." In 2024 33rd International Conference on Computer Communications and Networks (ICCCN). IEEE, 2024. http://dx.doi.org/10.1109/icccn61486.2024.10637564.
