
Journal articles on the topic 'Adversarial Deepfake'



Consult the top 50 journal articles for your research on the topic 'Adversarial Deepfake.'


You can also download the full text of each academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Lad, Sumit. "Adversarial Approaches to Deepfake Detection: A Theoretical Framework for Robust Defense." Journal of Artificial Intelligence General Science (JAIGS) 6, no. 1 (2024): 46–58. http://dx.doi.org/10.60087/jaigs.v6i1.225.

Abstract:
The rapid improvements in the capabilities of neural networks and generative adversarial networks (GANs) have given rise to extremely sophisticated deepfake technologies, making it very difficult to reliably recognize fake digital content. They have enabled the creation of highly convincing synthetic media that can be used in malicious ways in this era of user-generated information and social media. Existing deepfake detection techniques are effective against early iterations of deepfakes but become increasingly vulnerable to more sophisticated deepfakes and adversarial attacks. In this paper we
2

Abbasi, Maryam, Paulo Váz, José Silva, and Pedro Martins. "Comprehensive Evaluation of Deepfake Detection Models: Accuracy, Generalization, and Resilience to Adversarial Attacks." Applied Sciences 15, no. 3 (2025): 1225. https://doi.org/10.3390/app15031225.

Abstract:
The rise of deepfakes—synthetic media generated using artificial intelligence—threatens digital content authenticity, facilitating misinformation and manipulation. However, deepfakes can also depict real or entirely fictitious individuals, leveraging state-of-the-art techniques such as generative adversarial networks (GANs) and emerging diffusion-based models. Existing detection methods face challenges with generalization across datasets and vulnerability to adversarial attacks. This study focuses on subsets of frames extracted from the DeepFake Detection Challenge (DFDC) and FaceForensics++ v
3

Garcia, Jan Mark. "Exploring Deepfakes and Effective Prevention Strategies: A Critical Review." Psychology and Education: A Multidisciplinary Journal 33, no. 1 (2025): 93–96. https://doi.org/10.70838/pemj.330107.

Abstract:
Deepfake technology, powered by artificial intelligence and deep learning, has rapidly advanced, enabling the creation of highly realistic synthetic media. While it presents opportunities in entertainment and creative applications, deepfakes pose significant risks, including misinformation, identity fraud, and threats to privacy and national security. This study explores the evolution of deepfake technology, its implications, and current detection techniques. Existing methods for deepfake detection, such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), and generative
4

Zhuang, Zhong, Yoichi Tomioka, Jungpil Shin, and Yuichi Okuyama. "PGD-Trap: Proactive Deepfake Defense with Sticky Adversarial Signals and Iterative Latent Variable Refinement." Electronics 13, no. 17 (2024): 3353. http://dx.doi.org/10.3390/electronics13173353.

Abstract:
With the development of artificial intelligence (AI), deepfakes, in which the face of one person is changed to another expression of the same person or a different person, have advanced. There is a need for countermeasures against crimes that exploit deepfakes. Methods to interfere with deepfake generation by adding an invisible weak adversarial signal to an image have been proposed. However, there is a problem: the weak signal can be easily removed by processing the image. In this paper, we propose trap signals that appear in response to a process that weakens adversarial signals. We also pro
5

Huang, Hao, Yongtao Wang, Zhaoyu Chen, et al. "CMUA-Watermark: A Cross-Model Universal Adversarial Watermark for Combating Deepfakes." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 1 (2022): 989–97. http://dx.doi.org/10.1609/aaai.v36i1.19982.

Abstract:
Malicious applications of deepfakes (i.e., technologies generating target facial attributes or entire faces from facial images) have posed a huge threat to individuals' reputation and security. To mitigate these threats, recent studies have proposed adversarial watermarks to combat deepfake models, leading them to generate distorted outputs. Despite achieving impressive results, these adversarial watermarks have low image-level and model-level transferability, meaning that they can protect only one facial image from one specific deepfake model. To address these issues, we propose a novel solut
6

Chen, Junyi, Minghao Yang, and Kaishen Yuan. "A Review of Deepfake Detection Techniques." Applied and Computational Engineering 117, no. 1 (2025): 165–74. https://doi.org/10.54254/2755-2721/2025.20955.

Abstract:
With the development of deepfake technology, the use of this technology to forge videos and images has caused serious privacy and legal problems in society. To solve these problems, deepfake detection is required. In this paper, the generation and detection techniques of deepfakes in recent years are studied. First, the principles of deepfake generation technology are briefly introduced, including approaches based on Generative Adversarial Networks (GANs) and autoencoders. Then, this paper focuses on the detection techniques of deepfakes, classifies them based on the principles of each method, and su
7

Ghariwala, Love. "Impact of Deepfake Technology on Social Media: Detection, Misinformation and Societal Implications." International Journal for Research in Applied Science and Engineering Technology 13, no. 3 (2025): 2982–86. https://doi.org/10.22214/ijraset.2025.67997.

Abstract:
The rise of Artificial Intelligence (AI) has opened up new possibilities, but it also brings significant challenges. Deepfake technology, which creates realistic fake videos, raises concerns about privacy, identity, and consent. This paper explores the impacts of deepfakes and suggests solutions to mitigate their negative effects. Deepfake technology, which allows the manipulation and fabrication of audio, video, and images, has gained significant attention due to its potential to deceive and manipulate. As deepfakes proliferate on social media platforms, understanding their impact becomes cru
8

Noreen, Iram, Muhammad Shahid Muneer, and Saira Gillani. "Deepfake attack prevention using steganography GANs." PeerJ Computer Science 8 (October 20, 2022): e1125. http://dx.doi.org/10.7717/peerj-cs.1125.

Abstract:
Background Deepfakes are fake images or videos generated by deep learning algorithms. Ongoing progress in deep learning techniques like auto-encoders and generative adversarial networks (GANs) is approaching a level that makes deepfake detection ideally impossible. A deepfake is created by swapping videos, images, or audio with the target, consequently raising digital media threats over the internet. Much work has been done to detect deepfake videos through feature detection using a convolutional neural network (CNN), recurrent neural network (RNN), and spatiotemporal CNN. However, these techn
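As a companion to the prevention idea above, here is a minimal, hypothetical LSB-steganography sketch in NumPy. It is not the paper's steganography-GAN scheme; it only illustrates hiding and recovering a verification bitstring in pixel least-significant bits, and all names and sizes are illustrative.

```python
import numpy as np

# Toy LSB steganography (hypothetical; the paper's GAN-based scheme is not
# reproduced). Hide a verification bitstring in pixel least-significant bits.
def embed_bits(pixels, bits):
    """Write each bit into the least-significant bit of one pixel."""
    out = pixels.copy().ravel()
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit   # clear the LSB, then set it to `bit`
    return out.reshape(pixels.shape)

def extract_bits(pixels, n):
    """Read back the first n hidden bits."""
    return [int(p & 1) for p in pixels.ravel()[:n]]

img = np.full((4, 4), 128, dtype=np.uint8)   # stand-in grayscale image
mark = [1, 0, 1, 1, 0, 0, 1, 0]              # verification bitstring
stego = embed_bits(img, mark)
print(extract_bits(stego, len(mark)))        # → [1, 0, 1, 1, 0, 0, 1, 0]
```

A missing or corrupted bitstring after a face swap would then signal that the media was regenerated rather than passed through intact; each pixel changes by at most one intensity level, so the mark is visually imperceptible.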
9

Shukla, Dheeraj. "Deep Fake Face Detection Using Deep Learning." International Journal of Scientific Research in Engineering and Management 09, no. 06 (2025): 1–9. https://doi.org/10.55041/ijsrem50976.

Abstract:
Keywords: artificial intelligence, deepfake technology, Generative Adversarial Networks (GANs), detection system, detection accuracy, user accessibility, digital content verification. In recent years, the rise of deepfake technology has raised significant concerns regarding the authenticity of digital content. Deepfakes, which are synthetic media created using advanced artificial intelligence techniques, can mislead viewers and pose risks to personal privacy, public trust, and social discourse. The proposed system focuses on developing a Generative Adversarial Network (GAN)-based deepfake detec
10

Omkar, Ajit Awadhut, and Priya Dhadawe. "Deep fakes and Mitigation Strategies." International Journal of Advance and Applied Research S6, no. 23 (2025): 121–28. https://doi.org/10.5281/zenodo.15194884.

Abstract:
Deepfake technology, powered by artificial intelligence (AI) and deep learning, has transformed digital media by enabling the creation of highly realistic synthetic content. While deepfakes have legitimate applications in entertainment, education, and accessibility, they also pose significant risks, including misinformation, identity theft, and political manipulation. The rapid advancement of deepfake generation techniques, particularly through Generative Adversarial Networks (GANs) and Autoencoders, has made it increasingly difficult to distinguish between real and fake content.
11

Ujalambkar, Deepali. "AI Powered Solutions for Deepfake Identification." Advances in Nonlinear Variational Inequalities 28, no. 4s (2025): 460–71. https://doi.org/10.52783/anvi.v28.3503.

Abstract:
The emergence of deepfake technology, primarily driven by generative adversarial networks (GANs), has introduced new challenges in the realms of media authenticity, security, and public trust. Deepfakes, which involve the manipulation of images, videos, and audio to create highly realistic but fabricated media, can be used in both benign applications and malicious scenarios, such as misinformation, identity theft, and privacy invasion. As deepfakes become more sophisticated, AI-based solutions have emerged as the primary means of identifying these manipulations. This paper explores various AI-
12

Wildan, Jameel Hadi, Malallah Kadhem Suhad, and Rodhan Abbas Ayad. "A survey of deepfakes in terms of deep learning and multimedia forensics." International Journal of Electrical and Computer Engineering (IJECE) 12, no. 4 (2022): 4408–14. https://doi.org/10.11591/ijece.v12i4.pp4408-4414.

Abstract:
Artificial intelligence techniques are reaching us in several forms, some of which are useful but can be exploited in ways that harm us. One of these forms is called deepfakes. Deepfakes are used to completely modify video (or image) content to display something that was not in it originally. The danger of deepfake technology is its impact on society through the loss of confidence in everything that is published. Therefore, in this paper, we focus on deepfake detection technology from the view of two concepts, which are deep learning and forensic tools. The purpose of this survey is to give the reader a d
13

Saranya, S. "Deepfake Detection using Deep Learning." International Journal of Scientific Research in Engineering and Management 09, no. 04 (2025): 1–9. https://doi.org/10.55041/ijsrem46605.

Abstract:
Deepfake technology, driven by generative adversarial networks (GANs), poses significant challenges in digital security, misinformation, and privacy. Detecting deepfakes in images and videos requires advanced deep learning models. This study explores deepfake detection using convolutional neural networks (CNNs), recurrent neural networks (RNNs), and transformer-based architectures like Vision Transformers (ViTs). We employ a Meso4_DF deepfake detection pipeline that uses TensorFlow/Keras, PyTorch, and OpenCV for processing, with Dlib, Scikit-Image, and NumPy for feature extraction, leveragi
14

George, A. Shaji, and A. S. Hovan George. "Deepfakes: The Evolution of Hyper realistic Media Manipulation." Partners Universal Innovative Research Publication (PUIRP) 01, no. 02 (2023): 58–74. https://doi.org/10.5281/zenodo.10148558.

Abstract:
Deepfakes, synthetic media created using artificial intelligence and machine learning techniques, allow for the creation of highly realistic fake videos and audio recordings. As deepfake technology has rapidly advanced in recent years, the potential for its misuse in disinformation campaigns, fraud, and other forms of deception has grown exponentially. This paper explores the current state and trajectory of deepfake technology, emerging safeguards designed to detect deepfakes, and the critical role of education and skepticism in inoculating society against their harms. The paper begins by prov
15

Vaishnavi, K. D. V. N., L. Hima Bindu, M. Sathvika, K. Udaya Lakshmi, M. Harini, and N. Ashok. "Deep learning approaches for robust deep fake detection." World Journal of Advanced Research and Reviews 21, no. 3 (2024): 2283–89. https://doi.org/10.5281/zenodo.14176242.

Abstract:
Detecting deepfake images using a deep learning approach, particularly the Densenet121 model, involves training a neural network to differentiate between authentic and manipulated images. Deepfakes have gained prominence due to advances in deep learning, especially generative adversarial networks (GANs). They pose significant challenges to the veracity of digital content, as they can be used to create realistic and deceptive media. Deepfakes are realistic-looking fake media generated by many artificial intelligence tools like face2face and deepfake, which pose a severe threat to the public. As m
16

Shilpa, K. C., B. P. Poornima, R. Pai Rajath, Harmain Khan Rakeen, N. S. Shreya, and A. Suchith. "A Comprehensive Review on Deep Fake Detection in Videos." Journal of Advancement in Architectures for Computer Vision 1, no. 1 (2025): 33–44. https://doi.org/10.5281/zenodo.15118896.

Abstract:
The last few decades have seen a significant rise in Artificial Intelligence (AI) and Machine Learning (ML), promoting the development of deepfake technology. Deepfakes are synthetic media created using AI techniques, altering audio, images, and videos to appear authentic but are fabricated. Employing concepts like Generative Adversarial Networks (GANs), deepfake creation involves a competitive process where one model produces forgeries while another aims to identify them. The consequences of deepfakes are extensive, ranging from misinformation campaigns by terrorist organizations to indiv
17

Yang, Yang, Norisma Binti Idris, Chang Liu, Hui Wu, and Dingguo Yu. "A destructive active defense algorithm for deepfake face images." PeerJ Computer Science 10 (October 4, 2024): e2356. http://dx.doi.org/10.7717/peerj-cs.2356.

Abstract:
The harm caused by deepfake face images is increasing. To proactively defend against this threat, this paper innovatively proposes a destructive active defense algorithm for deepfake face images (DADFI). This algorithm adds slight perturbations to the original face images to generate adversarial samples. These perturbations are imperceptible to the human eye but cause significant distortions in the outputs of mainstream deepfake models. Firstly, the algorithm generates adversarial samples that maintain high visual fidelity and authenticity. Secondly, in a black-box scenario, the adversarial sa
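The protective-perturbation idea above can be sketched with a toy PGD-style loop. This is a hypothetical stand-in, not the DADFI algorithm: it maximizes the output distortion of a linear "generator" under an L-infinity budget, and every shape and constant is illustrative.

```python
import numpy as np

# Toy PGD-style protective perturbation (hypothetical; not the DADFI
# algorithm): maximize the output distortion of a linear stand-in
# "deepfake generator" G(x) = A @ x under an L-infinity budget eps.
rng = np.random.default_rng(1)
A = rng.normal(size=(8, 8))        # stand-in generator weights
x = rng.uniform(size=8)            # stand-in face-image features

def distortion(delta):
    """Squared change in the generator output caused by delta."""
    return float(np.sum((A @ (x + delta) - A @ x) ** 2))

eps, alpha, steps = 0.03, 0.01, 20
delta = 0.1 * rng.uniform(-eps, eps, size=8)   # small nonzero start
d0 = distortion(delta)
for _ in range(steps):
    grad = 2.0 * A.T @ (A @ delta)             # d distortion / d delta
    delta = np.clip(delta + alpha * np.sign(grad), -eps, eps)

print(distortion(delta) > d0)   # the perturbation grows more disruptive
```

In a real proactive defense the gradient would come from backpropagating through the target deepfake model, and the budget keeps the protected image visually indistinguishable from the original.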
18

Diljith, M. S., C. P. Emilyn, Afitha Abu T. Fathimathul, and K. S. Salkala. "Deepfake Technology: An Overview, Applications, Detection, and Future Challenges." Journal of Advancement in Architectures for Computer Vision 1, no. 1 (2025): 45–53. https://doi.org/10.5281/zenodo.15152176.

Abstract:
Deepfake technology, powered by artificial intelligence, has revolutionized digital media by enabling the creation of highly realistic synthetic videos, images, and audio. While it offers numerous benefits in fields such as entertainment, education, and accessibility, deepfake technology also raises significant ethical, legal, and security concerns. This report explores the methods used to generate deepfakes, including Generative Adversarial Networks (GANs) and autoencoders, and highlights key deepfake techniques such as face-swapping, lip-syncing, and voice cloning. It further examines bo
19

Fan, Li, Wei Li, and Xiaohui Cui. "Deepfake-Image Anti-Forensics with Adversarial Examples Attacks." Future Internet 13, no. 11 (2021): 288. http://dx.doi.org/10.3390/fi13110288.

Abstract:
Many deepfake-image forensic detectors have been proposed and improved due to the development of synthetic techniques. However, recent studies show that most of these detectors are not immune to adversarial example attacks. Therefore, understanding the impact of adversarial examples on their performance is an important step towards improving deepfake-image detectors. This study developed an anti-forensics case study of two popular general deepfake detectors based on their accuracy and generalization. Herein, we propose the Poisson noise DeepFool (PNDF), an improved iterative adversarial exampl
20

Wu, Mengjie, Jingui Ma, Run Wang, et al. "TraceEvader: Making DeepFakes More Untraceable via Evading the Forgery Model Attribution." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 18 (2024): 19965–73. http://dx.doi.org/10.1609/aaai.v38i18.29973.

Abstract:
In recent years, DeepFakes have been posing severe threats and concerns to both individuals and celebrities, as realistic DeepFakes facilitate the spread of disinformation. Model attribution techniques aim at attributing the adopted forgery models of DeepFakes for provenance purposes and providing explainable results for DeepFake forensics. However, the existing model attribution techniques rely on the traces left in DeepFake creation, which can become futile if such traces are disrupted. Motivated by our observation that certain traces served for model attribution appear in both the high-fr
21

Degadwala, Sheshang, and Vishal Manishbhai Patel. "Advancements in Deepfake Detection: A Review of Emerging Techniques and Technologies." International Journal of Scientific Research in Computer Science, Engineering and Information Technology 10, no. 5 (2024): 127–39. http://dx.doi.org/10.32628/cseit24105811.

Abstract:
This review paper provides a comprehensive analysis of the current state of deepfake detection technologies, driven by the growing concerns over the misuse of synthetic media for malicious purposes, such as misinformation, identity theft, and privacy invasion. The motivation behind this work stems from the increasing sophistication of deepfake generation methods, making it challenging to differentiate between real and manipulated content. While numerous detection techniques have been proposed, they often face limitations in scalability, generalization across different types of deepfakes, and r
22

Adeosun, Omoshalewa Anike, Gbenga Akingbulere, Nonso Okika, Blessing Unwana Umoh, Adeyemi A. Adesola, and Haruna Ogweda. "Adversarial attacks on deepfake detection: Assessing vulnerability and robustness in video-based models." Global Journal of Engineering and Technology Advances 22, no. 2 (2025): 090–102. https://doi.org/10.30574/gjeta.2025.22.2.0029.

Abstract:
The increasing prevalence of deepfake media has led to significant advancements in detection models, but these models remain vulnerable to adversarial attacks that exploit weaknesses in deep learning architectures. This study investigates the vulnerability and robustness of video-based deepfake detection models, specifically comparing a Long Short-Term Convolutional Neural Network (LST-CNN) with adversarial perturbations using the Fast Gradient Sign Method (FGSM) attacks. We evaluate the performance of the models under both clean and adversarial conditions, highlighting the impact of adversari
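The FGSM attack named above can be illustrated with a toy example. The sketch below is hypothetical: it uses a linear logistic "detector" rather than the paper's LST-CNN, and takes a single signed-gradient step within an epsilon budget.

```python
import numpy as np

# Toy FGSM sketch against a linear logistic "detector" (hypothetical;
# the paper's LST-CNN and training details are not reproduced here).
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps):
    """One FGSM step: x_adv = clip(x + eps * sign(d loss / d x)).

    Loss is binary cross-entropy of p = sigmoid(w.x + b); its gradient
    with respect to the input x is (p - y_true) * w.
    """
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y_true) * w
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

rng = np.random.default_rng(0)
w = rng.normal(size=16)                  # stand-in detector weights
b = 0.0
x = rng.uniform(0.2, 0.8, size=16)       # stand-in normalized frame features
x_adv = fgsm_perturb(x, w, b, y_true=1.0, eps=0.05)
print(float(np.max(np.abs(x_adv - x)))) # perturbation stays within the budget
```

Adversarial training, as evaluated in studies like this one, feeds such perturbed samples back into the detector's training set to restore robustness.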
23

Reddi, Nallamilli V. K., Karangi Sindhu, Shaik Ahmed Aaquelah, Mohammad Azarunnisa, and Palleti Nikhil. "DEEP-TRUST: DEEPFAKE DETECTION VIA HYBRID CNN-ELA-GAN." International Journal of Engineering Applied Sciences and Technology 09, no. 12 (2025): 88–91. https://doi.org/10.33564/ijeast.2025.v09i12.011.

Abstract:
Deepfakes pose a growing threat to digital security and trust, demanding robust methods to detect AI-generated manipulations. Traditional approaches like XceptionNet and Error Level Analysis (ELA), while foundational, struggle with evolving generative architectures like diffusion models and fail to balance accuracy with interpretability. Static forensic methods also lack adaptability to dynamic adversarial attacks. This study introduces Deep-Trust, a hybrid framework integrating Convolutional Neural Networks (CNNs), Error Level Analysis (ELA), and Generative Adversarial Networks (GANs) to exp
24

Nisha, Rose, K. Lijina, Jumana Fathimathul, K. Sayana, and V. Gopichandana. "Multimedia Deepfake Detection." Recent Trends in Computer Graphics and Multimedia Technology 7, no. 3 (2025): 1–5. https://doi.org/10.5281/zenodo.15471531.

Abstract:
In tech-enabled communities, social media allows users to access multimedia content easily. With recent advancements in computer vision and natural language processing, machine learning (ML) and deep learning (DL) models have evolved. With advancements in generative adversarial networks (GAN), it has become possible to create synthetic media of a person or use some person's contents to fit other environments. Deepfakes are fake media generated using advanced tools and applications, which may mislead people and create an issue of trust within communities. Detecting fakes is crucial an
25

Bhonsle, Om D., Rohit G. Gupta, Vishal C. Gupta, and Poorva G. Waingankar. "Detecting Deepfake Media with AI and ML." International Research Journal on Advanced Engineering Hub (IRJAEH) 3, no. 02 (2025): 267–74. https://doi.org/10.47392/irjaeh.2025.0037.

Abstract:
Deepfake technology has rapidly advanced, enabling the creation of highly realistic yet manipulated digital media. These artificial videos and images pose significant risks to digital security, misinformation, and identity fraud. Traditional forensic techniques struggle to detect deepfakes effectively due to the increasing sophistication of Generative Adversarial Networks (GANs) and other deep learning-based synthesis methods. The need for a robust, scalable, and automated detection system has become crucial for ensuring media authenticity. This research presents DeepFake Bot, an AI-driven sys
26

Adepu, Pavan Kumar. "Adversarial Robustness in Generative AI: Defending Against Malicious Model Inversions and Deepfake Attacks." International Journal of Management Technology 13, no. 4 (2025): 27–43. https://doi.org/10.37745/ijmt.2013/vol13n42743.

Abstract:
Generative AI models are rapidly advancing creative content creation but remain vulnerable to adversarial attacks like model inversion and deepfakes. In this work, we delve into robust defence strategies with an actual dataset of the Deepfake Detection Challenge (DFDC) to simulate various attack scenarios. We employ the use of both anomaly detection and adversarial training mechanisms to harden the security of generative models. Experimental results reveal that these composite defence mechanisms significantly reduce the malicious attack success rate while the inventive capability of the models
27

Qiu, Haoxuan, Yanhui Du, and Tianliang Lu. "The Framework of Cross-Domain and Model Adversarial Attack against Deepfake." Future Internet 14, no. 2 (2022): 46. http://dx.doi.org/10.3390/fi14020046.

Abstract:
To protect images from the tampering of deepfake, adversarial examples can be made to replace the original images by distorting the output of the deepfake model and disrupting its work. Current studies lack generalizability in that they simply focus on the adversarial examples generated by a model in a domain. To improve the generalization of adversarial examples and produce better attack effects on each domain of multiple deepfake models, this paper proposes a framework of Cross-Domain and Model Adversarial Attack (CDMAA). Firstly, CDMAA uniformly weights the loss function of each domain and
28

Eidan, Shahad, and Shahad Eadan. "Unmasking Deepfakes: A Systematic Review of Generation Techniques and Detection Strategies." Iraqi Journal of Intelligent Computing and Informatics (IJICI) 4, no. 2 (2025): 134–54. https://doi.org/10.52940/ijici.v4i2.105.

Abstract:
Deepfake technology has progressed rapidly alongside the development of Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and multiple-encoder synthesis methods. These improvements have made the generation of hyperreal synthetic media possible, posing the challenges of misinformation, identity theft, and cyberthreats. To address these risks, research on deepfake detection has continued, employing CNNs, RNNs, transformers, and hybrid architectures to sense content that has been manipulated. This survey offers a detailed overview of the emerging techniques for generating de
29

Vaishnavi, K. D. V. N., L. Hima Bindu, M. Sathvika, K. Udaya Lakshmi, M. Harini, and N. Ashok. "Deep learning approaches for robust deep fake detection." World Journal of Advanced Research and Reviews 21, no. 3 (2023): 2283–89. http://dx.doi.org/10.30574/wjarr.2024.21.3.0889.

Abstract:
Detecting deepfake images using a deep learning approach, particularly the Densenet121 model, involves training a neural network to differentiate between authentic and manipulated images. Deepfakes have gained prominence due to advances in deep learning, especially generative adversarial networks (GANs). They pose significant challenges to the veracity of digital content, as they can be used to create realistic and deceptive media. Deepfakes are realistic-looking fake media generated by many artificial intelligence tools like face2face and deepfake, which pose a severe threat to the public. As m
30

Tripathi, Pankhuri, Shikha Singh, Anubhav Nishad, and Farheen Siddiqui. "Fraudshield – Deepfake Detection Tools." International Journal of Engineering and Management Research 15, no. 2 (2025): 47–51. https://doi.org/10.5281/zenodo.15314707.

Abstract:
FraudShield is a web application designed to detect and mitigate the impact of deepfakes, ensuring content authenticity and integrity. With the rise of image manipulation and deepfake videos, detecting fraudulent activities has become increasingly critical. This project introduces a hybrid detection system that integrates Convolutional Neural Networks (CNNs) to identify morphed images and manipulated content. The framework leverages machine learning techniques to detect tampered facial features, artifacts, and inconsistencies in deepfake videos and images. The CNN component analyzes visual fea
31

Emaley, Aman Kumar. "Discerning Deception: A Face-Centric Deepfake Detection Approach with ResNeXt-50 and LSTMs." International Journal for Research in Applied Science and Engineering Technology 12, no. 4 (2024): 5075–83. http://dx.doi.org/10.22214/ijraset.2024.61186.

Abstract:
AI has grown to epidemic proportions over the last few years, as it is applied in almost all sectors to take workload off humans, which ends up being done effectively with no human intervention. A branch of AI called deep learning operates by mimicking human judgment and action through neural network systems. Nonetheless, with the increased reach of these platforms, there have been sufficient cases of misguided individuals using tools to recycle videos, audios, and texts to achieve their agendas. This suggests a due assumption that Generative Adversarial Networks, GANs,
32

Khormali, Aminollah, and Jiann-Shiun Yuan. "ADD: Attention-Based DeepFake Detection Approach." Big Data and Cognitive Computing 5, no. 4 (2021): 49. http://dx.doi.org/10.3390/bdcc5040049.

Abstract:
Recent advancements of Generative Adversarial Networks (GANs) pose emerging yet serious privacy risks threatening digital media’s integrity and trustworthiness, specifically digital video, through synthesizing hyper-realistic images and videos, i.e., DeepFakes. The need for ascertaining the trustworthiness of digital media calls for automatic yet accurate DeepFake detection algorithms. This paper presents an attention-based DeepFake detection (ADD) method that exploits the fine-grained and spatial locality attributes of artificially synthesized videos for enhanced detection. ADD framework is c
33

Jheelan, Jhanvi, and Sameerchand Pudaruth. "Using Deep Learning to Identify Deepfakes Created Using Generative Adversarial Networks." Computers 14, no. 2 (2025): 60. https://doi.org/10.3390/computers14020060.

Abstract:
Generative adversarial networks (GANs) have revolutionised various fields by creating highly realistic images, videos, and audio, thus enhancing applications such as video game development and data augmentation. However, this technology has also given rise to deepfakes, which pose serious challenges due to their potential to create deceptive content. Thousands of media reports have informed us of such occurrences, highlighting the urgent need for reliable detection methods. This study addresses the issue by developing a deep learning (DL) model capable of distinguishing between real and fake f
34

Sweety. "The Rise of AI-Powered Cybersecurity Threats and the Evolution of Defense Mechanisms." International Journal for Research in Applied Science and Engineering Technology 13, no. 5 (2025): 7215–19. https://doi.org/10.22214/ijraset.2025.71745.

Abstract:
The rapid integration of Artificial Intelligence (AI) into digital infrastructure has significantly transformed both cybersecurity defense and attack mechanisms. While AI is enhancing security capabilities through intelligent intrusion detection, anomaly recognition, and real-time threat response, it is simultaneously empowering malicious actors with sophisticated tools such as deepfake technology, AI-generated phishing campaigns, adversarial attacks, and self-learning malware. These AI-powered threats challenge traditional security paradigms by evolving faster than conventional defensive s
35

Trivedi, Aayush. "Identifying Deepfake Cyber Attacks: Challenges and Countermeasures." International Journal for Research in Applied Science and Engineering Technology 13, no. 1 (2025): 475–78. https://doi.org/10.22214/ijraset.2025.66305.

Full text
Abstract:
Deepfake technology has emerged as a major cybersecurity threat, enabling sophisticated cyber attacks that exploit artificial intelligence (AI) and machine learning (ML). This paper explores the methodologies used to detect deepfake-based cyber threats, including image and video forensics, AI-driven detection systems, and biometric verification. Additionally, countermeasures such as blockchain authentication and adversarial training are examined. A comprehensive review of deepfake detection datasets is also provided, discussing their efficacy and relevance. This study aims to enhance awareness
APA, Harvard, Vancouver, ISO, and other styles
36

Gosavi, Amol. "Deepfake Video Face Detection." International Journal for Research in Applied Science and Engineering Technology 13, no. 4 (2025): 5840–47. https://doi.org/10.22214/ijraset.2025.69233.

Full text
Abstract:
The emergence of deepfake technology, which relies on generative adversarial networks (GANs), has raised substantial concerns in the realm of digital media. This technology enables the manipulation of facial features in videos, leading to potential misuse for spreading false information, misrepresentation, and identity theft. As a result, there is a pressing need to establish robust methods for detecting deepfakes effectively. Detecting deepfake videos is particularly difficult due to their increasingly realistic appearance and the sophisticated techniques involved in their creation. This rese
APA, Harvard, Vancouver, ISO, and other styles
37

Lad, Sumit. "Applied Ethical and Explainable AI in Adversarial Deepfake Detection: From Theory to Real-World Systems." Journal of Artificial Intelligence General science (JAIGS) ISSN:3006-4023 6, no. 1 (2024): 126–37. http://dx.doi.org/10.60087/jaigs.v6i1.236.

Full text
Abstract:
Deepfake technology is advancing by the minute. This gives rise to increased privacy, trust and security risks. Such technology can be used for malicious activities like manipulating public opinion and spreading misinformation using social media. Adversarial machine learning techniques seem to be a strong defense in detecting and flagging deepfake content. But the challenge with practical use of many deepfake detection models is that they operate as black-boxes with little transparency or accountability in their decisions. This paper proposes a framework and guidelines to integrate ethical AI
APA, Harvard, Vancouver, ISO, and other styles
38

Komarolu, Janardhan, and C. Nagaraju. "Design of an Adaptive and Iterative Multi-Modal Transformer-Based Biometric Security Framework for Robust Authentication and Spoof Detection." Communications on Applied Nonlinear Analysis 32, no. 9s (2025): 2658–77. https://doi.org/10.52783/cana.v32.4547.

Full text
Abstract:
As more people rely on biometrics in conjunction with traditional identity authentication systems, attacks like deepfakes and adversarial techniques have emerged as prominent threats in several identity verification systems. Traditional unimodal and even static multimodal schemes are generally found ineffective in the face of new attacks because unimodality leaves them vulnerable to different adversarial manipulations, cannot verify continuously during the use phase, and lack adaptability to change. Therefore, in light of these limitations, we present an adaptive, robust, and privacy-preservin
APA, Harvard, Vancouver, ISO, and other styles
39

Journal, IJSREM. "Deep Fake Face Detection Using Deep Learning Tech with LSTM." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 02 (2024): 1–10. http://dx.doi.org/10.55041/ijsrem28624.

Full text
Abstract:
The fabrication of extremely lifelike spoof films and pictures that are getting harder to tell apart from actual content is now possible thanks to the rapid advancement of deepfake technology. A number of industries, including cybersecurity, politics, and journalism, are greatly impacted by the widespread use of deepfakes, which seriously jeopardizes the accuracy of digital media. In computer vision, machine learning, and digital forensics, detecting deepfakes has emerged as a crucial topic for study and development. An outline of the most recent cutting-edge methods and difficulties in dee
APA, Harvard, Vancouver, ISO, and other styles
40

Jabbar, Nadeem, Sohail Masood Bhatti, Muhammad Rashid, Arfan Jaffar, and Sheeraz Akram. "Single-layer KAN for deepfake classification: Balancing efficiency and performance in resource constrained environments." PLOS One 20, no. 7 (2025): e0326565. https://doi.org/10.1371/journal.pone.0326565.

Full text
Abstract:
Deepfakes, synthetic media created using artificial intelligence, threaten the authenticity of digital content. Traditional detection methods, such as Convolutional Neural Networks (CNNs), require substantial computational resources, rendering them impractical for resource-constrained devices like smartphones and IoT systems. This study evaluates a single-layer Kolmogorov-Arnold Network (KAN) with 200 nodes for efficient deepfake classification. Experimental results show that KAN achieves 95.01% accuracy on the FaceForensics++ dataset and 88.32% on the Celeb-DF dataset, while requiring only 52
APA, Harvard, Vancouver, ISO, and other styles
41

Bisht, Upasana, and Pooja. "Evolving Deepfake Technologies: Advancements, Detection Techniques, and Societal Impact." Don Bosco Institute of Technology Delhi Journal of Research 1, no. 2 (2025): 38–43. https://doi.org/10.48165/dbitdjr.2024.1.02.06.

Full text
Abstract:
The rapid advancement of Deepfake technology, powered by deep learning algorithms and Generative Adversarial Networks (GANs), presents a paradigm shift in digital content creation and manipulation. This technology, capable of generating highly realistic but entirely synthetic audiovisual content, has implications that stretch across various domains, from entertainment to politics, posing both opportunities for innovation and risks for misinformation and privacy violations. This paper provides a comprehensive overview of the evolution of Deepfake technology, highlighting key developments in AI-
APA, Harvard, Vancouver, ISO, and other styles
42

Naik, Deepak. "Fake Media Forensics: AI-Driven Forensic Analysis of Fake Multimedia Content." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 09, no. 05 (2025): 1–9. https://doi.org/10.55041/ijsrem47208.

Full text
Abstract:
With the rapid advancement of deep learning techniques, the generation of synthetic media, commonly known as deepfakes, has reached new levels of sophistication. These techniques pose serious threats to digital security, privacy, and the spread of misinformation. Existing deepfake detection models primarily analyze video, audio, or image-based forgeries in isolation, yet they seldom employ unified multi-modal examination methods. The authors introduce here a multi-modal deepfake detection system. The proposed framew
APA, Harvard, Vancouver, ISO, and other styles
43

Nimbalkar, Suhas. "Deepfake Detection in Call Recordings: A Deep Learning Solution for Voice Authentication." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 09, no. 05 (2025): 1–9. https://doi.org/10.55041/ijsrem49194.

Full text
Abstract:
Deepfake technology has advanced exponentially, intensifying fears surrounding the credibility of audio recordings, for instance in telecommunications and security. This project proposes a fully deep learning-based approach to detecting deepfake voice recordings in call communications, as an improvement to the voice authentication processes in use. With this in mind, we devised an adaptive architecture that combines convolutional neural networks (CNNs) and recurrent neural networks (RNNs) to assist in distinguishing between real and fabri
APA, Harvard, Vancouver, ISO, and other styles
44

Rabhi, Mouna, Spiridon Bakiras, and Roberto Di Pietro. "Audio-deepfake detection: Adversarial attacks and countermeasures." Expert Systems with Applications 250 (September 2024): 123941. http://dx.doi.org/10.1016/j.eswa.2024.123941.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Gaddam, Narayana. "ADVERSARIAL MACHINE LEARNING FOR STRENGTHENING DEEPFAKE DETECTION." INTERNATIONAL JOURNAL OF ARTIFICIAL INTELLIGENCE & MACHINE LEARNING 2, no. 1 (2023): 190–203. https://doi.org/10.34218/ijaiml_02_01_018.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Savitha, A. C., Kumar KM Madhu, M. Pallavi, Chincholi Pallavi, H. B. Prethi, and Rachitha. "Experimental Detection of Deep Fake Images Using Face Swap Algorithm." Journal of Scholastic Engineering Science and Management (JSESM), A Peer Reviewed Refereed Multidisciplinary Research Journal 4, no. 5 (2025): 56–61. https://doi.org/10.5281/zenodo.15397033.

Full text
Abstract:
Deepfakes enable highly realistic face-swapping in videos using deep learning. To address the threat posed by Deepfakes, the DFDC dataset, the largest face-swapped video dataset to date, was created with over 100,000 clips generated using multiple methods, including Deepfake Autoencoders and GANs. The dataset consists of videos from 3,426 consenting actors. It supports the development of scalable Deepfake detection models and includes a public Kaggle competition to benchmark solutions. The dataset highlights the complexity of Deepfake detection but shows the potential for generalization to rea
APA, Harvard, Vancouver, ISO, and other styles
47

Phanireddy, Sandeep. "Advancements in AI for Deepfake attacks Detection and prevention." International Scientific Journal of Engineering and Management 02, no. 11 (2023): 1–8. https://doi.org/10.55041/isjem01274.

Full text
Abstract:
Deepfake technology has rapidly evolved, raising significant ethical, security, and societal concerns. By leveraging deep learning models, malicious actors can create highly realistic fake videos and audio clips, making it increasingly difficult to distinguish between real and manipulated content. This paper explores the mechanics behind deepfake creation, its implications in misinformation, fraud, and identity theft, and the challenges in detecting and mitigating such threats. Through a discussion on detection techniques, legal frameworks, and public awareness initiatives, this study highligh
APA, Harvard, Vancouver, ISO, and other styles
48

Wang, Wendy Edda, Davide Salvi, Viola Negroni, Daniele Ugo Leonzio, Paolo Bestagini, and Stefano Tubaro. "BIM-Based Adversarial Attacks Against Speech Deepfake Detectors." Electronics 14, no. 15 (2025): 2967. https://doi.org/10.3390/electronics14152967.

Full text
Abstract:
Automatic Speaker Verification (ASV) systems are increasingly employed to secure access to services and facilities. However, recent advances in speech deepfake generation pose serious threats to their reliability. Modern speech synthesis models can convincingly imitate a target speaker’s voice and generate realistic synthetic audio, potentially enabling unauthorized access through ASV systems. To counter these threats, forensic detectors have been developed to distinguish between real and fake speech. Although these models achieve strong performance, their deep learning nature makes them susce
APA, Harvard, Vancouver, ISO, and other styles
49

Wan, Da, Manchun Cai, Shufan Peng, Wenkai Qin, and Lanting Li. "Deepfake Detection Algorithm Based on Dual-Branch Data Augmentation and Modified Attention Mechanism." Applied Sciences 13, no. 14 (2023): 8313. http://dx.doi.org/10.3390/app13148313.

Full text
Abstract:
Mainstream deepfake detection algorithms generally fail to fully extract forgery traces and have low accuracy when detecting forged images with natural corruptions or human damage. On this basis, a new algorithm based on an adversarial dual-branch data augmentation framework and a modified attention mechanism is proposed in this paper to improve the robustness of detection models. First, this paper combines the traditional random sampling augmentation method with the adversarial sample idea to enhance and expand the forged images in data preprocessing. Then, we obtain training samples with div
APA, Harvard, Vancouver, ISO, and other styles
50

Yang, Sung-Hyun, Keshav Thapa, and Barsha Lamichhane. "Detection of Image Level Forgery with Various Constraints Using DFDC Full and Sample Datasets." Sensors 22, no. 23 (2022): 9121. http://dx.doi.org/10.3390/s22239121.

Full text
Abstract:
The emergence of advanced machine learning and deep learning techniques, such as autoencoders and generative adversarial networks, has made it possible to generate images known as deepfakes, which astonishingly resemble real images. These deepfake images are hard to distinguish from real images and are being used unethically against famous personalities such as politicians, celebrities, and social workers. Hence, we propose a method to detect these deepfake images using a lightweight convolutional neural network (CNN). Our research is conducted with Deep Fake Detection Challenge (DFDC) full and samp
APA, Harvard, Vancouver, ISO, and other styles