A selection of scholarly literature on the topic "Adversarial Deepfake"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles


Browse the lists of current articles, books, dissertations, conference papers, and other scholarly sources on the topic "Adversarial Deepfake".

Next to each work in the list of references there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication as a .pdf file and read its abstract online, provided these are available in the metadata.

Journal articles on the topic "Adversarial Deepfake"

1

Lad, Sumit. "Adversarial Approaches to Deepfake Detection: A Theoretical Framework for Robust Defense." Journal of Artificial Intelligence General science (JAIGS) ISSN:3006-4023 6, no. 1 (2024): 46–58. http://dx.doi.org/10.60087/jaigs.v6i1.225.

Abstract:
The rapid improvements in the capabilities of neural networks and generative adversarial networks (GANs) have given rise to extremely sophisticated deepfake technologies. This has made it very difficult to reliably recognize fake digital content. It has enabled the creation of highly convincing synthetic media which can be used in malicious ways in this era of user-generated information and social media. Existing deepfake detection techniques are effective against early iterations of deepfakes but become increasingly vulnerable to more sophisticated deepfakes and adversarial attacks. In this paper we
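The entry above concerns hardening deepfake detectors against adversarial attacks. As a purely illustrative aid (not the paper's own framework), the PyTorch sketch below shows the standard adversarial-training recipe commonly used for this purpose: FGSM-perturbed copies of each batch are mixed into the loss of a hypothetical binary real/fake classifier `model`.

```python
# Illustrative only: generic FGSM-based adversarial training for a binary
# real/fake detector. `model` is any torch.nn.Module mapping an image batch
# to two logits; nothing here is taken from the paper itself.
import torch
import torch.nn.functional as F

def fgsm_examples(model, x, y, eps=4 / 255):
    """Craft FGSM-perturbed copies of a batch (inputs assumed to lie in [0, 1])."""
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

def adversarial_training_step(model, optimizer, x, y, adv_weight=0.5):
    """One training step on a mixture of clean and adversarial examples."""
    x_adv = fgsm_examples(model, x, y)          # attack the current model state
    optimizer.zero_grad()
    loss = (1 - adv_weight) * F.cross_entropy(model(x), y) \
         + adv_weight * F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```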
2

Abbasi, Maryam, Paulo Váz, José Silva, and Pedro Martins. "Comprehensive Evaluation of Deepfake Detection Models: Accuracy, Generalization, and Resilience to Adversarial Attacks." Applied Sciences 15, no. 3 (2025): 1225. https://doi.org/10.3390/app15031225.

Abstract:
The rise of deepfakes—synthetic media generated using artificial intelligence—threatens digital content authenticity, facilitating misinformation and manipulation. However, deepfakes can also depict real or entirely fictitious individuals, leveraging state-of-the-art techniques such as generative adversarial networks (GANs) and emerging diffusion-based models. Existing detection methods face challenges with generalization across datasets and vulnerability to adversarial attacks. This study focuses on subsets of frames extracted from the DeepFake Detection Challenge (DFDC) and FaceForensics++ v
3

Garcia, Jan Mark. "Exploring Deepfakes and Effective Prevention Strategies: A Critical Review." Psychology and Education: A Multidisciplinary Journal 33, no. 1 (2025): 93–96. https://doi.org/10.70838/pemj.330107.

Abstract:
Deepfake technology, powered by artificial intelligence and deep learning, has rapidly advanced, enabling the creation of highly realistic synthetic media. While it presents opportunities in entertainment and creative applications, deepfakes pose significant risks, including misinformation, identity fraud, and threats to privacy and national security. This study explores the evolution of deepfake technology, its implications, and current detection techniques. Existing methods for deepfake detection, such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), and generative
4

Zhuang, Zhong, Yoichi Tomioka, Jungpil Shin, and Yuichi Okuyama. "PGD-Trap: Proactive Deepfake Defense with Sticky Adversarial Signals and Iterative Latent Variable Refinement." Electronics 13, no. 17 (2024): 3353. http://dx.doi.org/10.3390/electronics13173353.

Abstract:
With the development of artificial intelligence (AI), deepfakes, in which the face of one person is changed to another expression of the same person or a different person, have advanced. There is a need for countermeasures against crimes that exploit deepfakes. Methods to interfere with deepfake generation by adding an invisible weak adversarial signal to an image have been proposed. However, there is a problem: the weak signal can be easily removed by processing the image. In this paper, we propose trap signals that appear in response to a process that weakens adversarial signals. We also pro
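For readers unfamiliar with the proactive-defense idea summarized in the abstract above, the sketch below illustrates the generic building block such methods start from: a weak, norm-bounded adversarial signal crafted with projected gradient descent so that a hypothetical face-manipulation model `generator` no longer produces a clean result. The trap-signal and latent-refinement components of PGD-Trap itself are not reproduced here.

```python
# A minimal sketch, assuming a pretrained image-to-image face-manipulation
# model `generator` (torch.nn.Module, inputs/outputs in [0, 1]). It crafts a
# weak PGD perturbation that drives the generator's output away from what it
# would produce on the clean image.
import torch
import torch.nn.functional as F

def craft_protective_signal(generator, image, eps=8 / 255, alpha=2 / 255, steps=10):
    generator.eval()
    with torch.no_grad():
        clean_out = generator(image)                 # the undisturbed deepfake result
    delta = torch.zeros_like(image).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.mse_loss(generator((image + delta).clamp(0, 1)), clean_out)
        loss.backward()                              # gradient of the output distortion
        with torch.no_grad():
            delta += alpha * delta.grad.sign()       # ascend: maximize the distortion
            delta.clamp_(-eps, eps)                  # keep the signal weak (L-inf budget)
            delta.grad.zero_()
    return (image + delta).clamp(0, 1).detach()      # protected image to publish
```

As the abstract notes, such weak signals can be weakened by ordinary image processing, which is the gap the paper's trap signals are designed to close.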
5

Huang, Hao, Yongtao Wang, Zhaoyu Chen, et al. "CMUA-Watermark: A Cross-Model Universal Adversarial Watermark for Combating Deepfakes." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 1 (2022): 989–97. http://dx.doi.org/10.1609/aaai.v36i1.19982.

Abstract:
Malicious applications of deepfakes (i.e., technologies generating target facial attributes or entire faces from facial images) have posed a huge threat to individuals' reputation and security. To mitigate these threats, recent studies have proposed adversarial watermarks to combat deepfake models, leading them to generate distorted outputs. Despite achieving impressive results, these adversarial watermarks have low image-level and model-level transferability, meaning that they can protect only one facial image from one specific deepfake model. To address these issues, we propose a novel solut
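The CMUA-Watermark abstract above refers to a single perturbation that protects many images against several deepfake models. A heavily simplified sketch of that cross-model, universal idea follows; the surrogate models in `generators`, the face `dataloader`, and the MSE objective are illustrative assumptions, and any fusion or step-size tuning strategies described in the paper are omitted.

```python
# Heavily simplified sketch of a cross-model universal adversarial watermark:
# one perturbation is optimized over many images and several surrogate
# deepfake models so that every model's output is disrupted.
import torch
import torch.nn.functional as F

def train_universal_watermark(generators, dataloader, eps=10 / 255, alpha=1 / 255, epochs=1):
    watermark = None
    for _ in range(epochs):
        for images in dataloader:                            # face batches in [0, 1]
            if watermark is None:                            # one perturbation for all images
                watermark = torch.zeros_like(images[0]).requires_grad_(True)
            loss = 0.0
            for g in generators:                             # surrogate deepfake models
                with torch.no_grad():
                    clean_out = g(images)
                adv_out = g((images + watermark).clamp(0, 1))
                loss = loss + F.mse_loss(adv_out, clean_out) # distort every model's output
            loss.backward()
            with torch.no_grad():
                watermark += alpha * watermark.grad.sign()
                watermark.clamp_(-eps, eps)                  # imperceptibility budget
                watermark.grad.zero_()
    return watermark.detach()

# Protecting a new, unseen face is then a single addition:
# protected = (image + watermark).clamp(0, 1)
```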
6

Chen, Junyi, Minghao Yang, and Kaishen Yuan. "A Review of Deepfake Detection Techniques." Applied and Computational Engineering 117, no. 1 (2025): 165–74. https://doi.org/10.54254/2755-2721/2025.20955.

Abstract:
With the development of deepfake technology, the use of this technology to forge videos and images has caused serious privacy and legal problems in society. In order to solve these problems, deepfake detection is required. In this paper, the generation and detection techniques of deepfakes in recent years are studied. First, the principles of deepfake generation technology are briefly introduced, including Generative Adversarial Network (GAN)-based and autoencoder-based approaches. Then, this paper focuses on the detection techniques of deepfakes, classifies them based on the principles of each method, and su
7

Ghariwala, Love. "Impact of Deepfake Technology on Social Media: Detection, Misinformation and Societal Implications." International Journal for Research in Applied Science and Engineering Technology 13, no. 3 (2025): 2982–86. https://doi.org/10.22214/ijraset.2025.67997.

Abstract:
The rise of Artificial Intelligence (AI) has opened up new possibilities, but it also brings significant challenges. Deepfake technology, which creates realistic fake videos, raises concerns about privacy, identity, and consent. This paper explores the impacts of deepfakes and suggests solutions to mitigate their negative effects. Deepfake technology, which allows the manipulation and fabrication of audio, video, and images, has gained significant attention due to its potential to deceive and manipulate. As deepfakes proliferate on social media platforms, understanding their impact becomes cru
8

Noreen, Iram, Muhammad Shahid Muneer, and Saira Gillani. "Deepfake attack prevention using steganography GANs." PeerJ Computer Science 8 (October 20, 2022): e1125. http://dx.doi.org/10.7717/peerj-cs.1125.

Abstract:
Background Deepfakes are fake images or videos generated by deep learning algorithms. Ongoing progress in deep learning techniques like auto-encoders and generative adversarial networks (GANs) is approaching a level that makes deepfake detection ideally impossible. A deepfake is created by swapping videos, images, or audio with the target, consequently raising digital media threats over the internet. Much work has been done to detect deepfake videos through feature detection using a convolutional neural network (CNN), recurrent neural network (RNN), and spatiotemporal CNN. However, these techn
9

Shukla, Dheeraj. "Deep Fake Face Detection Using Deep Learning." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 09, no. 06 (2025): 1–9. https://doi.org/10.55041/ijsrem50976.

Abstract:
Keywords: Artificial Intelligence, deepfake technology, Generative Adversarial Networks (GAN), Detection System, Detection Accuracy, User accessibility, Digital content verification. In recent years, the rise of deepfake technology has raised significant concerns regarding the authenticity of digital content. Deepfakes, which are synthetic media created using advanced artificial intelligence techniques, can mislead viewers and pose risks to personal privacy, public trust, and social discourse. The proposed system focuses on developing a Generative Adversarial Network (GAN)-based deepfake detec
10

Omkar, Ajit Awadhut, and Priya Dhadawe. "Deep fakes and Mitigation Strategies." International Journal of Advance and Applied Research S6, no. 23 (2025): 121–28. https://doi.org/10.5281/zenodo.15194884.

Abstract:
Deepfake technology, powered by artificial intelligence (AI) and deep learning, has transformed digital media by enabling the creation of highly realistic synthetic content. While deepfakes have legitimate applications in entertainment, education, and accessibility, they also pose significant risks, including misinformation, identity theft, and political manipulation. The rapid advancement of deepfake generation techniques, particularly through Generative Adversarial Networks (GANs) and Autoencoders, has made it increasingly difficult to distinguish between real and fake content.
More sources

Books on the topic "Adversarial Deepfake"

1

Lanham, Micheal. Generating a New Reality: From Autoencoders and Adversarial Networks to Deepfakes. Apress L. P., 2021.

2

Urcuqui López, Christian Camilo, and Andrés Navarro Cadavid, eds. Ciberseguridad: los datos tienen la respuesta. Universidad Icesi, 2022. http://dx.doi.org/10.18046/eui/ee.4.2022.

Abstract:
This book presents the importance of information and how to use it in the data science process, then introduces the theoretical foundations of both disciplines and presents their application in eight research projects, which aim to guide the reader on how to use artificial intelligence from both the defensive and the offensive perspective. On the defensive side, the book addresses the design of precise experiments for developing models to detect malware on Android devices, cryptojacking, deepfakes, and malicious

Book chapters on the topic "Adversarial Deepfake"

1

Khan, Sarwar, Jun-Cheng Chen, Wen-Hung Liao, and Chu-Song Chen. "Adversarially Robust Deepfake Detection via Adversarial Feature Similarity Learning." In MultiMedia Modeling. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-53311-2_37.

2

Chen, Zengqiang, Xudong Wang, and Yuezun Li. "Enhancing Deepfake Detection via Adversarial Generative Learning." In Lecture Notes in Computer Science. Springer Nature Singapore, 2025. https://doi.org/10.1007/978-981-96-1068-6_22.

3

Vo, Ngan Hoang, Khoa D. Phan, Anh-Duy Tran, and Duc-Tien Dang-Nguyen. "Adversarial Attacks on Deepfake Detectors: A Practical Analysis." In MultiMedia Modeling. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-98355-0_27.

4

Vasoya, Yash, Dhairya Patel, Kanubhai K. Patel, Rutvij H. Jhaveri, Digvijaysinh M. Rathod, and Jigarkumar Shah. "Detecting Deepfake Images with Enhanced Generative Adversarial Networks." In Communications in Computer and Information Science. Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-88039-1_8.

5

Coccomini, Davide Alessandro, Roberto Caldelli, Giuseppe Amato, Fabrizio Falchi, and Claudio Gennaro. "Adversarial Magnification to Deceive Deepfake Detection Through Super Resolution." In Communications in Computer and Information Science. Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-74627-7_41.

6

Fernandes, Steven Lawrence, and Sumit Kumar Jha. "Adversarial Attack on Deepfake Detection Using RL Based Texture Patches." In Computer Vision – ECCV 2020 Workshops. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-66415-2_14.

7

Remya Revi, K., K. R. Vidya, and M. Wilscy. "Detection of Deepfake Images Created Using Generative Adversarial Networks: A Review." In Transactions on Computational Science and Computational Intelligence. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-49500-8_3.

8

Irfan, Muhammad, Myung J. Lee, and Daiki Nobayashi. "Robust Deepfake Detection and Resilient Adversarial Image Reconstruction with Reduced Features Set." In Lecture Notes on Data Engineering and Communications Technologies. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-72322-3_15.

9

Gautam, Abhishek, and Awadhesh Kumar Singh. "Deep Convolutional Neural Network Implementation for Detecting Generative Adversarial Network Generated Deepfake Videos." In Lecture Notes in Electrical Engineering. Springer Nature Singapore, 2025. https://doi.org/10.1007/978-981-97-7384-8_44.

10

Gaur, Loveleen, Mohan Bhandari, and Tanvi Razdan. "Development of Image Translating Model to Counter Adversarial Attacks." In DeepFakes. CRC Press, 2022. http://dx.doi.org/10.1201/9781003231493-5.


Conference papers on the topic "Adversarial Deepfake"

1

Mohamed, Saifeldin Nasser, Ahmed Amr Ahmed, and Wael Elsersy. "FGSM Adversarial Attack Detection On Deepfake Videos." In 2024 Intelligent Methods, Systems, and Applications (IMSA). IEEE, 2024. http://dx.doi.org/10.1109/imsa61967.2024.10652708.

2

Farooq, Muhammad Umar, Awais Khan, Kutub Uddin, and Khalid Mahmood Malik. "Transferable Adversarial Attacks on Audio Deepfake Detection." In 2025 IEEE/CVF Winter Conference on Applications of Computer Vision Workshops (WACVW). IEEE, 2025. https://doi.org/10.1109/wacvw65960.2025.00178.

3

Yadav, Anurag, Vishwas Singh, David, and Shailendra Narayan Singh. "Detection of DeepFake using Generative Adversarial Networks (GANs)." In 2025 4th OPJU International Technology Conference (OTCON) on Smart Computing for Innovation and Advancement in Industry 5.0. IEEE, 2025. https://doi.org/10.1109/otcon65728.2025.11070449.

4

Galdi, Chiara, Michele Panariello, Massimiliano Todisco, and Nicholas Evans. "2D-Malafide: Adversarial Attacks Against Face Deepfake Detection Systems." In 2024 International Conference of the Biometrics Special Interest Group (BIOSIG). IEEE, 2024. https://doi.org/10.1109/biosig61931.2024.10786754.

5

Meng, Xiangtao, Li Wang, Shanqing Guo, Lei Ju, and Qingchuan Zhao. "AVA: Inconspicuous Attribute Variation-based Adversarial Attack bypassing DeepFake Detection." In 2024 IEEE Symposium on Security and Privacy (SP). IEEE, 2024. http://dx.doi.org/10.1109/sp54263.2024.00155.

6

Zeng, Siding, Jiangyan Yi, Jianhua Tao, et al. "Adversarial Training and Gradient Optimization for Partially Deepfake Audio Localization." In ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2025. https://doi.org/10.1109/icassp49660.2025.10890470.

7

N, Pallavi, Pallavi T P, Sushma Bylaiah, and Goutam R. "Adversarial Robustness in DeepFake Detection: Enhancing Model Resilience with Defensive Strategies." In 2024 International Conference on Intelligent Cybernetics Technology & Applications (ICICyTA). IEEE, 2024. https://doi.org/10.1109/icicyta64807.2024.10913151.

8

Yang, Wang, Lingchen Zhao, and Dengpan Ye. "Reputation Defender: Local Black-Box Adversarial Attack against Image-Translation-Based DeepFake." In 2024 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2024. http://dx.doi.org/10.1109/icme57554.2024.10687690.

9

Nguyen-Le, Hong-Hanh, Van-Tuan Tran, Dinh-Thuc Nguyen, and Nhien-An Le-Khac. "D-CAPTCHA++: A Study of Resilience of Deepfake CAPTCHA under Transferable Imperceptible Adversarial Attack." In 2024 International Joint Conference on Neural Networks (IJCNN). IEEE, 2024. http://dx.doi.org/10.1109/ijcnn60899.2024.10650401.

10

Ain, Qurat Ul, Ali Javed, Khalid Mahmood Malik, and Aun Irtaza. "Exposing the Limits of Deepfake Detection using novel Facial mole attack: A Perceptual Black-Box Adversarial Attack Study." In 2024 IEEE International Conference on Image Processing (ICIP). IEEE, 2024. http://dx.doi.org/10.1109/icip51287.2024.10647949.


Reports of organizations on the topic "Adversarial Deepfake"

1

Hwang, Tim. Deepfakes: A Grounded Threat Assessment. Center for Security and Emerging Technology, 2020. http://dx.doi.org/10.51593/20190030.

Abstract:
The rise of deepfakes could enhance the effectiveness of disinformation efforts by states, political parties and adversarial actors. How rapidly is this technology advancing, and who in reality might adopt it for malicious ends? This report offers a comprehensive deepfake threat assessment grounded in the latest machine learning research on generative models.