Ready-made bibliography on the topic "DETECTING DEEPFAKES"

Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles


Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "DETECTING DEEPFAKES".

An "Add to bibliography" button is available next to every work in the bibliography. Use it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a scholarly publication as a .pdf file and read its abstract online, whenever the corresponding fields are available in the work's metadata.

Journal articles on the topic "DETECTING DEEPFAKES"

1

Mai, Kimberly T., Sergi Bray, Toby Davies, and Lewis D. Griffin. "Warning: Humans cannot reliably detect speech deepfakes." PLOS ONE 18, no. 8 (2023): e0285333. http://dx.doi.org/10.1371/journal.pone.0285333.

Abstract:
Speech deepfakes are artificial voices generated by machine learning models. Previous literature has highlighted deepfakes as one of the biggest security threats arising from progress in artificial intelligence due to their potential for misuse. However, studies investigating human detection capabilities are limited. We presented genuine and deepfake audio to n = 529 individuals and asked them to identify the deepfakes. We ran our experiments in English and Mandarin to understand if language affects detection performance and decision-making rationale. We found that detection capability is unre…
2

Dobber, Tom, Nadia Metoui, Damian Trilling, Natali Helberger, and Claes de Vreese. "Do (Microtargeted) Deepfakes Have Real Effects on Political Attitudes?" International Journal of Press/Politics 26, no. 1 (2020): 69–91. http://dx.doi.org/10.1177/1940161220944364.

Abstract:
Deepfakes are perceived as a powerful form of disinformation. Although many studies have focused on detecting deepfakes, few have measured their effects on political attitudes, and none have studied microtargeting techniques as an amplifier. We argue that microtargeting techniques can amplify the effects of deepfakes, by enabling malicious political actors to tailor deepfakes to susceptibilities of the receiver. In this study, we have constructed a political deepfake (video and audio), and study its effects on political attitudes in an online experiment (N = 278). We find that attitudes towar…
3

Vinogradova, Ekaterina. "The malicious use of political deepfakes and attempts to neutralize them in Latin America." Latinskaia Amerika, no. 5 (2023): 35. http://dx.doi.org/10.31857/s0044748x0025404-3.

Abstract:
Deepfake technology has revolutionized the field of artificial intelligence and communication processes, creating a real threat of misinformation of target audiences on digital platforms. The malicious use of political deepfakes became widespread between 2017 and 2023. The political leaders of Argentina, Brazil, Colombia and Mexico were attacked with elements of doxing. Fake videos that used the politicians' faces undermined their reputations, diminishing the trust of the electorate, and became an advanced tool for manipulating public opinion. A series of political deepfakes has r…
4

Singh, Preeti, Khyati Chaudhary, Gopal Chaudhary, Manju Khari, and Bharat Rawal. "A Machine Learning Approach to Detecting Deepfake Videos: An Investigation of Feature Extraction Techniques." Journal of Cybersecurity and Information Management 9, no. 2 (2022): 42–50. http://dx.doi.org/10.54216/jcim.090204.

Abstract:
Deepfake videos are a growing concern today as they can be used to spread misinformation and manipulate public opinion. In this paper, we investigate the use of different feature extraction techniques for detecting deepfake videos using machine learning algorithms. We explore three feature extraction techniques, including facial landmarks detection, optical flow, and frequency analysis, and evaluate their effectiveness in detecting deepfake videos. We compare the performance of different machine learning algorithms and analyze their ability to detect deepfakes using the extracted features. Our…
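For orientation, below is a minimal Python sketch of the kind of pipeline this abstract describes: a hand-crafted frequency-analysis feature extracted from face crops and fed to an off-the-shelf classifier. The feature definition, classifier choice, and parameters are illustrative assumptions, not taken from the paper.

# A minimal sketch (not the paper's code): extract a simple frequency-domain
# feature vector from face crops and train an off-the-shelf classifier.
# Feature choice, data, and hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def frequency_features(gray_face: np.ndarray, bands: int = 16) -> np.ndarray:
    """Average log-magnitude of the 2-D FFT in concentric radial bands."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray_face)))
    log_mag = np.log1p(spectrum)
    h, w = gray_face.shape
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    edges = np.linspace(0, radius.max(), bands + 1)
    return np.array([log_mag[(radius >= lo) & (radius < hi)].mean()
                     for lo, hi in zip(edges[:-1], edges[1:])])

# X: stack of grayscale face crops, y: 0 = real, 1 = deepfake (placeholder data).
rng = np.random.default_rng(0)
X = np.stack([frequency_features(rng.random((128, 128))) for _ in range(40)])
y = rng.integers(0, 2, size=40)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))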
5

Das, Rashmiranjan, Gaurav Negi, and Alan F. Smeaton. "Detecting Deepfake Videos Using Euler Video Magnification." Electronic Imaging 2021, no. 4 (2021): 272–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.4.mwsf-272.

Abstract:
Recent advances in artificial intelligence make it progressively harder to distinguish between genuine and counterfeit media, especially images and videos. One recent development is the rise of deepfake videos, based on manipulating videos using advanced machine learning techniques. This involves replacing the face of an individual in a source video with the face of a second person in the destination video. This idea is becoming progressively refined as deepfakes are getting more seamless and simpler to compute. Combined with the outreach and speed of social media, deepfakes could ea…
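As a rough illustration of the Eulerian video magnification idea referenced in this entry, the sketch below amplifies a temporal frequency band of a grayscale face video so that subtle signals become easier to inspect. It is a simplified stand-in for the authors' pipeline; the band limits, amplification factor, and downsampling scale are assumptions.

# Simplified Eulerian video magnification sketch: spatially low-pass the
# frames, temporally bandpass-filter each pixel, amplify, and add back.
import numpy as np
from scipy.ndimage import zoom
from scipy.signal import butter, filtfilt

def magnify(frames: np.ndarray, fps: float, low: float = 0.8, high: float = 3.0,
            alpha: float = 20.0, scale: float = 0.25) -> np.ndarray:
    """frames: (T, H, W) grayscale video in [0, 1]; returns magnified video."""
    # Spatial low-pass via downsampling (stand-in for a Gaussian pyramid level).
    small = zoom(frames, (1, scale, scale), order=1)
    # Temporal bandpass filter applied independently to every pixel.
    b, a = butter(2, [low / (fps / 2), high / (fps / 2)], btype="band")
    band = filtfilt(b, a, small, axis=0)
    # Amplify the filtered signal and add it back at full resolution.
    boost = zoom(alpha * band, (1, 1 / scale, 1 / scale), order=1)
    boost = boost[:, :frames.shape[1], :frames.shape[2]]
    return np.clip(frames + boost, 0.0, 1.0)

# Toy usage: 90 random frames at 30 fps stand in for aligned face crops.
video = np.random.default_rng(1).random((90, 64, 64))
print(magnify(video, fps=30.0).shape)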
6

Raza, Ali, Kashif Munir, and Mubarak Almutairi. "A Novel Deep Learning Approach for Deepfake Image Detection." Applied Sciences 12, no. 19 (2022): 9820. http://dx.doi.org/10.3390/app12199820.

Abstract:
Deepfake is utilized in synthetic media to generate fake visual and audio content based on a person's existing media. The deepfake replaces a person's face and voice with fake media to make it realistic-looking. Fake media content generation is unethical and a threat to the community. Nowadays, deepfakes are highly misused in cybercrimes for identity theft, cyber extortion, fake news, financial fraud, celebrity fake obscenity videos for blackmailing, and many more. According to a recent Sensity report, over 96% of the deepfakes are of obscene content, with most victims being from the United Ki…
7

Jameel, Wildan J., Suhad M. Kadhem, and Ayad R. Abbas. "Detecting Deepfakes with Deep Learning and Gabor Filters." ARO-The Scientific Journal of Koya University 10, no. 1 (2022): 18–22. http://dx.doi.org/10.14500/aro.10917.

Abstract:
The proliferation of many editing programs based on artificial intelligence techniques has contributed to the emergence of deepfake technology. Deepfakes fabricate and falsify facts by making a person appear to do actions or say words that they never did or said, so developing an algorithm for deepfake detection is very important for discriminating real from fake media. Convolutional neural networks (CNNs) are among the most complex classifiers, but choosing the nature of the data fed to these networks is extremely important. For this reason, we capture fine texture details of inp…
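In the spirit of the Gabor-filter preprocessing this abstract points to, the sketch below filters a face crop with a small Gabor bank and feeds the stacked responses to a tiny CNN. The filter parameters and network architecture are illustrative assumptions rather than the paper's configuration.

# A minimal sketch: Gabor responses as input channels to a small CNN
# that outputs real-vs-fake logits. Untrained; for illustration only.
import cv2
import numpy as np
import torch
import torch.nn as nn

def gabor_bank(gray: np.ndarray, orientations: int = 4) -> np.ndarray:
    """Return (orientations, H, W) Gabor responses for a grayscale image."""
    responses = []
    for k in range(orientations):
        theta = k * np.pi / orientations
        kernel = cv2.getGaborKernel((21, 21), sigma=4.0, theta=theta,
                                    lambd=10.0, gamma=0.5)
        responses.append(cv2.filter2D(gray.astype(np.float32), -1, kernel))
    return np.stack(responses)

class TinyCNN(nn.Module):
    def __init__(self, in_channels: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2))
    def forward(self, x):
        return self.net(x)

face = np.random.default_rng(2).random((128, 128)).astype(np.float32)
features = torch.from_numpy(gabor_bank(face)).unsqueeze(0)  # (1, 4, 128, 128)
logits = TinyCNN()(features)
print(logits.shape)  # real-vs-fake logits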
8

Giudice, Oliver, Luca Guarnera, and Sebastiano Battiato. "Fighting Deepfakes by Detecting GAN DCT Anomalies." Journal of Imaging 7, no. 8 (2021): 128. http://dx.doi.org/10.3390/jimaging7080128.

Abstract:
To properly counter the deepfake phenomenon, the need to design new deepfake detection algorithms arises; the misuse of this formidable A.I. technology brings serious consequences in the private life of every involved person. The state of the art proliferates with solutions using deep neural networks to detect fake multimedia content, but unfortunately these algorithms appear to be neither generalizable nor explainable. However, traces left by Generative Adversarial Network (GAN) engines during the creation of deepfakes can be detected by analyzing ad-hoc frequencies. For this reason, in this…
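To make the frequency-domain idea concrete, here is a minimal sketch that compares the DCT band-energy profile of an image against a profile estimated from known-real images and flags large deviations. The profile estimation, band split, and threshold are assumptions for illustration, not the GAN-specific coefficient statistics analyzed in the paper.

# A minimal sketch of DCT-based frequency anomaly checking for face images.
import numpy as np
from scipy.fft import dctn

def dct_band_energy(gray: np.ndarray, bands: int = 8) -> np.ndarray:
    """Mean log-magnitude of 2-D DCT coefficients grouped by diagonal bands."""
    coeffs = np.log1p(np.abs(dctn(gray, norm="ortho")))
    h, w = gray.shape
    yy, xx = np.mgrid[0:h, 0:w]
    diag = (yy / h + xx / w) / 2.0              # 0 = DC corner, ~1 = highest band
    edges = np.linspace(0.0, 1.0, bands + 1)
    return np.array([coeffs[(diag >= lo) & (diag < hi)].mean()
                     for lo, hi in zip(edges[:-1], edges[1:])])

# Stand-in "real" reference profile estimated from placeholder images.
rng = np.random.default_rng(3)
real_profile = np.mean([dct_band_energy(rng.random((128, 128)))
                        for _ in range(20)], axis=0)

def looks_generated(gray: np.ndarray, threshold: float = 0.5) -> bool:
    """Flag images whose band-energy profile deviates strongly from 'real'."""
    return bool(np.abs(dct_band_energy(gray) - real_profile).sum() > threshold)

print(looks_generated(rng.random((128, 128))))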
9

Lim, Suk-Young, Dong-Kyu Chae, and Sang-Chul Lee. "Detecting Deepfake Voice Using Explainable Deep Learning Techniques." Applied Sciences 12, no. 8 (2022): 3926. http://dx.doi.org/10.3390/app12083926.

Abstract:
Fake media, generated by methods such as deepfakes, have become indistinguishable from real media, but their detection has not improved at the same pace. Furthermore, the absence of interpretability in deepfake detection models makes their reliability questionable. In this paper, we present a human perception level of interpretability for deepfake audio detection. Based on their characteristics, we implement several explainable artificial intelligence (XAI) methods used for image classification on an audio-related task. In addition, by examining the human cognitive process of XAI on image clas…
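As a small illustration of porting an image-classification XAI technique to audio, the sketch below computes a plain gradient-saliency map over a spectrogram fed to an untrained stand-in real-vs-deepfake classifier. The architecture and the choice of saliency method are assumptions, not the paper's setup.

# Gradient saliency over a spectrogram input to a tiny voice classifier.
import torch
import torch.nn as nn

class SpectrogramClassifier(nn.Module):
    """Tiny CNN over a (1, mel_bins, frames) spectrogram: real vs. deepfake."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2))
    def forward(self, x):
        return self.net(x)

model = SpectrogramClassifier().eval()

# Stand-in for a log-mel spectrogram of one utterance: (batch, 1, mels, frames).
spec = torch.rand(1, 1, 80, 200, requires_grad=True)
logits = model(spec)
logits[0, 1].backward()                      # gradient of the "deepfake" logit

# Saliency map: which time-frequency cells most influence the decision.
saliency = spec.grad.abs().squeeze()         # shape (80, 200)
print(saliency.shape, saliency.max().item())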
10

Gadgilwar, Jitesh, Kunal Rahangdale, Om Jaiswal, Parag Asare, Pratik Adekar, and Leela Bitla. "Exploring Deepfakes - Creation Techniques, Detection Strategies, and Emerging Challenges: A Survey." International Journal for Research in Applied Science and Engineering Technology 11, no. 3 (2023): 1491–95. http://dx.doi.org/10.22214/ijraset.2023.49681.

Abstract:
Deep learning, integrated with artificial intelligence algorithms, has brought about numerous beneficial practical technologies. However, it also brings up a problem that the world is facing today. Despite its innumerable suitable applications, it poses a danger to public personal privacy, democracy, and corporate credibility. One such use that has emerged is the deepfake, which has caused chaos on the internet. Deepfakes manipulate an individual's image and video, creating problems in differentiating the original from the fake. This requires a solution in today's period to counter and a…