
Dissertations / Theses on the topic 'Deepfake'


Consult the top 24 dissertations / theses for your research on the topic 'Deepfake.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Wang, Xueyu. "DeepFake's Adversary: Disrupting DeepFake by Perturbations." Thesis, The University of Sydney, 2022. https://hdl.handle.net/2123/28642.

Abstract:
In recent years, advances in generative models have enabled many powerful face manipulation systems based on Deep Neural Networks (DNNs), called DeepFakes. If DeepFakes are not controlled in a timely and proper manner, they could cause severe social harm and become a real threat, not only to celebrities but also to ordinary people. One way to defend against DeepFakes is to disrupt DeepFake generation by adding human-imperceptible perturbations to the source inputs, which distorts the DeepFake results from the perspective of human eyes. However, the existing methods only ensure that fake data will be visually distorted; they do not ensure that the disrupted (fake) data can also be detected by DeepFake detectors in an automated pipeline. If DeepFake detectors can still be spoofed by the disrupted data, the existing methods are impractical, since manually examining a large amount of data incurs a huge labour cost. We argue that the detectors do not share the perspective of human eyes, and thus might still be spoofed by the disrupted data. Besides, the existing disruption methods rely on iteration-based perturbation generation algorithms, which are time-consuming. In this work, we propose a novel DeepFake disruption algorithm called "DeepFake Disrupter". By training a perturbation generator, we can add human-imperceptible perturbations to the source images that need to be protected without any backpropagation update. The DeepFake results of these protected source inputs not only look unrealistic to the human eye but can also be easily distinguished by DeepFake detectors. For example, experimental results show that adding our trained perturbations to fake images generated by StarGAN yields a 10~20% increase in F1-score as evaluated by various DeepFake detectors.
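The one-pass idea, generating a bounded perturbation with a single forward pass instead of iterative optimisation, can be sketched as follows. This is a minimal illustration assuming PyTorch; the tiny convolutional generator and the pixel budget eps = 0.03 are placeholders, not the thesis's actual architecture or training objective.

```python
import torch
import torch.nn as nn

class PerturbationGenerator(nn.Module):
    """Produces a human-imperceptible, bounded perturbation in one forward pass."""
    def __init__(self, eps: float = 0.03):
        super().__init__()
        self.eps = eps  # assumed L-infinity budget on a [0, 1] pixel scale
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        # tanh keeps every pixel's perturbation within [-eps, eps]
        return torch.clamp(x + self.eps * torch.tanh(self.net(x)), 0.0, 1.0)

gen = PerturbationGenerator()
protected = gen(torch.rand(1, 3, 256, 256))  # protect an image without any backprop update
```

In training, such a generator would be optimised so that a DeepFake model fed the protected image produces visibly distorted, detector-flaggable output; only the inference path is shown here.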
2

Hasanaj, Enis, Albert Aveler, and William Söder. "Cooperative edge deepfake detection." Thesis, Jönköping University, JTH, Avdelningen för datateknik och informatik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-53790.

Abstract:
Deepfakes are an emerging problem in social media, and for celebrities and political figures it can be devastating to their reputation if the technology ends up in the wrong hands. Creating deepfakes is becoming increasingly easy. Attempts have been made at detecting whether a face in an image is real or not, but training these machine learning models can be a very time-consuming process. This research proposes a solution for training deepfake detection models cooperatively on the edge, in order to evaluate whether the training process, among other things, can be made more efficient with this approach. The feasibility of edge training is evaluated by training machine learning models on several different types of iPhone devices. The models are trained using the YOLOv2 object detection system. To test whether the YOLOv2 object detection system is able to distinguish between real and fake human faces in images, several models are trained on a computer. Each model is trained with either a different number of iterations or a different subset of data, since these factors have been identified as important to the performance of the models. The performance of the models is evaluated by measuring their accuracy in detecting deepfakes. Additionally, the deepfake detection models trained on a computer are ensembled using the bagging ensemble method, in order to evaluate the feasibility of cooperatively training a deepfake detection model by combining several models. Results show that the proposed solution is not feasible due to the time the training process takes on each mobile device. Additionally, each trained model is about 200 MB, and the size of the ensemble model grows linearly with each model added to the ensemble. This can cause the ensemble model to grow to several hundred gigabytes in size.
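As a rough illustration of the bagging step, the sketch below trains a handful of bootstrap-sampled detectors on stand-in feature vectors and lets them vote; the thesis itself trains YOLOv2 models on images, which is not reproduced here.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))        # stand-in per-face feature vectors
y = rng.integers(0, 2, size=200)      # 1 = deepfake, 0 = real (synthetic labels)

# Bagging: each base learner sees a bootstrap sample; predictions are aggregated by voting
ensemble = BaggingClassifier(n_estimators=5, random_state=0).fit(X, y)
print(ensemble.predict(X[:3]))
```

The storage problem the authors report follows directly from this design: the ensemble must keep every base model, so its size grows linearly with `n_estimators`.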
3

Spinato, Claudia <1995>. "Arte e intelligenza artificiale nell'era dei deepfake." Master's Degree Thesis, Università Ca' Foscari Venezia, 2021. http://hdl.handle.net/10579/20119.

Abstract:
With the development of artificial intelligence and new media, it is becoming increasingly difficult to orient oneself in the digital world and amid disinformation. The boundary between reality and illusion is blurred: over the last decade deepfakes have become an ever-greater danger, often difficult to recognise. Starting from the concept of hyperrealism as developed by several leading philosophers, and addressing the philosophical problem of the distinction between image and reality, this thesis follows that development through the birth of deepfakes and of artificial intelligence (specifically, GANs), while identifying a field in which this development can be innovative and have a positive outcome: art. If the hyperrealistic use of artificial intelligence can have a negative side, aimed at deceiving the viewer, could the creative use of the same technology instead open a new path?
4

Alkazhami, Emir. "Facial Identity Embeddings for Deepfake Detection in Videos." Thesis, Linköpings universitet, Datorseende, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-170587.

Abstract:
Forged videos of swapped faces, so-called deepfakes, have gained a lot of attention in recent years. Methods for automated detection of this type of manipulation are also seeing rapid progress. The purpose of this thesis is to evaluate the possibility and effectiveness of using deep embeddings from facial recognition networks as a basis for detecting such deepfakes. In addition, the thesis aims to answer whether the identity embeddings contain information that can be used for detection when analyzed over time, and whether it is suitable to include information about the person's head pose in this analysis. To answer these questions, three classifiers are created, each intended to answer one question. Their performances are compared with each other, and it is shown that identity embeddings are suitable as a basis for deepfake detection. Temporal analysis of the embeddings also seems effective, at least for deepfake methods that only work on a frame-by-frame basis. Including information about head poses in the videos is shown not to improve such a classifier.
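A minimal version of the temporal cue could look like this: compare each frame's identity embedding against a reference embedding of the genuine person and combine the mean similarity with its temporal stability. The 512-dimensional embeddings and the scoring rule are illustrative assumptions, not the thesis's classifiers.

```python
import numpy as np

def temporal_identity_cue(frame_embs: np.ndarray, reference: np.ndarray) -> float:
    """frame_embs: (n_frames, d) embeddings from a face recognition net;
    reference: (d,) embedding of the claimed identity."""
    e = frame_embs / np.linalg.norm(frame_embs, axis=1, keepdims=True)
    r = reference / np.linalg.norm(reference)
    sims = e @ r
    # deepfakes tend to drift: lower mean similarity and higher frame-to-frame variance
    return float(sims.mean() - sims.std())

score = temporal_identity_cue(np.random.randn(120, 512), np.random.randn(512))
```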
5

Guarnera, Luca. "Discovering Fingerprints for Deepfake Detection and Multimedia-Enhanced Forensic Investigations." Doctoral thesis, Università degli studi di Catania, 2021. http://hdl.handle.net/20.500.11769/539620.

Abstract:
Forensic Science, which concerns the application of technical and scientific methods to justice, investigation and evidence discovery, has evolved over the years into several fields, such as Multimedia Forensics, which involves the analysis of digital images, video and audio content. Multimedia data was (and still is) altered using common editing tools such as Photoshop and GIMP. Rapid advances in Deep Learning have opened up the possibility of creating sophisticated algorithms capable of manipulating images, video and audio in a "simple" manner, causing the emergence of a powerful yet frightening new phenomenon called deepfake: synthetic multimedia data created and/or altered using generative models. A great discovery made by forensic researchers over the years concerns the possibility of extracting a unique fingerprint that can determine the devices and software used to create the data itself. Unfortunately, extracting these traces turns out to be a complicated task. A fingerprint can be extracted not only from multimedia data, in order to determine the devices used in the acquisition phase, the social networks where the file was uploaded, or, more recently, the generative models used to create deepfakes; in general, such a trace can also be extracted from evidence recovered at a crime scene, such as shells or projectiles, to determine the model of gun that fired them (Forensic Firearms Ballistics Comparison). Forensic Analysis of Handwritten Documents is another field of Forensic Science, which can determine the author of a manuscript by extracting a fingerprint defined by a careful analysis of the text style in the document. Developing new algorithms for Deepfake Detection, Forensic Firearms Ballistics Comparison, and Forensic Handwritten Document Analysis was the main focus of this Ph.D. thesis. These three macro areas of Forensic Science share a common element, namely a unique fingerprint present in the data itself that can be extracted in order to solve the various tasks. Therefore, for each of these topics a preliminary analysis is performed and new detection techniques are presented, obtaining promising results in all these domains.
6

Tak, Hemlata. "End-to-End Modeling for Speech Spoofing and Deepfake Detection." Electronic Thesis or Diss., Sorbonne université, 2023. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2023SORUS104.pdf.

Abstract:
Voice biometric systems are being used in various applications for secure user authentication using automatic speaker verification technology. However, these systems are vulnerable to spoofing attacks, which have become even more challenging with recent advances in artificial intelligence algorithms. There is hence a need for more robust and efficient detection techniques. This thesis proposes novel detection algorithms which are designed to perform reliably in the face of the highest-quality attacks. The first contribution is a non-linear ensemble of sub-band classifiers, each of which uses a Gaussian mixture model. Competitive results show that models which learn sub-band-specific discriminative information can substantially outperform models trained on full-band signals. Given that deep neural networks are more powerful and can perform both feature extraction and classification, the second contribution is a RawNet2 model. It is an end-to-end (E2E) model which learns features directly from the raw waveform. The third contribution includes the first use of graph neural networks (GNNs) with an attention mechanism to model the complex relationship between spoofing cues present in the spectral and temporal domains. We propose an E2E spectro-temporal graph attention network called RawGAT-ST. The RawGAT-ST model is further extended to an integrated spectro-temporal graph attention network, named AASIST, which exploits the relationship between heterogeneous spectral and temporal graphs. Finally, this thesis proposes a novel data augmentation technique called RawBoost and uses a self-supervised, pre-trained speech model as a front-end to improve generalisation in wild conditions.
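The first contribution's sub-band idea can be illustrated with a single sub-band: fit one GMM on bona fide features and one on spoofed features, then score an utterance by their log-likelihood ratio. The Gaussian stand-in features and the component count are assumptions made for the sketch.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
bona = rng.normal(0.0, 1.0, (500, 20))   # stand-in cepstral features for one sub-band
spoof = rng.normal(0.5, 1.0, (500, 20))

gmm_bona = GaussianMixture(n_components=8, random_state=0).fit(bona)
gmm_spoof = GaussianMixture(n_components=8, random_state=0).fit(spoof)

def llr(utt_feats):
    # per-frame average log-likelihood ratio: positive leans bona fide, negative leans spoof
    return gmm_bona.score(utt_feats) - gmm_spoof.score(utt_feats)

print(llr(rng.normal(0.0, 1.0, (120, 20))))
```

In the thesis the ensemble combines such classifiers across many sub-bands non-linearly; here only one sub-band's score is computed.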
7

Weidenstolpe, Louise, and Jade Jönsson. "Manipulation i rörligt format - En studie kring deepfake video och dess påverkan." Thesis, Malmö universitet, Fakulteten för teknik och samhälle (TS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20573.

Abstract:
Manipulated videos can be created with deepfake technology, where fake images and sounds are produced that appear to be real. Deepfake technology is constantly improving, and it will become more difficult to detect manipulated video online. This may result in a large number of media consumers being unknowingly exposed to deepfake technology while using social media. The purpose of this study is to investigate young adults' awareness of, attitudes toward, and the impact of deepfake videos. Deepfake technology improves annually and more problems occur, which can cause negative consequences in the future if it is misused. The study is based on a quantitative method in the form of a web survey and a qualitative method with three focus groups. The conclusion shows that a large number of young adults are not aware of what a deepfake video is; however, there is some concern about deepfake technology and its development. The technology is perceived to pose future risks in terms of threats to democracy and politics, the spread of fake news, video manipulation, and a lack of source criticism. The positive aspects are that the technology can be used for entertainment purposes, in the film and television industry, and in healthcare. Another conclusion is that young adults will be more critical of the content they are exposed to in the future, but will likely be affected by deepfake technology either way.
8

Jönsson, Jade, and Louise Weidenstolpe. "Manipulation i rörligt format - En studie kring deepfake video och dess påverkan." Thesis, Malmö universitet, Fakulteten för teknik och samhälle (TS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20776.

Abstract:
Manipulated videos can be created with deepfake technology, where fake images and sounds are produced that appear to be real. Deepfake technology is constantly improving, and it will become more difficult to detect manipulated video online. This may result in a large number of media consumers being unknowingly exposed to deepfake technology while using social media. The purpose of this study is to investigate young adults' awareness of, attitudes toward, and the impact of deepfake videos. Deepfake technology improves annually and more problems occur, which can cause negative consequences in the future if it is misused. The study is based on a quantitative method in the form of a web survey and a qualitative method with three focus groups. The conclusion shows that a large number of young adults are not aware of what a deepfake video is; however, there is some concern about deepfake technology and its development. The technology is perceived to pose future risks in terms of threats to democracy and politics, the spread of fake news, video manipulation, and a lack of source criticism. The positive aspects are that the technology can be used for entertainment purposes, in the film and television industry, and in healthcare. Another conclusion is that young adults will be more critical of the content they are exposed to in the future, but will likely be affected by deepfake technology either way.
9

Fjellström, Lisa. "The Contribution of Visual Explanations in Forensic Investigations of Deepfake Video : An Evaluation." Thesis, Umeå universitet, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-184671.

Abstract:
Videos manipulated by machine learning have rapidly increased online in the past years. So-called deepfakes can depict people who never participated in a video recording by transposing their faces onto others in it. This raises concerns about the authenticity of media, which demands higher-performing detection methods in forensics. The introduction of AI detectors has been of interest, but is held back today by their lack of interpretability. The objective of this thesis was therefore to examine what the explainable AI method local interpretable model-agnostic explanations (LIME) could contribute to forensic investigations of deepfake video. An evaluation was conducted in which three multimedia forensics examiners evaluated the contribution of visual explanations of classifications when investigating deepfake video frames. The estimated contribution was not significant, yet answers showed that LIME may be used to indicate areas in which to start an examination. LIME was, however, not considered to provide sufficient proof of why a frame was classified as 'fake', and would, if introduced, be used as one of several methods in the process. Issues were apparent regarding the interpretability of the explanations, as well as LIME's ability to indicate features of manipulation with superpixels.
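For context, applying LIME to a single video frame follows the pattern below, using the `lime` package's image explainer. The toy `classifier_fn` stands in for the actual deepfake detector used in the study.

```python
import numpy as np
from lime import lime_image

def classifier_fn(images):
    # stand-in detector: returns [P(real), P(fake)] per image; replace with a real model
    p_fake = np.clip(images.mean(axis=(1, 2, 3)) / 255.0, 0.0, 1.0)
    return np.stack([1.0 - p_fake, p_fake], axis=1)

frame = np.random.randint(0, 255, (224, 224, 3)).astype(np.double)
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(frame, classifier_fn,
                                         top_labels=2, num_samples=200)
# superpixels that most pushed the frame toward its top label: the "areas to start examining"
img, mask = explanation.get_image_and_mask(explanation.top_labels[0],
                                           positive_only=True, num_features=5)
```

The thesis's reservation maps directly onto `mask`: the highlighted superpixels suggest where to look, but are not by themselves proof of manipulation.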
10

Firc, Anton. "Použitelnost Deepfakes v oblasti kybernetické bezpečnosti." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2021. http://www.nusl.cz/ntk/nusl-445534.

Abstract:
Deepfake technology has recently been on the rise. Many techniques and tools for creating deepfake media are emerging, and they are beginning to be used for both illegitimate and beneficial activities. Illegitimate use drives research into deepfake detection techniques and their continuous improvement, as well as the need to educate the general public about the dangers this technology brings. One of the less explored areas of malicious use is the use of deepfakes to fool voice authentication systems. Opinions on the feasibility of such attacks vary, but there is little scientific evidence. The aim of this work is to examine the current readiness of voice biometric systems to face deepfake recordings. The experiments performed show that voice biometric systems are vulnerable to deepfake recordings. Although almost all publicly available tools and models are intended for synthesizing English, this work shows that synthesizing speech in any language is not very difficult. Finally, I propose a solution to reduce the risk that deepfake recordings pose to voice biometric systems, namely using text-dependent voice verification, which I have shown to be more resistant to deepfake recordings.
11

Huang, Jiajun. "Learning to Detect Compressed Facial Animation Forgery Data with Contrastive Learning." Thesis, The University of Sydney, 2022. https://hdl.handle.net/2123/29183.

Abstract:
Facial forgery generation, which can be used to modify facial attributes, is a critical threat to digital society. Recent Deep Neural Network based forgery generation methods, called Deepfakes, can generate high-quality results that are hard to distinguish with the human eye. Various detection methods and datasets have been proposed for detecting such data. However, recent research gives less consideration to facial animation, which is also important on the forgery attack side: it animates face images with actions provided by driving videos. Our experiments show that the existing datasets are not sufficient to develop reliable detection methods for animation data. In response, we propose a facial animation dataset, called DeepFake MNIST+, which includes 10,000 facial animation videos covering 10 different actions. We also provide a baseline detection method and a comprehensive analysis of the method and dataset. Meanwhile, we notice that the data compression process can affect detection performance, so creating a forgery detection model that can handle data compressed at unknown levels is critical. To enhance the performance of such models, we consider weakly and strongly compressed data as two views of the original data, which should have similar relations with other samples. We propose a novel anti-compression forgery detection framework that maintains closer relations within data under different compression levels. Specifically, the algorithm measures the pairwise similarity within data as the relations, and forces the relations of weakly and strongly compressed data close to each other, thus improving the performance in detecting strongly compressed data. To achieve a better strong-compression data relation guided by the less compressed one, we apply video-level contrastive learning to the weakly compressed data. The experimental results show that the proposed algorithm adapts well to multiple compression levels.
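The relation-consistency idea can be sketched as a loss term: compute the pairwise similarity matrix within a batch for the weakly and strongly compressed views, and pull the latter toward the former. This is a schematic PyTorch rendering under assumed embedding shapes, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def relation_consistency_loss(z_weak: torch.Tensor, z_strong: torch.Tensor) -> torch.Tensor:
    """z_weak, z_strong: (batch, dim) embeddings of the same videos under two compressions."""
    zw = F.normalize(z_weak, dim=1)
    zs = F.normalize(z_strong, dim=1)
    rel_w = zw @ zw.t()                       # pairwise "relations" under weak compression
    rel_s = zs @ zs.t()                       # relations under strong compression
    return F.mse_loss(rel_s, rel_w.detach())  # guide strong-compression relations by weak ones

loss = relation_consistency_loss(torch.randn(8, 128), torch.randn(8, 128))
```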
12

Svatos, Karl Benton West. "Data-driven, Mediterranean, A.I. simulations and the ethical implications of embedded qualitative bias in digital twin deepfake games." PhD thesis, Murdoch University, 2021. https://researchrepository.murdoch.edu.au/id/eprint/63264/.

Abstract:
Put simply, the question that intrigued me and led me to do a PhD was to undertake scientific research that gave me the opportunity to study the underlying ethical and philosophical bases of the research process itself. The thesis takes the reader through real-life studies which combine developing computational power with the handling of large datasets and the ethics of data collection and ownership. This journey resulted in the study and representation of real-life and in-silico or digitalised copies of research data (called digital twins), comparing their physical and in-silico representations from an ethical viewpoint. Digital twins and deepfake imagery (computer simulations based on mathematical principles) may be used to verify the significance of statistical outcomes during artificial simulations via computational processes. Deepfakes are a possible occidental progression, due to increasingly complex environmental variation being used to explain adaptation or an evolutionary process, because variation is often unable to be replicated in experiments, e.g. for political reasons. In a renaissance of informatics 'big-data science' implementing adaptive real-time A.I. via digital twin construction to predict outcomes concerning the simulations' own predictability, digital twins can be used to infer without disturbing system balance. Extensive implementations of this technology with adaptive frequency hopping based on Gaussian 'pseudo-random signalling' now exist. Several model species were selected, including hairy marron, dairy farm cattle, and a unique Australian Tibetan barley. Using environmental data from the Mediterranean environment (environmental growth, habit, and sensory behaviour), digital twins' predictions based on complementary statistical heuristics incorporated genome sequences and phenotype data through embedded kernalisation telematic software, for automated species informatics simulation. 'Murdoch Twins' was created to test whether TCP/IP latency data supporting machine learning programs and visualisation statistics could alter both quantitative and qualitative outcomes (in a simulated game carried out in 2020 in Japan). This thesis makes use of a series of quantitative experiments that were conceived, proposed, carried out, and written up as four separate manuscripts (two results manuscripts published in 2018 and 2019 respectively, one abstract presented in Osaka, Japan in 2018, and one manuscript currently under peer review), with seven supporting appendices (including industry-specific publications, data collection and analyses). The full contributions span two decades, culminating in a move to Murdoch University in 2016. The final thesis product was only possible through the award of a fully funded Murdoch University postgraduate research scholarship, administered by the Western Australian State Agricultural and Biotechnology Centre (SABC) under the tutelage of Professor Michael Jones, and has since generated significant research-related opportunities including a "PhD scholarship in digital connectivity and big-data analytics in agriculture", administered through the newly coined 'agtech group' at Murdoch University. The research experiments were made possible through contributions including smaller financial bursaries (grant monies), in-kind contributions, corporate awards (purchased and/or borrowed equipment, including hired casual staff), and industry contributions.

Some research was also personally funded, or earned through internships, further grant rewards, in-kind volunteering, and/or consulting for Karl Corporation (including in Japan, through 2020, at the Kazusa DNA Research Institute in Chiba Prefecture as a visiting professional informatics scientist). The thesis, "Data-driven, Mediterranean, A.I. simulations and the ethical implications of embedded qualitative bias in digital twin deepfake games", contains an original, standalone, independently researched introduction and methodology, with the sequential and logical presentation of four manuscripts (results chapters), culminating in the general results titled "Murdoch Twins: containerised .NET API backend data implementation for real-time deepfake games", followed by the general discussion. As it relates to the thesis in its entirety, the combined output is the development of a new, computer-based, rapid, real-time data visualisation strategy (and technique) that facilitated the creation of a digital twin called 'Murdoch Twins'. The purpose of Murdoch Twins is to rapidly measure and quantify the digital authenticity of informatics analyses (via construction of a digital twin, including for a confidential Tibetan-Australian barley species, a type of doubled haploid population of almost identical genetic clones), based on complementary environmental, sequencer-based genotyping and phenotype data lakes (for the Mediterranean environment), through the creation of a trust-enabled 5G/LTE, IoT, core-to-core, embedded kernel stack, in order to test whether it may be penetrated remotely (via VPN) with deepfake imagery during a real-time A.I. game simulation (in a real-world scenario delivered through .NET APIs). Implications of deepfake games for current real-world scenarios include real-time medicine, geophysical and A.I. predictions; but as it relates to the manuscripts and the central tenet of the thesis, deepfake games concern the ethical use of deepfake images to securely verify the digital authenticity of phenotype data, as it relates to environmental variation of raw 'trusted' data sources and their use in qualitative descriptions (embedded bias). The four manuscripts demonstrate that a methodological progression was followed concerning the development of the aims and the hypothesis. Tables and figures throughout the thesis are sequential and are not broken up for individual manuscripts. Because of the strategic nature of the thesis, there were significant challenges along the way. To address this, each manuscript has a preamble, including its own declaration statement, that summarises the authors' contributions at that time, as well as a peroratory summation. Additional contributions, as they relate to the generation of each manuscript of the thesis (in terms of the overall aims and hypothesis), are addressed in seven appendices. Contributions reflect the time required, as follows.

1. Manuscript one: an independently researched, student-funded, sole-author, published manuscript (Karl Svatos (KS) contribution 100%):
• Svatos KBW, 2018. Commercial silicate phosphate sequestration and desorption leads to a gradual decline of aquatic systems. Environmental Science and Pollution Research 25, 5386-92. doi: 10.1007/s11356-017-0846-9

2. Manuscript two: an independently researched, partially industry-funded (Dairy Australia Grant; UWA 13344), first-author, published manuscript (KS contribution 95%; UWA staff 5%, including Em. Prof. Abbott (LA) 3%), with the original research description (visualisation stats) in the Appendix 3 preamble:
• Svatos KBW, Abbott LK, 2019. Dairy soil bacterial responses to nitrogen application in simulated Italian ryegrass and white clover pasture. Journal of Dairy Science 102, 9495-504. doi: 10.3168/jds.2018-16107

3. Manuscript three: an independently researched, privately funded, first-author, published conference journal abstract (Osaka, Japan, 2018):
• Svatos KBW, Diepeveen D, Abbott LK, Li C, 2018. Big data GPU/CPU kernalisation pipeline for API based quantitative genetic assessments in field-based drone research (abstract submitted). Journal of Plant Pathology and Microbiology, 9. doi: 10.4172/2157-7471-C2-011
And two subsequent, corresponding, report-style projects with unpublished results (Project 1 and Project 2), including one published strategic white position paper associated with Project 1:
• Project 1, "Rapid downstream glasshouse field trial phenotype assessments for variability minimisation in GPU core processing and telematics data analyses": a student-led, partially industry-funded (GRDC Grant; UMU00049) collaboration between Murdoch University, the Western Australian Department of Primary Industries and Regional Development (DPIRD, formerly Department of Agriculture), UWA, Scientific Aerospace, Karl Corporation, and the Western Crop Genetics Alliance (formerly Western Barley Genetics Alliance and affiliated Australian institutions). (KS contribution 55%; DPIRD staff 20%, including Dr. Diepeveen (DD) 10% and Prof. Li (CL) 5%; Murdoch staff 10%, including DD 5%, Prof. Jones (MJ) 2%, CL 1%, Dr. Murray (DM) 1%, Dr. Hill (CH) 1%; UWA 3%, including LA 2%; Scientific Aerospace 2%, including Mr. Trowbridge RET. (GT); Karl Corporation 10%.)
• Associated white paper: Svatos K, Trowbridge G, 2018. Australian drone technology assisting a significant step in crop tolerance to heat and drought stress. Future Directions International. http://futuredirections.org.au/publication/australian-drone-technology-assisting-significant-step-crop-tolerance-heat-drought-stress/
• Project 2, "A scaleable, private LTE/4G, Boolean GPU networking stack for automated, remote, IoT decision making": a student-led, independently organised, resourced, and partially industry-funded project collaboration between Murdoch University, UWA, DPIRD, Pivotel, Nokia-Bell, Microsoft, Precision Ag, Edith Cowan University (ECU), the Kazusa DNA Research Institute (KDRI), and Karl Corporation (and affiliated partners). (KS contribution 30%; Murdoch staff 25%, including MJ 15%, DM 10%; DPIRD staff 10%, including DD 5%; Pivotel, Nokia-Bell, Microsoft, Precision Ag, ECU, UWA, and KDRI 30%; Karl Corporation 5%.)

4. Manuscript four, "Heuristics enhanced SAAS platform: remote geospatial machine learning of soil profiles from an ancient Mediterranean environment": an industry-led, partially industry-funded, jointly student-industry conceived, run, and managed, first-author (unpublished) data-science research collaboration between iPREP, Murdoch University, ECU, and the industry partner udrew (KS contribution 50%; udrew 50%, including AR (Angela Recaldes) and ZA (Zubair Ahmed) 10%).

Each manuscript in the thesis is complete and acknowledges all authors and contributions. Additional research methods, results, and discussion generated during this research are addressed in the disclaimer at the beginning of the methodology, and in the privacy and confidentiality statement after the preamble of manuscript three. Individual appendices also contain declaration statements about the significance and relevance to the thesis aims and hypothesis, concerning co-authors' contributions respectively (including Karl Corporation). The research presented in this thesis shows that the rapid rise of data-driven 'A.I. big-data science' has an embedded, objective bias that quantitative computation cannot be used to solve in all real-time simulations. Predictions were supported through the creation of binary-tree data islands. Supporting technologies were connected through an embedded pythonic .NET API (AARCH64) and then utilised to create a digital twin to assess deepfake risk factors (concerning data, security, and ownership). The implications are substantial for this type of implementation, due to the ever-expanding collection and use of said data to support qualitative interpretation for action by humans as it relates to A.I. ethics. This process may offer scientists, engineers, land managers, farmers and governments an advantage: knowing how a change (Δ) at any given time (t) might alter an organism's behaviour, based on issued quantitative source-code trust certificates (.NET APIs, in LTE/5G, real-time). However, there are no 'real' solutions in non-binary calculations. Using deepfakes in digital twins to model game outcomes thus resulted in occidental natural latency 'blips'. Trusted, quantitative A.I. source-code program manifests only support purely open-source hypothesis testing.
13

Fontanella, Simone. "Rassegna degli strumenti per la creazione e l'individuazione di Deepfakes." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23156/.

Abstract:
This thesis describes the study of a new technology based on artificial intelligence and machine learning known as Deepfake. Deepfakes were born to manipulate a person's face in a photo or a video through mathematical models known as artificial neural networks. The first to use deepfakes was a Reddit user who wanted to swap actresses' faces in adult videos. The targets of this technology are often actors or celebrities, given how easily, and in what quantity, photos and videos portraying them can be found. Although creating a deepfake is not so simple, in recent years mobile applications such as FaceApp and ZAO have given anyone the possibility of using this technology without possessing the necessary skills. The field in which deepfakes are most used is undoubtedly that of fake news, fuelled by social media users who cannot tell whether a video or an image is authentic or not. To address this problem, researchers are investing in the search for effective detection methods, including through competitions organised by the largest technology companies.
14

Trabelsi, Anis. "Robustesse aux attaques en authentification digitale par apprentissage profond." Electronic Thesis or Diss., Sorbonne université, 2022. http://www.theses.fr/2022SORUS580.

Abstract:
The identity of people on the Internet is becoming a major security issue. Since the Basel agreements, banking institutions have integrated the verification of people's identity, or Know Your Customer (KYC), into their registration processes. With the dematerialization of banks, this procedure has become e-KYC or remote KYC, which works remotely through the user's smartphone. Similarly, remote identity verification has become the standard for enrollment in electronic signature tools. New regulations are emerging to secure this approach; for example, in France, the PVID framework regulates the remote acquisition of identity documents and people's faces under the eIDAS regulation. This is required because a new type of digital crime is emerging: deep identity theft. With new deep learning tools, imposters can change their appearance to look like someone else in real time, and can then perform all the common actions required in a remote registration without being detected by identity verification algorithms. Today, smartphone applications, and tools for a more limited audience, allow imposters to easily transform their appearance in real time. There are even methods to spoof an identity based on a single image of the victim's face. The objective of this thesis is to study the vulnerabilities of remote identity authentication systems against new attacks, in order to propose solutions based on deep learning that make these systems more robust.
15

Wardh, Eric, and Victor Wirstam. "Deepfakes - En risk för samhället?" Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-44904.

Abstract:
A deepfake can be anything from an image to a video or audio clip, manipulated using AI technology. Deepfakes are used legitimately in, for example, the gaming and film industries, but the most common use of deepfakes is to create manipulated images, videos or audio clips in order to spread false information. Another use is to make it appear that people who did not actually take part in a given image, video or audio clip in fact did so. This thesis focuses on examining how deepfakes are used and how they can be used to influence society now and within the next five years. This is done through a literature study and semi-structured interviews. At present, deepfakes are not used to any great extent to try to influence society. What is used instead is a simpler variant of deepfakes called cheapfakes or shallowfakes, which are faster, easier and cheaper to produce. As long as deepfakes remain harder and more expensive to produce than cheapfakes and shallowfakes, deepfakes will not be used to influence society to any greater extent than they are today. As the technology develops, however, the use of deepfakes will also increase.
16

Björklund, Christoffer. "Deepfakes inom social engineering och brottsutredningar." Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-42373.

Abstract:
'Deepfake' is an abbreviation of 'deep learning' and 'fake'. Deepfakes are synthetic audiovisual media that use machine learning to create fake videos, images and/or audio clips. This project is focused on deepfakes within forged videos, where one person's face is swapped with another person's face, a technique usually referred to as 'face swapping'. However, deepfakes go beyond what usual face swaps can achieve. The focus of this project is to investigate how easy it is to forge your own deepfakes with basic technical knowledge. This is achieved through an experiment that measures the results from fourteen interviews. The interviewees watched two different videos, in which each person tried to identify the writer's own deepfaked video clips, which were mixed with legitimate video clips. The experiment shows that it is possible and relatively easy to create convincing deepfakes aimed at social engineering. It is also possible, but harder, to create deepfakes to forge videos within criminal investigations. This report examines the potential forensic techniques and tools that exist and are being developed to identify deepfakes. Furthermore, this report also examines deepfakes made for social engineering. Deepfakes are considered one of the more significant threats in the future and could be used to effectively spread propaganda and misinformation. The results generated from the experiment in this report lead to a proposition from the writer that news outlets and social media platforms could aim for an informative approach towards deepfakes. This approach means informing their consumers about what deepfakes are, how they typically look, and what consumers can do themselves to identify them.
17

Gardner, Angelica. "Stronger Together? An Ensemble of CNNs for Deepfakes Detection." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-97643.

Abstract:
Deepfakes technology is a face swap technique that enables anyone to replace faces in a video with highly realistic results. Despite its usefulness, if used maliciously this technique can have a significant impact on society, for instance through the spreading of fake news or cyberbullying. This makes deepfakes detection a problem of utmost importance. In this paper, I tackle the problem of deepfakes detection by identifying deepfakes forgeries in video sequences. Inspired by the state of the art, I study the ensembling of different machine learning solutions built on convolutional neural networks (CNNs) and use these models as objects for comparison between ensemble and single-model performance. Existing work in the research field of deepfakes detection suggests that the escalated challenges posed by modern deepfake videos make detection increasingly difficult. I evaluate that claim by testing the detection performance of four single CNN models as well as six stacked ensembles on three modern deepfakes datasets. I compare various ensemble approaches for combining single models and for how their predictions should be incorporated into the ensemble output. I found that the best approach for deepfakes detection is to create an ensemble, though the ensemble approach plays a crucial role in the detection performance. The final proposed solution is an ensemble of all available single models, which uses the concept of soft (weighted) voting to combine its base learners' predictions. Results show that this proposed solution significantly improved deepfakes detection performance and substantially outperformed all single models.
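Soft (weighted) voting itself is simple to state: average the base learners' fake probabilities with per-model weights, e.g. derived from validation skill. The weighting scheme below is an assumed illustration, not the exact one tuned in the thesis.

```python
import numpy as np

def soft_vote(probs: np.ndarray, weights) -> np.ndarray:
    """probs: (n_models, n_samples) predicted P(fake); weights: one scalar per model."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                       # normalise so the output is still a probability
    return (w[:, None] * probs).sum(axis=0)

probs = np.array([[0.9, 0.2], [0.7, 0.4], [0.6, 0.1]])  # three CNNs, two videos
print(soft_vote(probs, weights=[0.95, 0.90, 0.85]))     # weights ~ validation accuracy
```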
18

Inevik, Mina. "Skenet bedrar: Fake porn i svensk straffrätt." Thesis, Uppsala universitet, Juridiska institutionen, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-444204.

Abstract:
Since 2017 many internet users have become acquainted with deepfakes, a type of digital imitation that allows you to attach a person's face onto a picture or video of somebody else's body. So-called face swap apps and filters are hardly a novelty anymore, but what distinguishes deepfakes is the fact that they are created using artificial intelligence, which can provide extremely realistic results. This technology is becoming increasingly advanced and easily available, even for those lacking more than average skills in technology and computers. This increases the risk of deepfakes being abused, e.g. for the purpose of creating fake porn where a person's face is inserted into existing sexual content. The purpose of the thesis is to examine whether creating and spreading fake porn constitutes a crime according to Swedish criminal law. This includes investigating whether the protection of personal integrity extends to this sort of sexual material, which portrays a person's face but not their own body, or if fake porn has revealed blind spots in the protection of personal integrity in Swedish criminal law. For this purpose, the crimes of sexual molestation and molestation are examined regarding the creation of fake porn, while defamation and unlawful intrusion of integrity are examined regarding the spread of such content. It is concluded that, in general, creating fake porn is not punishable under criminal law, although spreading it could in many cases likely constitute defamation. However, this is not a desirable way of managing fake porn, since defamation is a crime designed to protect a person's honor or reputation, not a person's personal integrity. Applying the defamation provision to fake porn can therefore produce odd results in some cases. This highlights the need for proactivity from the legislator before fake porn becomes a widespread problem that the criminal justice system cannot handle.
19

Parascandolo, Fiorenzo. "Trading System: a Deep Reinforcement Learning Approach." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022.

Abstract:
The main objective of this work is to show the advantages of Reinforcement Learning-based approaches for developing a trading system. The experimental results showed the great adaptability of the developed models, which obtained very satisfactory econometric performance on five Forex market datasets characterized by different volatilities. The TradingEnv baseline provided by OpenAI was used to simulate the financial market; it was improved by implementing a rendering of the simulation and the commission plan applied by a real Electronic Communication Network. As regards the artificial agent, the main contributions are the use of the Gramian Angular Field transformation to encode the historical financial series as images, and experimental proof that the presence of Locally Connected Layers brings a benefit in terms of performance. The Vanilla Saliency Map was used as an explainability method to tune the window size of the environment's observations. From the explanation of the best-performing model, it can be observed that the most important information is the price changes observed at greater granularity, in accordance with theoretical results proven at the state of the art on historical financial series.
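The Gramian Angular Field encoding mentioned above rescales a price series to [-1, 1], maps each value to an angle, and forms an image from pairwise angular sums. A minimal summation-GAF, written from the standard definition and assuming a non-constant input series, is:

```python
import numpy as np

def gramian_angular_field(series) -> np.ndarray:
    x = np.asarray(series, dtype=float)
    x = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0  # rescale to [-1, 1]
    phi = np.arccos(np.clip(x, -1.0, 1.0))               # polar encoding
    return np.cos(phi[:, None] + phi[None, :])           # GASF image: cos(phi_i + phi_j)

image = gramian_angular_field(np.cumsum(np.random.randn(64)))  # (64, 64) input for a CNN
```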
20

Khichi, Manish. "Deepfake or Real Image Prediction Using MesoNet." Thesis, 2021. http://dspace.dtu.ac.in:8080/jspui/handle/repository/18952.

Abstract:
Advanced developments in Machine Learning, Deep Learning, and Artificial Intelligence (AI) allow people to exchange the faces and voices of other people in videos, so that it looks like they did or said things they never did. These videos and photos are called "deepfakes", and each day they become more sophisticated, which worries legislators. This technology uses machine learning to provide the computer with real data about the image so that it can be falsified. The creators of deepfakes use artificial intelligence and machine learning algorithms to mimic the work and characteristics of real humans. Deepfakes differ from traditional fake media because they are difficult to identify. As the 2020 election approaches, AI-generated deepfakes have entered the news cycle. Deepfakes threaten facial recognition and online content. This hoax can be dangerous, because if used incorrectly, the technique can be abused: fake video, voice and audio clips can cause enormous damage. We use MesoNet to make predictions on image data. We examine four sets of images (correctly identified deepfakes, correctly identified reals, misidentified deepfakes, and misidentified reals) and see whether the human eye can pick up on any insights into the world of deepfakes. We use the Meso-4 model trained on the deepfake and real datasets.
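For reference, a Keras sketch of a Meso-4-style network along the lines of the published MesoNet description is given below; the exact kernel sizes, padding, and dropout rates here are assumptions rather than the thesis's settings.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_meso4(input_shape=(256, 256, 3)):
    inp = keras.Input(shape=input_shape)
    x = inp
    # four conv blocks with progressively larger pooling, as in the Meso-4 design
    for filters, kernel, pool in [(8, 3, 2), (8, 5, 2), (16, 5, 2), (16, 5, 4)]:
        x = layers.Conv2D(filters, kernel, padding="same", activation="relu")(x)
        x = layers.BatchNormalization()(x)
        x = layers.MaxPooling2D(pool)(x)
    x = layers.Flatten()(x)
    x = layers.Dropout(0.5)(x)
    x = layers.Dense(16)(x)
    x = layers.LeakyReLU(0.1)(x)
    x = layers.Dropout(0.5)(x)
    out = layers.Dense(1, activation="sigmoid")(x)  # score near 0 suggests deepfake
    return keras.Model(inp, out)

model = build_meso4()
```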
APA, Harvard, Vancouver, ISO, and other styles
21

Chang, Ching-Tang, and 張景棠. "Detecting Deepfake Videos with CNN and Image Partitioning." Thesis, 2019. http://ndltd.ncl.edu.tw/cgi-bin/gs32/gsweb.cgi/login?o=dnclcdr&s=id=%22107NCHU5394052%22.&searchmode=basic.

Full text
Abstract:
Master's thesis, National Chung Hsing University, Department of Computer Science and Engineering, academic year 107. AI-generated images are becoming increasingly similar to real photographs. When generated images are used in inappropriate contexts, they can damage people's rights and interests, and such doubtful images raise legal problems. The issue of detecting digital forgery has existed for many years, but the fake images produced by advancing technology are ever harder to distinguish. Therefore, this thesis uses deep learning techniques to detect controversial face-manipulation images. We propose to partition each image into blocks and use a CNN to learn the features of each block separately; each block's prediction is then combined by voting in an ensemble model to detect forged images (see the sketch after this abstract). Specifically, we recognize FaceSwap, DeepFakes, and Face2Face manipulations using the dataset provided by FaceForensics++. Nowadays, classifiers require not only high accuracy but also robustness across different datasets. Therefore, we train on one portion of the data and test whether the model remains robust on other data. We also collected digital forgeries generated by different methods from video-sharing platforms to test how well our model generalizes in detecting these forgeries.
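The following is a minimal sketch of the block-partitioning and majority-vote idea described above. The 3x3 grid, the tiny per-block CNN, and the helper names are illustrative assumptions, not the thesis code.

```python
import torch
import torch.nn as nn

GRID = 3  # partition each face image into a GRID x GRID grid of blocks

def split_into_blocks(img: torch.Tensor, grid: int = GRID):
    """img: (C, H, W) -> list of grid*grid block tensors."""
    c, h, w = img.shape
    bh, bw = h // grid, w // grid
    return [img[:, i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            for i in range(grid) for j in range(grid)]

def tiny_block_cnn():
    # Hypothetical per-block classifier emitting a fake-probability.
    return nn.Sequential(
        nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(8, 1), nn.Sigmoid(),
    )

class BlockVotingEnsemble(nn.Module):
    """One CNN per block; the final label is a majority vote."""
    def __init__(self, make_cnn=tiny_block_cnn, n_blocks=GRID * GRID):
        super().__init__()
        self.block_cnns = nn.ModuleList(make_cnn() for _ in range(n_blocks))

    def forward(self, img):
        votes = [cnn(block.unsqueeze(0)).item() > 0.5
                 for cnn, block in zip(self.block_cnns,
                                       split_into_blocks(img))]
        return sum(votes) > len(votes) / 2  # True -> predicted forged

model = BlockVotingEnsemble()
print(model(torch.randn(3, 129, 129)))  # majority vote over 9 blocks
```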
APA, Harvard, Vancouver, ISO, and other styles
22

SONI, ANKIT. "DETECTING DEEPFAKES USING HYBRID CNN-RNN MODEL." Thesis, 2022. http://dspace.dtu.ac.in:8080/jspui/handle/repository/19168.

Full text
Abstract:
We live in a world of digital media, surrounded by content in the form of images and videos, so the originality of that content is very important. In recent times, deep learning-based tools have emerged that can create believable manipulated media known as deepfakes. These realistic fakes can threaten reputation and privacy and can even pose a serious threat to public security. They can be used to create political unrest, to spread fake terrorism propaganda, or to blackmail anyone. As the technology matures, the tampered media being generated are realistic enough to fool the human eye. Hence, we need better algorithms to detect deepfakes efficiently. The proposed system is based on a CNN followed by an RNN. The CNN, SE-ResNeXt-101, extracts feature vectors from the video frames, and these feature vectors are then used to train the RNN, an LSTM model, to classify videos as real or deepfake. We evaluate our method on a dataset assembled from a huge number of videos collected from various distributed sources and demonstrate how a simple architecture can attain competitive results.
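Below is a minimal sketch of such a CNN-to-LSTM pipeline. A torchvision ResNeXt-101 stands in for SE-ResNeXt-101, and the frame count, hidden size, and class head are illustrative assumptions rather than the thesis configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

class FrameFeatureLSTM(nn.Module):
    def __init__(self, feat_dim=2048, hidden=512):
        super().__init__()
        backbone = models.resnext101_32x8d(weights=None)
        backbone.fc = nn.Identity()          # keep 2048-d pooled features
        self.backbone = backbone
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)     # real vs. deepfake logits

    def forward(self, clips):                # clips: (B, T, C, H, W)
        b, t, c, h, w = clips.shape
        # Extract one feature vector per frame, then restore the
        # (batch, time) layout for the recurrent classifier.
        feats = self.backbone(clips.reshape(b * t, c, h, w))
        _, (h_n, _) = self.lstm(feats.reshape(b, t, -1))
        return self.head(h_n[-1])            # classify from last hidden state

# Usage: a batch of 2 clips, 8 frames each, at 224x224.
logits = FrameFeatureLSTM()(torch.randn(2, 8, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 2])
```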
APA, Harvard, Vancouver, ISO, and other styles
23

RASOOL, AALE. "DETECTING DEEPFAKES WITH MULTI-MODEL NEURAL NETWORKS: A TRANSFER LEARNING APPROACH." Thesis, 2023. http://dspace.dtu.ac.in:8080/jspui/handle/repository/19993.

Full text
Abstract:
The prevalence of deepfake technology has led to serious worries about the veracity and dependability of visual media. To reduce the harm caused by malicious use of this technology, it is essential to identify deepfakes. By using the Vision Transformer (ViT) model for classification and the InceptionResNetV2 architecture for feature extraction, this thesis offers a novel approach to deepfake detection. Highly discriminative features are extracted from the input images using the InceptionResNetV2 network, which has been pre-trained on a substantial dataset. The Vision Transformer model then receives these features and uses the self-attention mechanism to capture long-range relationships and classify the images as deepfake or real. We use transfer learning techniques to improve the performance of the deepfake detection system: the InceptionResNetV2 model is fine-tuned on a deepfake-specific dataset, which allows the pre-trained weights to adapt to the task at hand and enables the extraction of meaningful and discriminative deepfake features. The refined features are then fed into the ViT model for classification. Extensive experiments are conducted to evaluate the proposed approach on various deepfake datasets. The results demonstrate the effectiveness of the InceptionResNetV2 and ViT combination, achieving high accuracy and robustness in deepfake detection across different types of manipulation, including face swapping and facial re-enactment. Additionally, transfer learning significantly reduces the training time and computational resources required to train the detection system. The outcomes of this research advance deepfake detection techniques by leveraging state-of-the-art architectures for feature extraction and classification. The fusion of InceptionResNetV2 and ViT, together with transfer learning, offers a powerful and efficient solution for accurate deepfake detection, thereby safeguarding the integrity and trustworthiness of visual media in an era of increasing digital manipulation.
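As a rough illustration of this two-stage design, here is a sketch of a frozen CNN feature extractor feeding a transformer-based classifier. The timm model name "inception_resnet_v2", the single pooled feature token, and the small torch TransformerEncoder (standing in for a full ViT) are all assumptions made for the sake of a self-contained example.

```python
import timm
import torch
import torch.nn as nn

class CnnTransformerDetector(nn.Module):
    def __init__(self, feat_dim=1536, d_model=256, n_heads=4, n_layers=2):
        super().__init__()
        # num_classes=0 makes timm return pooled features, not logits.
        self.extractor = timm.create_model(
            "inception_resnet_v2", pretrained=False, num_classes=0)
        self.proj = nn.Linear(feat_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.cls = nn.Linear(d_model, 2)      # deepfake vs. real

    def forward(self, imgs):                  # imgs: (B, 3, 299, 299)
        # One pooled CNN feature per image becomes a length-1 token
        # sequence for the self-attention encoder (a simplification).
        tokens = self.proj(self.extractor(imgs)).unsqueeze(1)
        return self.cls(self.encoder(tokens)[:, 0])

# Transfer learning: freeze the CNN, train only the transformer head.
model = CnnTransformerDetector()
for p in model.extractor.parameters():
    p.requires_grad = False
print(model(torch.randn(2, 3, 299, 299)).shape)  # torch.Size([2, 2])
```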
APA, Harvard, Vancouver, ISO, and other styles
24

Gustav, Lindström, and Lerbom Ludvig. "AI - ett framtida verktyg för terrorism och organiserad brottslighet? : En framtidsstudie." Thesis, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-44890.

Full text
Abstract:
This paper explores the future of Artificial Intelligence (AI) and how it can be used by organised crime or terrorist organisations. It covers the fundamentals of AI, its history, and how its use is affecting the way police operate. The paper shows how the development rate of AI is increasing and predicts how it will continue to evolve based on different parameters. A study of different types of AI shows the different uses these systems have and their potential misuse in the near future. Using the six pillars approach, a prediction concerning AI and the development of Artificial General Intelligence (AGI) is explored, along with its ramifications for our society. The results show that in a world with AGI, AI-enabled crime as we know it would cease to exist, but up until that point, the use of AI in crime will continue to impact our daily lives and security.
APA, Harvard, Vancouver, ISO, and other styles