Table of Contents
Selection of scientific literature on the topic "FAKE VIDEOS"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "FAKE VIDEOS".
Next to every work in the list of references there is an "Add to bibliography" option. Use it, and the bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read its online abstract, provided the relevant parameters are available in the metadata.
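To illustrate what "formatted automatically in the required citation style" means in practice, the following minimal Python sketch renders a simplified journal-article record (modelled on the first entry below) in rough APA-like and MLA-like form. The record fields and function names are assumptions made for this example only; this is not the site's implementation and it ignores many details of the official style guides.

# Simplified citation-formatting sketch (illustrative only; not the site's implementation).
# The record structure and function names are assumptions for this example.

record = {
    "authors": ["Abidin, Muhammad Indra", "Nurtanio, Ingrid", "Achmad, Andani"],
    "year": 2022,
    "title": "Deepfake Detection in Videos Using Long Short-Term Memory and CNN ResNext",
    "journal": "ILKOM Jurnal Ilmiah",
    "volume": 14,
    "issue": 3,
    "pages": "178-185",
    "doi": "10.33096/ilkom.v14i3.1254.178-185",
}

def format_apa(rec):
    # Rough APA-like pattern: Authors (Year). Title. Journal, Volume(Issue), pages. DOI
    authors = ", ".join(rec["authors"][:-1]) + ", & " + rec["authors"][-1]
    return (f"{authors} ({rec['year']}). {rec['title']}. "
            f"{rec['journal']}, {rec['volume']}({rec['issue']}), {rec['pages']}. "
            f"https://doi.org/{rec['doi']}")

def format_mla(rec):
    # Rough MLA-like pattern: First author, et al. "Title." Journal, vol., no., year, pp.
    return (f"{rec['authors'][0]}, et al. \"{rec['title']}.\" {rec['journal']}, "
            f"vol. {rec['volume']}, no. {rec['issue']}, {rec['year']}, pp. {rec['pages']}.")

print(format_apa(record))
print(format_mla(record))

In practice, citation tools rely on the full style specifications (and mature formatting libraries) rather than hand-rolled string templates like the ones above.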
Journal articles on the topic "FAKE VIDEOS"
Abidin, Muhammad Indra, Ingrid Nurtanio, and Andani Achmad. "Deepfake Detection in Videos Using Long Short-Term Memory and CNN ResNext". ILKOM Jurnal Ilmiah 14, no. 3 (19.12.2022): 178–85. http://dx.doi.org/10.33096/ilkom.v14i3.1254.178-185.
López-Gil, Juan-Miguel, Rosa Gil, and Roberto García. "Do Deepfakes Adequately Display Emotions? A Study on Deepfake Facial Emotion Expression". Computational Intelligence and Neuroscience 2022 (18.10.2022): 1–12. http://dx.doi.org/10.1155/2022/1332122.
Arunkumar, P. M., Yalamanchili Sangeetha, P. Vishnu Raja, and S. N. Sangeetha. "Deep Learning for Forgery Face Detection Using Fuzzy Fisher Capsule Dual Graph". Information Technology and Control 51, no. 3 (23.09.2022): 563–74. http://dx.doi.org/10.5755/j01.itc.51.3.31510.
Wang, Shuting (Ada), Min-Seok Pang, and Paul Pavlou. "Seeing Is Believing? How Including a Video in Fake News Influences Users’ Reporting of Fake News to Social Media Platforms". MIS Quarterly 45, no. 3 (01.09.2022): 1323–54. http://dx.doi.org/10.25300/misq/2022/16296.
Deng, Liwei, Hongfei Suo, and Dongjie Li. "Deepfake Video Detection Based on EfficientNet-V2 Network". Computational Intelligence and Neuroscience 2022 (15.04.2022): 1–13. http://dx.doi.org/10.1155/2022/3441549.
Shahar, Hadas, and Hagit Hel-Or. "Fake Video Detection Using Facial Color". Color and Imaging Conference 2020, no. 28 (04.11.2020): 175–80. http://dx.doi.org/10.2352/issn.2169-2629.2020.28.27.
Lin, Yih-Kai, and Hao-Lun Sun. "Few-Shot Training GAN for Face Forgery Classification and Segmentation Based on the Fine-Tune Approach". Electronics 12, no. 6 (16.03.2023): 1417. http://dx.doi.org/10.3390/electronics12061417.
Liang, Xiaoyun, Zhaohong Li, Zhonghao Li, and Zhenzhen Zhang. "Fake Bitrate Detection of HEVC Videos Based on Prediction Process". Symmetry 11, no. 7 (15.07.2019): 918. http://dx.doi.org/10.3390/sym11070918.
Pei, Pengfei, Xianfeng Zhao, Jinchuan Li, Yun Cao, and Xuyuan Lai. "Vision Transformer-Based Video Hashing Retrieval for Tracing the Source of Fake Videos". Security and Communication Networks 2023 (28.06.2023): 1–16. http://dx.doi.org/10.1155/2023/5349392.
Das, Rashmiranjan, Gaurav Negi, and Alan F. Smeaton. "Detecting Deepfake Videos Using Euler Video Magnification". Electronic Imaging 2021, no. 4 (18.01.2021): 272–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.4.mwsf-272.
Dissertations on the topic "FAKE VIDEOS"
Zou, Weiwen. "Face recognition from video". HKBU Institutional Repository, 2012. https://repository.hkbu.edu.hk/etd_ra/1431.
LI, Songyu. "A New Hands-free Face to Face Video Communication Method: Profile based frontal face video reconstruction". Thesis, Umeå universitet, Institutionen för tillämpad fysik och elektronik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-152457.
Liu, Yiran. "Consistent and Accurate Face Tracking and Recognition in Videos". Ohio University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1588598739996101.
Cheng, Xin. "Nonrigid face alignment for unknown subject in video". Thesis, Queensland University of Technology, 2013. https://eprints.qut.edu.au/65338/1/Xin_Cheng_Thesis.pdf.
Jin, Yonghua. "A video human face tracker". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape4/PQDD_0032/MQ62226.pdf.
Arandjelović, Ognjen. "Automatic face recognition from video". Thesis, University of Cambridge, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.613375.
Omizo, Ryan Masaaki. "Facing Vernacular Video". The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1339184415.
Hadid, A. (Abdenour). "Learning and recognizing faces: from still images to video sequences". Doctoral thesis, University of Oulu, 2005. http://urn.fi/urn:isbn:9514277597.
Fernando, Warnakulasuriya Anil Chandana. "Video processing in the compressed domain". Thesis, University of Bristol, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.326724.
Wibowo, Moh Edi. "Towards pose-robust face recognition on video". Thesis, Queensland University of Technology, 2014. https://eprints.qut.edu.au/77836/1/Moh%20Edi_Wibowo_Thesis.pdf.
Der volle Inhalt der QuelleBücher zum Thema "FAKE VIDEOS"
Mezaris, Vasileios, Lyndon Nixon, Symeon Papadopoulos, and Denis Teyssou, eds. Video Verification in the Fake News Era. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-26752-0.
National Film Board of Canada, ed. Face to face video guide: Video resources for race relations training and education. Montréal: National Film Board of Canada, 1993.
Ji, Qiang, Thomas B. Moeslund, Gang Hua, and Kamal Nasrollahi, eds. Face and Facial Expression Recognition from Real World Videos. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-13737-7.
Bai, Xiang, Yi Fang, Yangqing Jia, Meina Kan, Shiguang Shan, Chunhua Shen, Jingdong Wang, et al., eds. Video Analytics. Face and Facial Expression Recognition. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-12177-8.
Screening the face. Houndmills, Basingstoke, Hampshire: Palgrave Macmillan, 2012.
Prager, Alex. Face in the crowd. Washington, DC: Corcoran Gallery of Art, 2013.
Nasrollahi, Kamal, Cosimo Distante, Gang Hua, Andrea Cavallaro, Thomas B. Moeslund, Sebastiano Battiato, and Qiang Ji, eds. Video Analytics. Face and Facial Expression Recognition and Audience Measurement. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-56687-0.
Noll, Katherine. Scholastic's Pokémon hall of fame. New York: Scholastic, 2004.
Levy, Frederick. 15 Minutes of Fame. New York: Penguin Group USA, Inc., 2008.
Kurit︠s︡yn, Vi︠a︡cheslav, Naili︠a︡ Allakhverdieva, Marat Gelʹman, and Iulii︠a︡ Sorokina. Lit︠s︡o nevesty: Sovremennoe iskusstvo Kazakhstana = Face of the bride: contemporary art of Kazakhstan. Permʹ: Muzeĭ sovremennogo iskusstva PERMM, 2012.
Den vollen Inhalt der Quelle findenBuchteile zum Thema "FAKE VIDEOS"
Roy, Ritaban, Indu Joshi, Abhijit Das, and Antitza Dantcheva. "3D CNN Architectures and Attention Mechanisms for Deepfake Detection". In Handbook of Digital Face Manipulation and Detection, 213–34. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-87664-7_10.
Hedge, Amrita Shivanand, M. N. Vinutha, Kona Supriya, S. Nagasundari, and Prasad B. Honnavalli. "CLH: Approach for Detecting Deep Fake Videos". In Communications in Computer and Information Science, 539–51. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-8059-5_33.
Hernandez-Ortega, Javier, Ruben Tolosana, Julian Fierrez, and Aythami Morales. "DeepFakes Detection Based on Heart Rate Estimation: Single- and Multi-frame". In Handbook of Digital Face Manipulation and Detection, 255–73. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-87664-7_12.
Markatopoulou, Foteini, Markos Zampoglou, Evlampios Apostolidis, Symeon Papadopoulos, Vasileios Mezaris, Ioannis Patras, and Ioannis Kompatsiaris. "Finding Semantically Related Videos in Closed Collections". In Video Verification in the Fake News Era, 127–59. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-26752-0_5.
Kordopatis-Zilos, Giorgos, Symeon Papadopoulos, Ioannis Patras, and Ioannis Kompatsiaris. "Finding Near-Duplicate Videos in Large-Scale Collections". In Video Verification in the Fake News Era, 91–126. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-26752-0_4.
Singh, Aadya, Abey Alex George, Pankaj Gupta, and Lakshmi Gadhikar. "ShallowFake-Detection of Fake Videos Using Deep Learning". In Conference Proceedings of ICDLAIR2019, 170–78. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-67187-7_19.
Papadopoulou, Olga, Markos Zampoglou, Symeon Papadopoulos, and Ioannis Kompatsiaris. "Verification of Web Videos Through Analysis of Their Online Context". In Video Verification in the Fake News Era, 191–221. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-26752-0_7.
Long, Chengjiang, Arslan Basharat, and Anthony Hoogs. "Video Frame Deletion and Duplication". In Multimedia Forensics, 333–62. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-7621-5_13.
Boccignone, Giuseppe, Sathya Bursic, Vittorio Cuculo, Alessandro D’Amelio, Giuliano Grossi, Raffaella Lanzarotti, and Sabrina Patania. "DeepFakes Have No Heart: A Simple rPPG-Based Method to Reveal Fake Videos". In Image Analysis and Processing – ICIAP 2022, 186–95. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-06430-2_16.
Bao, Heng, Lirui Deng, Jiazhi Guan, Liang Zhang, and Xunxun Chen. "Improving Deepfake Video Detection with Comprehensive Self-consistency Learning". In Communications in Computer and Information Science, 151–61. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-8285-9_11.
Der volle Inhalt der QuelleKonferenzberichte zum Thema "FAKE VIDEOS"
Shang, Jiacheng, and Jie Wu. "Protecting Real-time Video Chat against Fake Facial Videos Generated by Face Reenactment". In 2020 IEEE 40th International Conference on Distributed Computing Systems (ICDCS). IEEE, 2020. http://dx.doi.org/10.1109/icdcs47774.2020.00082.
Liu, Zhenguang, Sifan Wu, Chejian Xu, Xiang Wang, Lei Zhu, Shuang Wu, and Fuli Feng. "Copy Motion From One to Another: Fake Motion Video Generation". In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/171.
Zhang, Daichi, Chenyu Li, Fanzhao Lin, Dan Zeng, and Shiming Ge. "Detecting Deepfake Videos with Temporal Dropout 3DCNN". In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/178.
Celebi, Naciye, Qingzhong Liu, and Muhammed Karatoprak. "A Survey of Deep Fake Detection for Trial Courts". In 9th International Conference on Artificial Intelligence and Applications (AIAPP 2022). Academy and Industry Research Collaboration Center (AIRCC), 2022. http://dx.doi.org/10.5121/csit.2022.120919.
Agarwal, Shruti, Hany Farid, Ohad Fried, and Maneesh Agrawala. "Detecting Deep-Fake Videos from Phoneme-Viseme Mismatches". In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2020. http://dx.doi.org/10.1109/cvprw50498.2020.00338.
Agarwal, Shruti, Hany Farid, Tarek El-Gaaly, and Ser-Nam Lim. "Detecting Deep-Fake Videos from Appearance and Behavior". In 2020 IEEE International Workshop on Information Forensics and Security (WIFS). IEEE, 2020. http://dx.doi.org/10.1109/wifs49906.2020.9360904.
Agarwal, Shruti, and Hany Farid. "Detecting Deep-Fake Videos from Aural and Oral Dynamics". In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2021. http://dx.doi.org/10.1109/cvprw53098.2021.00109.
Gerstner, Candice R., and Hany Farid. "Detecting Real-Time Deep-Fake Videos Using Active Illumination". In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2022. http://dx.doi.org/10.1109/cvprw56347.2022.00015.
Chauhan, Ruby, Renu Popli, and Isha Kansal. "A Comprehensive Review on Fake Images/Videos Detection Techniques". In 2022 10th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO). IEEE, 2022. http://dx.doi.org/10.1109/icrito56286.2022.9964871.
Mira, Fahad. "Deep Learning Technique for Recognition of Deep Fake Videos". In 2023 IEEE IAS Global Conference on Emerging Technologies (GlobConET). IEEE, 2023. http://dx.doi.org/10.1109/globconet56651.2023.10150143.
Der volle Inhalt der QuelleBerichte der Organisationen zum Thema "FAKE VIDEOS"
Grother, Patrick J., George W. Quinn, and Mei Lee Ngan. Face in video evaluation (FIVE) face recognition of non-cooperative subjects. Gaithersburg, MD: National Institute of Standards and Technology, March 2017. http://dx.doi.org/10.6028/nist.ir.8173.
Chen, Yi-Chen, Vishal M. Patel, Sumit Shekhar, Rama Chellappa, and P. Jonathon Phillips. Video-based face recognition via joint sparse representation. Gaithersburg, MD: National Institute of Standards and Technology, 2013. http://dx.doi.org/10.6028/nist.ir.7906.
Lee, Yooyoung, P. Jonathon Phillips, James J. Filliben, J. Ross Beveridge, and Hao Zhang. Identifying face quality and factor measures for video. National Institute of Standards and Technology, May 2014. http://dx.doi.org/10.6028/nist.ir.8004.
Тарасова, Олена Юріївна, and Ірина Сергіївна Мінтій. Web application for facial wrinkle recognition. Кривий Ріг, КДПУ, 2022. http://dx.doi.org/10.31812/123456789/7012.
Drury, J., S. Arias, T. Au-Yeung, D. Barr, L. Bell, T. Butler, H. Carter et al. Public behaviour in response to perceived hostile threats: an evidence base and guide for practitioners and policymakers. University of Sussex, 2023. http://dx.doi.org/10.20919/vjvt7448.
Neural correlates of face familiarity in institutionalised children and links to attachment disordered behaviour. ACAMH, March 2023. http://dx.doi.org/10.13056/acamh.23409.
Cybervictimization in adolescence and its association with subsequent suicidal ideation/attempt beyond face‐to‐face victimization: a longitudinal population‐based study – video Q & A. ACAMH, September 2020. http://dx.doi.org/10.13056/acamh.13319.