Ready bibliography on the topic "Visual Digital Facial Markers"
Create accurate references in APA, MLA, Chicago, Harvard, and many other styles
Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Visual Digital Facial Markers".
An "Add to bibliography" button is available next to every work in the bibliography. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf file and read the abstract of the work online, if the corresponding parameters are available in the metadata.
Journal articles on the topic "Visual Digital Facial Markers"
Liu, Jia, Xianjie Zhou and Yue Sun. "14 ANALYSIS OF THE EFFECTIVENESS OF DIGITAL EMOTION RECOGNITION DISPLAY DESIGN IN EARLY INTERVENTION FOR SCHIZOPHRENIA". Schizophrenia Bulletin 51, Supplement_1 (February 18, 2025): S8. https://doi.org/10.1093/schbul/sbaf007.014.
Mai, Hang-Nga, and Du-Hyeong Lee. "Effects of Artificial Extraoral Markers on Accuracy of Three-Dimensional Dentofacial Image Integration: Smartphone Face Scan versus Stereophotogrammetry". Journal of Personalized Medicine 12, no. 3 (March 18, 2022): 490. http://dx.doi.org/10.3390/jpm12030490.
Conley, Quincy. "Attracting Visual Attention in a Digital Age". International Journal of Cyber Behavior, Psychology and Learning 14, no. 1 (November 10, 2024): 1–24. http://dx.doi.org/10.4018/ijcbpl.359336.
Leone, Massimo. "Digital Cosmetics". Chinese Semiotic Studies 16, no. 4 (November 25, 2020): 551–80. http://dx.doi.org/10.1515/css-2020-0030.
Kristanto, Verry Noval, Imam Riadi and Yudi Prayudi. "Analisa Deteksi dan Pengenalan Wajah pada Citra dengan Permasalahan Visual". JISKA (Jurnal Informatika Sunan Kalijaga) 8, no. 1 (January 30, 2023): 78–89. http://dx.doi.org/10.14421/jiska.2023.8.1.78-89.
Bhumika M. N., Amit Chauhan, Pavana M. S., Sinchan Ullas Nayak, Sujana S., Sujith P., Vedashree D. and Fr Jobi Xavier. "Role of Genetic Markers in Deformation of Lip Prints: A Review". UTTAR PRADESH JOURNAL OF ZOOLOGY 44, no. 21 (October 14, 2023): 334–40. http://dx.doi.org/10.56557/upjoz/2023/v44i213704.
Hansen, Mark B. N. "Affect as Medium, or the 'Digital-Facial-Image'". Journal of Visual Culture 2, no. 2 (August 2003): 205–28. http://dx.doi.org/10.1177/14704129030022004.
Ravelli, Louise J., and Theo Van Leeuwen. "Modality in the digital age". Visual Communication 17, no. 3 (April 13, 2018): 277–97. http://dx.doi.org/10.1177/1470357218764436.
Knific Košir, Aja, and Helena Gabrijelčič Tomc. "Visual effects and their importance in the field of visual media creation". Journal of Graphic Engineering and Design 13, no. 2 (June 2022): 5–13. http://dx.doi.org/10.24867/jged-2022-2-005.
Nagtode, Priti. "Research Paper on Transformative Innovations in Identity Verification and Recognition". INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 05 (May 31, 2024): 1–5. http://dx.doi.org/10.55041/ijsrem35159.
Doctoral dissertations on the topic "Visual Digital Facial Markers"
Filali Razzouki, Anas. "Deep learning-based video face-based digital markers for early detection and analysis of Parkinson disease". Electronic Thesis or Diss., Institut polytechnique de Paris, 2025. http://www.theses.fr/2025IPPAS002.
This thesis aims to develop robust digital biomarkers for early detection of Parkinson's disease (PD) by analyzing facial videos to identify changes associated with hypomimia. In this context, we introduce new contributions to the state of the art: one based on shallow machine learning and the other on deep learning.

The first method employs machine learning models that use manually extracted facial features, particularly derivatives of facial action units (AUs). These models incorporate interpretability mechanisms that explain their decision-making process for stakeholders, highlighting the most distinctive facial features for PD. We examine the influence of biological sex on these digital biomarkers, compare them against neuroimaging data and clinical scores, and use them to predict PD severity.

The second method leverages deep learning to automatically extract features from raw facial videos and optical flow, using foundation models based on Video Vision Transformers. To address the limited training data, we propose advanced adaptive transfer learning techniques, utilizing foundation models trained on large-scale video classification datasets. Additionally, we integrate interpretability mechanisms to clarify the relationship between automatically extracted features and manually extracted facial AUs, enhancing the comprehensibility of the model's decisions.

Finally, our generated facial features are derived from both cross-sectional and longitudinal data, which provides a significant advantage over existing work. We use these recordings to analyze the progression of hypomimia over time with these digital markers, and its correlation with the progression of clinical scores.

Combining these two approaches yields a classification AUC (Area Under the Curve) of over 90%, demonstrating the efficacy of machine learning and deep learning models in detecting hypomimia in early-stage PD patients through facial videos. This research could enable continuous monitoring of hypomimia outside hospital settings via telemedicine.
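The evaluation metric named in the abstract above, the classification AUC, measures how well per-subject scores separate PD patients from controls. As a minimal illustrative sketch (not the thesis code), it can be computed directly from the Mann-Whitney U statistic; the `roc_auc` helper and the hypomimia scores below are invented for illustration.

```python
def roc_auc(labels, scores):
    """ROC AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive (e.g., PD patient) receives a higher score
    than a randomly chosen negative (control), ties counted as 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical per-subject hypomimia scores (e.g., derived from reduced
# facial action-unit activity); 1 = PD patient, 0 = healthy control.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.6, 0.7, 0.3, 0.2]
print(round(roc_auc(labels, scores), 3))  # prints 0.889
```

An AUC of 1.0 would mean every patient scores above every control; the "over 90%" figure reported in the abstract indicates near-complete separation of early-stage PD patients from controls.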
Lucey, Simon. "Audio-visual speech processing". Thesis, Queensland University of Technology, 2002. https://eprints.qut.edu.au/36172/7/SimonLuceyPhDThesis.pdf.
Book chapters on the topic "Visual Digital Facial Markers"
Majumdar, Puspita, Akshay Agarwal, Mayank Vatsa and Richa Singh. "Facial Retouching and Alteration Detection". In Handbook of Digital Face Manipulation and Detection, 367–87. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-87664-7_17.
Nilsson, Oscar, Tom Sparrow, Andrew D. Holland and Andrew S. Wilson. "The Face of Stonehenge: 3D Surface Scanning, 3D Printing and Facial Reconstruction of the Winterbourne Stoke Cranium". In Visual Heritage: Digital Approaches in Heritage Science, 449–70. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-77028-0_22.
Li, Yuezun, Pu Sun, Honggang Qi and Siwei Lyu. "Toward the Creation and Obstruction of DeepFakes". In Handbook of Digital Face Manipulation and Detection, 71–96. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-87664-7_4.
Isakowitsch, Clara. "How Augmented Reality Beauty Filters Can Affect Self-perception". In Communications in Computer and Information Science, 239–50. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-26438-2_19.
Li, Zhimin, Fan Li, Ching-hung Lee and Su Han. "Catching the Wanderer: Temporal and Visual Analysis of Mind Wandering in Digital Learning". In Advances in Transdisciplinary Engineering. IOS Press, 2024. https://doi.org/10.3233/atde240924.
"Facial Capture and Animation in Visual Effects". In Digital Representations of the Real World, 331–42. A K Peters/CRC Press, 2015. http://dx.doi.org/10.1201/b18154-32.
Dohen, Marion, Hélène Loevenbruck and Harold Hill. "Recognizing Prosody from the Lips". In Visual Speech Recognition, 416–38. IGI Global, 2009. http://dx.doi.org/10.4018/978-1-60566-186-5.ch014.
Burlakova, Iryna. "Digital Empathy: Challenges and Opportunities in the World of Virtual Communication". In Digital Skills in a Digital Society: Requirements and Challenges, 257–80. Scientific Center of Innovative Research, 2024. https://doi.org/10.36690/dsds-257-280.
Madhusudan, D., and Prudhvi Raj Budumuru. "FACE RECOGNITION WITH VOICE APPLICATION". In Artificial Intelligence and Emerging Technologies, 62–72. Iterative International Publishers, Selfypage Developers Pvt Ltd, 2024. http://dx.doi.org/10.58532/nbennurch306.
Dodiya, Kiranbhai R., Kapil Kumar, Akash Thakar, Grishma Pithiya, Krimisha Mungra and Piyush Topiya. "Unleashing the Power of AI". In Advances in Finance, Accounting, and Economics, 403–26. IGI Global, 2024. https://doi.org/10.4018/979-8-3693-8507-4.ch022.
Pełny tekst źródłaStreszczenia konferencji na temat "Visual Digital Facial Markers"
Andrus, Curtis, Junghyun Ahn, Michele Alessi, Abdallah Dib, Philippe Gosselin, Cédric Thébault, Louis Chevallier and Marco Romeo. "FaceLab: Scalable Facial Performance Capture for Visual Effects". In DigiPro '20: The Digital Production Symposium. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3403736.3403938.
Xu, Liming, Andrew P. French, Dave Towey and Steve Benford. "Recognizing the Presence of Hidden Visual Markers in Digital Images". New York, New York, USA: ACM Press, 2017. http://dx.doi.org/10.1145/3126686.3126761.
Ishiuchi, Junko, Misako Ando, Sakiho Kai, Chiaki Ujihira, Hiroki Murase and Takao Furukawa. "Emotion-reacting fashion design: intelligent garment and accessory recognizing facial expressions". In 9th International Conference on Kansei Engineering and Emotion Research (KEER2022). Kansei Engineering and Emotion Research (KEER), 2022. http://dx.doi.org/10.5821/conference-9788419184849.29.
Jain, Pranav, and Conrad Tucker. "Mobile Based Real-Time Occlusion Between Real and Digital Objects in Augmented Reality". In ASME 2019 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/detc2019-98440.
Schwenk, Rebecca, and Shana Smith. "Augmented Reality for the Positioning of Fasteners and Adhesives in Sheet Metal Joining Processes in the Automotive Industry". In ASME 2023 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2023. http://dx.doi.org/10.1115/detc2023-116849.
Silvério, Gabriel André, Pedro Arthur Possan, Mateus Pinto Marchetti, Isabela Louise Weber, Vera Cristina Terra, Karen Luiza Ramos Socher, Nancy Watanabe and Carlos Cesar Conrado Caggiano. "Multiple cranial couple syndrome secondary to neurosyphilis: case report". In XIV Congresso Paulista de Neurologia. Zeppelini Editorial e Comunicação, 2023. http://dx.doi.org/10.5327/1516-3180.141s1.548.
Nevoso, Isabella, Niccolò Casiddu, Annapaola Vacanti, Claudia Porfirione, Isabel Leggiero and Francesco Burlando. "HCD methodologies and simulation for visual rehabilitator's education in oMERO project". In 9th International Conference on Human Interaction and Emerging Technologies - Artificial Intelligence and Future Applications. AHFE International, 2023. http://dx.doi.org/10.54941/ahfe1002923.
Georgiou, Evangelos, Jian S. Dai and Michael Luck. "The KCLBOT: A Double Compass Self-Localizing Maneuverable Mobile Robot". In ASME 2011 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2011. http://dx.doi.org/10.1115/detc2011-47753.
Adnan, Nor Hafizah, Helmi Norman and Norazah Mohd Nordin. "Augmented Reality-based Learning using iPads for Children with Autism". In Tenth Pan-Commonwealth Forum on Open Learning. Commonwealth of Learning, 2022. http://dx.doi.org/10.56059/pcf10.8622.
Wang, Ping, Rong Chen and Jieling Xiao. "Image Detection System and its Application for Dynamic Wheel/Rail Contact in High Speed Railway". In 2013 Joint Rail Conference. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/jrc2013-2516.
Pełny tekst źródłaRaporty organizacyjne na temat "Visual Digital Facial Markers"
Makhachashvili, Rusudan K., Svetlana I. Kovpik, Anna O. Bakhtina and Ekaterina O. Shmeltser. Technology of presentation of literature on the Emoji Maker platform: pedagogical function of graphic mimesis. [n.p.], July 2020. http://dx.doi.org/10.31812/123456789/3864.