Academic literature on the topic 'Facial animation'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Facial animation.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Facial animation"
Shakir, Samia, and Ali Al-Azza. "Facial Modelling and Animation: An Overview of The State-of-The Art." Iraqi Journal for Electrical and Electronic Engineering 18, no. 1 (November 24, 2021): 28–37. http://dx.doi.org/10.37917/ijeee.18.1.4.
Sun, Shuo, and Chunbao Ge. "A New Method of 3D Facial Expression Animation." Journal of Applied Mathematics 2014 (2014): 1–6. http://dx.doi.org/10.1155/2014/706159.
Tseng, Juin-Ling. "An Improved Surface Simplification Method for Facial Expression Animation Based on Homogeneous Coordinate Transformation Matrix and Maximum Shape Operator." Mathematical Problems in Engineering 2016 (2016): 1–14. http://dx.doi.org/10.1155/2016/2370919.
Mat Noor, Noor Adibah Najihah, Norhaida Mohd Suaib, Muhammad Anwar Ahmad, and Ibrahim Ahmad. "Review on 3D Facial Animation Techniques." International Journal of Engineering & Technology 7, no. 4.44 (December 1, 2018): 181. http://dx.doi.org/10.14419/ijet.v7i4.44.26980.
Chechko, D. A., and A. V. Radionova. "The Videocourse “Facial Animation in Blender”." Informatics in School, no. 9 (December 20, 2018): 57–60. http://dx.doi.org/10.32517/2221-1993-2018-17-9-57-60.
Kocoń, Maja, and Zbigniew Emirsajłow. "Facial expression animation overview." IFAC Proceedings Volumes 42, no. 13 (2009): 312–17. http://dx.doi.org/10.3182/20090819-3-pl-3002.00055.
Williams, Lance. "Performance-driven facial animation." ACM SIGGRAPH Computer Graphics 24, no. 4 (September 1990): 235–42. http://dx.doi.org/10.1145/97880.97906.
Poole, M. D. "PR26 Facial Re-Animation." ANZ Journal of Surgery 77, s1 (May 2007): A67. http://dx.doi.org/10.1111/j.1445-2197.2007.04127_25.x.
Parke, Frederic I. "Guest editorial: Facial animation." Journal of Visualization and Computer Animation 2, no. 4 (October 1991): 117. http://dx.doi.org/10.1002/vis.4340020403.
Yong, Jian Hua, and Ping Guang Cheng. "Design and Implementation of 3D Facial Animation Based on MPEG-4." Advanced Materials Research 433–440 (January 2012): 5045–49. http://dx.doi.org/10.4028/www.scientific.net/amr.433-440.5045.
Dissertations / Theses on the topic "Facial animation"
Miller, Kenneth D. (Kenneth Doyle). "A system for advanced facial animation." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/40605.
Includes bibliographical references (leaves 35–36). By Kenneth D. Miller, III. M.Eng.
Kalra, Prem Kumar. "An interactive multimodal facial animation system /." [S.l.] : [s.n.], 1993. http://library.epfl.ch/theses/?nr=1183.
Lin, Alice J. "Three Dimensional Modeling and Animation of Facial Expressions." UKnowledge, 2011. http://uknowledge.uky.edu/gradschool_diss/841.
Sloan, Robin J. S. "Emotional avatars: choreographing emotional facial expression animation." Thesis, Abertay University, 2011. https://rke.abertay.ac.uk/en/studentTheses/2363eb4a-2eba-4f94-979f-77b0d6586e94.
Zhao, Hui. "Expressive facial animation transfer for virtual actors." View abstract or full-text, 2007. http://library.ust.hk/cgi/db/thesis.pl?CSED%202007%20ZHAO.
Pueblo, Stephen J. (Stephen Jerell). "Videorealistic facial animation for speech-based interfaces." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/53179.
Includes bibliographical references (p. 79–81).
This thesis explores the use of computer-generated, videorealistic facial animation (avatars) in speech-based interfaces to understand whether the use of such animations enhances the end user's experience. Research in spoken dialog systems is a robust area that has now permeated everyday life; most notably with spoken telephone dialog systems. Over the past decade, research with videorealistic animations, both photorealistic and non-photorealistic, has reached the point where there is little discernible difference between the mouth movements of videorealistic animations and the mouth movements of actual humans. Because of the minute differences between the two, videorealistic speech animations are an ideal candidate to use in dialog systems. This thesis presents two videorealistic facial animation systems: a web-based system and a real-time system.
By Stephen J. Pueblo. M.Eng.
Barker, Dean. "Computer facial animation for sign language visualization." Thesis, Stellenbosch : Stellenbosch University, 2005. http://hdl.handle.net/10019.1/50300.
ENGLISH ABSTRACT: Sign Language is a fully-fledged natural language possessing its own syntax and grammar; a fact which implies that the problem of machine translation from a spoken source language to Sign Language is at least as difficult as machine translation between two spoken languages. Sign Language, however, is communicated in a modality fundamentally different from all spoken languages. Machine translation to Sign Language is therefore burdened not only by a mapping from one syntax and grammar to another, but also by a non-trivial transformation from one communicational modality to another. With regard to the computer visualization of Sign Language, what is required is a three-dimensional, temporally accurate visualization of signs, including both the manual and non-manual components, which can be viewed from arbitrary perspectives, making accurate understanding and imitation more feasible. Moreover, given that facial expressions and movements represent a fundamental basis for the majority of non-manual signs, any system concerned with the accurate visualization of Sign Language must rely heavily on a facial animation component capable of representing a well-defined set of emotional expressions as well as a set of arbitrary facial movements. This thesis investigates the development of such a computer facial animation system. We address the problem of delivering coordinated, temporally constrained facial animation sequences in an online environment using VRML. Furthermore, we investigate the animation, using a muscle model process, of arbitrary three-dimensional facial models consisting of multiple aligned NURBS surfaces of varying refinement.
Our results showed that this approach is capable of representing and manipulating high fidelity three-dimensional facial models in such a manner that localized distortions of the models result in the recognizable and realistic display of human facial expressions and that these facial expressions can be displayed in a coordinated, synchronous manner.
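The muscle-model deformation the abstract describes, in which localized distortions of the facial surface produce recognizable expressions, is in the spirit of vector-muscle models: each muscle pulls nearby surface vertices toward its attachment point, with influence falling off with distance so the deformation stays local. The following is a minimal illustrative sketch of that idea only, not the thesis's actual implementation; all names and values are hypothetical.

```python
import numpy as np

def apply_linear_muscle(vertices, attachment, insertion, contraction, radius):
    """Pull vertices toward the muscle's attachment point.

    Influence decays linearly to zero at `radius` from the insertion
    point, so the deformation stays localized on the surface.
    """
    out = vertices.astype(float).copy()
    pull_dir = attachment - insertion
    for i, v in enumerate(vertices):
        dist = np.linalg.norm(v - insertion)
        if dist < radius:
            weight = contraction * (1.0 - dist / radius)
            out[i] = v + weight * pull_dir
    return out

# Toy 2D "skin patch": the vertex at the insertion moves most,
# and the vertex outside the influence radius does not move at all.
verts = np.array([[0.0, 0.0], [0.5, 0.0], [2.0, 0.0]])
moved = apply_linear_muscle(
    verts,
    attachment=np.array([0.0, 1.0]),  # where the muscle anchors
    insertion=np.array([0.0, 0.0]),   # where it attaches to the skin
    contraction=0.5,
    radius=1.0,
)
```

Superimposing several such muscles over a dense mesh is what lets a handful of contraction values drive a full expression.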
Ellner, Henrik. "Facial animation parameter extraction using high-dimensional manifolds." Thesis, Linköping University, Department of Electrical Engineering, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-6117.
This thesis presents and examines a method that can potentially be used for extracting parameters from a manifold in a space. A potential application described in the thesis is determining FAP-values, which are used for parameterizing faces and can, for example, be used to compress data when sending video sequences over limited bandwidth.
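The FAP-style parameterization mentioned above works, in spirit, like a linear face model: a neutral mesh plus displacement bases scaled by a short per-frame parameter vector, so only that vector needs to be transmitted rather than every vertex coordinate. A minimal sketch of the idea follows; the shapes, names, and values are hypothetical, not the thesis's actual model.

```python
import numpy as np

# Hypothetical neutral face: 4 vertices in 3D (a real mesh has thousands).
neutral = np.zeros((4, 3))

# Each FAP-like parameter owns a displacement basis over the vertices,
# e.g. parameter 0 raises two "mouth corner" vertices.
basis = np.zeros((2, 4, 3))   # (n_params, n_vertices, xyz)
basis[0, 0, 1] = 1.0          # param 0 moves vertex 0 upward
basis[0, 1, 1] = 1.0          # ... and vertex 1 upward
basis[1, 2, 0] = 1.0          # param 1 moves vertex 2 sideways

def decode_frame(params):
    """Reconstruct a full mesh from a compact parameter vector."""
    return neutral + np.tensordot(params, basis, axes=1)

# The sender transmits only 2 numbers per frame instead of 12 coordinates.
frame = decode_frame(np.array([0.5, 0.2]))
```

The compression win is the ratio of mesh size to parameter count, which is why FAP extraction matters for low-bandwidth video.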
Smith, Andrew Patrick. "Muscle-based facial animation using blendshapes in superposition." Texas A&M University, 2006. http://hdl.handle.net/1969.1/5007.
Alvi, O. "Facial reconstruction and animation in tele-immersive environment." Thesis, University of Salford, 2010. http://usir.salford.ac.uk/26547/.
Books on the topic "Facial animation"
Parke, Frederic I., and Keith Waters. Computer Facial Animation. 2nd ed. Wellesley, Mass.: A K Peters, 2008.
Deng, Zhigang, and Ulrich Neumann, eds. Data-Driven 3D Facial Animation. London: Springer London, 2007. http://dx.doi.org/10.1007/978-1-84628-907-1.
Osipa, Jason. Stop Staring: Facial Modeling and Animation Done Right. 3rd ed. Indianapolis, Ind.: Wiley Pub., 2010.
Osipa, Jason. Stop Staring: Facial Modeling and Animation Done Right. San Francisco: SYBEX, 2003.
Osipa, Jason. Stop Staring: Facial Modeling and Animation Done Right. 2nd ed. Indianapolis, Ind.: Wiley/Sybex, 2007.
Dobbs, Darris, ed. Animating Facial Features and Expression. Rockland, Mass.: Charles River Media, 1999.
Pandzic, Igor S., and Robert Forchheimer, eds. MPEG-4 Facial Animation: The Standard, Implementation and Applications. Chichester: J. Wiley, 2002.
Waters, Keith. The Computer Synthesis of Expressive Three-Dimensional Facial Character Animation. [London]: [Middlesex Polytechnic], 1988.
Lee, Victor Yuencheng. The Construction and Animation of Functional Facial Models from Cylindrical Range/Reflectance Data. Ottawa: National Library of Canada, 1993.
Book chapters on the topic "Facial animation"
Naas, Paul. "Facial Animation." In How To Cheat In Maya 2017: Tools And Techniques For Character Animation, 418–73. Boca Raton: CRC Press, 2018. http://dx.doi.org/10.1201/9780429443138-11.
Patel, Manjula. "Facial Animation." In Models and Techniques in Computer Animation, 157–58. Tokyo: Springer Japan, 1993. http://dx.doi.org/10.1007/978-4-431-66911-1_14.
Anjyo, Ken. "Blendshape Facial Animation." In Handbook of Human Motion, 2145–55. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-14418-4_2.
Anjyo, Ken. "Blendshape Facial Animation." In Handbook of Human Motion, 1–11. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-30808-1_2-1.
Al-Qayedi, Ali, and Adrian F. Clark. "Agent-Based Facial Animation." In Digital Convergence: The Information Revolution, 73–88. London: Springer London, 1999. http://dx.doi.org/10.1007/978-1-4471-0863-4_6.
Limantour, Philippe. "Performance Facial Animation Cloning." In Cyberworlds, 193–206. Tokyo: Springer Japan, 1998. http://dx.doi.org/10.1007/978-4-431-67941-7_12.
Yang, Tzong-Jer, I.-Chen Lin, Cheng-Sheng Hung, Chien-Feng Huang, and Ming Ouhyoung. "Speech Driven Facial Animation." In Eurographics, 99–108. Vienna: Springer Vienna, 1999. http://dx.doi.org/10.1007/978-3-7091-6423-5_10.
Parke, Frederic I. "Control Parameterization for Facial Animation." In Computer Animation ’91, 3–14. Tokyo: Springer Japan, 1991. http://dx.doi.org/10.1007/978-4-431-66890-9_1.
Pelachaud, Catherine, Norman I. Badler, and Mark Steedman. "Linguistic Issues in Facial Animation." In Computer Animation ’91, 15–30. Tokyo: Springer Japan, 1991. http://dx.doi.org/10.1007/978-4-431-66890-9_2.
Patterson, Elizabeth C., Peter C. Litwinowicz, and Ned Greene. "Facial Animation by Spatial Mapping." In Computer Animation ’91, 31–44. Tokyo: Springer Japan, 1991. http://dx.doi.org/10.1007/978-4-431-66890-9_3.
Conference papers on the topic "Facial animation"
Terzopoulos, Demetri, Barbara Mones-Hattal, Beth Hofer, Frederic Parke, Doug Sweetland, and Keith Waters. "Facial animation (panel)." In Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH ’97). New York, New York, USA: ACM Press, 1997. http://dx.doi.org/10.1145/258734.258899.
Zhuang, Wenlin, Jinwei Qi, Peng Zhang, Bang Zhang, and Ping Tan. "Text/Speech-Driven Full-Body Animation." In Thirty-First International Joint Conference on Artificial Intelligence (IJCAI-22). California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/863.
Haber, Jörg, and Demetri Terzopoulos. "Facial modeling and animation." In the conference. New York, New York, USA: ACM Press, 2004. http://dx.doi.org/10.1145/1103900.1103906.
Weise, Thibaut, Sofien Bouaziz, Hao Li, and Mark Pauly. "Kinect-based facial animation." In SIGGRAPH Asia 2011 Emerging Technologies. New York, New York, USA: ACM Press, 2011. http://dx.doi.org/10.1145/2073370.2073371.
Kakumanu, P., R. Gutierrez-Osuna, A. Esposito, R. Bryll, A. Goshtasby, and O. N. Garcia. "Speech driven facial animation." In the 2001 workshop. New York, New York, USA: ACM Press, 2001. http://dx.doi.org/10.1145/971478.971488.
Williams, Lance. "Performance-driven facial animation." In Proceedings of the 17th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH ’90). New York, New York, USA: ACM Press, 1990. http://dx.doi.org/10.1145/97879.97906.
Williams, Lance. "Performance-driven facial animation." In ACM SIGGRAPH 2006 Courses. New York, New York, USA: ACM Press, 2006. http://dx.doi.org/10.1145/1185657.1185856.
Quan, Li, and Haiyi Zhang. "Facial Animation Using CycleGAN." In 2021 International Conference on Electrical, Computer, Communications and Mechatronics Engineering (ICECCME). IEEE, 2021. http://dx.doi.org/10.1109/iceccme52200.2021.9591087.
Arikan, Okan. "Session details: Facial animation." In SIGGRAPH ’11: Special Interest Group on Computer Graphics and Interactive Techniques Conference. New York, NY, USA: ACM, 2011. http://dx.doi.org/10.1145/3244731.
Costa, Paula D. Paro, and José Mario De Martino. "Image-Based Expressive Speech Animation Based on the OCC Model of Emotions." In FAA ’15: Facial Analysis and Animation. New York, NY, USA: ACM, 2015. http://dx.doi.org/10.1145/2813852.2813855.