Academic literature on the topic 'Facial animation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Facial animation.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Facial animation"

1

Shakir, Samia, and Ali Al-Azza. "Facial Modelling and Animation: An Overview of The State-of-The Art." Iraqi Journal for Electrical and Electronic Engineering 18, no. 1 (November 24, 2021): 28–37. http://dx.doi.org/10.37917/ijeee.18.1.4.

Full text
Abstract:
Animating the human face presents interesting challenges because of its familiarity, as the face is the feature used to recognize individuals. This paper reviewed the approaches used in facial modeling and animation and described their strengths and weaknesses. Realistic facial animation of computer graphic models of human faces can be hard to achieve because of the many details that must be approximated in producing realistic facial expressions. Many methods have been researched to create ever more accurate animations that can efficiently represent human faces. We described the techniques that have been utilized to produce realistic facial animation. In this survey, we roughly categorized the facial modeling and animation approaches into the following classes: blendshape or shape interpolation, parameterization, facial action coding system-based approaches, Moving Picture Experts Group-4 (MPEG-4) facial animation, physics-based muscle modeling, performance-driven facial animation, and visual speech animation.
APA, Harvard, Vancouver, ISO, and other styles
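The blendshape (shape interpolation) approach that this survey lists first can be illustrated with a short sketch: each expression is stored as per-vertex offsets from a neutral face and mixed by weights. All mesh data and weights below are invented for illustration, not taken from the paper.

```python
import numpy as np

# Hypothetical neutral face mesh: 4 vertices in 3D (real meshes have thousands).
neutral = np.zeros((4, 3))

# Two hypothetical blendshape targets, stored as per-vertex deltas from neutral.
smile_delta = np.tile([0.0, 0.2, 0.0], (4, 1))
brow_delta = np.tile([0.0, 0.0, 0.1], (4, 1))

def blend(neutral, deltas, weights):
    """Linear blendshape model: neutral + sum_i w_i * delta_i."""
    out = neutral.copy()
    for delta, weight in zip(deltas, weights):
        out += weight * delta
    return out

# A half-strength smile combined with a quarter-strength brow raise.
face = blend(neutral, [smile_delta, brow_delta], [0.5, 0.25])
```

The parameterized and muscle-based approaches listed in the same survey replace these raw weights with higher-level controls.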
2

Sun, Shuo, and Chunbao Ge. "A New Method of 3D Facial Expression Animation." Journal of Applied Mathematics 2014 (2014): 1–6. http://dx.doi.org/10.1155/2014/706159.

Full text
Abstract:
Animating expressive facial animation is a very challenging topic within the graphics community. In this paper, we introduce a novel ERI (expression ratio image) driving framework based on SVR and MPEG-4 for automatic 3D facial expression animation. Using the method of support vector regression (SVR), the framework can learn and forecast the regression relationship between the facial animation parameters (FAPs) and the parameters of the expression ratio image. Firstly, we build a 3D face animation system driven by FAPs. Secondly, using the method of principal component analysis (PCA), we generate the parameter sets of the eigen-ERI space, which will rebuild a reasonable expression ratio image. Then we learn a model with the support vector regression mapping, and facial animation parameters can be synthesized quickly from the parameters of the eigen-ERI. Finally, we implement our 3D face animation system driven by the resulting FAPs, and it works effectively.
APA, Harvard, Vancouver, ISO, and other styles
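The eigen-ERI construction in this abstract is, at its core, a principal component analysis of image-derived feature vectors. A minimal numpy sketch of that PCA step, using random stand-in data rather than real expression ratio images:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in data: 50 flattened "expression ratio images" of 100 pixels each.
X = rng.normal(size=(50, 100))

# PCA: center the data, then take the top-k right singular vectors.
mean = X.mean(axis=0)
Xc = X - mean
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 5
basis = Vt[:k]          # eigen-ERI basis (k principal directions)
coords = Xc @ basis.T   # low-dimensional parameters per sample

# Rebuild an approximate ERI from its k parameters.
reconstruction = coords @ basis + mean
```

In the paper's pipeline, the SVR would then be trained to map these low-dimensional `coords` to FAP values; that regression step is omitted here.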
3

Tseng, Juin-Ling. "An Improved Surface Simplification Method for Facial Expression Animation Based on Homogeneous Coordinate Transformation Matrix and Maximum Shape Operator." Mathematical Problems in Engineering 2016 (2016): 1–14. http://dx.doi.org/10.1155/2016/2370919.

Full text
Abstract:
Facial animation is one of the most popular 3D animation topics researched in recent years. However, when using facial animation, a 3D facial animation model has to be stored. This 3D facial animation model requires many triangles to accurately describe and demonstrate facial expression animation because the face often presents a number of different expressions. Consequently, the costs associated with facial animation have increased rapidly. In an effort to reduce storage costs, researchers have sought to simplify 3D animation models using techniques such as Deformation Sensitive Decimation and Feature Edge Quadric. The studies conducted have examined the problems in the homogeneity of the local coordinate system between different expression models and in the retainment of simplified model characteristics. This paper proposes a method that applies Homogeneous Coordinate Transformation Matrix to solve the problem of homogeneity of the local coordinate system and Maximum Shape Operator to detect shape changes in facial animation so as to properly preserve the features of facial expressions. Further, root mean square error and perceived quality error are used to compare the errors generated by different simplification methods in experiments. Experimental results show that, compared with Deformation Sensitive Decimation and Feature Edge Quadric, our method can not only reduce the errors caused by simplification of facial animation, but also retain more facial features.
APA, Harvard, Vancouver, ISO, and other styles
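The homogeneous coordinate transformation matrix that this paper applies to unify local coordinate systems is the standard 4x4 transform of computer graphics. A generic sketch (the scale and translation values are invented, not taken from the paper):

```python
import numpy as np

def make_transform(translation, scale):
    """Build a 4x4 homogeneous matrix combining a uniform scale and a translation."""
    T = np.eye(4)
    T[:3, :3] *= scale
    T[:3, 3] = translation
    return T

def apply_transform(T, vertices):
    """Apply a 4x4 homogeneous transform to an (n, 3) array of mesh vertices."""
    n = len(vertices)
    homogeneous = np.hstack([vertices, np.ones((n, 1))])  # (n, 4)
    transformed = homogeneous @ T.T
    return transformed[:, :3] / transformed[:, 3:4]       # back to 3D

verts = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
T = make_transform(translation=[0.0, 0.0, 2.0], scale=2.0)
moved = apply_transform(T, verts)
```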
4

Mat Noor, Noor Adibah Najihah, Norhaida Mohd Suaib, Muhammad Anwar Ahmad, and Ibrahim Ahmad. "Review on 3D Facial Animation Techniques." International Journal of Engineering & Technology 7, no. 4.44 (December 1, 2018): 181. http://dx.doi.org/10.14419/ijet.v7i4.44.26980.

Full text
Abstract:
Generating facial animation has always been a challenge in the graphical visualization area. Numerous efforts have been carried out in order to achieve high realism in facial animation. This paper surveys techniques applied in facial animation, targeting realistic facial animation. We discuss the facial modeling techniques from different viewpoints: related geometric-based manipulation (which can be further categorized into interpolation, parameterization, muscle-based, and pseudo-muscle-based models) and facial animation techniques involving speech-driven, image-based, and data-captured approaches. The paper summarizes and describes the related theories and the strengths and weaknesses of each technique.
APA, Harvard, Vancouver, ISO, and other styles
5

Chechko, D. A., and A. V. Radionova. "THE VIDEOCOURSE “FACIAL ANIMATION IN BLENDER”." Informatics in school, no. 9 (December 20, 2018): 57–60. http://dx.doi.org/10.32517/2221-1993-2018-17-9-57-60.

Full text
Abstract:
One of the most difficult areas of 3D computer graphics is facial animation. To create it, you need knowledge of facial muscle anatomy, methods and techniques of facial animation, and skills in 3D modeling software. The article describes facial animation tutorials in Blender. The tutorials introduce two methods of facial animation: using shape keys and motion tracking. The tutorials can be used at school, but require basic Blender knowledge.
APA, Harvard, Vancouver, ISO, and other styles
6

Kocoń, Maja, and Zbigniew Emirsajłow. "Facial expression animation overview." IFAC Proceedings Volumes 42, no. 13 (2009): 312–17. http://dx.doi.org/10.3182/20090819-3-pl-3002.00055.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Williams, Lance. "Performance-driven facial animation." ACM SIGGRAPH Computer Graphics 24, no. 4 (September 1990): 235–42. http://dx.doi.org/10.1145/97880.97906.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Poole, M. D. "PR26 FACIAL RE-ANIMATION." ANZ Journal of Surgery 77, s1 (May 2007): A67. http://dx.doi.org/10.1111/j.1445-2197.2007.04127_25.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Parke, Frederic I. "Guest editorial: Facial animation." Journal of Visualization and Computer Animation 2, no. 4 (October 1991): 117. http://dx.doi.org/10.1002/vis.4340020403.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Yong, Jian Hua, and Ping Guang Cheng. "Design and Implementation of 3D Facial Animation Based on MPEG-4." Advanced Materials Research 433-440 (January 2012): 5045–49. http://dx.doi.org/10.4028/www.scientific.net/amr.433-440.5045.

Full text
Abstract:
Through an in-depth study of the MPEG-4 face model definition standard and animation-driving principles, and drawing on existing facial animation generation technology, this paper presents a design for a 3D facial animation system. The system can accept driver information to generate realistic facial expression animation and simulate real face actions. In its implementation it also uses masked FAP frames and an FAP intermediate-frame interpolation method to reduce the amount of animation-driving data and thereby improve the continuity of the facial animation.
APA, Harvard, Vancouver, ISO, and other styles
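The FAP intermediate-frame idea in this abstract amounts to keyframing: transmit FAP values only at key frames, mark active channels with a mask, and interpolate the rest at playback. A hedged sketch with made-up FAP values (the exact MPEG-4 masking rules are more involved than this):

```python
import numpy as np

def interpolate_fap(frame_a, frame_b, mask, t):
    """Linearly interpolate between two FAP keyframes.

    mask marks which FAP channels are active; inactive channels keep
    frame_a's value, mirroring the masked-FAP-frame idea (details assumed).
    """
    frame_a = np.asarray(frame_a, dtype=float)
    frame_b = np.asarray(frame_b, dtype=float)
    out = frame_a + t * (frame_b - frame_a)
    return np.where(np.asarray(mask, dtype=bool), out, frame_a)

# Two invented keyframes for four FAP channels; only the first two are active.
key0 = [0.0, 10.0, 5.0, 1.0]
key1 = [4.0, 20.0, 9.0, 3.0]
mask = [1, 1, 0, 0]
mid = interpolate_fap(key0, key1, mask, 0.5)
```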
More sources

Dissertations / Theses on the topic "Facial animation"

1

Miller, Kenneth D. (Kenneth Doyle). "A system for advanced facial animation." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/40605.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1996.
Includes bibliographical references (leaves 35-36).
by Kenneth D. Miller, III.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
2

Kalra, Prem Kumar. "An interactive multimodal facial animation system /." [S.l.] : [s.n.], 1993. http://library.epfl.ch/theses/?nr=1183.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Lin, Alice J. "THREE DIMENSIONAL MODELING AND ANIMATION OF FACIAL EXPRESSIONS." UKnowledge, 2011. http://uknowledge.uky.edu/gradschool_diss/841.

Full text
Abstract:
Facial expression and animation are important aspects of the 3D environment featuring human characters. These animations are frequently used in many kinds of applications and there have been many efforts to increase the realism. Three aspects are still stimulating active research: the detailed subtle facial expressions, the process of rigging a face, and the transfer of an expression from one person to another. This dissertation focuses on the above three aspects. A system for freely designing and creating detailed, dynamic, and animated facial expressions is developed. The presented pattern functions produce detailed and animated facial expressions. The system produces realistic results with fast performance, and allows users to directly manipulate it and see immediate results. Two unique methods for generating real-time, vivid, and animated tears have been developed and implemented. One method is for generating a teardrop that continually changes its shape as the tear drips down the face. The other is for generating a shedding tear, which is a kind of tear that seamlessly connects with the skin as it flows along the surface of the face, but remains an individual object. The methods both broaden CG and increase the realism of facial expressions. A new method to automatically set the bones on facial/head models to speed up the rigging process of a human face is also developed. To accomplish this, vertices that describe the face/head as well as relationships between each part of the face/head are grouped. The average distance between pairs of vertices is used to place the head bones. To set the bones in the face with multi-density, the mean value of the vertices in a group is measured. The time saved with this method is significant. A novel method to produce realistic expressions and animations by transferring an existing expression to a new facial model is developed. 
The approach is to transform the source model into the target model, which then has the same topology as the source model. The displacement vectors are calculated. Each vertex in the source model is mapped to the target model. The spatial relationships of each mapped vertex are constrained.
APA, Harvard, Vancouver, ISO, and other styles
4

Sloan, Robin J. S. "Emotional avatars : choreographing emotional facial expression animation." Thesis, Abertay University, 2011. https://rke.abertay.ac.uk/en/studentTheses/2363eb4a-2eba-4f94-979f-77b0d6586e94.

Full text
Abstract:
As a universal element of human nature, the experience, expression, and perception of emotions permeate our daily lives. Many emotions are thought to be basic and common to all humanity, irrespective of social or cultural background. Of these emotions, the corresponding facial expressions of a select few are known to be truly universal, in that they can be identified by most observers without the need for training. Facial expressions of emotion are subsequently used as a method of communication, whether through close face-to-face contact, or the use of emoticons online and in mobile texting. Facial expressions are fundamental to acting for stage and screen, and to animation for film and computer games. Expressions of emotion have been the subject of intense experimentation in psychology and computer science research, both in terms of their naturalistic appearance and the virtual replication of facial movements. From this work much is known about expression universality, anatomy, psychology, and synthesis. Beyond the realm of scientific research, animation practitioners have scrutinised facial expressions and developed an artistic understanding of movement and performance. However, despite the ubiquitous quality of facial expressions in life and research, our understanding of how to produce synthetic, dynamic imitations of emotional expressions which are perceptually valid remains somewhat limited. The research covered in this thesis sought to unite an artistic understanding of expression animation with scientific approaches to facial expression assessment. Acting as both an animation practitioner and as a scientific researcher, the author set out to investigate emotional facial expression dynamics, with the particular aim of identifying spatio-temporal configurations of animated expressions that not only satisfied artistic judgement, but which also stood up to empirical assessment. These configurations became known as emotional expression choreographies. 
The final work presented in this thesis covers the performative, practice-led research into emotional expression choreography, the results of empirical experimentation (where choreographed animations were assessed by observers), and the findings of qualitative studies (which painted a more detailed picture of the potential context of choreographed expressions). The holistic evaluation of expression animation from these three epistemological perspectives indicated that emotional expressions can indeed be choreographed in order to create refined performances which have empirically measurable effects on observers, and which may be contextualised by the phenomenological interpretations of both student animators and general audiences.
APA, Harvard, Vancouver, ISO, and other styles
5

Zhao, Hui. "Expressive facial animation transfer for virtual actors /." View abstract or full-text, 2007. http://library.ust.hk/cgi/db/thesis.pl?CSED%202007%20ZHAO.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Pueblo, Stephen J. (Stephen Jerell). "Videorealistic facial animation for speech-based interfaces." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/53179.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009.
Includes bibliographical references (p. 79-81).
This thesis explores the use of computer-generated, videorealistic facial animation (avatars) in speech-based interfaces to understand whether the use of such animations enhances the end user's experience. Research in spoken dialog systems is a robust area that has now permeated everyday life; most notably with spoken telephone dialog systems. Over the past decade, research with videorealistic animations, both photorealistic and non-photorealistic, has reached the point where there is little discernible difference between the mouth movements of videorealistic animations and the mouth movements of actual humans. Because of the minute differences between the two, videorealistic speech animations are an ideal candidate to use in dialog systems. This thesis presents two videorealistic facial animation systems: a web-based system and a real-time system.
by Stephen J. Pueblo.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
7

Barker, Dean. "Computer facial animation for sign language visualization." Thesis, Stellenbosch : Stellenbosch University, 2005. http://hdl.handle.net/10019.1/50300.

Full text
Abstract:
Thesis (MSc)--University of Stellenbosch, 2005.
ENGLISH ABSTRACT: Sign Language is a fully-fledged natural language possessing its own syntax and grammar; a fact which implies that the problem of machine translation from a spoken source language to Sign Language is at least as difficult as machine translation between two spoken languages. Sign Language, however, is communicated in a modality fundamentally different from all spoken languages. Machine translation to Sign Language is therefore burdened not only by a mapping from one syntax and grammar to another, but also, by a non-trivial transformation from one communicational modality to another. With regards to the computer visualization of Sign Language; what is required is a three dimensional, temporally accurate, visualization of signs including both the manual and nonmanual components which can be viewed from arbitrary perspectives making accurate understanding and imitation more feasible. Moreover, given that facial expressions and movements represent a fundamental basis for the majority of non-manual signs, any system concerned with the accurate visualization of Sign Language must rely heavily on a facial animation component capable of representing a well-defined set of emotional expressions as well as a set of arbitrary facial movements. This thesis investigates the development of such a computer facial animation system. We address the problem of delivering coordinated, temporally constrained, facial animation sequences in an online environment using VRML. Furthermore, we investigate the animation, using a muscle model process, of arbitrary three-dimensional facial models consisting of multiple aligned NURBS surfaces of varying refinement. 
Our results showed that this approach is capable of representing and manipulating high fidelity three-dimensional facial models in such a manner that localized distortions of the models result in the recognizable and realistic display of human facial expressions and that these facial expressions can be displayed in a coordinated, synchronous manner.
APA, Harvard, Vancouver, ISO, and other styles
8

Ellner, Henrik. "Facial animation parameter extraction using high-dimensional manifolds." Thesis, Linköping University, Department of Electrical Engineering, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-6117.

Full text
Abstract:
This thesis presents and examines a method that can potentially be used for extracting parameters from a manifold in a space. In the thesis the method is presented, and a potential application is described. The application is determining FAP-values. FAP-values are used for parameterizing faces, which can e.g. be used to compress data when sending video sequences over limited bandwidth.
APA, Harvard, Vancouver, ISO, and other styles
9

Smith, Andrew Patrick. "Muscle-based facial animation using blendshapes in superposition." Texas A&M University, 2006. http://hdl.handle.net/1969.1/5007.

Full text
Abstract:
The blendshape is an effective tool in computer facial animation, enabling representation of muscle actions. Limitations exist, however, in the level of realism attainable under conventional use of blendshapes as non-intersecting deformations. Using the principle of superposition, it is possible to create a facial model with overlapping blendshapes and achieve more realistic performance. When blendshapes overlap, the region of intersection is in superposition and usually exhibits undesired surface interference. In such cases we use a corrective blendshape to remove the interference automatically. The result is an animatable facial model implemented in Maya which represents the effects of muscle action superposition. Performance created with our model of a known human subject is compared to 3D scan reference data and video reference data of that person. Test animation is compared to video reference footage. The test animation seems to mimic the effects of actual muscle action superposition accurately.
APA, Harvard, Vancouver, ISO, and other styles
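The corrective-blendshape fix this thesis describes is commonly modeled as an extra delta weighted by the product of the overlapping shapes' weights, so it activates only when both shapes are engaged. A generic sketch with invented shape data, not the thesis's Maya implementation:

```python
import numpy as np

# One shared vertex where two hypothetical muscle shapes overlap.
neutral = np.array([0.0, 0.0, 0.0])
delta_a = np.array([0.3, 0.0, 0.0])    # e.g. a cheek raiser
delta_b = np.array([0.0, 0.3, 0.0])    # e.g. a lip stretcher
delta_fix = np.array([0.0, 0.0, -0.1])  # corrective shape for the overlap

def superpose(wa, wb):
    """Superpose two blendshapes plus a corrective term that is active
    only when both shapes are engaged (weight product wa * wb)."""
    return neutral + wa * delta_a + wb * delta_b + wa * wb * delta_fix

alone = superpose(1.0, 0.0)     # single shape: the corrective has no effect
combined = superpose(1.0, 1.0)  # overlap: the corrective removes interference
```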
10

Alvi, O. "Facial reconstruction and animation in tele-immersive environment." Thesis, University of Salford, 2010. http://usir.salford.ac.uk/26547/.

Full text
Abstract:
Over the last decade, research in Human Computer Interaction has focused on the development of interfaces that leverage the users' pre-existing skills and expectations from the real world, rather than requiring them to adapt to the constraints of technology driven design. In the context of remote collaboration or communication interfaces, the ultimate goal has been to develop interfaces that will allow remote participants to interact with each other in a human sense, as if they were co-located or in a face-to-face meeting. Research in social psychology has shown that the face is an important channel in non-verbal communication and real world interactions. Non-verbal cues that come from the face are the basis for building trust and professional intimacy and are critical for collaboration, negotiation, persuasion and communication. This research investigated the challenges of bringing non-verbal cues conveyed by the face into a communication interface. To meet these challenges, the proposed system allowed participants to convey the most distinctive nonverbal cues by using three different modes; point cloud, dynamic texture mapping and geometric deformation. A human factor evaluation was undertaken to find out how realistically these non-verbal cues could be expressed by the personalized avatar of the participant.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Facial animation"

1

Waters, Keith, ed. Computer facial animation. 2nd ed. Wellesley, Mass: A K Peters, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Waters, Keith, ed. Computer facial animation. Wellesley, Mass: A K Peters, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Deng, Zhigang, and Ulrich Neumann, eds. Data-Driven 3D Facial Animation. London: Springer London, 2007. http://dx.doi.org/10.1007/978-1-84628-907-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Stop staring: Facial modeling and animation done right. 3rd ed. Indianapolis, Ind: Wiley Pub., 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Stop staring: Facial modeling and animation done right. San Francisco: SYBEX, 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Stop staring: Facial modeling and animation done right. 2nd ed. Indianapolis, Ind: Wiley/Sybex, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Dobbs, Darris, ed. Animating facial features and expression. Rockland, Mass: Charles River Media, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Pandzic, Igor S., and Robert Forchheimer, eds. MPEG-4 facial animation: The standard, implementation and applications. Chichester: J. Wiley, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Waters, Keith. The computer synthesis of expressive three-dimensional facial character animation. [London]: [Middlesex Polytechnic], 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Lee, Victor Yuencheng. The construction and animation of functional facial models from cylindrical range/reflectance data. Ottawa: National Library of Canada, 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Facial animation"

1

Naas, Paul. "Facial Animation." In How To Cheat In Maya 2017: Tools And Techniques For Character Animation, 418–73. Boca Raton: CRC Press, 2018. http://dx.doi.org/10.1201/9780429443138-11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Patel, Manjula. "Facial Animation." In Models and Techniques in Computer Animation, 157–58. Tokyo: Springer Japan, 1993. http://dx.doi.org/10.1007/978-4-431-66911-1_14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Anjyo, Ken. "Blendshape Facial Animation." In Handbook of Human Motion, 2145–55. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-14418-4_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Anjyo, Ken. "Blendshape Facial Animation." In Handbook of Human Motion, 1–11. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-30808-1_2-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Al-Qayedi, Ali, and Adrian F. Clark. "Agent-Based Facial Animation." In Digital Convergence: The Information Revolution, 73–88. London: Springer London, 1999. http://dx.doi.org/10.1007/978-1-4471-0863-4_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Limantour, Philippe. "Performance Facial Animation Cloning." In Cyberworlds, 193–206. Tokyo: Springer Japan, 1998. http://dx.doi.org/10.1007/978-4-431-67941-7_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Yang, Tzong-Jer, I.-Chen Lin, Cheng-Sheng Hung, Chien-Feng Huang, and Ming Ouhyoung. "Speech Driven Facial Animation." In Eurographics, 99–108. Vienna: Springer Vienna, 1999. http://dx.doi.org/10.1007/978-3-7091-6423-5_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Parke, Frederic I. "Control Parameterization for Facial Animation." In Computer Animation ’91, 3–14. Tokyo: Springer Japan, 1991. http://dx.doi.org/10.1007/978-4-431-66890-9_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Pelachaud, Catherine, Norman I. Badler, and Mark Steedman. "Linguistic Issues in Facial Animation." In Computer Animation ’91, 15–30. Tokyo: Springer Japan, 1991. http://dx.doi.org/10.1007/978-4-431-66890-9_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Patterson, Elizabeth C., Peter C. Litwinowicz, and Ned Greene. "Facial Animation by Spatial Mapping." In Computer Animation ’91, 31–44. Tokyo: Springer Japan, 1991. http://dx.doi.org/10.1007/978-4-431-66890-9_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Facial animation"

1

Terzopoulos, Demetri, Barbara Mones-Hattal, Beth Hofer, Frederic Parke, Doug Sweetland, and Keith Waters. "Facial animation (panel)." In the 24th annual conference. New York, New York, USA: ACM Press, 1997. http://dx.doi.org/10.1145/258734.258899.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Zhuang, Wenlin, Jinwei Qi, Peng Zhang, Bang Zhang, and Ping Tan. "Text/Speech-Driven Full-Body Animation." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/863.

Full text
Abstract:
Due to the increasing demand in films and games, synthesizing 3D avatar animation has attracted much attention recently. In this work, we present a production-ready text/speech-driven full-body animation synthesis system. Given the text and corresponding speech, our system synthesizes face and body animations simultaneously, which are then skinned and rendered to obtain a video stream output. We adopt a learning-based approach for synthesizing facial animation and a graph-based approach to animate the body, which generates high-quality avatar animation efficiently and robustly. Our results demonstrate the generated avatar animations are realistic, diverse and highly text/speech-correlated.
APA, Harvard, Vancouver, ISO, and other styles
3

Haber, Jörg, and Demetri Terzopoulos. "Facial modeling and animation." In the conference. New York, New York, USA: ACM Press, 2004. http://dx.doi.org/10.1145/1103900.1103906.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Weise, Thibaut, Sofien Bouaziz, Hao Li, and Mark Pauly. "Kinect-based facial animation." In SIGGRAPH Asia 2011 Emerging Technologies. New York, New York, USA: ACM Press, 2011. http://dx.doi.org/10.1145/2073370.2073371.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Kakumanu, P., R. Gutierrez-Osuna, A. Esposito, R. Bryll, A. Goshtasby, and O. N. Garcia. "Speech driven facial animation." In the 2001 workshop. New York, New York, USA: ACM Press, 2001. http://dx.doi.org/10.1145/971478.971488.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Williams, Lance. "Performance-driven facial animation." In the 17th annual conference. New York, New York, USA: ACM Press, 1990. http://dx.doi.org/10.1145/97879.97906.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Williams, Lance. "Performance-driven facial animation." In ACM SIGGRAPH 2006 Courses. New York, New York, USA: ACM Press, 2006. http://dx.doi.org/10.1145/1185657.1185856.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Quan, Li, and Haiyi Zhang. "Facial Animation Using CycleGAN." In 2021 International Conference on Electrical, Computer, Communications and Mechatronics Engineering (ICECCME). IEEE, 2021. http://dx.doi.org/10.1109/iceccme52200.2021.9591087.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Arikan, Okan. "Session details: Facial animation." In SIGGRAPH '11: Special Interest Group on Computer Graphics and Interactive Techniques Conference. New York, NY, USA: ACM, 2011. http://dx.doi.org/10.1145/3244731.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Costa, Paula D. Paro, and José Mario De Martino. "Image-Based Expressive Speech Animation Based on the OCC Model of Emotions." In FAA '15: Facial Analysis and Animation. New York, NY, USA: ACM, 2015. http://dx.doi.org/10.1145/2813852.2813855.

Full text
APA, Harvard, Vancouver, ISO, and other styles