Journal articles on the topic 'Facial animation'

Consult the top 50 journal articles for your research on the topic 'Facial animation.'

1

Shakir, Samia, and Ali Al-Azza. "Facial Modelling and Animation: An Overview of The State-of-The Art." Iraqi Journal for Electrical and Electronic Engineering 18, no. 1 (November 24, 2021): 28–37. http://dx.doi.org/10.37917/ijeee.18.1.4.

Abstract:
Animating the human face presents interesting challenges because of its familiarity: the face is the feature we use to recognize individuals. This paper reviews the approaches used in facial modeling and animation and describes their strengths and weaknesses. Realistic animation of computer graphics models of human faces is hard to achieve because of the many details that must be approximated to produce convincing facial expressions. Many methods have been researched to create ever more accurate animations that can efficiently represent human faces. We describe the techniques that have been used to produce realistic facial animation. In this survey, we roughly categorize facial modeling and animation approaches into the following classes: blendshape or shape interpolation, parameterization, facial action coding system-based approaches, MPEG-4 facial animation, physics-based muscle modeling, performance-driven facial animation, and visual speech animation.
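As a concrete point of reference for the first category listed above, the sketch below shows what blendshape (shape interpolation) animation amounts to in code. It is a minimal illustration with an invented toy mesh, offsets, and weights, not an implementation from the survey.

```python
import numpy as np

def blend(neutral, deltas, weights):
    """Combine a neutral face mesh with weighted expression offsets.

    neutral : (V, 3) vertex positions of the rest face.
    deltas  : (K, V, 3) per-vertex offsets, one set per expression target.
    weights : (K,) blend weights, typically in [0, 1].
    """
    # Each frame is the neutral shape plus a weighted sum of expression offsets.
    return neutral + np.tensordot(weights, deltas, axes=1)

# Toy example: a 3-vertex "mesh" with two expression targets.
rng = np.random.default_rng(0)
neutral = rng.normal(size=(3, 3))
deltas = rng.normal(scale=0.01, size=(2, 3, 3))
frame = blend(neutral, deltas, np.array([0.7, 0.2]))
print(frame.shape)  # (3, 3): one interpolated pose of the toy mesh
```

The simplicity of this weighted sum is why shape interpolation is fast and artist-friendly, and also why its expressiveness is bounded by the span of the target shapes.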
2

Sun, Shuo, and Chunbao Ge. "A New Method of 3D Facial Expression Animation." Journal of Applied Mathematics 2014 (2014): 1–6. http://dx.doi.org/10.1155/2014/706159.

Abstract:
Creating expressive facial animation is a very challenging topic within the graphics community. In this paper, we introduce a novel ERI (expression ratio image) driven framework based on SVR and MPEG-4 for automatic 3D facial expression animation. Using support vector regression (SVR), the framework can learn and predict the regression relationship between the facial animation parameters (FAPs) and the parameters of the expression ratio image. Firstly, we build a 3D face animation system driven by FAPs. Secondly, using principal component analysis (PCA), we generate the parameter sets of the eigen-ERI space, which can rebuild a reasonable expression ratio image. We then learn a model with the support vector regression mapping, so that facial animation parameters can be synthesized quickly from the eigen-ERI parameters. Finally, we implement our 3D face animation system driven by the resulting FAPs, and it works effectively.
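The overall pipeline described in this abstract (PCA over expression ratio images, then a learned regression to MPEG-4 FAPs) can be sketched roughly as follows. The data, dimensions, and the scikit-learn SVR setup are assumptions for illustration only, not the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)
eri_images = rng.normal(size=(200, 64 * 64))  # flattened ERIs (synthetic stand-in)
fap_targets = rng.normal(size=(200, 68))      # FAP vectors (synthetic stand-in)

# Project the expression ratio images onto an "eigen-ERI" space.
pca = PCA(n_components=20).fit(eri_images)
eri_coeffs = pca.transform(eri_images)

# Learn a multi-output SVR mapping from eigen-ERI coefficients to FAPs.
model = MultiOutputRegressor(SVR(kernel="rbf", C=1.0)).fit(eri_coeffs, fap_targets)
predicted_faps = model.predict(pca.transform(eri_images[:1]))
print(predicted_faps.shape)  # (1, 68)
```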
3

Tseng, Juin-Ling. "An Improved Surface Simplification Method for Facial Expression Animation Based on Homogeneous Coordinate Transformation Matrix and Maximum Shape Operator." Mathematical Problems in Engineering 2016 (2016): 1–14. http://dx.doi.org/10.1155/2016/2370919.

Abstract:
Facial animation is one of the most popular 3D animation topics researched in recent years. However, when using facial animation, a 3D facial animation model has to be stored. This 3D facial animation model requires many triangles to accurately describe and demonstrate facial expression animation because the face often presents a number of different expressions. Consequently, the costs associated with facial animation have increased rapidly. In an effort to reduce storage costs, researchers have sought to simplify 3D animation models using techniques such as Deformation Sensitive Decimation and Feature Edge Quadric. The studies conducted have examined the problems in the homogeneity of the local coordinate system between different expression models and in the retention of the simplified models' characteristics. This paper proposes a method that applies a Homogeneous Coordinate Transformation Matrix to solve the problem of homogeneity of the local coordinate system and a Maximum Shape Operator to detect shape changes in facial animation so as to properly preserve the features of facial expressions. Further, root mean square error and perceived quality error are used to compare the errors generated by different simplification methods in experiments. Experimental results show that, compared with Deformation Sensitive Decimation and Feature Edge Quadric, our method can not only reduce the errors caused by simplification of facial animation but also retain more facial features.
4

Adibah Najihah Mat Noor, Noor, Norhaida Mohd Suaib, Muhammad Anwar Ahmad, and Ibrahim Ahmad. "Review on 3D Facial Animation Techniques." International Journal of Engineering & Technology 7, no. 4.44 (December 1, 2018): 181. http://dx.doi.org/10.14419/ijet.v7i4.44.26980.

Abstract:
Generating facial animation has always been a challenge in the graphics and visualization field. Numerous efforts have been carried out in order to achieve high realism in facial animation. This paper surveys techniques applied to produce realistic facial animation. We discuss facial modeling techniques from different viewpoints: geometric-based manipulation (which can be further categorized into interpolation, parameterization, muscle-based, and pseudo-muscle-based models) and facial animation techniques involving speech-driven, image-based, and data-captured approaches. The paper summarizes and describes the related theories and the strengths and weaknesses of each technique.
5

Chechko, D. A., and A. V. Radionova. "THE VIDEOCOURSE “FACIAL ANIMATION IN BLENDER”." Informatics in school, no. 9 (December 20, 2018): 57–60. http://dx.doi.org/10.32517/2221-1993-2018-17-9-57-60.

Abstract:
One of the most difficult areas of 3D computer graphics is facial animation. To create it, you need knowledge of facial muscle anatomy, methods and techniques of facial animation, and skills in 3D modeling software. The article describes facial animation tutorials in Blender. The tutorials introduce two methods of facial animation: shape keys and motion tracking. The tutorials can be used at school but require basic Blender knowledge.
6

Kocoń, Maja, and Zbigniew Emirsajłow. "Facial expression animation overview." IFAC Proceedings Volumes 42, no. 13 (2009): 312–17. http://dx.doi.org/10.3182/20090819-3-pl-3002.00055.

7

Williams, Lance. "Performance-driven facial animation." ACM SIGGRAPH Computer Graphics 24, no. 4 (September 1990): 235–42. http://dx.doi.org/10.1145/97880.97906.

8

Poole, M. D. "PR26 FACIAL RE-ANIMATION." ANZ Journal of Surgery 77, s1 (May 2007): A67. http://dx.doi.org/10.1111/j.1445-2197.2007.04127_25.x.

9

Parke, Frederic I. "Guest editorial: Facial animation." Journal of Visualization and Computer Animation 2, no. 4 (October 1991): 117. http://dx.doi.org/10.1002/vis.4340020403.

10

Yong, Jian Hua, and Ping Guang Cheng. "Design and Implementation of 3D Facial Animation Based on MPEG-4." Advanced Materials Research 433-440 (January 2012): 5045–49. http://dx.doi.org/10.4028/www.scientific.net/amr.433-440.5045.

Abstract:
Based on an in-depth study of the MPEG-4 face model definition standard and its animation-driving principles, and drawing on existing facial animation generation technology, this paper presents the design of a 3D facial animation system. The system accepts driving information to generate realistic facial expression animation and simulate real face actions. The implementation also uses masked FAP frames and the calculation and insertion of intermediate FAP frames to reduce the amount of animation-driving data and improve the smoothness of the facial animation.
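The intermediate-frame idea mentioned above reduces, at its simplest, to interpolating between FAP keyframe vectors so that only keyframes need to be transmitted or stored. The sketch below is purely illustrative; the FAP dimensionality and values are made up.

```python
import numpy as np

def interpolate_faps(fap_a, fap_b, steps):
    """Linearly interpolate intermediate frames between two FAP keyframes."""
    return [(1.0 - t) * fap_a + t * fap_b for t in np.linspace(0.0, 1.0, steps)]

key_neutral = np.zeros(68)    # hypothetical 68-dimensional FAP keyframe
key_smile = np.full(68, 0.3)  # hypothetical target keyframe
frames = interpolate_faps(key_neutral, key_smile, steps=10)
print(len(frames), frames[5][:3])  # 10 in-between frames; peek at one of them
```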
11

Cubas, Carlos, and Antonio Carlos Sementille. "A modular framework for performance-based facial animation." Journal on Interactive Systems 9, no. 2 (August 29, 2018): 1. http://dx.doi.org/10.5753/jis.2018.697.

Abstract:
In recent decades, interest in capturing human face movements and identifying expressions for the purpose of generating realistic facial animations has increased in both the scientific community and the entertainment industry. We present a modular framework for testing algorithms used in performance-based facial animation. The framework includes the modules used in pipelines found in the literature: a module for creating datasets of blendshapes (facial models whose vectors represent individual facial expressions), a processing module for identifying blendshape weights, and, finally, a retargeting module that creates a virtual face based on the blendshapes. The framework uses an RGB-D camera, Intel's RealSense F200.
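The weight-identification step described here is commonly posed as a constrained least-squares fit of blendshape weights to the captured face. The sketch below uses synthetic data and SciPy's non-negative least squares as a stand-in; it is not the framework's actual algorithm.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
neutral = rng.normal(size=(500, 3))                            # neutral mesh
targets = neutral + rng.normal(scale=0.05, size=(10, 500, 3))  # 10 blendshape targets

# Synthesize a "captured" face from known weights, then try to recover them.
true_w = np.array([0.5, 0.2, 0.0, 0.0, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0])
captured = neutral + np.tensordot(true_w, targets - neutral, axes=1)

# Solve captured ≈ neutral + B @ w with w >= 0 (non-negative least squares).
B = (targets - neutral).reshape(10, -1).T          # (1500, 10) basis matrix
w, residual = nnls(B, (captured - neutral).ravel())
print(np.round(w, 3))  # should be close to true_w
```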
12

Seifi, Hasti, Steve DiPaola, and Ali Arya. "Expressive Animated Character Sequences Using Knowledge-Based Painterly Rendering." International Journal of Computer Games Technology 2011 (2011): 1–7. http://dx.doi.org/10.1155/2011/164949.

Abstract:
We propose a technique to enhance emotional expressiveness in games and animations. Artists have used colors and painting techniques to convey emotions in their paintings for many years. Moreover, researchers have found that colors and line properties affect users' emotions. We propose using painterly rendering for character sequences in games and animations with a knowledge-based approach. This technique is especially useful for parametric facial sequences. We introduce two parametric authoring tools for animation and painterly rendering and a method to integrate them into a knowledge-based painterly rendering system. Furthermore, we present the results of a preliminary study on using this technique for facial expressions in still images. The results of the study show the effect of different color palettes on the intensity perceived for an emotion by users. The proposed technique can provide the animator with a depiction tool to enhance the emotional content of a character sequence in games and animations.
13

Trotman, Carroll-Ann, Julian J. Faraway, Kirsten T. Silvester, Geoffrey M. Greenlee, and Lysle E. Johnston. "Sensitivity of a Method for the Analysis of Facial Mobility. I. Vector of Displacement." Cleft Palate-Craniofacial Journal 35, no. 2 (March 1998): 132–41. http://dx.doi.org/10.1597/1545-1569_1998_035_0132_soamft_2.3.co_2.

Abstract:
Objective (1) To determine which facial landmarks show the greatest movement during specific facial animations and (2) to determine the sensitivity of our instrument in using these landmarks to detect putatively abnormal facial movements. Design Movements of an array of skin-based landmarks on five healthy human subjects (2 men and 3 women; mean age, 27.6 years; range, 26 to 29 years) were observed during the execution of specific facial animations. To investigate the instrument sensitivity, we analyzed facial movements during maximal smile animations in six patients with different types of functional problems. In parallel, a panel was asked to view video recordings of the patients and to rate the degree of motor impairment. Comparisons were made between the panel scores and those of the measurement instrument. Results Specific regions of the face display movement that is representative of specific animations. During the smile animation, landmarks on the mid- and lower facial regions demonstrated the greatest movement. A similar pattern of movement was seen during the cheek puff animation, except that the infraorbital and chin regions demonstrated minimal movement. For the grimace and eye closure animations, the upper, mid-facial, and upper-lip regions exhibited the greatest movement. During eye opening, the upper and mid-facial regions, excluding the upper lip and cheek, moved the most, and during lip purse, markers on the mid- and lower face demonstrated the most movement. We used the smile-sensitive landmarks to evaluate individuals with functional impairment and found good agreement between instrument rankings based on the data from these landmarks and the panel rankings. Conclusion The present method of three-dimensional tracking has the potential to detect and characterize a range of clinically significant functional deficits.
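The displacement measure underlying this kind of instrument can be illustrated with a short sketch that tracks 3D landmark positions over an animation and reports each landmark's maximum excursion from rest. The landmark set and data here are hypothetical, not the study's measurements.

```python
import numpy as np

def max_displacement(rest, frames):
    """Maximum Euclidean displacement of each landmark from its rest position.

    rest   : (L, 3) landmark positions at rest.
    frames : (T, L, 3) tracked landmark positions over T video frames.
    """
    return np.linalg.norm(frames - rest, axis=2).max(axis=0)

rng = np.random.default_rng(1)
rest = rng.normal(size=(5, 3))                          # e.g. lip corners, cheeks, chin
frames = rest + rng.normal(scale=0.2, size=(30, 5, 3))  # synthetic "smile" sequence
print(max_displacement(rest, frames))                   # one value per landmark
```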
14

Bian, Shaojun, Anzong Zheng, Lin Gao, Greg Maguire, Willem Kokke, Jon Macey, Lihua You, and Jian J. Zhang. "Fully Automatic Facial Deformation Transfer." Symmetry 12, no. 1 (December 21, 2019): 27. http://dx.doi.org/10.3390/sym12010027.

Abstract:
Facial animation is a serious and ongoing challenge for the computer graphics industry. Because diverse and complex emotions need to be expressed through different facial deformations and animations, copying facial deformations from an existing character to another is widely needed in both industry and academia, as it reduces the time-consuming and repetitive manual modeling work of creating 3D shape sequences for every new character. However, transferring realistic facial animations between two 3D models remains limited and inconvenient for general use. Modern deformation transfer methods require correspondence mappings, which in most cases are tedious to obtain. In this paper, we present a fast and automatic approach to transfer deformations between facial mesh models by obtaining 3D point-wise correspondences in an automatic manner. The key idea is that we can estimate correspondences between different facial meshes using a robust facial landmark detection method by projecting the 3D model onto a 2D image. Experiments show that, without any manual labelling effort, our method detects reliable correspondences faster and more simply than the state-of-the-art automatic deformation transfer method on facial models.
15

Adis, Fransisca, and Yohanes Merci Widiastomo. "Designing Emotion Of Characters By Referencing From Facs In Short Animated Film “RANA”." ULTIMART Jurnal Komunikasi Visual 9, no. 2 (March 21, 2018): 31–38. http://dx.doi.org/10.31937/ultimart.v9i2.747.

Abstract:
Facial expression is one of the aspects that can deliver story and a character's emotion in 3D animation. To achieve that, the character's facial design needs to be planned from the very beginning of production. At an early stage, the character designer needs to think about the expressions once the character design is done. The rigger needs to create a flexible rig to achieve the design, and the animator can then get a clear picture of how to animate the face. The Facial Action Coding System (FACS), originally developed by Carl-Herman Hjortsjö and adopted by Paul Ekman and Wallace V. Friesen, can be used to identify emotion in a person generally. This paper explains how the writer uses FACS to help design the facial expressions of 3D characters. FACS is used to determine the basic characteristics of the face's basic shapes when showing emotions, compared against actual face references. Keywords: animation, facial expression, non-dialog
16

Moser, Lucio, Chinyu Chien, Mark Williams, Jose Serra, Darren Hendler, and Doug Roble. "Semi-supervised video-driven facial animation transfer for production." ACM Transactions on Graphics 40, no. 6 (December 2021): 1–18. http://dx.doi.org/10.1145/3478513.3480515.

Abstract:
We propose a simple algorithm for automatic transfer of facial expressions, from videos to a 3D character, as well as between distinct 3D characters through their rendered animations. Our method begins by learning a common, semantically-consistent latent representation for the different input image domains using an unsupervised image-to-image translation model. It subsequently learns, in a supervised manner, a linear mapping from the character images' encoded representation to the animation coefficients. At inference time, given the source domain (i.e., actor footage), it regresses the corresponding animation coefficients for the target character. Expressions are automatically remapped between the source and target identities despite differences in physiognomy. We show how our technique can be used in the context of markerless motion capture with controlled lighting conditions, for one actor and for multiple actors. Additionally, we show how it can be used to automatically transfer facial animation between distinct characters without consistent mesh parameterization and without engineered geometric priors. We compare our method with standard approaches used in production and with recent state-of-the-art models on single camera face tracking.
17

Zhang, Liangjun. "Animation Expression Control Based on Facial Region Division." Scientific Programming 2022 (May 12, 2022): 1–13. http://dx.doi.org/10.1155/2022/5800099.

Abstract:
Science and technology are developing rapidly in the twenty-first century, and with the development of information technology, computers play a large role in people's lives. As audiences' affection for animation grows, exquisite and realistic animation has become the goal. Generally speaking, the most impressive element of an animation is the characters' facial expressions, so it is necessary to develop computer techniques that can be used for animation expression control, and facial region division is well suited to this. It is very important for an animation to create expressions consistent with the character's face. Character faces are diverse and unique, which plays an important role in animation expression control, and many factors in the facial region affect it. The coordinated movement of multiple facial organs conveys various emotional states through changes in muscle movements in the different areas of the face, such as the eye, facial, and oral muscles; this has strong integrity and particularity and relatively high technical requirements. In general, expression control technology can transform and deform specific areas of the face. Based on facial region division, computer techniques are used to compute and recognize the different expression features of the face, yielding more exquisite and realistic animated expressions. Against this background, this paper divides the facial region and introduces the physiological structure of the face and the relationship between facial expression and animation expression control. Several algorithms used for extracting facial structure feature points are compared experimentally. The comparison shows that the improved algorithm is much more efficient than the original algorithm at extracting facial feature points, can remove redundancy, greatly reduces the amount of computation, and lays a good foundation for subsequent animation expression control.
18

Chandran, Prashanth, Loïc Ciccone, Markus Gross, and Derek Bradley. "Local anatomically-constrained facial performance retargeting." ACM Transactions on Graphics 41, no. 4 (July 2022): 1–14. http://dx.doi.org/10.1145/3528223.3530114.

Abstract:
Generating realistic facial animation for CG characters and digital doubles is one of the hardest tasks in animation. A typical production workflow involves capturing the performance of a real actor using mo-cap technology and transferring the captured motion to the target digital character. This process, known as retargeting, has been used for over a decade and typically relies on either large blendshape rigs that are expensive to create or direct deformation transfer algorithms that operate on individual geometric elements and are prone to artifacts. We present a new method for high-fidelity offline facial performance retargeting that is neither expensive nor artifact-prone. Our two-step method first transfers local expression details to the target and is followed by a global face surface prediction that uses anatomical constraints in order to stay in the feasible shape space of the target character. Our method also offers artists familiar blendshape controls to perform fine adjustments to the retargeted animation. As such, our method is ideally suited for the complex task of human-to-human 3D facial performance retargeting, where the quality bar is extremely high in order to avoid the uncanny valley, while also being applicable to more common human-to-creature settings. We demonstrate the superior performance of our method over traditional deformation transfer algorithms, achieving quality comparable to current blendshape-based techniques used in production while requiring significantly fewer input shapes at setup time. A detailed user study corroborates the realistic and artifact-free animations generated by our method in comparison to existing techniques.
19

Xu, Shibiao, Guanghui Ma, Weiliang Meng, and Xiaopeng Zhang. "Statistical learning based facial animation." Journal of Zhejiang University SCIENCE C 14, no. 7 (July 2013): 542–50. http://dx.doi.org/10.1631/jzus.cide1307.

20

Hällgren, Åsa, and Bertil Lyberg. "Facial animation using visual polyphones." Journal of the Acoustical Society of America 102, no. 5 (November 1997): 3167. http://dx.doi.org/10.1121/1.420778.

21

Cao, Yong, Wen C. Tien, Petros Faloutsos, and Frédéric Pighin. "Expressive speech-driven facial animation." ACM Transactions on Graphics (TOG) 24, no. 4 (October 2005): 1283–302. http://dx.doi.org/10.1145/1095878.1095881.

22

Weise, Thibaut, Sofien Bouaziz, Hao Li, and Mark Pauly. "Realtime performance-based facial animation." ACM Transactions on Graphics 30, no. 4 (July 2011): 1–10. http://dx.doi.org/10.1145/2010324.1964972.

23

LUO, Chang-Wei, Jun YU, and Zeng-Fu WANG. "Synthesizing Performance-driven Facial Animation." Acta Automatica Sinica 40, no. 10 (October 2014): 2245–52. http://dx.doi.org/10.1016/s1874-1029(14)60361-x.

24

Seol, Yeongho, Jaewoo Seo, Paul Hyunjin Kim, J. P. Lewis, and Junyong Noh. "Artist friendly facial animation retargeting." ACM Transactions on Graphics 30, no. 6 (December 2011): 1–10. http://dx.doi.org/10.1145/2070781.2024196.

25

Fratarcangeli, Marco. "Position-based facial animation synthesis." Computer Animation and Virtual Worlds 23, no. 3-4 (May 2012): 457–66. http://dx.doi.org/10.1002/cav.1450.

26

Santosa, Katherine B., Alexandra M. Keane, Mary Politi, and Alison K. Snyder‐Warwick. "Facial Animation Surgery for Long‐standing Facial Palsy." JAMA Facial Plastic Surgery 21, no. 1 (January 2019): 3–4. http://dx.doi.org/10.1001/jamafacial.2018.0930.

27

Zarrad, Anis. "An Extensible Game Engine to Develop Animated Facial Avatars in 3D Virtual Environment." International Journal of Virtual Communities and Social Networking 8, no. 2 (April 2016): 12–27. http://dx.doi.org/10.4018/ijvcsn.2016040102.

Abstract:
Avatar facial expressions and animation in 3D Collaborative Virtual Environment (CVE) systems are reconstructed through a complex manipulation of all the details that compose them, such as muscles, bones, and wrinkles in 3D space. The need for a fast and easy reconstruction approach has emerged in recent years due to applications in various domains, such as 3D disaster management and military training. The simulation of these details must be as realistic as possible to convey different emotions according to the constantly changing situations in the CVE at runtime. For example, in 3D disaster management it is important to use dynamic avatar emotions; firefighters should be frightened when dealing with a fire disaster and smiling when treating injuries and evacuating inhabitants. However, facial animation remains a challenge that restricts the rapid and easy development of facial animation systems. In this work, the author presents an extensible game engine architecture to easily produce real-time facial animations using scripted atomic actions, without having to deal with control structures and a 3D programming language. The proposed architecture defines various controllers, object behaviors, tactical and graphics rendering, and collision effects to quickly design a 3D virtual environment. First, the author gives the concept of an atomic expression and the method for building a parametrized script file from atomic expressions. The author then shows the validity of the generated expressions based on the MPEG-4 facial animation framework. Finally, the feasibility of the proposed architecture is tested via a firefighter scenario. The approach has the advantage over previous techniques of directly providing an easier and faster technology with a high degree of programming independence. It also minimizes interaction with the game engine at runtime by dynamically injecting the XML file into the game engine without stopping or restarting it.
28

Schiffer, Sheldon. "How Actors Can Animate Game Characters: Integrating Performance Theory in the Emotion Model of a Game Character." Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment 15, no. 1 (October 8, 2019): 227–29. http://dx.doi.org/10.1609/aiide.v15i1.5252.

Abstract:
Despite the development of sophisticated emotion models, game character facial animation is still often completed with laborious hand-controlled key framing, only marginally assisted by automation. Behavior trees and animation state machines are used mostly to manage animation transitions of physical business, like walking or lifting objects. Attempts at automated facial animation rely on discrete facial iconic emotion poses, thus resulting in mechanical “acting”. The techniques of acting instructor and theorist Sanford Meisner reveal a process of role development whose character model resembles components of Appraisal Theory. The similarity conjures an experiment to discover if an “emotion engine” and workflow method can model the emotions of an autonomous animated character using actor-centric techniques. Success would allow an animation director to autonomously animate a character’s face with the subtlety of gradient expressions. Using a head-shot video stream of one of two actors performing a structured Meisner-esque improvisation as the primary data source, this research demonstrates the viability of an actor-centric workflow to create an autonomous facial animation system.
29

Gu, Kuangxiao, Yuqian Zhou, and Thomas Huang. "FLNet: Landmark Driven Fetching and Learning Network for Faithful Talking Facial Animation Synthesis." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 10861–68. http://dx.doi.org/10.1609/aaai.v34i07.6717.

Abstract:
Talking face synthesis has been widely studied in both appearance-based and warping-based methods. Previous works mostly utilize a single face image as a source and generate novel facial animations by merging another person's facial features. However, some facial regions, like the eyes or teeth, which may be hidden in the source image, cannot be synthesized faithfully and stably. In this paper, we present a landmark-driven two-stream network to generate faithful talking facial animation, in which more facial details are created, preserved, and transferred from multiple source images instead of a single one. Specifically, we propose a network consisting of a learning and a fetching stream. The fetching sub-net directly learns to attentively warp and merge facial regions from five source images with distinctive landmarks, while the learning pipeline renders facial organs from the training face space to compensate. Compared to baseline algorithms, extensive experiments demonstrate that the proposed method achieves higher performance both quantitatively and qualitatively. Code is available at https://github.com/kgu3/FLNet_AAAI2020.
30

Alkawaz, Mohammed Hazim, Ahmad Hoirul Basori, Dzulkifli Mohamad, and Farhan Mohamed. "Realistic Facial Expression of Virtual Human Based on Color, Sweat, and Tears Effects." Scientific World Journal 2014 (2014): 1–9. http://dx.doi.org/10.1155/2014/367013.

Abstract:
Generating extreme appearances such as sweating when scared, tears when crying, and blushing when angry or happy is a key issue in achieving high-quality facial animation. The effects of sweat, tears, and skin color are integrated into a single animation model to create realistic facial expressions for a 3D avatar. The physical properties of muscles and emotions, and the fluid properties with sweating and tear initiators, are incorporated. The action units (AUs) of the facial action coding system are merged with autonomous AUs to create expressions including sadness, anger with blushing, happiness with blushing, and fear. Fluid effects such as sweat and tears are simulated using the particle system and smoothed-particle hydrodynamics (SPH) methods, which are combined with a facial animation technique to produce complex facial expressions. The effects of oxygenation on facial skin color appearance are measured using a pulse oximeter system and a 3D skin analyzer. The results show that virtual human facial expression is enhanced by mimicking actual sweating and tear simulations for all extreme expressions. The proposed method contributes to the development of the facial animation and game industries as well as computer graphics.
31

Wen, Shi-Jiang, Hao Wu, and Jong-Hoon Yang. "Research on Voice-Driven Facial Expression Film and Television Animation Based on Compromised Node Detection in Wireless Sensor Networks." Computational Intelligence and Neuroscience 2022 (January 24, 2022): 1–8. http://dx.doi.org/10.1155/2022/8563818.

Abstract:
With the continuous development of the social economy, film and television animation, as a spiritual need of ordinary people, is more and more popular. In particular, with the development of emerging technologies, speech can be used to drive AI facial expressions. At the same time, ensuring the synchronization of speech and facial expression is one of the difficulties in animation transformation. Relying on compromised-node detection in wireless sensor networks, this paper combs through the synchronous traffic flow between speech signals and facial expressions, finds the pattern distribution of facial motion based on unsupervised classification, realizes training and learning through neural networks, and uses the prosodic distribution of speech features to realize a one-to-one mapping to facial expressions. This avoids the robustness defects of speech recognition, improves the learning ability of speech recognition, and realizes driving analysis of speech-driven facial expression animation for film and television. The simulation results show that compromised-node detection in wireless sensor networks is effective and can support the analysis and research of speech-driven facial expression animation for film and television.
32

Permadi, Johanes Baptista. "Analisis Akting dalam Animasi Karakter Amatir dengan Tolok Ukur Profesional." Humaniora 4, no. 2 (October 31, 2013): 1199. http://dx.doi.org/10.21512/humaniora.v4i2.3562.

Abstract:
The issue raised is the lack of animation quality in facial expressions, which often makes characters look not quite alive. The method used to find a solution was to record human facial expressions as references while silent, in dialogue, moving, and interacting with gadgets. Once the references were obtained, the results of the study were implemented in a 3D model ready to be animated. The goal is to make the animated facial expressions as lively and as similar as possible to actual human facial expressions. After the animation was made, the results were compared with the recorded references to draw conclusions from the observation.
33

Fagel, Sascha, Gérard Bailly, and Barry-John Theobald. "Animating Virtual Speakers or Singers from Audio: Lip-Synching Facial Animation." EURASIP Journal on Audio, Speech, and Music Processing 2009, no. 1 (2009): 826091. http://dx.doi.org/10.1186/1687-4722-2009-826091.

34

Fagel, Sascha, Gérard Bailly, and Barry-John Theobald. "Animating Virtual Speakers or Singers from Audio: Lip-Synching Facial Animation." EURASIP Journal on Audio, Speech, and Music Processing 2009 (2009): 1–2. http://dx.doi.org/10.1155/2009/826091.

35

Yekti, Bharoto. "Study of Laika’s Facial Expression Mechanism System for Stop-Motion Animation Puppet Through Knock-Down Strategies on Home-Scaled 3D Printer." New Trends and Issues Proceedings on Humanities and Social Sciences 4, no. 11 (December 28, 2017): 185–93. http://dx.doi.org/10.18844/prosoc.v4i11.2873.

Abstract:
The growth of 3D printing has been rapid over the past decades. Laika is a United States-based animation production company and the pioneer of 3D printing technology in stop-motion animation. Laika uses this technology in its production pipeline for making stop-motion puppets in most of its films, including its latest film, Kubo and the Two Strings (2016). Due to limited access to and information about the details of Laika's facial expression systems, communities and fans of animation have tried to conduct experiments with their own 3D prints, using footage of behind-the-scenes processes from the Laika studio. This paper explores facial expressions for creating stop-motion puppets using an affordable home-scale 3D printer. Using the limited technical information collected from Laika's documentation videos, as well as articles written by stop-motion enthusiasts, this fan-based research ignites creativity to overcome the barriers of technology and access through strategies for producing affordable 3D-printed stop-motion animation. Keywords: stop-motion animation, 3D printing, facial expressions.
36

Bouaziz, Sofien, Yangang Wang, and Mark Pauly. "Online modeling for realtime facial animation." ACM Transactions on Graphics 32, no. 4 (July 21, 2013): 1–10. http://dx.doi.org/10.1145/2461912.2461976.

37

Bianchi, Bernardo, Andrea Ferri, Silvano Ferrari, Massimiliano Leporati, Teore Ferri, and Enrico Sesenna. "Ancillary Procedures in Facial Animation Surgery." Journal of Oral and Maxillofacial Surgery 72, no. 12 (December 2014): 2582–90. http://dx.doi.org/10.1016/j.joms.2014.07.036.

38

Yang, Yang, Nanning Zheng, Yuehu Liu, Shaoyi Du, Yuanqi Su, and Yoshifumi Nishio. "Expression transfer for facial sketch animation." Signal Processing 91, no. 11 (November 2011): 2465–77. http://dx.doi.org/10.1016/j.sigpro.2011.04.020.

39

Moiza, Gideon, Ayellet Tal, Ilan Shimshoni, David Barnett, and Yael Moses. "Image-based animation of facial expressions." Visual Computer 18, no. 7 (November 1, 2002): 445–67. http://dx.doi.org/10.1007/s003710100157.

40

Sagar, Mark, Mike Seymour, and Annette Henderson. "Creating connection with autonomous facial animation." Communications of the ACM 59, no. 12 (December 2016): 82–91. http://dx.doi.org/10.1145/2950041.

41

Patel, Narendra M., and Mukesh Zaveri. "Parametric Facial Expression Synthesis and Animation." International Journal of Computer Applications 3, no. 4 (June 10, 2010): 34–40. http://dx.doi.org/10.5120/719-1011.

42

Choe, Byoungwon, Hanook Lee, and Hyeong-Seok Ko. "Performance-driven muscle-based facial animation." Journal of Visualization and Computer Animation 12, no. 2 (2001): 67–79. http://dx.doi.org/10.1002/vis.246.

43

Tsai, Flora S. "Dimensionality reduction for computer facial animation." Expert Systems with Applications 39, no. 5 (April 2012): 4965–71. http://dx.doi.org/10.1016/j.eswa.2011.10.018.

44

Zeng, Ming, Lin Liang, Xinguo Liu, and Hujun Bao. "Video-driven state-aware facial animation." Computer Animation and Virtual Worlds 23, no. 3-4 (May 2012): 167–78. http://dx.doi.org/10.1002/cav.1455.

45

Liu, Kun, Jun-Hong Chen, and Kang-Ming Chang. "A Study of Facial Features of American and Japanese Cartoon Characters." Symmetry 11, no. 5 (May 12, 2019): 664. http://dx.doi.org/10.3390/sym11050664.

Abstract:
Many researchers think that the characters in animated cartoons and comics are designed by exaggerating or reducing some features of the human face. However, the feature distribution of the human face is relatively symmetrical and uniform. Thus, to ensure the characters look exaggerated without breaking the principle of symmetry, some questions remain: Which facial features should be exaggerated during the design process? How exaggerated are the faces of cartoon characters compared to real faces? To answer these questions, we selected 100 cartoon characters from American and Japanese animation, collected data on their facial features and the facial features of real people, and then described the features using angles, lengths, and areas. Finally, we compared the cartoon characters' facial feature values with real facial features and determined the key parts and degree of facial exaggeration of animated characters. The results show that American and Japanese cartoon characters both exaggerate the eyes, nose, ears, forehead, and chin. Taking the eye area as an example, the eyes of American animation characters are twice as large as those of human faces, whereas those of Japanese animation characters are 3.4 times larger. The study results can be used for reference by animation character designers and researchers.
46

Zinnatov, Aynur Ayratovich, and Vlada Vladimirovna Kugurakova. "Mechanisms of Realistic Facial Expressions for Anthropomorphic Social Agents." Russian Digital Libraries Journal 23, no. 5 (August 23, 2020): 1011–25. http://dx.doi.org/10.26907/1562-5419-2020-23-5-1011-1025.

Abstract:
Three-dimensional facial animation has been extensively studied, but realistic, human-like performance has not yet been achieved. This article discusses various approaches for generating animated facial expressions controlled by speech. Combining the considered approaches for facial animation, the identification of emotions, and the creation of micro-expressions in one system, we obtain a solution suitable for tasks such as game video, virtual reality avatars, or any scenario in which the speaker, speech, or language is not known in advance.
47

Gaber, Amira, Mona F. Taher, Manal Abdel Wahed, Nevin Mohieldin Shalaby, and Sarah Gaber. "Comprehensive assessment of facial paralysis based on facial animation units." PLOS ONE 17, no. 12 (December 14, 2022): e0277297. http://dx.doi.org/10.1371/journal.pone.0277297.

Abstract:
Quantitative grading and classification of the severity of facial paralysis (FP) are important for selecting the treatment plan and detecting subtle improvement that cannot be detected clinically. To date, none of the available FP grading systems have gained widespread clinical acceptance. The work presented here describes the development and testing of a system for FP grading and assessment which is part of a comprehensive evaluation system for FP. The system is based on the Kinect v2 hardware and the accompanying software SDK 2.0 in extracting the real time facial landmarks and facial animation units (FAUs). The aim of this paper is to describe the development and testing of the FP assessment phase (first phase) of a larger comprehensive evaluation system of FP. The system includes two phases; FP assessment and FP classification. A dataset of 375 records from 13 unilateral FP patients was compiled for this study. The FP assessment includes three separate modules. One module is the symmetry assessment of both facial sides at rest and while performing five voluntary facial movements. Another module is responsible for recognizing the facial movements. The last module assesses the performance of each facial movement for both sides of the face depending on the involved FAUs. The study validates that the FAUs captured using the Kinect sensor can be processed and used to develop an effective tool for the automatic evaluation of FP. The developed FP grading system provides a detailed quantitative report and has significant advantages over the existing grading scales. It is fast, easy to use, user-independent, low cost, quantitative, and automated and hence it is suitable to be used as a clinical tool.
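A symmetry assessment over paired left/right facial animation-unit activations, in the spirit of the module described above, could look like the following sketch. The pairing, scale, and scoring formula are assumptions for illustration, not the published system or the Kinect SDK's API.

```python
import numpy as np

def symmetry_score(left, right, eps=1e-6):
    """Per-frame symmetry score in [0, 1]; 1 means identical left/right activations."""
    left, right = np.asarray(left, float), np.asarray(right, float)
    diff = np.abs(left - right).sum(axis=-1)
    total = (np.abs(left) + np.abs(right)).sum(axis=-1) + eps
    return 1.0 - diff / total

# One frame of three hypothetical paired activations (e.g. cheek raise, brow, lip pull).
left_aus = np.array([[0.8, 0.1, 0.4]])
right_aus = np.array([[0.3, 0.1, 0.1]])     # weaker activations on the affected side
print(symmetry_score(left_aus, right_aus))  # ~0.56: clearly asymmetric
```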
48

Trotman, Carroll-Ann, Christian S. Stohler, and Lysle E. Johnston. "Measurement of Facial Soft Tissue Mobility in Man." Cleft Palate-Craniofacial Journal 35, no. 1 (January 1998): 16–25. http://dx.doi.org/10.1597/1545-1569_1998_035_0016_mofstm_2.3.co_2.

Abstract:
Objective The assessment of facial mobility is a key element in the treatment of patients with facial motor deficits. In this study, we explored the utility of a three-dimensional tracking system in the measurement of facial movements. Methods and Results First, the three-dimensional movement of potentially stable facial soft-tissue, headcap, and dental landmarks was measured with respect to a fixed space frame. Based on the assumption that the dental landmarks are stable, their motion during a series of standardized facial animations was subtracted from that of the facial and headcap landmarks to estimate their movement within the face. This residual movement was used to determine which points are relatively stable (≤1.5 mm of movement) and which are not (≥1.5 mm of movement). Headcap landmarks were found to be suitable as references during smile, cheek puff, and lip purse animations, and during talking. In contrast, skin-based landmarks were unsuitable as references because of their considerable and highly variable movement during facial animation. Second, the facial movements of patients with obvious facial deformities were compared with those of matched controls to characterize the face validity of three-dimensional tracking. In all instances, pictures that appear to be characteristic of the various functional deficits emerged. Conclusions Our results argue that tracking instrumentation is a potentially useful tool in the measurement of facial mobility.
49

TANGUY, EMMANUEL, PHILIP J. WILLIS, and JOANNA J. BRYSON. "A DYNAMIC EMOTION REPRESENTATION MODEL WITHIN A FACIAL ANIMATION SYSTEM." International Journal of Humanoid Robotics 03, no. 03 (September 2006): 293–300. http://dx.doi.org/10.1142/s0219843606000758.

Abstract:
This paper presents the Dynamic Emotion Representation (DER), and demonstrates how an instance of this model can be integrated into a facial animation system. The DER model has been implemented to enable users to create their own emotion representation. Developers can select which emotions they include and how these interact. The instance of the DER model described in this paper is composed of three layers, each representing states changing over different time scales: behavior activations, emotions and moods. The design of this DER is discussed with reference to emotion theories and to the needs of a facial animation system. The DER is used in our Emotionally Expressive Facial Animation System (EE-FAS) to produce emotional expressions, to select facial signals corresponding to communicative functions in relation to the emotional state of the agent and also in relation to the comparison between the emotional state and the intended meanings expressed through communicative functions.
50

LIN, SHENG-CHE, TSAN-HSUN HUANG, FONG-CHIN SU, and YOU-LI CHOU. "MOTION ANALYSIS OF MOUTH MOVEMENT UTILIZING CHALLIS TECHNIQUE-EXPERIMENT MODEL AND CLINICAL STUDY USING VIDEO-BASED SYSTEM." Biomedical Engineering: Applications, Basis and Communications 14, no. 03 (June 25, 2002): 131–37. http://dx.doi.org/10.4015/s1016237202000206.

Abstract:
An Expert Vision motion analysis system with five analog video cameras was used to evaluate mouth movement during two facial animations (smile and puffy face). Sixteen skin markers were adhered to the subject's face according to anatomic landmarks to represent the functional movement of the facial muscles. The trajectory of the four or eight peri-oral skin markers was evaluated simultaneously by the Challis technique, instead of the individual movement of single markers. The 4-marker method is a little less accurate than the 8-marker method, but it can be incorporated as part of our modality of facial motion analysis, including two- and three-dimensional displacement of individual markers and absolute/relative displacement of paired markers. It was much easier for data acquisition, and no extra markers were needed in the whole modality of our facial motion analysis. Physicians can use this Challis technique to evaluate the grouped movement of facial markers, as a whole, in different animations.