
Dissertations / Theses on the topic 'Facial animation'



Consult the top 50 dissertations / theses for your research on the topic 'Facial animation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Miller, Kenneth D. (Kenneth Doyle). "A system for advanced facial animation." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/40605.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1996.
Includes bibliographical references (leaves 35-36).
by Kenneth D. Miller, III.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
2

Kalra, Prem Kumar. "An interactive multimodal facial animation system /." [S.l.] : [s.n.], 1993. http://library.epfl.ch/theses/?nr=1183.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Lin, Alice J. "THREE DIMENSIONAL MODELING AND ANIMATION OF FACIAL EXPRESSIONS." UKnowledge, 2011. http://uknowledge.uky.edu/gradschool_diss/841.

Full text
Abstract:
Facial expression and animation are important aspects of the 3D environment featuring human characters. These animations are frequently used in many kinds of applications and there have been many efforts to increase the realism. Three aspects are still stimulating active research: the detailed subtle facial expressions, the process of rigging a face, and the transfer of an expression from one person to another. This dissertation focuses on the above three aspects. A system for freely designing and creating detailed, dynamic, and animated facial expressions is developed. The presented pattern functions produce detailed and animated facial expressions. The system produces realistic results with fast performance, and allows users to directly manipulate it and see immediate results. Two unique methods for generating real-time, vivid, and animated tears have been developed and implemented. One method is for generating a teardrop that continually changes its shape as the tear drips down the face. The other is for generating a shedding tear, which is a kind of tear that seamlessly connects with the skin as it flows along the surface of the face, but remains an individual object. The methods both broaden CG and increase the realism of facial expressions. A new method to automatically set the bones on facial/head models to speed up the rigging process of a human face is also developed. To accomplish this, vertices that describe the face/head as well as relationships between each part of the face/head are grouped. The average distance between pairs of vertices is used to place the head bones. To set the bones in the face with multi-density, the mean value of the vertices in a group is measured. The time saved with this method is significant. A novel method to produce realistic expressions and animations by transferring an existing expression to a new facial model is developed. The approach is to transform the source model into the target model, which then has the same topology as the source model. The displacement vectors are calculated. Each vertex in the source model is mapped to the target model. The spatial relationships of each mapped vertex are constrained.
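The expression-transfer step described at the end of this abstract comes down to adding per-vertex displacement vectors (expressive minus neutral source) to a target face that shares the source topology. The Python sketch below illustrates only that core idea under the assumption of a shared vertex layout; the function and variable names are illustrative, and it omits the spatial-relationship constraints the thesis applies.

```python
import numpy as np

def transfer_expression(source_neutral, source_expr, target_neutral):
    """Naive sketch of displacement-based expression transfer.

    All three arguments are (N, 3) vertex arrays assumed to share the same
    topology (i.e. the source has already been registered to the target).
    """
    displacements = source_expr - source_neutral   # per-vertex motion vectors
    return target_neutral + displacements          # apply the motion to the new face

# Degenerate 2-vertex "meshes" just to show the call shape.
src_neutral = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
src_smile   = np.array([[0.0, 0.1, 0.0], [1.0, 0.1, 0.0]])
tgt_neutral = np.array([[0.0, 0.0, 0.5], [1.0, 0.0, 0.5]])
print(transfer_expression(src_neutral, src_smile, tgt_neutral))
```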
APA, Harvard, Vancouver, ISO, and other styles
4

Sloan, Robin J. S. "Emotional avatars : choreographing emotional facial expression animation." Thesis, Abertay University, 2011. https://rke.abertay.ac.uk/en/studentTheses/2363eb4a-2eba-4f94-979f-77b0d6586e94.

Full text
Abstract:
As a universal element of human nature, the experience, expression, and perception of emotions permeate our daily lives. Many emotions are thought to be basic and common to all humanity, irrespective of social or cultural background. Of these emotions, the corresponding facial expressions of a select few are known to be truly universal, in that they can be identified by most observers without the need for training. Facial expressions of emotion are subsequently used as a method of communication, whether through close face-to-face contact, or the use of emoticons online and in mobile texting. Facial expressions are fundamental to acting for stage and screen, and to animation for film and computer games. Expressions of emotion have been the subject of intense experimentation in psychology and computer science research, both in terms of their naturalistic appearance and the virtual replication of facial movements. From this work much is known about expression universality, anatomy, psychology, and synthesis. Beyond the realm of scientific research, animation practitioners have scrutinised facial expressions and developed an artistic understanding of movement and performance. However, despite the ubiquitous quality of facial expressions in life and research, our understanding of how to produce synthetic, dynamic imitations of emotional expressions which are perceptually valid remains somewhat limited. The research covered in this thesis sought to unite an artistic understanding of expression animation with scientific approaches to facial expression assessment. Acting as both an animation practitioner and as a scientific researcher, the author set out to investigate emotional facial expression dynamics, with the particular aim of identifying spatio-temporal configurations of animated expressions that not only satisfied artistic judgement, but which also stood up to empirical assessment. These configurations became known as emotional expression choreographies. The final work presented in this thesis covers the performative, practice-led research into emotional expression choreography, the results of empirical experimentation (where choreographed animations were assessed by observers), and the findings of qualitative studies (which painted a more detailed picture of the potential context of choreographed expressions). The holistic evaluation of expression animation from these three epistemological perspectives indicated that emotional expressions can indeed be choreographed in order to create refined performances which have empirically measurable effects on observers, and which may be contextualised by the phenomenological interpretations of both student animators and general audiences.
APA, Harvard, Vancouver, ISO, and other styles
5

Zhao, Hui. "Expressive facial animation transfer for virtual actors /." View abstract or full-text, 2007. http://library.ust.hk/cgi/db/thesis.pl?CSED%202007%20ZHAO.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Pueblo, Stephen J. (Stephen Jerell). "Videorealistic facial animation for speech-based interfaces." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/53179.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009.
Includes bibliographical references (p. 79-81).
This thesis explores the use of computer-generated, videorealistic facial animation (avatars) in speech-based interfaces to understand whether the use of such animations enhances the end user's experience. Research in spoken dialog systems is a robust area that has now permeated everyday life, most notably with spoken telephone dialog systems. Over the past decade, research with videorealistic animations, both photorealistic and non-photorealistic, has reached the point where there is little discernible difference between the mouth movements of videorealistic animations and the mouth movements of actual humans. Because of the minute differences between the two, videorealistic speech animations are ideal candidates to use in dialog systems. This thesis presents two videorealistic facial animation systems: a web-based system and a real-time system.
by Stephen J. Pueblo.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
7

Barker, Dean. "Computer facial animation for sign language visualization." Thesis, Stellenbosch : Stellenbosch University, 2005. http://hdl.handle.net/10019.1/50300.

Full text
Abstract:
Thesis (MSc)--University of Stellenbosch, 2005.
ENGLISH ABSTRACT: Sign Language is a fully-fledged natural language possessing its own syntax and grammar; a fact which implies that the problem of machine translation from a spoken source language to Sign Language is at least as difficult as machine translation between two spoken languages. Sign Language, however, is communicated in a modality fundamentally different from all spoken languages. Machine translation to Sign Language is therefore burdened not only by a mapping from one syntax and grammar to another, but also, by a non-trivial transformation from one communicational modality to another. With regards to the computer visualization of Sign Language; what is required is a three dimensional, temporally accurate, visualization of signs including both the manual and nonmanual components which can be viewed from arbitrary perspectives making accurate understanding and imitation more feasible. Moreover, given that facial expressions and movements represent a fundamental basis for the majority of non-manual signs, any system concerned with the accurate visualization of Sign Language must rely heavily on a facial animation component capable of representing a well-defined set of emotional expressions as well as a set of arbitrary facial movements. This thesis investigates the development of such a computer facial animation system. We address the problem of delivering coordinated, temporally constrained, facial animation sequences in an online environment using VRML. Furthermore, we investigate the animation, using a muscle model process, of arbitrary three-dimensional facial models consisting of multiple aligned NURBS surfaces of varying refinement. Our results showed that this approach is capable of representing and manipulating high fidelity three-dimensional facial models in such a manner that localized distortions of the models result in the recognizable and realistic display of human facial expressions and that these facial expressions can be displayed in a coordinated, synchronous manner.
AFRIKAANSE OPSOMMING: Gebaretaal is 'n volwaardige natuurlike taal wat oor sy eie sintaks en grammatika beskik. Hierdie feit impliseer dat die probleem rakende masjienvertaling vanuit 'n gesproke taal na Gebaretaal net so moeilik is as masjienvertaling tussen twee gesproke tale. Gebaretaal word egter in 'n modaliteit gekommunikeer wat in wese van alle gesproke tale verskil. Masjienvertaling in Gebaretaal word daarom nie net belas deur 'n afbeelding van een sintaks en grammatika op 'n ander nie, maar ook deur beduidende omvorming van een kommunikasiemodaliteit na 'n ander. Wat die gerekenariseerde visualisering van Gebaretaal betref, vereis dit 'n driedimensionele, tyds-akkurate visualisering van gebare, insluitend komponente wat met en sonder die gebruik van die hande uitgevoer word, en wat vanuit arbitrêre perspektiewe beskou kan word ten einde die uitvoerbaarheid van akkurate begrip en nabootsing te verhoog. Aangesien gesigsuitdrukkings en -bewegings die fundamentele grondslag van die meeste gebare wat nie met die hand gemaak word nie, verteenwoordig, moet enige stelsel wat te make het met die akkurate visualisering van Gebaretaal boonop sterk steun op 'n gesigsanimasiekomponent wat daartoe in staat is om 'n goed gedefinieerde stel emosionele uitdrukkings sowel as 'n stel arbitrre gesigbewegings voor te stel. Hierdie tesis ondersoek die ontwikkeling van so 'n gerekenariseerde gesigsanimasiestelsel. Die probleem rakende die lewering van gekordineerde, tydsbegrensde gesigsanimasiesekwensies in 'n intydse omgewing, wat gebruik maak van VRML, word aangeroer. Voorts word ondersoek ingestel na die animasie (hier word van 'n spiermodelproses gebruik gemaak) van arbitrre driedimensionele gesigsmodelle bestaande uit veelvoudige, opgestelde NURBS-oppervlakke waarvan die verfyning wissel. Die resultate toon dat hierdie benadering daartoe in staat is om hoë kwaliteit driedimensionele gesigsmodelle só voor te stel en te manipuleer dat gelokaliseerde vervormings van die modelle die herkenbare en realistiese tentoonstelling van menslike gesigsuitdrukkings tot gevolg het en dat hierdie gesigsuitdrukkings op 'n gekordineerde, sinchroniese wyse uitgebeeld kan word.
APA, Harvard, Vancouver, ISO, and other styles
8

Ellner, Henrik. "Facial animation parameter extraction using high-dimensional manifolds." Thesis, Linköping University, Department of Electrical Engineering, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-6117.

Full text
Abstract:
This thesis presents and examines a method that can potentially be used for extracting parameters from a manifold in a space. In the thesis the method is presented, and a potential application is described. The application is determining FAP-values. FAP-values are used for parameterizing faces, which can e.g. be used to compress data when sending video sequences over limited bandwidth.
APA, Harvard, Vancouver, ISO, and other styles
9

Smith, Andrew Patrick. "Muscle-based facial animation using blendshapes in superposition." Texas A&M University, 2006. http://hdl.handle.net/1969.1/5007.

Full text
Abstract:
The blendshape is an effective tool in computer facial animation, enabling representation of muscle actions. Limitations exist, however, in the level of realism attainable under conventional use of blendshapes as non-intersecting deformations. Using the principle of superposition, it is possible to create a facial model with overlapping blendshapes and achieve more realistic performance. When blendshapes overlap, the region of intersection is in superposition and usually exhibits undesired surface interference. In such cases we use a corrective blendshape to remove the interference automatically. The result is an animatable facial model implemented in Maya which represents the effects of muscle action superposition. Performance created with our model of a known human subject is compared to 3D scan reference data and video reference data of that person. Test animation is compared to video reference footage. The test animation seems to mimic the effects of actual muscle action superposition accurately.
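To make the superposition idea concrete, the sketch below evaluates a face as a neutral mesh plus a weighted sum of blendshape offsets, with an optional corrective shape blended in where two overlapping shapes are active together. It is an illustrative Python sketch, not the thesis' Maya implementation; the product-of-weights trigger for correctives and all names are assumptions.

```python
import numpy as np

def evaluate_face(neutral, deltas, weights, correctives=None):
    """Blendshape evaluation with optional corrective shapes (illustrative sketch).

    neutral:     (N, 3) rest-pose vertices
    deltas:      dict name -> (N, 3) offset of that blendshape from the neutral pose
    weights:     dict name -> activation in [0, 1]
    correctives: dict (nameA, nameB) -> (N, 3) correction for the overlap of two shapes
    """
    v = neutral.copy()
    for name, w in weights.items():
        v += w * deltas[name]                                   # linear superposition
    for (a, b), delta in (correctives or {}).items():
        v += weights.get(a, 0.0) * weights.get(b, 0.0) * delta  # cancel surface interference
    return v
```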
APA, Harvard, Vancouver, ISO, and other styles
10

Alvi, O. "Facial reconstruction and animation in tele-immersive environment." Thesis, University of Salford, 2010. http://usir.salford.ac.uk/26547/.

Full text
Abstract:
Over the last decade, research in Human Computer Interaction has focused on the development of interfaces that leverage the users' pre-existing skills and expectations from the real world, rather than requiring them to adapt to the constraints of technology driven design. In the context of remote collaboration or communication interfaces, the ultimate goal has been to develop interfaces that will allow remote participants to interact with each other in a human sense, as if they were co-located or in a face-to-face meeting. Research in social psychology has shown that the face is an important channel in non-verbal communication and real world interactions. Non-verbal cues that come from the face are the basis for building trust and professional intimacy and are critical for collaboration, negotiation, persuasion and communication. This research investigated the challenges of bringing non-verbal cues conveyed by the face into a communication interface. To meet these challenges, the proposed system allowed participants to convey the most distinctive nonverbal cues by using three different modes; point cloud, dynamic texture mapping and geometric deformation. A human factor evaluation was undertaken to find out how realistically these non-verbal cues could be expressed by the personalized avatar of the participant.
APA, Harvard, Vancouver, ISO, and other styles
11

Aina, Olusola Olumide. "Generating anatomical substructures for physically-based facial animation." Thesis, Bournemouth University, 2011. http://eprints.bournemouth.ac.uk/18900/.

Full text
Abstract:
Physically-based facial animation techniques are capable of producing realistic facial deformations, but have failed to find meaningful use outside the academic community because they are notoriously difficult to create, reuse, and art-direct, in comparison to other methods of facial animation. This thesis addresses these shortcomings and presents a series of methods for automatically generating a skull, the superficial musculoaponeurotic system (SMAS – a layer of fascia investing and interlinking the mimic muscle system), and mimic muscles for any given 3D face model. This is done toward (the goal of) a production-viable framework or rig-builder for physically-based facial animation. This workflow consists of three major steps. First, a generic skull is fitted to a given head model using thin-plate splines computed from the correspondence between landmarks placed on both models. Second, the SMAS is constructed as a variational implicit or radial basis function surface in the interface between the head model and the generic skull fitted to it. Lastly, muscle fibres are generated as boundary-value straightest geodesics, connecting muscle attachment regions defined on the surface of the SMAS. Each step of this workflow is developed with speed, realism and reusability in mind.
APA, Harvard, Vancouver, ISO, and other styles
12

Sánchez, Lorenzo Manuel Antonio. "Techniques for performance based, real-time facial animation." Thesis, University of Sheffield, 2006. http://etheses.whiterose.ac.uk/14897/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Barrielle, Vincent. "Leveraging Blendshapes for Realtime Physics-Based Facial Animation." Thesis, CentraleSupélec, 2017. http://www.theses.fr/2017CSUP0003.

Full text
Abstract:
La génération d'animation faciale de synthèse constitue une étape cruciale en génération d’images de synthèse. Il est cependant difficile de produire des animations convaincantes. Le paradigme dominant pour la création d'animations faciales de haute qualité est la méthode des blendshapes, où les expressions sont décomposées comme la combinaison linéaire d’expressions plus basiques. Toutefois, cette technique requiert une grande quantité de travail manuel, réservée aux films à grand budget, pour produire la qualité requise. La production d'animation faciale réaliste est possible à l'aide de la simulation physique, mais ce processus requiert l'utilisation d’imagerie médicale coûteuse.Nous proposons de réunir le paradigme des blendshapes et celui de la simulation physique, afin de profiter de l'ubiquité des blendshapes tout en bénéficiant de la simulation physique pour produire des effets complexes. Nous introduisons donc blendforces, un paradigme où les blendshapes sont interprétées comme une base pour approximer les forces issues des muscles faciaux. Nous montrons que, combinées à un système physique approprié, ces blendforces produisent des animations faciales convaincantes, présentant une dynamique de mouvements de peau réaliste, gérant les contacts et le collage des lèvres, et intégrant les effets des forces inertielles et de la gravité. Nous utilisons ce formalisme pour la production en temps réel d'animation faciale avec effets physiques basée sur des mouvements capturés à l'aide d'une simple caméra. À notre connaissance, il s'agit de la première démonstration de simulation physique temps-réel appliquée au cas délicat de la simulation d'animations faciales
Generating synthetic facial animation is a crucial step in the creation of content for a wide variety of digital media such as movies and video games. However, producing convincing results is challenging, since humans are experts in analyzing facial expressions and will hence detect any artifact. The dominant paradigm for the production of high-quality facial animation is the blendshapes paradigm, where facial expressions are decomposed as a linear combination of more basic expressions. However, this technique requires large amounts of work to reach the desired quality, which reserves high-quality animation to large-budget movies. Producing high-quality facial animation is possible using physical simulation, but this requires the costly acquisition of medical imaging data. We propose to merge the blendshapes and physical simulation paradigms, to build upon the ubiquity of blendshapes while benefiting from physical simulation for complex effects. We therefore introduce blendforces, a paradigm where blendshapes are interpreted as a basis for approximating the forces emanating from the facial muscles. We show that, combined with an appropriate face physical system, these blendforces can be used to produce convincing facial animation, with natural skin dynamics, handling of lip contacts, sticky lips, inertial effects and handling of gravity. We encompass this framework within a practical realtime performance capture setup, where we produce realtime facial animation with physical effects from a simple RGB camera feed. To the best of our knowledge, this constitutes the first instance of realtime physical simulation applied to the challenging task of facial animation.
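A minimal way to picture the blendforces idea: instead of blending vertex offsets, blend per-vertex force fields derived from the blendshapes and feed them to a physical integrator. The toy Python sketch below shows one semi-implicit Euler step of such a system; the array layouts, stiffness, damping and mass values are invented for illustration, and this is not the thesis' actual solver, which also handles contacts, sticky lips and gravity.

```python
import numpy as np

def step(x, v, rest, force_basis, weights, k=50.0, c=2.0, m=1.0, dt=1.0 / 60.0):
    """One semi-implicit Euler step of a toy 'blendforces'-style face system.

    x, v:        (N, 3) current vertex positions and velocities
    rest:        (N, 3) rest (neutral) positions
    force_basis: (K, N, 3) per-vertex force fields, one per blendshape (assumed layout)
    weights:     (K,) activations of the blendshape-derived forces
    k, c, m:     spring stiffness toward rest, damping, per-vertex mass (toy values)
    """
    muscle = np.tensordot(weights, force_basis, axes=1)  # (N, 3) combined muscle force
    f = muscle + k * (rest - x) - c * v                  # muscle + elasticity + damping
    v = v + dt * f / m
    x = x + dt * v
    return x, v
```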
APA, Harvard, Vancouver, ISO, and other styles
14

Zavala, Chmelicka Marco Enrique. "Visual prosody in speech-driven facial animation: elicitation, prediction, and perceptual evaluation." Texas A&M University, 2005. http://hdl.handle.net/1969.1/2436.

Full text
Abstract:
Facial animations capable of articulating accurate movements in synchrony with a speech track have become a subject of much research during the past decade. Most of these efforts have focused on articulation of lip and tongue movements, since these are the primary sources of information in speech reading. However, a wealth of paralinguistic information is implicitly conveyed through visual prosody (e.g., head and eyebrow movements). In contrast with lip/tongue movements, however, for which the articulation rules are fairly well known (i.e., viseme-phoneme mappings, coarticulation), little is known about the generation of visual prosody. The objective of this thesis is to explore the perceptual contributions of visual prosody in speech-driven facial avatars. Our main hypothesis is that visual prosody driven by acoustics of the speech signal, as opposed to random or no visual prosody, results in more realistic, coherent and convincing facial animations. To test this hypothesis, we have developed an audio-visual system capable of capturing synchronized speech and facial motion from a speaker using infrared illumination and retro-reflective markers. In order to elicit natural visual prosody, a story-telling experiment was designed in which the actors were shown a short cartoon video, and subsequently asked to narrate the episode. From this audio-visual data, four different facial animations were generated, articulating no visual prosody, Perlin-noise, speech-driven movements, and ground truth movements. Speech-driven movements were driven by acoustic features of the speech signal (e.g., fundamental frequency and energy) using rule-based heuristics and autoregressive models. A pair-wise perceptual evaluation shows that subjects can clearly discriminate among the four visual prosody animations. It also shows that speech-driven movements and Perlin-noise, in that order, approach the performance of veridical motion. The results are quite promising and suggest that speech-driven motion could outperform Perlin-noise if more powerful motion prediction models are used. In addition, our results also show that exaggeration can bias the viewer to perceive a computer generated character to be more realistic motion-wise.
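The "speech-driven movements" condition described here maps acoustic features (fundamental frequency and energy) to head and eyebrow motion through rule-based heuristics. The toy Python sketch below shows what such a rule-based mapping can look like; the reference pitch, gains, and the tanh/clipping shaping are invented for illustration and are not the heuristics used in the thesis.

```python
import numpy as np

def prosody_to_motion(f0_hz, energy, f0_ref=120.0, head_gain=2.0, brow_gain=0.5):
    """Toy rule-based mapping from acoustic prosody to head/eyebrow motion.

    f0_hz, energy: 1-D arrays of per-frame fundamental frequency (Hz) and RMS energy.
    Returns per-frame head pitch (degrees) and eyebrow raise (arbitrary units).
    """
    f0_st = 12.0 * np.log2(np.maximum(f0_hz, 1e-3) / f0_ref)  # pitch in semitones vs. reference
    head_pitch = head_gain * np.tanh(f0_st / 6.0)             # higher pitch -> slight head raise
    e_norm = (energy - energy.mean()) / (energy.std() + 1e-8)
    brow_raise = brow_gain * np.clip(e_norm, 0.0, None)       # louder frames lift the brows
    return head_pitch, brow_raise
```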
APA, Harvard, Vancouver, ISO, and other styles
15

Patel, Manjula. "Making FACES : the Facial Animation, Construction and Editing System." Thesis, University of Bath, 1991. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.524137.

Full text
Abstract:
The human face is a fascinating, but extremely complex object; the research project described is concerned with the computer generation and animation of faces. However, the age-old captivation with the face transforms into a major obstacle when creating synthetic faces. The face and head are the most visible attributes of a person. We master the skills of recognising faces and interpreting facial movement at a very early age. As a result, we are likely to notice the smallest deviation from our concept of how a face should appear and behave. Computer animation in general is often perceived to be "wooden" and very "rigid"; the aim is therefore to provide facilities for the generation of believable faces and convincing facial movement. The major issues addressed within the project concern the modelling of a large variety of faces and their animation. Computer modelling of arbitrary faces is an area that has received relatively little attention in comparison with the animation of faces. Another problem that has been considered is that of providing the user with adequate and effective control over the modelling and animation of the face. The Facial Animation, Construction and Editing System or FACES was conceived as a system for investigating these issues. A promising approach is to look a little deeper than the surface of the skin. A three-layer anatomical model of the head, which incorporates bone, muscle, skin and surface features, has been developed. As well as serving as a foundation which integrates all the facilities available within FACES, the advantage of the model is that it allows differing strategies to be used for modelling and animation. FACES is an interactive system, which helps with both the generation and animation of faces, while hiding the structural complexities of the face from the user. The software consists of four sub-systems; CONSTRUCT and MODIFY cater for modelling functionality, while ANIMATE allows animation sequences to be generated and RENDER provides for shading and motion evaluation.
APA, Harvard, Vancouver, ISO, and other styles
16

RODRIGUES, PAULA SALGADO LUCENA. "A SYSTEM FOR GENERATING DYNAMIC FACIAL EXPRESSIONS IN 3D FACIAL ANIMATION WITH SPEECH PROCESSING." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2007. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=11569@1.

Full text
Abstract:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
Esta tese apresenta um sistema para geração de expressões faciais dinâmicas sincronizadas com a fala em uma face realista tridimensional. Entende-se por expressões faciais dinâmicas aquelas que variam ao longo do tempo e que semanticamente estão relacionadas às emoções, à fala e a fenômenos afetivos que podem modificar o comportamento de uma face em uma animação. A tese define um modelo de emoção para personagens virtuais falantes, de- nominado VeeM (Virtual emotion-to-expression Model ), proposto a partir de uma releitura e uma reestruturação do modelo do círculo emocional de Plutchik. O VeeM introduz o conceito de um hipercubo emocional no espaço canônico do R4 para combinar emoções básicas, dando origem a emoções derivadas. Para validação do VeeM é desenvolvida uma ferramenta de autoria e apresentação de animações faciais denominada DynaFeX (Dynamic Facial eXpression), onde um processamento de fala é realizado para permitir o sincronismo entre fonemas e visemas. A ferramenta permite a definição e o refinamento de emoções para cada quadro ou grupo de quadros de uma animação facial. O subsistema de autoria permite também, alternativamente, uma manipulação em alto-nível, através de scripts de animação. O subsistema de apresentação controla de modo sincronizado a fala da personagem e os aspectos emocionais editados. A DynaFeX faz uso de uma malha poligonal tridimensional baseada no padrão MPEG-4 de animação facial, favorecendo a interoperabilidade da ferramenta com outros sistemas de animação facial.
This thesis presents a system for generating dynamic facial expressions synchronized with speech, rendered using a realistic three-dimensional face. Dynamic facial expressions are those temporal facial expressions semantically related to emotions, speech and affective inputs that can modify a facial animation's behavior. The thesis defines an emotion model for speaking virtual actors, named VeeM (Virtual emotion-to-expression Model), which is based on a revision of Plutchik's emotion wheel model. VeeM introduces the concept of an emotional hypercube in the R4 canonical space to combine pure emotions and create new derived emotions. In order to validate VeeM, an authoring and playback facial animation tool named DynaFeX (Dynamic Facial eXpression) has been developed, in which speech processing is performed to allow phoneme and viseme synchronization. The tool allows either the definition and refinement of emotions for each frame or group of frames, or the editing of the facial animation using a high-level approach based on animation scripts. The player controls the animation presentation, synchronizing the speech and emotional features with the virtual character's performance. DynaFeX is built on a three-dimensional polygonal mesh compliant with the MPEG-4 facial animation standard, which favors the tool's interoperability with other facial animation systems.
APA, Harvard, Vancouver, ISO, and other styles
17

King, Scott Alan. "A Facial Model and Animation Techniques for Animated Speech." The Ohio State University, 2001. http://rave.ohiolink.edu/etdc/view?acc_num=osu991423221.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Somasundaram, Arunachalam. "A facial animation model for expressive audio-visual speech." Columbus, Ohio : Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1148973645.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Erdogdu, Aysu. "Morphable 3d Facial Animation Based On Thin Plate Splines." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12611910/index.pdf.

Full text
Abstract:
The aim of this study is to present a novel three dimensional (3D) facial animation method for morphing emotions and facial expressions from one face model to another. For this purpose, smooth and realistic face models were animated with thin plate splines (TPS). Neutral face models were animated and compared with the actual expressive face models. Neutral and expressive face models were obtained from subjects via a 3D face scanner. The face models were preprocessed for pose and size normalization. Then muscle and wrinkle control points were located on the source face with neutral expression according to the human anatomy. The Facial Action Coding System (FACS) was used to determine the control points and the face regions in the underlying model. The final positions of the control points after a facial expression were received from the expressive scan data of the source face. Afterwards control points were transferred to the target face using the facial landmarks and TPS as the morphing function. Finally, the neutral target face was animated with control points by TPS. In order to visualize the method, face scans with expressions composed of a selected subset of action units found in the Bosphorus Database were used. Five lower-face and three upper-face action units are simulated during this study. For experimental results, the facial expressions were created on the 3D neutral face scan data of a human subject and the synthetic faces were compared to the subject's actual 3D scan data with the same facial expressions taken from the dataset.
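For context, the classical thin plate spline interpolant that underlies this kind of landmark-driven morphing takes the form below (shown with the standard 2D kernel U(r) = r^2 log r; 3D variants often use U(r) = r). This is the textbook formulation, not necessarily the exact variant implemented in the thesis.

```latex
f(\mathbf{x}) = a_0 + \mathbf{a}^{\top}\mathbf{x} + \sum_{i=1}^{n} w_i \, U\!\left(\lVert \mathbf{x} - \mathbf{c}_i \rVert\right),
\qquad U(r) = r^2 \log r,
\qquad \sum_{i=1}^{n} w_i = 0, \quad \sum_{i=1}^{n} w_i\, \mathbf{c}_i = \mathbf{0}
```

Here the c_i are the control points (facial landmarks) on the source face, and the coefficients are solved so that f maps each landmark to its displaced or target position; applying f to every vertex then morphs the whole mesh smoothly.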
APA, Harvard, Vancouver, ISO, and other styles
20

Scheidt, November. "A facial animation driven by X-ray microbeam data." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0021/MQ54745.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

MARTINS, ANTONIA MUNIZ. "TECHNOLOGICAL EXPERIMENTATION ON FACIAL EXPRESSIONS IN STOP MOTION ANIMATION." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2018. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=36153@1.

Full text
Abstract:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
PROGRAMA DE SUPORTE À PÓS-GRADUAÇÃO DE INSTS. DE ENSINO
PROGRAMA DE SUPORTE À PÓS-GRADUAÇÃO DE INSTITUIÇÕES COMUNITÁRIAS DE ENSINO PARTICULARES
Esta dissertação investiga a partir de uma abordagem multidisciplinar o processo de animação de expressões faciais em longas-metragens de stop motion, contextualizando sua produção no Brasil e no exterior. Por meio de uma pesquisa exploratória e experimental, investiga tecnologias de interfaces físicas aplicadas na animação de expressões faciais de bonecos. A pesquisa se estrutura em três momentos. Primeiro, uma discussão sobre a técnica de animação stop motion é proposta a partir de um paralelismo histórico entre o desenvolvimento tecnológico e a sua relação com a técnica. No segundo momento, informações sobre produções de longas-metragens que utilizam a técnica de stop motion com bonecos são levantadas, relacionando as soluções encontradas para fazer a animação de expressões faciais nas produções nacionais e internacionais com a interpretação das personagens e a unidade dos bonecos. O levantamento parte de revisão bibliográfica e do contato com quatro produtoras nacionais em fase de produção de longas-metragens com a técnica stop motion. No terceiro momento, discutimos o ato de animar um boneco e propomos uma série de experimentações, buscando novas formas de solucionar a animação de suas expressões faciais, com auxílio da tecnologia. As experimentações foram embasadas por processos de prática reflexiva, conhecer-na-ação e reflexão-na-ação. A pesquisa contribui para a ampliação da discussão sobre a técnica stop motion no Brasil, a sua otimização e produção, e assim motivar a produção de novos longas-metragens em stop motion no mercado brasileiro.
This dissertation investigates in a multidisciplinary way the process of facial expression animation in stop motion feature films. It contextualizes and observes the production of stop motion feature films in Brazil and abroad. Through exploratory and experimental research, we investigate physical interface technologies applied to puppets' facial expression animation. The research is structured in three parts. First, the stop motion technique is discussed through a historical parallel between technological development and the technique. Next, information is gathered on feature film productions, from Brazil and abroad, using the stop motion technique with puppets. Different techniques found in facial expression animation are observed in relation to the characters' interpretation and the unity of the puppet. The survey starts with a bibliographic review and interviews with four national studios in the production stage of stop motion feature films. In the last part, we discuss the act of animating a puppet and propose a series of experiments in search of new ways to animate the facial expressions of puppets with the aid of technology. The experiments were based on processes of reflexive practice, knowing-in-action and reflection-in-action. This research contributes to the expansion of the discussion about the stop motion technique in Brazil, its optimization and production, thus motivating the production of new stop motion feature films in the Brazilian market.
APA, Harvard, Vancouver, ISO, and other styles
22

Waite, Clea Theresa. "The facial action control editor, face : a parametric facial expression editor for computer generated animation." Thesis, Massachusetts Institute of Technology, 1989. http://hdl.handle.net/1721.1/14377.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Stoiber, Nicolas. "Modeling emotional facial expressions and their dynamics for realistic interactive facial animation on virtual characters." Rennes 1, 2010. https://tel.archives-ouvertes.fr/tel-00558851.

Full text
Abstract:
In all computer-graphics applications, one stimulating task has been the integration of believable virtual characters. Above all other features of a character, its face is arguably the most important one since it concentrates the most essential channels of human communication. In this work we focus on emotional facial expressions, which we believe represent the most interesting type of non-verbal facial communication. We propose an animation framework that learns practical characteristics of emotional facial expressions from human faces, and uses these characteristics to generate realistic facial animations for synthetic characters. Our main contributions are: - A method that automatically extracts a meaningful representation space for expressive facial deformations from the processing of actual data. This representation can then be used as an interface to intuitively manipulate facial expressions on any virtual character. - An animation system, based on a collection of motion models, which explicitly handles the dynamic aspect of natural facial expressions. The motion models learn the dynamic signature of expressions from data, and reproduce this natural signature when generating new facial movements. The obtained animation framework can ultimately synthesize realistic and adaptive facial animations in real-time interactive applications, such as video games or conversational agents. In addition to its efficiency, the system can easily be associated with higher-level notions of human emotions; this makes facial animation more intuitive to non-expert users, and to affective computing applications that usually work at the semantic level.
Dans les mondes virtuels, une des tâches les plus complexes est l'intégration de personnages virtuels réalistes et le visage est souvent considéré comme l'élément le plus important car il concentre les canaux de communications humains les plus essentiels. Dans ces travaux, nous nous concentrons sur les expressions faciales émotionnelles. Nous proposons une approche qui apprend les caractéristiques des expressions faciales directement sur des visages humains, et utilise cette connaissance pour générer des animations faciales réalistes pour des visages virtuels. Nos contributions sont les suivantes :une méthode capable d'extraire de données brutes un espace simple et pertinent pour la représentation des expressions faciales émotionnelles, cet espace de représentation peut ensuite être utilisé pour la manipulation intuitive des expressions ; un système d'animation, basé sur une collection de modèles de mouvement, qui pilote l'aspect dynamique de l'expressivité faciale. Les modèles de mouvement apprennent la signature dynamique des expressions naturelles à partir de données, et reproduisent cette signature lors de la synthèse de nouvelles animations. Le système global d'animation issu des ces travaux est capable de générer des animations faciales réalistes et adaptatives pour des applications temps-réel telles que les jeux vidéos ou les agents conversationnels. En plus de ses performances, le système peut être associé aux notions plus abstraites d'émotions humaines. Ceci rend le processus d'animation faciale plus intuitif, en particulier pour les utilisateurs non-experts et les applications d''affective computing' qui travaillent généralement à un niveau sémantique
APA, Harvard, Vancouver, ISO, and other styles
24

Waters, Keith. "The computer synthesis of expressive three-dimensional facial character animation." Thesis, Middlesex University, 1988. http://eprints.mdx.ac.uk/8095/.

Full text
Abstract:
This present research is concerned with the design, development and implementation of three-dimensional computer-generated facial images capable of expression gesture and speech. A review of previous work in chapter one shows that to date the model of computer-generated faces has been one in which construction and animation were not separated and which therefore possessed only a limited expressive range. It is argued in chapter two that the physical description of the face cannot be seen as originating from a single generic mould. Chapter three therefore describes data acquisition techniques employed in the computer generation of free-form surfaces which are applicable to three-dimensional faces. Expressions are the result of the distortion of the surface of the skin by the complex interactions of bone, muscle and skin. Chapter four demonstrates with static images and short animation sequences in video that a muscle model process algorithm can simulate the primary characteristics of the facial muscles. Three-dimensional speech synchronization was the most complex problem to achieve effectively. Chapter five describes two successful approaches: the direct mapping of mouth shapes in two dimensions to the model in three dimensions, and geometric distortions of the mouth created by the contraction of specified muscle combinations. Chapter six describes the implementation of software for this research and argues the case for a parametric approach. Chapter seven is concerned with the control of facial articulations and discusses a more biological approach to these. Finally chapter eight draws conclusions from the present research and suggests further extensions.
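The muscle model mentioned in chapter four animates expressions by displacing skin vertices under the pull of individual muscles. The Python sketch below shows a heavily simplified "pull toward the bony attachment with radial falloff" deformation to convey the general idea; it is not Waters' actual formulation, which also uses an angular falloff around the muscle vector, and all parameter values are invented.

```python
import numpy as np

def pull_toward(vertices, attachment, insertion, contraction, radius):
    """Very simplified linear-muscle-style deformation (illustrative only).

    Vertices within `radius` of the muscle's insertion point are pulled toward the
    bony attachment, with a smooth falloff; `contraction` in [0, 1] scales the pull.
    vertices: (N, 3); attachment, insertion: (3,) points; returns deformed (N, 3).
    """
    to_attach = attachment - vertices                       # (N, 3) pull directions
    dist = np.linalg.norm(vertices - insertion, axis=1)     # distance from the insertion
    falloff = np.clip(1.0 - dist / radius, 0.0, 1.0) ** 2   # quadratic falloff to zero
    return vertices + contraction * falloff[:, None] * to_attach
```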
APA, Harvard, Vancouver, ISO, and other styles
25

Hjelm, John. "Facial Rigging and Animation in 3D : From a videogame perspective." Thesis, Högskolan på Gotland, Institutionen för speldesign, teknik och lärande, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:hgo:diva-679.

Full text
Abstract:
What are some of the methods for rigging and animating a face in 3D, and which method is preferable when and under which circumstances? In this report I will examine a few of the different methods available when rigging and animating a face in 3D. I will be working mainly with Autodesk 3D Studio Max, so some knowledge of it is helpful to fully understand the process. At the end of the report I will look at the positive and negative aspects of each method, as well as which method is preferable in what kind of production or with which assets.
APA, Harvard, Vancouver, ISO, and other styles
26

Coull, Alasdair D. "A physically-based muscle and skin model for facial animation." Thesis, University of Glasgow, 2006. http://theses.gla.ac.uk/3450/.

Full text
Abstract:
Facial animation is a popular area of research which has been around for over thirty years, but even with this long time scale, automatically creating realistic facial expressions is still an unsolved goal. This work furthers the state of the art in computer facial animation by introducing a new muscle and skin model and a method of easily transferring a full muscle and bone animation setup from one head mesh to another with very little user input. The developed muscle model allows muscles of any shape to be accurately simulated, preserving volume during contraction and interacting with surrounding muscles and skin in a lifelike manner. The muscles can drive a rigid body model of a jaw, giving realistic physically-based movement to all areas of the face. The skin model has multiple layers, mimicking the natural structure of skin and it connects onto the muscle model and is deformed realistically by the movements of the muscles and underlying bones. The skin smoothly transfers underlying movements into skin surface movements and propagates forces smoothly across the face. Once a head model has been set up with muscles and bones, moving this muscle and bone set to another head is a simple matter using the developed techniques. The developed software employs principles from forensic reconstruction, using specific landmarks on the head to map the bone and muscles to the new head model and once the muscles and skull have been quickly transferred, they provide animation capabilities on the new mesh within minutes.
APA, Harvard, Vancouver, ISO, and other styles
27

Kuo, Po Tsun Paul. "Improved facial feature fitting for model based coding and animation." Thesis, University of Edinburgh, 2006. http://hdl.handle.net/1842/11019.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Trejo, Guerrero Sandra. "Model-Based Eye Detection and Animation." Thesis, Linköping University, Department of Electrical Engineering, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-7059.

Full text
Abstract:

In this thesis we present a system to extract the eye motion from a video stream containing a human face and apply this eye motion to a virtual character. By eye motion estimation, we mean the information that describes the location of the eyes in each frame of the video stream. By applying this eye motion estimation to a virtual character, the virtual face moves its eyes in the same way as the human face, synthesizing eye motion on a virtual character. In this study, a system capable of face tracking, eye detection and extraction, and finally iris position extraction using a video stream containing a human face has been developed. Once an image containing a human face is extracted from the current frame of the video stream, the detection and extraction of the eyes is applied. The detection and extraction of the eyes is based on edge detection. Then the iris center is determined by applying image preprocessing and region segmentation using edge features on the extracted eye picture.

Once we have extracted the eye motion, it is translated into MPEG-4 Facial Animation Parameters (FAPs). Thus we can improve the quality and quantity of facial animation expressions that we can synthesize on a virtual character.

APA, Harvard, Vancouver, ISO, and other styles
29

Huang, Jiajun. "Learning to Detect Compressed Facial Animation Forgery Data with Contrastive Learning." Thesis, The University of Sydney, 2022. https://hdl.handle.net/2123/29183.

Full text
Abstract:
Facial forgery generation, which can be used to modify facial attributes, is a critical threat to digital society. Recent Deep Neural Network based forgery generation methods, called Deepfakes, can generate high quality results that are hard to distinguish by human eyes. Various detection methods and datasets have been proposed for detecting such data. However, recent research pays less attention to facial animation, which is also important on the forgery attack side. It tries to animate face images with actions provided by driving videos. Our experiments show that the existing datasets are not sufficient to develop reliable detection methods for animation data. In response, we propose a facial animation dataset, called DeepFake MNIST+. It includes 10,000 facial animation videos covering 10 different actions. We also provide a baseline detection method and a comprehensive analysis of the method and dataset. Meanwhile, we notice that the data compression process can affect detection performance. Thus, creating a forgery detection model that can handle data compressed at unknown levels is critical. To enhance the performance of such models, we consider weakly and strongly compressed data as two views of the original data, which should have similar relationships with other samples. We propose a novel anti-compression forgery detection framework that maintains closer relations within data under different compression levels. Specifically, the algorithm measures the pair-wise similarity within data as the relations and forces the relations of weakly and strongly compressed data to be close to each other, thus improving the performance for detecting strongly compressed data. To achieve a better relation for strongly compressed data, guided by the less compressed data, we apply video-level contrastive learning to weakly compressed data. The experimental results show that the proposed algorithm adapts well to multiple compression levels.
APA, Harvard, Vancouver, ISO, and other styles
30

Pighin, Frédéric. "Modeling and animating realistic faces from images /." Thesis, Connect to this title online; UW restricted, 1999. http://hdl.handle.net/1773/6886.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Rudol, Piotr, and Mariusz Wzorek. "Editing, Streaming and Playing of MPEG-4 Facial Animations." Thesis, Linköping University, Department of Electrical Engineering, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1687.

Full text
Abstract:

Computer animated faces have found their way into a wide variety of areas, from entertainment such as computer games, through television and films, to user interfaces using “talking heads”. Animated faces are also becoming popular in web applications in the form of human-like assistants or newsreaders.

This thesis presents a few aspects of dealing with human face animations, namely: editing, playing and transmitting such animations. It describes a standard for handling human face animations, the MPEG-4 Face Animation, and shows the process of designing, implementing and evaluating applications compliant to this standard.

First, it presents changes introduced to the existing components of the Visage|toolkit package for dealing with facial animations, offered by the company Visage Technologies AB. It also presents the process of designing and implementing an application for editing facial animations compliant with the MPEG-4 Face Animation standard. Finally, it discusses several approaches to the problem of streaming facial animations over the Internet or the Local Area Network (LAN).

APA, Harvard, Vancouver, ISO, and other styles
32

Kähler, Kolja. "3D facial animation : recreating human heads with virtual skin, bones, and muscles /." Saarbrücken : VDM Verlag Dr. Müller, 2007. http://deposit.d-nb.de/cgi-bin/dokserv?id=3016048&prov=M&dok_var=1&dok_ext=htm.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Kaiser, Moritz [Verfasser]. "Construction of a 3D Facial Model for Tracking and Animation / Moritz Kaiser." München : Verlag Dr. Hut, 2013. http://d-nb.info/1031845178/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Igeland, Viktor. "Generating Facial Animation With Emotions In A Neural Text-To-Speech Pipeline." Thesis, Linköpings universitet, Medie- och Informationsteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-160535.

Full text
Abstract:
This thesis presents the work of incorporating facial animation with emotions into a neural text-to-speech pipeline. The project aims to allow for a digital human to utter sentences given only text, removing the need for video input. Our solution consists of a neural network able to generate blend shape weights from speech, which is placed in a neural text-to-speech pipeline. We build on ideas from previous work and implement a recurrent neural network using four LSTM layers and later extend this implementation by incorporating emotions. The emotions are learned by the network itself via the emotion layer and used at inference to produce the desired emotion. While using LSTMs for speech-driven facial animation is not a new idea, it has not yet been combined with the idea of using emotional states that are learned by the network itself. Previous approaches are either only two-dimensional, of complicated design or require manual laboring of the emotional states. Thus, we implement a network of simple design, taking advantage of the sequence processing ability of LSTMs and combining it with the idea of emotional states. We trained several variations of the network on data captured using a head-mounted camera, and the results of the best performing model were used in a subjective evaluation. During the evaluation the participants were presented with several videos and asked to rate the naturalness of the face uttering the sentence. The results showed that the naturalness of the face greatly depends on which emotion vector was used, as some vectors limited the mobility of the face. However, our best achieving emotion vector was rated at the same level of naturalness as the ground truth, proving our method successful. The purpose of the thesis was fulfilled as our implementation demonstrates one possibility of incorporating facial animation into a text-to-speech pipeline.
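The architecture described here (four LSTM layers mapping per-frame audio features to blend shape weights, with an emotion vector learned by the network and supplied at inference) can be sketched roughly as follows. This is an illustrative PyTorch outline, not the thesis' implementation: the audio-feature dimension, layer sizes, blendshape count, and the use of an embedding table for the emotion states are all assumptions.

```python
import torch
import torch.nn as nn

class SpeechToBlendshapes(nn.Module):
    """Sketch of an audio-to-blendshape LSTM with a learned emotion vector."""

    def __init__(self, audio_dim=39, emotion_count=8, emotion_dim=16,
                 hidden=256, blendshape_count=51):
        super().__init__()
        self.emotions = nn.Embedding(emotion_count, emotion_dim)  # learned emotion states
        self.lstm = nn.LSTM(audio_dim + emotion_dim, hidden,
                            num_layers=4, batch_first=True)        # four LSTM layers, as in the text
        self.out = nn.Linear(hidden, blendshape_count)

    def forward(self, audio_feats, emotion_id):
        # audio_feats: (batch, time, audio_dim); emotion_id: (batch,)
        e = self.emotions(emotion_id)                               # (batch, emotion_dim)
        e = e.unsqueeze(1).expand(-1, audio_feats.size(1), -1)      # repeat along the time axis
        x = torch.cat([audio_feats, e], dim=-1)
        h, _ = self.lstm(x)
        return self.out(h)                                          # per-frame blendshape weights
```

At inference, a chosen emotion id (or an interpolated emotion vector) would be fed alongside the audio features to shade the resulting animation, mirroring the thesis' use of the learned emotion vector.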
APA, Harvard, Vancouver, ISO, and other styles
35

Larsson, Niklas. "Morph targets and bone rigging for 3D facial animation : A comparative case study." Thesis, Uppsala universitet, Institutionen för speldesign, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-327302.

Full text
Abstract:
Facial animation is an integral and increasing part of 3D games. This study investigates how the two most common methods of 3D facial animation compare to each other. The goal of this study is to summarize the situation and to provide animators and software developers with relevant recommendations. The two most utilized methods of facial animation, morph target animation and bone-driven animation, are examined, with their strong and weak aspects presented. The investigation is based on literature analysis as well as a comparative case study approach used for comparing multiple formal and informal sources according to seven parameters: performance, production time, technical limitations, details and realism, ease of usability, cross-platform compatibility and common combinations of systems. The strengths and weaknesses of the two methods of 3D facial animation are compared and discussed, followed by a conclusion that presents recommendations on which method is preferable under different circumstances. In some cases, the results are inconclusive due to a lack of data. It is concluded that a combination of morph target and bone-driven animation will give the most artistic control if time is not limited.
APA, Harvard, Vancouver, ISO, and other styles
36

Correa, Renata. "Animação facial por computador baseada em modelagem biomecanica." [s.n.], 2007. http://repositorio.unicamp.br/jspui/handle/REPOSIP/259447.

Full text
Abstract:
Orientadores: Leo Pini Magalhães, Jose Mario De Martino
Dissertação (mestrado) - Universidade Estadual de Campinas, Faculdade de Engenharia Eletrica e de Computação
Resumo: A crescente busca pelo realismo em personagens virtuais encontrados em diversas aplicações na indústria do cinema, no ensino, jogos, entre outras, é a motivação do presente trabalho. O trabalho descreve um modelo de animação que emprega a estratégia biomecânica para o desenvolvimento de um protótipo computacional, chamado SABiom. A técnica utilizada baseia-se na simulação de características físicas da face humana, tais como as camadas de pele e músculos, que são modeladas de forma a permitir a simulação do comportamento mecânico do tecido facial sob a ação de forças musculares. Embora existam vários movimentos produzidos por uma face, o presente trabalho restringiu-se às simulações dos movimentos de expressões faciais focalizando os lábios. Para validar os resultados obtidos com o SABiom, comparou-se as imagens do modelo virtual obtidas através do protótipo desenvolvido com imagens obtidas de um modelo humano
Abstract: The increasing search for realism in virtual characters found in many applications such as movies, education, and games is the motivation of this thesis. The thesis describes an animation model that employs a biomechanical strategy for the development of a computing prototype, called SABiom. The method used is based on simulation of physical features of the human face, such as the layers of skin and muscles, which are modeled to allow simulation of the mechanical behavior of the facial tissue under the action of muscle forces. Although there are several movements produced by a face, the current work limits itself to simulations of facial expressions focusing on the lips. To validate the results obtained from SABiom, we compared the images of the virtual model with images from a human model.
Mestrado
Engenharia de Computação
Mestre em Engenharia Elétrica
APA, Harvard, Vancouver, ISO, and other styles
37

Al-Qayedi, Ali. "Internet video-conferencing using model-based image coding with agent technology." Thesis, University of Essex, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.298836.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Wang, Jing. "Reconstruction and Analysis of 3D Individualized Facial Expressions." Thesis, Université d'Ottawa / University of Ottawa, 2015. http://hdl.handle.net/10393/32588.

Full text
Abstract:
This thesis proposes a new way to analyze facial expressions through 3D scanned faces of real-life people. The expression analysis is based on learning the facial motion vectors, which are the differences between a neutral face and a face with an expression. Several expression analyses are based on real-life face databases, such as the 2D image-based Cohn-Kanade AU-Coded Facial Expression Database and the Binghamton University 3D Facial Expression Database. To handle large pose variations and increase the general understanding of facial behavior, a 2D image-based expression database is not enough. The Binghamton University 3D Facial Expression Database is mainly used for facial expression recognition, and it is difficult to use it to compare, resolve, and extend problems related to detailed 3D facial expression analysis. Our work aims to find a new and intuitive way of visualizing the detailed point-by-point movements of a 3D face model for a facial expression. We have created our own detailed 3D facial expression database, in which each expression model has been processed to have the same structure, so that differences between people can be compared for a given expression. The first step is to obtain identically structured but individually shaped face models. All the head models are recreated by deforming a generic model to adapt to a laser-scanned individualized face shape at both a coarse level and a fine level. We repeat this recreation method on different human subjects to establish the database. The second step is expression cloning. The motion vectors are obtained by subtracting a subject's neutral head model from the same subject's expression model, and the extracted facial motion vectors are applied onto a different human subject's neutral face. Facial expression cloning proves to be robust, fast, and easy to use. The last step is analyzing the facial motion vectors obtained in the second step. First, we transferred several human subjects' expressions onto a single neutral face. The analysis then compares different expression pairs in two main regions: whole-face surface analysis and facial muscle analysis. In our experiments, where smiling was chosen as the test expression, we find this scan-based approach a good way to visualize how differently people move their facial muscles for the same expression. People smile in a similar manner, moving their mouths and cheeks in similar orientations, but each person shows her or his own unique way of moving; the difference between individual smiles lies in the differences in the movements they make.
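A minimal sketch of the motion-vector idea described above, assuming all meshes share the same topology and are stored as NumPy arrays of vertex positions; the function names are illustrative, not from the thesis.

```python
import numpy as np

def extract_motion_vectors(neutral, expression):
    """Facial motion vectors: per-vertex displacement from the neutral scan to
    the expression scan (both remeshed to share the same topology)."""
    return expression - neutral                 # (V, 3)

def clone_expression(target_neutral, motion_vectors, scale=1.0):
    """Expression cloning: apply a source subject's motion vectors onto a
    different subject's neutral face of identical topology."""
    return target_neutral + scale * motion_vectors
```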
APA, Harvard, Vancouver, ISO, and other styles
39

Kaspersson, Max. "Facial Realism through Wrinkle Maps : The Perceived Impact of Different Dynamic Wrinkle Implementations." Thesis, Blekinge Tekniska Högskola, Institutionen för kreativa teknologier, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-10370.

Full text
Abstract:
Context. Real-time rendering has many challenges to overcome, one of them being character realism. One way to move towards realism is to use wrinkle maps. Although already used in several games, there might be room for improvement: common practice suggests using two wrinkle maps; however, if this number can be reduced, both texture usage and workload might be reduced as well. Objectives. To determine whether or not it is possible to reduce the number of wrinkle maps from two to one without any significant impact on the perceived realism of a character. Methods. After a base character model was created, a setup in Maya was made so that dynamic wrinkles could be displayed on the character using both one and two wrinkle maps. The face was animated and rendered, displaying emotions using both techniques. A two-alternative forced choice experiment was then conducted in which the participants selected which implementation, displaying the same facial expression under the same lighting condition, they perceived as most realistic. Results. The results showed that some facial expressions had more of an impact on perceived realism than others, favoring two wrinkle maps in every case where there was a significant difference. The expressions with the most impact were those that required different kinds of wrinkles in the same area of the face, such as the forehead, where one variant of wrinkles runs in a more vertical manner and the other runs horizontally along the forehead. Conclusions. Using one wrinkle map cannot fully replicate the effect of using two when it comes to realism. The difference between the implementations depends on the expression being displayed.
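For readers unfamiliar with wrinkle maps, the following sketch shows the common blending idea the thesis builds on: one or two wrinkle normal maps are blended over the base normal map by expression-driven region masks. The array layout and names are assumptions, not the thesis's implementation.

```python
import numpy as np

def apply_wrinkle_maps(base_normals, wrinkle_maps, region_weights):
    """Blend dynamic wrinkle normal maps over the base normal map.
    base_normals: (H, W, 3) tangent-space normals in [-1, 1].
    wrinkle_maps: list of (H, W, 3) wrinkle normal maps (two in common practice).
    region_weights: list of (H, W) masks scaled by the current expression pose."""
    blended = base_normals.astype(float).copy()
    for wrinkle, weight in zip(wrinkle_maps, region_weights):
        blended += weight[..., None] * (wrinkle - base_normals)
    norm = np.linalg.norm(blended, axis=-1, keepdims=True)
    return blended / np.maximum(norm, 1e-9)     # renormalize per texel
```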
APA, Harvard, Vancouver, ISO, and other styles
40

Ersotelos, Nikolaos. "Highly automated method for facial expression synthesis." Thesis, Brunel University, 2010. http://bura.brunel.ac.uk/handle/2438/4524.

Full text
Abstract:
The synthesis of realistic facial expressions has been a largely unexplored area for computer graphics scientists. Over the last three decades, several different construction methods have been formulated in order to obtain natural graphic results. Despite these advancements, current techniques still require costly resources, heavy user intervention, and specific training, and the outcomes are still not completely realistic. This thesis, therefore, aims to achieve an automated synthesis that produces realistic facial expressions at a low cost. The thesis proposes a highly automated approach to realistic facial expression synthesis, which allows for enhanced performance in speed (3 minutes maximum processing time) and quality with a minimum of user intervention. It also demonstrates a highly automated method of facial feature detection, allowing users to obtain their desired facial expression synthesis with minimal physical input. Moreover, it describes a novel approach to normalizing the illumination settings between source and target images, thereby allowing the algorithm to work accurately even under different lighting conditions. Finally, we present the results obtained from the proposed techniques, together with our conclusions, at the end of the thesis.
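The thesis's own illumination normalization is not specified in the abstract; the sketch below shows one simple stand-in, a mean/standard-deviation luminance transfer between source and target images, purely to make the idea concrete.

```python
import numpy as np

def match_illumination(source, target):
    """Shift and scale the source image's intensity statistics to match the
    target's; a simple mean/std transfer, not necessarily the thesis's method."""
    src = source.astype(np.float64)
    tgt = target.astype(np.float64)
    out = (src - src.mean()) / (src.std() + 1e-9) * tgt.std() + tgt.mean()
    return np.clip(out, 0, 255).astype(np.uint8)
```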
APA, Harvard, Vancouver, ISO, and other styles
41

Obaid, Mohammad Hisham Rashid. "A quadratic deformation model for representing facial expressions." Thesis, University of Canterbury. Computer Science and Software Engineering, 2011. http://hdl.handle.net/10092/5345.

Full text
Abstract:
Techniques for facial expression generation are employed in several applications in computer graphics as well as in the processing of image and video sequences containing faces. Video coding standards such as MPEG-4 support facial expression animation. There are a number of facial expression representations that are application dependent or dependent on a facial animation standard, and most of them require considerable computational effort. We have developed a completely novel and effective method for representing the primary facial expressions using a model-independent set of deformation parameters (derived using rubber-sheet transformations), which can be easily applied to transform facial feature points. The developed mathematical model captures the necessary non-linear characteristics of deformations of facial muscle regions, producing well-recognizable expressions on images, sketches, and three-dimensional models of faces. To show the effectiveness of the method, we developed a variety of novel applications such as facial expression recognition, expression mapping, facial animation, and caricature generation.
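As a rough illustration of a quadratic (rubber-sheet style) deformation of feature points, the sketch below applies a second-order polynomial to 2D points; the coefficient layout is an assumption and not necessarily the parameterization used in the thesis.

```python
import numpy as np

def quadratic_deform(points, coeff_x, coeff_y):
    """Apply a 2D quadratic deformation to facial feature points.
    Each output coordinate is a quadratic polynomial of the input:
    x' = a0 + a1*x + a2*y + a3*x^2 + a4*x*y + a5*y^2 (likewise for y').
    points: (N, 2); coeff_x, coeff_y: (6,) coefficient vectors per axis."""
    x, y = points[:, 0], points[:, 1]
    basis = np.stack([np.ones_like(x), x, y, x * x, x * y, y * y], axis=1)  # (N, 6)
    return np.stack([basis @ coeff_x, basis @ coeff_y], axis=1)             # (N, 2)
```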
APA, Harvard, Vancouver, ISO, and other styles
42

He, Zhi-guang, and 何智光. "3D Facial Animation." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/09169533194285308870.

Full text
Abstract:
Master's thesis
I-Shou University
Master's Program, Department of Electrical Engineering
Academic year: 93
How to realistically generate 3D human face models and their various expressive movements is a challenge in computer graphics. With the progress of computer technology, people demand more and more multimedia effects. Therefore, the reconstruction of 3D human facial models and facial animation are being enthusiastically investigated. There are several kinds of methods used to reconstruct 3D models. In this thesis, we animate 3D object models that are reconstructed using photometric stereo and shape from contours with three light sources. The 3D models are stored as point data, and the 3D curved surfaces generated by applying Delaunay triangulation to the 3D models are displayed on a personal computer. Then, we animate the motion sequence of the mouth on the 3D human facial model for pronouncing Chinese and English words using 3D Studio MAX. Animations for different words and sentences, as well as several facial expressions, are shown in this thesis.
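A minimal sketch of the surface-generation step mentioned above, assuming SciPy is available and that the scanned points form a single-valued depth field; this is illustrative, not the thesis's code.

```python
import numpy as np
from scipy.spatial import Delaunay

def triangulate_face_points(points_3d):
    """Mesh a face point cloud by Delaunay-triangulating its (x, y) projection,
    which works when the scan has one depth value per (x, y) location.
    points_3d: (V, 3) array of reconstructed points."""
    tri = Delaunay(points_3d[:, :2])        # triangulate in the image plane
    return points_3d, tri.simplices         # vertices and (F, 3) triangle indices
```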
APA, Harvard, Vancouver, ISO, and other styles
43

Jun-Ze, Huang. "Speech-Driven 3D Facial Animation." 2006. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0001-2407200613525500.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Huang, Jun-Ze, and 黃鈞澤. "Speech-Driven 3D Facial Animation." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/22243033424866606258.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Information Management
Academic year: 94
It is often difficult to animate a face model speaking a specific speech; even for professional animators, it takes a lot of time. Our work provides a speech-driven 3D facial animation system which allows the user to easily generate facial animations. The user only needs to give a speech recording as the input, and the output is a 3D facial animation matching that speech. Our work can be divided into three sub-systems. The first is the MMM (multidimensional morphable model). The MMM is built from pre-recorded training video using machine learning techniques, and we use it to generate a realistic speech video corresponding to the input speech. The second part is facial tracking, which extracts the feature points of the human subject in the synthetic speech video. The third part is Mesh-IK (mesh-based inverse kinematics). Mesh-IK takes the motion of the feature points as a guideline to deform 3D face models, making the resulting model match the appearance of the corresponding frame of the speech video. Thus we obtain a 3D facial animation as the output. Facial tracking and Mesh-IK can also take a real speech video, or even a real expression video, as the input and produce the corresponding facial animations.
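To make the morphable-model idea concrete, the sketch below forms a new mouth configuration as a weighted combination of prototype shapes and appearances; the prototype arrays and names are placeholders, since the thesis's MMM is learned from training video rather than specified in the abstract.

```python
import numpy as np

def mmm_synthesize(shape_prototypes, texture_prototypes, shape_w, texture_w):
    """Morphable-model style synthesis: a new frame is a weighted combination
    of prototype shapes and appearances.
    shape_prototypes: (K, V, 2) feature-point layouts; texture_prototypes: (K, H, W);
    shape_w, texture_w: (K,) weight vectors."""
    shape = np.tensordot(shape_w, shape_prototypes, axes=1)        # (V, 2)
    texture = np.tensordot(texture_w, texture_prototypes, axes=1)  # (H, W)
    return shape, texture
```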
APA, Harvard, Vancouver, ISO, and other styles
45

Lo, Ching-Tzuu, and 羅慶祖. "Texture Mapping for Facial Animation." Thesis, 1993. http://ndltd.ncl.edu.tw/handle/93788378277565596580.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
Academic year: 81
Computer animation of human facial expressions can be very sophisticated because there is an enormous set of possible facial expressions. The study of how to generate different facial expressions efficiently and how to render realistic images is an important topic in computer graphics. In early studies of computer facial animation, most researchers focused on the generation of facial expressions but ignored the rendering of realistic images. They used wireframes to represent the human face model without considering facial texture. Although this has no direct influence on the theory of facial animation, their results were based on a plastic-looking face instead of a real model; in particular, they ignored the fine structure of eye, mouth, skin, and hair textures. The objective of this thesis is to map a scanned 2D image onto a 3D face model to produce not only a more lifelike model but also facial expressions with textures. This study is based on computer graphics techniques such as texture mapping and on image warping techniques such as sampling and transformation.
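A minimal sketch of the texture-sampling step behind this kind of mapping, assuming per-vertex UV coordinates and an RGB image stored as NumPy arrays; the bilinear filter shown here is a common choice, not necessarily the one used in the thesis.

```python
import numpy as np

def sample_texture(image, uv):
    """Bilinearly sample a scanned 2D face image at per-vertex UV coordinates
    (in [0, 1]) so each 3D vertex carries a colour from the photograph.
    image: (H, W, 3) array; uv: (N, 2) array of texture coordinates."""
    h, w = image.shape[:2]
    x = np.clip(uv[:, 0] * (w - 1), 0, w - 1)
    y = np.clip(uv[:, 1] * (h - 1), 0, h - 1)
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = np.minimum(x0 + 1, w - 1), np.minimum(y0 + 1, h - 1)
    fx, fy = (x - x0)[:, None], (y - y0)[:, None]
    top = (1 - fx) * image[y0, x0] + fx * image[y0, x1]
    bottom = (1 - fx) * image[y1, x0] + fx * image[y1, x1]
    return (1 - fy) * top + fy * bottom        # (N, 3) interpolated colours
```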
APA, Harvard, Vancouver, ISO, and other styles
46

Wei, Shu-Chen, and 韋淑貞. "Animation of 3D facial expressions." Thesis, 1993. http://ndltd.ncl.edu.tw/handle/38931393795866396065.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Hodgkinson, Warren. "Interactive speech-driven facial animation." Thesis, 2008. http://hdl.handle.net/10210/807.

Full text
Abstract:
One of the fastest developing areas in the entertainment industry is digital animation. Television programmes and movies frequently use 3D animations to enhance or replace actors and scenery. With the increase in computing power, research is also being done to apply these animations in an interactive manner. Two of the biggest obstacles to the success of these undertakings are control (manipulating the models) and realism. This text describes many ways to improve control and realism so that interactive animation becomes possible. Specifically, lip-synchronisation (driven by human speech) and various modeling and rendering techniques are discussed. A prototype that shows that interactive animation is feasible is also described.
Mr. A. Hardy; Prof. S. von Solms
APA, Harvard, Vancouver, ISO, and other styles
48

Serra, José Mário Figueiredo. "Intelligent facial animation: Creating emphatic characters with stimuli based animation." Doctoral thesis, 2017. https://hdl.handle.net/10216/110175.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Serra, José Mário Figueiredo. "Intelligent facial animation: Creating emphatic characters with stimuli based animation." Tese, 2017. https://hdl.handle.net/10216/110175.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Bastani, Hanieh. "A Nonlinear Framework for Facial Animation." Thesis, 2008. http://hdl.handle.net/1807/10428.

Full text
Abstract:
This thesis researches techniques for modelling static facial expressions, as well as the dynamics of continuous facial motion. We demonstrate how static and dynamic properties of facial expressions can be represented within a linear and a nonlinear context, respectively. These two representations do not act in isolation, but are mutually reinforcing, yielding a cohesive framework for the analysis, animation, and manipulation of expressive faces. We derive a basis for the linear space of expressions through Principal Components Analysis (PCA). We introduce and formalize the notion of "expression manifolds", manifolds residing in PCA space that model the motion dynamics of semantically similar expressions. We then integrate these manifolds into an animation workflow by performing Nonlinear Dimensionality Reduction (NLDR) on the expression manifolds. This operation yields expression maps that encode a wealth of information about complex facial dynamics in a low-dimensional space that is intuitive to navigate and efficient to manage.
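As an illustration of the PCA step described above, the sketch below derives a linear expression basis from a stack of same-topology expression meshes via SVD; array shapes and names are assumptions, not taken from the thesis.

```python
import numpy as np

def build_expression_basis(expression_meshes, num_components):
    """Derive a linear expression space with PCA: each scanned expression is
    flattened to a vector, and the leading principal directions of the centred
    data span the space in which expression manifolds can then be embedded.
    expression_meshes: (N, V, 3) array of N expression meshes."""
    data = expression_meshes.reshape(expression_meshes.shape[0], -1)  # (N, 3V)
    mean = data.mean(axis=0)
    centred = data - mean
    # SVD of the centred data gives principal directions without forming a covariance matrix
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    basis = vt[:num_components]                                       # (k, 3V)
    coords = centred @ basis.T                                        # (N, k) PCA coordinates
    return mean, basis, coords
```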
APA, Harvard, Vancouver, ISO, and other styles
