Dissertations / Theses on the topic 'Facial animation'
Consult the top 50 dissertations / theses for your research on the topic 'Facial animation.'
Miller, Kenneth D. (Kenneth Doyle). "A system for advanced facial animation." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/40605.
Includes bibliographical references (leaves 35-36).
by Kenneth D. Miller, III.
M.Eng.
Kalra, Prem Kumar. "An interactive multimodal facial animation system /." [S.l.] : [s.n.], 1993. http://library.epfl.ch/theses/?nr=1183.
Lin, Alice J. "THREE DIMENSIONAL MODELING AND ANIMATION OF FACIAL EXPRESSIONS." UKnowledge, 2011. http://uknowledge.uky.edu/gradschool_diss/841.
Sloan, Robin J. S. "Emotional avatars : choreographing emotional facial expression animation." Thesis, Abertay University, 2011. https://rke.abertay.ac.uk/en/studentTheses/2363eb4a-2eba-4f94-979f-77b0d6586e94.
Zhao, Hui. "Expressive facial animation transfer for virtual actors /." View abstract or full-text, 2007. http://library.ust.hk/cgi/db/thesis.pl?CSED%202007%20ZHAO.
Pueblo, Stephen J. (Stephen Jerell). "Videorealistic facial animation for speech-based interfaces." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/53179.
Includes bibliographical references (p. 79-81).
This thesis explores the use of computer-generated, videorealistic facial animation (avatars) in speech-based interfaces to understand whether the use of such animations enhances the end user's experience. Research in spoken dialog systems is a robust area that has now permeated everyday life, most notably in spoken telephone dialog systems. Over the past decade, research with videorealistic animations, both photorealistic and non-photorealistic, has reached the point where there is little discernible difference between the mouth movements of videorealistic animations and those of actual humans. Because of the minute differences between the two, videorealistic speech animations are an ideal candidate for use in dialog systems. This thesis presents two videorealistic facial animation systems: a web-based system and a real-time system.
by Stephen J. Pueblo.
M.Eng.
Barker, Dean. "Computer facial animation for sign language visualization." Thesis, Stellenbosch : Stellenbosch University, 2005. http://hdl.handle.net/10019.1/50300.
ENGLISH ABSTRACT: Sign Language is a fully-fledged natural language possessing its own syntax and grammar, a fact which implies that the problem of machine translation from a spoken source language to Sign Language is at least as difficult as machine translation between two spoken languages. Sign Language, however, is communicated in a modality fundamentally different from all spoken languages. Machine translation to Sign Language is therefore burdened not only by a mapping from one syntax and grammar to another, but also by a non-trivial transformation from one communicational modality to another. With regard to the computer visualization of Sign Language, what is required is a three-dimensional, temporally accurate visualization of signs, including both the manual and non-manual components, which can be viewed from arbitrary perspectives, making accurate understanding and imitation more feasible. Moreover, given that facial expressions and movements represent a fundamental basis for the majority of non-manual signs, any system concerned with the accurate visualization of Sign Language must rely heavily on a facial animation component capable of representing a well-defined set of emotional expressions as well as a set of arbitrary facial movements. This thesis investigates the development of such a computer facial animation system. We address the problem of delivering coordinated, temporally constrained facial animation sequences in an online environment using VRML. Furthermore, we investigate the animation, using a muscle model process, of arbitrary three-dimensional facial models consisting of multiple aligned NURBS surfaces of varying refinement. Our results showed that this approach is capable of representing and manipulating high-fidelity three-dimensional facial models in such a manner that localized distortions of the models result in the recognizable and realistic display of human facial expressions, and that these facial expressions can be displayed in a coordinated, synchronous manner.
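The "muscle model process" mentioned in the abstract belongs to a well-known family of geometric muscle models. As a rough, hypothetical illustration of the idea only (Barker's actual system operates on aligned NURBS surfaces, and the constants here are invented), a Waters-style linear muscle acting on a point mesh can be sketched in a few lines:

```python
import numpy as np

def apply_linear_muscle(vertices, attachment, insertion, contraction, radius):
    """Pull skin vertices toward a muscle's attachment point.

    A simplified Waters-style linear muscle: vertices within `radius`
    of the insertion point are displaced along the muscle direction,
    with a smooth cosine falloff away from the insertion.
    """
    direction = attachment - insertion
    direction = direction / np.linalg.norm(direction)
    d = np.linalg.norm(vertices - insertion, axis=1)        # (N,)
    weight = np.where(d < radius, 0.5 * (1.0 + np.cos(np.pi * d / radius)), 0.0)
    return vertices + contraction * weight[:, None] * direction

# toy usage: a flat 10x10 "skin" patch pulled by one muscle
xx, yy = np.meshgrid(np.linspace(-1, 1, 10), np.linspace(-1, 1, 10))
skin = np.stack([xx.ravel(), yy.ravel(), np.zeros(100)], axis=1)
pulled = apply_linear_muscle(skin,
                             attachment=np.array([0.0, 1.0, 0.5]),
                             insertion=np.array([0.0, 0.0, 0.0]),
                             contraction=0.3, radius=0.8)
```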
Ellner, Henrik. "Facial animation parameter extraction using high-dimensional manifolds." Thesis, Linköping University, Department of Electrical Engineering, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-6117.
This thesis presents and examines a method that can potentially be used for extracting parameters from a manifold in a space. The method is presented, and a potential application is described: determining FAP values. FAP values are used for parameterizing faces, which can e.g. be used to compress data when sending video sequences over limited bandwidth.
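For context on why FAP values save bandwidth: MPEG-4 describes a face pose with 68 Facial Animation Parameters, so each frame can travel as a short quantized vector rather than a mesh. A hedged sketch, where the 16-bit quantization scale is an arbitrary choice and not taken from the thesis:

```python
import numpy as np

def encode_fap_frame(fap_values, scale=1024.0):
    """Quantize a FAP vector to int16, giving a fixed-size payload
    (136 bytes for 68 FAPs) instead of a dense mesh per frame."""
    q = np.clip(np.round(np.asarray(fap_values) * scale), -32768, 32767)
    return q.astype(np.int16).tobytes()

def decode_fap_frame(payload, scale=1024.0):
    """Recover the (approximate) FAP vector from the byte payload."""
    return np.frombuffer(payload, dtype=np.int16).astype(np.float32) / scale

frame = np.random.uniform(-1, 1, 68)   # hypothetical FAP vector
payload = encode_fap_frame(frame)      # 136 bytes per frame
recovered = decode_fap_frame(payload)
```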
Smith, Andrew Patrick. "Muscle-based facial animation using blendshapes in superposition." Texas A&M University, 2006. http://hdl.handle.net/1969.1/5007.
Alvi, O. "Facial reconstruction and animation in tele-immersive environment." Thesis, University of Salford, 2010. http://usir.salford.ac.uk/26547/.
Aina, Olusola Olumide. "Generating anatomical substructures for physically-based facial animation." Thesis, Bournemouth University, 2011. http://eprints.bournemouth.ac.uk/18900/.
Sánchez, Lorenzo Manuel Antonio. "Techniques for performance based, real-time facial animation." Thesis, University of Sheffield, 2006. http://etheses.whiterose.ac.uk/14897/.
Barrielle, Vincent. "Leveraging Blendshapes for Realtime Physics-Based Facial Animation." Thesis, CentraleSupélec, 2017. http://www.theses.fr/2017CSUP0003.
Generating synthetic facial animation is a crucial step in the creation of content for a wide variety of digital media such as movies and video games. However, producing convincing results is challenging, since humans are experts in analyzing facial expressions and will hence detect any artifact. The dominant paradigm for the production of high-quality facial animation is the blendshapes paradigm, where facial expressions are decomposed as a linear combination of more basic expressions. However, this technique requires large amounts of work to reach the desired quality, which reserves high-quality animation to large-budget movies. Producing high-quality facial animation is possible using physical simulation, but this requires the costly acquisition of medical imaging data. We propose to merge the blendshapes and physical simulation paradigms, to build upon the ubiquity of blendshapes while benefiting from physical simulation for complex effects. We therefore introduce blendforces, a paradigm where blendshapes are interpreted as a basis for approximating the forces emanating from the facial muscles. We show that, combined with an appropriate face physical system, these blendforces can be used to produce convincing facial animation, with natural skin dynamics, handling of lip contacts, sticky lips, inertial effects and handling of gravity. We encompass this framework within a practical realtime performance capture setup, where we produce realtime facial animation with physical effects from a simple RGB camera feed. To the best of our knowledge, this constitutes the first instance of realtime physical simulation applied to the challenging task of facial animation.
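The blendshape decomposition referred to above is linear: a face is a neutral shape plus a weighted sum of expression deltas, f(w) = b0 + sum_i w_i * (b_i - b0). Blendforces reinterpret the same weights as scaling a basis of muscle-force fields fed to a physics solver; the sketch below shows only the standard geometric blendshape evaluation, with toy data:

```python
import numpy as np

def blend(neutral, targets, weights):
    """Evaluate a blendshape rig: neutral + weighted sum of deltas.

    neutral : (V, 3) rest-pose vertex positions
    targets : (K, V, 3) expression shapes (smile, jaw open, ...)
    weights : (K,) activation of each expression, typically in [0, 1]
    """
    deltas = targets - neutral[None, :, :]           # (K, V, 3)
    return neutral + np.tensordot(weights, deltas, axes=1)

# toy usage: 4 vertices, 2 expression targets
neutral = np.zeros((4, 3))
targets = np.random.randn(2, 4, 3) * 0.01
face = blend(neutral, targets, weights=np.array([0.7, 0.2]))
```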
Zavala, Chmelicka Marco Enrique. "Visual prosody in speech-driven facial animation: elicitation, prediction, and perceptual evaluation." Texas A&M University, 2005. http://hdl.handle.net/1969.1/2436.
Full textPatel, Manjula. "Making FACES : the Facial Animation, Construction and Editing System." Thesis, University of Bath, 1991. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.524137.
Full textRODRIGUES, PAULA SALGADO LUCENA. "A SYSTEM FOR GENERATING DYNAMIC FACIAL EXPRESSIONS IN 3D FACIAL ANIMATION WITH SPEECH PROCESSING." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2007. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=11569@1.
This thesis presents a system for generating dynamic facial expressions synchronized with speech, rendered using a three-dimensional realistic face. Dynamic facial expressions are temporal facial expressions semantically related to emotions, speech and affective inputs that can modify a facial animation's behavior. The thesis defines an emotion model for speaking virtual actors, named VeeM (Virtual emotion-to-expression Model), which is based on a revision of Plutchik's emotional wheel model. VeeM introduces the concept of an emotional hypercube in the R4 canonical space to combine pure emotions and create new derived emotions. In order to validate VeeM, an authoring and playback facial animation tool named DynaFeX (Dynamic Facial eXpression) has been developed, in which speech processing is performed to allow phoneme and viseme synchronization. The tool allows the definition and refinement of emotions for each frame or group of frames, as well as facial animation editing using a high-level approach based on animation scripts. The player subsystem controls the animation presentation, synchronizing the speech and emotional features with the virtual character's performance. DynaFeX is built over a three-dimensional polygonal mesh, compliant with the MPEG-4 facial animation standard, which favors the tool's interoperability with other facial animation systems.
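To make the "emotional hypercube" idea concrete: basic emotions sit on the axes of a unit cube in the canonical R4 space, and derived emotions fall at interior points obtained by combining them. A purely illustrative sketch; the four axis labels and the averaging rule are hypothetical, not VeeM's actual definitions:

```python
import numpy as np

# Hypothetical 4-axis emotion basis; VeeM derives its own axes from
# Plutchik's wheel, these labels are only for illustration.
AXES = ["joy-sadness", "trust-disgust", "fear-anger", "surprise-anticipation"]

def combine(*emotions):
    """Combine emotion points in the R^4 hypercube by averaging,
    keeping the result inside the unit cube [0, 1]^4."""
    return np.clip(np.mean(emotions, axis=0), 0.0, 1.0)

joy   = np.array([1.0, 0.5, 0.5, 0.5])
trust = np.array([0.5, 1.0, 0.5, 0.5])
love  = combine(joy, trust)   # a derived emotion as an interior point
```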
King, Scott Alan. "A Facial Model and Animation Techniques for Animated Speech." The Ohio State University, 2001. http://rave.ohiolink.edu/etdc/view?acc_num=osu991423221.
Full textSomasundaram, Arunachalam. "A facial animation model for expressive audio-visual speech." Columbus, Ohio : Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1148973645.
Full textErdogdu, Aysu. "Morphable 3d Facial Animation Based On Thin Plate Splines." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12611910/index.pdf.
…actual 3D scan data with the same facial expressions taken from the dataset.
Scheidt, November. "A facial animation driven by X-ray microbeam data." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0021/MQ54745.pdf.
Full textMARTINS, ANTONIA MUNIZ. "TECHNOLOGICAL EXPERIMENTATION ON FACIAL EXPRESSIONS IN STOP MOTION ANIMATION." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2018. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=36153@1.
This dissertation investigates in a multidisciplinary way the process of facial expression animation in stop motion feature films. It contextualizes and observes the production of stop motion feature films in Brazil and abroad. Through exploratory and experimental research, we investigate physical interface technologies applied to the animation of puppets' facial expressions. The research is structured in three parts. First, the stop motion technique is discussed through a historical parallel between technological development and its relation to the technique. Next, information is gathered on feature film productions, from Brazil and abroad, using the stop motion technique with puppets. Different techniques found in facial expression animation are observed in relation to the characters' interpretation and the unity of the puppet. The survey starts with a bibliographic review and interviews with four national studios in the production stage of stop motion feature films. In the last part, we discuss the act of animating a puppet and propose a series of experiments in search of new ways to animate the facial expressions of puppets with the aid of technology. The experiments were based on processes of reflexive practice, knowing-in-action and reflection-in-action. This research contributes to the expansion of the discussion about the stop motion technique in Brazil, its optimization and production, thus motivating the production of new stop motion feature films in the Brazilian market.
Waite, Clea Theresa. "The facial action control editor, face : a parametric facial expression editor for computer generated animation." Thesis, Massachusetts Institute of Technology, 1989. http://hdl.handle.net/1721.1/14377.
Full textStoiber, Nicolas. "Modeling emotional facial expressions and their dynamics for realistic interactive facial animation on virtual characters." Rennes 1, 2010. https://tel.archives-ouvertes.fr/tel-00558851.
In virtual worlds, one of the most complex tasks is the integration of realistic virtual characters, and the face is often considered the most important element because it concentrates the most essential channels of human communication. In this work, we focus on emotional facial expressions. We propose an approach that learns the characteristics of facial expressions directly from human faces and uses this knowledge to generate realistic facial animation for virtual faces. Our contributions are: a method capable of extracting, from raw data, a simple and relevant space for representing emotional facial expressions, a representation space that can then be used for intuitive manipulation of expressions; and an animation system, based on a collection of motion models, that drives the dynamic aspect of facial expressiveness. The motion models learn the dynamic signature of natural expressions from data and reproduce this signature when synthesizing new animations. The overall animation system resulting from this work is capable of generating realistic, adaptive facial animation for real-time applications such as video games or conversational agents. Beyond its performance, the system can be linked to the more abstract notions of human emotions, which makes the facial animation process more intuitive, in particular for non-expert users and for affective-computing applications that generally operate at a semantic level.
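A common way to extract "a simple and relevant space" of expressions from raw data, in the spirit of what the abstract describes (though not necessarily Stoiber's exact method), is a PCA-style decomposition of tracked face measurements:

```python
import numpy as np

def expression_space(frames, dims=2):
    """Learn a low-dimensional expression space from raw face data.

    frames : (N, D) matrix, one flattened face measurement per frame
             (e.g. stacked 2D/3D landmark coordinates)
    Returns the mean face, a (dims, D) basis, and the (N, dims)
    coordinates of the training frames in that space.
    """
    mean = frames.mean(axis=0)
    _, _, vt = np.linalg.svd(frames - mean, full_matrices=False)
    basis = vt[:dims]                      # principal expression axes
    coords = (frames - mean) @ basis.T     # each frame as a small vector
    return mean, basis, coords

# toy data: 200 frames of 30 landmarks (x, y) -> D = 60
frames = np.random.randn(200, 60)
mean, basis, coords = expression_space(frames, dims=2)
reconstructed = mean + coords @ basis      # back to landmark space
```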
Waters, Keith. "The computer synthesis of expressive three-dimensional facial character animation." Thesis, Middlesex University, 1988. http://eprints.mdx.ac.uk/8095/.
Full textHjelm, John. "Facial Rigging and Animation in 3D : From a videogame perspective." Thesis, Högskolan på Gotland, Institutionen för speldesign, teknik och lärande, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:hgo:diva-679.
Full textCoull, Alasdair D. "A physically-based muscle and skin model for facial animation." Thesis, University of Glasgow, 2006. http://theses.gla.ac.uk/3450/.
Full textKuo, Po Tsun Paul. "Improved facial feature fitting for model based coding and animation." Thesis, University of Edinburgh, 2006. http://hdl.handle.net/1842/11019.
Full textTrejo, Guerrero Sandra. "Model-Based Eye Detection and Animation." Thesis, Linköping University, Department of Electrical Engineering, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-7059.
In this thesis we present a system to extract eye motion from a video stream containing a human face and apply this eye motion to a virtual character. By eye motion estimation, we mean the information which describes the location of the eyes in each frame of the video stream. Applying this eye motion estimation to a virtual character, we achieve a virtual face that moves its eyes in the same way as the human face, synthesizing eye motion on a virtual character. In this study, a system capable of face tracking, eye detection and extraction, and finally iris position extraction from a video stream containing a human face has been developed. Once an image containing a human face is extracted from the current frame of the video stream, the detection and extraction of the eyes is applied. The detection and extraction of the eyes is based on edge detection. Then the iris center is determined by applying image preprocessing and region segmentation using edge features on the extracted eye image.
Once we have extracted the eye motion, it is translated into MPEG-4 Facial Animation Parameters (FAPs). Thus we can improve the quality and quantity of facial animation expressions that we can synthesize on a virtual character.
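As a rough stand-in for the iris-localization step (the thesis uses edge features; the dark-pixel centroid shortcut below is an assumption for illustration), the iris center in a cropped grayscale eye image can be estimated from its darkest pixels:

```python
import numpy as np

def iris_center(eye, dark_fraction=0.1):
    """Estimate the iris center in a cropped grayscale eye image.

    Takes the darkest `dark_fraction` of pixels (the iris/pupil is
    much darker than sclera and skin) and returns the centroid of
    that region as (row, col).
    """
    threshold = np.quantile(eye, dark_fraction)
    rows, cols = np.nonzero(eye <= threshold)
    return rows.mean(), cols.mean()

# toy image: bright background with a dark disc centered at (12, 20)
eye = np.full((24, 40), 200.0)
rr, cc = np.ogrid[:24, :40]
eye[(rr - 12) ** 2 + (cc - 20) ** 2 < 25] = 30.0
print(iris_center(eye))   # ~ (12.0, 20.0)
```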
Huang, Jiajun. "Learning to Detect Compressed Facial Animation Forgery Data with Contrastive Learning." Thesis, The University of Sydney, 2022. https://hdl.handle.net/2123/29183.
Full textPighin, Fre︠d︡e︠r︡ic. "Modeling and animating realistic faces from images /." Thesis, Connect to this title online; UW restricted, 1999. http://hdl.handle.net/1773/6886.
Full textRudol, Piotr, and Mariusz Wzorek. "Editing, Streaming and Playing of MPEG-4 Facial Animations." Thesis, Linköping University, Department of Electrical Engineering, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1687.
Full textComputer animated faces have found their way into a wide variety of areas. Starting from entertainment like computer games, through television and films to user interfaces using “talking heads”. Animated faces are also becoming popular in web applications in form of human-like assistants or newsreaders.
This thesis presents a few aspects of dealing with human face animations, namely editing, playing and transmitting such animations. It describes a standard for handling human face animations, MPEG-4 Face Animation, and shows the process of designing, implementing and evaluating applications compliant with this standard.
First, it presents changes introduced to the existing components of the Visage|toolkit package for dealing with facial animations, offered by the company Visage Technologies AB. It also presents the process of designing and implementing an application for editing facial animations compliant with the MPEG-4 Face Animation standard. Finally, it discusses several approaches to the problem of streaming facial animations over the Internet or a Local Area Network (LAN).
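For a sense of what "playing" a facial animation stream involves: a player consumes one FAP frame per tick at the stream's frame rate and hands it to the face model. A minimal pacing loop; the 25 fps rate and the apply_to_face callback are illustrative assumptions, not the Visage|toolkit API:

```python
import time

FRAME_RATE = 25.0                      # assumed stream frame rate

def play(frames, apply_to_face):
    """Apply one FAP frame per tick, keeping wall-clock pace even if
    applying a frame takes variable time."""
    period = 1.0 / FRAME_RATE
    next_tick = time.monotonic()
    for fap_frame in frames:
        apply_to_face(fap_frame)       # hypothetical: deform the model
        next_tick += period
        time.sleep(max(0.0, next_tick - time.monotonic()))

# usage sketch: 50 dummy frames printed at 25 fps
play(range(50), apply_to_face=lambda f: print("frame", f))
```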
Kähler, Kolja. "3D facial animation : recreating human heads with virtual skin, bones, and muscles /." Saarbrücken : VDM Verlag Dr. Müller, 2007. http://deposit.d-nb.de/cgi-bin/dokserv?id=3016048&prov=M&dok_var=1&dok_ext=htm.
Full textKaiser, Moritz [Verfasser]. "Construction of a 3D Facial Model for Tracking and Animation / Moritz Kaiser." München : Verlag Dr. Hut, 2013. http://d-nb.info/1031845178/34.
Full textIgeland, Viktor. "Generating Facial Animation With Emotions In A Neural Text-To-Speech Pipeline." Thesis, Linköpings universitet, Medie- och Informationsteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-160535.
Full textLarsson, Niklas. "Morph targets and bone rigging for 3D facial animation : A comparative case study." Thesis, Uppsala universitet, Institutionen för speldesign, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-327302.
Full textCorrea, Renata. "Animação facial por computador baseada em modelagem biomecanica." [s.n.], 2007. http://repositorio.unicamp.br/jspui/handle/REPOSIP/259447.
Full textDissertação (mestrado) - Universidade Estadual de Campinas, Faculdade de Engenharia Eletrica e de Computação
Abstract: The increasing search for realism in virtual characters found in many applications such as movies, education and games is the motivation of this thesis. The thesis describes an animation model that employs a biomechanical strategy for the development of a computing prototype, called SABiom. The method used is based on simulation of physical features of the human face, such as the layers of skin and muscles, which are modeled to allow simulation of the mechanical behavior of facial tissue under the action of muscle forces. Although there are several movements produced by a face, the current work limits itself to simulations of facial expressions focusing on the lips. To validate the results obtained from SABiom, we compared images of the virtual model with images from a human model.
Master's degree
Computer Engineering
Master in Electrical Engineering
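A biomechanical skin layer of the kind SABiom simulates is commonly modeled as a mass-spring system advanced through time; below is a minimal sketch of one explicit-Euler step, with illustrative constants rather than SABiom's:

```python
import numpy as np

def step(pos, vel, springs, rest, k=50.0, damping=2.0, mass=1.0, dt=0.005,
         external=None):
    """One explicit-Euler step of a mass-spring skin patch.

    pos, vel : (N, 3) vertex positions and velocities
    springs  : (S, 2) index pairs of connected vertices
    rest     : (S,) rest lengths
    external : (N, 3) muscle or other forces, if any
    """
    force = np.zeros_like(pos) if external is None else external.copy()
    i, j = springs[:, 0], springs[:, 1]
    d = pos[j] - pos[i]
    length = np.linalg.norm(d, axis=1, keepdims=True)
    # Hooke's law along each spring; stretched springs pull ends together
    f = k * (length - rest[:, None]) * d / np.maximum(length, 1e-9)
    np.add.at(force, i, f)
    np.add.at(force, j, -f)
    force -= damping * vel
    vel = vel + dt * force / mass
    return pos + dt * vel, vel

# toy usage: two particles joined by one spring, stretched at start
pos = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
vel = np.zeros_like(pos)
springs, rest = np.array([[0, 1]]), np.array([1.0])
for _ in range(100):
    pos, vel = step(pos, vel, springs, rest)
```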
Al-Qayedi, Ali. "Internet video-conferencing using model-based image coding with agent technology." Thesis, University of Essex, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.298836.
Full textWang, Jing. "Reconstruction and Analysis of 3D Individualized Facial Expressions." Thesis, Université d'Ottawa / University of Ottawa, 2015. http://hdl.handle.net/10393/32588.
Full textKaspersson, Max. "Facial Realism through Wrinkle Maps : The Perceived Impact of Different Dynamic Wrinkle Implementations." Thesis, Blekinge Tekniska Högskola, Institutionen för kreativa teknologier, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-10370.
Full textErsotelos, Nikolaos. "Highly automated method for facial expression synthesis." Thesis, Brunel University, 2010. http://bura.brunel.ac.uk/handle/2438/4524.
Full textObaid, Mohammad Hisham Rashid. "A quadratic deformation model for representing facial expressions." Thesis, University of Canterbury. Computer Science and Software Engineering, 2011. http://hdl.handle.net/10092/5345.
Full textHe, Zhi-guang, and 何智光. "3D Facial Animation." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/09169533194285308870.
Full text義守大學
電機工程學系碩士班
93
It is a challenge to realistically generate 3D human face models and their various expressions in computer graphics. With the progress of computer technology, people ask for more and more multimedia effects; therefore, the reconstruction of 3D human facial models and facial animations is being enthusiastically investigated. There are several kinds of methods used to reconstruct 3D models. In this thesis, we animate 3D object models that are reconstructed using the photometric stereo method and shape from contours with three light sources. The 3D models are stored as point data, and the 3D curved surfaces generated by applying the Delaunay triangulation mesh method to the 3D models are displayed on a personal computer. Then, we animate the motion sequence of the mouth on the 3D human facial model for pronouncing Chinese and English words using 3D Studio MAX. Animations for different words and sentences, and several facial expressions, are shown in this thesis.
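The surface-construction step described above, triangulating reconstructed point data into a mesh, can be reproduced with an off-the-shelf 2D Delaunay triangulation over the x-y coordinates (a sketch under the assumption that the points form a depth map, with z as height):

```python
import numpy as np
from scipy.spatial import Delaunay

# points: (N, 3) face-surface samples, e.g. from photometric stereo
points = np.random.rand(500, 3)

# triangulate in the x-y plane; each simplex row holds the indices
# of one triangle, giving a mesh over the reconstructed depth map
tri = Delaunay(points[:, :2])
triangles = tri.simplices          # (T, 3) vertex indices

print(len(triangles), "triangles over", len(points), "points")
```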
Huang, Jun-Ze. "Speech-Driven 3D Facial Animation." 2006. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0001-2407200613525500.
Full textHuang, Jun-Ze, and 黃鈞澤. "Speech-Driven 3D Facial Animation." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/22243033424866606258.
Full text國立臺灣大學
資訊管理學研究所
94
It is often difficult to animate a face model speaking a specific utterance; even for professional animators, it takes a lot of time. Our work provides a speech-driven 3D facial animation system which allows the user to easily generate facial animations. The user only needs to give a speech recording as the input; the output is a 3D facial animation matching the input speech. Our work can be divided into three sub-systems. The first is the MMM (multidimensional morphable model), built from a pre-recorded training video using machine learning techniques; we can use the MMM to generate a realistic speech video with respect to the input speech. The second part is Facial Tracking, which extracts the feature points of a human subject in the synthetic speech video. The third part is Mesh-IK (mesh-based inverse kinematics), which takes the motion of the feature points as the guide to deform 3D face models, making the resulting model match the corresponding frame of the speech video. Thus we obtain a 3D facial animation as the output. Facial Tracking and Mesh-IK can also take a real speech video, or even a real expression video, as the input and produce the corresponding facial animations.
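The Mesh-IK stage propagates tracked feature-point motion to the whole face mesh. The real method is a mesh-based inverse-kinematics solve; as a much simpler stand-in that conveys the data flow, feature displacements can be spread to nearby vertices with Gaussian falloff:

```python
import numpy as np

def propagate(vertices, feature_idx, feature_targets, sigma=0.2):
    """Deform a mesh so feature vertices move toward their targets,
    spreading each feature's displacement with Gaussian falloff.
    A crude stand-in for a real mesh-IK solve."""
    displaced = vertices.copy()
    for idx, target in zip(feature_idx, feature_targets):
        delta = target - vertices[idx]                     # (3,)
        d = np.linalg.norm(vertices - vertices[idx], axis=1)
        w = np.exp(-(d / sigma) ** 2)                      # (N,)
        displaced += w[:, None] * delta[None, :]
    return displaced

# toy usage: 100 random vertices, drag vertex 0 upward by 0.1
verts = np.random.rand(100, 3)
out = propagate(verts, [0], [verts[0] + np.array([0.0, 0.1, 0.0])])
```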
Lo, Ching-Tzuu, and 羅慶祖. "Texture Mapping for Facial Animation." Thesis, 1993. http://ndltd.ncl.edu.tw/handle/93788378277565596580.
Full text國立臺灣大學
資訊工程研究所
81
Computer animation of human facial expression can be very sophisticated because there is an enormous set of facial expressions. The study of how to generate different facial expressions efficiently and how to render realistic images is an important topic in computer graphics. In early studies of computer facial animation, most researchers focused on the generation of facial expressions but ignored the rendering of realistic images. They used wire-frames to represent a human face model without considering facial texture. Although this has no direct influence on the theory of facial animation, such models are based on a plastic-looking face instead of a real one, and in particular ignore the fine structure of eye, mouth, skin and hair textures. The objective of this thesis is to perform texture mapping of a scanned 2-D image onto a 3-D face model to produce not only a more lifelike model but also facial expressions with textures. This study is based on computer graphics techniques such as texture mapping and image warping techniques such as sampling and transformation.
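The core of the texture-mapping step is sampling the scanned 2-D image at each vertex's (u, v) coordinates; bilinear sampling is the usual choice (an illustrative sketch, not the thesis's exact warping pipeline):

```python
import numpy as np

def sample_bilinear(image, uv):
    """Sample a grayscale image at continuous (u, v) coords in [0, 1].

    image : (H, W) array
    uv    : (N, 2) texture coordinates, one per mesh vertex
    """
    h, w = image.shape
    x = uv[:, 0] * (w - 1)
    y = uv[:, 1] * (h - 1)
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = np.minimum(x0 + 1, w - 1), np.minimum(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    # interpolate horizontally on the two rows, then vertically
    top = image[y0, x0] * (1 - fx) + image[y0, x1] * fx
    bottom = image[y1, x0] * (1 - fx) + image[y1, x1] * fx
    return top * (1 - fy) + bottom * fy

# usage: per-vertex intensities for a face mesh from its scanned texture
texture = np.random.rand(256, 256)
uv = np.random.rand(1000, 2)
colors = sample_bilinear(texture, uv)
```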
Wei, Shu-Chen, and 韋淑貞. "Animation of 3D facial expressions." Thesis, 1993. http://ndltd.ncl.edu.tw/handle/38931393795866396065.
Full textHodgkinson, Warren. "Interactive speech-driven facial animation." Thesis, 2008. http://hdl.handle.net/10210/807.
Full textMr. A. Hardy Prof. S. von Solms
Serra, José Mário Figueiredo. "Intelligent facial animation: Creating emphatic characters with stimuli based animation." Doctoral thesis, 2017. https://hdl.handle.net/10216/110175.
Full textSerra, José Mário Figueiredo. "Intelligent facial animation: Creating emphatic characters with stimuli based animation." Tese, 2017. https://hdl.handle.net/10216/110175.
Full textBastani, Hanieh. "A Nonlinear Framework for Facial Animation." Thesis, 2008. http://hdl.handle.net/1807/10428.