Dissertations / Theses on the topic 'Facial expression analysis'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Facial expression analysis.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Baltrušaitis, Tadas. "Automatic facial expression analysis." Thesis, University of Cambridge, 2014. https://www.repository.cam.ac.uk/handle/1810/245253.
Li, Jingting. "Facial Micro-Expression Analysis." Thesis, CentraleSupélec, 2019. http://www.theses.fr/2019CSUP0007.
Micro-expressions (MEs) are very important nonverbal communication cues. However, due to their local and short nature, spotting them is challenging. In this thesis, we address this problem by using a dedicated local and temporal pattern (LTP) of facial movement. This pattern has a specific shape (S-pattern) when MEs are displayed. Thus, by using a classical classification algorithm (SVM), MEs are distinguished from other facial movements. We also propose a global final fusion analysis on the whole face to improve the distinction between ME (local) and head (global) movements. However, the learning of S-patterns is limited by the small number of ME databases and the low volume of ME samples. Hammerstein models (HMs) are known to be a good approximation of muscle movements. By approximating each S-pattern with an HM, we can both filter outliers and generate new similar S-patterns. In this way, we perform data augmentation on the S-pattern training dataset and improve the ability to differentiate MEs from other facial movements. In the first ME spotting challenge of MEGC2019, we took part in building the new result evaluation method. In addition, we applied our method to spotting MEs in long videos and provided the baseline result for the challenge. The spotting results, obtained on CASME I, CASME II, SAMM and CAS(ME)2, show that our proposed LTP outperforms the most popular spotting method in terms of F1-score. Adding the fusion process and data augmentation improves the spotting performance even further.
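As a rough illustration of the classification stage described in this abstract, the sketch below trains an SVM to separate S-pattern-like temporal profiles from other facial motion and scores it with F1, the metric the abstract reports. The data are synthetic stand-ins (ramps and noise), not the thesis's actual LTP features:

```python
# A minimal sketch, assuming synthetic stand-ins for the S-patterns:
# each sample is a 30-frame local motion-magnitude profile around a landmark.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
X_me = rng.normal(1.0, 0.3, (100, 30)).cumsum(axis=1)  # S-shaped rise (ME-like)
X_other = rng.normal(0.0, 0.5, (100, 30))              # irregular other motion
X = np.vstack([X_me, X_other])
y = np.r_[np.ones(100), np.zeros(100)]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)          # classical SVM classifier
print("F1-score:", f1_score(y_te, clf.predict(X_te)))
```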
Carter, Jeffrey R. "Facial expression analysis in schizophrenia." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/NQ58398.pdf.
Munasinghe, Kankanamge Sarasi Madushika. "Facial analysis models for face and facial expression recognition." Thesis, Queensland University of Technology, 2018. https://eprints.qut.edu.au/118197/1/Sarasi%20Madushika_Munasinghe%20Kankanamge_Thesis.pdf.
Full textShang, Lifeng, and 尚利峰. "Facial expression analysis with graphical models." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2012. http://hub.hku.hk/bib/B47849484.
Full textpublished_or_final_version
Computer Science
Doctoral
Doctor of Philosophy
Feffer, Michael A. (Michael Anthony). "Personalized machine learning for facial expression analysis." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/119763.
For this MEng Thesis Project, I investigated the personalization of deep convolutional networks for facial expression analysis. While prior work focused on population-based ("one-size-fits-all") models for prediction of affective states (valence/arousal), I constructed personalized versions of these models to improve upon state-of-the-art general models through solving a domain adaptation problem. This was done by starting with pre-trained deep models for face analysis and fine-tuning the last layers to specific subjects or subpopulations. For prediction, a "mixture of experts" (MoE) solution was employed to select the proper outputs based on the given input. The research questions answered in this project are: (1) What are the effects of model personalization on the estimation of valence and arousal from faces? (2) What is the amount of (un)supervised data needed to reach a target performance? Models produced in this research provide the foundation of a novel tool for personalized real-time estimation of target metrics.
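The fine-tuning step described above (freezing a pre-trained backbone and retraining only the last layers on a subject's data) can be sketched as follows. The backbone, head, and data are hypothetical stand-ins, not the thesis's actual face models:

```python
# Minimal sketch of personalization by fine-tuning: freeze a pre-trained
# backbone (toy stand-in here) and retrain only the head on subject data.
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())
head = nn.Linear(16, 2)                  # valence/arousal regression head

for p in backbone.parameters():          # keep population-level features fixed
    p.requires_grad = False

opt = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Hypothetical subject-specific batch: images and valence/arousal targets.
x = torch.randn(8, 3, 64, 64)
t = torch.rand(8, 2) * 2 - 1
for _ in range(10):
    opt.zero_grad()
    loss = loss_fn(head(backbone(x)), t)
    loss.backward()
    opt.step()
print("personalized head loss:", loss.item())
```

A mixture-of-experts layer, as mentioned in the abstract, would then route each input to the most appropriate personalized head.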
Shenoy, A. "Computational analysis of facial expressions." Thesis, University of Hertfordshire, 2010. http://hdl.handle.net/2299/4359.
Wang, Jing. "Reconstruction and Analysis of 3D Individualized Facial Expressions." Thesis, Université d'Ottawa / University of Ottawa, 2015. http://hdl.handle.net/10393/32588.
Mourão, André Belchior. "Robust facial expression analysis for affect-based interaction." Master's thesis, Faculdade de Ciências e Tecnologia, 2012. http://hdl.handle.net/10362/8292.
Interaction is moving towards new and more natural approaches. Human-Computer Interaction (HCI) is increasingly expanding towards more modalities of human expression such as gestures, body movements and other natural interactions. In this thesis, we propose to extend existing interaction paradigms by including the face as an affect-based input. Affective interaction methods can greatly change the way computers interact with humans; these methods can detect displays of user moods, such as frustration or engagement, and adapt the experience accordingly. We have created an affect-based framework that encompasses face detection, face recognition and facial expression recognition, and applied it in a computer game. ImEmotion is a two-player game where the player who best mimics an expression wins. The game combines face detection with facial expression recognition to recognize and rate an expression in real time. A controlled evaluation of the framework algorithms and a game trial with 46 users showed the potential of the framework and the success of affect-based interaction built on facial expressions in the game. Despite the novelty of the interaction approach and the limitations of computer vision algorithms, players adapted easily and became competitive.
Yin, Lijun. "Facial expression analysis and synthesis for model based coding." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0011/NQ59702.pdf.
Wei, Xiaozhou. "3D facial expression modeling and analysis with topographic information." Diss., online access via UMI, 2008.
Gupta, Ankit. "Live Performance and Emotional Analysis of MathSpring Intelligent Tutor System Students." Digital WPI, 2020. https://digitalcommons.wpi.edu/etd-theses/1372.
Saeed, Anwar Maresh Qahtan. "Automatic facial analysis methods: facial point localization, head pose estimation, and facial expression recognition." Magdeburg: Universitätsbibliothek, 2018. http://d-nb.info/1162189878/34.
Arnade, Elizabeth Amalia. "Measuring Consumer Emotional Response to Tastes and Foods through Facial Expression Analysis." Thesis, Virginia Tech, 2014. http://hdl.handle.net/10919/54538.
Wong, Ka-wai Teresa. "Event-related potential analysis of facial emotion processing." E-thesis via HKUTO, 2007. http://sunzi.lib.hku.hk/HKUTO/record/B3955773X.
Wong, Ka-wai Teresa, and 黃嘉慧. "Event-related potential analysis of facial emotion processing." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2007. http://hub.hku.hk/bib/B3955773X.
Leitch, Kristen Allison. "Evaluating Consumer Emotional Response to Beverage Sweeteners through Facial Expression Analysis." Thesis, Virginia Tech, 2015. http://hdl.handle.net/10919/73695.
Zhao, Xi. "3D face analysis : landmarking, expression recognition and beyond." Phd thesis, Ecole Centrale de Lyon, 2010. http://tel.archives-ouvertes.fr/tel-00599660.
Shreve, Matthew Adam. "Automatic Macro- and Micro-Facial Expression Spotting and Applications." Scholar Commons, 2013. http://scholarcommons.usf.edu/etd/4770.
Paknikar, Gayatri Suhas. "Facial Image Based Expression Classification System Using Committee Neural Networks." University of Akron / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=akron1210699575.
Maurel, Pierre. "Shape gradients, shape warping and medical application to facial expression analysis." Paris 7, 2008. http://www.theses.fr/2008PA077151.
This work focuses on the issue of modeling prior knowledge about shapes, an essential problem in computer vision. A shape can be a planar curve in 2D or a surface in 3D. In order to model shape statistics, we studied, in a first and rather theoretical part, shape warping and matching. We start by defining distances between shapes. Then, in order to deform one shape onto another, we define the gradient of this shape functional and apply a gradient descent scheme. We also developed a generalization of the gradient notion which can take priors into account and which does not derive from any inner product. We used this new notion to define an extension of the well-known level set method that can handle landmark knowledge. On the application side, and in collaboration with Professor Patrick Chauvel at La Timone Hospital, Marseille, we worked on correlating facial expressions with the electrical activity in the brain during epileptic seizures. To that end, we developed a method for fitting a three-dimensional face model under uncontrolled imaging conditions and used it to analyze the facial expressions of epileptic patients. Finally, we present a first step towards interrelating the electrical activity produced by the brain during a seizure (recorded by stereoelectroencephalography electrodes) and the facial expressions.
Clark, Elizabeth A. "Application of Automated Facial Expression Analysis and Facial Action Coding System to Assess Affective Response to Consumer Products." Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/97341.
Sensory and consumer sciences seek to comprehend the influences of sensory perception on consumer behaviors such as product liking and purchase. The food industry assesses product liking through consumer testing but often does not capture consumer response as it pertains to emotions such as those experienced while directly interacting with a product (i.e., product-generated emotions, PG) or those attributed to the product based on external information such as branding, marketing, nutrition, social environment, physical environment, memories, etc. (product-associated emotions, PA). This research investigated the application of PA and PG emotion methodology to better understand consumer experiences. A systematic review of the existing scientific literature was performed that focused on the Facial Action Coding System (FACS), a process used to determine facially expressed emotion from facial muscular positioning, and its use to investigate consumer behavior and characterize human emotional response to product-based stimuli; the review revealed inconsistencies in how FACS is carried out as well as how emotional response is determined from facial muscular activation. Automatic Facial Expression Analysis (AFEA), which automates FACS, was then used in a two-part study. In the first study (n=50 participants), AFEA, a Check-All-That-Apply (CATA) emotions questionnaire, and a Single-Target Implicit Association Test (ST-IAT) were used to characterize the relationship between PA as well as PG emotions and consumer behavior (acceptability, purchase intent) towards milk in various types of packaging (k=6). While the ST-IAT did not yield significant results (p>0.05), CATA data illustrated term selection based on motivation to approach and/or withdraw from milk based on packaging color. Additionally, the lack of difference (p>0.05) between emotions that do not produce similar facial muscle activations, such as happy and disgust, indicates that AFEA software may not be determining emotions as outlined in the established FACS procedures. In the second study, AFEA data from the sensory evaluation (n=48 participants) of light-exposed milk stimuli (k=4) stored in packaging with various light-blocking properties underwent time-series statistical analysis to determine whether the nature of the control stimulus itself could impact the analysis of AFEA data. When compared against the limited sensory-engaging control (a blank screen), contempt, happy, and angry were expressed more intensely (p<0.025) and consistently for the light-exposed milk stimuli; neutral was expressed exclusively in the same manner for the blank screen. Comparatively, intense neutral expression (p<0.025) was brief, fragmented, and often accompanied by intense (although fleeting) expressions of happy, sad, or contempt for the sensory-engaging control (water); emotions such as surprised, scared, and sad were expressed similarly for the light-exposed milk stimuli. As such, care should be taken, as facial activation of muscles/AUs related to sensory perception (e.g., chewing, smelling) can impact the resulting interpretation. Collectively, the use of PA and PG emotion methodology provided additional insights into consumer-product-related behaviors. However, it is hard to conclude whether AFEA yields emotional interpretations based on true facial expressions of emotion or on facial actions related to sensory perception for sensory-engaging consumer products such as foods and beverages.
Derkach, Dmytro. "Spectrum analysis methods for 3D facial expression recognition and head pose estimation." Doctoral thesis, Universitat Pompeu Fabra, 2018. http://hdl.handle.net/10803/664578.
Facial analysis has attracted considerable research efforts over the last decades, with a growing interest in improving the interaction and cooperation between people and computers. This makes it necessary for automatic systems to be able to react to things such as the head movements of a user or his/her emotions. Further, this should be done accurately and in unconstrained environments, which highlights the need for algorithms that can take full advantage of 3D data. Such systems could be useful in multiple domains such as human-computer interaction, tutoring, interviewing, health care, marketing, etc. In this thesis, we focus on two aspects of facial analysis: expression recognition and head pose estimation. In both cases, we specifically target the use of 3D data and present contributions that aim to identify meaningful representations of the facial geometry based on spectral decomposition methods:
1. We propose a spectral representation framework for facial expression recognition using exclusively 3D geometry, which allows a complete description of the underlying surface that can be further tuned to the desired level of detail. It is based on the decomposition of local surface patches into their spatial frequency components, much like a Fourier transform, which are related to intrinsic characteristics of the surface. We propose the use of Graph Laplacian Features (GLFs), which result from the projection of local surface patches onto a common basis obtained from the graph Laplacian eigenspace. The proposed approach is tested on expression and Action Unit recognition, and the results confirm that the proposed GLFs produce state-of-the-art recognition rates.
2. We propose an approach for head pose estimation that allows modeling the underlying manifold that results from general rotations in 3D. We start by building a fully automatic system based on the combination of landmark detection and dictionary-based features, which obtained the best results in the FG2017 Head Pose Estimation Challenge. Then, we use tensor representation and higher-order singular value decomposition to separate the subspaces that correspond to each rotation factor and show that each of them has a clear structure that can be modeled with trigonometric functions. Such a representation provides a deep understanding of data behavior and can be used to further improve the estimation of head pose angles.
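The GLF construction described in point 1 (projecting a local surface patch onto a common basis from the graph Laplacian eigenspace) can be sketched as follows; the grid-graph patch and random depth values are stand-ins for the thesis's actual mesh patches:

```python
# Sketch of a Graph Laplacian Feature: project a toy depth patch on a grid
# graph onto the low-frequency Laplacian eigenvectors, analogous to a
# Fourier transform on the mesh.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import laplacian
from scipy.sparse.linalg import eigsh

n = 16                                    # 16x16 patch -> 256 graph nodes
idx = np.arange(n * n).reshape(n, n)      # 4-neighbour grid adjacency
rows, cols = [], []
for i in range(n):
    for j in range(n):
        if i + 1 < n: rows += [idx[i, j]]; cols += [idx[i + 1, j]]
        if j + 1 < n: rows += [idx[i, j]]; cols += [idx[i, j + 1]]
A = sp.coo_matrix((np.ones(len(rows)), (rows, cols)), shape=(n * n, n * n))
L = laplacian((A + A.T).tocsr())

# The first k eigenvectors form a common basis shared by all patches.
vals, vecs = eigsh(L.asfptype(), k=20, which="SM")
depth_patch = np.random.default_rng(1).normal(size=n * n)  # stand-in geometry
glf = vecs.T @ depth_patch                # 20-dim spectral descriptor
print(glf.shape)
```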
Nordén, Frans, and Filip von Reis Marlevi. "A Comparative Analysis of Machine Learning Algorithms in Binary Facial Expression Recognition." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254259.
Bilbao, María de los Ángeles, Elza Techio, and Darío Páez. "Acknowledgement of emotional facial expression in Mexican college students." Pontificia Universidad Católica del Perú, 2012. http://repositorio.pucp.edu.pe/index/handle/123456789/102344.
This study presents a meta-analysis of the relationship between Schwartz's values and subjective well-being in different cultural contexts, with students, their relatives, and immigrants in Spain. The results confirm a significant association between values and well-being. Self-transcendence and openness to change, and to a lesser extent conservation, are positively associated with greater well-being. Self-transcendence is associated with happiness and satisfaction in a positive but non-homogeneous way, with immigrants showing the lowest means. Openness to change is associated with happiness, the association being stronger among immigrants than among students. Conservation values are associated homogeneously. A second study on psychosocial health criteria and subjective well-being, analyzing a collectivist and hierarchical South American country (Brazil) and a more individualist and egalitarian European one (Spain), confirms that conservation values, as well as openness to change and self-transcendence, are desirable and favor well-being.
Jottrand, Matthieu. "Support Vector Machines for Classification applied to Facial Expression Analysis and Remote Sensing." Thesis, Linköping University, Department of Electrical Engineering, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2938.
The subject of this thesis is the application of Support Vector Machines to two quite different problems: facial expression recognition and remote sensing.
The basic idea of kernel algorithms is to map input data into a higher-dimensional space, the feature space, in which linear operations on the data can be performed more easily. These operations in the feature space can be expressed in terms of input data thanks to kernel functions. The Support Vector Machine is a classifier that uses this kernel method by computing, in the feature space and on the basis of examples of the different classes, hyperplanes that separate the classes. The hyperplanes in the feature space correspond to non-linear surfaces in the input space.
Concerning facial expressions, the aim is to train and test a classifier able to recognise, on the basis of pictures of faces, which of six emotions (anger, disgust, fear, joy, sadness, and surprise) is expressed by the person in the picture. In this application, each picture has to be seen as a point in an N-dimensional space, where N is the number of pixels in the image.
The second application is the detection of camouflage nets hidden in vegetation using a hyperspectral image taken by an aircraft. In this case the classification is computed for each pixel, represented by a vector whose elements are the different frequency bands of this pixel.
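Both uses of the SVM described in this abstract reduce to the same pattern: build one feature vector per sample (a flattened face image, or a pixel's band spectrum) and classify it. A minimal sketch with random stand-in data:

```python
# Sketch of both SVM uses above: (a) each face image flattened to an N-pixel
# vector, (b) each hyperspectral pixel treated as a vector of frequency bands.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# (a) faces: 60 images of 32x32 pixels, 6 emotion classes
faces = rng.normal(size=(60, 32 * 32))
emotions = rng.integers(0, 6, 60)
face_clf = SVC(kernel="rbf").fit(faces, emotions)

# (b) remote sensing: a 100x100 image with 40 spectral bands, classified
# pixel by pixel (net vs. vegetation); labels here are random stand-ins
cube = rng.normal(size=(100, 100, 40))
pixels = cube.reshape(-1, 40)
labels = rng.integers(0, 2, pixels.shape[0])
pixel_clf = SVC(kernel="rbf").fit(pixels[:2000], labels[:2000])
pred_map = pixel_clf.predict(pixels).reshape(100, 100)
print(pred_map.shape)
```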
Vadapalli, Hima Bindu. "Recognition of facial action units from video streams with recurrent neural networks : a new paradigm for facial expression recognition." University of the Western Cape, 2011. http://hdl.handle.net/11394/5415.
This research investigated the application of recurrent neural networks (RNNs) to the recognition of facial expressions based on the facial action coding system (FACS). Support vector machines (SVMs) were used to validate the results obtained by RNNs. In this approach, instead of recognizing whole facial expressions, the focus was on the recognition of action units (AUs) as defined in FACS. Recurrent neural networks are capable of gaining knowledge from temporal data, while SVMs, which are time-invariant, are known to be very good classifiers. Thus, the research consists of four components: comparison of the use of image sequences against single static images, benchmarking feature selection and network optimization approaches, study of inter-AU correlations by implementing multiple-output RNNs, and study of difference images as an approach for performance improvement. In the comparative studies, image sequences were classified using a combination of Gabor filters and RNNs, while single static images were classified using Gabor filters and SVMs. Sets of 11 FACS AUs were classified by both approaches, where a single RNN/SVM classifier was used for classifying each AU. Results indicated that classifying FACS AUs using image sequences yielded better results than using static images. The average recognition rate (RR) and false alarm rate (FAR) using image sequences were 82.75% and 7.61%, respectively, while classification using single static images yielded an RR and FAR of 79.47% and 9.22%, respectively. The better performance with image sequences can be attributed to RNNs' ability, as stated above, to extract knowledge from time-series data. Subsequent research then benchmarked dimensionality reduction, feature selection and network optimization techniques in order to improve the performance obtained with image sequences. Results showed that an optimized network, using weight decay, gave the best RR and FAR of 85.38% and 6.24%, respectively. The next study examined the inter-AU correlations existing in the Cohn-Kanade database and their effect on classification models. To accomplish this, a model was developed for the classification of a set of AUs by a single multiple-output RNN. Results indicated that high inter-AU correlations do in fact help classification models gain more knowledge and, thus, perform better. However, this was limited to AUs that start and reach their apex at almost the same time. This suggests the need for a larger database of AUs, which could provide both individual AUs and AU combinations for further investigation. The final part of this research investigated the use of difference images to track the motion of image pixels. Difference images provide both noise and feature reduction, an aspect that was studied. Results showed that the use of difference image sequences provided the best results, with an RR and FAR of 87.95% and 3.45%, respectively, which is shown to be significant when compared to the use of normal image sequences classified using RNNs. In conclusion, the research demonstrates that the use of RNNs for classification of image sequences is a new and improved paradigm for facial expression recognition.
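The sequence-based classifier described above (per-frame features fed to a recurrent network, one binary output per AU) can be sketched as follows; the GRU stands in for the thesis's RNN, and the random tensors stand in for Gabor-filter responses:

```python
# Sketch of a sequence-based AU recognizer: per-frame feature vectors
# (Gabor responses in the thesis; random stand-ins here) feed a recurrent
# network with one sigmoid output per action unit. Difference images would
# simply replace each frame with frame[t] - frame[t-1] before features.
import torch
import torch.nn as nn

class AURecognizer(nn.Module):
    def __init__(self, feat_dim=128, hidden=64, n_aus=11):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_aus)

    def forward(self, x):                  # x: (batch, frames, feat_dim)
        _, h = self.rnn(x)
        return torch.sigmoid(self.out(h[-1]))

model = AURecognizer()
seq = torch.randn(4, 20, 128)              # 4 sequences of 20 frames
au_probs = model(seq)                      # (4, 11) AU activation probabilities
print(au_probs.shape)
```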
Maalej, Ahmed. "3D Facial Expressions Recognition Using Shape Analysis and Machine Learning." Thesis, Lille 1, 2012. http://www.theses.fr/2012LIL10025/document.
Facial expression recognition is a challenging task which has received growing interest within the research community, impacting important applications in fields related to human-machine interaction (HMI). Toward building human-like, emotionally intelligent HMI devices, scientists are trying to include the essence of the human emotional state in such systems. The recent development of 3D acquisition sensors has made 3D data more available, and this kind of data alleviates the problems inherent in 2D data such as illumination, pose and scale variations as well as low resolution. Several 3D facial databases are publicly available for researchers in the field of face and facial expression recognition to validate and evaluate their approaches. This thesis deals with the facial expression recognition (FER) problem and proposes an approach based on shape analysis to handle both static and dynamic FER tasks. Our approach includes the following steps: first, a curve-based representation of the 3D face model is proposed to describe facial features. Then, once these curves are extracted, their shape information is quantified using a Riemannian framework. We end up with similarity scores between different local facial shapes, constituting feature vectors associated with each facial surface. Afterwards, these features are used as input to machine learning and classification algorithms to recognize expressions. Exhaustive experiments are conducted to validate our approach, and results are presented and compared to related work.
Chung, Koon Yin C. "Facial Expression Recognition by Using Class Mean Gabor Responses with Kernel Principal Component Analysis." Ohio University / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1260468428.
Ghayoumi, Mehdi. "Facial Expression Analysis Using Deep Learning with Partial Integration to Other Modalities to Detect Emotion." Kent State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=kent1501273062260458.
Chu, Wen-Sheng. "Automatic Analysis of Facial Actions: Learning from Transductive, Supervised and Unsupervised Frameworks." Research Showcase @ CMU, 2017. http://repository.cmu.edu/dissertations/929.
Crist, Courtney Alissa. "Application of Automated Facial Expression Analysis and Qualitative Analysis to Assess Consumer Perception and Acceptability of Beverages and Water." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/79718.
Peschka-Daskalos, Patricia Jean. "An Intercultural Analysis of Differences in Appropriateness Ratings of Facial Expressions Between Japanese and American Subjects." PDXScholar, 1993. https://pdxscholar.library.pdx.edu/open_access_etds/4700.
Aina, Segun. "Loughborough University Spontaneous Expression Database and baseline results for automatic emotion recognition." Thesis, Loughborough University, 2015. https://dspace.lboro.ac.uk/2134/19524.
Ener, Emrah. "Recognition of Human Face Expressions." Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/3/12607521/index.pdf.
Wang, Ding. "The systematic analysis and innovative design of the essential cultural elements with Peking Opera Painted Faces (POPF)." Thesis, Brunel University, 2016. http://bura.brunel.ac.uk/handle/2438/14785.
Full textArango, Duque Carlos. "Analysis of Micro-Expressions based on the Riesz Pyramid : Application to Spotting and Recognition." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSES062/document.
Micro-expressions are brief and subtle facial expressions that go on and off the face in a fraction of a second. This kind of facial expression usually occurs in high-stakes situations and is considered to reflect a human's real intent. They have been studied to better understand non-verbal communication, and in medical applications where it is almost impossible to engage in a conversation or try to read the facial emotions or body language of a patient. There has been some interesting work in micro-expression analysis; however, a great majority of these methods are based on classically established computer vision methods such as local binary patterns, histograms of gradients and optical flow. Considering the fact that this area of research is relatively new, many contributions remain to be made. In this thesis, we present a novel methodology for subtle motion and micro-expression analysis. We propose to use the Riesz pyramid, a multi-scale steerable Hilbert transform which has been used for 2D phase representation and video amplification, as the basis for our methodology. For the general subtle motion analysis step, we transform an image sequence with the Riesz pyramid, then extract and filter the image phase variations as proxies for motion. Furthermore, we isolate regions of interest where subtle motion might take place and mask noisy areas by thresholding the local amplitude. The total sequence is transformed into a 1D signal which is used for temporal analysis and subtle motion spotting. We create our own database of subtle motion sequences to test our method. For the micro-expression spotting step, we adapt the previous method to process some facial regions of interest. We also develop a heuristic method to detect facial micro-events that separates real micro-expressions from eye blinking and subtle eye movements. For the micro-expression classification step, we exploit the dominant orientation constancy of the Riesz transform to average the micro-expression sequence into an image pair. Based on that, we introduce the Mean Oriented Riesz Feature descriptor. The accuracy of our methods is tested on two spontaneous micro-expression databases. Furthermore, we analyse the parameter variations and their effect on our results.
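A simplified 1D analogue can convey the phase-based idea: the Hilbert transform (the 1D counterpart of the Riesz transform) yields a local amplitude and phase, phase variations serve as a motion proxy, and low-amplitude samples are masked as noise, mirroring the amplitude thresholding described above. This sketch does not reimplement the 2D, multi-scale Riesz pyramid itself:

```python
# 1-D sketch of phase-based subtle motion analysis: local phase changes act
# as a motion proxy; low-amplitude (noisy) samples are masked out.
import numpy as np
from scipy.signal import hilbert

t = np.linspace(0, 1, 500)
signal = np.sin(2 * np.pi * 5 * t) * np.exp(-((t - 0.5) ** 2) / 0.01)

analytic = hilbert(signal)
amplitude = np.abs(analytic)
phase = np.unwrap(np.angle(analytic))
motion_proxy = np.gradient(phase)
motion_proxy[amplitude < 0.1 * amplitude.max()] = 0.0   # amplitude mask
print("peak subtle-motion response at t =", t[np.argmax(np.abs(motion_proxy))])
```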
Hariri, Walid. "Contribution à la reconnaissance/authentification de visages 2D/3D." Thesis, Cergy-Pontoise, 2017. http://www.theses.fr/2017CERG0905/document.
3D face analysis, including 3D face recognition and 3D facial expression recognition, has become a very active area of research in recent years. Various methods using 2D image analysis have been presented to tackle these problems. 2D image-based methods are inherently limited by variability in imaging factors such as illumination and pose. The recent development of 3D acquisition sensors has made 3D data more and more available. Such data is relatively invariant to illumination and pose, but it is still sensitive to expression variation. The principal objective of this thesis is to propose efficient methods for 3D face recognition/verification and 3D facial expression recognition. First, a new covariance-based method for 3D face recognition is presented. Our method includes the following steps: first, the 3D facial surface is preprocessed and aligned. A uniform sampling is then applied to localize a set of feature points; around each point, we extract a matrix as a local region descriptor. Two matching strategies are then proposed, and various distances (geodesic and non-geodesic) are applied to compare faces. The proposed method is assessed on three datasets, including GAVAB, FRGCv2 and BU-3DFE. A hierarchical description using three levels of covariances is then proposed and validated. In the second part of this thesis, we present an efficient approach for 3D facial expression recognition using kernel methods with covariance matrices. In this contribution, we propose to use a Gaussian kernel which maps covariance matrices into a high-dimensional Hilbert space. This makes it possible to use conventional algorithms developed for Euclidean-valued data, such as SVMs, on such non-linear-valued data. The proposed method has been assessed on two well-known datasets, BU-3DFE and Bosphorus, to recognize the six prototypical expressions.
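The kernel-on-covariance idea can be sketched as follows: a covariance matrix summarizes local features around each sampled point, and a Gaussian kernel over a matrix distance feeds a precomputed-kernel SVM. The log-Euclidean distance used here is one common choice and, like the stand-in data, is an assumption of the sketch rather than the thesis's exact formulation:

```python
# Sketch of covariance descriptors + Gaussian kernel SVM (log-Euclidean
# matrix distance assumed; features and labels are random stand-ins).
import numpy as np
from scipy.linalg import logm
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def cov_descriptor(patch_feats):
    # patch_feats: (n_points, n_feats) local features, e.g. coords + normals
    return np.cov(patch_feats.T) + 1e-6 * np.eye(patch_feats.shape[1])

def log_euclidean_gram(descs, gamma=0.5):
    logs = [np.real(logm(d)) for d in descs]    # matrix logarithms, once
    n = len(logs)
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            K[i, j] = np.exp(-gamma * np.linalg.norm(logs[i] - logs[j], "fro") ** 2)
    return K

descs = [cov_descriptor(rng.normal(size=(50, 5))) for _ in range(40)]
y = rng.integers(0, 6, 40)                      # six prototypical expressions
K = log_euclidean_gram(descs)
clf = SVC(kernel="precomputed").fit(K, y)
print(clf.predict(K[:3]))                       # rows = kernel vs. training set
```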
Jain, Varun. "Visual Observation of Human Emotions." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GRENM006/document.
In this thesis we focus on the development of methods and techniques to infer affect from visual information. We focus on facial expression analysis since the face is one of the least occluded parts of the body and facial expressions are one of the most visible manifestations of affect. We explore the different psychological theories of affect and emotion, different ways to represent and classify emotions, and the relationship between facial expressions and underlying emotions. We present the use of multiscale Gaussian derivatives as an image descriptor for head pose estimation and smile detection before using it for affect sensing. Principal Component Analysis is used for dimensionality reduction, while Support Vector Machines are used for classification and regression. We are able to employ the same simple and effective architecture for head pose estimation, smile detection and affect sensing. We also demonstrate that multiscale Gaussian derivatives not only perform better than the popular Gabor filters but are also computationally cheaper. While performing these experiments we discovered that multiscale Gaussian derivatives do not provide an appropriately discriminative image description when the face is only partly illuminated. We overcome this problem by combining Gaussian derivatives with Local Binary Pattern (LBP) histograms. This combination helps us achieve state-of-the-art results for smile detection on the benchmark GENKI database, which contains images of people "in the wild" collected from the internet. We use the same description method for face recognition on the CMU-PIE database and the challenging extended YaleB database, and our results compare well with the state of the art. In the case of face recognition we use metric learning for classification, adopting the Minkowski distance as the similarity measure. We find that the L1 and L2 norms are not always the optimum distance metrics, and the optimum is often an Lp norm where p is not an integer. Lastly, we develop a multi-modal system for depression estimation with audio and video information as input. We use Local Binary Patterns on Three Orthogonal Planes (LBP-TOP) features to capture intra-facial movements in the videos, and dense trajectories for macro movements such as the movement of the head and shoulders. These video features, along with Low Level Descriptor (LLD) audio features, are encoded using Fisher vectors, and finally a Support Vector Machine is used for regression. We discover that the LBP-TOP features encoded with Fisher vectors alone are enough to outperform the baseline method on the Audio Visual Emotion Challenge (AVEC) 2014 database. We thereby present an effective technique for depression estimation which can easily be extended to other slowly varying aspects of emotion, such as mood.
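A sketch of the multiscale Gaussian derivative descriptor with the PCA reduction mentioned above; the face crops are random stand-ins and the derivative orders and scales are illustrative choices, not the thesis's exact configuration:

```python
# Sketch of a multiscale Gaussian derivative descriptor: filter an image with
# Gaussian derivatives at several scales, stack the responses, reduce with PCA.
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.decomposition import PCA

def gaussian_derivative_features(img, sigmas=(1, 2, 4)):
    feats = []
    for s in sigmas:
        for order in [(0, 1), (1, 0), (0, 2), (2, 0), (1, 1)]:
            feats.append(gaussian_filter(img, sigma=s, order=order).ravel())
    return np.concatenate(feats)

rng = np.random.default_rng(0)
images = rng.normal(size=(50, 48, 48))           # stand-in face crops
X = np.stack([gaussian_derivative_features(im) for im in images])
X_reduced = PCA(n_components=20).fit_transform(X)
print(X_reduced.shape)                            # (50, 20), ready for an SVM
```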
Zhang, Yuyao. "Non-linear dimensionality reduction and sparse representation models for facial analysis." Thesis, Lyon, INSA, 2014. http://www.theses.fr/2014ISAL0019/document.
Face analysis techniques commonly require a proper representation of images by means of dimensionality reduction, leading to embedded manifolds which aim at capturing relevant characteristics of the signals. In this thesis, we first provide a comprehensive survey of the state of the art in embedded manifold models. Then, we introduce a novel non-linear embedding method, Kernel Similarity Principal Component Analysis (KS-PCA), into Active Appearance Models in order to model face appearances under variable illumination. The proposed algorithm successfully outperforms the traditional linear PCA transform in capturing the salient features generated by different illuminations, and reconstructs the illuminated faces with high accuracy. We also consider the problem of automatically classifying human face poses from face views with varying illumination, as well as occlusion and noise. Based on sparse representation methods, we propose two dictionary-learning frameworks for this pose classification problem. The first framework is Adaptive Sparse Representation pose Classification (ASRC). It trains the dictionary via a linear model called Incremental Principal Component Analysis (Incremental PCA), which tends to decrease the intra-class redundancy that may affect classification performance, while keeping the extra-class redundancy which is critical for sparse representation. The second is the Dictionary-Learning Sparse Representation model (DLSR), which learns the dictionary so that it coincides with the classification criterion. This training goal is achieved by the K-SVD algorithm. In a series of experiments, we show the performance of the two dictionary-learning methods, which are respectively based on a linear transform and a sparse representation model. Besides, we propose a novel Dictionary Learning framework for Illumination Normalization (DL-IN). DL-IN is based on sparse representation in terms of coupled dictionaries. The dictionary pairs are jointly optimized from normally illuminated and irregularly illuminated face image pairs. We further utilize a Gaussian Mixture Model (GMM) to enhance the framework's capability of modeling data under complex distributions. The GMM adapts each model to a part of the samples and then fuses them together. Experimental results demonstrate the effectiveness of sparsity as a prior for patch-based illumination normalization of face images.
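The dictionary-learning-plus-sparse-coding machinery underlying ASRC and DLSR can be sketched with scikit-learn's dictionary learner standing in for K-SVD (an assumption of this sketch, since scikit-learn does not ship K-SVD itself):

```python
# Sketch of dictionary learning + sparse coding: learn an overcomplete
# dictionary from training patches, then sparsely encode new samples over it;
# class labels can be inferred from per-class reconstruction error.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

rng = np.random.default_rng(0)
train_patches = rng.normal(size=(500, 64))        # stand-in face patches

dico = MiniBatchDictionaryLearning(n_components=32, alpha=1.0,
                                   random_state=0).fit(train_patches)
codes = sparse_encode(train_patches[:5], dico.components_, alpha=1.0)
recon = codes @ dico.components_
print("sparse code shape:", codes.shape,
      "reconstruction error:", np.linalg.norm(recon - train_patches[:5]))
```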
Weber, Marlene. "Automotive emotions : a human-centred approach towards the measurement and understanding of drivers' emotions and their triggers." Thesis, Brunel University, 2018. http://bura.brunel.ac.uk/handle/2438/16647.
Full textHusseini, Orabi Ahmed. "Multi-Modal Technology for User Interface Analysis including Mental State Detection and Eye Tracking Analysis." Thesis, Université d'Ottawa / University of Ottawa, 2017. http://hdl.handle.net/10393/36451.
Full textDagnes, Nicole. "3D human face analysis for recognition applications and motion capture." Thesis, Compiègne, 2020. http://www.theses.fr/2020COMP2542.
This thesis is intended as a geometrical study of the three-dimensional facial surface, whose aim is to provide a framework for applying entities from differential geometry as facial descriptors in face analysis applications, such as face recognition (FR) and facial expression recognition (FER). Indeed, although every visage is unique, all faces are similar and their morphological features are the same for all mankind. Hence, it is essential for face analysis to extract suitable features. All the facial features proposed in this study are based only on the geometrical properties of the facial surface. These geometrical descriptors and the related entities proposed have then been applied to the description of the facial surface in pattern recognition contexts. Indeed, the final goal of this research is to prove that differential geometry is a comprehensive tool oriented to face analysis, and that geometrical features are suitable to describe and compare faces and, generally, to extract relevant information for human face analysis in different practical application fields. Finally, since in the last decades face analysis has gained great attention also for clinical applications, this work focuses on the analysis of musculoskeletal disorders by proposing an objective quantification of facial movements to help maxillofacial surgery and facial motion rehabilitation. At this time, different methods are employed for evaluating facial muscle function. This research work investigates the 3D motion capture system, adopting the Technology, Sport and Health platform located in the Innovation Centre of the University of Technology of Compiègne, in the Biomechanics and Bioengineering Laboratory (BMBI).
Dapogny, Arnaud. "A walk through randomness for face analysis in unconstrained environments." Thesis, Paris 6, 2016. http://www.theses.fr/2016PA066662/document.
Automatic face analysis is key to the development of intelligent human-computer interaction systems and behavior understanding. However, a number of factors make face analysis a difficult problem. These include morphological differences between persons, head pose variations, and the possibility of partial occlusions. In this PhD, we propose a number of adaptations of the so-called Random Forest algorithm to specifically address those problems. Mainly, those improvements consist in:
– The development of a Pairwise Conditional Random Forest framework, which consists in training Random Forests upon pairs of expressive images. Pairwise trees are conditioned on the expression label of the first frame of a pair to reduce the ongoing expression transition variability. Additionally, trees can be conditioned upon a head pose estimate to perform facial expression recognition from an arbitrary viewpoint.
– The design of a hierarchical autoencoder network to model local face texture patterns. The reconstruction error of this network provides a confidence measurement that can be used to weight randomized decision trees trained on spatially defined local subspaces of the face. Thus, we can provide an expression prediction that is robust to partial occlusions.
– Improvements over the very recent Neural Decision Forests framework, including both a simplified training procedure and a new greedy evaluation procedure that dramatically improves the evaluation runtime, with applications to online learning and to deep convolutional neural network-based features for facial expression recognition as well as feature point alignment.
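The "conditional" forest idea (training a separate forest per condition, such as a head-pose estimate, and selecting it at test time) can be sketched as follows; the features, pose bins and labels are stand-ins:

```python
# Sketch of condition-specific random forests: one forest per head-pose bin,
# selected at test time from an upstream pose estimate.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 40))                  # stand-in facial features
pose = rng.integers(0, 3, 300)                  # 3 coarse yaw bins
y = rng.integers(0, 7, 300)                     # 7 expression classes

forests = {b: RandomForestClassifier(n_estimators=50, random_state=0)
                .fit(X[pose == b], y[pose == b]) for b in range(3)}

x_new, pose_new = X[:1], 2                      # pose from an upstream estimator
print(forests[pose_new].predict(x_new))
```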
Mircoli, Alex. "Lexicon- and Learning-based Techniques for Emotion Recognition in Social Contents." Doctoral thesis, Università Politecnica delle Marche, 2019. http://hdl.handle.net/11566/263357.
In recent years, the massive diffusion of social networks has made available large amounts of user-generated content, which often contains authentic information about people's emotions and thoughts. The analysis of such content through emotion recognition provides valuable insights into people's feelings about products, services and events, and makes it possible to extend traditional Business Intelligence processes. To this purpose, in the present work we propose novel techniques for lexicon- and learning-based emotion recognition, in particular for the analysis of social content. Regarding lexicon-based approaches, the present work extends traditional techniques by introducing two algorithms for the disambiguation of polysemous words and the correct analysis of negated sentences. The former algorithm detects the most suitable semantic variant of a polysemous word with respect to its context, by searching for the shortest path in a lexical resource from the polysemous word to its nearby words. The latter detects the right scope of negation through the analysis of parse trees. Moreover, this work describes the design and implementation of an application of the lexicon-based approach, a full-fledged platform for information discovery from multiple social networks, which allows for the analysis of users' opinions and characteristics and is based on Exploratory Data Analysis. Regarding learning-based approaches, a methodology has been defined for the automatic creation of annotated corpora through the analysis of facial expressions in subtitled videos. The methodology is composed of several video preprocessing techniques, with the purpose of filtering out irrelevant frames, and a facial expression classifier, which can be implemented using two different approaches. The proposed techniques have been experimentally evaluated using several real-world datasets, and the results are promising.
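The shortest-path disambiguation step can be sketched on a toy lexical graph: each sense of a polysemous word is a node, and the sense closest to the context words wins. The graph and sense labels below are hypothetical, not the thesis's actual lexical resource:

```python
# Sketch of shortest-path word-sense disambiguation on a toy lexical graph.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("bank#finance", "money"), ("money", "loan"),
    ("bank#river", "water"), ("water", "shore"),
])
context = ["loan"]

def best_sense(senses, context_words, graph):
    def cost(sense):
        dists = [nx.shortest_path_length(graph, sense, w)
                 for w in context_words if nx.has_path(graph, sense, w)]
        return min(dists) if dists else float("inf")
    return min(senses, key=cost)

print(best_sense(["bank#finance", "bank#river"], context, G))  # bank#finance
```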
Rivera, Samuel. "Computational Methods for the Study of Face Perception." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1354650651.
Cheng, Xin. "Nonrigid face alignment for unknown subject in video." Thesis, Queensland University of Technology, 2013. https://eprints.qut.edu.au/65338/1/Xin_Cheng_Thesis.pdf.
Full textMascaró, Oliver Miquel. "Expresión de emociones de alegría para personajes virtuales mediante la risa y la sonrisa." Doctoral thesis, Universitat de les Illes Balears, 2014. http://hdl.handle.net/10803/145970.
Nowadays, facial animation is one of the most relevant research topics, still unresolved both in the field of human-machine interaction and in computer graphics. Expressions of joy associated with laughter and smiling are a key part of these fields, mainly due to their meaning and importance. In this thesis an approach to the representation of different types of laughter in facial animation is presented, along with a new method able to reproduce all these types. The method is validated by recreating movie sequences and by using databases of generic and specific facial smile expressions. Additionally, a proprietary database that collects the different types of laughter classified and generated in this work is created. Based on this database, the most representative expressions of every laugh and smile considered in the study are generated.
Bezerra, Giuliana Silva. "A framework for investigating the use of face features to identify spontaneous emotions." Universidade Federal do Rio Grande do Norte, 2014. http://repositorio.ufrn.br/handle/123456789/19595.
Emotion-based analysis has raised a lot of interest, particularly in areas such as forensics, medicine, music, psychology, and human-machine interfaces. Following this trend, the use of facial analysis (either automatic or human-based) is the most commonly investigated subject, since this type of data can easily be collected and is well accepted in the literature as a metric for inference of emotional states. Despite this popularity, due to several constraints found in real-world scenarios (e.g. lighting, complex backgrounds, facial hair and so on), accurately obtaining affective information from the face automatically is a very challenging accomplishment. This work presents a framework which aims to analyse emotional experiences through naturally generated facial expressions. Our main contribution is a new 4-dimensional model that describes emotional experiences in terms of appraisal, facial expressions, mood, and subjective experiences. In addition, we present an experiment using a new protocol proposed to obtain spontaneous emotional reactions. The results suggest that the initial emotional state described by the participants of the experiment was different from that described after exposure to the eliciting stimulus, showing that the stimuli used were capable of inducing the expected emotional states in most individuals. Moreover, our results point out that spontaneous facial reactions to emotions are very different from prototypic expressions, due to the lack of expressiveness in the latter.
Lemaire, Pierre. "Contributions à l'analyse de visages en 3D : approche régions, approche holistique et étude de dégradations." PhD thesis, Ecole Centrale de Lyon, 2013. http://tel.archives-ouvertes.fr/tel-01002114.