Academic literature on the topic 'Facial expression analysis'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Facial expression analysis.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Facial expression analysis"

1

Matsumoto, David, and Paul Ekman. "Facial expression analysis." Scholarpedia 3, no. 5 (2008): 4237. http://dx.doi.org/10.4249/scholarpedia.4237.

2

Kalburgi, Riya, Punit Solanki, Rounak Suthar, and Saurabh Suman. "Expression Analysis System." International Journal of Engineering and Advanced Technology 10, no. 3 (February 28, 2021): 13–15. http://dx.doi.org/10.35940/ijeat.c2128.0210321.

Abstract:
Expression is the most basic personality trait of an individual. Expressions, ubiquitous to humans from all cultures, can be pivotal in analyzing personality across boundaries. Analyzing changes in an individual's expression can bolster the process of deriving his/her personality traits, underscoring key reactions such as anger, happiness, sadness, and so on. This paper aims to employ neural network algorithms to predict the personality traits of an individual from his/her facial expressions. A methodology to analyze the personality traits of the individual by periodic monitoring of changes in facial expressions is presented. The proposed system exploits neural network strategies to analyze the facial expressions of an individual under constant observation. This monitoring is done with the help of OpenCV, which captures the facial expression at an interval of 15 seconds. Thousands of images per expression are used to train the model to aptly distinguish between expressions using the standard neural network methodology of forward and backward propagation. The identified expression is then fed to a derivative system which plots a graph highlighting the changes in expression. The graph acts as the crux of the proposed system. The project is important as an alternative to manual monitoring, which is error-prone and subjective in nature.
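The capture-and-classify loop described in this abstract can be sketched in a few lines. A minimal sketch follows, assuming a hypothetical pretrained Keras model file ('expression_cnn.h5' with 48×48 grayscale input) and an illustrative label order; the Haar cascade path is the stock OpenCV asset.

```python
import time

import cv2
import numpy as np
import tensorflow as tf

# Assumed label order and model file; a real system would train its own CNN.
LABELS = ["anger", "happiness", "sadness", "surprise", "neutral"]
model = tf.keras.models.load_model("expression_cnn.h5")  # hypothetical model
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
history = []  # (timestamp, label) pairs, later plotted as an expression graph

while len(history) < 40:  # observe the subject for roughly ten minutes
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    for (x, y, w, h) in faces[:1]:  # monitor a single subject
        roi = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
        probs = model.predict(roi[None, :, :, None], verbose=0)[0]
        history.append((time.time(), LABELS[int(np.argmax(probs))]))
    time.sleep(15)  # the paper samples one expression every 15 seconds

cap.release()
print(history)
```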
3

Sebe, N., M. S. Lew, Y. Sun, I. Cohen, T. Gevers, and T. S. Huang. "Authentic facial expression analysis." Image and Vision Computing 25, no. 12 (December 2007): 1856–63. http://dx.doi.org/10.1016/j.imavis.2005.12.021.

4

Zulhijah Awang Jesemi, Dayang Nur, Hamimah Ujir, Irwandi Hipiny, and Sarah Flora Samson Juan. "The analysis of facial feature deformation using optical flow algorithm." Indonesian Journal of Electrical Engineering and Computer Science 15, no. 2 (August 1, 2019): 769. http://dx.doi.org/10.11591/ijeecs.v15.i2.pp769-777.

Abstract:
Facial features deform according to the intended facial expression. Specific facial features are associated with specific facial expressions; for example, happiness involves deformation of the mouth. This paper presents a study of facial feature deformation for each facial expression using an optical flow algorithm, with the face segmented into three different regions of interest. The deformation of facial features shows the relation between facial features and facial expressions. Based on the experiments, the deformations of the eyes and mouth are significant in all expressions except happiness, for which the cheeks and mouth are the significant regions. This work also suggests that different facial features' intensities vary in the way they contribute to the recognition of different facial expression intensities. The maximum magnitude across all expressions is shown by the mouth for the surprise expression, at 9×10⁻⁴, while the minimum magnitude is shown by the mouth for the angry expression, at 0.4×10⁻⁴.
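The region-wise deformation measurement described here is straightforward to approximate with OpenCV's dense optical flow. A minimal sketch, assuming two aligned grayscale face crops ('neutral.png' and 'apex.png') and rough illustrative region boxes rather than the paper's exact segmentation:

```python
import cv2
import numpy as np

prev = cv2.imread("neutral.png", cv2.IMREAD_GRAYSCALE)  # assumed inputs
curr = cv2.imread("apex.png", cv2.IMREAD_GRAYSCALE)
flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)
mag = np.linalg.norm(flow, axis=2)  # per-pixel motion magnitude

h, w = mag.shape
regions = {  # illustrative boxes, not the paper's exact segmentation
    "eyes":   mag[int(0.20 * h):int(0.45 * h), :],
    "cheeks": mag[int(0.45 * h):int(0.65 * h), :],
    "mouth":  mag[int(0.65 * h):int(0.90 * h), int(0.2 * w):int(0.8 * w)],
}
for name, patch in regions.items():
    print(f"{name}: mean deformation magnitude = {patch.mean():.4e}")
```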
5

BUCIU, IOAN, and IOAN NAFORNITA. "FEATURE EXTRACTION THROUGH CROSS-PHASE CONGRUENCY FOR FACIAL EXPRESSION ANALYSIS." International Journal of Pattern Recognition and Artificial Intelligence 23, no. 03 (May 2009): 617–35. http://dx.doi.org/10.1142/s021800140900717x.

Abstract:
Human face analysis has attracted a large number of researchers from various fields, such as computer vision, image processing, neurophysiology and psychology. One particular aspect of human face analysis is the facial expression recognition task. A novel method based on phase congruency for extracting the facial features used in the facial expression classification procedure is developed. Considering a set of image samples of humans displaying various expressions, this new approach computes the phase congruency map between the samples. The analysis is performed in the frequency space, where the similarity (or dissimilarity) between sample phases is measured to form discriminant features. The experiments were run using samples from two facial expression databases. To assess the method's performance, the technique is compared to state-of-the-art techniques for classifying facial expressions, such as Principal Component Analysis (PCA), Independent Component Analysis (ICA), Linear Discriminant Analysis (LDA), and Gabor jets. The features extracted by the aforementioned techniques are further classified using two classifiers: a distance-based classifier and a Support Vector Machine-based classifier. Experiments reveal superior facial expression recognition performance for the proposed approach with respect to the other techniques.
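As a rough illustration of the underlying idea (not the authors' exact cross-phase congruency formulation), one can measure how well the Fourier phases of two face images agree:

```python
import numpy as np
from numpy.fft import fft2

def phase_agreement(img_a: np.ndarray, img_b: np.ndarray) -> float:
    """Mean cosine of the per-frequency phase difference, in [-1, 1]."""
    pa, pb = np.angle(fft2(img_a)), np.angle(fft2(img_b))
    return float(np.mean(np.cos(pa - pb)))

rng = np.random.default_rng(0)
face = rng.random((64, 64))
print(phase_agreement(face, face))                  # 1.0 for identical phases
print(phase_agreement(face, rng.random((64, 64))))  # near 0 for unrelated ones
```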
6

LEE, CHAN-SU, and DIMITRIS SAMARAS. "ANALYSIS AND CONTROL OF FACIAL EXPRESSIONS USING DECOMPOSABLE NONLINEAR GENERATIVE MODELS." International Journal of Pattern Recognition and Artificial Intelligence 28, no. 05 (July 31, 2014): 1456009. http://dx.doi.org/10.1142/s0218001414560096.

Abstract:
Facial expressions convey personal characteristics and subtle emotional states. This paper presents a new framework for modeling subtle facial motions of different people with different types of expressions from high-resolution facial expression tracking data to synthesize new stylized subtle facial expressions. A conceptual facial motion manifold is used for a unified representation of facial motion dynamics from three-dimensional (3D) high-resolution facial motions as well as from two-dimensional (2D) low-resolution facial motions. Variant subtle facial motions in different people with different expressions are modeled by nonlinear mappings from the embedded conceptual manifold to input facial motions using empirical kernel maps. We represent facial expressions by a factorized nonlinear generative model, which decomposes expression style factors and expression type factors from different people with multiple expressions. We also provide a mechanism to control the high-resolution facial motion model from low-resolution facial video sequence tracking and analysis. Using the decomposable generative model with a common motion manifold embedding, we can estimate parameters to control 3D high resolution facial expressions from 2D tracking results, which allows performance-driven control of high-resolution facial expressions.
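A toy sketch of the central mechanism, under heavy simplification: frames of an expression cycle are embedded on a conceptual manifold (a unit circle), an empirical RBF kernel map learns the nonlinear mapping from the manifold to (here synthetic) facial motion vectors, and motion can then be synthesized at unseen manifold positions. All data and dimensions below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
T, D = 60, 30  # frames in one expression cycle, motion-vector dimension
t = np.linspace(0, 2 * np.pi, T, endpoint=False)
manifold = np.stack([np.cos(t), np.sin(t)], axis=1)  # conceptual embedding

# Synthetic stand-in for tracked facial motion vectors (nonlinear in t).
motion = (rng.random((D, 2)) @ manifold.T + 0.3 * np.sin(3 * t)).T

def kernel_map(points, centers, gamma=2.0):
    """Empirical RBF kernel map between manifold points."""
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K = kernel_map(manifold, manifold)
W = np.linalg.solve(K + 1e-3 * np.eye(T), motion)  # ridge-regularized fit

# Synthesize a motion vector at an unseen manifold position (phase 0.5 rad).
new_point = np.array([[np.cos(0.5), np.sin(0.5)]])
synthesized = kernel_map(new_point, manifold) @ W
print(synthesized.shape)  # (1, 30): one new facial motion vector
```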
7

Ujir, Hamimah, Irwandi Hipiny, and D. N.F. Awang Iskandar. "Facial Action Units Analysis using Rule-Based Algorithm." International Journal of Engineering & Technology 7, no. 3.20 (September 1, 2018): 284. http://dx.doi.org/10.14419/ijet.v7i3.20.19167.

Abstract:
Most works on quantifying facial deformation are based on action units (AUs) provided by the Facial Action Coding System (FACS), which describes facial expressions in terms of forty-six component movements. An AU corresponds to the movement of individual facial muscles. This paper presents a rule-based approach to classifying AUs that depend on certain facial features. This work only covers deformation of facial features based on the posed Happy and Sad expressions obtained from the BU-4DFE database. Different studies refer to different combinations of AUs that form the Happy and Sad expressions. According to the FACS rules outlined in this work, an AU has more than one facial property that needs to be observed. An intensity comparison and analysis of the AUs involved in the Sad and Happy expressions are presented. Additionally, a dynamic analysis of AUs is conducted to determine the temporal segments of expressions, i.e. the duration of onset, apex and offset. Our findings show that AU15, for the sad expression, and AU12, for the happy expression, show consistent facial feature deformation across all properties during the expression period. However, for AU1 and AU4, the properties' intensities differ during the expression period.
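A rule-based AU check of this kind reduces to comparing facial-feature measurements against a neutral baseline. The sketch below is illustrative only; the landmark layout, rule, and threshold are assumptions, not the paper's exact FACS rules.

```python
import numpy as np

def check_lip_corner_aus(neutral: np.ndarray, apex: np.ndarray, thresh=0.03):
    """Detect AU12/AU15 from vertical lip-corner displacement.

    Both arrays hold (x, y) landmarks with rows [left_corner, right_corner],
    normalized by inter-ocular distance; image y grows downward.
    """
    dy = (apex[:2, 1] - neutral[:2, 1]).mean()  # mean corner movement
    if dy < -thresh:
        return ["AU12 (lip corner puller, typical of Happy)"]
    if dy > thresh:
        return ["AU15 (lip corner depressor, typical of Sad)"]
    return []

neutral = np.array([[0.35, 0.70], [0.65, 0.70]])
smile   = np.array([[0.33, 0.64], [0.67, 0.64]])  # corners pulled upward
print(check_lip_corner_aus(neutral, smile))       # -> AU12 detected
```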
8

Park, Sung, Seong Won Lee, and Mincheol Whang. "The Analysis of Emotion Authenticity Based on Facial Micromovements." Sensors 21, no. 13 (July 5, 2021): 4616. http://dx.doi.org/10.3390/s21134616.

Abstract:
People tend to display fake expressions to conceal their true feelings. False expressions are observable by facial micromovements that occur for less than a second. Systems designed to recognize facial expressions (e.g., social robots, recognition systems for the blind, monitoring systems for drivers) may better understand the user’s intent by identifying the authenticity of the expression. The present study investigated the characteristics of real and fake facial expressions of representative emotions (happiness, contentment, anger, and sadness) in a two-dimensional emotion model. Participants viewed a series of visual stimuli designed to induce real or fake emotions and were signaled to produce a facial expression at a set time. From the participant’s expression data, feature variables (i.e., the degree and variance of movement, and vibration level) involving the facial micromovements at the onset of the expression were analyzed. The results indicated significant differences in the feature variables between the real and fake expression conditions. The differences varied according to facial regions as a function of emotions. This study provides appraisal criteria for identifying the authenticity of facial expressions that are applicable to future research and the design of emotion recognition systems.
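The three feature variables named in the abstract can be illustrated on a landmark trajectory. In the sketch below the trajectory is synthetic, and the frequency band used for the "vibration level" is an assumption, not the authors' exact definition.

```python
import numpy as np

fps = 30
rng = np.random.default_rng(2)
traj = np.cumsum(rng.normal(0, 0.01, size=(90, 2)), axis=0)  # 3 s of (x, y)

step = np.linalg.norm(np.diff(traj, axis=0), axis=1)  # frame-to-frame motion
degree = step.mean()      # degree of movement
variance = step.var()     # variance of movement

freqs = np.fft.rfftfreq(step.size, d=1 / fps)
power = np.abs(np.fft.rfft(step - step.mean())) ** 2
vibration = power[freqs > 5].sum() / power.sum()  # share of fast oscillation

print(f"degree={degree:.4f}, variance={variance:.6f}, vibration={vibration:.3f}")
```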
9

Jeyalaksshmi, S., and S. Prasanna. "Simultaneous evolutionary neural network based automated video based facial expression analysis." International Journal of Engineering & Technology 7, no. 1.1 (December 21, 2017): 125. http://dx.doi.org/10.14419/ijet.v7i1.1.9211.

Abstract:
In real-life scenarios, facial expressions and emotions are responses to external and internal events. In Human Computer Interaction (HCI), recognition of the end user's expressions and emotions from video streaming plays a very important role. Such systems are required to track the dynamic changes in human face movements quickly in order to deliver the required response. In real-time applications, Facial Expression Recognition (FER) is very helpful, for example in facial-expression-based physical fatigue detection such as driver fatigue detection for preventing road accidents. Expression-based physical fatigue analysis is out of the scope of this work; instead, a Simultaneous Evolutionary Neural Network (SENN) classification scheme is proposed for recognising human emotions or expressions. In this work, facial landmarks are first automatically detected and tracked in videos, with the face detected using an enhanced AdaBoost algorithm with Haar features. Then, to describe facial expression modifications, geometric features are extracted together with Local Binary Patterns (LBP), which improve detection accuracy and have a much lower dimensionality. With the aim of examining temporal facial expression modifications, SENN probabilistic classifiers are applied, which examine the facial expressions in individual frames and then propagate the likelihoods over the course of the video to capture the temporal dynamics of expressions such as gladness, sadness, anger, and fear. The experimental results show that the proposed SENN scheme attains better results than existing recognition schemes such as the Time-Delay Neural Network with Support Vector Regression (TDNN-SVR) and SVR.
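The front end of this pipeline (Haar-based face detection followed by LBP texture features) can be sketched as follows; the SENN classifier itself is not shown and 'frame.png' is an assumed input frame.

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # assumed video frame

for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5)[:1]:
    face = cv2.resize(gray[y:y + h, x:x + w], (96, 96))
    lbp = local_binary_pattern(face, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    # 'hist' is the low-dimensional texture descriptor that, together with
    # geometric features, would be fed to the SENN classifier per frame.
    print(hist.round(3))
```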
10

Kulkarni, Praveen, and Rajesh T. M. "Analysis on techniques used to recognize and identifying the Human emotions." International Journal of Electrical and Computer Engineering (IJECE) 10, no. 3 (June 1, 2020): 3307. http://dx.doi.org/10.11591/ijece.v10i3.pp3307-3314.

Abstract:
Facial expression is a major channel of non-verbal communication in day-to-day life. Statistical analyses suggest that only 7 percent of a message is conveyed by verbal communication, while 55 percent is transmitted by facial expression. Emotional expression has been a research subject of physiology since Darwin's work on emotional expression in the 19th century. According to psychological theory, human emotion is classified majorly into six emotions: happiness, fear, anger, surprise, disgust, and sadness. Facial expressions, together with the nature of speech, play a foremost role in expressing these emotions. Researchers later developed a system based on the anatomy of the face, named the Facial Action Coding System (FACS), in the 1970s. Ever since the development of FACS there has been rapid progress in research in the domain of emotion recognition. This work is intended to give a thorough comparative analysis of the various techniques and methods that have been applied to recognize and identify human emotions. The results of this analysis will help to identify the proper and suitable techniques, algorithms, and methodologies for future research directions. In this paper an extensive analysis of the various recognition techniques used to identify the complexity in recognizing facial expressions is presented. This work will also help researchers and scholars to ease the problem of choosing techniques in the facial expression identification domain.

Dissertations / Theses on the topic "Facial expression analysis"

1

Baltrušaitis, Tadas. "Automatic facial expression analysis." Thesis, University of Cambridge, 2014. https://www.repository.cam.ac.uk/handle/1810/245253.

Abstract:
Humans spend a large amount of their time interacting with computers of one type or another. However, computers are emotionally blind and indifferent to the affective states of their users. Human-computer interaction which does not consider emotions, ignores a whole channel of available information. Faces contain a large portion of our emotionally expressive behaviour. We use facial expressions to display our emotional states and to manage our interactions. Furthermore, we express and read emotions in faces effortlessly. However, automatic understanding of facial expressions is a very difficult task computationally, especially in the presence of highly variable pose, expression and illumination. My work furthers the field of automatic facial expression tracking by tackling these issues, bringing emotionally aware computing closer to reality. Firstly, I present an in-depth analysis of the Constrained Local Model (CLM) for facial expression and head pose tracking. I propose a number of extensions that make location of facial features more accurate. Secondly, I introduce a 3D Constrained Local Model (CLM-Z) which takes full advantage of depth information available from various range scanners. CLM-Z is robust to changes in illumination and shows better facial tracking performance. Thirdly, I present the Constrained Local Neural Field (CLNF), a novel instance of CLM that deals with the issues of facial tracking in complex scenes. It achieves this through the use of a novel landmark detector and a novel CLM fitting algorithm. CLNF outperforms state-of-the-art models for facial tracking in presence of difficult illumination and varying pose. Lastly, I demonstrate how tracked facial expressions can be used for emotion inference from videos. I also show how the tools developed for facial tracking can be applied to emotion inference in music.
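The CLM, CLM-Z and CLNF trackers from this thesis were released in the OpenFace toolkit. As a minimal stand-in illustration of per-frame landmark tracking, the sketch below uses dlib's 68-point predictor (the .dat model file is a standard dlib asset that must be downloaded separately; 'sequence.mp4' is an assumed input).

```python
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

cap = cv2.VideoCapture("sequence.mp4")  # assumed input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for rect in list(detector(gray, 1))[:1]:
        shape = predictor(gray, rect)
        pts = np.array([[p.x, p.y] for p in shape.parts()])
        print(pts.shape)  # (68, 2) landmark positions for this frame
cap.release()
```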
2

Li, Jingting. "Facial Micro-Expression Analysis." Thesis, CentraleSupélec, 2019. http://www.theses.fr/2019CSUP0007.

Abstract:
Micro-expressions (MEs) are very important nonverbal communication clues. However, due to their local and brief nature, spotting them is challenging. In this thesis, we address this problem by using a dedicated local and temporal pattern (LTP) of facial movement. This pattern has a specific shape (S-pattern) when an ME is displayed. Thus, using a classical classification algorithm (SVM), MEs are distinguished from other facial movements. We also propose a global final fusion analysis on the whole face to improve the distinction between ME (local) and head (global) movements. However, the learning of S-patterns is limited by the small number of ME databases and the low volume of ME samples. Hammerstein models (HMs) are known to be a good approximation of muscle movements. By approximating each S-pattern with an HM, we can both filter outliers and generate new similar S-patterns. In this way, we perform data augmentation for the S-pattern training dataset and improve the ability to differentiate MEs from other facial movements. In the first ME spotting challenge of MEGC2019, we took part in building the new result evaluation method. In addition, we applied our method to spotting MEs in long videos and provided the baseline result for the challenge. The spotting results, performed on CASME I, CASME II, SAMM and CAS(ME)2, show that our proposed LTP outperforms the most popular spotting method in terms of F1-score. Adding the fusion process and data augmentation further improves the spotting performance.
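A toy version of the spotting idea can be built by classifying short windows of a motion-magnitude signal with an SVM. Here the S-patterns are synthetic rise-then-fall bumps and the distractors are drifting, head-motion-like signals; real inputs would be optical-flow magnitudes on local facial regions.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
W = 20  # window length in frames

def s_pattern():  # brief rise-then-fall bump, micro-expression-like
    return np.hanning(W) + rng.normal(0, 0.05, W)

def other_motion():  # slow drift plus noise, head-movement-like
    return np.cumsum(rng.normal(0, 0.05, W)) + rng.normal(0, 0.05, W)

X = np.stack([s_pattern() for _ in range(100)] +
             [other_motion() for _ in range(100)])
y = np.array([1] * 100 + [0] * 100)

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(np.stack([s_pattern(), other_motion()])))  # typically [1 0]
```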
3

Carter, Jeffrey R. "Facial expression analysis in schizophrenia." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/NQ58398.pdf.

4

Munasinghe, Kankanamge Sarasi Madushika. "Facial analysis models for face and facial expression recognition." Thesis, Queensland University of Technology, 2018. https://eprints.qut.edu.au/118197/1/Sarasi%20Madushika_Munasinghe%20Kankanamge_Thesis.pdf.

Abstract:
This thesis examines the research and development of new approaches for face and facial expression recognition within the fields of computer vision and biometrics. Expression variation is a challenging issue in current face recognition systems and current approaches are not capable of recognizing facial variations effectively within human-computer interfaces, security and access control applications. This thesis presents new contributions for performing face and expression recognition simultaneously; face recognition in the wild; and facial expression recognition in challenging environments. The research findings include the development of new factor analysis and deep learning approaches which can better handle different facial variations.
5

Shang, Lifeng (尚利峰). "Facial expression analysis with graphical models." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2012. http://hub.hku.hk/bib/B47849484.

Abstract:
Facial expression recognition has become an active research topic in recent years due to its applications in human computer interfaces and data-driven animation. In this thesis, we focus on the problem of how to effectively use domain, temporal and categorical information of facial expressions to help computers understand human emotions. Over the past decades, many techniques (such as neural networks, Gaussian processes, support vector machines, etc.) have been applied to facial expression analysis. Recently graphical models have emerged as a general framework for applying probabilistic models. They provide a natural framework for describing the generative process of facial expressions. However, these models often suffer from too many latent variables or too complex model structures, which makes learning and inference difficult. In this thesis, we analyze the deformation of facial expressions by introducing some recently developed graphical models (e.g. latent topic models) and by improving the recognition ability of some already widely used models (e.g. HMM). We develop three different graphical models with different representational assumptions: categories being represented by prototypes, by sets of exemplars, and by topics in between. Our first model incorporates exemplar-based representation into graphical models. To further improve the computational efficiency of the proposed model, we build it in a local linear subspace constructed by principal component analysis. The second model is an extension of the recently developed topic model, introducing temporal and categorical information into the Latent Dirichlet Allocation model. In our discriminative temporal topic model (DTTM), temporal information is integrated by placing an asymmetric Dirichlet prior over document-topic distributions. The discriminative ability is improved by a supervised term weighting scheme. We describe the resulting DTTM in detail and show how it can be applied to facial expression recognition. Our third model is a nonparametric discriminative variation of HMM. HMM can be viewed as a prototype model, with transition parameters acting as the prototype for one category. To increase the discrimination ability of HMM at both class level and state level, we introduce linear interpolation with maximum entropy (LIME) and membership coefficients to HMM. Furthermore, we present a general formula for output probability estimation, which provides a way to develop new HMMs. Experimental results show that the performance of some existing HMMs can be improved by integrating the proposed nonparametric kernel method and parameter adaptation formula. In conclusion, this thesis develops three different graphical models by (i) combining exemplar-based models with graphical models, (ii) introducing temporal and categorical information into the Latent Dirichlet Allocation (LDA) topic model, and (iii) increasing the discrimination ability of HMM at both hidden state level and class level.
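The prototype-style HMM classification that the third model builds on can be sketched with hmmlearn: one Gaussian HMM per expression class, and classification by maximum log-likelihood. The feature sequences below are synthetic stand-ins for facial deformation descriptors.

```python
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(4)

def make_sequences(offset, n=20, T=30, d=6):
    """Synthetic stand-ins for per-frame facial deformation descriptors."""
    return [offset + np.cumsum(rng.normal(0, 0.1, (T, d)), axis=0)
            for _ in range(n)]

classes = {"happy": make_sequences(0.0), "sad": make_sequences(1.0)}
models = {}
for label, seqs in classes.items():
    model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=20)
    model.fit(np.concatenate(seqs), lengths=[len(s) for s in seqs])
    models[label] = model

test = make_sequences(0.0, n=1)[0]  # an unseen "happy"-like sequence
print(max(models, key=lambda k: models[k].score(test)))  # typically 'happy'
```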
6

Feffer, Michael A. (Michael Anthony). "Personalized machine learning for facial expression analysis." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/119763.

Abstract:
For this MEng Thesis Project, I investigated the personalization of deep convolutional networks for facial expression analysis. While prior work focused on population-based ("one-size-fits-all") models for prediction of affective states (valence/arousal), I constructed personalized versions of these models to improve upon state-of-the-art general models through solving a domain adaptation problem. This was done by starting with pre-trained deep models for face analysis and fine-tuning the last layers to specific subjects or subpopulations. For prediction, a "mixture of experts" (MoE) solution was employed to select the proper outputs based on the given input. The research questions answered in this project are: (1) What are the effects of model personalization on the estimation of valence and arousal from faces? (2) What is the amount of (un)supervised data needed to reach a target performance? Models produced in this research provide the foundation of a novel tool for personalized real-time estimation of target metrics.
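The personalization recipe (freeze a pre-trained backbone, fine-tune the last layers on one subject) can be sketched in PyTorch. Everything here is a placeholder: the backbone stands in for a pre-trained face-analysis CNN and the subject data is random.

```python
import torch
import torch.nn as nn

backbone = nn.Sequential(  # stand-in for a pre-trained face-analysis CNN
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten())
head = nn.Linear(16, 2)    # valence / arousal outputs
model = nn.Sequential(backbone, head)

for p in backbone.parameters():  # freeze the population-level layers
    p.requires_grad = False

opt = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

subject_x = torch.randn(32, 1, 48, 48)  # hypothetical subject images
subject_y = torch.rand(32, 2) * 2 - 1   # valence/arousal targets in [-1, 1]

for _ in range(10):  # few-shot adaptation to one subject
    opt.zero_grad()
    loss = loss_fn(model(subject_x), subject_y)
    loss.backward()
    opt.step()
print(float(loss))
```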
7

Shenoy, A. "Computational analysis of facial expressions." Thesis, University of Hertfordshire, 2010. http://hdl.handle.net/2299/4359.

Abstract:
This PhD work constitutes a series of inter-disciplinary studies that use biologically plausible computational techniques and experiments with human subjects in analyzing facial expressions. The performance of the computational models and human subjects in terms of accuracy and response time is analyzed. The computational models process images in three stages: preprocessing, dimensionality reduction and classification. The pre-processing of face expression images includes feature extraction and dimensionality reduction. Gabor filters are used for feature extraction as they are the closest biologically plausible computational method. Various dimensionality reduction methods are used: Principal Component Analysis (PCA), Curvilinear Component Analysis (CCA) and Fisher Linear Discriminant (FLD), followed by classification with Support Vector Machines (SVM) and Linear Discriminant Analysis (LDA). Six basic prototypical facial expressions that are universally accepted are used for the analysis: angry, happy, fear, sad, surprise and disgust. The performance of the computational models in classifying each expression category is compared with that of the human subjects. The Effect size and Encoding face enable the discrimination of the areas of the face specific to a particular expression. The Effect size in particular emphasizes the areas of the face that are involved during the production of an expression. This concept of using Effect size on faces has not been reported previously in the literature and has shown very interesting results. The detailed PCA analysis showed the significant PCA components specific to each of the six basic prototypical expressions. An important observation from this analysis was that with Gabor filtering followed by nonlinear CCA for dimensionality reduction, the dataset vector size may be reduced to a very small number, in most cases just 5 components. The hypothesis that the average response time (RT) for human subjects in classifying the different expressions is analogous to the distance of the data points from the classification hyperplane was verified. This means the harder a facial expression is for human subjects to classify, the closer it is to the classifier's hyperplane. A bi-variate correlation analysis of the distance measure and the average RT suggested a significant anti-correlation. Signal detection theory (SDT), via d-prime, determined how well the model or the human subjects could distinguish an expressive face from a neutral one. On comparison, human subjects are better at classifying surprise, disgust, fear, and sad expressions, while the RAW computational model is better able to distinguish angry and happy expressions. To summarize, there seem to be some similarities between the computational models and human subjects in the classification process.
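The three-stage pipeline can be condensed into a few lines with scikit-image and scikit-learn; the data below is synthetic texture rather than face images, and the tiny Gabor bank is illustrative.

```python
import numpy as np
from skimage.filters import gabor
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(5)

def sample(label):
    """Synthetic 'face': class 1 carries a texture that class 0 lacks."""
    xx = np.tile(np.arange(32), (32, 1))
    texture = np.sin(2 * np.pi * 0.2 * xx) if label else np.zeros((32, 32))
    return texture + rng.normal(0, 0.3, (32, 32))

def gabor_features(img):
    feats = []
    for freq in (0.1, 0.2, 0.3):  # tiny illustrative Gabor bank
        real, _ = gabor(img, frequency=freq)
        feats += [real.mean(), real.var()]
    return feats

X = np.array([gabor_features(sample(label))
              for label in (0, 1) for _ in range(20)])
y = np.repeat([0, 1], 20)  # two expression classes

clf = make_pipeline(PCA(n_components=5), SVC()).fit(X, y)
print(clf.score(X, y))
```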
8

Wang, Jing. "Reconstruction and Analysis of 3D Individualized Facial Expressions." Thesis, Université d'Ottawa / University of Ottawa, 2015. http://hdl.handle.net/10393/32588.

Abstract:
This thesis proposes a new way to analyze facial expressions through 3D scanned faces of real-life people. The expression analysis is based on learning facial motion vectors, which are the differences between a neutral face and a face with an expression. Several expression analyses are based on real-life face databases, such as the 2D image-based Cohn-Kanade AU-Coded Facial Expression Database and the Binghamton University 3D Facial Expression Database. To handle large pose variations and increase the general understanding of facial behavior, a 2D image-based expression database is not enough, while the Binghamton University 3D Facial Expression Database is mainly used for facial expression recognition and makes it difficult to compare, resolve, and extend problems related to detailed 3D facial expression analysis. Our work aims to find a new and intuitive way of visualizing the detailed point-by-point movements of a 3D face model for a facial expression. We created our own detailed 3D facial expression database, in which each expression model has been processed to have the same structure so that differences between people can be compared for a given expression. The first step is to obtain identically structured but individually shaped face models. All head models are recreated by deforming a generic model to adapt to a laser-scanned individualized face shape at both coarse and fine levels. We repeat this recreation method on different human subjects to establish the database. The second step is expression cloning. The motion vectors are obtained by subtracting two head models of the same person with and without an expression, and the extracted facial motion vectors are applied onto a different human subject's neutral face. Facial expression cloning proves to be robust and fast as well as easy to use. The last step is analyzing the facial motion vectors obtained in the second step. First, we transferred several human subjects' expressions onto a single neutral face. Then the analysis compares different expression pairs at two levels: whole-face surface analysis and facial muscle analysis. Through our work, where smiling was chosen for the experiment, we find our face-scanning approach a good way to visualize how differently people move their facial muscles for the same expression. People smile in a similar manner, moving their mouths and cheeks in similar orientations, but each person shows her/his own unique way of moving. The difference between individual smiles lies in the differences of the movements they make.
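The expression-cloning step reduces to per-vertex vector arithmetic once all head models share the same structure, as the deformation step above guarantees. A toy sketch on random vertex data:

```python
import numpy as np

rng = np.random.default_rng(6)
V = 5000  # vertex count shared by all recreated head models

neutral_a = rng.random((V, 3))                       # subject A, neutral
smile_a = neutral_a + rng.normal(0, 0.002, (V, 3))   # subject A, smiling

motion_vectors = smile_a - neutral_a  # per-vertex expression motion

neutral_b = rng.random((V, 3))        # subject B, neutral
smile_b = neutral_b + motion_vectors  # B now "smiles" with A's motion
print(np.abs(smile_b - neutral_b).max())
```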
9

Mourão, André Belchior. "Robust facial expression analysis for affect-based interaction." Master's thesis, Faculdade de Ciências e Tecnologia, 2012. http://hdl.handle.net/10362/8292.

Abstract:
Dissertation submitted for the degree of Master in Computer Engineering.
Interaction is moving towards new and more natural approaches. Human Computer Interaction (HCI) is increasingly expanding towards more modalities of human expression such as gestures, body movements and other natural interactions. In this thesis, we propose to extend existing interaction paradigms by including the face as an affect-based input. Affective interaction methods can greatly change the way computers interact with humans; these methods can detect displays of user moods, such as frustration or engagement, and adapt the experience accordingly. We have created an affect-based framework that encompasses face detection, face recognition and facial expression recognition, and applied it in a computer game. ImEmotion is a two-player game where the player who best mimics an expression wins. The game combines face detection with facial expression recognition to recognize and rate an expression in real time. A controlled evaluation of the framework's algorithms and a game trial with 46 users showed the potential of the framework and the success of affect-based interaction using facial expressions in the game. Despite the novelty of the interaction approach and the limitations of computer vision algorithms, players adapted and became competitive easily.
10

Yin, Lijun. "Facial expression analysis and synthesis for model based coding." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0011/NQ59702.pdf.


Books on the topic "Facial expression analysis"

1

Skinner, Martin. Facial asymmetry in emotional expression: A meta-analysis of research. Leicester: British Psychological Society, 1991.

2

Chang, Wei-Lin Melody. Face and face practices in Chinese talk-in-interaction: A study in interactional pragmatics. Sheffield, UK: Equinox Publishing Ltd, 2015.

3

Esposito, Anna, and Robert Vích, eds. Cross-Modal Analysis of Speech, Gestures, Gaze and Facial Expressions. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-03320-9.

4

Gould, Allison Karen. Discrimination of drawn emotional facial expressions using a grid analysis. Sudbury, Ont: Laurentian University, Department of Psychology, 1992.

5

Osório, Flávia de Lima. Facial Expressions: Recognition Technologies and Analysis. Nova Science Publishers, Incorporated, 2019.

6

Diogo, Rui, and Sharlene E. Santana. Evolution of Facial Musculature. Oxford University Press, 2017. http://dx.doi.org/10.1093/acprof:oso/9780190613501.003.0008.

Abstract:
We review the origin and evolution of the facial musculature of mammals and pay special attention to the complex relationships between facial musculature, color patterns, mobility, and social group size during the evolution of humans and other primates. In addition, we discuss the modularity of the human head and the asymmetrical use of facial expressions, as well as the evolvability of the muscles of facial expression, based on recent developmental and comparative studies and the use of a powerful new quantitative tool: anatomical network analysis. We emphasize the remarkable diversity of primate facial structures and the fact that the number of facial muscles present in our species is actually not as high, compared to many other mammals, as previously thought. The use of new tools, such as anatomical network analyses, should be further explored to compare the musculoskeletal and other features of humans across stages of development and with other animals, to enable a better understanding of the evolution of facial expressions.
7

Durán, Juan I., Rainer Reisenzein, and José-Miguel Fernández-Dols. Coherence Between Emotions and Facial Expressions. Oxford University Press, 2017. http://dx.doi.org/10.1093/acprof:oso/9780190613501.003.0007.

Abstract:
The phrase “facial expression of emotion” contains the implicit assumption that facial expressions co-occur with, and are a consequence of, experienced emotions. Is this assumption true, or more precisely, to what degree is it true? In other words, what is the degree of statistical covariation, or coherence, between emotions and facial expressions? In this chapter, we review empirical evidence from laboratory and field studies that speaks to this question, summarizing study results concerning expressions of emotions frequently considered “basic”: happiness-amusement, surprise, disgust, sadness, anger and fear. We provide overall and per-emotion mean correlations and proportions as coherence estimates, using meta-analytic methods.
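As a worked example of such a coherence estimate, per-study correlations can be pooled with Fisher's z transform, weighted by inverse variance; the r values and sample sizes below are made up for illustration.

```python
import numpy as np

r = np.array([0.45, 0.30, 0.55, 0.20])  # hypothetical study correlations
n = np.array([40, 60, 25, 80])          # hypothetical sample sizes

z = np.arctanh(r)                        # Fisher r-to-z transform
z_mean = np.average(z, weights=n - 3)    # inverse-variance weighting
print(f"pooled coherence estimate r = {np.tanh(z_mean):.3f}")
```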
8

Hallcrest, Judy Jacobs. Facial Expressions: Anatomy & Analysis, Index of Modern Authors & Subjects With Guide for Rapid Research. Abbe Pub Assn of Washington Dc, 1992.

9

Hallcrest, Judy Jacobs. Facial Expressions - Anatomy and Analysis: Index of Modern Authors and Subjects with Guide for Rapid Research. ABBE Publishers Association of Washington, D., 1991.


Book chapters on the topic "Facial expression analysis"

1

Kanade, Takeo. "Facial Expression Analysis." In Lecture Notes in Computer Science, 1. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11564386_1.

2

De la Torre, Fernando, and Jeffrey F. Cohn. "Facial Expression Analysis." In Visual Analysis of Humans, 377–409. London: Springer London, 2011. http://dx.doi.org/10.1007/978-0-85729-997-0_19.

3

Gong, Shaogang, and Tao Xiang. "Understanding Facial Expression." In Visual Analysis of Behaviour, 69–93. London: Springer London, 2011. http://dx.doi.org/10.1007/978-0-85729-670-2_4.

4

Bartlett, Marian Stewart. "Automated Facial Expression Analysis." In Face Image Analysis by Unsupervised Learning, 69–82. Boston, MA: Springer US, 2001. http://dx.doi.org/10.1007/978-1-4615-1637-8_4.

5

Valstar, Michel. "Automatic Facial Expression Analysis." In Understanding Facial Expressions in Communication, 143–72. New Delhi: Springer India, 2014. http://dx.doi.org/10.1007/978-81-322-1934-7_8.

6

Lekshmi, V. Praseeda, M. Sasikumar, Divya S. Vidyadharan, and S. Naveen. "Facial Expression Analysis Using PCA." In Lecture Notes in Electrical Engineering, 355–64. Dordrecht: Springer Netherlands, 2009. http://dx.doi.org/10.1007/978-90-481-2311-7_30.

7

Wang, Hao. "Facial Expression Synthesis and Analysis." In E-business and Telecommunications, 269–83. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-88653-2_20.

8

Cao, Jie, Hong Wang, Po Hu, and Junwei Miao. "PAD Model Based Facial Expression Analysis." In Advances in Visual Computing, 450–59. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-89646-3_44.

9

Tamminen, T., J. Kätsyri, M. Frydrych, and J. Lampinen. "Joint Modeling of Facial Expression and Shape from Video." In Image Analysis, 151–60. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11499145_17.

10

Maalej, Ahmed, Hedi Tabia, and Halim Benhabiles. "Dynamic 3D Facial Expression Recognition Using Robust Shape Features." In Image Analysis, 309–18. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-38886-6_30.


Conference papers on the topic "Facial expression analysis"

1

Park, Sungsoo, Jongju Shin, and Daijin Kim. "Facial expression analysis with facial expression deformation." In 2008 19th International Conference on Pattern Recognition (ICPR). IEEE, 2008. http://dx.doi.org/10.1109/icpr.2008.4761398.

2

Melaugh, Ryan, Nazmul Siddique, Sonya Coleman, and Pratheepan Yogarajah. "Facial Expression Recognition on partial facial sections." In 2019 11th International Symposium on Image and Signal Processing and Analysis (ISPA). IEEE, 2019. http://dx.doi.org/10.1109/ispa.2019.8868630.

3

Zhang, Zheng, Chi Fang, and Xiaoqing Ding. "Facial expression analysis across databases." In 2011 International Conference on Multimedia Technology (ICMT). IEEE, 2011. http://dx.doi.org/10.1109/icmt.2011.6001655.

4

Yadav, M. Raju, and P. Chandra Sekhar Reddy. "Survey Analysis on Facial Expression." In 2021 Fifth International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC). IEEE, 2021. http://dx.doi.org/10.1109/i-smac52330.2021.9640832.

5

Wardana, Aditya Yudha, Nana Ramadijanti, and Achmad Basuki. "Facial Expression Recognition System for Analysis of Facial Expression Changes when Singing." In 2018 International Electronics Symposium on Knowledge Creation and Intelligent Computing (IES-KCIC). IEEE, 2018. http://dx.doi.org/10.1109/kcic.2018.8628578.

6

Liu, Wei-feng, Ji-li Lu, Zeng-fu Wang, and Hua-jun Song. "An Expression Space Model for Facial Expression Analysis." In 2008 Congress on Image and Signal Processing. IEEE, 2008. http://dx.doi.org/10.1109/cisp.2008.216.

7

Shan, C., S. Gong, and P. W. McOwan. "Capturing Correlations Among Facial Parts for Facial Expression Analysis." In British Machine Vision Conference 2007. British Machine Vision Association, 2007. http://dx.doi.org/10.5244/c.21.51.

8

Vonikakis, Vassilios, and Stefan Winkler. "Identity-Invariant Facial Landmark Frontalization For Facial Expression Analysis." In 2020 IEEE International Conference on Image Processing (ICIP). IEEE, 2020. http://dx.doi.org/10.1109/icip40778.2020.9190989.

9

Dixit, Bharati A., and A. N. Gaikwad. "Statistical moments based facial expression analysis." In 2015 IEEE International Advance Computing Conference (IACC). IEEE, 2015. http://dx.doi.org/10.1109/iadcc.2015.7154768.

10

Lian, Zheng, Ya Li, Jianhua Tao, Jian Huang, and Mingyue Niu. "Region Based Robust Facial Expression Analysis." In 2018 First Asian Conference on Affective Computing and Intelligent Interaction (ACII Asia). IEEE, 2018. http://dx.doi.org/10.1109/aciiasia.2018.8470391.


Reports on the topic "Facial expression analysis"

1

Peschka-Daskalos, Patricia. An Intercultural Analysis of Differences in Appropriateness Ratings of Facial Expressions Between Japanese and American Subjects. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.6584.

2

Sklenar, Ihor. The newspaper «Christian Voice» (Munich) in the postwar period: history, thematic range of expression, leading authors and publicists. Ivan Franko National University of Lviv, February 2022. http://dx.doi.org/10.30970/vjo.2022.51.11393.

Abstract:
The article considers the history, thematic range of expression, and a number of the authors and publicists of the newspaper «Christian Voice» (published fortnightly). It has been published in Munich by nationally conscious groups of migrants since 1949 as a part of the «Ukrainian Christian Publishing House». The significance of this Ukrainian newspaper in post-Nazi Germany is only partly comprehended in the works of researchers of the diaspora press. Therefore, the purpose of this article is to supplement the scientific information about the «Christian Voice» in the postwar period; in particular, the yearbook for 1957 was chosen as the principal subject of analysis. In writing the article, we used the following methods: analysis, synthesis, content analysis, generalization and others. Our study thus covers the socio-political and religious context in which the «Christian Voice» was founded. The article also gives a concise overview of the titles of Ukrainian magazines in post-Nazi Germany in the 1940s and 1950s. The thematic analysis of the publications of 1957 showed the main trends of journalistic texts in the newspaper and the journalistic skills of its iconic authors and publicists (D. Buchynsky, M. Bradovych, S. Shah, etc.). The thematic range of the newspaper after 1959 was somewhat narrowed due to the change in its status, when it became the official newspaper of the UGCC in Germany. Two main thematic blocks of the newspaper have been distinguished: social and religious. Historians will find interesting factual material in the newspaper's publications about the life of Ukrainians in the diaspora. Historians of journalism can supplement the bibliographic apparatus with the journalistic and publicistic works of the authors in the postwar period and in subsequent years of publishing. Based upon the publications of the «Christian Voice» in different years, not only 1957, journalists can study the content and form of different genres, the linguistic peculiarities of the newspaper's articles, and so on.
3

Ismailova, L. Yu, S. V. Kosikov, V. S. Zaytsev, and I. O. Sleptsov. Educational computer game "The Adventures of the Gusarik", or the Basis of the Theory of the State and Law (version 1.0). SIB-Expertise, July 2022. http://dx.doi.org/10.12731/er0577.04072022.

Abstract:
The educational game is designed to impart new knowledge and test existing knowledge in one of the most important legal disciplines: the theory of state and law. The game allows players to test their abilities interactively by solving a large number of theoretical and practical questions. Students can work through new topics using numerous comments and check how well they have absorbed them. The game character's hints and facial expressions motivate the player to work carefully with the material and allow independent work on topics that caused difficulties in the control mode. The game's content complies with the state standard program in the specialty "Law." The main goal of the game is to help highlight theoretical legal structures in practical situations, to develop skills in the legal analysis of the texts of legal norms and law-enforcement documents, and thereby to increase the effectiveness of the application of law. In addition, the educational game introduces professional legal terminology in this field. The game "Theory of State and Law" can be useful for students of law universities and faculties, practicing lawyers, and anyone wishing to improve their qualifications in the field of law. Certain sections of the game will be useful for university courses in legal specialties.