Journal articles on the topic 'Facial expression analysis'

Consult the top 50 journal articles for your research on the topic 'Facial expression analysis.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Matsumoto, David, and Paul Ekman. "Facial expression analysis." Scholarpedia 3, no. 5 (2008): 4237. http://dx.doi.org/10.4249/scholarpedia.4237.

2

Kalburgi, Riya, Punit Solanki, Rounak Suthar, and Saurabh Suman. "Expression Analysis System." International Journal of Engineering and Advanced Technology 10, no. 3 (February 28, 2021): 13–15. http://dx.doi.org/10.35940/ijeat.c2128.0210321.

Abstract:
Expression is the most basic personality trait of an individual. Expressions, ubiquitous to humans from all cultures, can be pivotal in analyzing personality, which is not confined to boundaries. Analyzing the changes in an individual's expression can bolster the process of deriving his/her personality traits, underscoring paramount reactions like anger, happiness, sadness, and so on. This paper aims to exercise Neural Network algorithms to predict the personality traits of an individual from his/her facial expressions. A methodology to analyze the personality traits of the individual by periodic monitoring of the changes in facial expressions is presented. The proposed system analyzes the expressions by exploiting Neural Network strategies while constantly monitoring an individual under observation. This monitoring is done with the help of OpenCV, which captures the facial expression at intervals of 15 seconds. Thousands of images per expression are used to train the model to aptly distinguish between expressions using the prominent Neural Network methodologies of forward and backward propagation. The identified expression is then fed to a derivative system which plots a graph highlighting the changes in the expression. The graph acts as the crux of the proposed system. The project is important from the perspective of serving as an alternative to manual monitoring, which is prone to errors and subjective in nature.
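
As a concrete illustration of the sampling loop this abstract describes, here is a minimal sketch assuming a hypothetical classify_expression() wrapper around a trained classifier; it is not the authors' implementation:

```python
import time

import cv2

CAPTURE_INTERVAL_S = 15  # the abstract samples one frame every 15 seconds


def classify_expression(frame):
    """Placeholder for the trained neural-network classifier (assumed)."""
    return "neutral"


def monitor(camera_index=0, duration_s=300):
    cap = cv2.VideoCapture(camera_index)
    timeline = []  # (elapsed seconds, expression) pairs for the trend graph
    start = time.time()
    while time.time() - start < duration_s:
        ok, frame = cap.read()
        if ok:
            timeline.append((time.time() - start, classify_expression(frame)))
        time.sleep(CAPTURE_INTERVAL_S)  # wait until the next sampling point
    cap.release()
    return timeline
```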
3

Sebe, N., M. S. Lew, Y. Sun, I. Cohen, T. Gevers, and T. S. Huang. "Authentic facial expression analysis." Image and Vision Computing 25, no. 12 (December 2007): 1856–63. http://dx.doi.org/10.1016/j.imavis.2005.12.021.

4

Zulhijah Awang Jesemi, Dayang Nur, Hamimah Ujir, Irwandi Hipiny, and Sarah Flora Samson Juan. "The analysis of facial feature deformation using optical flow algorithm." Indonesian Journal of Electrical Engineering and Computer Science 15, no. 2 (August 1, 2019): 769. http://dx.doi.org/10.11591/ijeecs.v15.i2.pp769-777.

Abstract:
Facial features deform according to the intended facial expression. Specific facial features are associated with specific facial expressions, i.e. happiness chiefly deforms the mouth. This paper presents a study of facial feature deformation for each facial expression using an optical flow algorithm, with the face segmented into three different regions of interest. The deformation of facial features shows the relation between facial features and facial expressions. Based on the experiments, the deformations of the eyes and mouth are significant in all expressions except happy. For the happy expression, the cheeks and mouth are the significant regions. This work also suggests that facial features differ in how strongly they contribute to recognizing the intensity of different facial expressions. The maximum magnitude across all expressions is shown by the mouth for the surprise expression, at 9×10⁻⁴, while the minimum magnitude is shown by the mouth for the angry expression, at 0.4×10⁻⁴.
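
A rough sketch of measuring per-region deformation with dense optical flow is given below; the Farneback method and the illustrative region boxes are assumptions, not necessarily the exact algorithm or regions used in the paper:

```python
import cv2
import numpy as np


def region_flow_magnitudes(prev_gray, next_gray, regions):
    """regions: dict name -> (x, y, w, h) boxes, e.g. eyes, cheeks, mouth."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)  # per-pixel displacement magnitude
    return {name: float(mag[y:y + h, x:x + w].mean())
            for name, (x, y, w, h) in regions.items()}
```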
5

BUCIU, IOAN, and IOAN NAFORNITA. "FEATURE EXTRACTION THROUGH CROSS-PHASE CONGRUENCY FOR FACIAL EXPRESSION ANALYSIS." International Journal of Pattern Recognition and Artificial Intelligence 23, no. 03 (May 2009): 617–35. http://dx.doi.org/10.1142/s021800140900717x.

Abstract:
Human face analysis has attracted a large number of researchers from various fields, such as computer vision, image processing, neurophysiology or psychology. One of the particular aspects of human face analysis is encompassed by facial expression recognition task. A novel method based on phase congruency for extracting the facial features used in the facial expression classification procedure is developed. Considering a set of image samples comprising humans expressing various expressions, this new approach computes the phase congruency map between the samples. The analysis is performed in the frequency space where the similarity (or dissimilarity) between sample phases is measured to form discriminant features. The experiments were run using samples from two facial expression databases. To assess the method's performance, the technique is compared to the state-of-the art techniques utilized for classifying facial expressions, such as Principal Component Analysis (PCA), Independent Component Analysis (ICA), Linear Discriminant Analysis (LDA), and Gabor jets. The features extracted by the aforementioned techniques are further classified using two classifiers: a distance-based classifier and a Support Vector Machine-based classifier. Experiments reveal superior facial expression recognition performance for the proposed approach with respect to other techniques.
6

LEE, CHAN-SU, and DIMITRIS SAMARAS. "ANALYSIS AND CONTROL OF FACIAL EXPRESSIONS USING DECOMPOSABLE NONLINEAR GENERATIVE MODELS." International Journal of Pattern Recognition and Artificial Intelligence 28, no. 05 (July 31, 2014): 1456009. http://dx.doi.org/10.1142/s0218001414560096.

Abstract:
Facial expressions convey personal characteristics and subtle emotional states. This paper presents a new framework for modeling subtle facial motions of different people with different types of expressions from high-resolution facial expression tracking data to synthesize new stylized subtle facial expressions. A conceptual facial motion manifold is used for a unified representation of facial motion dynamics from three-dimensional (3D) high-resolution facial motions as well as from two-dimensional (2D) low-resolution facial motions. Variant subtle facial motions in different people with different expressions are modeled by nonlinear mappings from the embedded conceptual manifold to input facial motions using empirical kernel maps. We represent facial expressions by a factorized nonlinear generative model, which decomposes expression style factors and expression type factors from different people with multiple expressions. We also provide a mechanism to control the high-resolution facial motion model from low-resolution facial video sequence tracking and analysis. Using the decomposable generative model with a common motion manifold embedding, we can estimate parameters to control 3D high resolution facial expressions from 2D tracking results, which allows performance-driven control of high-resolution facial expressions.
7

Ujir, Hamimah, Irwandi Hipiny, and D. N.F. Awang Iskandar. "Facial Action Units Analysis using Rule-Based Algorithm." International Journal of Engineering & Technology 7, no. 3.20 (September 1, 2018): 284. http://dx.doi.org/10.14419/ijet.v7i3.20.19167.

Abstract:
Most works in quantifying facial deformation are based on action units (AUs) provided by the Facial Action Coding System (FACS), which describes facial expressions in terms of forty-six component movements. Each AU corresponds to the movements of individual facial muscles. This paper presents a rule-based approach to classify the AUs, which depends on certain facial features. This work only covers deformation of facial features based on posed Happy and Sad expressions obtained from the BU-4DFE database. Different studies refer to different combinations of AUs that form the Happy and Sad expressions. According to the FACS rules outlined in this work, an AU has more than one facial property that needs to be observed. The intensity comparison and analysis of the AUs involved in the Sad and Happy expressions are presented. Additionally, dynamic analysis of AUs is studied to determine the temporal segments of expressions, i.e. the duration of onset, apex, and offset. Our findings show that AU15, for the sad expression, and AU12, for the happy expression, show consistent facial feature deformation for all properties during the expression period. However, for AU1 and AU4, the intensity of their properties differs during the expression period.
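
The following toy rule, in the spirit of the rule-based AU checks discussed above, flags AU12 (lip corner puller) and AU15 (lip corner depressor) from landmark displacements; the landmark indices and thresholds are illustrative assumptions, not the paper's rules:

```python
import numpy as np


def detect_au12_au15(neutral, current, thresh=0.03):
    """neutral/current: (68, 2) arrays of normalised facial landmarks."""
    left, right = 48, 54  # mouth-corner indices in the common 68-point scheme
    width_gain = (np.linalg.norm(current[right] - current[left]) -
                  np.linalg.norm(neutral[right] - neutral[left]))
    corner_drop = float(np.mean(current[[left, right], 1] - neutral[[left, right], 1]))
    aus = []
    if width_gain > thresh:   # corners pulled outwards -> AU12 (associated with Happy)
        aus.append("AU12")
    if corner_drop > thresh:  # corners pulled downwards -> AU15 (associated with Sad)
        aus.append("AU15")
    return aus
```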
8

Park, Sung, Seong Won Lee, and Mincheol Whang. "The Analysis of Emotion Authenticity Based on Facial Micromovements." Sensors 21, no. 13 (July 5, 2021): 4616. http://dx.doi.org/10.3390/s21134616.

Abstract:
People tend to display fake expressions to conceal their true feelings. False expressions are observable by facial micromovements that occur for less than a second. Systems designed to recognize facial expressions (e.g., social robots, recognition systems for the blind, monitoring systems for drivers) may better understand the user’s intent by identifying the authenticity of the expression. The present study investigated the characteristics of real and fake facial expressions of representative emotions (happiness, contentment, anger, and sadness) in a two-dimensional emotion model. Participants viewed a series of visual stimuli designed to induce real or fake emotions and were signaled to produce a facial expression at a set time. From the participant’s expression data, feature variables (i.e., the degree and variance of movement, and vibration level) involving the facial micromovements at the onset of the expression were analyzed. The results indicated significant differences in the feature variables between the real and fake expression conditions. The differences varied according to facial regions as a function of emotions. This study provides appraisal criteria for identifying the authenticity of facial expressions that are applicable to future research and the design of emotion recognition systems.
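
A hedged sketch of the three feature variables named above (degree of movement, variance of movement, and vibration level), computed from a single landmark trajectory, follows; the exact definitions and the 3–7 Hz vibration band are assumptions made for illustration:

```python
import numpy as np


def micromovement_features(track, fps=30, vib_band=(3.0, 7.0)):
    """track: (T, 2) positions of one facial landmark over time."""
    step = np.linalg.norm(np.diff(track, axis=0), axis=1)  # frame-to-frame motion
    degree = float(step.mean())       # degree of movement
    variance = float(step.var())      # variance of movement
    spectrum = np.abs(np.fft.rfft(step - step.mean()))
    freqs = np.fft.rfftfreq(step.size, d=1.0 / fps)
    band = (freqs >= vib_band[0]) & (freqs <= vib_band[1])
    vibration = float(spectrum[band].sum() / max(spectrum.sum(), 1e-9))  # vibration level
    return degree, variance, vibration
```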
9

Jeyalaksshmi, S., and S. Prasanna. "Simultaneous evolutionary neural network based automated video based facial expression analysis." International Journal of Engineering & Technology 7, no. 1.1 (December 21, 2017): 125. http://dx.doi.org/10.14419/ijet.v7i1.1.9211.

Abstract:
In real-life scenarios, facial expressions and emotions are nothing but responses to the external and internal events of a human being. In Human Computer Interaction (HCI), recognition of the end user's expressions and emotions from video streaming plays a very important role. In such systems it is required to track the dynamic changes in human face movements quickly in order to deliver the required response. In real-time applications, Facial Expression Recognition (FER) is very helpful, for example in facial-expression-based physical fatigue detection such as driver fatigue detection to prevent accidents on the road. Face-expression-based physical fatigue analysis or detection is out of the scope of this work; instead, a Simultaneous Evolutionary Neural Network (SENN) classification scheme is proposed for recognising human emotion or expression. In this work, facial landmarks are first automatically detected and tracked in videos, and the face is detected using an enhanced AdaBoost algorithm with Haar features. Then, in order to describe facial expression modifications, geometric features are extracted together with the Local Binary Pattern (LBP), which improves detection accuracy and has a much lower dimensionality. With the aim of examining the temporal facial expression modifications, we apply SENN probabilistic classifiers, which examine the facial expressions in individual frames and then propagate the likelihoods over the course of the video to capture the temporal features of facial expressions such as gladness, sadness, anger, and fear. The experimental results show that the proposed SENN scheme attains better results than existing recognition schemes like the Time-Delay Neural Network with Support Vector Regression (TDNN-SVR) and SVR.
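
A minimal sketch of the front end described here (Haar-cascade face detection followed by an LBP texture histogram) is shown below; the SENN classifier itself is not reproduced, and the parameter choices are assumptions:

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")


def lbp_face_features(gray, points=8, radius=1):
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]  # take the first detected face
    lbp = local_binary_pattern(gray[y:y + h, x:x + w], points, radius, "uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist  # low-dimensional texture descriptor fed to the classifier
```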
10

Kulkarni, Praveen, and Rajesh T. M. "Analysis on techniques used to recognize and identifying the Human emotions." International Journal of Electrical and Computer Engineering (IJECE) 10, no. 3 (June 1, 2020): 3307. http://dx.doi.org/10.11591/ijece.v10i3.pp3307-3314.

Abstract:
Facial expression is a major area of non-verbal language in day-to-day communication. Statistical analysis shows that only 7 percent of a message in communication is conveyed verbally, while 55 percent is transmitted by facial expression. Emotional expression has been a research subject of physiology since Darwin's work on emotional expression in the 19th century. According to psychological theory, human emotion is classified majorly into six emotions: happiness, fear, anger, surprise, disgust, and sadness. Facial expressions, which involve the emotions, and the nature of speech play a foremost role in expressing these emotions. Thereafter, researchers developed a system based on the anatomy of the face, named the Facial Action Coding System (FACS), in 1970. Ever since the development of FACS there has been rapid progress of research in the domain of emotion recognition. This work is intended to give a thorough comparative analysis of the various techniques and methods that have been applied to recognize and identify human emotions. The results of this analysis will help to identify the proper and suitable techniques, algorithms, and methodologies for future research directions. In this paper, an extensive analysis of the various recognition techniques used to address the complexity of recognizing facial expressions is presented. This work will also help researchers and scholars to ease the problem of choosing techniques in the facial expression identification domain.
11

Jin, Bo, Yue Qu, Liang Zhang, and Zhan Gao. "Diagnosing Parkinson Disease Through Facial Expression Recognition: Video Analysis." Journal of Medical Internet Research 22, no. 7 (July 10, 2020): e18697. http://dx.doi.org/10.2196/18697.

Abstract:
Background The number of patients with neurological diseases is currently increasing annually, which presents tremendous challenges for both patients and doctors. With the advent of advanced information technology, digital medical care is gradually changing the medical ecology. Numerous people are exploring new ways to receive a consultation, track their diseases, and receive rehabilitation training in more convenient and efficient ways. In this paper, we explore the use of facial expression recognition via artificial intelligence to diagnose a typical neurological system disease, Parkinson disease (PD). Objective This study proposes methods to diagnose PD through facial expression recognition. Methods We collected videos of facial expressions of people with PD and matched controls. We used relative coordinates and positional jitter to extract facial expression features (facial expression amplitude and shaking of small facial muscle groups) from the key points returned by Face++. Algorithms from traditional machine learning and advanced deep learning were utilized to diagnose PD. Results The experimental results showed our models can achieve outstanding facial expression recognition ability for PD diagnosis. Applying a long short-term model neural network to the positions of the key features, precision and F1 values of 86% and 75%, respectively, can be reached. Further, utilizing a support vector machine algorithm for the facial expression amplitude features and shaking of the small facial muscle groups, an F1 value of 99% can be achieved. Conclusions This study contributes to the digital diagnosis of PD based on facial expression recognition. The disease diagnosis model was validated through our experiment. The results can help doctors understand the real-time dynamics of the disease and even conduct remote diagnosis.
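
As a hedged illustration of the SVM branch of this study, the sketch below cross-validates a PD-versus-control classifier on expression-amplitude and muscle-shaking features; the feature extraction from the Face++ key points is assumed to have been done already, and sklearn's SVC stands in for the authors' implementation:

```python
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC


def evaluate_pd_classifier(X, y):
    """X: (n_subjects, n_features) amplitude/shaking features; y: 1 = PD, 0 = control."""
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    return cross_val_score(model, X, y, cv=5, scoring="f1").mean()
```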
12

Leo, Marco, Pierluigi Carcagnì, Cosimo Distante, Pier Luigi Mazzeo, Paolo Spagnolo, Annalisa Levante, Serena Petrocchi, and Flavia Lecciso. "Computational Analysis of Deep Visual Data for Quantifying Facial Expression Production." Applied Sciences 9, no. 21 (October 25, 2019): 4542. http://dx.doi.org/10.3390/app9214542.

Abstract:
The computational analysis of facial expressions is an emerging research topic that could overcome the limitations of human perception and get quick and objective outcomes in the assessment of neurodevelopmental disorders (e.g., Autism Spectrum Disorders, ASD). Unfortunately, there have been only a few attempts to quantify facial expression production and most of the scientific literature aims at the easier task of recognizing if either a facial expression is present or not. Some attempts to face this challenging task exist but they do not provide a comprehensive study based on the comparison between human and automatic outcomes in quantifying children’s ability to produce basic emotions. Furthermore, these works do not exploit the latest solutions in computer vision and machine learning. Finally, they generally focus only on a homogeneous (in terms of cognitive capabilities) group of individuals. To fill this gap, in this paper some advanced computer vision and machine learning strategies are integrated into a framework aimed to computationally analyze how both ASD and typically developing children produce facial expressions. The framework locates and tracks a number of landmarks (virtual electromyography sensors) with the aim of monitoring facial muscle movements involved in facial expression production. The output of these virtual sensors is then fused to model the individual ability to produce facial expressions. Gathered computational outcomes have been correlated with the evaluation provided by psychologists and evidence has been given that shows how the proposed framework could be effectively exploited to deeply analyze the emotional competence of ASD children to produce facial expressions.
13

Oh, SeungJun, and Dong-Keun Kim. "Comparative Analysis of Emotion Classification Based on Facial Expression and Physiological Signals Using Deep Learning." Applied Sciences 12, no. 3 (January 26, 2022): 1286. http://dx.doi.org/10.3390/app12031286.

Abstract:
This study aimed to classify emotion based on facial expression and physiological signals using deep learning and to compare the analyzed results. We asked 53 subjects to make facial expressions, expressing four types of emotion. Next, the emotion-inducing video was watched for 1 min, and the physiological signals were obtained. We defined four emotions as positive and negative emotions and designed three types of deep-learning models that can classify emotions. Each model used facial expressions and physiological signals as inputs, and a model in which these two types of input were applied simultaneously was also constructed. The accuracy of the model was 81.54% when physiological signals were used, 99.9% when facial expressions were used, and 86.2% when both were used. Constructing a deep-learning model with only facial expressions showed good performance. The results of this study confirm that the best approach for classifying emotion is using only facial expressions rather than data from multiple inputs. However, this is an opinion presented only in terms of accuracy without considering the computational cost, and it is suggested that physiological signals and multiple inputs be used according to the situation and research purpose.
14

Laskar, Takrim Ul Islam, and Parismita Sarma. "Facial Landmark Detection for Expression Analysis." International Journal of Computer Sciences and Engineering 7, no. 5 (May 31, 2019): 1617–22. http://dx.doi.org/10.26438/ijcse/v7i5.16171622.

15

Rufenacht, Dominic, and Appu Shaji. "Customized Facial Expression Analysis in Video." SMPTE Motion Imaging Journal 131, no. 3 (April 2022): 17–24. http://dx.doi.org/10.5594/jmi.2022.3151144.

16

Patil, Prof P. A. "Facial Expression Recognition For Mood Analysis." International Journal for Research in Applied Science and Engineering Technology 7, no. 4 (April 30, 2019): 1455–57. http://dx.doi.org/10.22214/ijraset.2019.4263.

17

C P, Sumathi. "Automatic Facial Expression Analysis A Survey." International Journal of Computer Science & Engineering Survey 3, no. 6 (December 31, 2012): 47–59. http://dx.doi.org/10.5121/ijcses.2012.3604.

18

Saito, Kei, Michio Isono, Masahiro Ishikawa, Makoto Kawamoto, Hiroaki Miyashita, Koh Yoshikawa, Tatsuyuki Yamamoto, and Kiyotaka Murata. "Three-Dimensional Analysis of Facial Expression." Otology & Neurotology 23, Sup 1 (2002): S76. http://dx.doi.org/10.1097/00129492-200200001-00198.

19

Fasel, B., and Juergen Luettin. "Automatic facial expression analysis: a survey." Pattern Recognition 36, no. 1 (January 2003): 259–75. http://dx.doi.org/10.1016/s0031-3203(02)00052-3.

20

Zhang, Ligang, Brijesh Verma, Dian Tjondronegoro, and Vinod Chandran. "Facial Expression Analysis under Partial Occlusion." ACM Computing Surveys 51, no. 2 (June 2, 2018): 1–49. http://dx.doi.org/10.1145/3158369.

21

Chang, Ya, Changbo Hu, Rogerio Feris, and Matthew Turk. "Manifold based analysis of facial expression." Image and Vision Computing 24, no. 6 (June 2006): 605–14. http://dx.doi.org/10.1016/j.imavis.2005.08.006.

22

Pant, Dibakar Raj, and Rolisha Sthapit. "Analysis of Micro Facial Expression by Machine and Deep Learning Methods: Haar, CNN, and RNN." Journal of the Institute of Engineering 16, no. 1 (April 12, 2021): 95–101. http://dx.doi.org/10.3126/jie.v16i1.36562.

Abstract:
Facial expressions are due to the actions of the facial muscles located at different facial regions. These expressions are of two types: macro and micro expressions. The second one is more important in computer vision. Analysis of micro expressions, categorized as disgust, happiness, anger, sadness, surprise, contempt, and fear, is challenging because of very fast and subtle facial movements. This article presents one machine learning method, Haar, and two deep learning methods, Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN), to perform micro-facial expression analysis. First, a Haar Cascade Classifier is used to detect the face as a pre-image-processing step. Secondly, the detected faces are passed through a series of Convolutional Neural Network (CNN) layers for feature extraction. Thirdly, the Recurrent Neural Network (RNN) classifies micro facial expressions. Two data sets are used for training and testing of the proposed method: the Chinese Academy of Sciences Micro-Expression II (CASME II) and the Spontaneous Actions and Micro-Movements (SAMM) database. The test accuracies on SAMM and CASME II are 84.76% and 87%, respectively. In addition, the distinction between micro facial expressions and non-micro facial expressions is analyzed by the ROC curve.
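
A compact sketch of the frame-wise CNN feeding a recurrent classifier, as described above, is given below in Keras; the layer sizes, sequence length, and seven-class output are illustrative assumptions rather than the paper's configuration:

```python
from tensorflow.keras import layers, models


def build_micro_expression_model(frames=32, size=64, n_classes=7):
    # Frame-wise convolutional feature extractor
    cnn = models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=(size, size, 1)),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
    ])
    # Recurrent classifier over the sequence of frame features
    model = models.Sequential([
        layers.TimeDistributed(cnn, input_shape=(frames, size, size, 1)),
        layers.LSTM(64),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```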
23

Johnston, D. J., D. T. Millett, A. F. Ayoub, and M. Bock. "Are Facial Expressions Reproducible?" Cleft Palate-Craniofacial Journal 40, no. 3 (May 2003): 291–96. http://dx.doi.org/10.1597/1545-1569_2003_040_0291_afer_2.0.co_2.

Abstract:
Objectives To determine the extent of reproducibility of five facial expressions. Design Thirty healthy Caucasian volunteers (15 males, 15 females) aged 21 to 30 years had 20 landmarks highlighted on the face with a fine eyeliner pencil. Subjects were asked to perform a sequence of five facial expressions that were captured by a three-dimensional camera system. Each expression was repeated after 15 minutes to investigate intrasession expression reproducibility. To investigate intersession expression reproducibility, each subject returned 2 weeks after the first session. A single operator identified 3-dimensional coordinate values of each landmark. A partial ordinary procrustes analysis was used to adjust for differences in head posture between similar expressions. Statistical analysis was undertaken using analysis of variance (linear mixed effects model). Results Intrasession expression reproducibility was least between cheek puffs (1.12 mm) and greatest between rest positions (0.74 mm). The reproducibility of individual landmarks was expression specific. Except for the lip purse, the reproducibility of facial expressions was not statistically different within each of the two sessions. Rest position was most reproducible, followed by lip purse, maximal smile, natural smile, and cheek puff. Subjects did not perform expressions with the same degree of symmetry on each occasion. Female subjects demonstrated significantly better reproducibility with regard to the maximal smile than males (p = .036). Conclusions Under standardized conditions, intrasession expression reproducibility was high. Variation in expression reproducibility between sessions was minimal. The extent of reproducibility is expression specific. Differences in expression reproducibility exist between males and females.
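
The alignment-then-distance step described in this abstract can be sketched as follows; SciPy's full Procrustes (which also normalises scale, so distances come out in normalised units rather than millimetres) is used here as a stand-in for the partial ordinary Procrustes analysis applied in the paper:

```python
import numpy as np
from scipy.spatial import procrustes


def expression_reproducibility(landmarks_a, landmarks_b):
    """landmarks_*: (20, 3) coordinates of the marked landmarks for two repeats."""
    aligned_a, aligned_b, _ = procrustes(landmarks_a, landmarks_b)
    # Mean per-landmark distance after alignment (normalised units, not mm)
    return float(np.linalg.norm(aligned_a - aligned_b, axis=1).mean())
```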
24

Xu, Chao, Zhi Yong Feng, and Yu Zhang. "Context-Aware Model Based Facial Expression Nets Analysis." Applied Mechanics and Materials 130-134 (October 2011): 3173–76. http://dx.doi.org/10.4028/www.scientific.net/amm.130-134.3173.

Abstract:
Facial expression analysis is important in Human-Computer Interaction, and improving the accuracy and objectivity of the analysis is a key concern. In this paper, a novel approach to a context-aware facial expression nets model is proposed. According to the interactive environment's characteristics and the relationship between context factors and facial expression features, the CFEN model is constructed and applied to compute and analyze a participant's realistic facial expression in the context environment. This makes it possible to achieve knowledge representation and reasoning analysis of facial expressions. An experiment is designed and conducted to verify the theoretical model and approach proposed in this paper.
25

Dornaika, Fadi, Abdelmalik Moujahid, and Bogdan Raducanu. "Facial expression recognition using tracked facial actions: Classifier performance analysis." Engineering Applications of Artificial Intelligence 26, no. 1 (January 2013): 467–77. http://dx.doi.org/10.1016/j.engappai.2012.09.002.

26

Triyanti, Vivi, Yassierli Yassierli, and Hardianto Iridiastadi. "Basic Emotion Recogniton using Automatic Facial Expression Analysis Software." Jurnal Optimasi Sistem Industri 18, no. 1 (May 16, 2019): 55. http://dx.doi.org/10.25077/josi.v18.n1.p55-64.2019.

Abstract:
Facial expression has been proven to show a person's emotions, including the 6 basic human emotions, namely happy, sad, surprise, disgusted, angry, and fear. Currently, the recognition of basic emotions is applied using automatic facial expression analysis software. In fact, not all emotions are performed with the same expressions. This study aims to analyze whether the six basic human emotions can be recognized by software. Ten subjects were asked to spontaneously show the expressions of the six basic emotions sequentially. Subjects were not given instructions on what the standard expression of each basic emotion looks like. The results show that only happy expressions can be consistently and clearly identified by the software, while sad expressions are almost unrecognizable. On the other hand, surprise expressions tend to be recognized as mixed emotions of surprise and happiness. There are two emotions that are difficult for the subjects to express, namely fear and anger. The subjects' interpretations of these two emotions vary widely and tend to be unrecognizable by software. The conclusion of this study is that the way a person shows his or her emotions varies. Although there are some similarities in expression, it cannot be proven that all expressions of basic human emotions can be generalized. The further implications of this research need additional discussion.
27

Kawamura, Satoru, Masashi Komori, and Yusuke Miyamoto. "Smiling Reduces Masculinity: Principal Component Analysis Applied to Facial Images." Perception 37, no. 11 (January 1, 2008): 1637–48. http://dx.doi.org/10.1068/p5811.

Abstract:
We examined the effect of facial expression on the assignment of gender to facial images. A computational analysis of the facial images was applied to examine whether physical aspects of the face itself induced this effect. Thirty-six observers rated the degree of masculinity of the faces of 48 men, and the degree of femininity of the faces of 48 women. Half of the faces had a neutral facial expression, and the other half was smiling. Smiling significantly reduced the perceived masculinity of men's faces, especially for male observers, whereas no effect of smiling on femininity ratings was obtained for women's faces. A principal component analysis was conducted on the matrix of pixel luminance values for each facial image × all the images. The third principle component explained a relatively high proportion of the variance of both facial expressions and gender of face. These results suggest that the effect of smiling on the assignment of gender is caused, at least in part, by the physical relationship between facial expression and face gender.
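
A brief sketch of the computational analysis described above (PCA over the pixel-luminance matrix of the face images, followed by inspection of a chosen component, e.g. the third) might look like this; it is an illustrative reconstruction, not the authors' code:

```python
import numpy as np
from sklearn.decomposition import PCA


def face_pca_scores(images, n_components=10):
    """images: equally sized grayscale face images (2-D arrays), one per photograph."""
    X = np.stack([img.ravel().astype(float) for img in images])  # one row per image
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(X)
    # e.g. relate scores[:, 2] (the third component) to expression and gender labels
    return scores, pca.explained_variance_ratio_
```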
28

Huang, Yanhui, Xing Zhang, Yangyu Fan, Lijun Yin, Lee Seversky, James Allen, Tao Lei, and Weijun Dong. "Reshaping 3D facial scans for facial appearance modeling and 3D facial expression analysis." Image and Vision Computing 30, no. 10 (October 2012): 750–61. http://dx.doi.org/10.1016/j.imavis.2011.12.008.

29

M Gowda, Shashank, and H. N. Suresh. "Facial Expression Analysis and Estimation Based on Facial Salient Points and Action Unit (AUs)." International Journal of Electrical and Electronics Research 10, no. 1 (March 30, 2022): 7–17. http://dx.doi.org/10.37391/ijeer.100102.

Abstract:
Humans use their facial expressions as one of the most effective, quick, and natural ways to convey their feelings and intentions to others. This research presents an analysis of human facial structure and its components using Facial Action Units (AUs) and geometric structures for identifying human facial expressions. The approach considers facial components such as the nose, mouth, eyes, and eyebrows for FER. Nostril contours such as the left lower tip, right lower tip, and centre tip are considered salient points of the nose. Various salient points for the mouth are extracted from the left and right end points and the upper and lower lip midpoints, along with the lip curve. These salient points are extracted for all facial expressions of the same subject, with the neutral face as reference. The geometric structure of the neutral face is mapped against those of the other expression faces. The deformation is estimated using the Euclidean distance. Classification algorithms such as LibSVM, MLP, and RF achieved a classification accuracy of 86.56% on average. The findings of the experiments show that the extraction of picture characteristics is computationally efficient and gives promising outcomes.
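
The deformation features and classifier comparison described above can be sketched as follows; sklearn's SVC, MLPClassifier, and RandomForestClassifier stand in for LibSVM, MLP, and RF, and the cross-validation setup is an assumption:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC


def deformation_features(neutral_pts, expression_pts):
    """Euclidean displacement of each salient point relative to the neutral face."""
    return np.linalg.norm(expression_pts - neutral_pts, axis=1)


def compare_classifiers(X, y):
    models = {"SVM": SVC(), "MLP": MLPClassifier(max_iter=2000),
              "RF": RandomForestClassifier()}
    return {name: cross_val_score(m, X, y, cv=5).mean() for name, m in models.items()}
```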
30

OGOSHI, Yasuhiro, Sakiko OGOSHI, Tomohiro TAKEZAWA, and Yoshinori MITSUHASHI. "Facial Electromyogram (FEMG) Analysis of Perception and Rendering of Facial Expression." Transactions of Japan Society of Kansei Engineering 17, no. 2 (2017): 243–49. http://dx.doi.org/10.5057/jjske.tjske-d-17-00012.

31

Pantic, M., and L. J. M. Rothkrantz. "Facial Action Recognition for Facial Expression Analysis From Static Face Images." IEEE Transactions on Systems, Man and Cybernetics, Part B (Cybernetics) 34, no. 3 (June 2004): 1449–61. http://dx.doi.org/10.1109/tsmcb.2004.825931.

32

Maalej, Ahmed, Boulbaba Ben Amor, Mohamed Daoudi, Anuj Srivastava, and Stefano Berretti. "Shape analysis of local facial patches for 3D facial expression recognition." Pattern Recognition 44, no. 8 (August 2011): 1581–89. http://dx.doi.org/10.1016/j.patcog.2011.02.012.

33

Kotsia, Irene, Ioan Buciu, and Ioannis Pitas. "An analysis of facial expression recognition under partial facial image occlusion." Image and Vision Computing 26, no. 7 (July 2008): 1052–67. http://dx.doi.org/10.1016/j.imavis.2007.11.004.

34

Pantic, Maja. "Machine analysis of facial behaviour: naturalistic and dynamic behaviour." Philosophical Transactions of the Royal Society B: Biological Sciences 364, no. 1535 (December 12, 2009): 3505–13. http://dx.doi.org/10.1098/rstb.2009.0135.

Abstract:
This article introduces recent advances in the machine analysis of facial expressions. It describes the problem space, surveys the problem domain and examines the state of the art. Two recent research topics are discussed with particular attention: analysis of facial dynamics and analysis of naturalistic (spontaneously displayed) facial behaviour. Scientific and engineering challenges in the field in general, and in these specific subproblem areas in particular, are discussed and recommendations for accomplishing a better facial expression measurement technology are outlined.
35

Salinas, Freddy Alejandro Castro, Geovanny Genaro Reivan Ortiz, and Pedro Carlos Martínez Suarez. "Recognition of emotions through facial expression analysis." South Florida Journal of Development 2, no. 2 (May 17, 2021): 2102–18. http://dx.doi.org/10.46932/sfjdv2n2-076.

Abstract:
The possibility of recognizing what emotion one of our peers is experiencing has been the subject of study by various researchers over the years, Paul Ekman being the one who has delved most deeply into this subject, the most viable and simple way to achieve this would be through the analysis of people's facial expressions. The search for information was carried out using rigorous exclusion criteria such as studies corresponding to grizzly data and letters to the editor, and inclusion criteria such as studies published only in high impact journals such as PubMed, Elsevier, Taylor & Francis, ScienceDirect, APA PsycNet and Springer, PRISMA guidelines and AMSTAR check-list were used. The main objective of this systematic review was to determine whether there is sufficient scientific literature evidence to clarify whether it is possible to accurately identify the six basic universal emotions "happiness, surprise, sadness, anger, fear and disgust" proposed by Paul Ekman through facial expressions. After the analysis of the articles collected and based on the main findings, it is concluded that the recognition of emotions through facial expressions is a subject that still needs to be studied in greater depth, as suggested by the results obtained.
36

Tobitani, Kensuke, Kunihito Kato, and Kazuhiko Yamamoto. "Analysis of Facial Expression by Taste Stimulation." IEEJ Transactions on Industry Applications 131, no. 4 (2011): 586–91. http://dx.doi.org/10.1541/ieejias.131.586.

37

Asada, Taro, Yasunari Yoshitomi, Airi Tsuji, Ryota Kato, Masayoshi Tabuse, Noriaki Kuwahara, and Jin Narumoto. "Facial Expression Analysis while Using Video Phone." Journal of Robotics, Networking and Artificial Life 2, no. 4 (2016): 258. http://dx.doi.org/10.2991/jrnal.2016.2.4.12.

38

Shbib, Reda, and Shikun Zhou. "Facial Expression Analysis using Active Shape Model." International Journal of Signal Processing, Image Processing and Pattern Recognition 8, no. 1 (January 31, 2015): 9–22. http://dx.doi.org/10.14257/ijsip.2015.8.1.02.

39

Tian, Y. I., T. Kanade, and J. F. Cohn. "Recognizing action units for facial expression analysis." IEEE Transactions on Pattern Analysis and Machine Intelligence 23, no. 2 (2001): 97–115. http://dx.doi.org/10.1109/34.908962.

40

Jain, V., E. Mavridou, J. L. Crowley, and A. Lux. "Facial expression analysis and the affect space." Pattern Recognition and Image Analysis 25, no. 3 (July 2015): 430–36. http://dx.doi.org/10.1134/s1054661815030086.

41

Ding, Xiaoyu, Wen-Sheng Chu, Fernando De la Torre, Jeffery F. Cohn, and Qiao Wang. "Cascade of Tasks for facial expression analysis." Image and Vision Computing 51 (July 2016): 36–48. http://dx.doi.org/10.1016/j.imavis.2016.03.008.

42

Dahmane, Mohamed, and Jean Meunier. "Prototype-Based Modeling for Facial Expression Analysis." IEEE Transactions on Multimedia 16, no. 6 (October 2014): 1574–84. http://dx.doi.org/10.1109/tmm.2014.2321113.

43

Bondarenko, Yakov A., and Galina Ya Menshikova. "EXPLORING ANALYTICAL AND HOLISTIC PROCESSING IN FACIAL EXPRESSION RECOGNITION." Moscow University Psychology Bulletin, no. 2 (2020): 103–40. http://dx.doi.org/10.11621/vsp.2020.02.06.

Abstract:
Background. The study explores two main processes of perception of facial expression: analytical (perception based on individual facial features) and holistic (holistic and non-additive perception of all features). The relative contribution of each process to facial expression recognition is still an open question. Objective. To identify the role of holistic and analytical mechanisms in the process of facial expression recognition. Methods. A method was developed and tested for studying analytical and holistic processes in the task of evaluating subjective differences of expressions, using composite and inverted facial images. A distinctive feature of the work is the use of a multidimensional scaling method, by which a judgment of the contribution of holistic and analytical processes to the perception of facial expressions is based on the analysis of the subjective space of the similarity of expressions obtained when presenting upright and inverted faces. Results. It was shown, first, that when perceiving upright faces, a characteristic clustering of expressions is observed in the subjective space of similarities of expression, which we interpret as a predominance of holistic processes; second, by inversion of the face, there is a change in the spatial configuration of expressions that may reflect a strengthening of analytical processes; in general, the method of multidimensional scaling has proven its effectiveness in solving the problem of the relation between holistic and analytical processes in recognition of facial expressions. Conclusion. The analysis of subjective spaces of the similarity of emotional faces is productive for the study of the ratio of analytical and holistic processes in the recognition of facial expressions.
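
The multidimensional-scaling step described in this abstract can be sketched as below, embedding a matrix of pairwise dissimilarity ratings of expressions into a two-dimensional subjective space (one embedding for upright faces and one for inverted faces would then be compared); this is an illustrative reconstruction, not the authors' pipeline:

```python
from sklearn.manifold import MDS


def subjective_space(dissimilarity, seed=0):
    """dissimilarity: (n_expressions, n_expressions) symmetric rating matrix."""
    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=seed)
    # Compare the configuration for upright faces with that for inverted faces
    return mds.fit_transform(dissimilarity)
```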
44

CHOI, JAE-YOUNG, TAEG-KEUN WHANGBO, YOUNG-GYU YANG, MURLIKRISHNA VISWANATHAN, and NAK-BIN KIM. "POSE-EXPRESSION NORMALIZATION FOR FACE RECOGNITION USING CONNECTED COMPONENTS ANALYSIS." International Journal of Pattern Recognition and Artificial Intelligence 20, no. 06 (September 2006): 869–81. http://dx.doi.org/10.1142/s0218001406005010.

Abstract:
Accurate measurement of poses and expressions can increase the efficiency of recognition systems by avoiding the recognition of spurious faces. This paper presents a novel and robust pose-expression-invariant face recognition method in order to improve on existing face recognition techniques. First, we apply the TSL color model for detecting the facial region and estimate the X-Y-Z vector of the face using connected components analysis. Second, the input face is mapped by a deformable 3D facial model. Third, the mapped face is transformed to a frontal face appropriate for face recognition using the estimated pose vector and the action units of the expression. Finally, the damaged regions which occur during the process of normalization are reconstructed using PCA. Several empirical tests are used to validate the face detection model and the method for estimating facial poses and expression. In addition, the tests suggest that the recognition rate is greatly boosted through the normalization of pose and expression.
45

Syefrida Yulina and Mona Elviyenti. "An Exploratory Data Analysis for Synchronous Online Learning Based on AFEA Digital Images." Jurnal Nasional Teknik Elektro dan Teknologi Informasi 11, no. 2 (May 30, 2022): 114–20. http://dx.doi.org/10.22146/jnteti.v11i2.3867.

Abstract:
The spread of COVID-19 throughout the world has affected the education sector. In some higher education institution, such as Polytechnic Caltex Riau (PCR), it is mandatory for students to participate in synchronous or asynchronous learning activities via virtual classroom. Synchronous online learning is usually supported by video conferencing media such as Google Meeting or Zoom Meeting. The communication between lecturers and students is captured as an image as evidence of students’ interaction and participation in certain learning subjects. These images can provide information for lecturers in determining students’ internal feelings and measuring students’ interest through facial emotions. Taking this reason into account, the current research aims to analyze the emotions detected in facial expression through images using automatic facial expression analysis (AFEA) and exploratory data analysis (EDA), then visualize the data to determine the possible solution to improve the educational process’ sustainability. The AFEA steps applied were face acquisition to detect facial parts in an image, facial data extraction and representation to process feature extraction on the face, and facial expression recognition to classify faces into emotional expressions. Thus, this paper presents the results obtained from applying machine learning algorithms to classify facial expressions into happy and unhappy emotions with mean values of 5.58 and 2.70, respectively. The data were taken from the second semester of 2020/2021 academic year with 1,206 images. The result highlighted the fact that students showed the facial emotion based on the lecture types, hours, departments, and classes. It indicates that there are, in fact, several factors contributing to the variances of students’ facial emotions classified in synchronous online learning.
46

Alamsyah, Natasya Evelyn. "Multimodal Analysis: Revealing Bayu Skak’s Frustration in “Valentine Janc#k” Video." K@ta Kita 9, no. 1 (March 24, 2021): 44–52. http://dx.doi.org/10.9744/katakita.9.1.44-52.

Abstract:
This qualitative study aimed to understand the meaning of an expression, especially Bayu's expressions in the "Valentine Janc#k" video. Expression is something that cannot be separated from a conversation. An expression has various meanings; sometimes the same expression can have two or even more meanings. To get the accurate meaning, all aspects must work together, such as facial expressions, gesture, and also the choice of words used. This is just like what the writer found in the "Valentine Janc#k" video, where Bayu shows an expression like being angry, which the writer then follows up to get the accurate meaning. In doing so, the writer uses multimodal theory, which focuses on semiotic modes such as linguistic modes, gestural modes, and visual modes. The writer found that all of the facial expressions, gestures, and the setting of the place analyzed in the video show signs of expression that lead to frustration. When talking about Valentine, Bayu shows an annoyed and unpleasant facial expression, the body movements shown by Bayu also show disinterest in the topic of conversation, and the messy viewpoint describes Bayu's messed-up feelings about Valentine. The choice of words Bayu uses against Valentine (swear words) also plays a big role in showing the frustration. Keywords: multimodal, semiotic modes, linguistic modes, gestural modes, visual modes
47

Zhang, Yu, and Xiao Jiao Wang. "Analysis and Research of Humanoid Robot Facial Expression Behavior Simulations." Advanced Materials Research 645 (January 2013): 239–42. http://dx.doi.org/10.4028/www.scientific.net/amr.645.239.

Abstract:
By analyzing the motion of the eyes, eyebrows, mouth, and lower jaw in typical facial expressions, the motion range of each organ is obtained. Based on the humanoid head mechanism design, a robot model is created with existing software, which blends the head mechanism model and the facial elastomer model. Four typical facial expressions of the humanoid robot (happiness, sadness, surprise, anger) are simulated and analyzed using the finite element method, and the degree to which the facial expressions are realized under different displacement loads is discussed. This provides data for the further design and development of the humanoid robot.
48

Malawski, Filip. "Acquisition of databases for facial analysis." Challenges of Modern Technology 7, no. 3 (September 29, 2016): 3–7. http://dx.doi.org/10.5604/01.3001.0009.5442.

Abstract:
This article describes guidelines and recommendations for acquisition of databases for facial analysis. New devices and methods for both face recognition and facial expression recognition are constantly developed. In order to evaluate these devices and methods, dedicated datasets are recorded. Acquisition of a database for facial analysis is not an easy task and requires taking into account multiple issues. Based on our experience with recording databases for facial expression recognition, we provide guidelines regarding the acquisition process. Multiple aspects of such process are discussed in this work, namely selection of sensors and data streams, design and structure of the database, technical aspects, acquisition conditions and design of the user interface. Recommendations how to address these aspects are provided and justified. An acquisition software, designed according to these guidelines, is also discussed. The software was used for recording an extended version of our previous facial expression recognition database and proved to both ensure correct data and be convenient for the recorded subjects.
49

Owusu, Ebenezer, Jacqueline Asor Kumi, and Justice Kwame Appati. "On Facial Expression Recognition Benchmarks." Applied Computational Intelligence and Soft Computing 2021 (September 17, 2021): 1–20. http://dx.doi.org/10.1155/2021/9917246.

Abstract:
Facial expression is an important form of nonverbal communication, as it is noted that 55% of what humans communicate is expressed in facial expressions. There are several applications of facial expressions in diverse fields including medicine, security, gaming, and even business enterprises. Thus, currently, automatic facial expression recognition is a hotbed research area that attracts lots of grants, and there is therefore a need to understand the trends very well. This study, as a result, aims to review selected published works in the domain of study and conduct valuable analysis to determine the most common and useful algorithms employed in the study. We selected published works from 2010 to 2021 and extracted, analyzed, and summarized the findings based on the most used techniques in feature extraction, feature selection, validation, databases, and classification. The result of the study indicates strongly that local binary pattern (LBP), principal component analysis (PCA), support vector machine (SVM), CK+, and 10-fold cross-validation are the most widely used feature extraction, feature selection, classifier, database, and validation methods, respectively. Therefore, in line with our findings, this study provides recommendations for research specifically for new researchers with little or no background as to which methods they can employ and strive to improve.
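
The most common pipeline identified by this review (LBP features, PCA, an SVM classifier, and 10-fold cross-validation, typically on CK+) can be assembled as the hedged sketch below; the parameter values are assumptions:

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC


def lbp_histogram(gray, points=8, radius=1):
    lbp = local_binary_pattern(gray, points, radius, "uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist


def benchmark(face_images, labels):
    """face_images: aligned grayscale faces (e.g. from CK+); labels: expression ids."""
    X = np.array([lbp_histogram(img) for img in face_images])
    pipeline = make_pipeline(PCA(n_components=8), SVC(kernel="rbf"))
    return cross_val_score(pipeline, X, labels, cv=10).mean()  # 10-fold accuracy
```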
50

Kinchella, Jade, and Kun Guo. "Facial Expression Ambiguity and Face Image Quality Affect Differently on Expression Interpretation Bias." Perception 50, no. 4 (March 12, 2021): 328–42. http://dx.doi.org/10.1177/03010066211000270.

Abstract:
We often show an invariant or comparable recognition performance for perceiving prototypical facial expressions, such as happiness and anger, under different viewing settings. However, it is unclear to what extent the categorisation of ambiguous expressions and associated interpretation bias are invariant in degraded viewing conditions. In this exploratory eye-tracking study, we systematically manipulated both facial expression ambiguity (via morphing happy and angry expressions in different proportions) and face image clarity/quality (via manipulating image resolution) to measure participants’ expression categorisation performance, perceived expression intensity, and associated face-viewing gaze distribution. Our analysis revealed that increasing facial expression ambiguity and decreasing face image quality induced the opposite direction of expression interpretation bias (negativity vs. positivity bias, or increased anger vs. increased happiness categorisation), the same direction of deterioration impact on rating expression intensity, and qualitatively different influence on face-viewing gaze allocation (decreased gaze at eyes but increased gaze at mouth vs. stronger central fixation bias). These novel findings suggest that in comparison with prototypical facial expressions, our visual system has less perceptual tolerance in processing ambiguous expressions which are subject to viewing condition-dependent interpretation bias.