Academic literature on the topic 'Facial expression understanding'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Facial expression understanding.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Facial expression understanding"

1

Calder, Andrew J., and Andrew W. Young. "Understanding the recognition of facial identity and facial expression." Nature Reviews Neuroscience 6, no. 8 (August 2005): 641–51. http://dx.doi.org/10.1038/nrn1724.

2

Lisetti, Christine L., and Diane J. Schiano. "Automatic facial expression interpretation." Facial Information Processing 8, no. 1 (May 17, 2000): 185–235. http://dx.doi.org/10.1075/pc.8.1.09lis.

Abstract:
We discuss here one of our projects, aimed at developing an automatic facial expression interpreter, mainly in terms of signaled emotions. We present some of the relevant findings on facial expressions from cognitive science and psychology that can be understood by and be useful to researchers in Human-Computer Interaction and Artificial Intelligence. We then give an overview of HCI applications involving automated facial expression recognition, survey some of the latest progress achieved in this area by various approaches in computer vision, and describe the design of our facial expression recognizer. We also give some background on our motivation for understanding facial expressions and propose an architecture for a multimodal intelligent interface capable of recognizing and adapting to computer users' affective states. Finally, we discuss current interdisciplinary issues and research questions which will need to be addressed for further progress to be made in the promising area of computational facial expression recognition.
3

Lazzeri, Nicole, Daniele Mazzei, Maher Ben Moussa, Nadia Magnenat-Thalmann, and Danilo De Rossi. "The influence of dynamics and speech on understanding humanoid facial expressions." International Journal of Advanced Robotic Systems 15, no. 4 (July 1, 2018): 172988141878315. http://dx.doi.org/10.1177/1729881418783158.

Abstract:
Human communication relies mostly on nonverbal signals expressed through body language. Facial expressions, in particular, convey emotional information that allows people involved in social interactions to judge each other's emotional states and to adjust their behavior appropriately. The first studies investigating the recognition of facial expressions were based on static stimuli. However, facial expressions are rarely static, especially in everyday social interactions. Therefore, it has been hypothesized that the dynamics inherent in a facial expression could be fundamental to understanding its meaning. In addition, it has been demonstrated that nonlinguistic and linguistic information can help reinforce the meaning of a facial expression, making it easier to recognize. Nevertheless, few studies have been performed on realistic humanoid robots. This experimental work aimed to demonstrate the human-like expressive capability of a humanoid robot by examining whether motion and vocal content influenced the perception of its facial expressions. The first part of the experiment studied the recognizability of two kinds of stimuli related to the six basic expressions (i.e. anger, disgust, fear, happiness, sadness, and surprise): static stimuli, that is, photographs, and dynamic stimuli, that is, video recordings. The second and third parts compared the same six basic expressions performed by a virtual avatar and by a physical robot under three different conditions: (1) muted facial expressions, (2) facial expressions with nonlinguistic vocalizations, and (3) facial expressions with an emotionally neutral verbal sentence. The results show that static stimuli performed by a human being and by the robot were more ambiguous than the corresponding dynamic stimuli with which motion and vocalization were associated. This hypothesis was also investigated with a 3-dimensional replica of the physical robot, demonstrating that, even in the case of a virtual avatar, dynamics and vocalization improve the capability to convey emotion.
4

Padmapriya K.C., Leelavathy V., and Angelin Gladston. "Automatic Multiface Expression Recognition Using Convolutional Neural Network." International Journal of Artificial Intelligence and Machine Learning 11, no. 2 (July 2021): 1–13. http://dx.doi.org/10.4018/ijaiml.20210701.oa8.

Abstract:
Human facial expressions convey a lot of information visually. Facial expression recognition plays a crucial role in the area of human-machine interaction. Automatic facial expression recognition systems have many applications in human behavior understanding, detection of mental disorders, and synthesis of human expressions. Recognition of facial expressions by computer with a high recognition rate is still a challenging task. Most of the methods utilized in the literature for automatic facial expression recognition systems are based on geometry and appearance. Facial expression recognition is usually performed in four stages: pre-processing, face detection, feature extraction, and expression classification. In this paper we applied various deep learning methods to classify the seven key human emotions: anger, disgust, fear, happiness, sadness, surprise, and neutrality. The facial expression recognition system developed is experimentally evaluated with the FER dataset and achieves good accuracy.
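To make the four-stage pipeline concrete, here is a minimal sketch of such a classifier. This is not the authors' network: the layer sizes and the 48x48 grayscale input are assumptions modeled on the public FER-2013 format.

import torch
import torch.nn as nn

class FerCnn(nn.Module):
    """Toy classifier for the last two pipeline stages: takes a pre-processed,
    detected face crop, extracts features with convolutions, classifies 7 emotions."""
    def __init__(self, num_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 48 -> 24
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 24 -> 12
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 12 * 12, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = FerCnn()
logits = model(torch.randn(1, 1, 48, 48))  # one grayscale face crop
print(logits.shape)  # torch.Size([1, 7])

In practice the network would be trained with a cross-entropy loss over the seven emotion labels after face detection and cropping.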
5

ter Stal, Silke, Gerbrich Jongbloed, and Monique Tabak. "Embodied Conversational Agents in eHealth: How Facial and Textual Expressions of Positive and Neutral Emotions Influence Perceptions of Mutual Understanding." Interacting with Computers 33, no. 2 (March 2021): 167–76. http://dx.doi.org/10.1093/iwc/iwab019.

Abstract:
Embodied conversational agents (ECAs) could engage users in eHealth by building mutual understanding (i.e. rapport) via emotional expressions. We compared an ECA's emotions expressed in text with an ECA's emotions in facial expressions on users' perceptions of rapport. We used a 2 × 2 design, combining a happy or neutral facial expression with a happy or neutral textual expression. Sixty-three participants (mean, 48 ± 22 years) had a dialogue with an ECA on healthy living and rated multiple rapport items. Results show that participants' perceived rapport for an ECA with a happy facial expression and neutral textual expression and an ECA with a neutral facial expression and happy textual expression was significantly higher than the neutral value of the rapport scale (P = 0.049 and P = 0.008, respectively). Furthermore, results show no significant difference in overall rapport between the conditions (P = 0.062), but a happy textual expression for an ECA with a neutral facial expression shows higher ratings of the individual rapport items helpfulness (P = 0.019) and enjoyableness (P = 0.028). Future research should investigate users' rapport towards an ECA with different emotions in long-term interaction and how a user's age and personality and an ECA's animations affect rapport building. Optimizing rapport building between a user and an ECA could contribute to achieving long-term interaction with eHealth.
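For readers unfamiliar with the analysis reported above, the snippet below sketches the kind of one-sample comparison against a scale midpoint that yields such P values. The data, the 1-7 scale, and the choice of a t-test are illustrative assumptions, not details from the paper.

import numpy as np
from scipy import stats

neutral_midpoint = 4.0  # assumed midpoint of a 1..7 rapport scale
ratings = np.array([4.5, 5.0, 4.0, 5.5, 4.8, 4.2, 5.1])  # toy per-participant means

# One-sample test of whether mean perceived rapport differs from the midpoint
t, p = stats.ttest_1samp(ratings, neutral_midpoint)
print(f"t = {t:.2f}, p = {p:.3f}")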
6

Pancotti, Francesco, Sonia Mele, Vincenzo Callegari, Raffaella Bivi, Francesca Saracino, and Laila Craighero. "Efficacy of Facial Exercises in Facial Expression Categorization in Schizophrenia." Brain Sciences 11, no. 7 (June 22, 2021): 825. http://dx.doi.org/10.3390/brainsci11070825.

Abstract:
Embodied cognition theories suggest that observation of a facial expression induces the same pattern of muscle activation in the observer, and that this contributes to emotion recognition. Consequently, the inability to form facial expressions would affect emotional understanding. Patients with schizophrenia show a reduced ability to express and perceive facial emotions. We assumed that physical training specifically developed to mobilize facial muscles could improve the ability to perform facial movements and, consequently, spontaneous mimicry and facial expression recognition. Twenty-four inpatient participants with schizophrenia were randomly assigned to an experimental and a control group. At the beginning and at the end of the study, both groups underwent a facial expression categorization test, and their data were compared. The experimental group underwent a training period during which the lip muscles and the muscles around the eyes were mobilized through the execution of transitive actions. Participants were trained three times a week for five weeks. Results showed a positive impact of the physical training on the recognition of others' facial emotions, specifically for responses of 'fear', the emotion for which the recognition deficit in the test is most severe. This evidence suggests that a specific deficit of the sensorimotor system may result in a specific cognitive deficit.
7

Wang, Fengyuan, Jianhua Lv, Guode Ying, Shenghui Chen, and Chi Zhang. "Facial expression recognition from image based on hybrid features understanding." Journal of Visual Communication and Image Representation 59 (February 2019): 84–88. http://dx.doi.org/10.1016/j.jvcir.2018.11.010.

8

Parr, Lisa A., and Bridget M. Waller. "Understanding chimpanzee facial expression: insights into the evolution of communication." Social Cognitive and Affective Neuroscience 1, no. 3 (December 1, 2006): 221–28. http://dx.doi.org/10.1093/scan/nsl031.

9

Uddin, Md Azher, Joolekha Bibi Joolee, and Kyung-Ah Sohn. "Dynamic Facial Expression Understanding Using Deep Spatiotemporal LDSP On Spark." IEEE Access 9 (2021): 16866–77. http://dx.doi.org/10.1109/access.2021.3053276.

10

Arya, Ali, Steve DiPaola, and Avi Parush. "Perceptually Valid Facial Expressions for Character-Based Applications." International Journal of Computer Games Technology 2009 (2009): 1–13. http://dx.doi.org/10.1155/2009/462315.

Abstract:
This paper addresses the problem of creating facial expressions of mixed emotions in a perceptually valid way. The research has been done in the context of 'game-like' health and education applications aimed at studying social competency and facial expression awareness in autistic children, as well as native language learning, but the results can be applied to many other applications, such as games that need dynamic facial expressions or tools for automating the creation of facial animations. Most existing methods for creating facial expressions of mixed emotions use operations like averaging to create the combined effect of two universal emotions. Such methods may be mathematically justifiable but are not necessarily valid from a perceptual point of view. The research reported here starts with user experiments aimed at understanding how people combine facial actions to express mixed emotions, and how viewers perceive a set of facial actions in terms of underlying emotions. Using the results of these experiments and a three-dimensional emotion model, we associate facial actions with dimensions and regions in the emotion space, and create a facial expression based on the location of the mixed emotion in the three-dimensional space. We call these regionalized facial actions 'facial expression units.'
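A toy sketch of the region-based idea, for illustration only: facial actions are selected according to where a mixed emotion falls in a three-dimensional emotion space. The axes, thresholds, and unit names below are placeholders, not the paper's empirically derived mapping.

from dataclasses import dataclass

@dataclass
class EmotionPoint:
    valence: float    # negative .. positive
    arousal: float    # calm .. excited
    dominance: float  # submissive .. dominant

def expression_units(p: EmotionPoint) -> list[str]:
    # Each rule stands in for a region of the emotion space mapped to
    # regionalized facial actions ("facial expression units").
    units = ["lip-corner-raise" if p.valence > 0 else "brow-lower"]
    if p.arousal > 0.5:
        units.append("eye-widen")
    if p.dominance < 0:
        units.append("head-tilt-down")
    return units

# A mixed emotion such as pleasant surprise with low dominance:
print(expression_units(EmotionPoint(valence=0.6, arousal=0.8, dominance=-0.2)))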

Dissertations / Theses on the topic "Facial expression understanding"

1

Choudhury, Tanzeem Khalid. "FaceFacts: study of facial features for understanding expression." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/61109.

Abstract:
Thesis (S.M.), Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 1999. Includes bibliographical references (p. 79-83).
2

Bloom, Elana. "Recognition, expression, and understanding facial expressions of emotion in adolescents with nonverbal and general learning disabilities." Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=100323.

Abstract:
Students with learning disabilities (LD) have been found to exhibit social difficulties compared to those without LD (Wong, 2004). Recognition, expression, and understanding of facial expressions of emotions have been shown to be important for social functioning (Custrini & Feldman, 1989; Philippot & Feldman, 1990). LD subtypes have been studied (Rourke, 1999), and children with nonverbal learning disabilities (NVLD) have been observed to be worse at recognizing facial expressions compared to children with verbal learning disabilities (VLD), no learning disability (NLD; Dimitrovsky, Spector, Levy-Shiff, & Vakil, 1998; Dimitrovsky, Spector, & Levy-Shiff, 2000), and controls with psychiatric difficulties without LD (Petti, Voelker, Shore, & Hyman-Abello, 2003). However, little has been done in this area with adolescents with NVLD. Recognition, expression, and understanding of facial expressions of emotion, as well as general social functioning, have yet to be studied simultaneously among adolescents with NVLD, NLD, and general learning disabilities (GLD). The purpose of this study was to examine the abilities of adolescents with NVLD, GLD, and without LD to recognize, express, and understand facial expressions of emotion, in addition to their general social functioning.
Adolescents aged 12 to 15 were screened for LD and NLD using the Wechsler Intelligence Scale for Children–Third Edition (WISC-III; Wechsler, 1991) and the Wide Range Achievement Test–Third Edition (WRAT3; Wilkinson, 1993) and subtyped into NVLD and GLD groups based on the WRAT3. The NVLD (n = 23), matched NLD (n = 23), and a comparable GLD (n = 23) group completed attention, mood, and neuropsychological measures. The adolescents' ability to recognize (Pictures of Facial Affect; Ekman & Friesen, 1976), express, and understand facial expressions of emotion, and their general social functioning, was assessed. Results indicated that the GLD group was significantly less accurate at recognizing and understanding facial expressions of emotion compared to the NVLD and NLD groups, who did not differ from each other. No differences emerged between the NVLD, NLD, and GLD groups on the expression or social functioning tasks. The neuropsychological measures did not account for a significant portion of the variance on the emotion tasks. Implications regarding severity of LD are discussed.
3

Maas, Casey. "Decoding Faces: The Contribution of Self-Expressiveness Level and Mimicry Processes to Emotional Understanding." Scholarship @ Claremont, 2014. http://scholarship.claremont.edu/scripps_theses/406.

Abstract:
Facial expressions provide valuable information in making judgments about internal emotional states. Evaluation of facial expressions can occur through mimicry processes via the mirror neuron system (MNS) pathway, where a decoder mimics a target’s facial expression and proprioceptive perception prompts emotion recognition. Female participants rated emotional facial expressions when mimicry was inhibited by immobilization of facial muscles and when mimicry was uncontrolled, and were evaluated for self-expressiveness level. A mixed ANOVA was conducted to determine how self-expressiveness level and manipulation of facial muscles impacted recognition accuracy for facial expressions. Main effects of self-expressiveness level and facial muscle manipulation were not found to be significant (p > .05), nor did these variables appear to interact (p > .05). The results of this study suggest that an individual’s self-expressiveness level and use of mimicry processes may not play a central role in emotion recognition.
4

Miners, William Ben. "Toward Understanding Human Expression in Human-Robot Interaction." Thesis, University of Waterloo, 2006. http://hdl.handle.net/10012/789.

Abstract:
Intelligent devices are quickly becoming necessities to support our activities during both work and play. We are already bound in a symbiotic relationship with these devices. An unfortunate effect of the pervasiveness of intelligent devices is the substantial investment of our time and effort to communicate intent. Even though our increasing reliance on these intelligent devices is inevitable, the limits of conventional methods for devices to perceive human expression hinders communication efficiency. These constraints restrict the usefulness of intelligent devices to support our activities. Our communication time and effort must be minimized to leverage the benefits of intelligent devices and seamlessly integrate them into society. Minimizing the time and effort needed to communicate our intent will allow us to concentrate on tasks in which we excel, including creative thought and problem solving.

An intuitive method to minimize human communication effort with intelligent devices is to take advantage of our existing interpersonal communication experience. Recent advances in speech, hand gesture, and facial expression recognition provide alternate viable modes of communication that are more natural than conventional tactile interfaces. Use of natural human communication eliminates the need to adapt and invest time and effort using less intuitive techniques required for traditional keyboard and mouse based interfaces.

Although the state of the art in natural but isolated modes of communication achieves impressive results, significant hurdles must be conquered before communication with devices in our daily lives will feel natural and effortless. Research has shown that combining information between multiple noise-prone modalities improves accuracy. Leveraging this complementary and redundant content will improve communication robustness and relax current unimodal limitations.

This research presents and evaluates a novel multimodal framework to help reduce the total human effort and time required to communicate with intelligent devices. This reduction is realized by determining human intent using a knowledge-based architecture that combines and leverages conflicting information available across multiple natural communication modes and modalities. The effectiveness of this approach is demonstrated using dynamic hand gestures and simple facial expressions characterizing basic emotions. It is important to note that the framework is not restricted to these two forms of communication. The framework presented in this research provides the flexibility necessary to include additional or alternate modalities and channels of information in future research, including improving the robustness of speech understanding.

The primary contributions of this research include the leveraging of conflicts in a closed-loop multimodal framework, explicit use of uncertainty in knowledge representation and reasoning across multiple modalities, and a flexible approach for leveraging domain specific knowledge to help understand multimodal human expression. Experiments using a manually defined knowledge base demonstrate an improved average accuracy of individual concepts and an improved average accuracy of overall intents when leveraging conflicts as compared to an open-loop approach.
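The claim that combining noisy modalities improves accuracy can be illustrated with a toy fusion of per-modality class probabilities by normalized product. This naive sketch is far simpler than the thesis's knowledge-based, conflict-leveraging framework.

import numpy as np

emotions = ["anger", "happiness", "surprise"]
p_face = np.array([0.5, 0.3, 0.2])     # noisy estimate from facial expression
p_gesture = np.array([0.2, 0.5, 0.3])  # noisy estimate from hand gesture

# Fuse by normalized product; agreement between modalities sharpens the result
fused = p_face * p_gesture
fused /= fused.sum()
for name, p in zip(emotions, fused):
    print(f"{name}: {p:.2f}")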
5

Stott, Dorthy A. "Recognition of Emotion in Facial Expressions by Children with Language Impairment." Diss., Brigham Young University, 2008. http://contentdm.lib.byu.edu/ETD/image/etd2513.pdf.

6

Weber, Marlene. "Automotive emotions : a human-centred approach towards the measurement and understanding of drivers' emotions and their triggers." Thesis, Brunel University, 2018. http://bura.brunel.ac.uk/handle/2438/16647.

Abstract:
The automotive industry is facing significant technological and sociological shifts, calling for an improved understanding of driver and passenger behaviours, emotions and needs, and a transformation of the traditional automotive design process. This research takes a human-centred approach to automotive research, investigating the users' emotional states during automobile driving, with the goal of developing a framework for automotive emotion research, thus enabling the integration of technological advances into the driving environment. A literature review of human emotion and emotion in an automotive context was conducted, followed by three driving studies investigating emotion through Facial-Expression Analysis (FEA): An exploratory study investigated whether emotion elicitation can be applied in driving simulators, and whether FEA can detect the emotions triggered. The results gave confidence that emotion elicitation is applicable in a lab-based environment to trigger emotional responses, and that FEA can detect them. An on-road driving study was conducted in a natural setting to investigate whether the natures and frequencies of emotion events could be automatically measured, and whether triggers could be assigned to them. Overall, 730 emotion events were detected during a total driving time of 440 minutes, and event triggers were assigned to 92% of the emotion events. A similar second on-road study was conducted in a partially controlled setting on a planned road circuit. In 840 minutes, 1947 emotion events were measured, and triggers were successfully assigned to 94% of those. The differences in the natures, frequencies and causes of emotions on different road types were investigated. Comparison of emotion events for different roads demonstrated substantial variance in the natures, frequencies and triggers of emotions on different road types. The results showed that emotions play a significant role during automobile driving. The possibility of assigning triggers can be used to create a better understanding of the causes of emotions in the automotive habitat. Both on-road studies were compared through statistical analysis to investigate the influences of the different study settings. Certain conditions (e.g. driving setting, social interaction) showed significant influence on emotions during driving. This research establishes and validates a methodology for the study of emotions and their causes in the driving environment, through which systems and factors causing positive and negative emotional effects can be identified. The methodology and results can be applied to design and research processes, allowing the identification of issues and opportunities in current automotive design to address the challenges of future automotive design. Suggested future research includes the investigation of a wider variety of road types and situations, testing with different automobiles, and the combination of multiple measurement techniques.
7

Sauer, Patrick Martin. "Model-based understanding of facial expressions." Thesis, University of Manchester, 2013. https://www.research.manchester.ac.uk/portal/en/theses/modelbased-understanding-of-facial-expressions(e88bff4f-d72e-4d11-b964-fc20f009609b).html.

Abstract:
In this thesis we present novel methods for constructing and fitting 2d models of shape and appearance which are used for analysing human faces. The first contribution builds on previous work on discriminative fitting strategies for active appearance models (AAMs) in which regression models are trained to predict the location of shapes based on texture samples. In particular, we investigate non-parametric regression methods including random forests and Gaussian processes which are used together with gradient-like features for shape model fitting. We then develop two training algorithms which combine such models into sequences, and systematically compare their performance to existing linear generative AAM algorithms. Inspired by the performance of the Gaussian process-based regression methods, we investigate a group of non-linear latent variable models known as Gaussian process latent variable models (GPLVM). We discuss how such models may be used to develop a generative active appearance model algorithm whose texture model component is non-linear, and show how this leads to lower-dimensional models which are capable of generating more natural-looking images of faces when compared to equivalent linear models. We conclude by describing a novel supervised non-linear latent variable model based on Gaussian processes which we apply to the problem of recognising emotions from facial expressions.
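The discriminative fitting strategy described above, in which a regression model predicts shape updates from texture samples, can be sketched as follows. The dimensions and random data are placeholders, not the thesis's models or features.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
texture_samples = rng.normal(size=(500, 60))  # gradient-like features per sample
shape_updates = rng.normal(size=(500, 8))     # target shape-parameter offsets

# Train a regressor to map texture samples to shape updates
reg = RandomForestRegressor(n_estimators=50, random_state=0)
reg.fit(texture_samples, shape_updates)

# One fitting iteration: sample texture at the current shape estimate,
# then move the shape parameters by the predicted update
current_texture = rng.normal(size=(1, 60))
print(reg.predict(current_texture).shape)  # (1, 8)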
8

Alves, Ana Rita Coutinho. "Desenvolvimento das competências emocionais em crianças com idades compreendidas entre os 3 e os 5 anos através do programa "ser e conhecer"." Master's thesis, ISPA - Instituto Universitário, 2013. http://hdl.handle.net/10400.12/2815.

Abstract:
Master's dissertation in Educational Psychology presented to ISPA - Instituto Universitário.
The present study aims to assess the impact that the program 'Being and Knowing' ('Ser e Conhecer') has on the development of certain emotional competencies, specifically the naming and identification of facial expressions of basic emotions, the capacity for causal understanding, and the capacity for affective decentration. 57 children participated in the study, aged 36 to 71 months, attending kindergarten and mixed classrooms. The development of emotional competencies was evaluated in a pre-test and a post-test using the Teste de Conhecimento das Emoções: Manual do Fantoche, the Portuguese version of the Affect Knowledge Test. In between, the participants in the experimental group took part in the program 'Being and Knowing'. Since only the basic emotions are addressed, the program demonstrated no impact on the development of the competencies.
9

Collins, Michael S. "Understanding the Expressive Cartoon Drawings of a Student with Autism Spectrum Disorder." VCU Scholars Compass, 2017. http://scholarscompass.vcu.edu/etd/4893.

Abstract:
This study focuses on the highly expressive comic drawings of Amy, a child with autism. This study connects larger fields of research: the study of how people with autism spectrum disorder [ASD] process faces and emotions; and, research about artists with ASD. Amy's understanding of emotion was analyzed by asking her to view and identify humans and cartoon characters expressing different emotions. Her ability to illustrate emotion is tested by asking her to respond to various drawing prompts. The study concluded that Amy has difficulty identifying the emotions of humans and cartoons, but she does have the ability to illustrate characters that express a range of emotions. This individual case study shows that students with autism were able to process visual expressions of emotion with a high degree of accuracy. The results provide art educators a model with which to investigate how their students with autism process emotional expression.
10

Chiu, Jeng-Ping (邱正平). "Understanding System on Facial Expression and Action for SUFFERING Factors." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/gt7j54.

Abstract:
Master's thesis, National Cheng Kung University, Department of Electrical Engineering, 2016 (academic year 104).
In recent years, the application of and demand for intelligent human-machine interfaces have increased gradually. Techniques for understanding human emotion are no longer restricted to analyzing text, voice, observation, and so on. With the rapid improvement of pattern recognition, facial expression and activity recognition technology has been widely used in the fields of home care robotics, monitoring equipment, and human behavior analysis. This thesis proposes an understanding system for SUFFERING factors to interpret negative emotions, since human feelings cannot be represented by facial expressions or actions alone. Our proposed system is composed of facial expression recognition and action detection modules. Compared with the Action Unit (AU), this work proposes a novel Suffering Unit (SU), which consists of facial and posture action units. After capturing the whole body with a Kinect v2, the system performs recognition of both facial expressions and actions and outputs the results in real time. The proposed Hierarchy-Coherence K Nearest Neighbor (HC-KNN) calculates the coherence of the training data and improves on the performance of KNN in facial expression recognition. In addition, an Average Moving Action Status Window (AMASW) is proposed to build our action detection system. With the proposed understanding system, we can identify SUFFERING factors by SUs, which cover 19 kinds of facial expressions and actions. The experimental results have demonstrated the effectiveness of the proposed system: the recognition rate reaches 87.74% for facial expressions and 90.81% for actions.
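The HC-KNN and AMASW methods are specific to this thesis; as a generic point of reference only, a plain KNN classifier over pre-extracted facial feature vectors looks like the sketch below (the feature extraction step is assumed, not shown).

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 34))    # e.g. 17 facial landmarks -> 34-dim vectors
y = rng.integers(0, 6, size=200)  # toy labels over six expression classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print(f"baseline accuracy: {knn.score(X_te, y_te):.2f}")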

Books on the topic "Facial expression understanding"

1

Zhixin, Yi, ed. Xin li xue jia de mian xiang shu: Jie du qing xu de mi ma = Emotions revealed : understanding faces and feelings. Taibei Shi: Xin ling gong fang wen hua shi ye gu fen you xian gong si, 2004.

2

Neu, Harold C., ed. Understanding infectious disease. St. Louis: Mosby Year Book, 1992.

3

Mandal, Manas K., and Avinash Awasthi, eds. Understanding Facial Expressions in Communication. New Delhi: Springer India, 2015. http://dx.doi.org/10.1007/978-81-322-1934-7.

4

Profyt, Linda. Children's understanding of emotions in facial expressions. Sudbury, Ont: Laurentian University, Department of Psychology, 1990.

5

Lee, Daniel H., and Adam K. Anderson. Form and Function of Facial Expressive Origins. Oxford University Press, 2017. http://dx.doi.org/10.1093/acprof:oso/9780190613501.003.0010.

Abstract:
Facial expressions are an important source of social communication. But we do not know why they appear the way they do and how they arose. Here we discuss evidence supporting Darwin's theory that our expressions originated for egocentric sensory function for the expresser, and were then co-opted as signals for allocentric social function. We show that facial expressions of fear and disgust have distinct opposing sensory effects that serve each emotion's theorized function, regulating the intake of nasal and visual information. Then, we show how such egocentrically adaptive expressive forms may have been socially co-opted for allocentric function, transmitting basic gaze signals and complex mental states adaptively congruent for the receiver as for the expresser. Together, the evidence connects the appearance of our expressions from their evolutionary origins to their modern-day communicative role, providing a functional perspective for organizing and understanding expression forms.
6

Diogo, Rui, and Sharlene E. Santana. Evolution of Facial Musculature. Oxford University Press, 2017. http://dx.doi.org/10.1093/acprof:oso/9780190613501.003.0008.

Abstract:
We review the origin and evolution of the facial musculature of mammals and pay special attention to the complex relationships between facial musculature, color patterns, mobility, and social group size during the evolution of humans and other primates. In addition, we discuss the modularity of the human head and the asymmetrical use of facial expressions, as well as the evolvability of the muscles of facial expression, based on recent developmental and comparative studies and the use of a powerful new quantitative tool: anatomical network analysis. We emphasize the remarkable diversity of primate facial structures and the fact that the number of facial muscles present in our species is actually not as high, compared to many other mammals, as previously thought. The use of new tools, such as anatomical network analyses, should be further explored to compare the musculoskeletal and other features of humans across stages of development and with other animals, to enable a better understanding of the evolution of facial expressions.
7

Ross, Jacob DC. Making Faces: Understanding Facial Expressions for Autistic Kids. Jacob DC Ross, 2015.

8

Mason, Peggy. From Movement to Action. Oxford University Press, 2017. http://dx.doi.org/10.1093/med/9780190237493.003.0023.

Abstract:
Tracts descending from motor control centers in the brainstem and cortex target motor interneurons and, in select cases, motoneurons. The mechanisms and constraints of postural control are elaborated, and the effect of body mass on posture is discussed. Feed-forward reflexes that maintain posture during standing and other conditions of self-motion are described. The role of descending tracts in postural control and in pathological posturing is described. Pyramidal (corticospinal and corticobulbar) and extrapyramidal control of body and face movements is contrasted. Special emphasis is placed on cortical regions and tracts involved in deliberate control of facial expression; these pathways are contrasted with mechanisms for generating emotional facial expressions. The signs associated with lesions of either motoneurons or motor control centers are clearly detailed. The mechanisms and presentation of cerebral palsy are described. Finally, understanding how pre-motor cortical regions generate actions is used to introduce apraxia, a disorder of action.
9

Mandal, Manas K., and Avinash Awasthi. Understanding Facial Expressions in Communication: Cross-cultural and Multidisciplinary Perspectives. Springer, 2014.

10

Mandal, Manas K., and Avinash Awasthi. Understanding Facial Expressions in Communication: Cross-cultural and Multidisciplinary Perspectives. Springer, 2016.


Book chapters on the topic "Facial expression understanding"

1

Gong, Shaogang, and Tao Xiang. "Understanding Facial Expression." In Visual Analysis of Behaviour, 69–93. London: Springer London, 2011. http://dx.doi.org/10.1007/978-0-85729-670-2_4.

2

Valstar, Michel. "Automatic Facial Expression Analysis." In Understanding Facial Expressions in Communication, 143–72. New Delhi: Springer India, 2014. http://dx.doi.org/10.1007/978-81-322-1934-7_8.

3

Poria, Swarup, Ananya Mondal, and Pritha Mukhopadhyay. "Evaluation of the Intricacies of Emotional Facial Expression of Psychiatric Patients Using Computational Models." In Understanding Facial Expressions in Communication, 199–226. New Delhi: Springer India, 2014. http://dx.doi.org/10.1007/978-81-322-1934-7_10.

4

Drira, Hassen, Boulbaba Ben Amor, Mohamed Daoudi, and Stefano Berretti. "A Dense Deformation Field for Facial Expression Analysis in Dynamic Sequences of 3D Scans." In Human Behavior Understanding, 148–59. Cham: Springer International Publishing, 2013. http://dx.doi.org/10.1007/978-3-319-02714-2_13.

5

Chen, Luefeng, Min Wu, Witold Pedrycz, and Kaoru Hirota. "Weight-Adapted Convolution Neural Network for Facial Expression Recognition." In Emotion Recognition and Understanding for Emotional Human-Robot Interaction Systems, 57–75. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-61577-2_5.

6

Cruz, Alberto C., B. Bhanu, and N. S. Thakoor. "Understanding of the Biological Process of Nonverbal Communication: Facial Emotion and Expression Recognition." In Computational Biology, 329–47. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-23724-4_18.

7

Regenbogen, Christina, and Ute Habel. "Facial Expressions in Empathy Research." In Understanding Facial Expressions in Communication, 101–17. New Delhi: Springer India, 2014. http://dx.doi.org/10.1007/978-81-322-1934-7_6.

8

Awasthi, Avinash, and Manas K. Mandal. "Facial Expressions of Emotions: Research Perspectives." In Understanding Facial Expressions in Communication, 1–18. New Delhi: Springer India, 2014. http://dx.doi.org/10.1007/978-81-322-1934-7_1.

9

Frank, Mark G., and Elena Svetieva. "Microexpressions and Deception." In Understanding Facial Expressions in Communication, 227–42. New Delhi: Springer India, 2014. http://dx.doi.org/10.1007/978-81-322-1934-7_11.

10

Castillo, Paola A. "The Detection of Deception in Cross-Cultural Contexts." In Understanding Facial Expressions in Communication, 243–63. New Delhi: Springer India, 2014. http://dx.doi.org/10.1007/978-81-322-1934-7_12.


Conference papers on the topic "Facial expression understanding"

1

"Session: Facial Expression Understanding." In 2019 IEEE 15th International Conference on Intelligent Computer Communication and Processing (ICCP). IEEE, 2019. http://dx.doi.org/10.1109/iccp48234.2019.8959631.

2

Ou, Yang-Yen, Ta-Wen Kuan, An-Chao Tsai, Jhing-Fa Wang, and Jheng-Ping Chiou. "Suffering understanding system based on facial expression and human action." In 2016 International Conference on Orange Technologies (ICOT). IEEE, 2016. http://dx.doi.org/10.1109/icot.2016.8278977.

3

McDuff, Daniel, Rana el Kaliouby, Karim Kassam, and Rosalind Picard. "Acume: A new visualization tool for understanding facial expression and gesture data." In Gesture Recognition (FG 2011). IEEE, 2011. http://dx.doi.org/10.1109/fg.2011.5771464.

4

"DYNAMIC FACIAL EXPRESSION UNDERSTANDING BASED ON TEMPORAL MODELLING OF TRANSFERABLE BELIEF MODEL." In International Conference on Computer Vision Theory and Applications. SciTePress - Science and and Technology Publications, 2006. http://dx.doi.org/10.5220/0001377600930100.

5

Zhang, Yongmian, and Qiang Ji. "Facial expression understanding in image sequences using dynamic and active visual information fusion." In ICCV 2003: 9th International Conference on Computer Vision. IEEE, 2003. http://dx.doi.org/10.1109/iccv.2003.1238640.

6

Pham, Phuong, and Jingtao Wang. "Understanding Emotional Responses to Mobile Video Advertisements via Physiological Signal Sensing and Facial Expression Analysis." In IUI'17: 22nd International Conference on Intelligent User Interfaces. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3025171.3025186.

7

Buzuti, Lucas Fontes, and Carlos Eduardo Thomaz. "Understanding fully-connected and convolutional layers in unsupervised learning using face images." In XV Workshop de Visão Computacional. Sociedade Brasileira de Computação - SBC, 2019. http://dx.doi.org/10.5753/wvc.2019.7621.

Abstract:
The goal of this paper is to implement and compare two unsupervised deep learning models: the Autoencoder and the Convolutional Autoencoder. These neural network models have been trained to learn regularities in well-framed face images with different facial expressions. The Autoencoder's basic topology, composed of encoding and decoding multilayers, is addressed here. The paper analyses these automatic codings using multivariate statistics to visually understand the bottleneck differences between the fully-connected and convolutional layers and the corresponding importance of the dropout strategy when applied in a model.
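A minimal convolutional autoencoder in the spirit of this comparison might look as follows; the 64x64 grayscale input, layer widths, and bottleneck size are assumptions, not the authors' architecture. Training would minimize a reconstruction loss such as MSE between input and output.

import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self, bottleneck: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, bottleneck),  # the bottleneck code
        )
        self.decoder = nn.Sequential(
            nn.Linear(bottleneck, 32 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

recon = ConvAutoencoder()(torch.randn(4, 1, 64, 64))
print(recon.shape)  # torch.Size([4, 1, 64, 64])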
8

Seshadri, Priya, Youyi Bi, Jaykishan Bhatia, Ross Simons, Jeffrey Hartley, and Tahira Reid. "Evaluations That Matter: Customer Preferences Using Industry-Based Evaluations and Eye-Gaze Data." In ASME 2016 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2016. http://dx.doi.org/10.1115/detc2016-60293.

Abstract:
This study is the first stage of a research program aimed at understanding differences in how people process 2D and 3D automotive stimuli, using psychophysiological tools such as galvanic skin response (GSR), eye tracking, electroencephalography (EEG), and facial expression coding, along with respondent ratings. The current study uses just one measure, eye tracking, and one stimulus format, 2D realistic renderings of vehicles, to reveal where people expect to find information about brand and other industry-relevant topics, such as sportiness. The eye-gaze data showed differences in the percentage of fixation time that people spent on different views of cars while evaluating the "Brand" and the degree to which they looked "Sporty/Conservative", "Calm/Exciting", and "Basic/Luxurious". The results of this work can give designers insights into where they can invest their design efforts when considering brand and styling cues.
9

Sun, Ran, Harald Haraldsson, Yuhang Zhao, and Serge Belongie. "Anon-Emoji: An Optical See-Through Augmented Reality System for Children with Autism Spectrum Disorders to promote Understanding of Facial Expressions and Emotions." In 2019 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct). IEEE, 2019. http://dx.doi.org/10.1109/ismar-adjunct.2019.00052.

10

Myasnikova, Lyudmila, and Elena Shlegel. "Transformation of Individuality & Publicity: Philosophic-Anthropological Analysis." In The Public/Private in Modern Civilization, the 22nd Russian Scientific-Practical Conference (with international participation) (Yekaterinburg, April 16-17, 2020). Liberal Arts University – University for Humanities, Yekaterinburg, 2020. http://dx.doi.org/10.35853/ufh-public/private-2020-02.

Abstract:
The problem of the balance between society and personality, and the awareness of 'individuality', 'personality', and 'publicity' (publicness), rank among the central philosophical issues. There are many interpretations of them. And these matters remain critical in today's 'individualised' society. Based on a philosophic-anthropological approach, and using comparative-historical methods, the authors trace the cultural-historical transformation of the subsistence of an individual in society from Antiquity to the present. An individual is characterised via such conceptions as 'social type', 'individuality', 'personality'. The authors' interpretation of these concepts does not always coincide with the generally accepted one. In particular, the individual is often understood as an 'ensemble of social relations', i.e. as synonymous with the social. Furthermore, the authors define the term 'social type' as an expression of the societal, the term 'individuality' as a holograph or verge of the world, the absolute, mankind, whereas the term 'personality' is understood as an individuality rendered 'in-being-with-others'. The main developmental trend in the relationship between the individual and society is the long cultural-historical transition from an individuality 'outside the world' to an individuality 'in the world'. The authors justify the idea that an individualised society is not a society of individuals. Furthermore, the transformation of the conventional conception of publicness is revealed, and the ephemerality of publicness in contemporary society in general, and particularly in virtual space, is highlighted. Publicness is substituted with cocktail parties, 'cloakroom communities', and shindigs. The article deals with the construction of virtual identity in the social media of the younger generation. At the end of the article, the authors conclude that in the contemporary world of multiple identities, a person has to look for life values, once again facing the problem of choice and a new understanding of freedom.