
Journal articles on the topic 'Facial expression understanding'

Consult the top 50 journal articles for your research on the topic 'Facial expression understanding.'

1

Calder, Andrew J., and Andrew W. Young. "Understanding the recognition of facial identity and facial expression." Nature Reviews Neuroscience 6, no. 8 (August 2005): 641–51. http://dx.doi.org/10.1038/nrn1724.

2

Lisetti, Christine L., and Diane J. Schiano. "Automatic facial expression interpretation." Facial Information Processing 8, no. 1 (May 17, 2000): 185–235. http://dx.doi.org/10.1075/pc.8.1.09lis.

Abstract:
We discuss one of our projects, aimed at developing an automatic facial expression interpreter, mainly in terms of signaled emotions. We present some relevant findings on facial expressions from cognitive science and psychology that can be understood by, and be useful to, researchers in Human-Computer Interaction and Artificial Intelligence. We then give an overview of HCI applications involving automated facial expression recognition, survey some of the latest progress in this area achieved by various approaches in computer vision, and describe the design of our facial expression recognizer. We also give some background on our motivation for understanding facial expressions and propose an architecture for a multimodal intelligent interface capable of recognizing and adapting to computer users' affective states. Finally, we discuss current interdisciplinary issues and research questions that will need to be addressed for further progress in the promising area of computational facial expression recognition.
3

Lazzeri, Nicole, Daniele Mazzei, Maher Ben Moussa, Nadia Magnenat-Thalmann, and Danilo De Rossi. "The influence of dynamics and speech on understanding humanoid facial expressions." International Journal of Advanced Robotic Systems 15, no. 4 (July 1, 2018): 172988141878315. http://dx.doi.org/10.1177/1729881418783158.

Abstract:
Human communication relies mostly on nonverbal signals expressed through body language. Facial expressions, in particular, convey emotional information that allows people involved in social interactions to mutually judge emotional states and to adjust their behavior appropriately. The first studies investigating the recognition of facial expressions were based on static stimuli. However, facial expressions are rarely static, especially in everyday social interactions. Therefore, it has been hypothesized that the dynamics inherent in a facial expression could be fundamental to understanding its meaning. In addition, it has been demonstrated that nonlinguistic and linguistic information can reinforce the meaning of a facial expression, making it easier to recognize. Nevertheless, few such studies have been performed on realistic humanoid robots. This experimental work aimed to demonstrate the human-like expressive capability of a humanoid robot by examining whether motion and vocal content influenced the perception of its facial expressions. The first part of the experiment studied the recognizability of two kinds of stimuli related to the six basic expressions (i.e., anger, disgust, fear, happiness, sadness, and surprise): static stimuli, that is, photographs, and dynamic stimuli, that is, video recordings. The second and third parts compared the same six basic expressions performed by a virtual avatar and by a physical robot under three different conditions: (1) muted facial expressions, (2) facial expressions with nonlinguistic vocalizations, and (3) facial expressions with an emotionally neutral verbal sentence. The results show that static stimuli performed by a human being and by the robot were more ambiguous than the corresponding dynamic stimuli combining motion and vocalization. The hypothesis was also investigated with a three-dimensional replica of the physical robot, demonstrating that even for a virtual avatar, dynamics and vocalization improve the capability to convey emotion.
4

Padmapriya K.C., Leelavathy V., and Angelin Gladston. "Automatic Multiface Expression Recognition Using Convolutional Neural Network." International Journal of Artificial Intelligence and Machine Learning 11, no. 2 (July 2021): 1–13. http://dx.doi.org/10.4018/ijaiml.20210701.oa8.

Abstract:
Human facial expressions convey a great deal of information visually, and facial expression recognition plays a crucial role in human-machine interaction. Automatic facial expression recognition systems have many applications, including human behavior understanding, detection of mental disorders, and synthetic human expressions. Recognition of facial expressions by computer with a high recognition rate is still a challenging task. Most methods in the literature for automatic facial expression recognition are based on geometry and appearance. Facial expression recognition is usually performed in four stages: pre-processing, face detection, feature extraction, and expression classification. In this paper we applied various deep learning methods to classify the seven key human emotions: anger, disgust, fear, happiness, sadness, surprise, and neutrality. The facial expression recognition system developed was experimentally evaluated on the FER dataset and achieved good accuracy.
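
As a rough illustration of the four-stage pipeline and the CNN classifier this abstract describes, the sketch below builds a minimal seven-class expression network in Keras. The 48x48 grayscale input (typical of the public FER-2013 data) and all layer sizes are assumptions for illustration, not the architecture reported in the paper.

```python
# Minimal sketch of a 7-class facial expression CNN; the 48x48 grayscale
# input and layer choices are illustrative assumptions, not the paper's
# reported architecture.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_fer_cnn(num_classes: int = 7) -> tf.keras.Model:
    model = models.Sequential([
        layers.Input(shape=(48, 48, 1)),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),                      # regularization
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_fer_cnn()
model.summary()
```
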
5

ter Stal, Silke, Gerbrich Jongbloed, and Monique Tabak. "Embodied Conversational Agents in eHealth: How Facial and Textual Expressions of Positive and Neutral Emotions Influence Perceptions of Mutual Understanding." Interacting with Computers 33, no. 2 (March 2021): 167–76. http://dx.doi.org/10.1093/iwc/iwab019.

Abstract:
Embodied conversational agents (ECAs) could engage users in eHealth by building mutual understanding (i.e., rapport) via emotional expressions. We compared an ECA's emotions expressed in text with an ECA's emotions in facial expressions on users' perceptions of rapport. We used a 2 × 2 design, combining a happy or neutral facial expression with a happy or neutral textual expression. Sixty-three participants (mean age 48 ± 22 years) had a dialogue with an ECA on healthy living and rated multiple rapport items. Results show that perceived rapport for an ECA with a happy facial expression and neutral textual expression, and for an ECA with a neutral facial expression and happy textual expression, was significantly higher than the neutral value of the rapport scale (P = 0.049 and P = 0.008, respectively). Furthermore, results show no significant difference in overall rapport between the conditions (P = 0.062), but a happy textual expression for an ECA with a neutral facial expression yields higher ratings on the individual rapport items helpfulness (P = 0.019) and enjoyableness (P = 0.028). Future research should investigate users' rapport towards an ECA with different emotions in long-term interaction, and how a user's age and personality and an ECA's animations affect rapport building. Optimizing rapport building between a user and an ECA could contribute to achieving long-term interaction with eHealth.
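
The comparison reported in this abstract (ratings tested against the neutral value of the rapport scale) corresponds to a one-sample t-test. A minimal SciPy sketch follows; the ratings and the assumed scale midpoint of 4 on a 1-7 scale are hypothetical, not the study's data or necessarily its exact test.

```python
# One-sample t-test of rapport ratings against an assumed neutral
# midpoint of 4 on a 1-7 scale; ratings are hypothetical placeholders.
import numpy as np
from scipy import stats

ratings = np.array([5, 4, 6, 5, 4, 5, 6, 3, 5, 4], dtype=float)
t_stat, p_value = stats.ttest_1samp(ratings, popmean=4.0)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```
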
6

Pancotti, Francesco, Sonia Mele, Vincenzo Callegari, Raffaella Bivi, Francesca Saracino, and Laila Craighero. "Efficacy of Facial Exercises in Facial Expression Categorization in Schizophrenia." Brain Sciences 11, no. 7 (June 22, 2021): 825. http://dx.doi.org/10.3390/brainsci11070825.

Abstract:
Embodied cognition theories suggest that observation of a facial expression induces the same pattern of muscle activation in the observer, and that this contributes to emotion recognition. Consequently, the inability to form facial expressions would affect emotional understanding. Patients with schizophrenia show a reduced ability to express and perceive facial emotions. We assumed that physical training specifically developed to mobilize facial muscles could improve the ability to perform facial movements and, consequently, spontaneous mimicry and facial expression recognition. Twenty-four inpatient participants with schizophrenia were randomly assigned to an experimental and a control group. At the beginning and at the end of the study, both groups completed a facial expression categorization test, and their data were compared. The experimental group underwent a training period during which the lip muscles and the muscles around the eyes were mobilized through the execution of transitive actions. Participants were trained three times a week for five weeks. Results showed a positive impact of the physical training on the recognition of others' facial emotions, specifically for responses of "fear", the emotion for which the recognition deficit in the test is most severe. This evidence suggests that a specific deficit of the sensorimotor system may result in a specific cognitive deficit.
7

Wang, Fengyuan, Jianhua Lv, Guode Ying, Shenghui Chen, and Chi Zhang. "Facial expression recognition from image based on hybrid features understanding." Journal of Visual Communication and Image Representation 59 (February 2019): 84–88. http://dx.doi.org/10.1016/j.jvcir.2018.11.010.

8

Parr, Lisa A., and Bridget M. Waller. "Understanding chimpanzee facial expression: insights into the evolution of communication." Social Cognitive and Affective Neuroscience 1, no. 3 (December 1, 2006): 221–28. http://dx.doi.org/10.1093/scan/nsl031.

9

Uddin, Md Azher, Joolekha Bibi Joolee, and Kyung-Ah Sohn. "Dynamic Facial Expression Understanding Using Deep Spatiotemporal LDSP On Spark." IEEE Access 9 (2021): 16866–77. http://dx.doi.org/10.1109/access.2021.3053276.

10

Arya, Ali, Steve DiPaola, and Avi Parush. "Perceptually Valid Facial Expressions for Character-Based Applications." International Journal of Computer Games Technology 2009 (2009): 1–13. http://dx.doi.org/10.1155/2009/462315.

Abstract:
This paper addresses the problem of creating facial expressions of mixed emotions in a perceptually valid way. The research has been done in the context of "game-like" health and education applications aimed at studying social competency and facial expression awareness in autistic children, as well as native language learning, but the results can be applied to many other applications, such as games that need dynamic facial expressions or tools for automating the creation of facial animations. Most existing methods for creating facial expressions of mixed emotions use operations like averaging to create the combined effect of two universal emotions. Such methods may be mathematically justifiable but are not necessarily valid from a perceptual point of view. The research reported here starts with user experiments aimed at understanding how people combine facial actions to express mixed emotions, and how viewers perceive a set of facial actions in terms of underlying emotions. Using the results of these experiments and a three-dimensional emotion model, we associate facial actions with dimensions and regions in the emotion space, and create a facial expression based on the location of the mixed emotion in the three-dimensional space. We call these regionalized facial actions "facial expression units."
11

Ramis, Silvia, Jose Maria Buades, and Francisco J. Perales. "Using a Social Robot to Evaluate Facial Expressions in the Wild." Sensors 20, no. 23 (November 24, 2020): 6716. http://dx.doi.org/10.3390/s20236716.

Abstract:
In this work, an affective computing approach is used to study human-robot interaction, employing a social robot to validate facial expressions in the wild. Our overall goal is to evaluate whether a social robot can interact convincingly with human users, recognizing their potential emotions through facial expressions, contextual cues, and bio-signals. In particular, this work focuses on analyzing facial expression. A social robot is used to validate a pre-trained convolutional neural network (CNN) that recognizes facial expressions. Facial expression recognition plays an important role in robots' recognition and understanding of human emotion, and robots equipped with expression recognition capabilities can also be a useful tool for getting feedback from users. The designed experiment allows a neural network trained on facial expressions to be evaluated using a social robot in a real environment. In this paper, the CNN's accuracy is compared with that of human experts, and the interaction, attention, and difficulty of performing particular expressions are analyzed for 29 non-expert users. In the experiment, the robot leads the users to perform different facial expressions in a motivating and entertaining way, and at the end the users are quizzed about their experience with the robot. Finally, a set of experts and the CNN classify the expressions. The results obtained support the conclusion that a social robot is an adequate interaction paradigm for the evaluation of facial expressions.
12

Mendolia, Marilyn. "Facial Identity Memory Is Enhanced When Sender’s Expression Is Congruent to Perceiver’s Experienced Emotion." Psychological Reports 121, no. 5 (November 24, 2017): 892–908. http://dx.doi.org/10.1177/0033294117741655.

Abstract:
The role of the social context in facial identity recognition and expression recall was investigated by manipulating the sender’s emotional expression and the perceiver’s experienced emotion during encoding. A mixed-design with one manipulated between-subjects factor (perceiver’s experienced emotion) and two within-subjects factors (change in experienced emotion and sender’s emotional expression) was used. Senders’ positive and negative expressions were implicitly encoded while perceivers experienced their baseline emotion and then either a positive or a negative emotion. Facial identity recognition was then tested using senders’ neutral expressions. Memory for senders previously seen expressing positive or negative emotion was facilitated if the perceiver initially encoded the expression while experiencing a positive or a negative emotion, respectively. Furthermore, perceivers were confident of their decisions. This research provides a more detailed understanding of the social context by exploring how the sender–perceiver interaction affects the memory for the sender.
13

Almeida, João, Luís Vilaça, Inês N. Teixeira, and Paula Viana. "Emotion Identification in Movies through Facial Expression Recognition." Applied Sciences 11, no. 15 (July 25, 2021): 6827. http://dx.doi.org/10.3390/app11156827.

Abstract:
Understanding how acting bridges the emotional bond between spectators and films is essential to depicting how humans interact with this rapidly growing digital medium. In recent decades, the research community has made promising progress in developing facial expression recognition (FER) methods. However, little emphasis has been put on cinematographic content, which is complex by nature due to the visual techniques used to convey the desired emotions. Our work represents a step towards emotion identification in cinema through the analysis of facial expressions. We present a comprehensive overview of the most relevant datasets used for FER, highlighting problems caused by their heterogeneity and by the lack of a universal model of emotions. Built upon this understanding, we evaluated these datasets with standard image classification models to analyze the feasibility of using facial expressions to determine the emotional charge of a film. To cope with the lack of datasets for the scope under analysis, we demonstrated the feasibility of using a generic dataset for the training process and propose a new way to look at emotions by creating clusters of emotions based on the evidence obtained in the experiments.
14

Yang, Defeng, Hao Shen, and Robert S. Wyer. "The face is the index of the mind: understanding the association between self-construal and facial expressions." European Journal of Marketing 55, no. 6 (January 26, 2021): 1664–78. http://dx.doi.org/10.1108/ejm-03-2019-0295.

Abstract:
Purpose – This study examines the relationship between consumers' emotional expressions and their self-construals. The authors suggest that because an independent self-construal can reinforce the free expression of emotion, the expression of extreme emotions is likely to become associated with feelings of independence through social learning.
Design/methodology/approach – The paper includes five studies. Study 1A provided evidence that priming participants with different types of self-construal can influence the extremity of their emotional expressions. Study 1B showed that chronic self-construal could predict the facial expressions of students told to smile for a group photograph. Studies 2-4 found that inducing people either to manifest or simply to view an extreme facial expression activated an independent social orientation and influenced their performance on tasks that reflect this orientation.
Findings – The studies support a bidirectional causal relationship between individuals' self-construals and the extremity of their emotional expressions. They show that people's general social orientation can predict the spontaneous facial expressions they manifest in their daily lives.
Research limitations/implications – Although this research was generally restricted to the effects of smiling, similar considerations influence the expression of other emotions. That is, dispositions to exhibit extreme expressions can generalize over different types of emotions; expressions of sadness, anger, or fear might be similarly associated with people's social orientation and the behavior it influences.
Practical implications – The paper offers insights into how marketers can influence consumers' choices of unique options and how marketers can assess consumers' social orientation by observing their emotional expressions.
Originality/value – To the best of the authors' knowledge, this research is the first to demonstrate a bidirectional causal relationship between individuals' self-construals and the extremity of their emotional expressions, and to demonstrate the association between chronic social orientation and the emotional expressions people spontaneously make in their daily lives.
15

Miyamoto, Yuko. "Young children's representational theory of mind in understanding masked facial expression." Japanese Journal of Psychology 69, no. 4 (1998): 271–78. http://dx.doi.org/10.4992/jjpsy.69.271.

16

Siritanawan, Prarinya, Kazunori Kotani, and Fan Chen. "Cumulative Differential Gabor Features for Facial Expression Classification." International Journal of Semantic Computing 09, no. 02 (June 2015): 193–213. http://dx.doi.org/10.1142/s1793351x15400036.

Abstract:
Emotions are written all over our faces, and facial expressions of emotion can potentially be read by computer vision and machine learning systems. According to evidence in cognitive science, the perception of facial dynamics is necessary for understanding the facial expression of human emotions. Our previous study proposed a temporal feature to model the levels of facial muscle activation. However, the quality of that feature suffers from various types of interference, such as translation, scaling, noise, blurriness, and varying illumination. To cope with these problems, we derive a novel feature descriptor by extending 2D Gabor features to time series data. This feature is called the Cumulative Differential Gabor (CDG) feature. We then use a discriminative subspace for estimating an emotion class. As a result, our method gains the advantages of using both spatial and frequency components. The experimental results show the performance of the method and its robustness to the underlying conditions.
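
A rough sketch of the idea behind a cumulative differential Gabor feature, as described above: Gabor filtering applied to successive frame differences and accumulated over the sequence. The kernel parameters and the plain summation step are illustrative assumptions, not the authors' exact CDG formulation.

```python
# Sketch of a cumulative differential Gabor (CDG) style feature: filter
# successive frame differences with a small Gabor bank and accumulate
# response magnitudes. Parameters and summation are assumptions.
import cv2
import numpy as np

def gabor_bank(ksize=21, sigma=4.0, lambd=10.0, gamma=0.5, n_orient=4):
    """Gabor kernels at n_orient evenly spaced orientations."""
    return [cv2.getGaborKernel((ksize, ksize), sigma,
                               np.pi * i / n_orient, lambd, gamma)
            for i in range(n_orient)]

def cumulative_differential_gabor(frames):
    """frames: list of same-shaped float32 grayscale images."""
    bank = gabor_bank()
    acc = np.zeros_like(frames[0], dtype=np.float32)
    for prev, cur in zip(frames, frames[1:]):
        diff = cur - prev                        # temporal difference
        for kernel in bank:                      # spatial Gabor response
            acc += np.abs(cv2.filter2D(diff, cv2.CV_32F, kernel))
    return acc.ravel()                           # flattened descriptor

frames = [np.random.rand(48, 48).astype(np.float32) for _ in range(5)]
print(cumulative_differential_gabor(frames).shape)  # (2304,)
```
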
17

Liebal, Kristin, Malinda Carpenter, and Michael Tomasello. "Young children's understanding of markedness in non-verbal communication." Journal of Child Language 38, no. 4 (March 8, 2011): 888–903. http://dx.doi.org/10.1017/s0305000910000383.

Abstract:
Speakers often anticipate how recipients will interpret their utterances. If they wish some other, less obvious interpretation, they may 'mark' their utterance (e.g., with special intonations or facial expressions). We investigated whether two- and three-year-olds recognize when adults mark a non-verbal communicative act – in this case a pointing gesture – as special, and so search for a not-so-obvious referent. We set up the context of cleaning up and then pointed to an object. Three-year-olds inferred that the adult intended the pointing gesture to indicate that object, and so cleaned it up. However, when the adult marked her pointing gesture (with exaggerated facial expression), they took the object's hidden contents or a hidden aspect of it as the intended referent. Two-year-olds' appreciation of such marking was less clear-cut. These results demonstrate that markedness is not just a linguistic phenomenon, but rather something concerning the pragmatics of intentional communication more generally.
18

Abdulwakil Auma, Muazu, Eric Manzi, and Jibril Aminu. "A Systematic Review of Methods of Emotion Recognition by Facial Expressions." International Journal of Advanced Research 9, no. 5 (May 31, 2021): 1141–52. http://dx.doi.org/10.21474/ijar01/12951.

Abstract:
Facial recognition is integral and essential in today's society, and the recognition of emotions based on facial expressions is becoming increasingly common. This paper provides an analytical overview of databases of facial expression video data and of several approaches to recognizing emotions from facial expressions, covering the three main image analysis stages: pre-processing, feature extraction, and classification. The paper presents approaches based on deep learning using deep neural networks, as well as traditional means of recognizing human emotions from visual facial features, and reports current results of some existing algorithms. In reviewing the scientific and technical literature, the focus was mainly on sources containing theoretical and research information on the methods under consideration, and on comparing traditional techniques against methods based on deep neural networks supported by experimental research. An analysis of the literature describing methods and algorithms for analyzing and recognizing facial expressions, together with worldwide research results, has shown that traditional methods of classifying facial expressions are inferior in speed and accuracy to artificial neural networks. This review's main contribution is a general understanding of modern approaches to facial expression recognition, which will allow new researchers to understand the main components and trends in the field. A comparison of research results has shown that combining traditional approaches with approaches based on deep neural networks yields better classification accuracy; however, the best classification methods are artificial neural networks.
19

Missaghi-Lakshman, Monica, and Cynthia Whissell. "Children's Understanding of Facial Expression of Emotion: II. Drawing of Emotion-Faces." Perceptual and Motor Skills 72, no. 3_suppl (June 1991): 1228–30. http://dx.doi.org/10.2466/pms.1991.72.3c.1228.

20

Zhang, Yongmian, and Qiang Ji. "Active and dynamic information fusion for facial expression understanding from image sequences." IEEE Transactions on Pattern Analysis and Machine Intelligence 27, no. 5 (May 2005): 699–714. http://dx.doi.org/10.1109/tpami.2005.93.

21

Zhang, Li, Chee Peng Lim, and Jungong Han. "Guest editorial: Automatic facial and bodily expression perception for human behaviour understanding." Multimedia Tools and Applications 78, no. 21 (August 10, 2019): 30331–34. http://dx.doi.org/10.1007/s11042-019-08055-5.

22

Zhan, Yongzhao, Jingfu Ye, Dejiao Niu, and Peng Cao. "Facial Expression Recognition Based on Gabor Wavelet Transformation and Elastic Templates Matching." International Journal of Image and Graphics 06, no. 01 (January 2006): 125–38. http://dx.doi.org/10.1142/s0219467806002112.

Abstract:
Facial expression recognition technology plays an important role in research areas such as psychological studies, image understanding, and virtual reality. To achieve subject-independent facial expression recognition and obtain robustness against illumination variation and image deformation, facial expression recognition methods based on Gabor wavelet transformation and elastic templates matching are presented in this paper. First, given a still image containing facial expression information, preprocessing steps including gray-level and scale normalization are executed. Secondly, Gabor wavelet filters are adopted to extract expression features. Then the elastic graph for the expression features is constructed. Finally, an elastic templates matching algorithm and a K-nearest neighbors classifier are used to recognize the facial expression. Experiments show that expression features can be extracted effectively by Gabor wavelet transformation, which is insensitive to illumination variation and individual differences, and that a high recognition rate can be obtained using the elastic templates matching algorithm, which is subject-independent.
23

Jeong, Dami, Byung-Gyu Kim, and Suh-Yeon Dong. "Deep Joint Spatiotemporal Network (DJSTN) for Efficient Facial Expression Recognition." Sensors 20, no. 7 (March 30, 2020): 1936. http://dx.doi.org/10.3390/s20071936.

Abstract:
Understanding a person's feelings is a very important process for affective computing. People express their emotions in various ways; among them, facial expression is the most effective way to present human emotional status. We propose efficient deep joint spatiotemporal features for facial expression recognition based on deep appearance and geometric neural networks. We apply three-dimensional (3D) convolution to extract spatial and temporal features at the same time. For the geometric network, 23 dominant facial landmarks are selected to express the movement of facial muscles through an analysis of the energy distribution of the whole set of facial landmarks. We combine these features with the designed joint fusion classifier so that they complement each other. From the experimental results, we verify recognition accuracies of 99.21%, 87.88%, and 91.83% for the CK+, MMI, and FERA datasets, respectively. Through comparative analysis, we show that the proposed scheme improves recognition accuracy by at least 4%.
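
The 3D convolution mentioned in the abstract extracts spatial and temporal features in a single operation. A minimal PyTorch sketch of such a spatiotemporal block follows; the channel sizes, the 16-frame clip length, and the six-class head are assumptions for illustration, not the paper's network.

```python
# Minimal spatiotemporal block using 3D convolution; channel sizes and
# the 16-frame clip length are illustrative assumptions.
import torch
import torch.nn as nn

block = nn.Sequential(
    nn.Conv3d(in_channels=1, out_channels=16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool3d(kernel_size=2),           # halves time, height, width
    nn.Conv3d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool3d(1),               # global spatiotemporal pooling
    nn.Flatten(),
    nn.Linear(32, 6),                      # e.g. six basic expressions
)

clip = torch.randn(2, 1, 16, 64, 64)       # (batch, ch, frames, H, W)
logits = block(clip)
print(logits.shape)                        # torch.Size([2, 6])
```
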
24

Bloom, Elana, and Nancy Heath. "Recognition, Expression, and Understanding Facial Expressions of Emotion in Adolescents With Nonverbal and General Learning Disabilities." Journal of Learning Disabilities 43, no. 2 (October 20, 2009): 180–92. http://dx.doi.org/10.1177/0022219409345014.

25

Hu, De Kun, An Sheng Ye, Li Li, and Li Zhang. "Recognition of Facial Expression via Kernel PCA Network." Applied Mechanics and Materials 631-632 (September 2014): 498–501. http://dx.doi.org/10.4028/www.scientific.net/amm.631-632.498.

Abstract:
In this work, a kernel principal component analysis network (KPCANet) is proposed for classifying facial expressions in unconstrained images. It comprises only very basic data processing components: cascaded kernel principal component analysis (KPCA), binary hashing, and block-wise histograms. In the proposed model, KPCA is employed to learn multistage filter banks, followed by simple binary hashing and block histograms for indexing and pooling. For comparison and better understanding, we tested these basic networks extensively on several benchmark visual datasets (the JAFFE database, the CMU AMP face expression database, and part of the Extended Cohn-Kanade (CK+) database). The results demonstrate the potential of the KPCANet to serve as a simple but highly competitive baseline for facial expression recognition.
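
The three stages named in the abstract (KPCA filter learning, binary hashing, block-wise histograms) can be caricatured in a few lines with scikit-learn. This is a schematic of the pipeline's shape on random stand-in patches, not the paper's KPCANet; the patch size, component count, and single-stage cascade are assumptions.

```python
# Schematic of KPCANet-style stages: KPCA projections, sign-based
# binary hashing, then a histogram of the binary codes. Patch handling
# is greatly simplified relative to a real multi-stage network.
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
patches = rng.normal(size=(500, 64))        # hypothetical 8x8 patches

kpca = KernelPCA(n_components=8, kernel="rbf")
proj = kpca.fit_transform(patches)          # stage 1: KPCA projections

bits = (proj > 0).astype(np.uint8)          # stage 2: binary hashing
codes = bits @ (1 << np.arange(8))          # pack 8 bits into one code

hist, _ = np.histogram(codes, bins=256, range=(0, 256))  # stage 3
print(hist.sum())                           # one pooled histogram (500)
```
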
26

Pioggia, Giovanni, Arti Ahluwalia, Federico Carpi, Andrea Marchetti, Marcello Ferro, Walter Rocchia, and Danilo De Rossi. "FACE: Facial Automaton for Conveying Emotions." Applied Bionics and Biomechanics 1, no. 2 (2004): 91–100. http://dx.doi.org/10.1155/2004/153078.

Abstract:
The human face is the main organ of expression, capable of transmitting emotions that are almost instantly recognised by fellow beings. In this paper, we describe the development of a lifelike facial display based on the principles of biomimetic engineering. A number of paradigms that can be used for developing believable emotional displays, borrowing from elements of anthropomorphic mechanics and control, and materials science, are outlined. These are used to lay down the technological and philosophical premises necessary to construct a man-machine interface for expressing emotions through a biomimetic mechanical head. Applications in therapy to enhance social skills and understanding emotion in people with autism are discussed.
27

Profyt, Linda, and Cynthia Whissell. "Children's Understanding of Facial Expression of Emotion: I. Voluntary Creation of Emotion-Faces." Perceptual and Motor Skills 73, no. 1 (August 1991): 199–202. http://dx.doi.org/10.2466/pms.1991.73.1.199.

Abstract:
Twenty-two children (ages 4 to 6 years) from a day-care service were asked to "make a face" that would show how they would feel in five situations representing the basic emotions of happiness, sadness, disgust, anger, and fear. Children decoded their own videotaped responses one week later, and they also decoded the same expressions presented by a child whom they did not know. Groups of day-care teachers and university students were employed to decode the children's facial responses. Recognizability of all emotions by all decoders improved linearly with the age of the child (a 9% gain per year). Happy and disgusted expressions were the most easily decoded.
28

Strand, Paul S., Andrew Downs, and Celestina Barbosa-Leiker. "Does facial expression recognition provide a toehold for the development of emotion understanding?" Developmental Psychology 52, no. 8 (August 2016): 1182–91. http://dx.doi.org/10.1037/dev0000144.

29

Sajjad, Muhammad, Sana Zahir, Amin Ullah, Zahid Akhtar, and Khan Muhammad. "Human Behavior Understanding in Big Multimedia Data Using CNN based Facial Expression Recognition." Mobile Networks and Applications 25, no. 4 (September 9, 2019): 1611–21. http://dx.doi.org/10.1007/s11036-019-01366-9.

30

Alsaggaf, Wafaa, Georgios Tsaramirsis, Norah Al-Malki, Fazal Qudus Khan, Miadah Almasry, Mohamad Abdulhalim Serafi, and Alaa Almarzuqi. "Association of Game Events with Facial Animations of Computer-Controlled Virtual Characters Based on Probabilistic Human Reaction Modeling." Applied Sciences 10, no. 16 (August 14, 2020): 5636. http://dx.doi.org/10.3390/app10165636.

Abstract:
Computer-controlled virtual characters are essential parts of most virtual environments, especially computer games. Interaction between these virtual agents and human players has a direct impact on the believability of, and immersion in, the application, and the facial animations of these characters are a key part of these interactions. The player expects the elements of the virtual world to act in a manner similar to the real world: in a board game, for example, if the human player wins, he or she would expect the computer-controlled character to be sad. However, the reactions, more specifically the facial expressions, of virtual characters in most games are not linked to the game events; instead, they have pre-programmed or random behaviors without any understanding of what is really happening in the game. In this paper, we propose a probabilistic decision model for virtual characters' facial expressions that determines when various facial animations should be played. The model was developed by studying the facial expressions of human players while they played a computer videogame that was also developed as part of this research. The model is represented as trees with 15 extracted game events as roots and 10 associated facial expression animations with their corresponding probabilities of occurrence. Results indicated that only 1 out of 15 game events had a probability of producing an unexpected facial expression. The "win, lose, tie" game events were found to have more dominant associations with facial expressions than the rest of the game events, followed by "surprise" game events that occurred rarely, and finally the "damage dealing" events.
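
A decision model of the kind described, mapping game events to probability distributions over facial animations, can be sketched as a weighted lookup table. The event names, animation names, and probabilities below are hypothetical, not values from the paper's trees.

```python
# Toy event-to-animation probabilistic decision model; event names,
# animation names, and probabilities are hypothetical placeholders.
import random

ANIMATION_PROBS = {
    "player_wins":  [("sad", 0.6), ("angry", 0.3), ("neutral", 0.1)],
    "player_loses": [("happy", 0.7), ("smug", 0.2), ("neutral", 0.1)],
    "tie":          [("neutral", 0.5), ("surprised", 0.5)],
}

def pick_animation(event: str) -> str:
    """Sample a facial animation according to the event's distribution."""
    animations, weights = zip(*ANIMATION_PROBS[event])
    return random.choices(animations, weights=weights, k=1)[0]

print(pick_animation("player_wins"))
```
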
31

Said, Christopher P., James V. Haxby, and Alexander Todorov. "Brain systems for assessing the affective value of faces." Philosophical Transactions of the Royal Society B: Biological Sciences 366, no. 1571 (June 12, 2011): 1660–70. http://dx.doi.org/10.1098/rstb.2010.0351.

Abstract:
Cognitive neuroscience research on facial expression recognition and face evaluation has proliferated over the past 15 years. Nevertheless, large questions remain unanswered. In this overview, we discuss the current understanding in the field, and describe what is known and what remains unknown. In §2, we describe three types of behavioural evidence that the perception of traits in neutral faces is related to the perception of facial expressions, and may rely on the same mechanisms. In §3, we discuss cortical systems for the perception of facial expressions, and argue for a partial segregation of function in the superior temporal sulcus and the fusiform gyrus. In §4, we describe the current understanding of how the brain responds to emotionally neutral faces. To resolve some of the inconsistencies in the literature, we perform a large group analysis across three different studies, and argue that one parsimonious explanation of prior findings is that faces are coded in terms of their typicality. In §5, we discuss how these two lines of research—perception of emotional expressions and face evaluation—could be integrated into a common, cognitive neuroscience framework.
32

McCulloch, Victoria. "How do we make facial expressions? A study into how modelling can aid in the public's understanding of the muscles of facial expression." Translational Research in Anatomy 19 (June 2020): 100068. http://dx.doi.org/10.1016/j.tria.2020.100068.

33

ASHWIN, CHRIS, SALLY WHEELWRIGHT, and SIMON BARON-COHEN. "Attention bias to faces in Asperger Syndrome: a pictorial emotion Stroop study." Psychological Medicine 36, no. 6 (March 2, 2006): 835–43. http://dx.doi.org/10.1017/s0033291706007203.

Abstract:
Background. Emotional Stroop tasks have shown attention biases of clinical populations towards stimuli related to their condition. Asperger Syndrome (AS) is a neuropsychiatric condition with social and communication deficits, repetitive behaviours and narrow interests. Social deficits are particularly striking, including difficulties in understanding others.
Method. We investigated colour-naming latencies of adults with and without AS to name colours of pictures containing angry facial expressions, neutral expressions or non-social objects. We tested three hypotheses: whether (1) controls show longer colour-naming latencies for angry versus neutral facial expressions with male actors, (2) people with AS show differential latencies across picture types, and (3) differential response latencies persist when photographs contain females.
Results. Controls had longer latencies to pictures of male faces with angry compared to neutral expressions. The AS group did not show longer latencies to angry versus neutral expressions in male faces, instead showing slower latencies to pictures containing any facial expression compared to objects. When pictures contained females, controls no longer showed longer latencies for angry versus neutral expressions. However, the AS group still showed longer latencies to all facial picture types, compared to objects, providing further evidence that faces produce interference effects for this clinical group.
Conclusions. The pictorial emotional Stroop paradigm reveals normal attention biases towards threatening emotional faces. The AS group showed Stroop interference effects to all facial stimuli regardless of expression or sex, suggesting that faces cause disproportionate interference in AS.
34

Leo, Marco, Pierluigi Carcagnì, Cosimo Distante, Paolo Spagnolo, Pier Mazzeo, Anna Rosato, Serena Petrocchi, et al. "Computational Assessment of Facial Expression Production in ASD Children." Sensors 18, no. 11 (November 16, 2018): 3993. http://dx.doi.org/10.3390/s18113993.

Abstract:
In this paper, a computational approach is proposed and put into practice to assess the capability of children diagnosed with Autism Spectrum Disorder (ASD) to produce facial expressions. The proposed approach is based on computer vision components working on sequences of images acquired by an off-the-shelf camera in unconstrained conditions. Action unit intensities are estimated by analyzing local appearance, and then both temporal and geometrical relationships, learned by Convolutional Neural Networks, are exploited to regularize the gathered estimates. To cope with stereotyped movements and to highlight even subtle voluntary movements of facial muscles, a personalized and contextual statistical model of the non-emotional face is formulated and used as a reference. Experimental results demonstrate how the proposed pipeline can improve the analysis of facial expressions produced by ASD children. A comparison of the system's outputs with evaluations performed by psychologists on the same group of ASD children makes evident how this quantitative analysis of children's abilities helps to go beyond traditional qualitative ASD assessment and diagnosis protocols, whose outcomes are affected by human limitations in observing and understanding multi-cue behaviors such as facial expressions.
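
The personalized statistical model of the non-emotional face used as a reference suggests normalizing action-unit intensities against each child's own neutral baseline. A minimal NumPy sketch under that assumption follows; the AU estimates and the plain z-scoring are illustrative, not the paper's exact modeling.

```python
# Sketch of a personalized baseline: z-score a frame's action-unit (AU)
# intensities against the child's own neutral-face statistics. The AU
# values and plain z-score normalization are assumptions.
import numpy as np

neutral_frames = np.random.rand(200, 17)   # 200 neutral frames, 17 AUs
mu = neutral_frames.mean(axis=0)
sigma = neutral_frames.std(axis=0) + 1e-8  # avoid division by zero

def au_deviation(frame_aus: np.ndarray) -> np.ndarray:
    """How far each AU departs from this child's neutral face."""
    return (frame_aus - mu) / sigma

print(au_deviation(np.random.rand(17)).round(2))
```
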
35

Susanto, Ferri. "An Educational Perspective is As An Analysis Method of Facial Expressions on Joking Internet at the Social Media." International Journal of Advances in Social and Economics 1, no. 1 (July 10, 2019): 28. http://dx.doi.org/10.33122/ijase.v1i1.37.

Abstract:
The objective of this research was to analyze the facial expressions used in internet jokes on social media, including "Derp Face", "Derpina Face", "Troll Face", "Fuuuu Face", "Forever Alone", "LOL Face", "Me Gusta Face", "Okay Face", and "Poker Face"; the research was limited to data from February 2017. The design of this research was descriptive: the method was used to describe the facial expressions by analyzing, interpreting, and drawing conclusions. After analyzing the data, the researcher concluded that the facial expressions indicate the following: 1) "Derp Face", a neutral expression; 2) "Derpina Face", a neutral expression; 3) "Troll Face", gladness; 4) "Fuuuu Face", anger; 5) "Forever Alone", sadness and loneliness; 6) "LOL Face", gladness; 7) "Me Gusta", liking; 8) "Okay Face", sadness; 9) "Poker Face", no specific emotion. Finally, the researcher suggested that the study of semiotics can provide understanding and knowledge about signs. The results of this research can serve as a reference for how facial expressions can be analyzed through the eyebrows, forehead, eyes, nose, cheeks, and skin, and as a reference for future researchers.
36

Dethier, Marie, Sylvie Blairy, Hannah Rosenberg, and Skye McDonald. "Emotional Regulation Impairments Following Severe Traumatic Brain Injury: An Investigation of the Body and Facial Feedback Effects." Journal of the International Neuropsychological Society 19, no. 4 (January 28, 2013): 367–79. http://dx.doi.org/10.1017/s1355617712001555.

Abstract:
The object of this study was to evaluate the combined effect of body and facial feedback in adults who had suffered a severe traumatic brain injury (TBI), to gain some understanding of their difficulties in the regulation of negative emotions. Twenty-four participants with TBI and 28 control participants adopted facial expressions and body postures according to specific instructions and maintained these positions for 10 s. Expressions and postures entailed anger, sadness, and happiness as well as a neutral (baseline) condition. After each expression/posture manipulation, participants evaluated their subjective emotional state (including cheerfulness, sadness, and irritation). TBI participants were globally less responsive to the effects of body and facial feedback than control participants, F(1,50) = 5.89, p = .02, η² = .11. More interestingly, the TBI group differed from the control group across emotions, F(8,400) = 2.51, p = .01, η² = .05. Specifically, participants with TBI were responsive to happy but not to negative expression/posture manipulations, whereas control participants were responsive to happy, angry, and sad expression/posture manipulations. In conclusion, TBI appears to impair the ability to recognize both the physical configuration of a negative emotion and its associated subjective feeling.
37

Buluk, Katarzyna, and Celina Timoszyk-Tomczak. "„Co wyraża twarz?” – rozpoznawanie ekspresji emocjonalnej twarzy przez osoby głuche i słyszące." Psychologia Rozwojowa 25, no. 4 (2020): 101–10. http://dx.doi.org/10.4467/20843879pr.20.030.13438.

Abstract:
"What does the Face Express?" – Recognition of Emotional Facial Expressions in Deaf and Hearing People. An analysis of the emotional functioning of deaf people is important for understanding their activities in different areas of life. Emotional functioning is related to emotional intelligence, which involves emotion perception and recognition as well as emotional expressiveness. The aim of the study was to compare the ability to recognize facial emotional expressions among deaf and hearing people. The study was conducted on 80 individuals (40 deaf and 40 hearing). The Emotional Intelligence Scale – Faces (Matczak, Piekarska, Studniarek, 2005) and a set of photographs used by Paul Ekman in his study of basic emotions were used for data collection. The results show that deaf people differ from hearing people in recognizing facial expressions. The analysis was conducted in terms of differences in the recognition of expressions of basic and complex emotions. The study included variables such as the moment of hearing loss (congenital or acquired deafness) and upbringing with deaf or hearing parents.
38

Rahadika, Fadhil Yusuf, Novanto Yudistira, and Yuita Arum Sari. "Facial Expression Recognition using Residual Convnet with Image Augmentations." Jurnal Ilmu Komputer dan Informasi 14, no. 2 (July 4, 2021): 127–35. http://dx.doi.org/10.21609/jiki.v14i2.968.

Abstract:
During the COVID-19 pandemic, many offline activities have been turned into online activities via video meetings to prevent the spread of the COVID-19 virus. In online video meetings, some micro-interactions are missing compared to direct social interaction. The use of machines to assist facial expression recognition in online video meetings is expected to increase understanding of the interactions among users. Many studies have shown that CNN-based neural networks are quite effective and accurate in image classification. In this study, several open facial expression datasets were used to train CNN-based neural networks, with a total of 342,497 training images. This study obtained its best results using a ResNet-50 architecture with the Mish activation function and an Accuracy Booster Plus block. The architecture was trained using the Ranger optimizer and Gradient Centralization for 60,000 steps with a batch size of 256. The best training run achieved accuracies of 0.5972 on the AffectNet validation data, 0.8636 on the FERPlus validation data, 0.8488 on the FERPlus test data, and 0.8879 on the RAF-DB test data. The proposed method outperformed plain ResNet in all test scenarios without transfer learning, and there is potential for better performance with a pre-trained model. The code is available at https://github.com/yusufrahadika-facial-expressions-essay.
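
Of the components listed, the Mish activation has a compact closed form, mish(x) = x · tanh(softplus(x)). A standalone PyTorch sketch follows; recent PyTorch versions also ship it directly as nn.Mish.

```python
# Mish activation, mish(x) = x * tanh(softplus(x)), the nonlinearity
# named in this abstract; defined here from scratch as a sketch.
import torch
import torch.nn.functional as F

def mish(x: torch.Tensor) -> torch.Tensor:
    return x * torch.tanh(F.softplus(x))

x = torch.linspace(-3, 3, 7)
print(mish(x))   # smooth, non-monotonic near zero, ~x for large x
```
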
39

Wolf, Kirsten. "Somatic Semiotics: Emotion and the Human Face in the Sagas and Þættir of Icelanders." Traditio 69 (2014): 125–45. http://dx.doi.org/10.1017/s0362152900001938.

Abstract:
The human face has the capacity to generate expressions associated with a wide range of affective states. Despite the fact that there are few words to describe human facial behaviors, the facial muscles allow for more than a thousand different facial appearances. Some examples of feelings that can be expressed are anger, concentration, contempt, excitement, nervousness, and surprise. Regardless of culture or language, the same expressions are associated with the same emotions and vary only in intensity. Using modern psychological analyses as a point of departure, this essay examines descriptions of human facial expressions as well as such bodily "symptoms" as flushing, turning pale, and weeping in Old Norse-Icelandic literature. The aim is to analyze the manner in which facial signs are used as a means of non-verbal communication to convey the impression of an individual's internal state to observers. More specifically, this essay seeks to determine when and why characters in these works are described as expressing particular facial emotions and, especially, the range of emotions expressed. The Sagas and þættir of Icelanders are in the forefront of the analysis and yield well over one hundred references to human facial expression and color. The examples show that through gaze, smiling, weeping, brows that are raised or knitted, and coloration, the Sagas and þættir of Icelanders tell of happiness or amusement, pleasant and unpleasant surprise, fear, anger, rage, sadness, interest, concern, and even mixed emotions for which language has no words. The Sagas and þættir of Icelanders may be reticent in talking about emotions and poor in emotional vocabulary, but this poverty is compensated for by making facial expressions signifiers of emotion. This essay makes clear that the works are less emotionally barren than often supposed. It also shows that our understanding of Old Norse-Icelandic "somatic semiotics" may well depend on the universality of facial expressions and that culture-specific "display rules" or "elicitors" are virtually nonexistent.
40

Scherer, Klaus R., Marcello Mortillaro, and Marc Mehu. "Understanding the Mechanisms Underlying the Production of Facial Expression of Emotion: A Componential Perspective." Emotion Review 5, no. 1 (January 2013): 47–53. http://dx.doi.org/10.1177/1754073912451504.

41

Levin, Brooke. "Portraiture and social understanding." Advances in Autism 1, no. 1 (July 30, 2015): 30–40. http://dx.doi.org/10.1108/aia-05-2015-0004.

Abstract:
Purpose – The purpose of this paper is to discuss possible explanations for the deficits in social understanding evident in individuals with autism spectrum disorder (ASD). A potential intervention technique is proposed that has not yet been examined in this population: viewing and drawing portraits. This portraiture-based intervention seeks to address some of the core issues set forth in each of the theories explaining impaired social functioning. Furthermore, the intervention is intended specifically to increase exposure to facial stimuli in a safe and controlled environment. Instruction in how to look closely at a social partner's face and how to glean salient emotional information from the facial expression displayed can be developed through a focused exploration of drawing and viewing portraits. Current techniques such as eye tracking and fMRI are discussed in the context of this proposed intervention.
Design/methodology/approach – This paper reviews existing research on ASD and presents a new proposal for an intervention using portraiture. It first discusses existing interventions and reviews current research about potential causes of, and areas of deficiency in, individuals on the spectrum; it then proposes a new type of intervention and discusses the reasons underpinning its potential success in the context of existing research.
Findings – This was a proposed study, so no empirical findings are reported; however, observations of individuals on the spectrum engaging with artwork are discussed.
Originality/value – No other research or study in the current literature relates specifically to the use of portraits (looking at and creating them) to help individuals with ASD.
42

Kurtić, Azra, and Nurka Pranjić. "Facial expression recognition accuracy of valence emotion among high and low indicated PTSD." Primenjena psihologija 4, no. 1 (March 9, 2011): 5–11. http://dx.doi.org/10.19090/pp.2011.1.5-11.

Abstract:
Introduction: The emotional experience of a stressful event can manifest as an inability to initiate and maintain social contact, difficulty coping with stress, and sometimes distorted cognitive outages. Aim: To test the hypothesis that facially expressed emotions are a useful monitor in practice, as a mediator for understanding the nature of the emotional difficulties that traumatized individuals face. The primary task was to assess whether psychologically traumatized individuals differ in facial recognition accuracy; the secondary task was to compare accuracy for positive versus negative emotions between the two studied groups. Subjects and methods: Forty-two participants were divided into two groups based on their self-assessed PTSD symptoms, scored with the DSM-IV Harvard Trauma Questionnaire, Bosnia and Herzegovina version (an experimental group with highly indicated PTSD and a control group without moderate PTSD). Accuracy of recognition of seven facially expressed emotions was investigated. Results: The authors report significantly lower (p < .05) recognition accuracy in the experimental group for all studied emotions, with the exception of sadness. Recognition of negative emotions was also more accurate (p < .05). These findings suggest that emotional stress leads to less accurate recognition of facially expressed emotions, especially emotions of positive valence.
43

Canedo, Daniel, and António J. R. Neves. "Facial Expression Recognition Using Computer Vision: A Systematic Review." Applied Sciences 9, no. 21 (November 2, 2019): 4678. http://dx.doi.org/10.3390/app9214678.

Abstract:
Emotion recognition has attracted major attention in numerous fields because of its relevant applications in the contemporary world: marketing, psychology, surveillance, and entertainment are some examples. It is possible to recognize an emotion in several ways; this paper, however, focuses on facial expressions, presenting a systematic review of the matter. In total, 112 papers published in ACM, IEEE, BASE and Springer between January 2006 and April 2019 on this topic were extensively reviewed. The most used methods and algorithms are first introduced and summarized for better understanding, including face detection, smoothing, Principal Component Analysis (PCA), Local Binary Patterns (LBP), Optical Flow (OF), and Gabor filters, among others. The review identified a clear difficulty in translating the high facial expression recognition (FER) accuracy achieved in controlled environments to uncontrolled and pose-variant environments. Future efforts in the FER field should be directed toward multimodal systems that are robust enough to face the adversities of real-world scenarios. A thorough analysis of the research done on FER in Computer Vision, based on the selected papers, is presented. This review aims not only to become a reference for future research on emotion recognition, but also to provide an overview of the work done on this topic for potential readers.
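
Among the classical methods this review lists, Local Binary Patterns are easy to demonstrate. A scikit-image sketch follows; the parameters (P = 8, R = 1, uniform patterns) are conventional choices, not values taken from any particular reviewed paper.

```python
# Uniform Local Binary Pattern histogram, one of the classical FER
# features the review lists; P=8, R=1 are conventional parameters.
import numpy as np
from skimage.feature import local_binary_pattern

face = np.random.rand(48, 48)              # stand-in grayscale face
lbp = local_binary_pattern(face, P=8, R=1, method="uniform")
# "uniform" yields integer codes 0..9 for P=8, so 10 histogram bins.
hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
print(hist.round(3))                       # the face's LBP descriptor
```
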
44

Othman, Ehsan, Frerk Saxen, Dmitri Bershadskyy, Philipp Werner, Ayoub Al-Hamadi, and Joachim Weimann. "Predicting Group Contribution Behaviour in a Public Goods Game from Face-to-Face Communication." Sensors 19, no. 12 (June 21, 2019): 2786. http://dx.doi.org/10.3390/s19122786.

Abstract:
Experimental economics laboratories run many studies to test theoretical predictions against actual human behaviour, including public goods games. In this kind of experiment, participants in a group have the option to invest money in a public account or to keep it; all the invested money is multiplied and then evenly distributed. This structure incentivizes free riding, resulting in contributions to the public good declining over time. Face-to-face communication (FFC) diminishes free riding and thus positively affects contribution behaviour, but how it does so has remained mostly unknown. In this paper, we investigate two communication channels, aiming to explain what promotes cooperation and discourages free riding. First, the facial expressions of the group in the 3-minute FFC videos are automatically analysed to predict the group's behaviour towards the end of the game. The proposed automatic facial expression analysis approach uses a new group activity descriptor and utilises random forest classification. Second, the contents of the FFC are investigated by categorising strategy-relevant topics and using meta-data. The results show that it is possible to predict whether the group will fully contribute until the end of the game based on facial expression data from three minutes of FFC, though deeper understanding requires a larger dataset. Facial expression analysis and content analysis found that FFC, and talking until the very end, had a significant positive effect on contributions.
45

Arar, Khalid. "Emotional expression at different managerial career stages." Educational Management Administration & Leadership 45, no. 6 (July 18, 2016): 929–43. http://dx.doi.org/10.1177/1741143216636114.

Full text
Abstract:
This paper examines emotional expression experienced by female principals in the Arab school system in Israel over their managerial careers – role-related emotions that they choose to express or repress before others. I employed narrative methodology, interviewing nine female principals from the Arab school system to investigate expression of emotions in professional life stories that they narrated. Findings indicate that the principals’ emotional expressions differ according to career stage; on induction into principalship, they are stressed, feel threatened, distressed and challenged. As they establish themselves in their role they are calmer, use more humour and more ‘correct’ facial expressions. At a more advanced career stage, they express empathy and compassion, and concern for the maintenance of educational achievements. Understanding principals’ emotional expression at different career stages contributes to the quality of principal-teacher relations in the school.
APA, Harvard, Vancouver, ISO, and other styles
46

Mustapha, Roslinda, Md Azman Shahadan, and Hazalizah Hamzah. "The Effect of Trait Anxiety on Recognition of Threatening Emotional Facial Expressions: A Study among High School Students." International Journal of Social Sciences and Humanities Invention 6, no. 2 (February 28, 2019): 5312–18. http://dx.doi.org/10.18535/ijsshi/v6i2.07.

Full text
Abstract:
Previous studies indicated that sensitivity to facial expressions of threat is related to anxiety in children, adolescents and adults. A small amount of anxiety often improves students' performance, but a high level of anxiety can interfere with the learning process. Feeling threatened by particular stimuli can lead students to perceive many daily situations as threatening, resulting in more frequent experiences of fear of what may happen, especially among highly anxious students. This research explores the threat perception that secondary school students may form in relation to negative facial expressions and examines their sensitivity to anger expressions as threatening stimuli. Forty-nine students (25 low anxiety, 24 high anxiety) aged between 16 and 18 years were recruited to answer a set of anxiety questionnaires and to identify facial expressions in images posed in two and three dimensions; these images were transformed into five levels of anger using FaceGen Modeller 3.5. The results demonstrated that high-anxiety students identified threat stimuli from faces more accurately and faster than low-anxiety students. This suggests that angry faces may be perceived as particularly threatening by students and may play a significant role in their emotional well-being. It is hoped that this research will increase our understanding of the relationship between anxiety and threat perception and that this unique visual stimulus will generate a wealth of further research in Malaysia.
APA, Harvard, Vancouver, ISO, and other styles
47

Domaneschi, Filippo, Marcello Passarelli, and Luca Andrighetto. "Performing Orders: Speech Acts, Facial Expressions and Gender Bias." Journal of Cognition and Culture 18, no. 3-4 (August 13, 2018): 343–57. http://dx.doi.org/10.1163/15685373-12340034.

Full text
Abstract:
The business of a sentence is not only to describe some state of affairs but also to perform other kinds of speech acts, such as ordering, suggesting, and asking. Understanding the kind of action performed by a speaker who utters a sentence is a multimodal process that involves computing verbal and non-verbal information. This work investigates whether the understanding of a speech act is affected by the gender of the actor who produces the utterance in combination with a certain facial expression. The experimental data collected show that, compared to men, women are less likely to be perceived as performers of orders and more likely to be perceived as performers of questions. This result reveals a gender bias reflecting a process of women's subordination, according to which women are rarely considered to hold the hierarchical social position required for the correct execution of an order.
APA, Harvard, Vancouver, ISO, and other styles
48

Trinkler, Iris, Laurent Cleret de Langavant, and Anne-Catherine Bachoud-Lévi. "Joint recognition–expression impairment of facial emotions in Huntington's disease despite intact understanding of feelings." Cortex 49, no. 2 (February 2013): 549–58. http://dx.doi.org/10.1016/j.cortex.2011.12.003.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Kim, Chang-Min, Ellen J. Hong, Kyungyong Chung, and Roy C. Park. "Driver Facial Expression Analysis Using LFA-CRNN-Based Feature Extraction for Health-Risk Decisions." Applied Sciences 10, no. 8 (April 24, 2020): 2956. http://dx.doi.org/10.3390/app10082956.

Full text
Abstract:
As people communicate with each other, they use gestures and facial expressions to convey and understand emotional states. Non-verbal means of communication are essential to understanding a person's emotional state from external cues. Recently, active studies have been conducted on lifecare services that analyse users' facial expressions; yet such services are currently provided only in health care centers or certain medical institutions, rather than as part of everyday life. Studies are needed to prevent accidents that occur suddenly in everyday life and to cope with emergencies. Thus, we propose facial expression analysis using line-segment feature analysis-convolutional recurrent neural network (LFA-CRNN) feature extraction for health-risk assessments of drivers. The purpose of such an analysis is to manage and monitor patients with chronic diseases, whose numbers are rapidly increasing. To prevent automobile accidents and to respond to emergency situations arising from acute diseases, we propose a service that monitors a driver's facial expressions to assess health risks and alert the driver to risk-related matters while driving. To identify health risks, deep learning technology is used to recognize expressions of pain and to determine whether a person is in pain while driving. Since the amount of input-image data is large, accurately analysing facial expressions in real time with limited resources is difficult. Accordingly, a line-segment feature analysis algorithm is proposed to reduce the amount of data, and the LFA-CRNN model was designed for this purpose. Through this model, the severity of a driver's pain is classified into one of nine types. The LFA-CRNN model consists of one convolution layer whose output is reshaped and delivered to two bidirectional gated recurrent unit layers; finally, the data are classified through softmax. In addition, to evaluate LFA-CRNN, its performance was compared with the CRNN and AlexNet models using the University of Northern British Columbia and McMaster University (UNBC-McMaster) database.
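To illustrate the architecture outline given in the abstract (one convolution layer, a reshape, two bidirectional GRU layers, and a nine-class softmax), here is a minimal Keras sketch. Input size, filter counts, and the line-segment feature analysis preprocessing itself are assumptions; this is not the authors' implementation.

    # Sketch of a CRNN matching the abstract's outline; all sizes are assumed.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    def build_crnn(input_shape=(64, 64, 1), n_classes=9):
        inp = layers.Input(shape=input_shape)
        x = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
        x = layers.MaxPooling2D(2)(x)             # (32, 32, 32)
        x = layers.Reshape((32, 32 * 32))(x)      # treat rows as a sequence
        x = layers.Bidirectional(layers.GRU(64, return_sequences=True))(x)
        x = layers.Bidirectional(layers.GRU(64))(x)
        out = layers.Dense(n_classes, activation="softmax")(x)  # 9 pain levels
        return models.Model(inp, out)

    model = build_crnn()
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])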
APA, Harvard, Vancouver, ISO, and other styles
50

Keutmann, M. K., R. E. Gur, and R. C. Gur. "Understanding emotion processing in schizophrenia." Die Psychiatrie 07, no. 04 (October 2010): 217–26. http://dx.doi.org/10.1055/s-0038-1669583.

Full text
Abstract:
Summary: Impaired emotional functioning is a prominent feature of schizophrenia. Although positive symptoms have traditionally attracted more attention and targeted treatment, negative symptoms, including flat affect, are increasingly recognized as the more debilitating and resistant to intervention. We describe studies examining affect processing in schizophrenia, focusing on facial affect with initial findings in vocal affect, or prosody. Deficits in schizophrenia are pronounced, and studies with functional neuroimaging indicate that the neural substrates for these deficits center on the amygdala and its projections. The abnormalities are highly correlated with symptom severity and functional outcome. While there is quite extensive work on affect recognition abnormalities, deficits have also been documented in the ability to express affect on the face and in voice, and perhaps to a lesser extent in the experience of emotion. These abnormalities can be better studied when methods for quantitative analysis of emotional expression are available. Recognizing the existence of such deficits and their neural substrates will lead to improved approaches to pharmacological and behavioral treatment.
APA, Harvard, Vancouver, ISO, and other styles