
Journal articles on the topic 'Facial-expression communication'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Facial-expression communication.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Yoshitomi, Yasunari. "Human-Computer Communication Using Facial Expression." Proceedings of International Conference on Artificial Life and Robotics 26 (January 21, 2021): 275–78. http://dx.doi.org/10.5954/icarob.2021.ps-2.

2

Ju, Wang, Ding Rui, and Chun Yan Nie. "Research on the Facial Expression Feature Extraction of Facial Expression Recognition Based on MATLAB." Advanced Materials Research 1049-1050 (October 2014): 1522–25. http://dx.doi.org/10.4028/www.scientific.net/amr.1049-1050.1522.

Abstract:
In today's highly developed age of information, communication is an essential part of interpersonal interaction. As a carrier of information, facial expression is rich in human behavior information. Facial expression recognition combines many fields and is also a comparatively new topic in pattern recognition. This paper studies facial feature extraction based on MATLAB: using MATLAB software, expression features are extracted from a large number of facial images so that different facial expressions can be classified more accurately.
3

Park, Hyun-Shin. "A Study on Nonverbal Communication for Effective Sermon Delivery Focusing on Facial Expression and Eye Communication." Gospel and Praxis 50 (February 20, 2019): 69–99. http://dx.doi.org/10.25309/kept.2019.2.20.069.

4

Wang, Ju, Rui Ding, and Mei Hong. "Facial Expression Recognition Based on MATLAB." Applied Mechanics and Materials 543-547 (March 2014): 2188–91. http://dx.doi.org/10.4028/www.scientific.net/amm.543-547.2188.

Abstract:
In social life, communication between people is essential to almost everything we do. Facial expressions are rich in human behavior information and are a very important means of communication; as a carrier of information, expression can convey much that the voice cannot. Expression recognition is a newer task within the field of pattern recognition and an essential part of intelligent machines. This paper studies discrete wavelet transform feature extraction for facial expressions, using MATLAB software for image feature extraction and processing, and tests expression recognition with an elastic template matching algorithm.
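The abstract gives few implementation details; as a rough, hedged illustration of the discrete-wavelet-transform feature extraction step it describes (sketched in Python with PyWavelets rather than MATLAB; the wavelet choice and level count are assumptions), the low-frequency approximation band of a repeated 2-D DWT can serve as a compact feature vector:

```python
# Sketch: DWT feature extraction for a grayscale face crop.
import numpy as np
import pywt

def dwt_features(face: np.ndarray, wavelet: str = "haar", levels: int = 2) -> np.ndarray:
    coeffs = face.astype(np.float64)
    for _ in range(levels):
        coeffs, _ = pywt.dwt2(coeffs, wavelet)   # keep approximation, drop detail bands
    return coeffs.ravel()

face = np.random.rand(64, 64)                     # stand-in for a face crop
print(dwt_features(face).shape)                   # (256,) after two Haar levels
```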
5

Sumeisey, Vivian Savenia, Rahmadsyah Rangkuti, and Rohani Ganie. "NON-VERBAL COMMUNICATION OF THE SIMPSONS MEMES IN “MEMES.COM” INSTAGRAM." Language Literacy: Journal of Linguistics, Literature, and Language Teaching 3, no. 1 (July 5, 2019): 83–88. http://dx.doi.org/10.30743/ll.v3i1.992.

Abstract:
The research aims to identify nonverbal communication, especially the kinesics aspect, in the Simpsons memes on the “memes.com” Instagram account. The nonverbal communication in the Simpsons memes conveys the meme users’ emotions, feelings and messages through expressive actions. By analyzing the nonverbal communication, meme users are able to understand the meaning of a meme and meme readers are able to understand what the meme senders try to communicate. The research was conducted by means of qualitative descriptive analysis. The data of the research were the Simpsons memes and the source of data was the “memes.com” Instagram account. The data collection was qualitative audio and visual material because the data are pictures. The sample of the research was fourteen Simpsons memes. Facial expression, posture and gesture are the kinesics aspects found in the memes. The results of the research were that one meme showed posture and gesture, two memes showed facial expression and gesture, three memes showed facial expression and posture, three memes showed only posture, and five memes showed the character’s facial expression in conveying the message.
6

Taee, Elaf J. Al, and Qasim Mohammed Jasim. "Blurred Facial Expression Recognition System by Using Convolution Neural Network." Webology 17, no. 2 (December 21, 2020): 804–16. http://dx.doi.org/10.14704/web/v17i2/web17068.

Abstract:
A facial expression is a visual impression of a person's situation, emotions, cognitive activity, personality, intention and psychopathology, and it plays an active and vital role in the exchange of information and communication between people. In machines and robots dedicated to communicating with humans, facial expression recognition plays an important and vital role in reading what a person implies, especially in the field of health, so research in this field advances human-robot communication. The topic has been discussed extensively, and progress in deep learning, together with the proven efficiency of convolutional neural networks (CNNs) in image processing, has led to the use of CNNs for recognizing facial expressions. An automatic facial expression recognition (FER) system must detect and locate faces in a cluttered scene, extract features, and classify them. In this research, a CNN performs the FER process: the target is to label each facial image with one of the seven emotion categories of the JAFFE database (sad, happy, fear, surprise, anger, disgust, and neutral). We trained CNNs of different depths using gray-scale images from the JAFFE database. The accuracy of the proposed system was 100%.
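The paper's exact architecture is not given in the abstract; the following is a minimal sketch of a small convolutional network for the seven JAFFE emotion categories, written in PyTorch, with an invented layer layout:

```python
import torch
import torch.nn as nn

class SmallFERNet(nn.Module):
    def __init__(self, num_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(4),        # fixed-size output for any input
        )
        self.classifier = nn.Linear(64 * 4 * 4, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = SmallFERNet()
logits = model(torch.randn(8, 1, 48, 48))   # a batch of 8 grayscale face crops
print(logits.shape)                          # torch.Size([8, 7])
```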
7

Shang, Yuyi, Mie Sato, and Masao Kasuga. "An Interactive System with Facial Expression Recognition." Journal of Advanced Computational Intelligence and Intelligent Informatics 9, no. 6 (November 20, 2005): 637–42. http://dx.doi.org/10.20965/jaciii.2005.p0637.

Abstract:
To make communication between users and machines more comfortable, we focus on facial expressions and automatically classify them into 4 expression candidates: “joy,” “anger,” “sadness,” and “surprise.” The classification uses features that correspond to expression-motion patterns, and then voice data is output based on classification results. When we output voice data, insufficiency in classification is taken into account. We choose the first and second expression candidates from classification results. To realize interactive communication between users and machines, information on these candidates is used when we access a voice database. The voice database contains voice data corresponding to emotions.
8

Wang, Jianmin, Yuxi Wang, Yujia Liu, Tianyang Yue, Chengji Wang, Weiguang Yang, Preben Hansen, and Fang You. "Experimental Study on Abstract Expression of Human-Robot Emotional Communication." Symmetry 13, no. 9 (September 14, 2021): 1693. http://dx.doi.org/10.3390/sym13091693.

Abstract:
With the continuous development of intelligent product interaction technology, the facial expression design of virtual images on the interactive interfaces of intelligent products has become an important research topic. Based on current research on the facial expression design of existing intelligent products, we symmetrically mapped PAD (pleasure–arousal–dominance) emotion values to the image design, explored the characteristics of abstract expressions and the principles of expression design, and evaluated them experimentally. In this study, a PAD-scoring experiment was conducted on the abstract expression designs, and the results were analyzed to iterate the designs. The experimental results show that PAD values can effectively guide designers in expression design. Meanwhile, the efficiency and recognition accuracy of human communication with abstract expression designs can be improved by facial auxiliary elements and eyebrows.
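As a purely hypothetical sketch of how a PAD triple might be mapped onto abstract-face parameters (the mapping, parameter names, and coefficients below are invented for illustration and are not the authors' design):

```python
from dataclasses import dataclass

@dataclass
class AbstractFace:
    mouth_curve: float    # +1 smile, -1 frown
    eye_openness: float   # 0 closed, 1 wide open
    brow_raise: float     # -1 furrowed, +1 raised

def pad_to_face(p: float, a: float, d: float) -> AbstractFace:
    # p, a, d are pleasure, arousal, dominance, each assumed in [-1, 1].
    return AbstractFace(
        mouth_curve=p,                   # pleasure drives smile vs. frown
        eye_openness=0.5 + 0.5 * a,      # arousal widens the eyes
        brow_raise=0.25 * a - 0.5 * d,   # dominance lowers the brows
    )

print(pad_to_face(p=0.8, a=0.4, d=0.2))  # a content, mildly alert face
```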
9

Made Chintya Maha, Yekti. "KINESICS INTERACTION: TOWARDS EYE CONTACT, POSTURE AND FACIAL EXPRESSION OF EDWARD AND BELLA IN A MOVIE ENTITLED “TWILIGHT”." Lingua Scientia 24, no. 1 (June 30, 2017): 27. http://dx.doi.org/10.23887/ls.v24i1.18795.

Abstract:
This study discusses nonverbal communication, particularly body language. It focuses on kinesics, namely eye contact, posture, and facial expression, of the male main character (Edward Cullen) and the female main character (Bella Swan) in the Twilight movie by Stephenie Meyer. The aim of this study is to know the meaning behind the nonverbal communication of the male and female main characters as they act in the movie. The method used to answer the problem of this study is descriptive qualitative. The data of this study is a film entitled Twilight produced in 2008, described in the form of images and words. From this study, it can be seen that there are three kinds of nonverbal communication used by the male and female main characters: eye contact, posture, and facial expression. The nonverbal communication used by the male character consists of concerned, serious, brave, romantic and cool postures and friendly, bright eyes, whereas the female character uses dim eye contact, glance and shock postures, and an amazed facial expression. Several differences were found in the use of nonverbal communication between the male and female characters in the movie.
10

LI, YI, and MINORU HASHIMOTO. "EMOTIONAL SYNCHRONIZATION-BASED HUMAN–ROBOT COMMUNICATION AND ITS EFFECTS." International Journal of Humanoid Robotics 10, no. 01 (March 2013): 1350014. http://dx.doi.org/10.1142/s021984361350014x.

Abstract:
This paper presents a natural and comfortable communication system between human and robot based on synchronization to human emotional state using human facial expression recognition. The system consists of three parts: human emotion recognition, robotic emotion generation, and robotic emotion expression. The robot recognizes human emotion through human facial expressions, and robotic emotion is generated and synchronized with human emotion dynamically using a vector field of dynamics. The robot makes dynamically varying facial expressions to express its own emotions to the human. A communication experiment was conducted to examine the effectiveness of the proposed system. The authors found that subjects became much more comfortable after communicating with the robot with synchronized emotions. Subjects felt somewhat uncomfortable after communicating with the robot with non-synchronized emotions. During emotional synchronization, subjects communicated much more with the robot, and the communication time was double that during non-synchronization. Furthermore, in the case of emotional synchronization, subjects had good impressions of the robot, much better than the impressions in the case of non-synchronization. It was confirmed in this study that emotional synchronization in human–robot communication can be effective in making humans comfortable and makes the robot much more favorable and acceptable to humans.
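The abstract describes robot emotion generated and synchronized with human emotion via a vector field of dynamics; as a minimal stand-in (not the authors' actual field), a first-order pull of the robot's emotion state toward the recognized human state looks like this, with gain k an invented value:

```python
import numpy as np

def step(robot: np.ndarray, human: np.ndarray, k: float = 0.3) -> np.ndarray:
    # Pull the robot's emotion state toward the recognized human emotion.
    return robot + k * (human - robot)

robot = np.zeros(2)             # e.g., (valence, arousal), neutral start
human = np.array([0.9, 0.5])    # recognized "happy" state
for t in range(5):
    robot = step(robot, human)
    print(t, robot.round(3))    # converges toward the human state
```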
11

Hall, Cathy W., Rosina Chia, and Deng F. Wang. "Nonverbal Communication among American and Chinese Students." Psychological Reports 79, no. 2 (October 1996): 419–28. http://dx.doi.org/10.2466/pr0.1996.79.2.419.

Abstract:
The present study assessed nonverbal communication in a sample of Chinese and American elementary students. Participants were 412 children ranging in age from 7 years to 11 years (Grades 2 through 4), 241 from mainland China and 171 from the USA. Perception of nonverbal communication was assessed by use of the Diagnostic Analysis of Nonverbal Accuracy which assesses receptive nonverbal communication through facial expression, posture, gestures, and paralanguage (tone of voice). Only facial expression, posture, and gestures were examined, and significant differences between the two groups on gestures and postures were found but not on facial expressions. Teachers were also asked to rate their students using the Social Perception Behavior Rating Scale. Surprisingly, the teachers rated Chinese boys as having more difficulty with social behaviors and lower social perception than Chinese girls or American boys and girls.
12

Zhan, Ce, Wanqing Li, Philip Ogunbona, and Farzad Safaei. "A Real-Time Facial Expression Recognition System for Online Games." International Journal of Computer Games Technology 2008 (2008): 1–7. http://dx.doi.org/10.1155/2008/542918.

Abstract:
Multiplayer online games (MOGs) have become increasingly popular because of the opportunity they provide for collaboration, communication, and interaction. However, compared with ordinary human communication, MOGs still have several limitations, especially in communication using facial expressions. Although detailed facial animation has already been achieved in a number of MOGs, players have to use text commands to control the expressions of avatars. In this paper, we propose an automatic expression recognition system that can be integrated into an MOG to control the facial expressions of avatars. To meet the specific requirements of such a system, a number of algorithms are studied, improved, and extended. In particular, the Viola–Jones face-detection method is extended to detect small-scale key facial components, and fixed facial landmarks are used to reduce the computational load with little degradation in recognition accuracy.
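For readers unfamiliar with the baseline, here is a minimal sketch of the classic Viola–Jones detection step using OpenCV's bundled Haar cascade (the paper's small-component extension is not shown; the input frame is a stand-in array):

```python
import cv2
import numpy as np

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

frame = np.zeros((240, 320, 3), dtype=np.uint8)   # stand-in for a captured video frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:                        # draw a box around each detection
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
print(len(faces), "face(s) found")
```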
13

Owusu, Ebenezer, Jacqueline Asor Kumi, and Justice Kwame Appati. "On Facial Expression Recognition Benchmarks." Applied Computational Intelligence and Soft Computing 2021 (September 17, 2021): 1–20. http://dx.doi.org/10.1155/2021/9917246.

Abstract:
Facial expression is an important form of nonverbal communication; it is often noted that 55% of what humans communicate is expressed through facial expressions. Facial expressions have applications in diverse fields including medicine, security, gaming, and business. Automatic facial expression recognition is thus currently a very active research area that attracts substantial funding, and there is a need to understand its trends well. This study therefore reviews selected works published in the domain and conducts an analysis to determine the most common and useful algorithms employed. We selected works published from 2010 to 2021 and extracted, analyzed, and summarized the findings based on the most used techniques in feature extraction, feature selection, validation, databases, and classification. The results indicate strongly that local binary pattern (LBP), principal component analysis (PCA), support vector machine (SVM), CK+, and 10-fold cross-validation are the most widely used feature extraction method, feature selection method, classifier, database, and validation method, respectively. In line with these findings, the study provides recommendations, especially for new researchers with little or no background, as to which methods they can employ and strive to improve.
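The stack the survey identifies as most common (LBP features, PCA, an SVM, 10-fold cross-validation) can be sketched end to end in a few lines of Python; the images and labels below are random stand-ins for real data:

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def lbp_histogram(img: np.ndarray, P: int = 8, R: int = 1) -> np.ndarray:
    lbp = local_binary_pattern(img, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist                                  # 10-bin uniform-LBP histogram

rng = np.random.default_rng(0)
images = (rng.random((100, 48, 48)) * 255).astype(np.uint8)  # stand-in face crops
labels = rng.integers(0, 7, size=100)                        # stand-in labels
X = np.array([lbp_histogram(im) for im in images])

clf = make_pipeline(PCA(n_components=8), SVC(kernel="rbf"))
print(cross_val_score(clf, X, labels, cv=10).mean())         # 10-fold CV score
```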
14

Chandrasiri, N. P., T. Naemura, M. Ishizuka, H. Harashima, and I. Barakonyi. "Internet communication using real-time facial expression analysis and synthesis." IEEE Multimedia 11, no. 3 (July 2004): 20–29. http://dx.doi.org/10.1109/mmul.2004.10.

15

Parr, Lisa A., and Bridget M. Waller. "Understanding chimpanzee facial expression: insights into the evolution of communication." Social Cognitive and Affective Neuroscience 1, no. 3 (December 1, 2006): 221–28. http://dx.doi.org/10.1093/scan/nsl031.

16

Morishima, Shigeo. "Modeling of facial expression and emotion for human communication system." Displays 17, no. 1 (August 1996): 15–25. http://dx.doi.org/10.1016/0141-9382(95)01008-4.

17

Yoshitomi, Yasunari. "Human–Computer Communication Using Recognition and Synthesis of Facial Expression." Journal of Robotics, Networking and Artificial Life 8, no. 1 (2021): 10. http://dx.doi.org/10.2991/jrnal.k.210521.003.

18

Prasetyo, Jarot Dwi, Zaehol Fatah, and Taufik Saleh. "EKSTRAKSI FITUR BERBASIS AVERAGE FACE UNTUK PENGENALAN EKSPRESI WAJAH." Jurnal Ilmiah Informatika 2, no. 2 (December 9, 2017): 130–34. http://dx.doi.org/10.35316/jimi.v2i2.464.

Abstract:
In recent years, interest has grown in the interaction between humans and computers. Facial expressions play a fundamental role in social interaction with other humans. In two-person human communication, only 7% of the message is carried by the linguistic content, 38% by paralanguage, and 55% by facial expressions. Therefore, to make the human-machine interfaces of multimedia products friendlier, facial expression recognition in the interface is very helpful for comfortable interaction. One of the steps that affects facial expression recognition is the accuracy of facial feature extraction. Several approaches to facial expression recognition do not consider the dimensionality of the data used as input features for machine learning. This research therefore proposes a wavelet algorithm to reduce the dimensionality of the data features. The features are then classified using a multiclass SVM to distinguish six facial expressions (anger, hatred, fear, happy, sad, and surprised) found in the JAFFE database. The classification achieved 81.42% on the 208 data samples.
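A hedged sketch of the pipeline this abstract outlines, wavelet-based dimensionality reduction followed by a multiclass SVM (Python stand-in; the paper's exact wavelet, image size, and SVM settings are not stated in the abstract):

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_reduce(img: np.ndarray) -> np.ndarray:
    cA, _ = pywt.dwt2(img, "haar")          # keep the approximation band only
    return cA.ravel()                        # quarters the number of features

rng = np.random.default_rng(1)
images = rng.random((60, 32, 32))            # stand-in face images
labels = rng.integers(0, 6, size=60)         # six expression classes
X = np.array([wavelet_reduce(im) for im in images])

svm = SVC(decision_function_shape="ovo").fit(X, labels)  # one-vs-one multiclass
print(svm.score(X, labels))                  # training accuracy of the sketch
```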
19

Kulkarni, Praveen, and Rajesh T. M. "Analysis on techniques used to recognize and identifying the Human emotions." International Journal of Electrical and Computer Engineering (IJECE) 10, no. 3 (June 1, 2020): 3307. http://dx.doi.org/10.11591/ijece.v10i3.pp3307-3314.

Abstract:
Facial expression is a major channel of non-verbal language in day-to-day communication. Statistical analyses show that only 7 percent of a message in communication is carried verbally, while 55 percent is transmitted by facial expression. Emotional expression has been a research subject of physiology since Darwin's work on emotional expression in the 19th century. According to psychological theory, human emotion is classified into six major emotions: happiness, fear, anger, surprise, disgust, and sadness. Facial expressions, together with the nature of speech, play a foremost role in expressing these emotions. Researchers later developed a system based on the anatomy of the face, the Facial Action Coding System (FACS), in the 1970s. Since the development of FACS, there has been rapid progress in research in the domain of emotion recognition. This work is intended to give a thorough comparative analysis of the various techniques and methods that have been applied to recognize and identify human emotions. The results of this analysis will help identify proper and suitable techniques, algorithms and methodologies for future research directions. In this paper, an extensive analysis of the various recognition techniques used to address the complexity of recognizing facial expressions is presented. This work will also help researchers and scholars choose suitable techniques for work in the facial expression domain.
20

Jeni, Laszlo A., Hideki Hashimoto, and Takashi Kubota. "Robust Facial Expression Recognition Using Near Infrared Cameras." Journal of Advanced Computational Intelligence and Intelligent Informatics 16, no. 2 (March 20, 2012): 341–48. http://dx.doi.org/10.20965/jaciii.2012.p0341.

Abstract:
In human-human communication we use verbal, vocal and non-verbal signals to communicate with others. Facial expressions are a form of non-verbal communication, and recognizing them helps to improve human-machine interaction. This paper proposes a system for pose- and illumination-invariant recognition of facial expressions using near-infrared camera images and precise 3D shape registration. Precise 3D shape information of the human face can be computed by means of Constrained Local Models (CLM), which fit a dense model to an unseen image in an iterative manner. We used a multi-class SVM to classify the acquired 3D shape into different emotion categories. Results surpassed human performance and show pose-invariant performance. Varying lighting conditions can influence the fitting process and reduce the recognition precision, so we built a near-infrared and visible light camera array to test the method under different illuminations. Results show that the near-infrared camera configuration is suitable for robust and reliable facial expression recognition under changing lighting conditions.
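The pose-invariance here comes from fitting a 3D shape model; as an illustrative stand-in for the idea of removing pose before classification (not the CLM itself), a rigid Procrustes alignment of 3D landmarks to a canonical template can be written as:

```python
import numpy as np

def procrustes_align(shape: np.ndarray, template: np.ndarray) -> np.ndarray:
    """Align an (N, 3) landmark array to a template; returns the centered, aligned shape."""
    A = shape - shape.mean(axis=0)
    B = template - template.mean(axis=0)
    U, S, Vt = np.linalg.svd(A.T @ B)       # A.T @ B = U diag(S) Vt
    R = U @ Vt                               # rotation minimizing ||A R - B||
    scale = S.sum() / (A ** 2).sum()         # optimal isotropic scale
    return scale * A @ R

rng = np.random.default_rng(2)
template = rng.random((68, 3))               # canonical 68-point face shape
yaw90 = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
rotated = template @ yaw90                   # the same face, rotated in depth
aligned = procrustes_align(rotated, template)
print(np.allclose(aligned, template - template.mean(axis=0)))  # True
```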
21

Jenkinson, Elizabeth, Kathleen Bogart, Claire Hamlet, and Laura Davies. "Living with Moebius syndrome." Journal of Aesthetic Nursing 9, no. 6 (July 2, 2020): 233–37. http://dx.doi.org/10.12968/joan.2020.9.6.233.

Abstract:
Moebius syndrome is a congenital neurological disorder that impacts facial expression, communication and appearance. In this article, the authors will discuss the psychological and social impacts of living with this rare form of facial palsy. Existing research suggests that patients may face challenges in developing psychological wellbeing, positive body image and in communicating effectively with others. Therefore, recommendations for nursing practitioners in how to best support this patient group are discussed.
22

Dennis, Maureen, Alba Agostino, H. Gerry Taylor, Erin D. Bigler, Kenneth Rubin, Kathryn Vannatta, Cynthia A. Gerhardt, Terry Stancin, and Keith Owen Yeates. "Emotional Expression and Socially Modulated Emotive Communication in Children with Traumatic Brain Injury." Journal of the International Neuropsychological Society 19, no. 1 (November 19, 2012): 34–43. http://dx.doi.org/10.1017/s1355617712000884.

Abstract:
Facial emotion expresses feelings, but is also a vehicle for social communication. Using five basic emotions (happiness, sadness, fear, disgust, and anger) in a comprehension paradigm, we studied how facial expression reflects inner feelings (emotional expression) but may be socially modulated to communicate a different emotion from the inner feeling (emotive communication, a form of affective theory of mind). Participants were 8- to 12-year-old children with TBI (n = 78) and peers with orthopedic injuries (OI; n = 56). Children with mild–moderate or severe TBI performed more poorly than the OI group, and chose less cognitively sophisticated strategies for emotive communication. Compared to the OI and mild–moderate TBI groups, children with severe TBI had more deficits in anger, fear, and sadness; neutralized emotions less often; produced socially inappropriate responses; and failed to differentiate the core emotional dimension of arousal. Children with TBI have difficulty understanding the dual role of facial emotions in expressing feelings and communicating socially relevant but deceptive emotions, and these difficulties likely contribute to their social problems. (JINS, 2013, 18, 1–10)
23

Moe Htay, Moe. "Feature extraction and classification methods of facial expression: a survey." Computer Science and Information Technologies 2, no. 1 (March 1, 2021): 26–32. http://dx.doi.org/10.11591/csit.v2i1.p26-32.

Abstract:
Facial expression plays a significant role in affective computing and is one of the non-verbal channels for human-computer interaction. Automatic recognition of human affect has become a more challenging and interesting problem in recent years. Facial expressions are significant features for recognizing human emotion in daily life. A Facial Expression Recognition System (FERS) can be developed for applications in human affect analysis, health care assessment, distance learning, driver fatigue detection and human-computer interaction. Basically, there are three main components in recognizing a human facial expression: detection of the face or its components, feature extraction from the face image, and classification of the expression. This study surveys methods of feature extraction and classification for FER.
24

Watts, Amy J., and Jacinta M. Douglas. "Interpreting facial expression and communication competence following severe traumatic brain injury." Aphasiology 20, no. 8 (August 2006): 707–22. http://dx.doi.org/10.1080/02687030500489953.

25

Barabanschikov, V. A., and O. A. Korolkova. "Perception of “Live” Facial Expressions." Experimental Psychology (Russia) 13, no. 3 (2020): 55–73. http://dx.doi.org/10.17759/exppsy.2020130305.

Abstract:
The article provides a review of experimental studies of interpersonal perception on the material of static and dynamic facial expressions as a unique source of information about the person’s inner world. The focus is on the patterns of perception of a moving face, included in the processes of communication and joint activities (an alternative to the most commonly studied perception of static images of a person outside of a behavioral context). The review includes four interrelated topics: face statics and dynamics in the recognition of emotional expressions; specificity of perception of moving face expressions; multimodal integration of emotional cues; generation and perception of facial expressions in communication processes. The analysis identifies the most promising areas of research of face in motion. We show that the static and dynamic modes of facial perception complement each other, and describe the role of qualitative features of the facial expression dynamics in assessing the emotional state of a person. Facial expression is considered as part of a holistic multimodal manifestation of emotions. The importance of facial movements as an instrument of social interaction is emphasized.
26

Kusuma, Hendra, Muhammad Attamimi, and Hasby Fahrudin. "Deep learning based facial expressions recognition system for assisting visually impaired persons." Bulletin of Electrical Engineering and Informatics 9, no. 3 (June 1, 2020): 1208–19. http://dx.doi.org/10.11591/eei.v9i3.2030.

Abstract:
In general, good interaction, including communication, can be achieved when verbal and non-verbal information such as body movements, gestures, and facial expressions can be processed in both directions between the speaker and listener. The facial expression in particular is an indicator of the inner state of the speaker and/or the listener during communication. Recognizing facial expressions is therefore a necessary and important ability in communication, and it is a challenge for visually impaired persons. This fact motivated us to develop a facial expression recognition system based on a deep learning algorithm. We implemented the proposed system on a wearable device which enables visually impaired persons to recognize facial expressions during communication. We conducted several experiments involving visually impaired persons to validate our proposed system, and promising results were achieved.
27

TROVATO, GABRIELE, MASSIMILIANO ZECCA, TATSUHIRO KISHI, NOBUTSUNA ENDO, KENJI HASHIMOTO, and ATSUO TAKANISHI. "GENERATION OF HUMANOID ROBOT'S FACIAL EXPRESSIONS FOR CONTEXT-AWARE COMMUNICATION." International Journal of Humanoid Robotics 10, no. 01 (March 2013): 1350013. http://dx.doi.org/10.1142/s0219843613500138.

Abstract:
Communication between humans and robots is a very important aspect in the field of humanoid robotics. For natural interaction, robots capable of nonverbal communication must be developed. However, despite the most recent efforts, robots can still show only limited expression capabilities. The purpose of this work is to create a facial expression generator that can be applied to the 24-DoF head of the humanoid robot KOBIAN-R. In this manuscript, we present a system that, based on relevant studies of human communication and facial anatomy, can produce thousands of combinations of facial and neck movements. The wide range of expressions covers not only primary emotions, but also complex or blended ones, as well as communication acts that are not strictly categorized as emotions. Results showed that the recognition rate of expressions produced by this system is comparable to the recognition rate of the most common facial expressions. Context-based recognition, which is especially important in the case of more complex communication acts, was also evaluated. Results proved that the produced robotic expressions can alter the meaning of a sentence in the same way as human expressions do. We conclude that our system can successfully improve the communication abilities of KOBIAN-R, making it capable of complex interaction in the future.
28

Hong, Yu-Jin, Sung Eun Choi, Gi Pyo Nam, Heeseung Choi, Junghyun Cho, and Ig-Jae Kim. "Adaptive 3D Model-Based Facial Expression Synthesis and Pose Frontalization." Sensors 20, no. 9 (May 1, 2020): 2578. http://dx.doi.org/10.3390/s20092578.

Abstract:
Facial expressions are one of the important non-verbal ways used to understand human emotions during communication. Thus, acquiring and reproducing facial expressions is helpful in analyzing human emotional states. However, owing to complex and subtle facial muscle movements, facial expression modeling from images with face poses is difficult to achieve. To handle this issue, we present a method for acquiring facial expressions from a non-frontal single photograph using a 3D-aided approach. In addition, we propose a contour-fitting method that improves the modeling accuracy by automatically rearranging 3D contour landmarks corresponding to fixed 2D image landmarks. The acquired facial expression input can be parametrically manipulated to create various facial expressions through a blendshape or expression transfer based on the FACS (Facial Action Coding System). To achieve a realistic facial expression synthesis, we propose an exemplar-texture wrinkle synthesis method that extracts and synthesizes appropriate expression wrinkles according to the target expression. To do so, we constructed a wrinkle table of various facial expressions from 400 people. As one of the applications, we proved that the expression-pose synthesis method is suitable for expression-invariant face recognition through a quantitative evaluation, and showed the effectiveness based on a qualitative evaluation. We expect our system to be a benefit to various fields such as face recognition, HCI, and data augmentation for deep learning.
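The blendshape manipulation mentioned here has a simple core: a synthesized face is the neutral mesh plus a weighted sum of per-expression vertex offsets. A minimal numpy sketch, with all mesh data as random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(3)
V = 5000
neutral = rng.random((V, 3))                 # neutral face mesh, V vertices
offsets = rng.random((4, V, 3)) * 0.01       # e.g., smile, frown, brow-up, jaw-open
weights = np.array([0.7, 0.0, 0.3, 0.0])     # mostly smile, slight brow raise

face = neutral + np.tensordot(weights, offsets, axes=1)  # weighted sum of offsets
print(face.shape)                             # (5000, 3)
```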
29

Benton, Christopher P. "Effect of Photographic Negation on Face Expression Aftereffects." Perception 38, no. 9 (January 1, 2009): 1267–74. http://dx.doi.org/10.1068/p6468.

Abstract:
Our visual representation of facial expression is examined in this study: is this representation built from edge information, or does it incorporate surface-based information? To answer this question, photographic negation of grey-scale images is used. Negation preserves edge information whilst disrupting the surface-based information. In two experiments visual aftereffects produced by prolonged viewing of images of facial expressions were measured. This adaptation-based technique allows a behavioural assessment of the characteristics encoded by the neural systems underlying our representation of facial expression. The experiments show that photographic negation of the adapting images results in a profound decrease of expression aftereffect. Our visual representation of facial expression therefore appears to not just be built from edge information, but to also incorporate surface information. The latter allows an appreciation of the 3-D structure of the expressing face that, it is argued, may underpin the subtlety and range of our non-verbal facial communication.
30

Sarafoleanu, Dorin, and Andreea Bejenariu. "Facial nerve paralysis." Romanian Journal of Rhinology 10, no. 39 (September 1, 2020): 68–77. http://dx.doi.org/10.2478/rjr-2020-0016.

Abstract:
The facial nerve, the seventh pair of cranial nerves, has an essential role in non-verbal communication through facial expression. Besides innervating the muscles involved in facial expression, the complex structure of the facial nerve contains sensory fibres involved in the perception of taste and parasympathetic fibres involved in the salivation and tearing processes. Damage to the facial nerve manifested as facial paralysis translates into a decrease or disappearance of the mobility of normal facial expression. Facial nerve palsy is one of the common reasons for presenting to the Emergency Room. Most facial paralyses are idiopathic, followed by traumatic, infectious and tumoral causes. Facial paralysis in children occupies a special place: due to the multitude of factors that can determine or favour its appearance, it requires a multidisciplinary evaluation by an otorhinolaryngologist, a neurologist, an ophthalmologist and an internist. Early presentation to the doctor, accurate determination of the cause, and a correctly performed topographic diagnosis are the key to proper treatment and complete functional recovery.
31

Abdellaoui, Benyoussef, Aniss Moumen, Younes El Bouzekri El Idrissi, and Ahmed Remaida. "The emotional state through visual expression, auditory expression and physiological representation." SHS Web of Conferences 119 (2021): 05008. http://dx.doi.org/10.1051/shsconf/202111905008.

Abstract:
As emotional content reflects human behaviour, automatic emotion recognition is a topic of growing interest. During the communication of an emotional message, the use of physiological signals and facial expressions offers several advantages that can be expected to help better understand a person's personality and psychopathology and to shape human communication and human-machine interaction. In this article, we present some notions about identifying the emotional state through visual expression, auditory expression and physiological representation, and the techniques used to measure emotions.
32

Liu, Zhen-Tao, Si-Han Li, Wei-Hua Cao, Dan-Yun Li, Man Hao, and Ri Zhang. "Combining 2D Gabor and Local Binary Pattern for Facial Expression Recognition Using Extreme Learning Machine." Journal of Advanced Computational Intelligence and Intelligent Informatics 23, no. 3 (May 20, 2019): 444–55. http://dx.doi.org/10.20965/jaciii.2019.p0444.

Abstract:
The efficiency of facial expression recognition (FER) is important for human-robot interaction. Detection of the facial region, extraction of discriminative facial expression features, and identification of the categories of facial expressions all affect the recognition accuracy and time-efficiency. An FER framework is proposed in which 2D Gabor and local binary pattern (LBP) features are combined to extract discriminative features of salient facial expression patches, and an extreme learning machine (ELM) is adopted to identify facial expression categories. The combination of 2D Gabor and LBP can not only describe multiscale and multidirectional textural features, but also capture small local details. FER with ELM and with a support vector machine (SVM) is performed on the Japanese Female Facial Expression database and the extended Cohn-Kanade database, where both ELM and SVM achieve an accuracy of more than 85% and the computational efficiency of ELM is higher than that of SVM. The proposed framework has been used in a human-robot interaction system based on multimodal emotional communication, in which FER within 2 seconds enables real-time interaction.
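The ELM classifier at the end of this pipeline is compact enough to sketch in full: a fixed random hidden layer, with output weights solved in closed form by ridge-regularized least squares. Features below are random stand-ins for the Gabor/LBP descriptors:

```python
import numpy as np

class ELM:
    def __init__(self, n_hidden: int = 200, ridge: float = 1e-3, seed: int = 0):
        self.n_hidden, self.ridge = n_hidden, ridge
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))  # fixed, random
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)                 # random hidden layer
        T = np.eye(n_classes)[y]                          # one-hot targets
        self.beta = np.linalg.solve(                      # closed-form output weights
            H.T @ H + self.ridge * np.eye(self.n_hidden), H.T @ T)
        return self

    def predict(self, X):
        return (np.tanh(X @ self.W + self.b) @ self.beta).argmax(axis=1)

rng = np.random.default_rng(4)
X, y = rng.random((120, 64)), rng.integers(0, 7, size=120)
print((ELM().fit(X, y).predict(X) == y).mean())          # training accuracy
```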
33

O'Neill, Brittney. "Mirror, Mirror on the Screen, What Does All this ASCII Mean?: A Pilot Study of Spontaneous Facial Mirroring of Emotions." Arbutus Review 4, no. 1 (November 1, 2013): 19. http://dx.doi.org/10.18357/tar41201312681.

Abstract:
Though an ever-increasing mode of communication, computer-mediated communication (CMC) faces challenges in its lack of paralinguistic cues, such as vocal tone and facial expression. Researchers suggest that emoticons fill the gap left by facial expression (Rezabek & Cochenour, 1998; Thompson & Foulger, 1996). The fMRI research of Yuasa, Saito, and Mukawa (2011b), in contrast, finds that viewing ASCII (American Standard Code for Information Interchange) emoticons (e.g., :), :( ) does not activate the same parts of the brain as does viewing facial expressions. In the current study, an online survey was conducted to investigate the effects of emoticons on perception of ambiguous sentences and users’ beliefs about the effects of and reasons for emoticon use. In the second stage of the study, eleven undergraduate students participated in an experiment to reveal facial mimicry responses to both faces and emoticons. Overall, the students produced more smiling than frowning gestures. Emoticons were found to elicit facial mimicry to a somewhat lesser degree than photographs of faces, while male and female participants differed in response to both ASCII emoticons and distractor images (photos of non-human, non-facial subjects used to prevent participants from immediately grasping the specific goal of the study). This pilot study suggests that emoticons, though not analogous to faces, affect viewers in ways similar to facial expression whilst also triggering other unique effects.
34

Buck, Ross. "Social and emotional functions in facial expression and communication: the readout hypothesis." Biological Psychology 38, no. 2-3 (October 1994): 95–115. http://dx.doi.org/10.1016/0301-0511(94)90032-9.

35

Krishnan, Sharada, Emily Kilroy, Christiana Butera, Laura Harrison, Aditya Jayashankar, Anusha Hossain, Alexis Nalbach, and Lisa Aziz-Zadeh. "Emotional Facial Expression and Social Communication in Children With Autism Spectrum Disorder." American Journal of Occupational Therapy 75, Supplement_2 (August 1, 2021): 7512505133p1. http://dx.doi.org/10.5014/ajot.2021.75s2-rp133.

36

Wauters, Lisa, and Thomas Marquardt. "Disorders of Emotional Communication in Traumatic Brain Injury." Seminars in Speech and Language 40, no. 01 (January 7, 2019): 013–26. http://dx.doi.org/10.1055/s-0038-1676364.

Abstract:
Traumatic brain injury (TBI) leads to a wide array of behavioral and cognitive deficits. Individuals with TBI often demonstrate difficulties with the recognition and expression of emotion communicated through multiple modalities including facial expression, vocal prosody, and linguistic content. Deficits in emotional communication contribute to a pattern of social pragmatic communication problems, leading to decreased psychosocial function. Growing evidence supports intervention targeting affective processing. This article summarizes the current evidence for evaluation and treatment of affective processing disorders in TBI.
37

Rahayu, Ega. "THE INVESTIGATION OF NONVERBAL COMMUNICATION TOWARDS AN AUTISM CHILD." Indonesian EFL Journal 2, no. 2 (September 12, 2017): 127. http://dx.doi.org/10.25134/ieflj.v2i2.645.

Abstract:
This research aims to investigate the types of nonverbal communication used by an autistic child during his activity at Pusat Layanan Autis Jati Kersa and at home, and to describe the meanings of that nonverbal communication. Nonverbal communication is a form of communication that delivers a message without words, written or spoken, using body language including facial expression, gesture, posture, eye contact, touching, clothing, space, and paralanguage. Autism is a developmental disorder of the brain that makes it difficult for autistic people to communicate and interact. The research employed a qualitative method to collect and analyze the data and involved one autistic child at a low-functioning level. The data were collected through observation and interview. The results show that the child uses several types of nonverbal communication: body movement (gesture, posture, eye contact, and facial expression), paralanguage, personal presentation, and touching (haptics). The meanings of the nonverbal communication he uses are various; each instance has its own meaning. Keywords: communication, nonverbal communication, autism
38

Carrasco, Luis, Manuel Jesus Jiménez-Roldán, Borja Sañudo, José M. Riquelme, and Inmaculada C. Martínez-Díaz. "Promoting an Active Life Through Threatening Communication: Effects on College Student’s Emotions." GYMNASIUM XXI, no. 2 (December 30, 2020): 116. http://dx.doi.org/10.29081/gsjesh.2020.21.2.08.

Abstract:
This pilot study aimed to evaluate the acute effects of a sedentary-focused intervention through threatening communication on college students' emotions. Thirty-six female college students (mean age 20.8 ± 2.7 years) who participated voluntarily were exposed to five neutral and five sedentary-related threatening video messages. In order to evaluate the emotional impact of the messages, the subjects' faces were recorded and analyzed during these expositions using facial expression recognition software (Face Reader System 4.0), assessing the time-lapse percentage of the following basic expressions: neutral, sad, angry, surprised, scared, and disgusted. Compared to the neutral messages, a non-significant increase in sad, angry, and disgusted expressions was observed after the threatening intervention; nevertheless, the effect size (d) for the disgusted expression was .832. Moreover, the time-lapse percentage of neutral facial expression decreased after the threatening messages, although statistical significance was not reached (p = .174).
39

Parkinson, Brian. "Do Facial Movements Express Emotions or Communicate Motives?" Personality and Social Psychology Review 9, no. 4 (November 2005): 278–311. http://dx.doi.org/10.1207/s15327957pspr0904_1.

Abstract:
This article addresses the debate between emotion-expression and motive-communication approaches to facial movements, focusing on Ekman's (1972) and Fridlund's (1994) contrasting models and their historical antecedents. Available evidence suggests that the presence of others either reduces or increases facial responses, depending on the quality and strength of the emotional manipulation and on the nature of the relationship between interactants. Although both display rules and social motives provide viable explanations of audience “inhibition” effects, some audience facilitation effects are less easily accommodated within an emotion-expression perspective. In particular, emotion is not a sufficient condition for a corresponding “expression,” even discounting explicit regulation, and, apparently, “spontaneous” facial movements may be facilitated by the presence of others. Further, there is no direct evidence that any particular facial movement provides an unambiguous expression of a specific emotion. However, information communicated by facial movements is not necessarily extrinsic to emotion. Facial movements not only transmit emotion-relevant information but also contribute to ongoing processes of emotional action in accordance with pragmatic theories.
40

Grossman, Ruth B., Lisa R. Edelson, and Helen Tager-Flusberg. "Emotional Facial and Vocal Expressions During Story Retelling by Children and Adolescents With High-Functioning Autism." Journal of Speech, Language, and Hearing Research 56, no. 3 (June 2013): 1035–44. http://dx.doi.org/10.1044/1092-4388(2012/12-0067).

Abstract:
Purpose People with high-functioning autism (HFA) have qualitative differences in facial expression and prosody production, which are rarely systematically quantified. The authors' goals were to qualitatively and quantitatively analyze prosody and facial expression productions in children and adolescents with HFA. Method Participants were 22 male children and adolescents with HFA and 18 typically developing (TD) controls (17 males, 1 female). The authors used a story retelling task to elicit emotionally laden narratives, which were analyzed through the use of acoustic measures and perceptual codes. Naïve listeners coded all productions for emotion type, degree of expressiveness, and awkwardness. Results The group with HFA was not significantly different in accuracy or expressiveness of facial productions, but was significantly more awkward than the TD group. Participants with HFA were significantly more expressive in their vocal productions, with a trend for greater awkwardness. Severity of social communication impairment, as captured by the Autism Diagnostic Observation Schedule (ADOS; Lord, Rutter, DiLavore, & Risi, 1999), was correlated with greater vocal and facial awkwardness. Conclusions Facial and vocal expressions of participants with HFA were as recognizable as those of their TD peers but were qualitatively different, particularly when listeners coded samples with intact dynamic properties. These preliminary data show qualitative differences in nonverbal communication that may have significant negative impact on the social communication success of children and adolescents with HFA.
41

Oña, Linda S., Wendy Sandler, and Katja Liebal. "A stepping stone to compositionality in chimpanzee communication." PeerJ 7 (September 12, 2019): e7623. http://dx.doi.org/10.7717/peerj.7623.

Abstract:
Compositionality refers to a structural property of human language, according to which the meaning of a complex expression is a function of the meaning of its parts and the way they are combined. Compositionality is a defining characteristic of all human language, spoken and signed. Comparative research into the emergence of human language aims at identifying precursors to such key features of human language in the communication of other primates. While it is known that chimpanzees, our closest relatives, produce a variety of gestures, facial expressions and vocalizations in interactions with their group members, little is known about how these signals combine simultaneously. Therefore, the aim of the current study is to investigate whether there is evidence for compositional structures in the communication of chimpanzees. We investigated two semi-wild groups of chimpanzees, with focus on their manual gestures and their combinations with facial expressions across different social contexts. If there are compositional structures in chimpanzee communication, adding a facial expression to a gesture should convey a different message than the gesture alone, a difference that we expect to be measurable by the recipient’s response. Furthermore, we expect context-dependent usage of these combinations. Based on a form-based coding procedure of the collected video footage, we identified two frequently used manual gestures (stretched arm gesture and bent arm gesture) and two facial expressions (bared teeth face and funneled lip face). We analyzed whether the recipients’ response varied depending on the signaler’s usage of a given gesture + face combination and the context in which these were used. Overall, our results suggest that, in positive contexts, such as play or grooming, specific combinations had an impact on the likelihood of the occurrence of particular responses. Specifically, adding a bared teeth face to a gesture either increased the likelihood of affiliative behavior (for the stretched arm gesture) or eliminated the bias toward an affiliative response (for the bent arm gesture). We show for the first time that the components under study are recombinable, and that different combinations elicit different responses, a property that we refer to as componentiality. Yet our data do not suggest that the components have consistent meanings in each combination, a defining property of compositionality. We propose that the componentiality exhibited in this study represents a necessary stepping stone toward a fully evolved compositional system.
42

Kuehne, Maria, Isabelle Siwy, Tino Zaehle, Hans-Jochen Heinze, and Janek S. Lobmaier. "Out of Focus: Facial Feedback Manipulation Modulates Automatic Processing of Unattended Emotional Faces." Journal of Cognitive Neuroscience 31, no. 11 (November 2019): 1631–40. http://dx.doi.org/10.1162/jocn_a_01445.

Abstract:
Facial expressions provide information about an individual's intentions and emotions and are thus an important medium for nonverbal communication. Theories of embodied cognition assume that facial mimicry and the resulting facial feedback play an important role in the perception of facial emotional expressions. Although behavioral and electrophysiological studies have confirmed the influence of facial feedback on the perception of facial emotional expressions, the influence of facial feedback on the automatic processing of such stimuli is largely unexplored. The automatic processing of unattended facial expressions can be investigated by the visual expression-related mismatch negativity (MMN). The expression-related MMN reflects a differential ERP of automatic detection of emotional changes elicited by rarely presented facial expressions (deviants) among frequently presented facial expressions (standards). In this study, we investigated the impact of facial feedback on the automatic processing of facial expressions. For this purpose, participants (n = 19) performed a centrally presented visual detection task while neutral (standard), happy, and sad faces (deviants) were presented peripherally. During the task, facial feedback was manipulated by different pen-holding conditions (holding the pen with the teeth, lips, or nondominant hand). Our results indicate that automatic processing of facial expressions is influenced by, and thus dependent on, one's own facial feedback.
43

Bhatti, Yusra Khalid, Afshan Jamil, Nudrat Nida, Muhammad Haroon Yousaf, Serestina Viriri, and Sergio A. Velastin. "Facial Expression Recognition of Instructor Using Deep Features and Extreme Learning Machine." Computational Intelligence and Neuroscience 2021 (April 30, 2021): 1–17. http://dx.doi.org/10.1155/2021/5570870.

Abstract:
Classroom communication involves the teacher's behavior and the students' responses. Extensive research has been done on the analysis of students' facial expressions, but the impact of instructors' facial expressions is as yet an unexplored area of research. Facial expression recognition has the potential to predict the impact of a teacher's emotions in a classroom environment. Intelligent assessment of instructor behavior during lecture delivery not only might improve the learning environment but could also save the time and resources utilized in manual assessment strategies. To address the issue of manual assessment, we propose an instructor facial expression recognition approach within a classroom using a feedforward learning model. First, the face is detected from the acquired lecture videos and key frames are selected, discarding all the redundant frames for effective high-level feature extraction. Then, deep features are extracted using multiple convolutional neural networks along with parameter tuning, and are then fed to a classifier. For fast learning and good generalization of the algorithm, a regularized extreme learning machine (RELM) classifier is employed which classifies five different expressions of the instructor within the classroom. Experiments are conducted on a newly created instructor facial expression dataset in classroom environments plus three benchmark facial datasets, i.e., Cohn–Kanade, the Japanese Female Facial Expression (JAFFE) dataset, and the Facial Expression Recognition 2013 (FER2013) dataset. Furthermore, the proposed method is compared with state-of-the-art techniques, traditional classifiers, and convolutional neural models. Experimental results indicate significant performance gains on parameters such as accuracy, F1-score, and recall.
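A hedged sketch of the "deep features" step: pooled activations from a pretrained backbone (resnet18 here stands in for the paper's networks; the torchvision >= 0.13 weights API is assumed). The resulting vectors would then be fed to a separate classifier such as the RELM:

```python
import torch
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # downloads weights
backbone.fc = torch.nn.Identity()            # drop the classification head
backbone.eval()

with torch.no_grad():
    frames = torch.randn(4, 3, 224, 224)     # stand-in lecture key frames
    features = backbone(frames)              # (4, 512) pooled feature vectors
print(features.shape)
```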
44

Powell, Jennie A. "Communication interventions in dementia." Reviews in Clinical Gerontology 10, no. 2 (May 2000): 161–68. http://dx.doi.org/10.1017/s0959259800000277.

Abstract:
Communication is integral to human life. During the communication process, an idea or concept is received by an individual either from another individual or from the physical environment. Person-to-person transmission of ideas could be through the medium of spoken/written language or through non-verbal media such as body language, gesture or facial expression. Ideas or concepts communicated from the physical environment allow the individual to function within that environment. For example, the concept of traffic must be understood, so as to allow safe and effective day-to-day functioning, whilst comprehension of the concept of a CD player could allow the individual to acquire appropriate auditory stimulation for relaxation and enjoyment.
45

Gosselin, Pierre, Gilles Kirouac, and Francois Y. Doré. "Components and recognition of facial expression in the communication of emotion by actors." Journal of Personality and Social Psychology 68, no. 1 (1995): 83–96. http://dx.doi.org/10.1037/0022-3514.68.1.83.

46

Ma, Ringo. "The relationship between intercultural and nonverbal communication revisited: From facial expression to discrimination." New Jersey Journal of Communication 7, no. 2 (September 1999): 180–89. http://dx.doi.org/10.1080/15456879909367366.

47

Othman, Ehsan, Frerk Saxen, Dmitri Bershadskyy, Philipp Werner, Ayoub Al-Hamadi, and Joachim Weimann. "Predicting Group Contribution Behaviour in a Public Goods Game from Face-to-Face Communication." Sensors 19, no. 12 (June 21, 2019): 2786. http://dx.doi.org/10.3390/s19122786.

Abstract:
Experimental economic laboratories run many studies to test theoretical predictions against actual human behaviour, including public goods games. In this experiment, participants in a group have the option to invest money in a public account or to keep it. All the invested money is multiplied and then evenly distributed. This structure incentivizes free riding, resulting in contributions to the public good declining over time. Face-to-face communication (FFC) diminishes free riding and thus positively affects contribution behaviour, but the question of how has remained mostly unanswered. In this paper, we investigate two communication channels, aiming to explain what promotes cooperation and discourages free riding. Firstly, the facial expressions of the group in the 3-minute FFC videos are automatically analysed to predict the group's behaviour towards the end of the game. The proposed automatic facial expression analysis approach uses a new group activity descriptor and utilises random forest classification. Secondly, the contents of the FFC are investigated by categorising strategy-relevant topics and using meta-data. The results show that it is possible to predict whether the group will fully contribute to the end of the game based on facial expression data from three minutes of FFC, but deeper understanding requires a larger dataset. Facial expression analysis and content analysis found that FFC and talking until the very end had a significant, positive effect on the contributions.
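A minimal sketch of the prediction step described, random-forest classification of end-of-game contribution from aggregated facial-expression statistics; the feature layout and data below are invented stand-ins for the paper's group activity descriptor:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
X = rng.random((80, 12))          # e.g., per-group means/variances of expression scores
y = rng.integers(0, 2, size=80)   # 1 = group fully contributes at the end

forest = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(forest, X, y, cv=5).mean())   # cross-validated accuracy
```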
48

Becker-Stoll, Fabienne, Andrea Delius, and Stephanie Scheitenberger. "Adolescents’ nonverbal emotional expressions during negotiation of a disagreement with their mothers: An attachment approach." International Journal of Behavioral Development 25, no. 4 (July 2001): 344–53. http://dx.doi.org/10.1080/01650250143000102.

Abstract:
The present study investigates the influence of attachment representation on adolescents’ nonverbal behaviour during an observed mother-adolescent interaction task. In a follow-up of the Regensburg longitudinal study, 43 (of the original 51 participating families) 16-year-old adolescents and their mothers were observed in a short revealed-differences task. Ekman and Friesen’s (1978) facial expression descriptions were used in the second-by-second analysis of the adolescents’ facial expressions. The analysis assessed emotional states (anger, sadness, surprise, uneasiness, joy, smiling), manipulators or adapters as signs of tension (biting of lips, biting nails), emblems, and eye contact. Concurrently, adolescents were given the Adult Attachment Interview to assess their attachment representations using Kobak’s Adult-Attachment-Interview Q-sort. Results showed a significant relationship between adolescent attachment representation and adolescent nonverbal facial expression during the interaction task. Attachment security was related to open and positive expression of emotion, whereas a dismissive attachment style was associated with communication-inhibiting behaviour. The results are congruent with attachment theory, which claims that coherent emotional appraisals of one’s own attachment history are a prerequisite to open emotional expression and communication of one’s feelings to others.
49

Watanabe, Ayako, Masaki Ogino, and Minoru Asada. "Mapping Facial Expression to Internal States Based on Intuitive Parenting." Journal of Robotics and Mechatronics 19, no. 3 (June 20, 2007): 315–23. http://dx.doi.org/10.20965/jrm.2007.p0315.

Abstract:
Sympathy is a key issue in interaction and communication between robots and their users. In developmental psychology, intuitive parenting is considered the maternal scaffolding upon which children develop sympathy when caregivers mimic or exaggerate the child’s emotional facial expressions [1]. We model human intuitive parenting using a robot that associates a caregiver’s mimicked or exaggerated facial expressions with the robot’s internal state to learn a sympathetic response. The internal state space and facial expressions are defined using psychological studies and change dynamically in response to external stimuli. After learning, the robot responds to the caregiver’s internal state by observing human facial expressions. The robot then expresses its own internal state facially if synchronization evokes a response to the caregiver’s internal state.
50

Niu, Ben, Zhenxing Gao, and Bingbing Guo. "Facial Expression Recognition with LBP and ORB Features." Computational Intelligence and Neuroscience 2021 (January 12, 2021): 1–10. http://dx.doi.org/10.1155/2021/8828245.

Abstract:
Emotion plays an important role in communication. For human–computer interaction, facial expression recognition has become an indispensable part. Recently, deep neural networks (DNNs) have been widely used in this field, as they overcome the limitations of conventional approaches. However, the application of DNNs is very limited due to excessive hardware requirements. Considering the low hardware specifications used in real-life conditions, to gain better results without DNNs, in this paper we propose an algorithm combining oriented FAST and rotated BRIEF (ORB) features with Local Binary Pattern (LBP) features extracted from facial expressions. First of all, every image is passed through a face detection algorithm so that more effective features can be extracted. Second, in order to increase computational speed, the ORB and LBP features are extracted from the face region; specifically, region division is innovatively employed in the traditional ORB to avoid the concentration of the features. The features are invariant to scale and grayscale as well as rotation changes. Finally, the combined features are classified by a Support Vector Machine (SVM). The proposed method is evaluated on several challenging databases such as the Cohn-Kanade database (CK+), the Japanese Female Facial Expressions database (JAFFE), and the MMI database; experimental results on seven emotion states (neutral, joy, sadness, surprise, anger, fear, and disgust) show that the proposed framework is effective and accurate.
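The "region division" idea, detecting ORB keypoints per grid cell so features do not all cluster in one textured patch, can be sketched with OpenCV as follows (grid size, per-cell budget, and the small patch/edge settings are illustrative assumptions):

```python
import cv2
import numpy as np

def grid_orb_descriptors(gray: np.ndarray, grid: int = 2, per_cell: int = 10):
    # Small patch/edge settings so ORB can operate inside small cells.
    orb = cv2.ORB_create(nfeatures=per_cell, edgeThreshold=8, patchSize=8)
    h, w = gray.shape
    descs = []
    for i in range(grid):
        for j in range(grid):
            cell = gray[i * h // grid:(i + 1) * h // grid,
                        j * w // grid:(j + 1) * w // grid]
            _, d = orb.detectAndCompute(cell, None)
            if d is not None:
                descs.append(d)
    return np.vstack(descs) if descs else None

gray = (np.random.rand(128, 128) * 255).astype(np.uint8)  # stand-in face crop
d = grid_orb_descriptors(gray)
print(None if d is None else d.shape)   # (n_keypoints, 32) byte descriptors
```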
