Academic literature on the topic 'Facial expression understanding'
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Facial expression understanding.'
Journal articles on the topic "Facial expression understanding"
Calder, Andrew J., and Andrew W. Young. "Understanding the recognition of facial identity and facial expression." Nature Reviews Neuroscience 6, no. 8 (August 2005): 641–51. http://dx.doi.org/10.1038/nrn1724.
Lisetti, Christine L., and Diane J. Schiano. "Automatic facial expression interpretation." Facial Information Processing 8, no. 1 (May 17, 2000): 185–235. http://dx.doi.org/10.1075/pc.8.1.09lis.
Lazzeri, Nicole, Daniele Mazzei, Maher Ben Moussa, Nadia Magnenat-Thalmann, and Danilo De Rossi. "The influence of dynamics and speech on understanding humanoid facial expressions." International Journal of Advanced Robotic Systems 15, no. 4 (July 1, 2018): 172988141878315. http://dx.doi.org/10.1177/1729881418783158.
Padmapriya K.C., Leelavathy V., and Angelin Gladston. "Automatic Multiface Expression Recognition Using Convolutional Neural Network." International Journal of Artificial Intelligence and Machine Learning 11, no. 2 (July 2021): 1–13. http://dx.doi.org/10.4018/ijaiml.20210701.oa8.
ter Stal, Silke, Gerbrich Jongbloed, and Monique Tabak. "Embodied Conversational Agents in eHealth: How Facial and Textual Expressions of Positive and Neutral Emotions Influence Perceptions of Mutual Understanding." Interacting with Computers 33, no. 2 (March 2021): 167–76. http://dx.doi.org/10.1093/iwc/iwab019.
Pancotti, Francesco, Sonia Mele, Vincenzo Callegari, Raffaella Bivi, Francesca Saracino, and Laila Craighero. "Efficacy of Facial Exercises in Facial Expression Categorization in Schizophrenia." Brain Sciences 11, no. 7 (June 22, 2021): 825. http://dx.doi.org/10.3390/brainsci11070825.
Wang, Fengyuan, Jianhua Lv, Guode Ying, Shenghui Chen, and Chi Zhang. "Facial expression recognition from image based on hybrid features understanding." Journal of Visual Communication and Image Representation 59 (February 2019): 84–88. http://dx.doi.org/10.1016/j.jvcir.2018.11.010.
Parr, Lisa A., and Bridget M. Waller. "Understanding chimpanzee facial expression: insights into the evolution of communication." Social Cognitive and Affective Neuroscience 1, no. 3 (December 1, 2006): 221–28. http://dx.doi.org/10.1093/scan/nsl031.
Uddin, Md Azher, Joolekha Bibi Joolee, and Kyung-Ah Sohn. "Dynamic Facial Expression Understanding Using Deep Spatiotemporal LDSP On Spark." IEEE Access 9 (2021): 16866–77. http://dx.doi.org/10.1109/access.2021.3053276.
Arya, Ali, Steve DiPaola, and Avi Parush. "Perceptually Valid Facial Expressions for Character-Based Applications." International Journal of Computer Games Technology 2009 (2009): 1–13. http://dx.doi.org/10.1155/2009/462315.
Dissertations / Theses on the topic "Facial expression understanding"
Choudhury, Tanzeem Khalid. "FaceFacts: study of facial features for understanding expression." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/61109.
Bloom, Elana. "Recognition, expression, and understanding facial expressions of emotion in adolescents with nonverbal and general learning disabilities." Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=100323.
Adolescents aged 12 to 15 were screened for LD and NLD using the Wechsler Intelligence Scale for Children, Third Edition (WISC-III; Wechsler, 1991) and the Wide Range Achievement Test, Third Edition (WRAT3; Wilkinson, 1993), and were subtyped into NVLD and GLD groups based on the WRAT3. The NVLD (n = 23), matched NLD (n = 23), and comparable GLD (n = 23) groups completed attention, mood, and neuropsychological measures. The adolescents' ability to recognize (Pictures of Facial Affect; Ekman & Friesen, 1976), express, and understand facial expressions of emotion, as well as their general social functioning, was assessed. Results indicated that the GLD group was significantly less accurate at recognizing and understanding facial expressions of emotion than the NVLD and NLD groups, which did not differ from each other. No differences emerged among the NVLD, NLD, and GLD groups on the expression or social functioning tasks. The neuropsychological measures did not account for a significant portion of the variance on the emotion tasks. Implications regarding severity of LD are discussed.
Maas, Casey. "Decoding Faces: The Contribution of Self-Expressiveness Level and Mimicry Processes to Emotional Understanding." Scholarship @ Claremont, 2014. http://scholarship.claremont.edu/scripps_theses/406.
Miners, William Ben. "Toward Understanding Human Expression in Human-Robot Interaction." Thesis, University of Waterloo, 2006. http://hdl.handle.net/10012/789.
Full textAn intuitive method to minimize human communication effort with intelligent devices is to take advantage of our existing interpersonal communication experience. Recent advances in speech, hand gesture, and facial expression recognition provide alternate viable modes of communication that are more natural than conventional tactile interfaces. Use of natural human communication eliminates the need to adapt and invest time and effort using less intuitive techniques required for traditional keyboard and mouse based interfaces.
Although the state of the art in natural but isolated modes of communication achieves impressive results, significant hurdles must be conquered before communication with devices in our daily lives will feel natural and effortless. Research has shown that combining information between multiple noise-prone modalities improves accuracy. Leveraging this complementary and redundant content will improve communication robustness and relax current unimodal limitations.
This research presents and evaluates a novel multimodal framework to help reduce the total human effort and time required to communicate with intelligent devices. This reduction is realized by determining human intent using a knowledge-based architecture that combines and leverages conflicting information available across multiple natural communication modes and modalities. The effectiveness of this approach is demonstrated using dynamic hand gestures and simple facial expressions characterizing basic emotions. It is important to note that the framework is not restricted to these two forms of communication. The framework presented in this research provides the flexibility necessary to include additional or alternate modalities and channels of information in future research, including improving the robustness of speech understanding.
The primary contributions of this research include the leveraging of conflicts in a closed-loop multimodal framework, explicit use of uncertainty in knowledge representation and reasoning across multiple modalities, and a flexible approach for leveraging domain specific knowledge to help understand multimodal human expression. Experiments using a manually defined knowledge base demonstrate an improved average accuracy of individual concepts and an improved average accuracy of overall intents when leveraging conflicts as compared to an open-loop approach.
Stott, Dorthy A. "Recognition of Emotion in Facial Expressions by Children with Language Impairment." Diss., Brigham Young University, 2008. http://contentdm.lib.byu.edu/ETD/image/etd2513.pdf.
Weber, Marlene. "Automotive emotions: a human-centred approach towards the measurement and understanding of drivers' emotions and their triggers." Thesis, Brunel University, 2018. http://bura.brunel.ac.uk/handle/2438/16647.
Sauer, Patrick Martin. "Model-based understanding of facial expressions." Thesis, University of Manchester, 2013. https://www.research.manchester.ac.uk/portal/en/theses/modelbased-understanding-of-facial-expressions(e88bff4f-d72e-4d11-b964-fc20f009609b).html.
Alves, Ana Rita Coutinho. "Desenvolvimento das competências emocionais em crianças com idades compreendidas entre os 3 e os 5 anos através do programa 'Ser e Conhecer'" [Development of emotional competencies in children aged 3 to 5 through the "Being and Knowing" program]. Master's thesis, ISPA - Instituto Universitário, 2013. http://hdl.handle.net/10400.12/2815.
The present study assesses the impact of the program "Being and Knowing" ("Ser e Conhecer") on the development of certain emotional competencies, specifically the naming and identification of facial expressions of basic emotions, the capacity for causal understanding, and the capacity for affective decentration. Fifty-seven children aged 36 to 71 months, attending kindergarten and mixed classrooms, participated in the study. The development of emotional competence was evaluated with the Teste de Conhecimento das Emoções: Manual do Fantoche, the Portuguese version of the Affect Knowledge Test, in a pre-test and a post-test. In between, the participants in the experimental group took part in the "Being and Knowing" program. Since only the basic emotions are addressed, the program demonstrated no impact on competence development.
Collins, Michael S. "Understanding the Expressive Cartoon Drawings of a Student with Autism Spectrum Disorder." VCU Scholars Compass, 2017. http://scholarscompass.vcu.edu/etd/4893.
Chiu, Jeng-Ping (邱正平). "Understanding System on Facial Expression and Action for SUFFERING Factors." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/gt7j54.
National Cheng Kung University, Department of Electrical Engineering, academic year 104 (2015–16).
In recent years, the application of and demand for intelligent human-machine interfaces have grown steadily. Techniques for understanding human emotion are no longer restricted to the analysis of text, voice, and observation. With rapid improvements in pattern recognition, facial expression and activity recognition technology has been widely used in home-care robotics, monitoring equipment, and human behavior analysis. This thesis proposes an understanding system for SUFFERING factors that interprets negative emotions, since human feelings cannot be represented by facial expressions or actions alone. The proposed system consists of facial expression recognition and action detection. By analogy with the Action Unit (AU), this work proposes a novel Suffering Unit (SU), composed of facial and posture action units. After capturing the whole body with a Kinect v2, the system recognizes both facial expression and action and outputs the results in real time. The proposed Hierarchy-Coherence K-Nearest Neighbor (HC-KNN) computes the coherence of the training data and improves on KNN for facial expression recognition. In addition, an Average Moving Action Status Window (AMASW) is proposed to build the action detection system. With the proposed understanding system, SUFFERING factors can be identified through SUs covering 19 kinds of facial expressions and actions. Experimental results demonstrate the effectiveness of the proposed system: the recognition rate reaches 87.74% for facial expression and 90.81% for action.
Books on the topic "Facial expression understanding"
Zhixin, Yi, ed. Xin li xue jia de mian xiang shu: Jie du qing xu de mi ma = Emotions revealed : understanding faces and feelings. Taibei Shi: Xin ling gong fang wen hua shi ye gu fen you xian gong si, 2004.
Neu, Harold C., ed. Understanding Infectious Disease. St. Louis: Mosby Year Book, 1992.
Mandal, Manas K., and Avinash Awasthi, eds. Understanding Facial Expressions in Communication. New Delhi: Springer India, 2015. http://dx.doi.org/10.1007/978-81-322-1934-7.
Profyt, Linda. Children's Understanding of Emotions in Facial Expressions. Sudbury, Ont.: Laurentian University, Department of Psychology, 1990.
Lee, Daniel H., and Adam K. Anderson. Form and Function of Facial Expressive Origins. Oxford University Press, 2017. http://dx.doi.org/10.1093/acprof:oso/9780190613501.003.0010.
Diogo, Rui, and Sharlene E. Santana. Evolution of Facial Musculature. Oxford University Press, 2017. http://dx.doi.org/10.1093/acprof:oso/9780190613501.003.0008.
Ross, Jacob DC. Making Faces: Understanding Facial Expressions for Autistic Kids. Jacob DC Ross, 2015.
Mason, Peggy. From Movement to Action. Oxford University Press, 2017. http://dx.doi.org/10.1093/med/9780190237493.003.0023.
Mandal, Manas K., and Avinash Awasthi. Understanding Facial Expressions in Communication: Cross-cultural and Multidisciplinary Perspectives. Springer, 2014.
Mandal, Manas K., and Avinash Awasthi. Understanding Facial Expressions in Communication: Cross-cultural and Multidisciplinary Perspectives. Springer, 2016.
Book chapters on the topic "Facial expression understanding"
Gong, Shaogang, and Tao Xiang. "Understanding Facial Expression." In Visual Analysis of Behaviour, 69–93. London: Springer London, 2011. http://dx.doi.org/10.1007/978-0-85729-670-2_4.
Valstar, Michel. "Automatic Facial Expression Analysis." In Understanding Facial Expressions in Communication, 143–72. New Delhi: Springer India, 2014. http://dx.doi.org/10.1007/978-81-322-1934-7_8.
Poria, Swarup, Ananya Mondal, and Pritha Mukhopadhyay. "Evaluation of the Intricacies of Emotional Facial Expression of Psychiatric Patients Using Computational Models." In Understanding Facial Expressions in Communication, 199–226. New Delhi: Springer India, 2014. http://dx.doi.org/10.1007/978-81-322-1934-7_10.
Drira, Hassen, Boulbaba Ben Amor, Mohamed Daoudi, and Stefano Berretti. "A Dense Deformation Field for Facial Expression Analysis in Dynamic Sequences of 3D Scans." In Human Behavior Understanding, 148–59. Cham: Springer International Publishing, 2013. http://dx.doi.org/10.1007/978-3-319-02714-2_13.
Chen, Luefeng, Min Wu, Witold Pedrycz, and Kaoru Hirota. "Weight-Adapted Convolution Neural Network for Facial Expression Recognition." In Emotion Recognition and Understanding for Emotional Human-Robot Interaction Systems, 57–75. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-61577-2_5.
Cruz, Alberto C., B. Bhanu, and N. S. Thakoor. "Understanding of the Biological Process of Nonverbal Communication: Facial Emotion and Expression Recognition." In Computational Biology, 329–47. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-23724-4_18.
Regenbogen, Christina, and Ute Habel. "Facial Expressions in Empathy Research." In Understanding Facial Expressions in Communication, 101–17. New Delhi: Springer India, 2014. http://dx.doi.org/10.1007/978-81-322-1934-7_6.
Awasthi, Avinash, and Manas K. Mandal. "Facial Expressions of Emotions: Research Perspectives." In Understanding Facial Expressions in Communication, 1–18. New Delhi: Springer India, 2014. http://dx.doi.org/10.1007/978-81-322-1934-7_1.
Frank, Mark G., and Elena Svetieva. "Microexpressions and Deception." In Understanding Facial Expressions in Communication, 227–42. New Delhi: Springer India, 2014. http://dx.doi.org/10.1007/978-81-322-1934-7_11.
Castillo, Paola A. "The Detection of Deception in Cross-Cultural Contexts." In Understanding Facial Expressions in Communication, 243–63. New Delhi: Springer India, 2014. http://dx.doi.org/10.1007/978-81-322-1934-7_12.
Conference papers on the topic "Facial expression understanding"
"Session: Facial Expression Understanding." In 2019 IEEE 15th International Conference on Intelligent Computer Communication and Processing (ICCP). IEEE, 2019. http://dx.doi.org/10.1109/iccp48234.2019.8959631.
Ou, Yang-Yen, Ta-Wen Kuan, An-Chao Tsai, Jhing-Fa Wang, and Jheng-Ping Chiou. "Suffering understanding system based on facial expression and human action." In 2016 International Conference on Orange Technologies (ICOT). IEEE, 2016. http://dx.doi.org/10.1109/icot.2016.8278977.
McDuff, Daniel, Rana el Kaliouby, Karim Kassam, and Rosalind Picard. "Acume: A new visualization tool for understanding facial expression and gesture data." In Gesture Recognition (FG 2011). IEEE, 2011. http://dx.doi.org/10.1109/fg.2011.5771464.
"DYNAMIC FACIAL EXPRESSION UNDERSTANDING BASED ON TEMPORAL MODELLING OF TRANSFERABLE BELIEF MODEL." In International Conference on Computer Vision Theory and Applications. SciTePress - Science and Technology Publications, 2006. http://dx.doi.org/10.5220/0001377600930100.
Zhang, Yongmian, and Qiang Ji. "Facial expression understanding in image sequences using dynamic and active visual information fusion." In ICCV 2003: 9th International Conference on Computer Vision. IEEE, 2003. http://dx.doi.org/10.1109/iccv.2003.1238640.
Pham, Phuong, and Jingtao Wang. "Understanding Emotional Responses to Mobile Video Advertisements via Physiological Signal Sensing and Facial Expression Analysis." In IUI'17: 22nd International Conference on Intelligent User Interfaces. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3025171.3025186.
Buzuti, Lucas Fontes, and Carlos Eduardo Thomaz. "Understanding fully-connected and convolutional layers in unsupervised learning using face images." In XV Workshop de Visão Computacional. Sociedade Brasileira de Computação - SBC, 2019. http://dx.doi.org/10.5753/wvc.2019.7621.
Seshadri, Priya, Youyi Bi, Jaykishan Bhatia, Ross Simons, Jeffrey Hartley, and Tahira Reid. "Evaluations That Matter: Customer Preferences Using Industry-Based Evaluations and Eye-Gaze Data." In ASME 2016 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2016. http://dx.doi.org/10.1115/detc2016-60293.
Sun, Ran, Harald Haraldsson, Yuhang Zhao, and Serge Belongie. "Anon-Emoji: An Optical See-Through Augmented Reality System for Children with Autism Spectrum Disorders to promote Understanding of Facial Expressions and Emotions." In 2019 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct). IEEE, 2019. http://dx.doi.org/10.1109/ismar-adjunct.2019.00052.
Myasnikova, Lyudmila, and Elena Shlegel. "Transformation of Individuality & Publicity: Philosophic-Anthropological Analysis." In The Public/Private in Modern Civilization, the 22nd Russian Scientific-Practical Conference (with international participation) (Yekaterinburg, April 16-17, 2020). Liberal Arts University – University for Humanities, Yekaterinburg, 2020. http://dx.doi.org/10.35853/ufh-public/private-2020-02.