Academic literature on the topic 'Facial-expression communication'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Facial-expression communication.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Facial-expression communication"

1

Yoshitomi, Yasunari. "Human-Computer Communication Using Facial Expression." Proceedings of International Conference on Artificial Life and Robotics 26 (January 21, 2021): 275–78. http://dx.doi.org/10.5954/icarob.2021.ps-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ju, Wang, Ding Rui, and Chun Yan Nie. "Research on the Facial Expression Feature Extraction of Facial Expression Recognition Based on MATLAB." Advanced Materials Research 1049-1050 (October 2014): 1522–25. http://dx.doi.org/10.4028/www.scientific.net/amr.1049-1050.1522.

Full text
Abstract:
In today's highly developed information age, expression is an essential means of interpersonal communication. As a carrier of information, the facial expression is rich in human behavioral information. Facial expression recognition combines many fields and is a new topic in pattern recognition. This paper studies facial feature extraction based on MATLAB: expression features are extracted from a large number of facial images using MATLAB software, so that different facial expressions can be classified more accurately.
APA, Harvard, Vancouver, ISO, and other styles
3

Park, Hyun-Shin. "A Study on Nonverbal Communication for Effective Sermon Delivery Focusing on Facial Expression and Eye Communication." Gospel and Praxis 50 (February 20, 2019): 69–99. http://dx.doi.org/10.25309/kept.2019.2.20.069.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Wang, Ju, Rui Ding, and Mei Hong. "Facial Expression Recognition Based on MATLAB." Applied Mechanics and Materials 543-547 (March 2014): 2188–91. http://dx.doi.org/10.4028/www.scientific.net/amm.543-547.2188.

Full text
Abstract:
In today's social life, communication between people is essential to almost everything we do. Facial expressions are rich in human behavioral information and are a very important means of communication; as a carrier of information, expression can convey much that the voice cannot. Expression recognition is a new task in the field of pattern recognition and an essential part of intelligent machines. This paper studies discrete-wavelet-transform feature extraction, using MATLAB software for image feature extraction and processing, and applies an elastic template-matching algorithm to test expression recognition.
APA, Harvard, Vancouver, ISO, and other styles
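For orientation, the discrete wavelet transform mentioned in the abstract above decomposes an image into low- and high-frequency subbands whose coefficients can serve as expression features. Below is a minimal one-level 2D Haar transform in Python/NumPy, offered as an illustrative sketch only, not the paper's MATLAB implementation; the 4x4 patch values are hypothetical:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2D Haar wavelet transform: returns (LL, LH, HL, HH) subbands."""
    img = img.astype(float)
    # Pairwise averages (low-pass) and differences (high-pass) along rows
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0
    # Repeat along columns to obtain the four subbands
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0  # approximation (low-low)
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0  # horizontal detail
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0  # vertical detail
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0  # diagonal detail
    return ll, lh, hl, hh

# Toy 4x4 "face patch"; LL is a half-resolution feature image
patch = np.arange(16).reshape(4, 4)
ll, lh, hl, hh = haar_dwt2(patch)
print(ll.shape)  # (2, 2)
```

In feature-extraction pipelines of this kind, the subband coefficients (often just LL, or energies of LH/HL/HH) are flattened into a feature vector before matching or classification.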
5

Sumeisey, Vivian Savenia, Rahmadsyah Rangkuti, and Rohani Ganie. "NON-VERBAL COMMUNICATION OF THE SIMPSONS MEMES IN “MEMES.COM” INSTAGRAM." Language Literacy: Journal of Linguistics, Literature, and Language Teaching 3, no. 1 (July 5, 2019): 83–88. http://dx.doi.org/10.30743/ll.v3i1.992.

Full text
Abstract:
The research aims to identify nonverbal communication, especially the kinesics aspect, in the Simpsons memes on the "memes.com" Instagram account. The nonverbal communication in the Simpsons memes conveys the meme users' emotions, feelings, and messages through expressive actions. By analyzing the nonverbal communication, meme users are able to understand the meaning of a meme, and meme readers are able to understand what the meme senders are trying to communicate. The research was conducted by means of qualitative descriptive analysis. The data of the research were the Simpsons memes, and the source of data was the "memes.com" Instagram account. The data collection was qualitative audio and visual material because the data are pictures. The sample of the research was fourteen Simpsons memes. Facial expression, posture, and gesture are the kinesics aspects found in the Simpsons memes. The results of the research were that one meme showed posture and gesture, two memes showed facial expression and gesture, three memes showed facial expression and posture, others showed only posture, and five memes showed the character's facial expression in conveying the message.
APA, Harvard, Vancouver, ISO, and other styles
6

Taee, Elaf J. Al, and Qasim Mohammed Jasim. "Blurred Facial Expression Recognition System by Using Convolution Neural Network." Webology 17, no. 2 (December 21, 2020): 804–16. http://dx.doi.org/10.14704/web/v17i2/web17068.

Full text
Abstract:
A facial expression is a visual impression of a person's situation, emotions, cognitive activity, personality, intention, and psychopathology; it has an active and vital role in the exchange of information and communication between people. In machines and robots dedicated to communication with humans, facial expression recognition plays an important role in reading what a person implies, especially in the field of health, so research in this field leads to improvements in communication with robots. The topic has been discussed extensively, and the progress of deep learning and the proven efficiency of convolutional neural networks (CNNs) in image processing have led to the use of CNNs for facial expression recognition. An automatic facial expression recognition (FER) system must perform detection and location of faces in a cluttered scene, feature extraction, and classification. In this research, a CNN performs the FER process. The target is to label each facial image with one of the seven facial emotion categories in the JAFFE database: sad, happy, fear, surprise, anger, disgust, and neutral. We trained CNNs with different depths using gray-scale images from the JAFFE database. The accuracy of the proposed system was 100%.
APA, Harvard, Vancouver, ISO, and other styles
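As background, the CNN pipeline outlined in the abstract above rests on three building blocks applied before classification: convolution, a nonlinearity, and pooling. The NumPy sketch below shows one such stage; it is purely illustrative (the image and kernel values are hypothetical, and the paper's actual architecture and JAFFE data are not reproduced here):

```python
import numpy as np

def conv2d(img, kernel):
    """'Valid'-mode 2D cross-correlation, the core CNN operation."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Elementwise nonlinearity: keep positive responses only."""
    return np.maximum(x, 0)

def max_pool(x, size=2):
    """Non-overlapping max pooling; crops edges that don't fit a full window."""
    h, w = x.shape[0] // size * size, x.shape[1] // size * size
    x = x[:h, :w]
    return x.reshape(h // size, size, w // size, size).max(axis=(1, 3))

# Toy 6x6 grayscale patch with a dark-to-bright vertical edge,
# and a hypothetical edge-detecting kernel
img = np.tile([0, 0, 0, 1, 1, 1], (6, 1)).astype(float)
kernel = np.array([[-1.0, 1.0], [-1.0, 1.0]])
feat = max_pool(relu(conv2d(img, kernel)))
print(feat.shape)  # (2, 2)
```

In a real CNN these stages are stacked, the kernels are learned from data, and the final pooled features feed a classifier over the seven emotion labels.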
7

Shang, Yuyi, Mie Sato, and Masao Kasuga. "An Interactive System with Facial Expression Recognition." Journal of Advanced Computational Intelligence and Intelligent Informatics 9, no. 6 (November 20, 2005): 637–42. http://dx.doi.org/10.20965/jaciii.2005.p0637.

Full text
Abstract:
To make communication between users and machines more comfortable, we focus on facial expressions and automatically classify them into 4 expression candidates: “joy,” “anger,” “sadness,” and “surprise.” The classification uses features that correspond to expression-motion patterns, and then voice data is output based on classification results. When we output voice data, insufficiency in classification is taken into account. We choose the first and second expression candidates from classification results. To realize interactive communication between users and machines, information on these candidates is used when we access a voice database. The voice database contains voice data corresponding to emotions.
APA, Harvard, Vancouver, ISO, and other styles
8

Wang, Jianmin, Yuxi Wang, Yujia Liu, Tianyang Yue, Chengji Wang, Weiguang Yang, Preben Hansen, and Fang You. "Experimental Study on Abstract Expression of Human-Robot Emotional Communication." Symmetry 13, no. 9 (September 14, 2021): 1693. http://dx.doi.org/10.3390/sym13091693.

Full text
Abstract:
With the continuous development of intelligent product interaction technology, the facial expression design of virtual images on the interactive interface of intelligent products has become an important research topic. Based on the current research on facial expression design of existing intelligent products, we symmetrically mapped the PAD (pleasure–arousal–dominance) emotion value to the image design, explored the characteristics of abstract expressions and the principles of expression design, and evaluated them experimentally. In this study, the experiment of PAD scores was conducted on the emotion expression design of abstract expressions, and the data results were analyzed to iterate the expression design. The experimental results show that PAD values can effectively guide designers in expression design. Meanwhile, the efficiency and recognition accuracy of human communication with abstract expression design can be improved by facial auxiliary elements and eyebrows.
APA, Harvard, Vancouver, ISO, and other styles
9

Made Chintya Maha, Yekti. "KINESICS INTERACTION: TOWARDS EYE CONTACT, POSTURE AND FACIAL EXPRESSION OF EDWARD AND BELLA IN A MOVIE ENTITLED “TWILIGHT”." Lingua Scientia 24, no. 1 (June 30, 2017): 27. http://dx.doi.org/10.23887/ls.v24i1.18795.

Full text
Abstract:
This study discusses nonverbal communication, particularly body language. It focuses on kinesics such as eye contact, posture, and facial expression of the male main character (Edward Cullen) and the female main character (Bella Swan) in the Twilight movie by Stephenie Meyer. The aim of this study is to uncover the meaning behind the nonverbal communication of the male and female main characters as they act in the movie. The method used to answer the research problem is descriptive qualitative. The data of this study is a film entitled Twilight, produced in 2008, and the data are described in the form of images and words. The study shows three kinds of nonverbal communication used by the male and female main characters: eye contact, posture, and facial expression. The male character uses concerned, serious, brave, romantic, and cool postures and friendly, bright eyes, whereas the female character uses dim eye contact, a glazed and shocked posture, and an amazed facial expression. Several differences were found in the use of nonverbal communication between the male and female characters in the movie.
APA, Harvard, Vancouver, ISO, and other styles
10

LI, YI, and MINORU HASHIMOTO. "EMOTIONAL SYNCHRONIZATION-BASED HUMAN–ROBOT COMMUNICATION AND ITS EFFECTS." International Journal of Humanoid Robotics 10, no. 01 (March 2013): 1350014. http://dx.doi.org/10.1142/s021984361350014x.

Full text
Abstract:
This paper presents a natural and comfortable communication system between human and robot based on synchronization to human emotional state using human facial expression recognition. The system consists of three parts: human emotion recognition, robotic emotion generation, and robotic emotion expression. The robot recognizes human emotion through human facial expressions, and robotic emotion is generated and synchronized with human emotion dynamically using a vector field of dynamics. The robot makes dynamically varying facial expressions to express its own emotions to the human. A communication experiment was conducted to examine the effectiveness of the proposed system. The authors found that subjects became much more comfortable after communicating with the robot with synchronized emotions. Subjects felt somewhat uncomfortable after communicating with the robot with non-synchronized emotions. During emotional synchronization, subjects communicated much more with the robot, and the communication time was double that during non-synchronization. Furthermore, in the case of emotional synchronization, subjects had good impressions of the robot, much better than the impressions in the case of non-synchronization. It was confirmed in this study that emotional synchronization in human–robot communication can be effective in making humans comfortable and makes the robot much more favorable and acceptable to humans.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Facial-expression communication"

1

Murphy, Suzanne Marguerite. "Young children's behaviour and interactive tasks : the effects of popularity on communication and task performance." Thesis, Open University, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.287005.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Slessor, Gillian. "Age-related changes in decoding basic social cues from the eyes." Thesis, Available from the University of Aberdeen Library and Historic Collections Digital Resources, 2009. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?application=DIGITOOL-3&owner=resourcediscovery&custom_att_2=simple_viewer&pid=53353.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

McLellan, Tracey Lee. "Sensitivity to Emotion Specified in Facial Expressions and the Impact of Aging and Alzheimer's Disease." Thesis, University of Canterbury. Psychology, 2008. http://hdl.handle.net/10092/1979.

Full text
Abstract:
This thesis describes a program of research that investigated the sensitivity of healthy young adults, healthy older adults, and individuals with Alzheimer's disease (AD) to happiness, sadness, and fear emotion specified in facial expressions. In particular, the research investigated the sensitivity of these individuals to the distinctions between spontaneous expressions of emotional experience (genuine expressions) and deliberate, simulated expressions of emotional experience (posed expressions). The specific focus was to examine whether aging and/or AD affects sensitivity to the target emotions. Emotion-categorization and priming tasks were completed by all participants. The tasks employed an original set of ecologically valid facial displays generated specifically for the present research. The categorization task (Experiments 1a, 2a, 3a, 4a) required participants to judge whether targets were, or were not, showing and feeling each target emotion. The results showed that all 3 groups identified a genuine expression as both showing and feeling the target emotion, whilst a posed expression was identified more frequently as showing than feeling the emotion. Signal detection analysis demonstrated that all 3 groups were sensitive to the expression of emotion, reliably differentiating expressions of experienced emotion (genuine expressions) from expressions unrelated to emotional experience (posed and neutral expressions). In addition, both healthy young and older adults could reliably differentiate between posed and genuine expressions of happiness and sadness, whereas individuals with AD could not. Sensitivity to emotion specified in facial expressions was found to be emotion specific and to be independent of both the level of general cognitive functioning and of specific cognitive functions. The priming task (Experiments 1b, 2b, 3b, 4b) employed the facial expressions as primes in a word-valence task in order to investigate spontaneous attention to facial expression. Healthy young adults only showed an emotion-congruency priming effect for genuine expressions. Healthy older adults and individuals with AD showed no priming effects. Results are discussed in terms of the understanding of the recognition of emotional states in others and the impact of aging and AD on that recognition. Consideration is given to how these findings might influence the care and management of individuals with AD.
APA, Harvard, Vancouver, ISO, and other styles
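As context for the signal detection analysis mentioned in the abstract above: sensitivity in such tasks is typically summarized by d', the difference between the z-transformed hit and false-alarm rates. A minimal sketch in Python follows; the example rates are hypothetical and not taken from the thesis:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical rates: proportion of genuine expressions judged as 'felt' (hits)
# vs. posed expressions judged as 'felt' (false alarms)
print(round(d_prime(0.85, 0.30), 2))  # ~1.56
```

A d' near 0 indicates no discrimination between genuine and posed expressions; larger values indicate greater sensitivity, so group differences (e.g., AD vs. healthy adults) can be compared on a common scale.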
4

Visser, Naomi. "The ability of four-year-old children to recognise basic emotions represented by graphic symbols." Pretoria : [s.n.], 2006. http://upetd.up.ac.za/thesis/available/etd-11162007-164230.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Peschka-Daskalos, Patricia Jean. "An Intercultural Analysis of Differences in Appropriateness Ratings of Facial Expressions Between Japanese and American Subjects." PDXScholar, 1993. https://pdxscholar.library.pdx.edu/open_access_etds/4700.

Full text
Abstract:
In 1971 Paul Ekman posited his Neuro-Cultural Theory of Emotion, which stated that expressions of emotion are universal but controlled by cultural display rules. This thesis tests the Neuro-Cultural Theory by having subjects from two cultures, Japan and the United States, judge the perceived appropriateness of facial expressions in social situations. Preliminary procedures resulted in a set of scenarios in which socially appropriate responses were deemed to be either "Happy," "Angry," or "Surprised." Data in the experimental phase of the study were collected using a questionnaire format. Through the use of a 5-point Likert scale, each subject rated the appropriateness of happy, angry, and surprised expressions in positive, negative, and ambiguous social situations. Additionally, the subjects were asked to label each expression in each situation. The responses were analyzed statistically using Analysis of Variance procedures. Label percentages were also calculated for the second task in the study. No support was found for two of the three research hypotheses, and only partial support was found for the third. These results were discussed in terms of the need for greater theoretical and methodological refinement.
APA, Harvard, Vancouver, ISO, and other styles
6

Wang, Lan. "Towards Enhancing Human-robot Communication for Industrial Robots: A Study in Facial Expressions Mot Förbättra Människa-robot Kommunikation för Industrirobotar : En studie i ansiktsuttryck." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-189569.

Full text
Abstract:
Collaborative robots are becoming more commonplace within factories to work alongside their human counterparts. With this newfound perspective towards robots being seen as collaborative partners comes the question of how interacting with these machines will change. This thesis therefore focuses on investigating the connection between facial expression communication in industrial robots and users' perceptions. Experiments were conducted to investigate the relationship between users' perceptions towards both existing facial expressions of the Baxter robot (an industrial robot by Rethink Robotics) and redesigned versions of these facial expressions. Findings reveal that the redesigned facial expressions provide a better match to users’ expectations. In addition, insights into improving the expressive communication between humans and robots are discussed, including the need for additional solutions which can complement the facial expressions displayed by providing more detailed information as needed. The last section of this thesis presents future research directions towards building a more intuitive and user-friendly human-robot cooperation space for future industrial robots.
APA, Harvard, Vancouver, ISO, and other styles
7

Asplund, Kenneth. "The experience of meaning in the care of patients in the terminal stage of dementia of the Alzheimer type : interpretation of non-verbal communication and ethical demands." Doctoral thesis, Umeå universitet, Institutionen för omvårdnad, 1991. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-96891.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Tsalamlal, Mohamed Yacine. "Communication affective médiée via une interface tactile." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLS379/document.

Full text
Abstract:
Affective communication plays a major role in our interpersonal interactions. We communicate emotions through multiple nonverbal channels. Research on human-computer interaction has exploited these communication channels in order to design systems that automatically recognize and display emotional signals. Touch has received less interest than other nonverbal modalities in this area of research. The intrusive aspect of current haptic interfaces is one of the main obstacles to their use in mediated emotional communication: the user must be physically connected to mechanical systems to receive the stimulation. This configuration affects the transparency of the mediated interaction and limits the perception of certain emotional dimensions such as valence. The objective of this thesis is to propose and study a tactile stimulation technique that does not require contact with mechanical systems to transmit affective signals. On the basis of the state of the art of haptic interfaces, we proposed a tactile stimulation strategy based on a mobile air jet. This technique provides non-intrusive tactile stimulation on different areas of the body. In addition, this tactile device allows effective stimulation of certain mechanoreceptors that play an important role in the perception of positive affect. We conducted an experimental study to understand the relationships between the physical characteristics of air-jet tactile stimulation and users' emotional perception. The results highlight the main effects of the intensity and the velocity of movement of the air jet on subjective evaluations measured in affective space (namely valence, arousal, and dominance). The communication of emotions is clearly multimodal: we use touch jointly with other modalities to communicate different emotional messages.
We therefore conducted two experimental studies to examine the combination of air-jet tactile stimulation with facial and vocal expressions for the perception of valence. These experiments were conducted within a theoretical and experimental framework called Information Integration Theory, which models the integration of information from multiple sources using a cognitive algebra. Our work suggests that air-jet tactile stimulation can be used to transmit emotional signals in human-machine interactions. Perceptual bimodal integration models can be exploited to build computational models that display affect by combining tactile stimulation with facial expressions or the voice.
APA, Harvard, Vancouver, ISO, and other styles
9

Atwood, Kristen Diane. "Recognition of Facial Expressions of Six Emotions by Children with Specific Language Impairment." Diss., CLICK HERE for online access, 2006. http://contentdm.lib.byu.edu/ETD/image/etd1501.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Stott, Dorthy A. "Recognition of Emotion in Facial Expressions by Children with Language Impairment." Diss., CLICK HERE for online access, 2008. http://contentdm.lib.byu.edu/ETD/image/etd2513.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Facial-expression communication"

1

Koe to kao no chūseishi: Ikusa to soshō no jōkei yori. Tōkyō: Yoshikawa Kōbunkan, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Nonverbal communication: Science and applications. Thousand Oaks: SAGE Publications, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Haller, Bernard. Le visage parle. Paris: Balland, 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Zhixin, Yi, ed. Xin li xue jia de mian xiang shu: Jie du qing xu de mi ma = Emotions revealed : understanding faces and feelings. Taibei Shi: Xin ling gong fang wen hua shi ye gu fen you xian gong si, 2004.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Meuter, Norbert. Anthropologie des Ausdrucks: Die Expressivitat des Menschen zwischen Natur und Kultur. Munchen: Wilhelm Fink, 2006.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Anthropologie des Ausdrucks: Die Expressivität des Menschen zwischen Natur und Kultur. München: Wilhelm Fink, 2006.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Le doux et l'amer: Sensation gustative, émotion et communication chez le jeune enfant. Paris: Presses universitaires de France, 1985.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Barabanshchikov, V. A. Lit︠s︡o cheloveka kak sredstvo obshchenii︠a︡: Mezhdist︠s︡iplinarnyĭ podkhod. Moskva: Kogito-t︠s︡entr, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Melanie, Metzger, ed. Deaf tend your: Non-manual signals in American Sign Language. Silver Spring, Md: Calliope Press, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Langage et interactions sociales: La fonction stratégique du langage dans les jeux de face. Paris: Harmattan, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Facial-expression communication"

1

Valstar, Michel. "Automatic Facial Expression Analysis." In Understanding Facial Expressions in Communication, 143–72. New Delhi: Springer India, 2014. http://dx.doi.org/10.1007/978-81-322-1934-7_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Mishra, Sweta, and Sheweta Talashi. "Facial Expression Recognition System Using Different Methods." In Computer Networks and Inventive Communication Technologies, 185–96. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-15-9647-6_15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Kyperountas, Marios, and Ioannis Pitas. "Facial Expression Recognition Using Two-Class Discriminant Features." In Biometric ID Management and Multimodal Communication, 89–96. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04391-8_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Song, Miao, Qian Qian, and Shinomori Keizo. "The Effect of Expression Geometry and Facial Identity on the Expression Aftereffect." In IFIP Advances in Information and Communication Technology, 124–29. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-68121-4_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Kumar, Pradip, Ankit Kishore, and Raksha Pandey. "Emotion Recognition of Facial Expression Using Convolutional Neural Network." In Innovative Data Communication Technologies and Application, 362–69. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-38040-3_41.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Ashok, Aswathy, and Jisha John. "Facial Expression Recognition System for Visually Impaired." In International Conference on Intelligent Data Communication Technologies and Internet of Things (ICICI) 2018, 244–50. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-03146-6_26.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Sen, Arnaja, Dhaval Popat, Hardik Shah, Priyanka Kuwor, and Era Johri. "Music Playlist Generation Using Facial Expression Analysis and Task Extraction." In Intelligent Communication and Computational Technologies, 129–39. Singapore: Springer Singapore, 2017. http://dx.doi.org/10.1007/978-981-10-5523-2_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Srinivasa Reddy, K., E. Sunil Reddy, and N. Baswanth. "Facial Expression Recognition by Considering Nonuniform Local Binary Patterns." In Emerging Research in Computing, Information, Communication and Applications, 645–58. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-6001-5_55.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Poria, Swarup, Ananya Mondal, and Pritha Mukhopadhyay. "Evaluation of the Intricacies of Emotional Facial Expression of Psychiatric Patients Using Computational Models." In Understanding Facial Expressions in Communication, 199–226. New Delhi: Springer India, 2014. http://dx.doi.org/10.1007/978-81-322-1934-7_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Navastara, Dini Adni, Hendry Wiranto, Chastine Fatichah, and Nanik Suciati. "Facial Expression Recognition Using Wavelet Transform and Convolutional Neural Network." In Advances in Computer, Communication and Computational Sciences, 941–52. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-4409-5_83.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Facial-expression communication"

1

Mal, Hari Prasad, and P. Swarnalatha. "Facial expression detection using facial expression model." In 2017 International Conference on Energy, Communication, Data Analytics and Soft Computing (ICECDS). IEEE, 2017. http://dx.doi.org/10.1109/icecds.2017.8389644.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Santos, Patrick, Erbene de Castro Maia, Miriam Goubran, and Emil Petriu. "Facial expression communication for healthcare androids." In 2013 IEEE International Symposium on Medical Measurements and Applications (MeMeA). IEEE, 2013. http://dx.doi.org/10.1109/memea.2013.6549703.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

"Session: Facial Expression Understanding." In 2019 IEEE 15th International Conference on Intelligent Computer Communication and Processing (ICCP). IEEE, 2019. http://dx.doi.org/10.1109/iccp48234.2019.8959631.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Akyol, Funda, and Pinar Duygulu Sahin. "Image-based facial expression detection." In 2016 24th Signal Processing and Communication Application Conference (SIU). IEEE, 2016. http://dx.doi.org/10.1109/siu.2016.7495814.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Balasubramani, A., K. Kalaivanan, R. C. Karpagalakshmi, and R. Monikandan. "Automatic facial expression recognition system." In 2008 International Conference on Computing, Communication and Networking (ICCCN). IEEE, 2008. http://dx.doi.org/10.1109/icccnet.2008.4787749.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Anusha, A. V., J. K. Jayasree, Anusree Bhaskar, and R. P. Aneesh. "Facial expression recognition and gender classification using facial patches." In 2016 International Conference on Communication Systems and Networks (ComNet). IEEE, 2016. http://dx.doi.org/10.1109/csn.2016.7824014.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Tripathi, Prashant, Kumar Ankit, Rishabh Sharma, and Tarun Kumar. "Facial Expression Recognition Through CNN." In 2021 6th International Conference on Communication and Electronics Systems (ICCES). IEEE, 2021. http://dx.doi.org/10.1109/icces51350.2021.9488963.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Kaleekal, Tobin T., and Jyotsna Singh. "Facial Expression Recognition using Hahn Moment on Facial Patches." In 2019 4th International Conference on Recent Trends on Electronics, Information, Communication & Technology (RTEICT). IEEE, 2019. http://dx.doi.org/10.1109/rteict46194.2019.9016953.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Ekmen, Beste, and Hazim Kemal Ekenel. "Real time animated facial expression transfer." In 2016 24th Signal Processing and Communication Application Conference (SIU). IEEE, 2016. http://dx.doi.org/10.1109/siu.2016.7495959.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Ashir, Abubakar M., and Alaa Eleyan. "Compressive sensing based facial expression recognition." In 2016 24th Signal Processing and Communication Application Conference (SIU). IEEE, 2016. http://dx.doi.org/10.1109/siu.2016.7495971.

Full text
APA, Harvard, Vancouver, ISO, and other styles