Dissertations / Theses on the topic 'Facial-expression communication'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 25 dissertations / theses for your research on the topic 'Facial-expression communication.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Murphy, Suzanne Marguerite. "Young children's behaviour and interactive tasks : the effects of popularity on communication and task performance." Thesis, Open University, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.287005.
Slessor, Gillian. "Age-related changes in decoding basic social cues from the eyes." Thesis, Available from the University of Aberdeen Library and Historic Collections Digital Resources, 2009. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?application=DIGITOOL-3&owner=resourcediscovery&custom_att_2=simple_viewer&pid=53353.
McLellan, Tracey Lee. "Sensitivity to Emotion Specified in Facial Expressions and the Impact of Aging and Alzheimer's Disease." Thesis, University of Canterbury. Psychology, 2008. http://hdl.handle.net/10092/1979.
Visser, Naomi. "The ability of four-year-old children to recognise basic emotions represented by graphic symbols." Pretoria : [s.n.], 2006. http://upetd.up.ac.za/thesis/available/etd-11162007-164230.
Peschka-Daskalos, Patricia Jean. "An Intercultural Analysis of Differences in Appropriateness Ratings of Facial Expressions Between Japanese and American Subjects." PDXScholar, 1993. https://pdxscholar.library.pdx.edu/open_access_etds/4700.
Wang, Lan. "Towards Enhancing Human-robot Communication for Industrial Robots: A Study in Facial Expressions Mot Förbättra Människa-robot Kommunikation för Industrirobotar : En studie i ansiktsuttryck." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-189569.
Asplund, Kenneth. "The experience of meaning in the care of patients in the terminal stage of dementia of the Alzheimer type : interpretation of non-verbal communication and ethical demands." Doctoral thesis, Umeå universitet, Institutionen för omvårdnad, 1991. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-96891.
Tsalamlal, Mohamed Yacine. "Communication affective médiée via une interface tactile." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLS379/document.
Full textAffective communication plays a major role in our interpersonal interactions. We communicate emotions through multiple non-verbal channels. Researches on human-computer interaction have exploited these communication channels in order to design systems that automatically recognize and display emotional signals. Touch has receivers less interest then other non-verbal modalities in this area of research. The intrusive aspect of current haptic interfaces is one of the main obstacles to their use in mediated emotional communication. In fact, the user is must physically connected to mechanical systems to receive the stimulation. This configuration affects the transparency of the mediated interaction and limits the perception of certain emotional dimensions as the Valence. The objective of this thesis is to propose and study a technique for tactile stimulation. This technique does not require contact with mechanical systems to transmit affective signals. On the basis of the state of the art of haptic interfaces, we proposed a strategy of tactile stimulation based on the use of a mobile air jet. This technique provides a non-intrusive tactile stimulation on different areas of the body. In addition, this tactile device would allow effective stimulation of some mechanoreceptors that play an important role in perceptions of positive affect. We conducted an experimental study to understand the relationships between the physical characteristics of tactile stimulation by air jet and the emotional perception of the users. The results highlight the main effects of the intensity and the velocity of movement of the air stream on the subjective evaluation measured in space affective (namely, Valence, Arousal and Dominance).The communication of emotions is clearly multi-modal. We use touch jointly with other modalities to communicate different emotional messages. 
We conducted two experimental studies to examine the combination of air-jet tactile stimulation with facial and vocal expressions in the perception of valence. These experiments were conducted within a theoretical and experimental framework called Information Integration Theory, which models the integration of information from multiple sources using a cognitive algebra. Our work suggests that air-jet tactile stimulation can be used to transmit emotional signals in the context of human-machine interaction. The bimodal perceptual integration models can be exploited to build computational models that display affect by combining tactile stimulation with facial expressions or voice.
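The cognitive algebra mentioned above is commonly instantiated as a weighted-averaging rule. The sketch below is a minimal illustration of that rule with hypothetical scale values and weights, not the thesis's fitted model:

```python
# Weighted-averaging integration rule from Information Integration Theory:
#   R = sum(w_i * s_i) / sum(w_i)
# where s_i are psychological scale values of each cue and w_i their weights.
# All numeric values below are hypothetical placeholders.

def integrate_valence(scale_values, weights):
    """Combine per-modality valence scale values into one judged valence."""
    assert len(scale_values) == len(weights) and weights
    total_w = sum(weights)
    return sum(w * s for w, s in zip(weights, scale_values)) / total_w

# Hypothetical values on a 1-9 valence scale:
tactile_valence = 6.0   # e.g. a gentle, slow air-jet stroke
facial_valence = 8.0    # e.g. a smiling face
# Facial cue weighted twice as heavily as touch (an assumption):
judged = integrate_valence([tactile_valence, facial_valence], [1.0, 2.0])
```

The averaging (rather than adding) form is what allows the framework to distinguish integration strategies from participants' rating patterns.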
Atwood, Kristen Diane. "Recognition of Facial Expressions of Six Emotions by Children with Specific Language Impairment." Diss., CLICK HERE for online access, 2006. http://contentdm.lib.byu.edu/ETD/image/etd1501.pdf.
Stott, Dorthy A. "Recognition of Emotion in Facial Expressions by Children with Language Impairment." Diss., CLICK HERE for online access, 2008. http://contentdm.lib.byu.edu/ETD/image/etd2513.pdf.
Johansson, David. "Design and evaluation of an avatar-mediated system for child interview training." Thesis, Linnéuniversitetet, Institutionen för medieteknik (ME), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-40054.
Wallez, Catherine. "Communication chez les primates non humains : étude des asymétries dans la production d'expressions oro-faciales." Thesis, Aix-Marseille, 2012. http://www.theses.fr/2012AIXM3123/document.
Full textThe study of oro-facial asymmetries offers an indirect and suitable index to determine the hemispheric specialization of the processes associated to socio-emotional communication in non-human primates. However, few studies have been made in this domain and the available theories in humans are in part contradictory. In order to contribute to this field, i.e., hemispheric specialization of cognitive and emotional processing in primates, four experimental studies have been carried out during this doctorate. Firstly, two methods have been used to assess oro-facial asymmetries in adult baboons (a morphometric one and a free viewing of chimeric faces). A right hemispheric specialization for negative emotions was noticed. A third study demonstrated for the first time a population-level hemispheric specialization for the production of emotions in infant macaques and baboons. A last study tested the robustness of previous findings in chimpanzees concerning differences of hemispheric lateralization patterns depending on the communicative function of the vocalizations: intentional (left hemisphere) vs emotional (right hemisphere). Results confirmed the previous conclusions and allowed to discuss hypotheses about the origin of the evolution of language (speech). These collective findings are discussed within the context of the phylogeny of hemispheric specialization mechanisms underlying verbal and nonverbal communication in humans
Khaled, Fazia. "La communication des émotions chez l’enfant (colère, joie, tristesse) ; études de cas et confrontation de théories linguistiques." Thesis, Sorbonne Paris Cité, 2016. http://www.theses.fr/2016USPCA137.
This research provides a multimodal analysis of the expression of emotion in two monolingual American children and their parents. The children were filmed in natural interactions in a family setting from the ages of 11 months to 3 years 10 months, and from 1 year 1 month to 4 years. We adopted a broad definition of language encompassing various semiotic resources, from verbal resources (lexicon and grammatical features) to nonverbal ones (vocalizations, facial expressions, and gestures). We focus on the children's acquisition and development of these verbal and nonverbal markers and on how they are used by their parents. Our research shows that children develop specific and distinct communicational patterns, which are greatly influenced by the input to which they are exposed. From a theoretical perspective, our research draws on a constructivist and functionalist approach (Tomasello, 2003), and our data is analyzed in light of language socialization and of studies showing that facial expressions and gestures are used as communicational signals in face-to-face dialogue. Our methodology combines quantitative and qualitative methods to investigate each speaker's verbal and nonverbal behavior when expressing emotions. Having outlined our theoretical and methodological foundation (Part I), we present our results on the expression of three emotions (happiness, sadness, and anger) in children and adults (Part II). Our research suggests that while children's linguistic development has little impact on the richness of their emotional expression, parental input and attitudes both play a crucial role in the acquisition of each modality and in the transmission of communicational patterns.
Grossard, Charline. "Evaluation et rééducation des expressions faciales émotionnelles chez l’enfant avec TSA : le projet JEMImE Serious games to teach social interactions and emotions to individuals with autism spectrum disorders (ASD) Children facial expression production : influence of age, gender, emotion subtype, elicitation condition and culture." Thesis, Sorbonne université, 2019. http://www.theses.fr/2019SORUS625.
The autism spectrum disorder (ASD) is characterized by difficulties in social skills, such as emotion recognition and production. Several studies have focused on the recognition of emotional facial expressions (EFE), but few have worked on their production, either in typical children or in children with ASD. Nowadays, information and communication technologies are used to train social skills in ASD, but few studies using these technologies focus on EFE production: after a literature review, we found only 4 games addressing it. Our final goal was to create the serious game JEMImE to train EFE production in children with ASD using automatic feedback. We first created a dataset of EFE of typical children and children with ASD to train an EFE recognition algorithm and to study their production skills. Several factors modulate these skills, such as age, type of emotion, or culture. We observed that human judges and the algorithm assess the quality of the EFE of children with ASD as poorer than those of typical children, and that the EFE recognition algorithm needs more features to classify their EFE. We then integrated the algorithm into JEMImE to give the child visual feedback in real time to correct his or her productions. A pilot study including 23 children with ASD showed that children are able to adapt their productions thanks to the feedback given by the algorithm, and illustrated an overall good subjective experience with JEMImE. The beta version of JEMImE shows promising potential and encourages further development of the game in order to offer longer game exposure to children with ASD and thus allow a reliable assessment of the effect of this training on their production of EFE.
Dakpé, Stéphanie. "Etude biomécanique de la mimique faciale." Thesis, Compiègne, 2015. http://www.theses.fr/2015COMP2203/document.
The aim of this research is to study facial mimic movements and to correlate external soft-tissue (i.e., cutaneous) movement during facial mimics with internal (i.e., facial mimic muscle) movement. The entire facial mimicry could not be studied, so relevant movements were selected. Those movements were characterised through a qualitative clinical analysis in 23 young healthy volunteers. The analysis was performed with video recordings, including scaling derived from the FACS (Facial Action Coding System). After validating the external characterisation by this method, internal characterisation of the mimic facial muscles was carried out in 10 volunteers. A model of selected facial mimic muscles, such as the Zygomaticus Major, was achieved. With this work, morphological parameters could be extracted, and 3D morphometric data were analysed to provide a better understanding of the kinematic behaviour of the muscle in different positions. This research is part of the Simovi Project, which aims to determine to what extent a facial mimic can be evaluated objectively, to select qualitative and quantitative indicators for the evaluation of mimic facial disorders, and to transfer our technological developments to the clinical field. This research is a first step and provides data for simulation or for the development of measurement tools in the evaluation and follow-up of mimic facial disorders.
Jacquot, Amélie. "Influence des indices sociaux non-verbaux sur les jugements métacognitifs rétrospectifs : études comportementales, électromyographiques et interculturelles." Thesis, Paris 10, 2017. http://www.theses.fr/2017PA100131/document.
Actions and decisions most often take place in the presence of others. Previous social psychology studies have shown the effects of social information on external, observable behaviors. In this work, we aim to determine to what extent social information also influences the internal metacognitive monitoring processes underlying cognitive actions and decision-making. Non-verbal social cues (such as gaze direction and facial expressions) constitute an important part of human communication. Here, we tested (i) whether non-verbal social cues are integrated into the processes of retrospective metacognitive monitoring (i.e., into the assessment of confidence-based judgment); (ii) whether filtering mechanisms are used to modulate the impact of these cues on confidence-based judgment, depending on cue relevance; and (iii) whether participants' culture (collectivist versus individualist) modulates the impact of these cues on confidence-based judgment. Our work explores these issues through four sets of behavioral studies, two of which also explore facial expressions using electromyography, and two of which explore intercultural differences (comparing Japanese and French participants). Overall, we observed that non-verbal social cues that reinforce individuals' choices automatically increase their confidence in those choices, even when the cues are unreliable. The processing of these particular social cues (in the context of the assessment of confidence-based judgment) follows a heuristic pathway. The effects are similar among Japanese and French participants, although somewhat more marked among Japanese participants (i.e., the collectivist culture). The electromyographic recordings of significant facial expressions during the cognitive task likely reflect different mechanisms depending on individuals' cultural values. We discuss our findings in the context of their clinical and learning applications.
Arif-Rahu, Mamoona. "FACIAL EXPRESSION DISCRIMINATES BETWEEN PAIN AND ABSENCE OF PAIN IN THE NON-COMMUNICATIVE, CRITICALLY ILL ADULT PATIENT." VCU Scholars Compass, 2010. http://scholarscompass.vcu.edu/etd/153.
ur Réhman, Shafiq. "Expressing emotions through vibration for perception and control." Doctoral thesis, Umeå universitet, Institutionen för tillämpad fysik och elektronik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-32990.
Taktil Video
Mehling, Margaret Helen. "Differential Impact of Drama-Based versus Traditional Social Skills Intervention on the Brain-Basis and Behavioral Expression of Social Communication Skills in Children with Autism Spectrum Disorder." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1492723324931796.
Hansen, Joseph S. "The interpretation of a conductor's nonverbal communication by ensemble members and the impact on conducting education." Thesis, 2021. https://hdl.handle.net/2144/41869.
Trewick, Christine. "Out-of-plane action unit recognition using recurrent neural networks." Thesis, 2015. http://hdl.handle.net/10539/18540.
Full textThe face is a fundamental tool to assist in interpersonal communication and interaction between people. Humans use facial expressions to consciously or subconsciously express their emotional states, such as anger or surprise. As humans, we are able to easily identify changes in facial expressions even in complicated scenarios, but the task of facial expression recognition and analysis is complex and challenging to a computer. The automatic analysis of facial expressions by computers has applications in several scientific subjects such as psychology, neurology, pain assessment, lie detection, intelligent environments, psychiatry, and emotion and paralinguistic communication. We look at methods of facial expression recognition, and in particular, the recognition of Facial Action Coding System’s (FACS) Action Units (AUs). Movements of individual muscles on the face are encoded by FACS from slightly different, instant changes in facial appearance. Contractions of specific facial muscles are related to a set of units called AUs. We make use of Speeded Up Robust Features (SURF) to extract keypoints from the face and use the SURF descriptors to create feature vectors. SURF provides smaller sized feature vectors than other commonly used feature extraction techniques. SURF is comparable to or outperforms other methods with respect to distinctiveness, robustness, and repeatability. It is also much faster than other feature detectors and descriptors. The SURF descriptor is scale and rotation invariant and is unaffected by small viewpoint changes or illumination changes. We use the SURF feature vectors to train a recurrent neural network (RNN) to recognize AUs from the Cohn-Kanade database. An RNN is able to handle temporal data received from image sequences in which an AU or combination of AUs are shown to develop from a neutral face. 
We recognize AUs because they provide a more fine-grained means of measurement that is independent of age, ethnicity, gender, and differences in expression appearance. In addition to recognizing FACS AUs from the Cohn-Kanade database, we use our trained RNNs to recognize the development of pain in human subjects. We make use of the UNBC-McMaster pain database, which contains image sequences of people experiencing pain. In some cases, the pain results in the face moving out-of-plane or in some degree of in-plane movement. The temporal processing ability of RNNs can assist in classifying AUs where the face is occluded or not facing frontally for part of the sequence. Results are promising when tested on the Cohn-Kanade database: we see higher overall recognition rates for upper-face AUs than lower-face AUs. Since keypoints are globally extracted from the face in our system, local feature extraction could provide improved recognition results in future work. We also see satisfactory recognition results when tested on samples with out-of-plane head movement, demonstrating the temporal processing ability of RNNs.
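The temporal stage of a pipeline like the one this thesis describes (per-frame descriptors fed through a recurrent layer, ending in multi-label AU scores) can be sketched as follows. This is only an illustrative sketch with placeholder dimensions and random weights, not the trained model from the thesis:

```python
import numpy as np

# Elman-style recurrent layer over a sequence of per-frame feature vectors,
# producing multi-label AU activation estimates from the final hidden state.
# Dimensions and weight values are arbitrary placeholders.
rng = np.random.default_rng(0)
feat_dim, hidden_dim, n_aus = 64, 32, 12  # e.g. pooled descriptors -> 12 AUs

W_in = rng.standard_normal((hidden_dim, feat_dim)) * 0.1
W_rec = rng.standard_normal((hidden_dim, hidden_dim)) * 0.1
W_out = rng.standard_normal((n_aus, hidden_dim)) * 0.1

def rnn_au_scores(frame_features):
    """Run the recurrent layer over T frames; return per-AU scores in (0, 1)."""
    h = np.zeros(hidden_dim)
    for x in frame_features:               # frame-by-frame temporal processing
        h = np.tanh(W_in @ x + W_rec @ h)  # hidden state carries sequence context
    # Sigmoid outputs: AUs can co-occur, so each label is scored independently.
    return 1.0 / (1.0 + np.exp(-(W_out @ h)))

# A hypothetical neutral-to-apex sequence of 20 frames:
sequence = rng.standard_normal((20, feat_dim))
scores = rnn_au_scores(sequence)
```

Because the hidden state accumulates context across frames, such a layer can keep scoring AUs through frames where the face is briefly occluded or turned away, which is the property the abstract highlights.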
Imai, Tatsuya. "The influence of stigma of mental illnesses on decoding and encoding of verbal and nonverbal messages." 2013. http://hdl.handle.net/2152/21777.
Boulton, Chris. "Trophy Children Don’t Smile: Fashion Advertisements For Designer Children’s Clothing In Cookie Magazine." 2007. https://scholarworks.umass.edu/theses/3.
Full textCHEN, YI-CYUN, and 陳益群. "A Study on the Adaptation Learning of Interpersonal Communication for College Students Based on Mobile Learning System Combining with Facial Expression Recognition." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/33py5u.
Full text國立中正大學
通訊工程研究所
106
This study uses a mobile learning system combined with facial expression recognition for interpersonal communication learning, and explores its effect on interpersonal learning among college students taking a life education course. The study has two purposes: first, to determine whether the facial expression recognition system helps college students' adaptive learning of interpersonal communication; second, to evaluate satisfaction with using the system to support adaptive learning. We adopted two methods: first, we developed facial expression recognition for the mobile learning system; second, we designed an experiment to measure interpersonal learning effectiveness. The facial expression recognition is based on services provided by Microsoft Azure, and the Augmented Reality (AR) component is built in the Unity 3D development environment, using the C# language to develop the AR learning system on mobile devices. The AR system supports situational learning and interaction, while facial expression recognition provides learning-status and behavior records that help the system choose adaptive teaching materials. In addition to traditional classroom teaching, students can use mobile devices for self-learning through the Moodle e-learning platform to improve learning outcomes. The results of this study show that the college students are generally satisfied with the learning system's help with interpersonal communication. A statistical model of the education data shows a significant difference in the students' interpersonal communication learning performance, indicating that the learning system can improve their interpersonal communication learning efficiency.
We also found that college students are generally satisfied with the functions of the mobile learning system. The questionnaire results, however, revealed drawbacks that influence the overall learning effect. We recommend that future development focus on the following research issues: 1. improving the accuracy of facial expression recognition, maintaining system stability, and reducing the impact of mobile environmental factors; 2. establishing an evaluation system for teaching materials, so that the adaptive learning system can evaluate materials and, when a material's rating falls below a threshold, automatically replace it to increase the diversity and relevance of the teaching materials.
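The threshold-based replacement rule recommended above can be sketched in a few lines. All names, the rating scale, and the threshold value are hypothetical, since the abstract does not specify them:

```python
# Hypothetical sketch of the recommended material-evaluation rule: when a
# teaching material's mean rating falls below a threshold, the system swaps
# in an alternative material to keep content relevant and varied.

LOW_RATING_THRESHOLD = 3.0  # assumed cutoff on a 1-5 satisfaction scale

def select_material(current, ratings, alternatives,
                    threshold=LOW_RATING_THRESHOLD):
    """Keep the current material unless its mean rating drops below threshold."""
    if ratings and sum(ratings) / len(ratings) < threshold and alternatives:
        return alternatives[0]  # replace with the next candidate material
    return current

kept = select_material("unit-3a", [4, 5, 4], ["unit-3b"])     # stays on unit-3a
swapped = select_material("unit-3a", [2, 3, 2], ["unit-3b"])  # switches material
```

In a deployed system the ratings could come from the learner behavior records the facial expression recognition component already collects, but that wiring is an assumption beyond what the abstract states.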
Fan, Chao. "Real-time facial expression analysis : a thesis presented in partial fulfillment of the requirements for the degree of Doctor of Philosophy (Ph.D.) in Computer Science at Massey University, Auckland, New Zealand." 2008. http://hdl.handle.net/10179/762.