
Dissertations / Theses on the topic 'Facial-expression communication'

Consult the top 25 dissertations / theses for your research on the topic 'Facial-expression communication.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Murphy, Suzanne Marguerite. "Young children's behaviour and interactive tasks : the effects of popularity on communication and task performance." Thesis, Open University, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.287005.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Slessor, Gillian. "Age-related changes in decoding basic social cues from the eyes." Thesis, Available from the University of Aberdeen Library and Historic Collections Digital Resources, 2009. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?application=DIGITOOL-3&owner=resourcediscovery&custom_att_2=simple_viewer&pid=53353.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

McLellan, Tracey Lee. "Sensitivity to Emotion Specified in Facial Expressions and the Impact of Aging and Alzheimer's Disease." Thesis, University of Canterbury. Psychology, 2008. http://hdl.handle.net/10092/1979.

Full text
Abstract:
This thesis describes a program of research that investigated the sensitivity of healthy young adults, healthy older adults, and individuals with Alzheimer's disease (AD) to happiness, sadness, and fear specified in facial expressions. In particular, the research investigated sensitivity to the distinction between spontaneous expressions of emotional experience (genuine expressions) and deliberate, simulated expressions of emotional experience (posed expressions). The specific focus was to examine whether aging and/or AD affects sensitivity to the target emotions. Emotion-categorization and priming tasks were completed by all participants. The tasks employed an original set of ecologically valid facial displays generated specifically for the present research. The categorization task (Experiments 1a, 2a, 3a, 4a) required participants to judge whether targets were, or were not, showing and feeling each target emotion. The results showed that all three groups identified a genuine expression as both showing and feeling the target emotion, whilst a posed expression was identified more frequently as showing than feeling the emotion. Signal detection analysis demonstrated that all three groups were sensitive to the expression of emotion, reliably differentiating expressions of experienced emotion (genuine expressions) from expressions unrelated to emotional experience (posed and neutral expressions). In addition, both healthy young and older adults could reliably differentiate between posed and genuine expressions of happiness and sadness, whereas individuals with AD could not. Sensitivity to emotion specified in facial expressions was found to be emotion-specific and independent of both the level of general cognitive functioning and specific cognitive functions. The priming task (Experiments 1b, 2b, 3b, 4b) employed the facial expressions as primes in a word-valence task in order to investigate spontaneous attention to facial expression. Healthy young adults showed an emotion-congruency priming effect only for genuine expressions; healthy older adults and individuals with AD showed no priming effects. Results are discussed in terms of the understanding of the recognition of emotional states in others and the impact of aging and AD on that recognition. Consideration is given to how these findings might influence the care and management of individuals with AD.
APA, Harvard, Vancouver, ISO, and other styles
4

Visser, Naomi. "The ability of four-year-old children to recognise basic emotions represented by graphic symbols." Pretoria : [s.n.], 2006. http://upetd.up.ac.za/thesis/available/etd-11162007-164230.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Peschka-Daskalos, Patricia Jean. "An Intercultural Analysis of Differences in Appropriateness Ratings of Facial Expressions Between Japanese and American Subjects." PDXScholar, 1993. https://pdxscholar.library.pdx.edu/open_access_etds/4700.

Full text
Abstract:
In 1971 Paul Ekman posited his Neuro-Cultural Theory of Emotion, which stated that expressions of emotion are universal but controlled by cultural display rules. This thesis tests the Neuro-Cultural Theory by having subjects from two cultures, Japan and the United States, judge the perceived appropriateness of facial expressions in social situations. Preliminary procedures resulted in a set of scenarios in which the socially appropriate responses were deemed to be either "Happy", "Angry", or "Surprised". Data in the experimental phase of the study were collected using a questionnaire format. Using a 5-point Likert scale, each subject rated the appropriateness of happy, angry, and surprised expressions in positive, negative, and ambiguous social situations. Additionally, the subjects were asked to label each expression in each situation. The responses were analyzed statistically using Analysis of Variance procedures. Label percentages were also calculated for the second task in the study. No support was found for two of the three research hypotheses, and only partial support was found for the third. These results are discussed in terms of the need for greater theoretical and methodological refinement.
APA, Harvard, Vancouver, ISO, and other styles
6

Wang, Lan. "Towards Enhancing Human-robot Communication for Industrial Robots: A Study in Facial Expressions Mot Förbättra Människa-robot Kommunikation för Industrirobotar : En studie i ansiktsuttryck." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-189569.

Full text
Abstract:
Collaborative robots are becoming more commonplace within factories to work alongside their human counterparts. With this newfound perspective towards robots being seen as collaborative partners comes the question of how interacting with these machines will change. This thesis therefore focuses on investigating the connection between facial expression communication in industrial robots and users' perceptions. Experiments were conducted to investigate the relationship between users' perceptions towards both existing facial expressions of the Baxter robot (an industrial robot by Rethink Robotics) and redesigned versions of these facial expressions. Findings reveal that the redesigned facial expressions provide a better match to users’ expectations. In addition, insights into improving the expressive communication between humans and robots are discussed, including the need for additional solutions which can complement the facial expressions displayed by providing more detailed information as needed. The last section of this thesis presents future research directions towards building a more intuitive and user-friendly human-robot cooperation space for future industrial robots.
APA, Harvard, Vancouver, ISO, and other styles
7

Asplund, Kenneth. "The experience of meaning in the care of patients in the terminal stage of dementia of the Alzheimer type : interpretation of non-verbal communication and ethical demands." Doctoral thesis, Umeå universitet, Institutionen för omvårdnad, 1991. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-96891.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Tsalamlal, Mohamed Yacine. "Communication affective médiée via une interface tactile." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLS379/document.

Full text
Abstract:
Affective communication plays a major role in our interpersonal interactions. We communicate emotions through multiple non-verbal channels. Research on human-computer interaction has exploited these channels to design systems that automatically recognize and display emotional signals. Touch has received less attention than other non-verbal modalities in this field. The intrusive aspect of current haptic interfaces is one of the main obstacles to their use in mediated affective communication: the user must be physically connected to mechanical systems to receive the stimulation. This configuration reduces the transparency of the mediated interaction and limits the perception of certain affective dimensions, such as valence. The objective of this thesis is to propose and study a tactile stimulation technique that requires no contact with mechanical systems to transmit affective signals. Based on the state of the art in haptic interfaces, we proposed a tactile stimulation strategy built around a mobile air jet. This technique provides non-intrusive tactile stimulation on different areas of the body. In addition, this tactile device allows effective stimulation of certain mechanoreceptors that play an important role in the perception of positive affect. We conducted an experimental study to understand the relationships between the physical characteristics of air-jet tactile stimulation and users' affective perception. The results highlight main effects of the intensity and velocity of the air-jet movement on subjective evaluations measured in affective space (namely valence, arousal, and dominance). The communication of emotions is clearly multimodal: we use touch jointly with other modalities to communicate affective messages. We therefore conducted two experimental studies to examine the combination of air-jet tactile stimulation with facial and vocal expressions in the perception of valence. These experiments were conducted within a theoretical and experimental framework called Information Integration Theory, which models the integration of information from multiple sources using cognitive algebra. Our work suggests that air-jet tactile stimulation can be used to transmit affective signals in human-machine interaction, and that the bimodal perceptual integration models can be exploited to build computational models that display affect by combining tactile stimulation with facial expressions or the voice.
APA, Harvard, Vancouver, ISO, and other styles
9

Atwood, Kristen Diane. "Recognition of Facial Expressions of Six Emotions by Children with Specific Language Impairment." Diss., CLICK HERE for online access, 2006. http://contentdm.lib.byu.edu/ETD/image/etd1501.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Stott, Dorthy A. "Recognition of Emotion in Facial Expressions by Children with Language Impairment." Diss., CLICK HERE for online access, 2008. http://contentdm.lib.byu.edu/ETD/image/etd2513.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Johansson, David. "Design and evaluation of an avatar-mediated system for child interview training." Thesis, Linnéuniversitetet, Institutionen för medieteknik (ME), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-40054.

Full text
Abstract:
There is an apparent problem: children are abused in different ways in their everyday lives, and the working adults around them, for example social workers or teachers, often lack education related to these issues. There are formal courses in child interview training that teach participants how to talk to children in a correct manner. Avatar mediation enables new ways of practicing this communication without having to involve a real child or role-play face-to-face with another adult. This study explored how a system could be designed to enable educational practice sessions in which a child interview expert is mediated through avatars in the form of virtual children. Prototypes were developed to evaluate the feasibility of the scenario with respect to methods for controlling the avatar and how the avatar was perceived by the participants. A clear value was found in the educational approach of using avatar mediation. From the perspective of the interactor, it was found that a circular radial interface graphically representing different emotions made it possible to control a video-based avatar while simultaneously holding a conversation with the participant. The results of the study include a proposed interface design, a description of the underlying system functionality, and suggestions on how avatar behavior can be characterized in order to achieve a high level of presence for the participant.
APA, Harvard, Vancouver, ISO, and other styles
12

Wallez, Catherine. "Communication chez les primates non humains : étude des asymétries dans la production d'expressions oro-faciales." Thesis, Aix-Marseille, 2012. http://www.theses.fr/2012AIXM3123/document.

Full text
Abstract:
The study of oro-facial asymmetries offers an indirect and reliable index for determining the hemispheric specialization of the processes associated with socio-emotional communication in non-human primates. However, few studies have addressed this question to date, and the available theories in humans are partly contradictory. To contribute to this field, i.e., the hemispheric specialization of cognitive and emotional processing in primates, four experimental studies were carried out during this doctorate. First, two methods were used to assess oro-facial asymmetries in a population of adult baboons (a morphometric method and free viewing of chimeric faces); a dominant right-hemisphere specialization for processing negative emotions was found. A third study demonstrated, for the first time, a population-level oro-facial asymmetry in infant macaques and baboons. A final study in chimpanzees tested the robustness of earlier findings of different lateralization patterns depending on the communicative function of vocalizations: intentional (left hemisphere) versus emotional (right hemisphere). The results confirmed the earlier conclusions and allow discussion of hypotheses about the origin and evolution of language. These collective findings are discussed in light of recent research on many animal species and within the context of the phylogeny of the hemispheric specialization underlying verbal and non-verbal communication in humans.
APA, Harvard, Vancouver, ISO, and other styles
13

Khaled, Fazia. "La communication des émotions chez l’enfant (colère, joie, tristesse) ; études de cas et confrontation de théories linguistiques." Thesis, Sorbonne Paris Cité, 2016. http://www.theses.fr/2016USPCA137.

Full text
Abstract:
This research provides a multimodal analysis of the expression of emotion in two monolingual American children and their parents. The children were filmed in spontaneous interactions in a family setting, one from 11 months to 3 years 10 months and the other from 1 year 1 month to 4 years. We adopt a broad definition of language that encompasses all semiotic resources, from verbal resources (lexicon, grammatical features) to vocal (vocalizations) and bodily ones (gestures, facial expressions, actions). We focus on the children's acquisition and development of these verbal and non-verbal markers of emotion and on how the markers are used by their parents. Our research shows that children develop specific and distinct expressive profiles, greatly influenced by the input to which they are exposed every day. Theoretically, our research draws on a constructivist and functionalist approach to language (Tomasello, 2003), and the data are analyzed in light of language socialization and of studies showing that facial expressions and gestures act as communicational signals in face-to-face dialogue. Methodologically, we combine quantitative and qualitative analyses to characterize each speaker's verbal and non-verbal behavior when expressing emotions. Having outlined our theoretical and methodological foundation (Part I), we present our results on the expression of three emotions (anger, happiness, and sadness) in the child and adult speakers (Part II). Our results suggest that the children's linguistic development has little impact on the expression of their emotions, but that parental input and attitudes play a major role in the acquisition and development of each modality and in the transmission of expressive patterns.
APA, Harvard, Vancouver, ISO, and other styles
14

Grossard, Charline. "Evaluation et rééducation des expressions faciales émotionnelles chez l’enfant avec TSA : le projet JEMImE Serious games to teach social interactions and emotions to individuals with autism spectrum disorders (ASD) Children facial expression production : influence of age, gender, emotion subtype, elicitation condition and culture." Thesis, Sorbonne université, 2019. http://www.theses.fr/2019SORUS625.

Full text
Abstract:
Autism spectrum disorder (ASD) is characterized by difficulties with social skills, including the use of emotional facial expressions (EFE). While many studies have focused on EFE recognition, few have assessed EFE production, either in typical children or in children with ASD. Information and communication technologies are widely advocated for training social skills in ASD, yet few studies have used them to work on EFE production; at the start of this project we found only four games addressing it. Our goal was to create the serious game JEMImE, which trains EFE production in children with ASD using automatic feedback. We first built a dataset of EFE from typical children and children with ASD in order to train an EFE recognition algorithm and to study production skills. Several factors modulate these skills, such as age, type of emotion, and culture. Human judges and the recognition algorithm rated the EFE of children with ASD as poorer in quality than those of typical children, and the algorithm needed more facial landmarks to classify their EFE. The algorithm was then integrated into JEMImE to give the child real-time visual feedback for correcting his or her productions. A pilot study with 23 children with ASD showed that the children adapted their productions well to the algorithm's feedback and had a good overall experience with the game. These promising results open the way to further development of the game to allow longer play time and thus a reliable assessment of the effect of this training on EFE production in children with ASD.
APA, Harvard, Vancouver, ISO, and other styles
15

Dakpé, Stéphanie. "Etude biomécanique de la mimique faciale." Thesis, Compiègne, 2015. http://www.theses.fr/2015COMP2203/document.

Full text
Abstract:
This thesis, part of the larger SIMOVI (SImulation des MOuvements du VIsage) project, studies facial mimicry by correlating the visible movements of the skin with the underlying movements of the mimic muscles, through the development of several methodologies. Since the full range of facial expressions cannot be studied, the movements most relevant to this work were first identified. These movements were characterized in 23 young healthy subjects through a qualitative, clinical, descriptive analysis based on video recordings and on a coding scheme derived from FACS (Facial Action Coding System), yielding a reference cohort. After validating this methodology for the external characterization of facial mimicry, MRI analysis of the mimic muscles was performed on 10 hemifaces from the healthy subjects of the cohort. Starting from in vivo anatomy, selected mimic muscles (in particular zygomaticus major) were modeled in order to extract morphological parameters, analyze muscle morphology more finely in three dimensions, and better understand the kinematic behavior of the muscle in different positions. The work is framed by broader questions: how can facial mimicry be characterized objectively? Which qualitative and quantitative indicators of mimicry can be collected, and how? How can these technological developments be transferred to clinical applications? This research is a preliminary step toward further work: it provides reference data for modeling and simulating facial mimicry and for developing measurement tools for the follow-up and evaluation of facial mimicry disorders.
APA, Harvard, Vancouver, ISO, and other styles
16

Jacquot, Amélie. "Influence des indices sociaux non-verbaux sur les jugements métacognitifs rétrospectifs : études comportementales, électromyographiques et interculturelles." Thesis, Paris 10, 2017. http://www.theses.fr/2017PA100131/document.

Full text
Abstract:
Most of our actions and decisions take place in the presence of other people, and much work in social psychology attests to social influences on behaviors observable by others. Our work aims to determine to what extent social information also influences the internal metacognitive monitoring processes that accompany cognitive actions and decision-making. Non-verbal social cues (such as gaze direction and facial expressions) are an important part of human communication. Our studies therefore tested (i) whether non-verbal social cues are integrated into retrospective metacognitive monitoring (i.e., into confidence judgments); (ii) whether filtering mechanisms modulate the impact of these cues on confidence judgments according to their relevance; and (iii) whether participants' culture (collectivist versus individualist) modulates the impact of these cues on confidence judgments. We explored these questions through four sets of behavioral studies, two of which also recorded participants' facial electromyography and two of which examined intercultural differences (comparing Japanese and French participants). Overall, the results indicate that non-verbal social cues that support an individual's choice automatically increase confidence in that choice, even when the cues are unreliable. The processing of these particular social cues in the context of confidence judgments appears to follow a heuristic route. The effects were very similar among Japanese and French participants, although somewhat more marked among Japanese participants (i.e., the more collectivist culture). Facial electromyographic reactions elicited by meaningful facial expressions during the cognitive task appear to reflect different mechanisms depending on individuals' cultural values. We discuss the implications of our results for clinical and learning applications.
APA, Harvard, Vancouver, ISO, and other styles
17

Arif-Rahu, Mamoona. "FACIAL EXPRESSION DISCRIMINATES BETWEEN PAIN AND ABSENCE OF PAIN IN THE NON-COMMUNICATIVE, CRITICALLY ILL ADULT PATIENT." VCU Scholars Compass, 2010. http://scholarscompass.vcu.edu/etd/153.

Full text
Abstract:
BACKGROUND: Pain assessment is a significant challenge in critically ill adults, especially those unable to communicate their pain level. At present there is no universally accepted pain scale for use in the non-communicative (cognitively impaired, sedated, paralyzed, or mechanically ventilated) patient. Facial expressions are considered among the most reflexive and automatic nonverbal indices of pain. The facial expression components of pain assessment tools include a variety of facial descriptors (wincing, frowning, grimacing, smiling/relaxed) with inconsistent pain intensity ratings or checklists of behaviors. The lack of consistent facial expression description and quantification of pain intensity makes standardization of pain evaluation difficult. Although facial expression is an important behavioral measure of pain intensity, precise and accurate methods for interpreting the specific facial actions of pain in critically ill adults have not been identified. OBJECTIVE: The three specific aims of this prospective study were: 1) to describe facial actions during pain in non-communicative critically ill patients; 2) to determine facial actions that characterize the pain response; 3) to describe the effect of patient factors on facial actions during the pain response. DESIGN: Descriptive, correlational, comparative. SETTING: Two adult critical care units (Surgical Trauma ICU-STICU and Medical Respiratory ICU-MRICU) at an urban university medical center. SUBJECTS: A convenience sample of 50 non-communicative critically ill intubated, mechanically ventilated adult patients. Fifty-two percent were male, 48% Euro-American, with mean age 52.5 years (±17.2). METHODS: Subjects were video-recorded while in an intensive care unit at rest (baseline phase) and during endotracheal suctioning (procedure phase). Observer-based pain ratings were gathered using the Behavioral Pain Scale (BPS). Facial actions were coded from video using the Facial Action Coding System (FACS) over a 30-second period for each phase. Pain scores were calculated from FACS action units (AUs) following the Prkachin and Solomon metric. RESULTS: Fourteen facial action units were associated with the pain response and occurred more frequently during the noxious procedure than at baseline, including brow raising, brow lowering, orbit tightening, eye closure, head movements, mouth opening, nose wrinkling, nasal dilatation, and chin raising. The sum of the intensities of the 14 AUs correlated with the BPS (r=0.70, P<0.0001) and with the facial expression component of the BPS (r=0.58, P<0.0001) during the procedure. A stepwise multivariate analysis identified 5 pain-relevant facial AUs [brow raiser (AU 1), brow lower (AU 4), nose wrinkling (AU 9), head turned right (AU 52), and head turned up (AU 53)] that accounted for 71% of the variance (adjusted R2=0.682) in the pain response (F=21.99, df=49, P<0.0001). The FACS pain intensity score based on these 5 pain-relevant facial AUs was associated with the BPS (r=0.77, P<0.0001) and with the facial expression component of the BPS (r=0.63, P<0.0001) during the procedure. Patient factors (e.g., age, gender, race, diagnosis, duration of endotracheal intubation, ICU length of stay, analgesic and sedative drug use, and severity of illness) were not associated with the FACS pain intensity score. CONCLUSIONS: Overall, the FACS pain intensity score composed of inner brow raiser, brow lower, nose wrinkle, and head movements reflected a general pain action in our study. Upper facial expression provides an important behavioral measure of pain, which may be used in the clinical evaluation of pain in non-communicative critically ill patients. These results provide preliminary evidence that the Facial Action Coding System can discriminate a patient's acute pain experience.
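For context, the Prkachin and Solomon metric cited in this abstract is commonly stated in the pain-expression literature as a sum of FACS intensities (this formula is background knowledge, not text from the dissertation itself):

    PSPI = AU4 + max(AU6, AU7) + max(AU9, AU10) + AU43

where each term is the coded intensity (0-5) of the corresponding action unit, and AU43 (eye closure) is scored as 0 or 1.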
APA, Harvard, Vancouver, ISO, and other styles
18

ur Réhman, Shafiq. "Expressing emotions through vibration for perception and control." Doctoral thesis, Umeå universitet, Institutionen för tillämpad fysik och elektronik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-32990.

Full text
Abstract:
This thesis addresses a challenging problem: how to let the visually impaired "see" others' emotions. We, human beings, are heavily dependent on facial expressions to express ourselves. A smile shows that the person you are talking to is pleased, amused, relieved, etc. People use emotional information from facial expressions to switch between conversation topics and to determine the attitudes of individuals. Missing the emotional information carried by facial expressions and head gestures makes it extremely difficult for the visually impaired to interact with others in social settings. To enhance their social interactive ability, in this thesis we have been working on the scientific topic of expressing human emotions through vibrotactile patterns. It is quite challenging to deliver human emotions through touch, since our touch channel is very limited. We first investigated how to render emotions through a single vibrator. We developed a real-time "lipless" tracking system to extract dynamic emotions from the mouth and employed mobile phones as a platform for the visually impaired to perceive primary emotion types. Later on, we extended the system to render more general dynamic media signals: for example, rendering live football games through vibration on the mobile phone to improve users' communication and entertainment experience. To display more natural emotions (i.e. emotion type plus emotion intensity), we developed technology that enables the visually impaired to directly interpret human emotions. This was achieved through machine vision techniques and a vibrotactile display comprising a matrix of vibration actuators mounted on the back of a chair; the actuators are sequentially activated to provide dynamic emotional information. The research focus has been on finding a global, analytical, and semantic representation for facial expressions to replace the state-of-the-art Facial Action Coding System (FACS) approach. We proposed using the manifold of facial expressions to characterize dynamic emotions. The basic emotional expressions with increasing intensity become curves on the manifold extending from the center, and blends of emotions lie between those curves, so they can be defined analytically by the positions of the main curves. The manifold is the "Braille code" of emotions. The developed methodology and technology have been extended to build assistive wheelchair systems for a specific group of disabled people, cerebral palsy or stroke patients (i.e. those lacking fine motor control skills), who cannot access and control a wheelchair by conventional means such as a joystick or chin stick. The solution is to extract the manifold of head or tongue gestures for controlling the wheelchair; the manifold is rendered by a 2D vibration array to provide the wheelchair user with action information from gestures and with system status information, which is very important for the usability of such an assistive system. This research not only provides a foundation for vibrotactile rendering systems based on object localization but also takes a concrete step toward a new dimension of human-machine interaction.
Tactile Video
APA, Harvard, Vancouver, ISO, and other styles
19

Mehling, Margaret Helen. "Differential Impact of Drama-Based versus Traditional Social Skills Intervention on the Brain-Basis and Behavioral Expression of Social Communication Skills in Children with Autism Spectrum Disorder." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1492723324931796.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Hansen, Joseph S. "The interpretation of a conductor's nonverbal communication by ensemble members and the impact on conducting education." Thesis, 2021. https://hdl.handle.net/2144/41869.

Full text
Abstract:
Conducting gestures and facial expressions can be interpreted with wide variance by musicians, even within ensembles with a close range of technical mastery and experience. In this study, I examined collegiate wind ensemble students' interpretations of a music conductor's nonverbal communication and the accompanying pedagogical considerations when leading live performers. The conceptual framework of the study was kinesics, "the study of body movements, facial expressions, and gestures" (Ottenheimer, 2009, p. 160), and more specifically, Ekman and Friesen's (1969) categories of nonverbal communication. Within this framework, the two categories I used were emblems (nonverbal signals from the body representing a verbal message) and affect displays (characterizations of an emotion or other message depicted primarily on the face). Utilizing gesture descriptions compiled by Sousa (1988), I created a video stimulus to interview students about their reactions to 21 gestures of the hands, arms, and torso, as well as 10 naturally occurring facial expressions while conducting. Using the conducting video as the stimulus, I interviewed 80 college students at nine college campuses. Students participated in an individual 30-minute interview in which they watched each of the 31 video excerpts and gave verbal feedback about what they perceived as the message of each gesture or facial expression. Data were analyzed and compared to Sousa's (1988) descriptions of each gesture the conductor attempted to demonstrate on the video. Utilizing Ekman and Friesen's (1969) metric of 70% recognition to code a response as an emblem, 16 of the 21 gestures (76%) were found to be musical emblems, compared to 71% in Sousa's (1988) study. Only 12 of the 21 gestures were identified as emblems in both studies (57%). The categories with the strongest prevalence of emblems in the current study included dynamics and tempo changes. The 10 videos of facial expression yielded more than ten different themes per affect display, each with diverse descriptions of musical and emotional messages. Overall, the results showed that the small muscle movements of the face are capable of multi-message and multi-signal semiotic functions (Ekman & Friesen, 1978), with robust descriptions that can change rapidly in significant ways.
APA, Harvard, Vancouver, ISO, and other styles
21

Trewick, Christine. "Out-of-plane action unit recognition using recurrent neural networks." Thesis, 2015. http://hdl.handle.net/10539/18540.

Full text
Abstract:
A dissertation submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfilment of requirements for the degree of Master of Science. Johannesburg, 2015.
The face is a fundamental tool in interpersonal communication and interaction between people. Humans use facial expressions to consciously or subconsciously express their emotional states, such as anger or surprise. As humans, we can easily identify changes in facial expressions even in complicated scenarios, but facial expression recognition and analysis is a complex and challenging task for a computer. The automatic analysis of facial expressions by computers has applications in several scientific subjects such as psychology, neurology, pain assessment, lie detection, intelligent environments, psychiatry, and emotion and paralinguistic communication. We look at methods of facial expression recognition and, in particular, the recognition of the Facial Action Coding System's (FACS) Action Units (AUs). FACS encodes movements of individual facial muscles from slight, instantaneous changes in facial appearance; contractions of specific facial muscles are related to a set of units called AUs. We make use of Speeded Up Robust Features (SURF) to extract keypoints from the face and use the SURF descriptors to create feature vectors. SURF provides smaller feature vectors than other commonly used feature extraction techniques, is comparable to or outperforms other methods with respect to distinctiveness, robustness, and repeatability, and is much faster than other feature detectors and descriptors. The SURF descriptor is scale- and rotation-invariant and is unaffected by small viewpoint or illumination changes. We use the SURF feature vectors to train a recurrent neural network (RNN) to recognize AUs from the Cohn-Kanade database. An RNN can handle the temporal data of image sequences in which an AU or combination of AUs develops from a neutral face. We recognize AUs because they provide a fine-grained means of measurement that is independent of age, ethnicity, gender, and differences in expression appearance. In addition to recognizing FACS AUs from the Cohn-Kanade database, we use our trained RNNs to recognize the development of pain in human subjects, using the UNBC-McMaster pain database, which contains image sequences of people experiencing pain. In some cases, the pain causes the face to move out of plane or with some degree of in-plane movement. The temporal processing ability of RNNs can assist in classifying AUs where the face is occluded or not facing frontally for part of the sequence. Results are promising when tested on the Cohn-Kanade database, with higher overall recognition rates for upper-face AUs than lower-face AUs. Since keypoints are extracted globally from the face in our system, local feature extraction could provide improved recognition results in future work. We also see satisfactory recognition results when tested on samples with out-of-plane head movement, showing the temporal processing ability of RNNs.
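As a minimal sketch of the pipeline this abstract describes (per-frame SURF descriptors fed as a sequence to a recurrent network), one plausible arrangement in Python is shown below; the fixed keypoint budget, layer sizes, and the choice of PyTorch and opencv-contrib are assumptions for illustration, not details from the dissertation:

    # Sketch only: SURF features per frame -> GRU -> per-AU probabilities.
    # Assumes opencv-contrib-python (SURF lives in the non-free xfeatures2d
    # module) and PyTorch; constants such as MAX_KP are illustrative.
    import cv2
    import numpy as np
    import torch
    import torch.nn as nn

    MAX_KP = 32                      # fixed keypoint budget per frame
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)

    def frame_features(gray_frame):
        # Extract SURF keypoints/descriptors, pad or truncate to MAX_KP,
        # and flatten into a fixed-size vector (64 floats per descriptor).
        _, desc = surf.detectAndCompute(gray_frame, None)
        if desc is None:
            desc = np.zeros((0, 64), dtype=np.float32)
        desc = desc[:MAX_KP].astype(np.float32)
        pad = np.zeros((MAX_KP - len(desc), 64), dtype=np.float32)
        return np.concatenate([desc, pad]).ravel()

    class AURecognizer(nn.Module):
        # GRU over per-frame SURF features; sigmoid outputs give independent
        # probabilities for each action unit at the end of the sequence.
        def __init__(self, in_dim=MAX_KP * 64, hidden=128, num_aus=17):
            super().__init__()
            self.rnn = nn.GRU(in_dim, hidden, batch_first=True)
            self.head = nn.Linear(hidden, num_aus)

        def forward(self, seq):          # seq: (batch, time, in_dim)
            out, _ = self.rnn(seq)
            return torch.sigmoid(self.head(out[:, -1]))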
APA, Harvard, Vancouver, ISO, and other styles
22

Imai, Tatsuya. "The influence of stigma of mental illnesses on decoding and encoding of verbal and nonverbal messages." 2013. http://hdl.handle.net/2152/21777.

Full text
Abstract:
Stigmas associated with depression and schizophrenia have been found to negatively impact the communication those with mental illness have with others in face-to-face interactions (e.g., Lysaker, Roe, & Yanos, 2007; Nicholson & Sacco, 1999). This study attempted to specifically examine how stigma affects cognitions, emotions, and behaviors of interactants without a mental illness toward those with a mental illness in online interactions. In this experimental study, 412 participants interacted with a hypothetical target on Facebook, who was believed to have depression, schizophrenia, or a cavity (i.e., the control group). They were asked to read a profile of the target on Facebook, respond to a message from the target, and complete measurements assessing perceived positive and negative face threats in the target's message, perceived facial expressions of the target, induced affect, predicted outcome value, and rejecting attitudes towards the target. Results revealed that the target labeled as schizophrenic was rejected more and perceived to have lower outcome value than the target without a mental illness or labeled as depressive. However, there were no significant differences in any outcomes between the depression and control groups. The mixed results were discussed in relation to methodological limitations and possible modifications of previous theoretical arguments. Theoretical and practical contributions were considered and suggestions for future research were offered.
APA, Harvard, Vancouver, ISO, and other styles
23

Boulton, Chris. "Trophy Children Don’t Smile: Fashion Advertisements For Designer Children’s Clothing In Cookie Magazine." 2007. https://scholarworks.umass.edu/theses/3.

Full text
Abstract:
This study examines print advertising from Cookie, an up-scale American parenting magazine for affluent mothers. The ads include seven designer clothing brands: Rocawear, Baby Phat, Ralph Lauren, Diesel, Kenneth Cole, Sean John, and DKNY. When considered within the context of their adult equivalents, the ads for the children’s lines often created a prolepsis—or flash-forward—by depicting the child model as a nascent adult. This was accomplished in three ways. First, the children’s ads typically contained structural continuities such as logo, set design, and color scheme that helped reinforce their relationship with the adult brand. Second, most of the ads place the camera at eye-level—a framing that allows the child models to address their adult viewers as equals. Finally, almost half of the ads feature at least one child looking directly at the camera with a serious expression. This is significant because, in Western culture, the withholding of a smile is a sign of dominance typically reserved for adult males. When children mimic this familiar and powerful “look,” they convey a sense of adult-like confidence and self-awareness often associated with precocious sexuality.
APA, Harvard, Vancouver, ISO, and other styles
24

Chen, Yi-Cyun (陳益群). "A Study on the Adaptation Learning of Interpersonal Communication for College Students Based on Mobile Learning System Combining with Facial Expression Recognition." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/33py5u.

Full text
Abstract:
Master's thesis
National Chung Cheng University
Institute of Communications Engineering
106 (2017)
This study applies a mobile learning system combined with facial expression recognition to the learning of interpersonal communication, and explores its effect for college students taking a life education course. The study had two purposes: first, to determine whether using the facial expression recognition system helps the adaptive learning of interpersonal communication; second, to evaluate students' satisfaction with using the system to support adaptive learning. Two methods were adopted: we developed facial expression recognition for the mobile learning system, and we designed an experiment to measure interpersonal learning effectiveness. The facial expression recognition is based on services provided by Microsoft Azure, and the augmented reality (AR) system is built in the Unity 3D development environment, using C# to develop the AR learning system on mobile devices. The AR system supports situational learning and interaction, while facial expression recognition provides learners' status and behavior records, which helps the system choose adaptive teaching materials. In addition to traditional classroom teaching, students can use mobile devices with the Moodle e-learning platform for self-directed learning to improve learning outcomes. The results show that college students are generally satisfied with the system's support for interpersonal communication, and the statistical model shows a significant difference in their interpersonal communication learning performance, indicating that the system improves learning efficiency. Students were also generally satisfied with the functions of the mobile learning system. The questionnaire results revealed drawbacks that influence the overall learning effect, and we recommend that future development focus on the following issues: 1. improving the accuracy of facial expression recognition, maintaining system stability, and reducing the impact of mobile environmental factors; 2. establishing an evaluation system for teaching materials, so that when a teaching material scores below a threshold the adaptive learning system can automatically replace it, increasing the diversity and relevance of the materials.
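As a rough illustration of the recognition service mentioned above, a client can obtain per-face emotion scores from the Azure Face API detect endpoint with the emotion attribute. The endpoint and key below are hypothetical placeholders, the sketch is in Python rather than the thesis's C#/Unity client, and the v1.0 emotion attribute has since been retired by Microsoft:

    # Sketch only: query the (since-retired) Azure Face API v1.0 for
    # per-face emotion scores; FACE_ENDPOINT and FACE_KEY are placeholders.
    import requests

    FACE_ENDPOINT = "https://<your-region>.api.cognitive.microsoft.com"
    FACE_KEY = "<subscription-key>"

    def recognize_emotions(image_bytes):
        # Returns one dict of emotion scores (anger, happiness, ...) per face.
        resp = requests.post(
            FACE_ENDPOINT + "/face/v1.0/detect",
            params={"returnFaceAttributes": "emotion"},
            headers={"Ocp-Apim-Subscription-Key": FACE_KEY,
                     "Content-Type": "application/octet-stream"},
            data=image_bytes,
        )
        resp.raise_for_status()
        return [face["faceAttributes"]["emotion"] for face in resp.json()]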
APA, Harvard, Vancouver, ISO, and other styles
25

Fan, Chao. "Real-time facial expression analysis : a thesis presented in partial fulfillment of the requirements for the degree of Doctor of Philosophy (Ph.D.) in Computer Science at Massey University, Auckland, New Zealand." 2008. http://hdl.handle.net/10179/762.

Full text
Abstract:
As computers have become more and more advanced, with even the most basic computer capable of tasks almost unimaginable only a decade ago, researchers and developers are focusing on improving the way that computers interact with people in their everyday lives. A core goal, therefore, is to develop a computer system which can understand and react appropriately to natural human behavior. A key requirement for such a system is the ability to automatically, and in real time, recognise human facial expressions. In addition, this must be achieved regardless of the inherent differences in human faces or variations in lighting and other external conditions. The focus of this research was to develop such a system by evaluating and then utilizing the most appropriate of the many image processing techniques currently available, and, where appropriate, developing new methodologies and algorithms. The first key step in the system is to recognise a human face with acceptable levels of misses and false positives. This research analysed and evaluated a number of different face detection techniques, before developing a novel algorithm which combined phase congruency and template matching techniques. This novel algorithm provides key advantages over existing techniques because it can detect faces rotated to any angle, and it works in real time. Existing techniques could only recognise faces rotated less than 10 degrees (in either direction), and most could not work in real time due to excessive computational power requirements. The next step for the system is to enhance and extract the facial features. To achieve the stated goal, the enhancement and extraction of the facial features must reduce the number of facial dimensions, ensuring the system can operate in real time, while providing sufficiently clear and detailed features to allow the facial expressions to be accurately recognised. This part of the system was completed by developing a novel algorithm based on the existing Contrast Limited Adaptive Histogram Equalization technique, which quickly and accurately represents facial features, and another novel algorithm which reduces the number of feature dimensions by combining radon transformation and fast Fourier transformation techniques, ensuring real-time operation is possible. The final step for the system is to use the information provided by the first two steps to accurately recognise facial expressions. This is achieved using an SVM trained on a database including both real and computer-generated facial images with various facial expressions. The system developed during this research can be utilised in a number of ways and, most significantly, has the potential to revolutionise future interactions between humans and computers by helping these interactions become natural and intuitive. In addition, individual components of the system also have significant potential, with, for example, the algorithms which allow the recognition of an object regardless of its rotation under consideration as part of a project aiming to achieve non-invasive detection of early-stage cancer cells.
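A minimal sketch of the enhancement, feature reduction, and classification stages named above (CLAHE, radon transform plus FFT, then an SVM); the library choices (OpenCV, scikit-image, scikit-learn) and all parameter values are assumptions for illustration, not the thesis implementation:

    # Sketch only: contrast enhancement -> radon + FFT features -> SVM.
    import cv2
    import numpy as np
    from skimage.transform import radon
    from sklearn.svm import SVC

    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))

    def expression_features(face_gray, n_angles=36, n_coeffs=64):
        # Enhance local contrast, project the face with the radon transform,
        # then keep only low-frequency FFT magnitudes as a compact,
        # rotation-tolerant feature vector.
        enhanced = clahe.apply(face_gray)          # face_gray: uint8 crop
        angles = np.linspace(0.0, 180.0, n_angles, endpoint=False)
        sinogram = radon(enhanced.astype(float), theta=angles)
        spectrum = np.abs(np.fft.rfft(sinogram, axis=0))
        return spectrum[:n_coeffs].ravel()

    # Training on labeled face crops (labels are expression classes):
    #   X = np.stack([expression_features(f) for f in faces])
    #   clf = SVC(kernel="rbf").fit(X, labels)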
APA, Harvard, Vancouver, ISO, and other styles