
Dissertations / Theses on the topic 'Affective Computing'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Affective Computing.'

Next to every source in the list of references is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Galván, Suazo José Daniel, and Lucas Victor Manuel Segura. "Proyecto desarrollo de aplicaciones con affective computing." Bachelor's thesis, Universidad Peruana de Ciencias Aplicadas (UPC), 2017. http://hdl.handle.net/10757/622083.

Full text
Abstract:
Develops Affective Computing demonstrations considering five recognition technologies: Facial Recognition, Gait Recognition, Speech Recognition, Gaze Recognition and Gesture Control. Chapter 1 describes the project from a management perspective. In this context, it sets out the general objective, whose fulfilment is determined by the completion of specific objectives, which in turn are tied to the success indicators also described. Finally, it presents the project plan, detailing the scope and the management planning for time, human resources, communications and risks. Chapter 2 presents the project's theoretical framework, beginning with the definition of Emotion Detection, the principal component on which Affective Computing relies; Affective Computing itself is then defined. Chapter 3 develops the State of the Art, presenting some predecessor projects and thereby the current state of progress in implementing solutions based on Affective Computing, and closes with conclusions. Chapter 4 describes the solutions that form part of the final product, documenting their descriptions, user stories, interaction maps and solution architecture. Finally, Chapter 5 documents the three proposed Affective Computing solutions.
Affective computing is an emerging field of research and development whose applications are of great interest to many areas of business today. This paper sets out the scope of the project: the development of Affective Computing demonstrations considering five recognition technologies: facial recognition, gait recognition, voice recognition, gesture recognition and gaze control. Chapter 1 describes the project from a management perspective. In this context, it states the general objective, whose fulfilment is determined by the completion of the specific objectives, which are related to the success indicators also described. Finally, it presents the project plan, detailing the scope and the management planning for time, human resources, communications and risks. Chapter 2 presents the list of student outcomes; each point describes how the project satisfied the outcome's criteria. Chapter 3 presents the theoretical framework of the project, beginning with the definition of emotion detection, the principal component on which Affective Computing relies; Affective Computing is then defined. Chapter 4 reviews the State of the Art, presenting some predecessor projects and thereby the current state of progress in implementing solutions based on Affective Computing, and closes with conclusions. Chapter 5 describes the solutions that make up the final product, documenting their descriptions, user stories, interaction maps and solution architecture. Finally, Chapter 6 documents the three proposed Affective Computing solutions.
APA, Harvard, Vancouver, ISO, and other styles
2

Thompson, Nik. "Development of an open affective computing environment." PhD thesis, Murdoch University, 2012. https://researchrepository.murdoch.edu.au/id/eprint/13923/.

Full text
Abstract:
Affective computing facilitates more intuitive, natural computer interfaces by enabling the communication of the user’s emotional state. Despite rapid growth in recent years, affective computing is still an under-explored field, which holds promise to be a valuable direction for future software development. An area which may particularly benefit is e-learning. The fact that interaction with computers is often a fundamental part of study, coupled with the interaction between affective state and learning, makes this an ideal candidate for affective computing developments. The overall aim of the research described in this thesis is to advance the field and promote the uptake of affective computing applications both within the domain of e-learning and in other problem domains. This aim has been addressed with contributions in the areas of tools to infer affective state through physiology, an architecture of a re-usable, component-based model for affective application development, and the construction and subsequent empirical evaluation of a tutoring system that responds to the learner’s affective state. The first contribution put forward a solution that is able to infer the user’s affective state by measuring subtle physiological signals using relatively unobtrusive and low-cost equipment. An empirical study was conducted to evaluate the success of this solution. Results demonstrated that the physiological signals did respond to affective state, and that the platform and methodology were sufficiently robust to detect changes in affective state. The second contribution addressed the ad-hoc and sometimes overly complex nature of affective application development, which may be hindering progress in the field. A conceptual model for affective software development called the Affective Stack Model was introduced.
This model supports a logical separation and loose coupling of reusable functional components to ensure that they may be developed and refined independently of one another in an efficient and streamlined manner. The third major contribution utilized the proposed Affective Stack Model, and the physiological sensing platform, to construct an e-learning tutor that was able to detect and respond to the learner’s affective state in real-time. This demonstrated the real-world applicability and success of the conceptual model, whilst also providing a proof of concept test-bed in which to evaluate the theorized learning gains that may be realized by affective tutoring strategies. An empirical study was conducted to assess the effectiveness of this tutoring system as compared to a non-affect sensing implementation. Results confirmed that there were statistically significant differences whereby students who interacted with the affective tutor had greater levels of perceived learning than students who used the non-affective version. This research has theoretical and practical implications for the development of affective computing applications. The findings confirmed that underlying affective state can be inferred with two physiological signals, paving the way for further evaluation and research into the applications of physiological computing. The Affective Stack Model has also provided a framework to support future affective software development. A significant aspect of this contribution is that this is the first such model to be created which is compatible with the use of third-party, closed source software. This should make a considerable impact in the future as vast possibilities for future affective interfaces have been opened up. 
The development and subsequent evaluation of the affective tutor has substantial practical implications by demonstrating that the Affective Stack Model can be successfully applied to a real-world application to augment traditional learning materials with the capability for affect support. Furthermore, the empirical support that learning gains are attainable should spur new interest and growth in this area.
APA, Harvard, Vancouver, ISO, and other styles
3

Becker-Asano, Christian. "WASABI: affect simulation for agents with believable interactivity /." Heidelberg : Akademische Verlagsgesellschaft Aka, 2008. http://opac.nebis.ch/cgi-bin/showAbstract.pl?u20=9783898383196.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Reynolds, Carson Jonathan 1976. "Adversarial uses of affective computing and ethical implications." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/33881.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2005.
Page 158 blank.
Includes bibliographical references (p. 141-145).
Much existing affective computing research focuses on systems designed to use information related to emotion to benefit users. Many technologies, however, are used in situations their designers did not anticipate and would not have intended. This thesis discusses several adversarial uses of affective computing: uses of systems with the goal of hindering some users. The approach taken is twofold: first, experimental observation of the use of systems that collect affective signals and transmit them to an adversary; second, discussion of normative ethical judgments regarding adversarial uses of these same systems. This thesis examines three adversarial contexts: the Quiz Experiment, the Interview Experiment, and the Poker Experiment. In the Quiz Experiment, participants perform a tedious task that allows them to increase their monetary reward by reporting that they solved more problems than they actually did. The Interview Experiment centers on a job interview where some participants hide or distort information, interviewers are rewarded for hiring the honest, and interviewees are rewarded for being hired. In the Poker Experiment, subjects are asked to play a simple poker-like game against an adversary who has extra affective or game-state information.
These experiments extend existing work on the ethical implications of polygraphs by considering variables (e.g. context or power relationships) other than recognition rate and by using systems where information is completely mediated by computers. In all three experiments it is hypothesized that participants using systems that sense and transmit affective information to an adversary will have degraded performance and significantly different ethical evaluations than those using comparable systems that do not sense or transmit affective information. Analysis of the results of these experiments shows a complex situation in which the context of using affective computing systems bears heavily on reports dealing with ethical implications. The contribution of this thesis is these novel experiments, which solicit participant opinion about the ethical implications of actual affective computing systems, and dimensional metaethics, a procedure for anticipating ethical problems with affective computing systems.
by Carson Jonathan Reynolds.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
5

Bortz, Brennon Christopher. "Using Music and Emotion to Enable Effective Affective Computing." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/90888.

Full text
Abstract:
The computing devices with which we interact daily continue to become ever smaller, more intelligent, and more pervasive. Not only are they becoming more intelligent, but some are developing awareness of a user's affective state. Affective computing—computing that in some way senses, expresses, or modifies affect—is still a field very much in its youth. While progress has been made, the field is still limited by the need for larger sets of diverse, naturalistic, and multimodal data. This work first considers effective strategies for designing psychophysiological studies that permit the assembly of very large samples that cross numerous demographic boundaries, data collection in naturalistic environments, distributed study locations, rapid iterations on study designs, and the simultaneous investigation of multiple research questions. It then explores how commodity hardware and general-purpose software tools can be used to record, represent, store, and disseminate such data. As a realization of these strategies, this work presents a new database from the Emotion in Motion (EiM) study of human psychophysiological response to musical affective stimuli comprising over 23,000 participants and nearly 67,000 psychophysiological responses. Because music presents an excellent tool for the investigation of human response to affective stimuli, this work uses this wealth of data to explore how to design more effective affective computing systems by characterizing the strongest responses to musical stimuli used in EiM. This work identifies and characterizes the strongest of these responses, with a focus on modeling the characteristics of listeners that make them more or less prone to demonstrating strong physiological responses to music stimuli. This dissertation contributes the findings from a number of explorations of the relationships between strong reactions to music and the characteristics and self-reported affect of listeners.
It demonstrates not only that such relationships do exist, but takes steps toward automatically predicting whether or not a listener will exhibit such exceptional responses. Second, this work contributes a flexible strategy and functional system for both successfully executing large-scale, distributed studies of psychophysiology and affect; and for synthesizing, managing, and disseminating the data collected through such efforts. Finally, and most importantly, this work presents the EiM database itself.
Doctor of Philosophy
The computing devices with which we interact daily continue to become ever smaller, more intelligent, and more pervasive. Not only are they becoming more intelligent, but some are developing awareness of a user’s affective state. Affective computing—computing that in some way senses, expresses, or modifies affect—is still a field very much in its youth. While progress has been made, the field is still limited by the need for larger sets of diverse, naturalistic, and multimodal data. This dissertation contributes the findings from a number of explorations of the relationships between strong reactions to music and the characteristics and self-reported affect of listeners. It demonstrates not only that such relationships do exist, but takes steps toward automatically predicting whether or not a listener will exhibit such exceptional responses. Second, this work contributes a flexible strategy and functional system both for successfully executing large-scale, distributed studies of psychophysiology and affect, and for synthesizing, managing, and disseminating the data collected through such efforts. Finally, and most importantly, this work presents the Emotion in Motion (EiM) database, from a study of human affective/psychophysiological response to musical stimuli, comprising over 23,000 participants and nearly 67,000 psychophysiological responses.
APA, Harvard, Vancouver, ISO, and other styles
6

Radits, Markus. "The Affective PDF Reader." Thesis, Växjö University, School of Mathematics and Systems Engineering, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-7033.

Full text
Abstract:

The Affective PDF Reader is a PDF reader combined with affect recognition systems. The aim of the project is to research a way to provide the reader of a PDF with real-time visual feedback while reading the text, to influence the reading experience in a positive way. The visual feedback is given in accordance with the analyzed emotional states of the person reading the text; this is done by capturing and interpreting affective information with a facial expression recognition system. Further enhancements would include analysis of the voice as well as gaze-tracking software, to be able to use the point of gaze when rendering the visualizations. The idea of the Affective PDF Reader mainly arose from admitting that the way we read text on computers, mostly with frozen and dozed-off faces, is somehow an unsatisfactory state, or moreover a lonesome process and a poor form of communication. This work is also inspired by the significant progress and efforts in recognizing emotional states from video and audio signals and the new possibilities that arise from them. The prototype system provided visualizations of footprints in different shapes and colours, controlled by captured facial expressions, to enrich the textual content with affective information. The experience showed that visual feedback controlled by facial expressions can bring another dimension to the reading experience if it is done in a frugal and non-intrusive way, and that the involvement of the users can be enhanced.

APA, Harvard, Vancouver, ISO, and other styles
7

Anderson, Keith William John. "A real-time facial expression recognition system for affective computing." Thesis, Queen Mary, University of London, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.405823.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Villeda, Enrique Edgar León. "Towards affective pervasive computing : emotion detection in intelligent inhabited environments." Thesis, University of Essex, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.438154.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Yates, Heath. "Affective Intelligence in Built Environments." Diss., Kansas State University, 2018. http://hdl.handle.net/2097/38790.

Full text
Abstract:
Doctor of Philosophy
Department of Computer Science
William H. Hsu
The contribution of the proposed dissertation is the application of affective intelligence in human-developed spaces where people live, work, and recreate daily, also known as built environments. Built environments have been known to influence and impact individual affective responses. The implications of built environments for human well-being and mental health necessitate the development of new metrics to measure and detect how humans respond subjectively in built environments. Detection of arousal in built environments, given biometric data and environmental characteristics, via a machine-learning-centric approach provides a new capability to measure human responses to built environments. Work was also conducted on experimental design methodologies for multiple-sensor fusion and detection of affect in built environments. These contributions include exploring new methodologies in applying supervised machine learning algorithms, such as logistic regression, random forests, and artificial neural networks, in the detection of arousal in built environments. Results have shown that a machine learning approach can not only be used to detect arousal in built environments but also for the construction of novel explanatory models of the data.
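As a hedged illustration of the supervised approach this abstract describes, the sketch below trains a tiny logistic-regression arousal detector on hypothetical standardized biometric and environmental features; the feature names and toy data are our assumptions, not material from the dissertation.

```python
import math

# Hypothetical feature vectors [heart_rate_z, skin_conductance_z, ambient_noise_z]
# with arousal labels (1 = aroused, 0 = not aroused); toy data, not study data.
X = [[1.2, 0.9, 0.8], [1.0, 1.1, 0.6], [-0.9, -1.0, -0.7], [-1.1, -0.8, -0.9]]
y = [1, 1, 0, 0]

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Fit weights by per-sample gradient descent on the log-loss."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted P(arousal)
            err = p - yi                     # log-loss gradient factor
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict_arousal(w, b, xi):
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1.0 / (1.0 + math.exp(-z))

w, b = train_logistic(X, y)
```

The same fitted weights also serve as a crude explanatory model: the sign and magnitude of each weight indicate how a feature pushes the predicted arousal, loosely mirroring the abstract's point about explanatory models.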
APA, Harvard, Vancouver, ISO, and other styles
10

Axelrod, Lesley Ann. "Emotional recognition in computing." Thesis, Brunel University, 2010. http://bura.brunel.ac.uk/handle/2438/5758.

Full text
Abstract:
Emotions are fundamental to human lives and decision-making. Understanding and expression of emotional feeling between people forms an intricate web. These complex interactional phenomena are a hot topic for research, as new techniques such as brain imaging give us insights into how emotions are tied to human functions. Communication of emotions is mixed with communication of other types of information (such as factual details), and emotions can be consciously or unconsciously displayed. Affective computer systems, using sensors for emotion recognition and able to make emotive responses, are under development. The increased potential for emotional interaction with products and services, in many domains, is generating much interest. Emotionally enhanced systems have the potential to improve human-computer interaction and thus to improve how systems are used and what they can deliver. They may also have adverse implications, such as creating systems capable of emotional manipulation of users. Affective systems are in their infancy and lack human complexity and capability. This makes it difficult to assess whether human interaction with such systems will actually prove beneficial or desirable to users. By using experimental design, a Wizard of Oz methodology and a game that appeared to respond to the user's emotional signals with human-like capability, I tested user experience of, and reactions to, a system that appeared affective. To assess users' behaviour, I developed a novel affective behaviour coding system called 'affectemes'. I found significant gains in user satisfaction and performance when using an affective system. Those believing the system responded to emotional signals blinked more frequently. If the machine failed to respond to their emotional signals, they increased their efforts to convey emotion, which might be an attempt to 'repair' the interaction. This work highlights how very complex and difficult it is to design and evaluate affective systems.
I identify many issues for future work, including the unconscious nature of emotions and how they are recognised and displayed with affective systems; issues about the power of emotionally interactive systems and their evaluation; and critical ethical issues. These are important considerations for future design of systems that use emotion recognition in computing.
APA, Harvard, Vancouver, ISO, and other styles
11

Coots, Ian. "Deep Learning of Affective Content from Audio for Computing Movie Similarities." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-167976.

Full text
Abstract:
Recommendation systems are in wide use by many different services on the internet today. Most commonly, recommendation systems use a technique called collaborative filtering, which makes recommendations for a user using other users' ratings. As a result, collaborative filtering is limited by the number of user ratings in the system. Content-based recommendations perform direct comparisons on the items to be recommended and thus avoid dependence on user input. In order to implement a content-based recommendation engine, pairwise similarity measures must be calculated for all of the entities in the system. When the entities to be recommended are movies, it can be informative to make comparisons using emotional (or affective) content. This work details the investigation of different methodologies for extracting affective content from movie audio using deep neural networks. First, different types of feature vectors in concert with a variety of model parameters for training were examined in order to project input audio data into a three-dimensional valence-arousal-dominance (VAD) space where affective content can be more easily compared and visualized. Finally, two different similarity measures for direct comparison of movies with respect to their affective content were introduced.
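As a rough sketch of the final step described above: once each movie's audio has been projected to a point in the three-dimensional VAD space, pairwise similarities can be computed directly. The two measures below (inverse-distance and cosine) are common choices we assume for illustration; the thesis's own two measures may differ, and the VAD coordinates are hypothetical.

```python
import math

# Hypothetical (valence, arousal, dominance) coordinates for two movies.
thriller = (0.2, 0.9, 0.6)
comedy = (0.8, 0.6, 0.5)

def euclidean_similarity(vad_a, vad_b):
    """Map Euclidean distance in VAD space to a similarity in (0, 1]."""
    return 1.0 / (1.0 + math.dist(vad_a, vad_b))

def cosine_similarity(vad_a, vad_b):
    """Angle-based similarity; ignores the overall intensity of the affect vector."""
    dot = sum(a * b for a, b in zip(vad_a, vad_b))
    norm_a = math.sqrt(sum(a * a for a in vad_a))
    norm_b = math.sqrt(sum(b * b for b in vad_b))
    return dot / (norm_a * norm_b)
```

The inverse-distance form treats two movies with identical VAD points as maximally similar (score 1.0), while the cosine form only compares the direction of their affect vectors, so the choice matters when intensity of affect is meaningful.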
APA, Harvard, Vancouver, ISO, and other styles
12

Berthelon, Franck. "Modélisation et détection des émotions à partir de données expressives et contextuelles." Phd thesis, Université Nice Sophia Antipolis, 2013. http://tel.archives-ouvertes.fr/tel-00917416.

Full text
Abstract:
We propose a computational model for emotion detection based on human behaviour. For this work, we use Schachter and Singer's two-factor theory to reproduce natural behaviour in our architecture, using both expressive and contextual data. We focus our efforts on the interpretation of expressions by introducing Personalized Emotion Maps (PEMs), and on the contextualisation of emotions via an ontology of emotional context (EmOCA). PEMs are motivated by Scherer's complex model and represent emotions determined by multiple sensors. PEMs are calibrated individually; a regression algorithm then uses them to determine the felt emotion from measurements of bodily expressions. The goal of this architecture is to separate interpretation from the capture of expressions, so as to ease the choice of sensors. Moreover, PEMs can also be used to synthesize emotional expressions. EmOCA uses context to simulate cognitive modulation and to weight the predicted emotion. For this we use an interoperable reasoning tool, an ontology, which allows us to describe and reason about philias and phobias in order to weight the emotion computed from the expressions. We also present a prototype that uses facial expressions to evaluate emotion recognition in real time from video sequences. Furthermore, we observed that the system exhibits a kind of hysteresis during emotional change, as suggested by Scherer's psychological model.
APA, Harvard, Vancouver, ISO, and other styles
13

Gamberini, Jacopo. "AFFECTIVE COMPUTING IN SMART EDUCATION: Stato dell'Arte e Sviluppo di un Prototipo." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020.

Find full text
Abstract:
The development of technology, and of artificial intelligence in particular, has been remarkable in recent years and has enabled many applications in different vertical markets. This thesis examines a sub-field of AI, Affective Computing, whose role is to develop systems capable of recognizing and simulating human emotions and moods. The thesis provides a review of the state of the art of Affective Computing, and also describes a possible application in the Smart Education domain. The project consists of reading a video stream to follow the changes in the facial expressions of a face, in order to detect the emotions a person feels at given moments and associate them with those expressions. Finally, a report is produced containing the mean values of all the detected emotions.
APA, Harvard, Vancouver, ISO, and other styles
14

Moshkina, Lilia V. "An integrative framework of time-varying affective robotic behavior." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/39568.

Full text
Abstract:
As robots become more and more prevalent in our everyday life, making sure that our interactions with them are natural and satisfactory is of paramount importance. Given the propensity of humans to treat machines as social actors, and the integral role affect plays in human life, providing robots with affective responses is a step towards making our interaction with them more intuitive. To the end of promoting more natural, satisfying and effective human-robot interaction and enhancing robotic behavior in general, an integrative framework of time-varying affective robotic behavior was designed and implemented on a humanoid robot. This psychologically inspired framework (TAME) encompasses 4 different yet interrelated affective phenomena: personality Traits, affective Attitudes, Moods and Emotions. Traits determine consistent patterns of behavior across situations and environments and are generally time-invariant; attitudes are long-lasting and reflect likes or dislikes towards particular objects, persons, or situations; moods are subtle and relatively short in duration, biasing behavior according to favorable or unfavorable conditions; and emotions provide a fast yet short-lived response to environmental contingencies. The software architecture incorporating the TAME framework was designed as a stand-alone process to promote platform-independence and applicability to other domains. In this dissertation, the effectiveness of affective robotic behavior was explored and evaluated in a number of human-robot interaction studies with over 100 participants. In one of these studies, the impact of Negative Mood and emotion of Fear was assessed in a mock-up search-and-rescue scenario, where the participants found the robot expressing affect more compelling, sincere, convincing and "conscious" than its non-affective counterpart. 
Another study showed that different robotic personalities are better suited for different tasks: an extraverted robot was found to be more welcoming and fun for a task as a museum robot guide, where an engaging and gregarious demeanor was expected; whereas an introverted robot was rated as more appropriate for a problem solving task requiring concentration. To conclude, multi-faceted robotic affect can have far-reaching practical benefits for human-robot interaction, from making people feel more welcome where gregariousness is expected to making unobtrusive partners for problem solving tasks to saving people's lives in dangerous situations.
APA, Harvard, Vancouver, ISO, and other styles
15

Ketchum, Devin Kyle. "The Use of the CAfFEINE Framework in a Step-by-Step Assembly Guide." Thesis, Virginia Tech, 2020. http://hdl.handle.net/10919/96609.

Full text
Abstract:
Today's technology is becoming more interactive with voice assistants like Siri. However, interactive systems such as Siri make mistakes. The purpose of this thesis is to explore using affect as an implicit feedback channel so that such mistakes would be easily corrected in real time. The CAfFEINE Framework, which was created by Dr. Saha, is a context-aware affective feedback loop in an intelligent environment. For the research described in this thesis, the focus will be on analyzing a user's physiological response to the service provided by an intelligent environment. To test this feedback loop, an experiment was constructed using an on-screen, step-by-step assembly guide for a Tangram puzzle. To categorize the user's response to the experiment, baseline readings were gathered for a user's stressed and non-stressed state. The Paced Stroop Test and two other baseline tests were conducted to gather these two states. The data gathered in the baseline tests was then used to train a support vector machine to predict the user's response to the Tangram experiment. During the data analysis phase of the research, the results for the predictions on the Tangram experiment were not as expected. Multiple trials of training data for the support vector machine were explored, but the data gathered throughout this research was not enough to draw proper conclusions. More focus was then given to analyzing the pre-processed data of the baseline tests in an attempt to find a factor or group of factors to determine if the user's physiological responses would be useful to train the Support Vector Machine. There were trends found when comparing the area under the curves of the Paced Stroop Test phasic driver plots. It was found that these comparison factors might be a useful approach for differentiating users based upon their physiological responses during the Paced Stroop Test.
Master of Science
The purpose of this thesis was to use the CAfFEINE Framework, proposed by Dr. Saha, in a real-world environment. Dr. Saha's Framework utilizes a user's physiological responses (e.g., heart rate) in a smart environment to give information to the smart devices. For example, suppose Siri gave a user directions to someone's home and told that user to turn right when the user knew they needed to turn left. That user would have a physical reaction: their heart rate would increase. If the user were wearing a smart watch, Siri would be able to see the heart rate increase and realize, from past experiences with that user, that the information she gave was incorrect. She would then be able to correct herself. My research focused on measuring user reaction to a smart service provided in a real-world situation, using a Tangram puzzle as a mock version of an industrial assembly situation. The users were asked to follow on-screen instructions to assemble the Tangram puzzle. Their reactions were recorded through a smart watch and analyzed post-experiment. Based on the results of a Paced Stroop Test they took before the experiment, a computer algorithm would predict their stress levels for each service provided by the step-by-step instruction guide. However, the results did not turn out as expected. Therefore, the rest of the research focused on why the results did not support Dr. Saha's previous Framework results.
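The area-under-the-curve comparison of phasic-driver plots mentioned in the abstract can be sketched with a simple trapezoidal approximation. The skin-conductance sample values below are hypothetical, not data from the thesis.

```python
# Hypothetical phasic-driver (skin conductance) samples at 0.5 s intervals;
# toy values, not data from the experiments described above.
stroop_phasic = [0.0, 0.4, 0.9, 0.7, 0.3, 0.1]    # stressed baseline
resting_phasic = [0.0, 0.1, 0.2, 0.1, 0.0, 0.0]   # non-stressed baseline

def trapezoid_auc(samples, dt):
    """Area under a uniformly sampled signal via the trapezoidal rule."""
    return sum((a + b) * dt / 2.0 for a, b in zip(samples, samples[1:]))

# A larger stressed/resting AUC ratio suggests a stronger phasic response,
# one possible per-user comparison factor of the kind the abstract describes.
stress_ratio = trapezoid_auc(stroop_phasic, 0.5) / trapezoid_auc(resting_phasic, 0.5)
```

Comparing such ratios across users is one plausible way to decide whose physiological responses are distinct enough to train a classifier on, in the spirit of the differentiation the abstract reports.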
APA, Harvard, Vancouver, ISO, and other styles
16

Ayoub, Issa. "Multimodal Affective Computing Using Temporal Convolutional Neural Network and Deep Convolutional Neural Networks." Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/39337.

Full text
Abstract:
Affective computing has gained significant attention from researchers in the last decade due to the wide variety of applications that can benefit from this technology. Often, researchers describe affect using emotional dimensions such as arousal and valence. Valence refers to the spectrum of negative to positive emotions, while arousal determines the level of excitement. Describing emotions through continuous dimensions (e.g. valence and arousal) allows us to encode subtle and complex affects, as opposed to discrete emotions such as the six basic categories: happiness, anger, fear, disgust, sadness, and neutral. Recognizing spontaneous and subtle emotions remains a challenging problem for computers. In our work, we employ two modalities of information: video and audio. Hence, we extract visual and audio features using deep neural network models. Given that emotions are time-dependent, we apply the Temporal Convolutional Neural Network (TCN) to model the variations in emotions. Additionally, we investigate an alternative model that combines a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN). Given our inability to fit the latter deep model into main memory, we divide the RNN into smaller segments and propose a scheme to back-propagate gradients across all segments. We configure the hyperparameters of all models using Gaussian processes to obtain a fair comparison between the proposed models. Our results show that TCN outperforms RNN for the recognition of the arousal and valence emotional dimensions. Therefore, we propose the adoption of TCN for emotion detection problems as a baseline method for future work. Our experimental results show that TCN outperforms all RNN-based models, yielding a concordance correlation coefficient of 0.7895 (vs. 0.7544) on valence and 0.8207 (vs. 0.7357) on arousal on the validation set of the SEWA dataset for emotion prediction.
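The concordance correlation coefficient (CCC) that this abstract reports (0.7895 on valence, 0.8207 on arousal) is the standard evaluation metric for continuous affect prediction. A minimal, dependency-free sketch of how it is computed between predictions and gold labels:

```python
# Concordance correlation coefficient: penalizes both low correlation
# and systematic bias between predictions x and ground truth y.

def ccc(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Perfect agreement gives 1.0.
print(ccc([0.1, 0.5, 0.9], [0.1, 0.5, 0.9]))  # -> 1.0
```

Unlike plain Pearson correlation, CCC drops below 1 when predictions track the labels but with an offset or scale mismatch, which is why it is favored for valence/arousal regression.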
APA, Harvard, Vancouver, ISO, and other styles
17

ROMEO, LUCA. "Applied Machine Learning for Health Informatics: Human Motion Analysis and Affective Computing Application." Doctoral thesis, Università Politecnica delle Marche, 2018. http://hdl.handle.net/11566/253031.

Full text
Abstract:
Il monitoraggio della qualità della vita e del benessere della persona rappresenta una sfida aperta nello scenario sanitario. La necessità di risolvere questo task nella nuova era dell'Intelligenza Artificiale porta all’applicazione di metodi dal campo del machine learning. Gli obiettivi e i contributi di questa tesi riflettono le attività di ricerca svolte (i) nell’ambito dell’analisi del movimento: valutazione e monitoraggio automatico del movimento umano durante la riabilitazione fisica, e (ii) nell’ambito dell’affective computing: stima dello stato affettivo del soggetto. Nel primo tema il candidato presenta un algoritmo in grado di estrarre le caratteristiche di movimento clinicamente rilevanti dalle traiettorie dello skeleton acquisite da un sensore RGBD, e fornire un punteggio sulla prestazione del soggetto. L'approccio proposto si basa su regole derivate da indicazioni cliniche e su un algoritmo di machine learning (i.e., Hidden Semi-Markov Model). L'affidabilità dell'approccio proposto è studiata su un dataset collezionato dal candidato rispetto ad un algoritmo gold standard e alla valutazione clinica. I risultati sostengono l'uso della metodologia proposta per la valutazione quantitativa delle prestazioni motorie durante la riabilitazione fisica. Nel secondo topic il candidato propone l’applicazione del framework di Multiple Instance Learning per l'apprendimento della risposta emotiva in presenza di label continui ed ambigui. Questa variabilità è spesso presente nella risposta affettiva ad uno stimolo esterno (e.g., interazione multimediale). L'affidabilità dell'approccio di Multiple Instance Learning è indagata su un database di benchmark e un dataset più vicino alle problematiche del mondo reale acquisito dal candidato. I risultati ottenuti evidenziano come la metodologia proposta è consistente per la stima dello stato affettivo.
The monitoring of quality of life and of the subject's well-being represents an open challenge in the healthcare scenario. The need to solve this task in the new era of Artificial Intelligence leads to the application of methods from the machine learning field. The objectives and contributions of this thesis reflect the research activities performed on the topics of (i) human motion analysis: the automatic monitoring and assessment of human movement during physical rehabilitation, and (ii) affective computing: inferring the affective state of the subject. On the first topic, the author presents an algorithm able to extract clinically relevant motion features from RGB-D skeleton joint trajectories and provide a score for the subject's performance. The proposed approach is based on rules derived from clinical guidance and on a machine learning algorithm (i.e., a Hidden Semi-Markov Model). The reliability of the proposed approach is tested on a dataset collected by the author, against both a gold-standard algorithm and the clinical assessment. The results support the use of the proposed methodology for quantitatively assessing motor performance during physical rehabilitation. On the second topic, the author proposes the application of a Multiple Instance Learning (MIL) framework for learning emotional responses in the presence of continuous and ambiguous labels. This is often the case with affective responses to external stimuli (e.g., multimedia interaction). The reliability of the MIL approach is investigated on a benchmark database and on a dataset, collected by the author, that is closer to real-world conditions. The obtained results show that the applied methodology is consistent for predicting the human affective response.
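The Multiple Instance Learning setting described above — one ambiguous label per bag (e.g. a whole multimedia session), no labels for the individual instances (time segments) — is commonly handled with a max-pooling assumption. This is a generic sketch of that idea with hypothetical scores, not the thesis's trained model:

```python
# Max-pooling MIL assumption: a bag is positive if any of its
# instances is. The per-instance scorer is a placeholder here.

def bag_score(instance_scores):
    """Score a bag by its strongest instance."""
    return max(instance_scores)

def bag_label(instance_scores, threshold=0.5):
    """Binarize the bag-level score against a decision threshold."""
    return 1 if bag_score(instance_scores) >= threshold else 0

# Per-segment affect scores for two hypothetical sessions:
print(bag_label([0.1, 0.2, 0.8]))  # -> 1 (one strongly affective segment)
print(bag_label([0.1, 0.2, 0.3]))  # -> 0
```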
APA, Harvard, Vancouver, ISO, and other styles
18

Yacoubi, Alya. "Vers des agents conversationnels capables de réguler leurs émotions : un modèle informatique des tendances à l’action." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS378/document.

Full text
Abstract:
Les agents virtuels conversationnels ayant un comportement social reposent souvent sur au moins deux disciplines différentes : l’informatique et la psychologie. Dans la plupart des cas, les théories psychologiques sont converties en un modèle informatique afin de permettre aux agents d’adopter des comportements crédibles. Nos travaux de thèse se positionnent au croisement de ces deux champs disciplinaires. Notre objectif est de renforcer la crédibilité des agents conversationnels. Nous nous intéressons aux agents conversationnels orientés tâche, qui sont utilisés dans un contexte professionnel pour produire des réponses à partir d’une base de connaissances métier. Nous proposons un modèle affectif pour ces agents qui s’inspire des mécanismes affectifs chez l’humain. L’approche que nous avons choisie de mettre en œuvre dans notre modèle s’appuie sur la théorie des Tendances à l’Action en psychologie. Nous avons proposé un modèle des émotions en utilisant un formalisme inspiré de la logique BDI pour représenter les croyances et les buts de l’agent. Ce modèle a été implémenté dans une architecture d’agent conversationnel développée au sein de l’entreprise DAVI. Afin de confirmer la pertinence de notre approche, nous avons réalisé plusieurs études expérimentales. La première porte sur l’évaluation d’expressions verbales de la tendance à l’action. La deuxième porte sur l’impact des différentes stratégies de régulation possibles sur la perception de l’agent par l’utilisateur. Enfin, la troisième étude porte sur l’évaluation des agents affectifs en interaction avec des participants. Nous montrons que le processus de régulation que nous avons implémenté permet d’augmenter la crédibilité et le professionnalisme perçu des agents, et plus généralement qu’ils améliorent l’interaction. Nos résultats mettent ainsi en avant la nécessité de prendre en considération les deux mécanismes émotionnels complémentaires : la génération et la régulation des réponses émotionnelles. 
Ils ouvrent des perspectives sur les différentes manières de gérer les émotions et leur impact sur la perception de l’agent
Conversational virtual agents with social behavior are often based on at least two different disciplines: computer science and psychology. In most cases, psychological findings are converted into computational mechanisms in order to make agents look and behave in a believable manner. In this work, we aim at increasing conversational agents' believability and making human-agent interaction more natural by modelling emotions. More precisely, we are interested in task-oriented conversational agents, which are used as a customer-relationship channel to respond to users' requests. We propose an affective model of the generation and control of emotional responses during a task-oriented interaction. Our proposed model is based, on the one hand, on the theory of Action Tendencies (AT) in psychology to generate emotional responses during the interaction. On the other hand, the emotional control mechanism is inspired by social emotion regulation in empirical psychology. Both mechanisms use the agent's goals, beliefs, and ideals. This model has been implemented in an agent architecture endowed with a natural language processing engine developed by the company DAVI. In order to confirm the relevance of our approach, we conducted several experimental studies. The first validated verbal expressions of action tendency in a human-agent dialogue. In the second, we studied the impact of different emotional regulation strategies on the user's perception of the agent. This study allowed us to design a social regulation algorithm based on theoretical and empirical findings. Finally, the third study focuses on the evaluation of emotional agents in real-time interactions. Our results show that the regulation process contributes to increasing the credibility and perceived competence of agents, as well as to improving the interaction. Our results highlight the need to take into consideration two complementary emotional mechanisms: the generation and the regulation of emotional responses.
They open perspectives on different ways of managing emotions and their impact on the perception of the agent.
APA, Harvard, Vancouver, ISO, and other styles
19

Tsoukalas, Kyriakos. "On Affective States in Computational Cognitive Practice through Visual and Musical Modalities." Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/104069.

Full text
Abstract:
Learners' affective states correlate with learning outcomes. A key aspect of instructional design is the choice of modalities by which learners interact with instructional content. The existing literature focuses on quantifying learning outcomes without quantifying learners' affective states during instructional activities. An investigation of how learners feel during instructional activities will inform the instructional systems design methodology of a method for quantifying the effects of individually available modalities on learners' affect. The objective of this dissertation is to investigate the relationship between affective states and learning modalities of instructional computing. During an instructional activity, learners' enjoyment, excitement, and motivation are measured before and after a computing activity offered in three distinct modalities. The modalities concentrate on visual and musical computing for the practice of computational thinking. An affective model for the practice of computational thinking through musical expression was developed and validated. This dissertation begins with a literature review of relevant theories on embodied cognition, learning, and affective states. It continues with designing and fabricating a prototype instructional apparatus and its virtual simulation as a web service, both for the practice of computational thinking through musical expression, and concludes with a study investigating participants' affective states before and after four distinct online computing activities. This dissertation builds on and contributes to extant literature by validating an affective model for computational thinking practice through self-expression. It also proposes a nomological network for the construct of computational thinking for future exploration of the construct, and develops a method for the assessment of instructional activities based on predefined levels of skill and knowledge.
Doctor of Philosophy
This dissertation investigates the role of learners' affect during instructional activities of visual and musical computing. More specifically, learners' enjoyment, excitement, and motivation are measured before and after a computing activity offered in four distinct ways. The computing activities are based on a prototype instructional apparatus, which was designed and fabricated for the practice of computational thinking. A study was performed using a virtual simulation accessible via internet browser. The study suggests that maintaining enjoyment during instructional activities is a more direct path to academic motivation than excitement.
APA, Harvard, Vancouver, ISO, and other styles
20

Baltrušaitis, Tadas. "Automatic facial expression analysis." Thesis, University of Cambridge, 2014. https://www.repository.cam.ac.uk/handle/1810/245253.

Full text
Abstract:
Humans spend a large amount of their time interacting with computers of one type or another. However, computers are emotionally blind and indifferent to the affective states of their users. Human-computer interaction which does not consider emotions, ignores a whole channel of available information. Faces contain a large portion of our emotionally expressive behaviour. We use facial expressions to display our emotional states and to manage our interactions. Furthermore, we express and read emotions in faces effortlessly. However, automatic understanding of facial expressions is a very difficult task computationally, especially in the presence of highly variable pose, expression and illumination. My work furthers the field of automatic facial expression tracking by tackling these issues, bringing emotionally aware computing closer to reality. Firstly, I present an in-depth analysis of the Constrained Local Model (CLM) for facial expression and head pose tracking. I propose a number of extensions that make location of facial features more accurate. Secondly, I introduce a 3D Constrained Local Model (CLM-Z) which takes full advantage of depth information available from various range scanners. CLM-Z is robust to changes in illumination and shows better facial tracking performance. Thirdly, I present the Constrained Local Neural Field (CLNF), a novel instance of CLM that deals with the issues of facial tracking in complex scenes. It achieves this through the use of a novel landmark detector and a novel CLM fitting algorithm. CLNF outperforms state-of-the-art models for facial tracking in presence of difficult illumination and varying pose. Lastly, I demonstrate how tracked facial expressions can be used for emotion inference from videos. I also show how the tools developed for facial tracking can be applied to emotion inference in music.
APA, Harvard, Vancouver, ISO, and other styles
21

Jaimes, Luis Gabriel. "On the Selection of Just-in-time Interventions." Scholar Commons, 2015. https://scholarcommons.usf.edu/etd/5506.

Full text
Abstract:
A deeper understanding of human physiology, combined with improvements in sensing technologies, is fulfilling the vision of affective computing, where applications monitor and react to changes in affect. Further, the proliferation of commodity mobile devices is extending these applications into the natural environment, where they become a pervasive part of our daily lives. This work examines one such pervasive affective computing application with significant implications for long-term health and quality of life: adaptive just-in-time interventions (AJITIs). We discuss the fundamental components needed to design AJITIs for one kind of affective data, namely stress. Chronic stress has significant long-term behavioral and physical health consequences, including an increased risk of cardiovascular disease, cancer, anxiety, and depression. This dissertation presents the state of the art of just-in-time interventions for stress. It includes a new architecture that is used to describe the most important issues in the design, implementation, and evaluation of AJITIs. Then, the most important mechanisms available in the literature are described and classified. The dissertation also presents a simulation model to study and evaluate different strategies and algorithms for intervention selection. Then, a new hybrid mechanism based on value iteration and the Monte Carlo simulation method is proposed. This semi-online algorithm dynamically builds a transition probability matrix (TPM), which is used to obtain a new policy for intervention selection. We present this algorithm in two different versions. The first version uses a pre-determined number of stress episodes as a training set to create a TPM, and then to generate the policy that will be used to select interventions in the future. In the second version, we use each new stress episode to update the TPM, and a pre-determined number of episodes to update our selection policy for interventions.
We also present a completely online learning algorithm for intervention selection based on Q-learning with eligibility traces. We show that this algorithm could be used by an affective computing system to select and deliver interventions in mobile environments. Finally, we conduct post-hoc experiments and simulations to demonstrate the feasibility of both real-time stress forecasting and stress intervention adaptation and optimization.
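The online algorithm mentioned above, Q-learning with eligibility traces, has a standard tabular form. This is a generic, simplified sketch of one Q(lambda) update (it omits Watkins's trace reset after exploratory actions); the states, actions, and reward are hypothetical stand-ins for stress episodes and interventions, not the dissertation's actual model.

```python
# One simplified tabular Q(lambda) update: the eligibility trace E
# spreads the TD error backwards over recently visited state-action
# pairs, then decays.

def q_lambda_update(Q, E, s, a, r, s2, alpha=0.1, gamma=0.9, lam=0.8):
    best_next = max(Q[s2].values())
    delta = r + gamma * best_next - Q[s][a]   # temporal-difference error
    E[s][a] += 1.0                            # mark the visited pair
    for state in Q:
        for action in Q[state]:
            Q[state][action] += alpha * delta * E[state][action]
            E[state][action] *= gamma * lam   # decay all traces
    return Q, E

states = ["calm", "stressed"]
actions = ["breathing_prompt", "no_intervention"]
Q = {s: {a: 0.0 for a in actions} for s in states}
E = {s: {a: 0.0 for a in actions} for s in states}

# Reward +1: the breathing prompt resolved a stress episode.
Q, E = q_lambda_update(Q, E, "stressed", "breathing_prompt", 1.0, "calm")
print(Q["stressed"]["breathing_prompt"])  # -> 0.1
```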
APA, Harvard, Vancouver, ISO, and other styles
22

Feghoul, Kevin. "Deep learning for simulation in healthcare : Application to affective computing and surgical data science." Electronic Thesis or Diss., Université de Lille (2022-....), 2024. http://www.theses.fr/2024ULILS033.

Full text
Abstract:
Dans cette thèse, nous abordons diverses tâches dans les domaines de l’informatique affective et de la science des données chirurgicales qui ont le potentiel d’améliorer la simulation médicale. Plus précisément, nous nous concentrons sur quatre défis clés : la détection du stress, la reconnaissance des émotions, l’évaluation des compétences chirurgicales et la reconnaissance des gestes chirurgicaux. La simulation est devenue un élément important de la formation médicale, offrant aux étudiants la possibilité d’acquérir de l’expérience et de perfectionner leurs compétences dans un environnement sûr et contrôlé. Cependant, malgré des avancées significatives, la formation basée sur la simulation fait encore face à d’importants défis qui limitent son plein potentiel. Parmi ces défis figurent la garantie de scénarios réalistes, la prise en compte des variations individuelles dans les réponses émotionnelles des apprenants, et, pour certains types de simulations, comme les simulations chirurgicales, l’évaluation objective des performances. Intégrer le suivi des états cognitifs, des niveaux de stress et des états émotionnels des étudiants en médecine, ainsi que l’incorporation d’outils fournissant des retours objectifs et personnalisés, en particulier pour les simulations chirurgicales, pourrait aider à pallier ces limitations. Ces dernières années, l’apprentissage profond a révolutionné notre façon de résoudre des problèmes complexes dans diverses disciplines, entraînant des avancées significatives en informatique affective et en science des données chirurgicales. Cependant, plusieurs défis spécifiques à ces domaines subsistent. En informatique affective, la reconnaissance automatique du stress et des émotions est difficile en raison des problèmes de définition de ces états et de la variabilité de leur expression chez les individus.
De plus, la nature multimodale de l’expression du stress et des émotions ajoute une couche de complexité supplémentaire, car l’intégration efficace de sources de données diverses demeure un défi majeur. En science des données chirurgicales, la variabilité des techniques chirurgicales entre les praticiens, la nature dynamique des environnements chirurgicaux, et l’intégration de plusieurs modalités soulignent les difficultés pour l’évaluation automatique des compétences chirurgicales et la reconnaissance des gestes. La première partie de cette thèse propose un nouveau cadre de fusion multimodale basé sur le Transformer pour la détection du stress, en exploitant plusieurs techniques de fusion. Ce cadre intègre des signaux physiologiques provenant de deux capteurs, chaque capteur étant traité comme une modalité distincte. Pour la reconnaissance des émotions, nous proposons une approche multimodale innovante utilisant un réseau de neurones convolutifs sur graphes (GCN) pour fusionner efficacement les représentations intermédiaires de plusieurs modalités, extraites à l’aide de Transformer encoders unimodaux. Dans la deuxième partie de cette thèse, nous introduisons un nouveau cadre d’apprentissage profond qui combine un GCN avec un Transformer encoder pour l’évaluation des compétences chirurgicales, en exploitant des séquences de données de squelettes de mains. Nous évaluons notre approche en utilisant des données issues de deux tâches de simulation chirurgicale que nous avons collectées. Nous proposons également un nouveau cadre multimodal basé sur le Transformer pour la reconnaissance des gestes chirurgicaux, intégrant un module itératif de raffinement multimodal afin d’améliorer la fusion des informations complémentaires entre différentes modalités.
Pour pallier les limitations des ensembles de données existants en reconnaissance des gestes chirurgicaux, nous avons collecté deux nouveaux ensembles de données spécifiquement conçus pour cette tâche, sur lesquels nous avons effectué des benchmarks unimodaux et multimodaux pour le premier ensemble de données et des benchmarks unimodaux pour le second
In this thesis, we address various tasks within the fields of affective computing and surgical data science that have the potential to enhance medical simulation. Specifically, we focus on four key challenges: stress detection, emotion recognition, surgical skill assessment, and surgical gesture recognition. Simulation has become a crucial component of medical training, offering students the opportunity to gain experience and refine their skills in a safe, controlled environment. However, despite significant advancements, simulation-based training still faces important challenges that limit its full potential. Some of these challenges include ensuring realistic scenarios, addressing individual variations in learners' emotional responses, and, for certain types of simulations, such as surgical simulation, providing objective assessments. Integrating the monitoring of medical students' cognitive states, stress levels and emotional states, along with incorporating tools that provide objective and personalized feedback, especially for surgical simulations, could help address these limitations. In recent years, deep learning has revolutionized the way we solve complex problems across various disciplines, leading to significant advancements in affective computing and surgical data science. However, several domain-specific challenges remain. In affective computing, automatically recognizing stress and emotions is challenging due to difficulties in defining these states and the variability in their expression across individuals. Furthermore, the multimodal nature of stress and emotion expression introduces another layer of complexity, as effectively integrating diverse data sources remains a significant challenge. In surgical data science, the variability in surgical techniques across practitioners, the dynamic nature of surgical environments, and the challenge of effectively integrating multiple modalities highlight ongoing challenges in surgical skill assessment and gesture recognition.
The first part of this thesis introduces a novel Transformer-based multimodal framework for stress detection that leverages multiple fusion techniques. This framework integrates physiological signals from two sensors, with each sensor's data treated as a distinct modality. For emotion recognition, we propose a novel multimodal approach that employs a Graph Convolutional Network (GCN) to effectively fuse intermediate representations from multiple modalities, extracted using unimodal Transformer encoders. In the second part of this thesis, we introduce a new deep learning framework that combines a GCN with a Transformer encoder for surgical skill assessment, leveraging sequences of hand skeleton data. We evaluate our approach using two surgical simulation tasks that we have collected. Additionally, we propose a novel Transformer-based multimodal framework for surgical gesture recognition that incorporates an iterative multimodal refinement module to enhance the fusion of complementary information from different modalities. To address existing dataset limitations in surgical gesture recognition, we collected two new datasets specifically designed for this task, on which we conducted unimodal and multimodal benchmarks for the first dataset and unimodal benchmarks for the second.
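The frameworks described above share one structural pattern: each modality is encoded separately, and the intermediate representations are then merged. A minimal, dependency-free sketch of that late-fusion pattern, with placeholder statistical "encoders" standing in for the thesis's Transformer and GCN models, and invented sensor values:

```python
# Late fusion sketch: per-modality encoders followed by a merge step.
# The encoders here are placeholders (summary statistics), not the
# thesis's learned models.

def encode_physio(signal):
    """Placeholder unimodal encoder: mean and variance of one sensor."""
    n = len(signal)
    mean = sum(signal) / n
    var = sum((x - mean) ** 2 for x in signal) / n
    return [mean, var]

def fuse(representations):
    """Simplest fusion: concatenate the per-modality representations."""
    return [v for rep in representations for v in rep]

hr = [72, 75, 74, 90]        # heart rate per window (hypothetical)
eda = [2.1, 2.3, 2.2, 3.8]   # electrodermal activity (hypothetical)

fused = fuse([encode_physio(hr), encode_physio(eda)])
print(len(fused))  # -> 4: two features from each of the two modalities
```

A downstream classifier or regressor would then operate on `fused`; the thesis's contribution lies in replacing both the encoders and the concatenation step with learned, iteratively refined fusion modules.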
APA, Harvard, Vancouver, ISO, and other styles
23

Vielzeuf, Valentin. "Apprentissage neuronal profond pour l'analyse de contenus multimodaux et temporels." Thesis, Normandie, 2019. http://www.theses.fr/2019NORMC229/document.

Full text
Abstract:
Notre perception est par nature multimodale, i.e. fait appel à plusieurs de nos sens. Pour résoudre certaines tâches, il est donc pertinent d’utiliser différentes modalités, telles que le son ou l’image. Cette thèse s’intéresse à cette notion dans le cadre de l’apprentissage neuronal profond. Pour cela, elle cherche à répondre à une problématique en particulier : comment fusionner les différentes modalités au sein d’un réseau de neurones ? Nous proposons tout d’abord d’étudier un problème d’application concret : la reconnaissance automatique des émotions dans des contenus audio-visuels. Cela nous conduit à différentes considérations concernant la modélisation des émotions et plus particulièrement des expressions faciales. Nous proposons ainsi une analyse des représentations de l’expression faciale apprises par un réseau de neurones profonds. De plus, cela permet d’observer que chaque problème multimodal semble nécessiter l’utilisation d’une stratégie de fusion différente. C’est pourquoi nous proposons et validons ensuite deux méthodes pour obtenir automatiquement une architecture neuronale de fusion efficace pour un problème multimodal donné, la première se basant sur un modèle central de fusion et ayant pour visée de conserver une certaine interprétation de la stratégie de fusion adoptée, tandis que la seconde adapte une méthode de recherche d'architecture neuronale au cas de la fusion, explorant un plus grand nombre de stratégies et atteignant ainsi de meilleures performances. Enfin, nous nous intéressons à une vision multimodale du transfert de connaissances. En effet, nous détaillons une méthode non traditionnelle pour effectuer un transfert de connaissances à partir de plusieurs sources, i.e. plusieurs modèles pré-entraînés.
Pour cela, une représentation neuronale plus générale est obtenue à partir d’un modèle unique, qui rassemble la connaissance contenue dans les modèles pré-entraînés et conduit à des performances à l'état de l'art sur une variété de tâches d'analyse de visages.
Our perception is by nature multimodal, i.e. it appeals to many of our senses. To solve certain tasks, it is therefore relevant to use different modalities, such as sound or image. This thesis focuses on this notion in the context of deep learning. For this, it seeks to answer a particular question: how to merge the different modalities within a deep neural network? We first propose to study a problem of concrete application: the automatic recognition of emotion in audio-visual contents. This leads us to different considerations concerning the modeling of emotions and more particularly of facial expressions. We thus propose an analysis of the representations of facial expression learned by a deep neural network. In addition, we observe that each multimodal problem appears to require the use of a different merge strategy. This is why we propose and validate two methods to automatically obtain an efficient fusion neural architecture for a given multimodal problem: the first is based on a central fusion network and aims at preserving an easy interpretation of the adopted fusion strategy, while the second adapts a method of neural architecture search to the case of multimodal fusion, exploring a greater number of strategies and therefore achieving better performance. Finally, we are interested in a multimodal view of knowledge transfer. Indeed, we detail a non-traditional method to transfer knowledge from several sources, i.e. from several pre-trained models. For that, a more general neural representation is obtained from a single model, which brings together the knowledge contained in the pre-trained models and leads to state-of-the-art performances on a variety of facial analysis tasks.
APA, Harvard, Vancouver, ISO, and other styles
24

Abd, Gaus Yona Falinie. "Artificial intelligence system for continuous affect estimation from naturalistic human expressions." Thesis, Brunel University, 2018. http://bura.brunel.ac.uk/handle/2438/16348.

Full text
Abstract:
The analysis and automatic affect estimation system from human expression has been acknowledged as an active research topic in computer vision community. Most reported affect recognition systems, however, only consider subjects performing well-defined acted expression, in a very controlled condition, so they are not robust enough for real-life recognition tasks with subject variation, acoustic surrounding and illumination change. In this thesis, an artificial intelligence system is proposed to continuously (represented along a continuum e.g., from -1 to +1) estimate affect behaviour in terms of latent dimensions (e.g., arousal and valence) from naturalistic human expressions. To tackle the issues, feature representation and machine learning strategies are addressed. In feature representation, human expression is represented by modalities such as audio, video, physiological signal and text modality. Hand- crafted features is extracted from each modality per frame, in order to match with consecutive affect label. However, the features extracted maybe missing information due to several factors such as background noise or lighting condition. Haar Wavelet Transform is employed to determine if noise cancellation mechanism in feature space should be considered in the design of affect estimation system. Other than hand-crafted features, deep learning features are also analysed in terms of the layer-wise; convolutional and fully connected layer. Convolutional Neural Network such as AlexNet, VGGFace and ResNet has been selected as deep learning architecture to do feature extraction on top of facial expression images. Then, multimodal fusion scheme is applied by fusing deep learning feature and hand-crafted feature together to improve the performance. In machine learning strategies, two-stage regression approach is introduced. In the first stage, baseline regression methods such as Support Vector Regression are applied to estimate each affect per time. 
Then, in the second stage, subsequent models such as a Time Delay Neural Network, Long Short-Term Memory and a Kalman Filter are proposed to model the temporal relationships between consecutive estimations of each affect dimension. In doing so, the temporal information employed by a subsequent model is not biased by the high variability present in consecutive frames, and at the same time it allows the network to exploit the slowly changing dynamics of emotional states more efficiently. Following the two-stage regression approach for unimodal affect analysis, the fusion of information from different modalities is elaborated. Continuous emotion recognition in the wild is leveraged by investigating mathematical modelling for each emotion dimension. Linear Regression, Exponent Weighted Decision Fusion and Multi-Gene Genetic Programming are implemented to quantify the relationship between the modalities. In summary, the research work presented in this thesis reveals a fundamental approach to automatically estimate affect values continuously from naturalistic human expression. The proposed system, which consists of feature smoothing, deep learning features, a two-stage regression framework and fusion using mathematical equations between modalities, is demonstrated. It offers a strong basis towards the development of artificial intelligence systems for continuous affect estimation, and more broadly towards building a real-time emotion recognition system for human-computer interaction.
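The two-stage idea described in this abstract can be illustrated with a minimal sketch. Here an ordinary least-squares fit stands in for Support Vector Regression in stage one, and a simple 1-D Kalman filter (one of the subsequent models named above) provides stage two; all data, features and parameters are synthetic and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: per-frame features and continuous affect labels.
T = 200
t = np.linspace(0, 4 * np.pi, T)
labels = np.sin(t)                                    # slowly varying "true" valence
X = np.column_stack([labels + rng.normal(0, 0.3, T),  # noisy informative channel
                     rng.normal(0, 1.0, T),           # uninformative channel
                     np.ones(T)])                     # bias term

# Stage 1: frame-wise baseline regression (least squares here, standing in
# for Support Vector Regression), ignoring temporal structure entirely.
w, *_ = np.linalg.lstsq(X, labels, rcond=None)
raw = X @ w                                           # noisy per-frame estimates

# Stage 2: a 1-D Kalman filter models the dependency between consecutive
# estimates, suppressing frame-to-frame variability while tracking the
# slowly changing emotional dynamics.
def kalman_smooth(z, q=1e-3, r=0.05):
    x, p = z[0], 1.0
    out = np.empty_like(z)
    for i, zi in enumerate(z):
        p = p + q                  # predict: process noise grows uncertainty
        k = p / (p + r)            # Kalman gain
        x = x + k * (zi - x)       # update with the stage-1 estimate
        p = (1.0 - k) * p
        out[i] = x
    return out

smoothed = kalman_smooth(raw)
# The second stage removes most of the frame-to-frame jitter.
print(np.abs(np.diff(smoothed)).mean() < np.abs(np.diff(raw)).mean())
```

The same structure carries over if stage one is an SVR and stage two an LSTM or TDNN; only the two fitted components change.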
APA, Harvard, Vancouver, ISO, and other styles
25

Hamieh, Salam. "Utilisation des méthodes de détection d'anomalies pour l'informatique affective." Electronic Thesis or Diss., Université Grenoble Alpes, 2024. http://www.theses.fr/2024GRALT017.

Full text
Abstract:
Les récentes avancées technologiques ont ouvert la voie à l'automatisation dans divers secteurs. Cela a conduit à un intérêt croissant pour le développement de modèles d'apprentissage automatique pour la reconnaissance et l'interprétation des émotions. Néanmoins, l'évaluation informatisée efficace des états affectifs et mentaux est confrontée à plusieurs défis importants, notamment la difficulté d'obtenir des données suffisantes, la complexité de l'étiquetage et la complexité de la tâche. Une solution à ces défis réside dans le domaine de la détection d'anomalies, qui a démontré son importance dans de nombreux domaines. Cette thèse est consacrée à la résolution des défis multiples dans le domaine de l'informatique affective en tirant parti de la puissance des méthodes de détection d'anomalies. L'un des principaux défis abordés est la rareté des données, un problème omniprésent lorsque l'on s'efforce de construire des modèles d'apprentissage automatique capables d'identifier avec précision des états mentaux rares. Nous étudions les méthodes de détection des anomalies dans deux applications critiques : La détection des distractions visuelles et la prédiction des rechutes psychotiques. Ces scénarios représentent des états exigeants et parfois périlleux pour la collecte de données dans des contextes réels. L'étude comprend une exploration complète des modèles traditionnels et des modèles basés sur l'apprentissage profond, démontrant le succès de ces méthodes pour surmonter les défis posés par les ensembles de données déséquilibrés. Cette réussite laisse entrevoir la possibilité d'applications plus larges à l'avenir, qui nous aideront à mieux comprendre et traiter les états mentaux et affectifs rares et difficiles à collecter dans divers domaines où il n'est pas possible d'obtenir des données suffisantes. 
En outre, cette recherche aborde le défi de l'inter-variabilité entre les individus dans le domaine des états affectifs, en particulier dans le contexte des patients souffrant de rechute psychotique. L'étude fournit une analyse comparative, explorant les forces et les limites des modèles globaux et personnalisés. Cependant, en utilisant la détection des anomalies, il devient possible d'utiliser les données d'un individu pour modéliser ses habitudes de santé et détecter les anomalies lorsque ces habitudes s'écartent de la norme. Les résultats soulignent l'importance de la personnalisation comme moyen d'améliorer la précision des modèles, en particulier dans les scénarios caractérisés par une inter-variabilité importante entre les sujets. En outre, la complexité des ensembles de données non équilibrés est un autre thème de cette thèse. Elle explore les méthodes de sélection des caractéristiques adaptées à ces caractéristiques spécifiques des ensembles de données. En s'appuyant sur des techniques de pointe, y compris les autoencodeurs, la recherche propose de nouvelles stratégies pour relever les défis de la sélection des caractéristiques posés par les ensembles de données déséquilibrés dans des applications telles que la détection de la distraction visuelle et la prédiction de la rechute psychotique.Enfin, l'étude présente une nouvelle solution pour la fusion d'informations provenant de sources multiples, améliorant la précision prédictive dans l'informatique affective. Cette nouvelle approche incorpore un indicateur de difficulté innovant dérivé de l'erreur de reconstruction d'un autoencodeur. Le résultat est le développement de systèmes multimodaux de reconnaissance continue des émotions qui présentent des performances supérieures.Dans cette thèse, nous avons étudié diverses applications des méthodes de détection d'anomalies dans le domaine de l'informatique affective. 
Bien qu'il s'agisse d'étapes initiales démontrant le potentiel de nos approches proposées, elles jettent également les bases d'une exploration plus poussée de différentes applications et de leurs variations
Recent technological advancements have paved the way for automation in various sectors, from education to autonomous driving, collaborative robots, and customer service. This has led to an increasing interest in the development of machine learning models for emotion recognition and interpretation. Nonetheless, the efficient computer-based assessment of affective and mental states faces several significant challenges, which include the difficulty of obtaining sufficient data, the intricacy of labeling, and the complexity of the task. One promising solution to these challenges lies in the field of anomaly detection, which has demonstrated its significance in numerous domains. This thesis is dedicated to addressing the multifaceted challenges in the field of affective computing by leveraging the power of anomaly detection methods. One of the key challenges addressed is data scarcity, a pervasive issue when striving to construct machine learning models capable of accurately identifying rare mental states. We study anomaly detection methods, utilizing unsupervised approaches in two critical applications: Visual Distraction Detection and Psychotic Relapse Prediction. These scenarios represent demanding and sometimes perilous states for data collection in real-world contexts. The study encompasses a comprehensive exploration of traditional and deep learning-based models, such as autoencoders, demonstrating the success of these methods in overcoming the challenges posed by unbalanced datasets. This success suggests the potential for wider applications in the future, which will help us better understand and deal with rare and hard-to-collect mental and affective states across various areas where obtaining sufficient data is not possible. Furthermore, this research addresses the challenge of inter-variability among individuals in the domain of affective states, particularly in the context of patients with psychotic relapse.
The study provides a comparative analysis, exploring the strengths and limitations of both global and personalized models. Personalization is a solution to this challenge, although gathering sufficient personal data, especially for relapse situations, is challenging. However, by employing anomaly detection, it becomes feasible to use an individual's data to model their healthy patterns and detect anomalies when these patterns deviate from the norm. The findings underscore the significance of personalization as an avenue for enhancing the precision of models, especially in scenarios characterized by substantial inter-variability among subjects. Moreover, the complexity of unbalanced datasets is another focus of this thesis. It explores feature selection methods tailored to address these specific dataset characteristics. By leveraging state-of-the-art techniques, including autoencoders, the research advances novel strategies for addressing feature selection challenges posed by unbalanced datasets in applications such as Visual Distraction Detection and Psychotic Relapse Prediction. Finally, the study introduces a novel solution for information fusion from multiple sources, enhancing predictive accuracy in affective computing. This novel approach incorporates an innovative difficulty data indicator derived from an autoencoder's reconstruction error. The outcome is the development of multimodal continuous emotion recognition systems that exhibit superior performance. This approach is studied using the ULM TSST dataset for predicting arousal and valence among participants in stress-induced situations. In this thesis, we investigated various applications of anomaly detection methods in the affective computing domain. While these are initial steps showcasing the potential of our proposed approaches, they also lay the groundwork for further exploration in different applications and their variations.
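The personalized anomaly-detection idea described here (model an individual's healthy patterns, then flag deviations via reconstruction error) admits a minimal sketch. A linear autoencoder, which is equivalent to PCA, stands in for the deep autoencoders used in the thesis; the data, dimensions and threshold below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: one individual's "healthy" sensor readings lie near a
# low-dimensional subspace; relapse-like samples do not.
d, k = 10, 2
basis = rng.normal(size=(d, k))
healthy = rng.normal(size=(500, k)) @ basis.T + rng.normal(0, 0.1, (500, d))

# Train a linear autoencoder on healthy data only. PCA gives the optimal
# linear encoder/decoder pair for squared reconstruction error, so it serves
# as a stand-in for a trained deep autoencoder.
mean = healthy.mean(axis=0)
_, _, vt = np.linalg.svd(healthy - mean, full_matrices=False)
W = vt[:k]                                   # encoder rows; decoder is W.T

def reconstruction_error(x):
    z = (x - mean) @ W.T                     # encode to k dimensions
    xr = z @ W + mean                        # decode back to d dimensions
    return np.linalg.norm(x - xr, axis=-1)

# Threshold from the healthy distribution: flag the top 1% as anomalous.
tau = np.quantile(reconstruction_error(healthy), 0.99)

normal_test = rng.normal(size=(50, k)) @ basis.T + rng.normal(0, 0.1, (50, d))
relapse_like = rng.normal(size=(50, d))      # off-subspace samples

print((reconstruction_error(normal_test) > tau).mean())   # low false-alarm rate
print((reconstruction_error(relapse_like) > tau).mean())  # most are flagged
```

Because the model is fitted only on one individual's healthy data, no labeled relapse examples are needed, which is exactly what makes this framing attractive for rare states.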
APA, Harvard, Vancouver, ISO, and other styles
26

Reitberger, Wolfgang Heinrich. "Affective Dynamics in Responsive Media Spaces." Thesis, Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/4975.

Full text
Abstract:
In this thesis computer-mediated human interaction and human-computer interaction in responsive spaces are discussed. Can such spaces be designed to create an affective response from the players? What are the design heuristics for a space that allows for the establishment of affective dynamics? I research the user experience of players of existing spaces built by the Topological Media Lab. In addition to that I review other relevant experimental interfaces, e.g. works by Myron Krueger and my own earlier piece Riviera, in order to analyze their affective dynamics. Also, I review the different applications and programming paradigms involved in authoring such spaces (e.g. real-time systems like Max/MSP/Jitter and EyeCon) and how to apply them in compliance with the design heuristics.
APA, Harvard, Vancouver, ISO, and other styles
27

Saha, Deba Pratim. "A Study of Methods in Computational Psychophysiology for Incorporating Implicit Affective Feedback in Intelligent Environments." Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/84469.

Full text
Abstract:
Technological advancements in sensor miniaturization, processing power and faster networks have broadened the scope of our contemporary compute infrastructure to the extent that Context-Aware Intelligent Environments (CAIEs)--physical spaces with computing systems embedded in them--are increasingly commonplace. With the widespread adoption of intelligent personal agents proliferating as close to us as our living rooms, there is a need to rethink the human-computer interface to accommodate some of their inherent properties, such as multiple foci of interaction with a dynamic set of devices, and limitations, such as the lack of a continuous coherent medium of interaction. A CAIE provides context-aware services to aid in achieving users' goals by inferring their instantaneous context. However, often due to a lack of complete understanding of a user's context and goals, these services may be inappropriate or at times even pose a hindrance to achieving the user's goals. Determining service appropriateness is a critical step in implementing a reliable and robust CAIE. Explicitly querying the user to gather such feedback comes at the cost of the user's cognitive resources, in addition to defeating the purpose of designing a CAIE to provide automated services. The CAIE may, however, infer this appropriateness implicitly from the user, by observing and sensing various behavioral cues and affective reactions, thereby seamlessly gathering such user feedback. In this dissertation, we have studied the design space for incorporating the user's affective reactions to the intelligent services as a mode of implicit communication between the user and the CAIE. As a result, we have introduced a framework named CAfFEINE, an acronym for Context-aware Affective Feedback in Engineering Intelligent Naturalistic Environments.
The CAfFEINE framework encompasses models, methods and algorithms establishing the validity of the idea of using a physiological-signal-based affective feedback loop to convey service appropriateness in a CAIE. In doing so, we have identified methods of learning ground truth about an individual user's affective reactions, as well as introducing a novel algorithm for estimating a physiological-signal-based quality metric for our inferences. To evaluate the models and methods presented in the CAfFEINE framework, we designed a set of experiments in laboratory mockups and a virtual-reality setup, providing context-aware services to the users while collecting their physiological signals from wearable sensors. Our results provide empirical validation for our CAfFEINE framework, as well as point towards guidelines for future research extending this novel idea. Overall, this dissertation contributes by highlighting the symbiotic nature of the subfields of Affective Computing and Context-aware Computing, and by identifying models, proposing methods and designing algorithms that may help accentuate this relationship, making future intelligent environments more human-centric.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
28

Haglund, Sonja. "Färgens påverkan på mänsklig emotion vid gränssnittsdesign." Thesis, University of Skövde, School of Humanities and Informatics, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-856.

Full text
Abstract:

Today's technological society places high demands on people, not least when it comes to processing information. When designing systems, human-computer interaction (HCI) is now usually taken into account in order to achieve the highest possible usability. Affective computing, an evolved approach to HCI, advocates developing systems that can both perceive emotions and convey them to the user. The focus of this report is how a system can convey emotions through its colour scheme and thereby influence the user's emotional state. A quantitative study was carried out to find out how colours can be used in a system to convey emotional expressions to users. Furthermore, the results of the study were compared with earlier theories on how colour affects human emotions, in order to determine whether those theories are suitable to apply in interface design. The results indicated agreement with the earlier theories, but with only one statistically significant difference, between blue and yellow, regarding pleasantness.

APA, Harvard, Vancouver, ISO, and other styles
29

Jerčić, Petar. "Design and Evaluation of Affective Serious Games for Emotion Regulation Training." Doctoral thesis, Blekinge Tekniska Högskola, Institutionen för kreativa teknologier, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-10478.

Full text
Abstract:
Emotions are thought to be one of the key factors that critically influence human decision-making. Emotion regulation can help to mitigate emotion-related decision biases and eventually lead to better decision performance. Serious games have emerged as a new angle, introducing technological methods for learning emotion regulation, in which meaningful biofeedback information communicates the player's emotional state. Games are a series of interesting choices, and the design of those choices could support an educational platform for learning emotion regulation. Such a design could benefit digital serious games, as those choices could be informed through the player's physiology about emotional states in real time. This thesis explores design and evaluation methods for creating serious games in which emotion regulation can be learned and practiced. The design of a digital serious game using physiological measures of emotions was investigated and evaluated. Furthermore, the thesis investigates emotions and the effect of emotion regulation on decision performance in digital serious games. The scope of this thesis was limited to digital serious games for emotion regulation training using psychophysiological methods to communicate the player's affective information. Using psychophysiological methods in the design and evaluation of digital serious games, emotions and their underlying neural mechanisms have been explored. The effects of emotion regulation have been investigated, with decision performance measured and analyzed. The proposed metrics for designing and evaluating such affective serious games have been extensively evaluated. The research methods used in this thesis were based on both quantitative and qualitative aspects, with true experiments and evaluation research, respectively. The digital serious games approach to emotion regulation was investigated, in which the player's physiology of emotions informs the design of interactions where regulation of those emotions can be practiced.
The results suggested that two different emotion regulation strategies, suppression and cognitive reappraisal, are optimal for different decision task contexts. With careful design methods, valid serious games for training these different strategies could be produced. Moreover, using psychophysiological methods, the underlying neural mechanisms of emotion could be mapped. This could inform a digital serious game about the optimal level of arousal for a certain task, as evidence suggests that arousal is equally or more important than valence for decision-making. The results suggest that it is possible to design and develop digital serious game applications that provide a helpful learning environment where decision makers can practice emotion regulation and subsequently improve their decision-making. If we assume that physiological arousal is more important than physiological valence for learning purposes, the results show that the digital serious games designed in this thesis elicit high physiological arousal, making them suitable for use as an educational platform.
APA, Harvard, Vancouver, ISO, and other styles
30

Aranha, Renan Vinicius. "EasyAffecta: um framework baseado em Computação Afetiva para adaptação automática de jogos sérios para reabilitação motora." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/100/100131/tde-24072017-083504/.

Full text
Abstract:
A utilização de jogos sérios em muitas atividades, incluindo casos de saúde, como o processo de reabilitação motora, tem demonstrado resultados satisfatórios que encorajam o desenvolvimento de novas aplicações neste cenário. Jogos podem tornar tais atividades mais interessantes e divertidas para os pacientes, como também auxiliar as etapas do processo de reabilitação. Nestas aplicações, estratégias que visam a manutenção do nível de motivação do usuário durante a utilização são muito importantes. Assim, esta pesquisa investiga a adaptação de contexto em jogos sérios com a utilização de técnicas de Computação Afetiva. A proposta consiste em um framework que torna mais baixo ao programador o custo de implementação da adaptação afetiva em jogos e permite que o fisioterapeuta configure as adaptações que serão executadas no jogo conforme o perfil dos pacientes. Com o intuito de verificar a viabilidade da proposta, dois jogos para reabilitação motora e uma versão do framework foram implementados, permitindo a realização de experimentos com programadores, fisioterapeutas e pacientes. Os resultados obtidos permitem concluir que a abordagem proposta tende a proporcionar grande impacto social e tecnológico
The use of serious games in many activities, including health-related ones such as the motor rehabilitation process, has demonstrated satisfactory results that encourage the development of new applications in this scenario. Games can make such activities more interesting and more fun for patients, as well as help them to carry out the steps of the rehabilitation process. In these applications, strategies to maintain the user's motivation level during the game are very important. Thus, in this research, we investigated context adaptation in serious games using techniques from Affective Computing. The proposal consists of a framework that lowers the cost to programmers of implementing affective adaptation in games and allows physiotherapists to configure the adaptations that will be executed in the game according to the profile of the patients. In order to verify the feasibility of the proposal, two games for motor rehabilitation and a version of the framework were implemented, allowing experiments with programmers, physiotherapists, and patients. The results obtained allow us to conclude that the proposed approach tends to provide great social and technological impact.
APA, Harvard, Vancouver, ISO, and other styles
31

Pampouchidou, Anastasia. "Automatic detection of visual cues associated to depression." Thesis, Bourgogne Franche-Comté, 2018. http://www.theses.fr/2018UBFCK054/document.

Full text
Abstract:
La dépression est le trouble de l'humeur le plus répandu dans le monde avec des répercussions sur le bien-être personnel, familial et sociétal. La détection précoce et précise des signes liés à la dépression pourrait présenter de nombreux avantages pour les cliniciens et les personnes touchées. Le présent travail visait à développer et à tester cliniquement une méthodologie capable de détecter les signes visuels de la dépression afin d’aider les cliniciens dans leur décision.Plusieurs pipelines d’analyse ont été mis en œuvre, axés sur les algorithmes de représentation du mouvement, via des changements de textures ou des évolutions de points caractéristiques du visage, avec des algorithmes basés sur les motifs binaires locaux et leurs variantes incluant ainsi la dimension temporelle (Local Curvelet Binary Patterns-Three Orthogonal Planes (LCBP-TOP), Local Curvelet Binary Patterns- Pairwise Orthogonal Planes (LCBP-POP), Landmark Motion History Images (LMHI), and Gabor Motion History Image (GMHI)). Ces méthodes de représentation ont été combinées avec différents algorithmes d'extraction de caractéristiques basés sur l'apparence, à savoir les modèles binaires locaux (LBP), l'histogramme des gradients orientés (HOG), la quantification de phase locale (LPQ) et les caractéristiques visuelles obtenues après transfert de modèle issu des apprentissage profonds (VGG). Les méthodes proposées ont été testées sur deux ensembles de données de référence, AVEC et le Wizard of Oz (DAICWOZ), enregistrés à partir d'individus non diagnostiqués et annotés à l'aide d'instruments d'évaluation de la dépression. Un nouvel ensemble de données a également été développé pour inclure les patients présentant un diagnostic clinique de dépression (n = 20) ainsi que les volontaires sains (n = 45).Deux types différents d'évaluation de la dépression ont été testés sur les ensembles de données disponibles, catégorique (classification) et continue (régression). 
Le MHI avec VGG pour l'ensemble de données de référence AVEC'14 a surpassé l'état de l’art avec un F1-Score de 87,4% pour l'évaluation catégorielle binaire. Pour l'évaluation continue des symptômes de dépression « autodéclarés », LMHI combinée aux caractéristiques issues des HOG et à celles issues du modèle VGG ont conduit à des résultats comparatifs aux meilleures techniques de l’état de l’art sur le jeu de données AVEC'14 et sur notre ensemble de données, avec une erreur quadratique moyenne (RMSE) et une erreur absolue moyenne (MAE) de 10,59 / 7,46 et 10,15 / 8,48 respectivement. La meilleure performance de la méthodologie proposée a été obtenue dans la prédiction des symptômes d'anxiété auto-déclarés sur notre ensemble de données, avec une RMSE/MAE de 9,94 / 7,88. Les résultats sont discutés en relation avec les limitations cliniques et techniques et des améliorations potentielles pour des travaux futurs sont proposées.
Depression is the most prevalent mood disorder worldwide, having a significant impact on well-being and functionality, and important personal, family and societal effects. The early and accurate detection of signs related to depression could have many benefits for both clinicians and affected individuals. The present work aimed at developing and clinically testing a methodology able to detect visual signs of depression and support clinician decisions. Several analysis pipelines were implemented, focusing on motion representation algorithms, including Local Curvelet Binary Patterns-Three Orthogonal Planes (LCBP-TOP), Local Curvelet Binary Patterns-Pairwise Orthogonal Planes (LCBP-POP), Landmark Motion History Images (LMHI), and Gabor Motion History Image (GMHI). These motion representation methods were combined with different appearance-based feature extraction algorithms, namely Local Binary Patterns (LBP), Histogram of Oriented Gradients (HOG), Local Phase Quantization (LPQ), as well as Visual Geometry Group (VGG) features based on transfer learning from deep learning networks. The proposed methods were tested on two benchmark datasets, the AVEC and the Distress Analysis Interview Corpus - Wizard of Oz (DAICWOZ), which were recorded from non-diagnosed individuals and annotated based on self-report depression assessment instruments. A novel dataset was also developed to include patients with a clinical diagnosis of depression (n=20) as well as healthy volunteers (n=45). Two different types of depression assessment were tested on the available datasets, categorical (classification) and continuous (regression).
For continuous assessment of self-reported depression symptoms, MHI combined with HOG and VGG performed at state-of-the-art levels on both the AVEC’14 dataset and our dataset, with Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE) of 10.59/7.46 and 10.15/8.48, respectively. The best performance of the proposed methodology was achieved in predicting self-reported anxiety symptoms in our dataset, with RMSE/MAE of 9.94/7.88. Results are discussed in relation to clinical and technical limitations and potential improvements in future work.
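The Motion History Image representation underlying the MHI/LMHI/GMHI variants named in this abstract admits a compact sketch: each pixel stores how recently it moved, set to a maximum value tau when motion is detected and decaying otherwise. This is the standard MHI update rule; the frame size, tau and motion masks below are illustrative:

```python
import numpy as np

def update_mhi(mhi, motion_mask, tau=10):
    """One Motion History Image step: moving pixels are set to tau, static
    pixels decay by 1 towards 0, encoding how recently motion occurred."""
    return np.where(motion_mask, float(tau), np.maximum(mhi - 1.0, 0.0))

# Illustrative: a small frame where a region moves for the first 3 of 6
# frames, then stays still.
h, w, tau = 8, 8, 10
mhi = np.zeros((h, w))
for t in range(6):
    mask = np.zeros((h, w), dtype=bool)
    if t < 3:
        mask[2:5, 2:5] = True            # motion in the early frames only
    mhi = update_mhi(mhi, mask, tau)

print(mhi[3, 3])   # set to 10 at t=2, then decayed for 3 frames: 7.0
print(mhi[0, 0])   # never moved: 0.0
```

The resulting grey-level image summarises where and how recently motion occurred, which is what appearance descriptors such as HOG or LBP are then computed on.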
APA, Harvard, Vancouver, ISO, and other styles
32

Baveye, Yoann. "Automatic prediction of emotions induced by movies." Thesis, Ecully, Ecole centrale de Lyon, 2015. http://www.theses.fr/2015ECDL0035/document.

Full text
Abstract:
Jamais les films n’ont été aussi facilement accessibles aux spectateurs qui peuvent profiter de leur potentiel presque sans limite à susciter des émotions. Savoir à l’avance les émotions qu’un film est susceptible d’induire à ses spectateurs pourrait donc aider à améliorer la précision des systèmes de distribution de contenus, d’indexation ou même de synthèse des vidéos. Cependant, le transfert de cette expertise aux ordinateurs est une tâche complexe, en partie due à la nature subjective des émotions. Cette thèse est donc dédiée à la détection automatique des émotions induites par les films, basée sur les propriétés intrinsèques du signal audiovisuel. Pour s’atteler à cette tâche, une base de données de vidéos annotées selon les émotions induites aux spectateurs est nécessaire. Cependant, les bases de données existantes ne sont pas publiques à cause de problèmes de droit d’auteur ou sont de taille restreinte. Pour répondre à ce besoin spécifique, cette thèse présente le développement de la base de données LIRIS-ACCEDE. Cette base a trois avantages principaux: (1) elle utilise des films sous licence Creative Commons et peut donc être partagée sans enfreindre le droit d’auteur, (2) elle est composée de 9800 extraits vidéos de bonne qualité qui proviennent de 160 films et courts métrages, et (3) les 9800 extraits ont été classés selon les axes de “valence” et “arousal” induits grâce un protocole de comparaisons par paires mis en place sur un site de crowdsourcing. L’accord inter-annotateurs élevé reflète la cohérence des annotations malgré la forte différence culturelle parmi les annotateurs. Trois autres expériences sont également présentées dans cette thèse. Premièrement, des scores émotionnels ont été collectés pour un sous-ensemble de vidéos de la base LIRIS-ACCEDE dans le but de faire une validation croisée des classements obtenus via crowdsourcing. 
Les scores émotionnels ont aussi rendu possible l’apprentissage d’un processus gaussien par régression, modélisant le bruit lié aux annotations, afin de convertir tous les rangs liés aux vidéos de la base LIRIS-ACCEDE en scores émotionnels définis dans l’espace 2D valence-arousal. Deuxièmement, des annotations continues pour 30 films ont été collectées dans le but de créer des modèles algorithmiques temporellement fiables. Enfin, une dernière expérience a été réalisée dans le but de mesurer de façon continue des données physiologiques sur des participants regardant les 30 films utilisés lors de l’expérience précédente. La corrélation entre les annotations physiologiques et les scores continus renforce la validité des résultats de ces expériences. Equipée d’une base de données, cette thèse présente un modèle algorithmique afin d’estimer les émotions induites par les films. Le système utilise à son avantage les récentes avancées dans le domaine de l’apprentissage profond et prend en compte la relation entre des scènes consécutives. Le système est composé de deux réseaux de neurones convolutionnels ajustés. L’un est dédié à la modalité visuelle et utilise en entrée des versions recadrées des principales frames des segments vidéos, alors que l’autre est dédié à la modalité audio grâce à l’utilisation de spectrogrammes audio. Les activations de la dernière couche entièrement connectée de chaque réseau sont concaténées pour nourrir un réseau de neurones récurrent utilisant des neurones spécifiques appelés “Long-Short-Term- Memory” qui permettent l’apprentissage des dépendances temporelles entre des segments vidéo successifs. La performance obtenue par le modèle est comparée à celle d’un modèle basique similaire à l’état de l’art et montre des résultats très prometteurs mais qui reflètent la complexité de telles tâches. En effet, la prédiction automatique des émotions induites par les films est donc toujours une tâche très difficile qui est loin d’être complètement résolue
Never before have movies been as easily accessible to viewers, who can enjoy anywhere the almost unlimited potential of movies for inducing emotions. Thus, knowing in advance the emotions that a movie is likely to elicit in its viewers could help to improve the accuracy of content delivery, video indexing or even summarization. However, transferring this expertise to computers is a complex task, due in part to the subjective nature of emotions. The present thesis work is dedicated to the automatic prediction of emotions induced by movies based on the intrinsic properties of the audiovisual signal. To computationally deal with this problem, a video dataset annotated along the emotions induced in viewers is needed. However, existing datasets are not public due to copyright issues or are of a very limited size and content diversity. To answer this specific need, this thesis addresses the development of the LIRIS-ACCEDE dataset. The advantages of this dataset are threefold: (1) it is based on movies under Creative Commons licenses and thus can be shared without infringing copyright, (2) it is composed of 9,800 good-quality video excerpts with a large content diversity, extracted from 160 feature films and short films, and (3) the 9,800 excerpts have been ranked through a pair-wise video comparison protocol along the induced valence and arousal axes using crowdsourcing. The high inter-annotator agreement reflects that the annotations are fully consistent, despite the large diversity of raters’ cultural backgrounds. Three other experiments are also introduced in this thesis. First, affective ratings were collected for a subset of the LIRIS-ACCEDE dataset in order to cross-validate the crowdsourced annotations. The affective ratings also made possible the learning of Gaussian Processes for Regression, modeling the noisiness of measurements, to map the whole ranked LIRIS-ACCEDE dataset into the 2D valence-arousal affective space.
Second, continuous ratings for 30 movies were collected in order to develop temporally relevant computational models. Finally, a last experiment was performed in order to collect continuous physiological measurements for the 30 movies used in the second experiment. The correlation between both modalities strengthens the validity of the results of the experiments. Armed with a dataset, this thesis presents a computational model to infer the emotions induced by movies. The framework builds on the recent advances in deep learning and takes into account the relationship between consecutive scenes. It is composed of two fine-tuned Convolutional Neural Networks. One is dedicated to the visual modality and uses as input crops of key frames extracted from video segments, while the second one is dedicated to the audio modality through the use of audio spectrograms. The activations of the last fully connected layer of both networks are concatenated to feed a Long Short-Term Memory Recurrent Neural Network to learn the dependencies between the consecutive video segments. The performance obtained by the model is compared to the performance of a baseline similar to previous work and shows very promising results, but it also reflects the complexity of such tasks. Indeed, the automatic prediction of emotions induced by movies is still a very challenging task which is far from being solved.
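As a rough illustration of the fusion step described in this abstract (concatenating the activations of the visual and audio networks and feeding them to an LSTM), the sketch below implements a minimal NumPy LSTM over random stand-in features. All dimensions, weights, and the linear valence-arousal readout are invented for illustration and are not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step; gates are stacked as [input, forget, output, candidate]."""
    H = h.size
    z = W @ x + U @ h + b
    i, f, o = sigmoid(z[:H]), sigmoid(z[H:2*H]), sigmoid(z[2*H:3*H])
    g = np.tanh(z[3*H:])
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# Illustrative sizes: per-segment visual and audio CNN activations.
D_VIS, D_AUD, H = 8, 6, 5
n_segments = 10

W = rng.normal(scale=0.1, size=(4 * H, D_VIS + D_AUD))
U = rng.normal(scale=0.1, size=(4 * H, H))
b = np.zeros(4 * H)
W_out = rng.normal(scale=0.1, size=(2, H))   # (valence, arousal) readout

h, c = np.zeros(H), np.zeros(H)
predictions = []
for t in range(n_segments):
    visual = rng.normal(size=D_VIS)          # stand-in for visual CNN features
    audio = rng.normal(size=D_AUD)           # stand-in for audio CNN features
    x = np.concatenate([visual, audio])      # fuse the two modalities
    h, c = lstm_step(x, h, c, W, U, b)       # carry state across segments
    predictions.append(W_out @ h)            # per-segment (valence, arousal)

predictions = np.array(predictions)
print(predictions.shape)  # (10, 2)
```

A real system would use fine-tuned CNN activations and trained weights; the point here is only the data flow: concatenate per-segment modalities, carry recurrent state across consecutive segments, and read out one (valence, arousal) pair per segment.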
APA, Harvard, Vancouver, ISO, and other styles
33

Eladhari, Mirjam Palosaari. "Characterising action potential in virtual game worlds applied with the mind module." Thesis, Teesside University, 2010. http://hdl.handle.net/10149/129791.

Full text
Abstract:
Because games set in persistent virtual game worlds (VGWs) have massive numbers of players, these games need methods of characterisation for playable characters (PCs) that differ from the methods used in traditional narrative media. VGWs have a number of particularly interesting qualities. Firstly, VGWs are places where players interact with and create elements carrying narrative potential. Secondly, players add goals, motives and driving forces to the narrative potential of a VGW, which sometimes originate from the ordinary world. Thirdly, the protagonists of the world are real people, and when acting in the world their characterisation is not carried out by an author, but expressed by players characterising their PCs. How they can express themselves in ways that characterise them depends on what they can do, and how they can do it, and this characterising action potential (CAP) is defined by the game design of particular VGWs. In this thesis, two main questions are explored. Firstly, how can CAP be designed to support players in expressing consistent characters in VGWs? Secondly, how can VGWs support role-play in their rule-systems? By using iterative design, I explore the design space of CAP by building a semiautonomous agent structure, the Mind Module (MM), and apply it in five experimental prototypes where the design of CAP and other game features is derived from the MM. The term semiautonomy is used because the agent structure is designed to be used by a PC, and is thus partly controlled by the system and partly by the player. The MM models a PC’s personality as a collection of traits, maintains a dynamic emotional state as a function of interactions with objects in the environment, and summarises a PC’s current emotional state in terms of ‘mood’. The MM consists of a spreading-activation network of affect nodes that are interconnected by weighted relationships.
There are four types of affect node: personality trait nodes, emotion nodes, mood nodes, and sentiment nodes. The values of the nodes defining the personality traits of characters govern an individual PC’s state of mind through these weighted relationships, resulting in values characteristic of a PC’s personality. The sentiment nodes constitute emotionally valenced connections between entities. For example, a PC can ‘feel’ anger toward another PC. This thesis also describes a guided paper-prototype play-test of the VGW prototype World of Minds, in which the game mechanics build upon the MM’s model of personality and emotion. In a case study of AI-based game design, lessons learned from the test are presented. The participants in the test were able to form and communicate mental models of the MM and game mechanics, validating the design and giving valuable feedback for further development. Despite the constrained scenarios presented to test players, they discovered interesting, alternative strategies, indicating that for game design the ‘mental physics’ of the MM may open up new possibilities. The results of the play-test influenced the further development of the MM as it was used in the digital VGW prototype the Pataphysic Institute. In the Pataphysic Institute the CAP of PCs is largely governed by their mood. Depending on which mood PCs are in, they can cast different ‘spells’, which affect values such as mental energy, resistance and emotion in their targets. The mood also governs which ‘affective actions’ they can perform toward other PCs and what affective actions they are receptive to. By performing affective actions on each other, PCs can affect each other’s emotions, which, if they are strong, may result in sentiments toward each other. PCs’ personalities govern the individual fluctuations of mood and emotions, and define which types of spell PCs can cast.
Formalised social relationships such as friendships affect CAP, giving players more energy, resistance, and other benefits. PCs’ states of mind are reflected in the VGW in the form of physical manifestations that emerge if an emotion is very strong. These manifestations are entities which cast different spells on PCs in close proximity, depending on the emotions that the manifestations represent. PCs can also partake in authoring manifestations that become part of the world and the game-play in it. In the Pataphysic Institute potential story structures are governed by the relations the sentiment nodes constitute between entities.
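To make the spreading-activation idea concrete, here is a toy sketch in the spirit of the Mind Module: trait nodes modulate how strongly stimuli activate emotion nodes through weighted links, and a mood value summarises the current emotional state. The node names, weights, decay factor, and mood formula are all invented for illustration and do not reproduce the thesis's actual network.

```python
# Hypothetical affect nodes: two personality traits, three emotions.
traits = {"extroversion": 0.8, "neuroticism": 0.2}   # fixed per character
emotions = {"joy": 0.0, "anger": 0.0, "fear": 0.0}   # dynamic state

# Weighted trait-to-emotion relationships (illustrative values): a high
# weight means the trait amplifies that emotion's response to stimuli.
weights = {
    ("extroversion", "joy"): 0.9,
    ("extroversion", "anger"): 0.1,
    ("neuroticism", "fear"): 0.8,
    ("neuroticism", "anger"): 0.5,
}

def stimulate(emotion, intensity, decay=0.9):
    """Spread a stimulus into the network, modulated by trait weights."""
    gain = sum(w * traits[t] for (t, e), w in weights.items() if e == emotion)
    for e in emotions:                       # all emotions decay over time
        emotions[e] *= decay
    emotions[emotion] = min(1.0, emotions[emotion] + intensity * (1 + gain))

def mood():
    """Summarise the current emotional state as a single valence value."""
    return emotions["joy"] - 0.5 * (emotions["anger"] + emotions["fear"])

stimulate("joy", 0.3)    # e.g. the PC receives a friendly affective action
stimulate("fear", 0.2)   # ... then encounters a threatening manifestation
print(round(mood(), 3))
```

Because the extroverted trait amplifies joy more than the low-neuroticism trait amplifies fear, the resulting mood stays positive; in a system like the one described, that mood value would then gate which spells and affective actions the PC can perform.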
APA, Harvard, Vancouver, ISO, and other styles
34

Tasooji, Reza. "Determining Correlation Between Video Stimulus and Electrodermal Activity." Thesis, Virginia Tech, 2018. http://hdl.handle.net/10919/84509.

Full text
Abstract:
With the growth of wearable devices capable of measuring physiological signals, affective computing is becoming increasingly popular. One of these physiological signals is the electrodermal activity (EDA) signal. We explore how a video stimulus intended to arouse fear affects the EDA signal. To better understand the EDA signal, two different media, a scene from a movie and a scene from a video game, were selected to arouse fear. We conducted a user study with 20 participants, analyzed the differences between the media, and propose a method capable of detecting the highlights of the stimulus using only EDA signals. The study results show no significant differences between the two media, except that users are more engaged with the content of the video game. From the gathered data, we propose a similarity measurement method for clustering users based on how commonly they reacted to different highlights. The results show that for a 300-second stimulus, using a window size of 10 seconds, our approach for detecting highlights of the stimulus has a precision of one for both media, and an F1 score of 0.85 and 0.84 for the movie and the video game respectively.
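A sketch of the kind of windowed highlight detection and user-similarity measure this abstract describes might look like the following; the rise-based threshold rule and the Jaccard overlap are assumptions for illustration, not the thesis's exact method.

```python
import numpy as np

def highlight_windows(eda, fs=4, window_s=10, k=1.5):
    """Flag windows whose mean EDA rise exceeds k standard deviations.

    eda: 1-D skin-conductance samples; fs: sampling rate in Hz.
    Returns the set of window indices flagged as highlights.
    """
    win = fs * window_s
    n = len(eda) // win
    # Mean first difference per window: a crude proxy for a phasic rise.
    rises = np.array([np.diff(eda[i * win:(i + 1) * win]).mean()
                      for i in range(n)])
    thresh = rises.mean() + k * rises.std()
    return {i for i in range(n) if rises[i] > thresh}

def similarity(windows_a, windows_b):
    """Jaccard overlap of the highlight windows two users reacted to."""
    if not windows_a and not windows_b:
        return 1.0
    return len(windows_a & windows_b) / len(windows_a | windows_b)

# Synthetic 300-second stimulus sampled at 4 Hz, with a sharp rise
# (e.g. a scare) between t = 120 s and t = 130 s.
rng = np.random.default_rng(1)
t = np.arange(0, 300, 0.25)
eda = 2.0 + 0.01 * rng.standard_normal(t.size)
eda[(t >= 120) & (t < 130)] += np.linspace(0, 1, 40)

hl = highlight_windows(eda)
print(sorted(hl))                      # window 12 (t = 120-130 s) is flagged
print(similarity({3, 12}, {12, 20}))   # hypothetical users -> 1/3
```

Users whose flagged-window sets overlap strongly would fall into the same cluster, which mirrors the idea of grouping viewers by how commonly they reacted to the same highlights.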
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
35

Neto, Ary Fagundes Bressane. "Uma arquitetura para agentes inteligentes com personalidade e emoção." Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-28072010-121443/.

Full text
Abstract:
One of the main motivations of Artificial Intelligence in the context of digital entertainment systems is to create characters that are adaptable to new situations, hard to predict, fast learners, endowed with memory of past situations, and capable of a great diversity of behavior that is consistent and convincing over time. According to recent studies in the fields of Neuroscience and Psychology, the ability to solve problems is not tied solely to the facility for manipulating symbols, but also to the exploration of the characteristics of the environment and to social interaction, which can be expressed in the form of emotional phenomena. The results of these studies confirm the fundamental role that personality and emotions play in the activities of perception, planning, reasoning, creativity, learning, memory, and decision making. When modules for handling personality and emotions are incorporated into the theory of agents, it is possible to build Believable Agents. The main objective of this work is to develop and implement an intelligent agent architecture for building synthetic characters whose affective states influence their cognitive activities. To develop such an architecture, the BDI model (Beliefs, Desires and Intentions) was used as a basis, and an Affective Module was added to the modules existing in an implementation of this model. This Affective Module consists of three submodules (Personality, Mood and Emotion) and affects the agent's cognitive activities of perception, memory, and decision making. Two proofs of concept (experiments) were built: a simulation of the 'Iterated Prisoner's Dilemma' problem and a computerized version of the 'Memory Game'. 
The construction of these experiments made it possible to empirically evaluate the influence of personality, mood, and emotion on the cognitive activities of the agents, and consequently on their behavior. The results show that the new architecture allows the construction of agents with more coherent, adaptive, and cooperative behavior when compared to agents built with architectures whose cognitive activities do not consider the affective state, and it also produces behavior closer to that of a human agent than to optimal or random behavior. This evidence of success, presented in the results, shows that agents built with the architecture proposed in this dissertation represent an advance toward the development of Believable Agents.
One of the main motivations of Artificial Intelligence in the context of the digital entertainment systems is to create characters that are adaptable to new situations, unpredictable, fast learners, endowed with memory of past situations, and capable of a variety of consistent and convincing behavior over time. According to recent studies conducted in the fields of Neuroscience and Psychology, the ability to solve problems is not only related to the capacity to manipulate symbols, but also to the ability to explore the environment and to engage in social interaction, which can be expressed as emotional phenomena. The results of these studies confirm the key role the personality and emotions play in the activities of perception, attention, planning, reasoning, creativity, learning, memory and decision making. When modules for handling personality and emotion are incorporated into a theory of agents, it is possible to build Believable Agents. The main objective of this work is to develop and implement an intelligent agent architecture to build synthetic characters whose affective states influence their cognitive activities. To develop such an architecture, the BDI model (Beliefs, Desires and Intentions) was used as a basis, to which an Affective Module was added. The Affective Module consists of three sub-modules (Personality, Mood and Emotion), which influence the cognitive activities of perception, memory and decision making. Finally, two proofs of concept were built: a simulation of the 'Iterated Prisoner's Dilemma' problem and a computerized version of the 'Memory Game'. The construction of these experiments allowed us to evaluate empirically the influence of personality, mood and emotion on the cognitive activities of agents and consequently on their behavior.
The results show that using the proposed architecture one can build agents with more consistent, adaptive and cooperative behaviors when compared to agents built with architectures whose affective states do not influence their cognitive activities. It also produces behavior that is closer to that of a human user than to optimal or random behavior. This evidence of success, presented in the obtained results, shows that agents built with the proposed architecture represent an advance towards the development of Believable Agents.
APA, Harvard, Vancouver, ISO, and other styles
36

Pereira, Adriano. "AFFECTIVE-RECOMMENDER: UM SISTEMA DE RECOMENDAÇÃO SENSÍVEL AO ESTADO AFETIVO DO USUÁRIO." Universidade Federal de Santa Maria, 2012. http://repositorio.ufsm.br/handle/1/5406.

Full text
Abstract:
Pervasive computing systems aim to improve human-computer interaction by using variables of the user's situation that define context. The boom of the Internet makes the number of items available to choose from grow continually, imposing a cost on the decision-making process. One of the goals of Affective Computing is to identify the user's affective/emotional state during a computing interaction in order to respond to it automatically. Recommendation systems support decision making by selecting and suggesting items in scenarios with huge volumes of information, traditionally using data on users' preferences. This process can be enhanced by using context information (physical, environmental or social), giving rise to Context-Aware Recommendation Systems. Given the importance of emotions in our lives, which can be handled with Affective Computing, this work uses the affective context as a context variable in the recommendation process, proposing the Affective-Recommender: a recommendation system that uses the user's affective state to select and suggest items. The system's model has four components: (i) a detector, which identifies the affective state using the multidimensional Pleasure, Arousal and Dominance model and the Self-Assessment Manikin instrument, which asks the user to report how he or she feels; (ii) a recommender, which selects and suggests items using a collaborative-filtering-based approach in which a user's preference for an item is his or her affective reaction to it, i.e., the affective state detected after access; (iii) an application, which interacts with the user, shows the items of probable greatest interest defined by the recommender, and requests affect identification whenever necessary; and (iv) a database, which stores the available items and the users' preferences. As a use case, the Affective-Recommender is applied in an e-learning scenario, given the personalization obtained through recommendation and the importance of emotion in the learning process. The system was implemented on top of the Moodle LMS. To demonstrate its operation, a usage scenario was organized, simulating the recommendation process. 
To check the system's applicability and the students' opinions about reporting how they feel and receiving suggestions, it was applied in three undergraduate classes at UFSM; access data and the answers to a questionnaire were then analyzed. As results, it was observed that students were able to report how they feel, and that changes occurred in their affective states depending on the item accessed, although they did not perceive improvements from the recommendations, due to the small amount of data available for processing and the short time of application.
Pervasive Computing systems seek to improve human-computer interaction through the use of variables of the user's situation that define the context. The explosion of the Internet and of information and communication technologies keeps increasing the number of items available to choose from, imposing a cost on the user in the decision-making process. Among its goals, Affective Computing aims to identify the user's emotional/affective state during a computing interaction in order to respond to it automatically. Recommendation Systems, in turn, support decision making by selecting and suggesting items in situations with large volumes of information, traditionally using users' preferences for selection and suggestion. This process can be improved with the use of context (physical, environmental, social), giving rise to Context-Aware Recommendation Systems. In view of the importance of emotions in our lives, and the possibility of handling them with Affective Computing, this work uses the user's affective context as a situation variable during the recommendation process, proposing the Affective-Recommender, a recommendation system that makes use of the user's affective state to select and suggest items. 
The system was modeled from four components: (i) a detector, which identifies the affective state using the multidimensional Pleasure, Arousal and Dominance model and the Self-Assessment Manikin instrument, asking the user to report how he or she feels; (ii) a recommender, which chooses and suggests items using a collaborative-filtering-based approach, in which a user's preference for an item is seen as his or her reaction, i.e., the affective state detected after contact with the item; (iii) an application, which interacts with the user, displays the items of probable greatest interest defined by the recommender, and asks for the affective state to be identified whenever necessary; and (iv) a database, which stores the items available for suggestion and each user's preferences. As a use case and proof of concept, the Affective-Recommender is employed in an e-learning scenario, given the importance of personalization, obtained through recommendation, and of emotions in the learning process. The system was implemented on top of the Moodle VLE. To demonstrate its operation, a usage scenario was structured, simulating the recommendation process. To verify the system's real applicability, it was used in three undergraduate classes at UFSM; access data were analyzed and a questionnaire was applied to identify the students' impressions of reporting how they feel and receiving recommendations. As results, it was observed that the students were able to report their affective states, and that these states changed based on the item accessed, although the students did not perceive improvements from the recommendations, owing to the small amount of data available for processing and the short application time.
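The recommender component described above, with a user's preference for an item taken as the affective reaction detected after access, can be sketched as a small collaborative filter. The PAD values, user names, and the choice of Pleasure as the preference score are illustrative assumptions, not the system's actual data or formula.

```python
import numpy as np

# Hypothetical data: each user's affective reaction to accessed items,
# recorded as (Pleasure, Arousal, Dominance) in [-1, 1] after access.
reactions = {
    "ana":   {"item1": ( 0.8, 0.4,  0.2), "item2": (-0.3, 0.1, 0.0)},
    "bruno": {"item1": ( 0.7, 0.5,  0.1), "item3": ( 0.6, 0.2, 0.4)},
    "carla": {"item1": (-0.5, 0.3, -0.2), "item3": (-0.4, 0.6, 0.1)},
}

def pleasure(user, item):
    return reactions[user][item][0]   # use Pleasure as the preference score

def user_similarity(u, v):
    """Cosine similarity over the Pleasure scores of co-accessed items."""
    shared = reactions[u].keys() & reactions[v].keys()
    if not shared:
        return 0.0
    a = np.array([pleasure(u, i) for i in shared])
    b = np.array([pleasure(v, i) for i in shared])
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def predict(user, item):
    """Similarity-weighted average of the neighbours' affective reactions."""
    neigh = [(user_similarity(user, v), pleasure(v, item))
             for v in reactions if v != user and item in reactions[v]]
    num = sum(s * p for s, p in neigh)
    den = sum(abs(s) for s, p in neigh)
    return num / den if den else 0.0

# Should ana be recommended item3?  bruno (similar reactions) liked it,
# carla (opposite reactions) did not.
print(round(predict("ana", "item3"), 3))  # -> 0.5
```

A positive predicted reaction would push item3 up ana's suggestion list; in the e-learning setting this is where the application component would surface the item and later ask the student to report how he or she feels.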
APA, Harvard, Vancouver, ISO, and other styles
37

Iepsen, Edécio Fernando. "Ensino de algoritmos : detecção do estado afetivo de frustração para apoio ao processo de aprendizagem." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2013. http://hdl.handle.net/10183/78020.

Full text
Abstract:
This thesis presents research aimed at detecting students who show signs of frustration in teaching and learning activities in the area of Algorithms, in order to assist them with proactive support actions. The motivation for this work comes from students' difficulty in learning the concepts and techniques for constructing algorithms, one of the main factors leading computing degree programs to high dropout rates. Seeking to reduce this dropout, this research highlights the importance of considering students' affective states, trying to motivate them to study and resolve their difficulties in understanding problem solving, using computational systems as support. To validate the research, a tool was built to: a) infer the student's affective state of frustration while solving Algorithms exercises; b) upon detecting signs associated with frustration, present resources to support the student's learning. The inference of frustration is based on the analysis of behavioral variables produced by the students' interactions with the tool. The support consists of displaying a tutorial with a step-by-step solution of the exercise in which the student shows difficulties, and of recommending a new exercise whose level of complexity is better aligned with the concepts covered up to that point in the course. Through these actions, the aim is to help turn the student's frustration into a learning opportunity. Case studies were conducted with Algorithms students of the Systems Analysis and Development Technology program at Faculdade de Tecnologia Senac Pelotas during 2011 and 2012. Data Mining techniques were used to identify the students' behavior patterns. 
The experimental results showed that evidence such as a high number of unsuccessful attempts to compile a program, a large number of errors in the same program, or the amount of time spent trying to solve an algorithm may be related to the student's state of frustration. In addition, in one of the experiments a pre- and post-test comparison was carried out, which demonstrated important advances in the learning of the students who took part in the research.
This thesis presents a research work on the detection of students who show signs of frustration in learning activities in the area of algorithms, in order to assist them with proactive support actions. Our motivation for this work comes from students' difficulty in learning the concepts and techniques for building algorithms, which constitutes one of the main factors behind the high dropout rates of computing courses. With the intent of contributing to the reduction of this dropout, this research highlights the importance of considering students' affective states, trying to motivate them to study and work out their difficulties with the assistance of computer systems. For research validation purposes, a tool was built to: a) infer the student's affective state of frustration while solving exercises of algorithms; b) upon detecting signs associated with frustration, provide resources to support student learning. The inference of frustration comes from the analysis of behavioral variables produced by the interactions of students with the tool. The support consists of displaying a tutorial with a step-by-step solution for the exercise in which the student shows difficulties, and the recommendation of a new exercise whose level of complexity is better aligned with the concepts covered up to that point in the course. With these actions, our intention is to turn the student's frustration into a learning opportunity. Case studies were conducted with students of Algorithms at the Faculty of Technology Senac Pelotas, in 2011 and 2012. Data mining techniques were used to identify patterns of student behavior. The experiment results showed that evidence such as a high number of unsuccessful attempts to compile a program, a large number of errors in a single program, or the amount of time spent trying to solve an algorithm might be related to the student's frustration state.
Additionally, a pre and post-test comparison showed significant progress in students' learning.
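The thesis mined its behavior patterns from real interaction data; as a purely illustrative sketch, the indicators it names (unsuccessful compile attempts, error counts, time on task) could drive a proactive intervention like the following, with invented thresholds standing in for the mined patterns.

```python
# Illustrative thresholds; the thesis derived its patterns with data
# mining, so treat these fixed cut-offs as placeholders.
THRESHOLDS = {
    "failed_compiles": 8,     # compile attempts without success
    "errors_in_program": 12,  # errors in the same program
    "minutes_on_task": 40,    # time spent on one exercise
}

def frustration_signals(session):
    """Return which indicators exceed their thresholds for one session."""
    return {k for k, limit in THRESHOLDS.items() if session.get(k, 0) > limit}

def intervene(session):
    """Trigger proactive support when at least two indicators fire."""
    signals = frustration_signals(session)
    if len(signals) >= 2:
        return ("show step-by-step tutorial", "recommend simpler exercise")
    return ()

session = {"failed_compiles": 11, "errors_in_program": 15, "minutes_on_task": 25}
print(intervene(session))
```

Here two of the three indicators fire, so both support actions described in the abstract (the tutorial and the better-aligned exercise) would be offered to the student.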
APA, Harvard, Vancouver, ISO, and other styles
38

Haines, Nathaniel. "Decoding facial expressions that produce emotion valence ratings with human-like accuracy." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1511257717736851.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Gnjatović, Milan. "Adaptive dialogue management in human-machine interaction." München Verl. Dr. Hut, 2009. http://d-nb.info/997723475/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Elkins, Aaron Chaim. "Vocalic Markers of Deception and Cognitive Dissonance for Automated Emotion Detection Systems." Diss., The University of Arizona, 2011. http://hdl.handle.net/10150/202930.

Full text
Abstract:
This dissertation investigates vocal behavior, measured using standard acoustic and commercial vocal analysis software, as it occurs naturally while lying, experiencing cognitive dissonance, or receiving a security interview conducted by an Embodied Conversational Agent (ECA). In study one, vocal analysis software used for credibility assessment was investigated experimentally. Using a repeated measures design, 96 participants lied and told the truth during a multiple-question interview. The vocal analysis software's built-in deception classifier performed at the chance level. When the vocal measurements were analyzed independently of the software's interface, the variables FMain (Stress), AVJ (Cognitive Effort), and SOS (Fear) significantly differentiated between truth and deception. Using these measurements, a logistic regression and machine learning algorithms predicted deception with accuracy up to 62.8%. Using standard acoustic measures, vocal pitch and voice quality were predicted by deception and stress. In study two, deceptive vocal and linguistic behaviors were investigated using a direct manipulation of arousal, affect, and cognitive difficulty by inducing cognitive dissonance. Participants (N=52) made verbal counter-attitudinal arguments out loud that were subjected to vocal and linguistic analysis. Participants experiencing cognitive dissonance spoke with higher vocal pitch, response latency, linguistic Quantity, and Certainty, and lower Specificity. Linguistic Specificity mediated the dissonance and attitude change. Commercial vocal analysis software revealed that participants induced into cognitive dissonance exhibited higher initial levels of Say or Stop (SOS), a measurement of fear. Study three investigated the use of the voice to predict trust. Participants (N=88) received a screening interview from an Embodied Conversational Agent (ECA) and reported their perceptions of the ECA.
A growth model was developed that predicted trust during the interaction using the voice, time, and demographics. In study four, border guard participants were randomly assigned to either the Bomb Maker (N = 16) or Control (N = 13) condition. Participants either did or did not assemble a realistic, but non-operational, improvised explosive device (IED) to smuggle past an ECA security interviewer. Participants in the Bomb Maker condition had 25.34% more variation in their vocal pitch than participants in the control condition. This research provides evidence that the voice is potentially a reliable and valid measurement of emotion and deception suitable for integration into future technologies such as automated security screenings and advanced human-computer interactions.
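The logistic-regression step reported in study one can be sketched on synthetic data; the three features below merely stand in for measurements such as the stress, cognitive-effort, and fear scores, and nothing here reproduces the dissertation's actual data or coefficients.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-ins for per-response vocal measurements
# (e.g. stress, cognitive effort, fear); label 1 = deceptive response.
n = 200
X = rng.standard_normal((n, 3))
true_w = np.array([0.9, 0.6, 0.4])
y = (X @ true_w + 0.5 * rng.standard_normal(n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, b = np.zeros(3), 0.0
lr = 0.1
for _ in range(500):                     # plain batch gradient descent
    p = sigmoid(X @ w + b)
    w -= lr * (X.T @ (p - y)) / n
    b -= lr * (p - y).mean()

accuracy = ((sigmoid(X @ w + b) > 0.5) == y).mean()
print(round(accuracy, 2))  # well above chance on this synthetic data
```

On real interview data the same model achieved only 62.8% accuracy, which is the substantive point of the study: the signal is present in the vocal variables but weak, unlike in a clean synthetic setting.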
APA, Harvard, Vancouver, ISO, and other styles
41

Karg, Michelle E. [Verfasser], Kolja [Akademischer Betreuer] Kühnlenz, and Gerhard [Akademischer Betreuer] Rigoll. "Pattern Recognition Algorithms for Gait Analysis with Application to Affective Computing / Michelle Karg. Gutachter: Gerhard Rigoll. Betreuer: Kolja Kühnlenz." München : Universitätsbibliothek der TU München, 2012. http://d-nb.info/1019589450/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Navarro, Sainz Adriana G. "An Exploratory Study: Personal Digital Technologies For Stress Care in Women." University of Cincinnati / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1543579225538012.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Delaborde, Agnès. "Modélisation du profil émotionnel de l’utilisateur dans les interactions parlées Humain-Machine." Thesis, Paris 11, 2013. http://www.theses.fr/2013PA112225/document.

Full text
Abstract:
This thesis studies and formalises emotional Human-Machine interactions. Beyond the one-off detection of paralinguistic information (emotions, disfluencies, ...), the aim is to provide the system with a dynamic interactional and emotional profile of the user, enriched during the interaction. This profile makes it possible to adapt the machine's response strategies to the speaker, and it can also be used to better manage long-term relationships. The profile is based on a multi-level representation of the processing of emotional and interactional cues extracted from the audio via LIMSI's emotion detection tools. Low-level cues (variations of F0, energy, etc.) provide information about the type of emotion expressed, the strength of the emotion, the degree of talkativeness, and so on. These mid-level elements are exploited in the system in order to determine, over the course of the interactions, the emotional and interactional profile of the user. This profile is composed of six dimensions: optimism, extroversion, emotional stability, self-confidence, affinity and dominance (based on the OCEAN personality model and interpersonal circumplex theories). The social behaviour of the system is adapted according to this profile, the state of the current task, and the current behaviour of the robot. The rules for creating and updating the emotional and interactional profile, as well as for the automatic selection of the robot's behaviour, were implemented in fuzzy logic using the decision engine developed by a partner in the ROMEO project. The system was implemented on the NAO robot. 
In order to study the different elements of the emotional interaction loop between the user and the system, we participated in the design of several systems: a pre-scripted Wizard-of-Oz system, a semi-automated system, and an autonomous emotional interaction system. These systems made it possible to collect data while controlling several emotion-elicitation parameters within an interaction; we present the results of these experiments, and protocols for evaluating Human-Robot Interaction through the use of systems with different degrees of autonomy.
Analysing and formalising the emotional aspect of the Human-Machine Interaction is the key to a successful relation. Beyond an isolated paralinguistic detection (emotions, disfluencies, ...), our aim consists in providing the system with a dynamic emotional and interactional profile of the user, which can evolve throughout the interaction. This profile allows for an adaptation of the machine's response strategy, and can deal with long-term relationships. A multi-level processing of the emotional and interactional cues extracted from speech (LIMSI emotion detection tools) leads to the constitution of the profile. Low-level cues (F0, energy, etc.) are then interpreted in terms of expressed emotion, strength, or talkativeness of the speaker. These mid-level cues are processed in the system so as to determine, over the interaction sessions, the emotional and interactional profile of the user. The profile is made up of six dimensions: optimism, extroversion, emotional stability, self-confidence, affinity and dominance (based on the OCEAN personality model and the interpersonal circumplex theories). The information derived from this profile could allow for a measurement of the engagement of the speaker. The social behaviour of the system is adapted according to the profile, and the current task state and robot behaviour. Fuzzy logic rules drive the constitution of the profile and the automatic selection of the robotic behaviour. These deterministic rules are implemented on a decision engine designed by a partner in the project ROMEO. We implemented the system on the humanoid robot NAO. The overriding issue dealt with in this thesis is the viable interpretation of the paralinguistic cues extracted from speech into a relevant emotional representation of the user. We deem it noteworthy to point out that multimodal cues could reinforce the profile's robustness.
So as to analyse the different parts of the emotional interaction loop between the user and the system, we collaborated in the design of several systems with different degrees of autonomy: a pre-scripted Wizard-of-Oz system, a semi-automated system, and a fully autonomous system. Using these systems allowed us to collect emotional data in robotic interaction contexts, by controlling several emotion-elicitation parameters. This thesis presents the results of these data collections, and offers an evaluation protocol for Human-Robot Interaction through systems with various degrees of autonomy.
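A minimal sketch of the fuzzy-logic flavor of the profile update, assuming invented membership functions and a single rule for the extroversion dimension (the real system runs many such rules on the ROMEO project's decision engine):

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def extroversion_degree(talkativeness, vocal_energy):
    """Fuzzy rule: IF talkativeness is high AND energy is high
    THEN extroversion is high (AND taken as min, a common t-norm)."""
    talk_high = tri(talkativeness, 0.4, 1.0, 1.6)
    energy_high = tri(vocal_energy, 0.4, 1.0, 1.6)
    return min(talk_high, energy_high)

def update_profile(profile, cue_degree, rate=0.2):
    """Blend new evidence into the running profile dimension."""
    return (1 - rate) * profile + rate * cue_degree

profile = 0.5                       # neutral prior for extroversion
# Mid-level cues from three successive utterances (illustrative values).
for talk, energy in [(0.9, 0.8), (0.95, 0.9), (0.7, 0.85)]:
    profile = update_profile(profile, extroversion_degree(talk, energy))
print(round(profile, 3))
```

Each utterance's cues fire the rule to some degree, and the running profile drifts toward the fired value; the six-dimension profile described in the abstract would maintain one such running value per dimension and feed them into the robot's behaviour selection.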
APA, Harvard, Vancouver, ISO, and other styles
44

Boukhris, Mehdi. "Modélisation et évaluation de la fidélité d'un clone virtuel." Thesis, Université Paris-Saclay (ComUE), 2015. http://www.theses.fr/2015SACLS177/document.

Full text
Abstract:
Face identification is essential in our social interactions: our behaviour changes once we have identified the person with whom we interact. Moreover, work in psychology and neuroscience has observed that the cognitive processing of a familiar face differs from that of an unknown face. At the same time, the latest 3D rendering techniques and advances in 3D scanning have enabled the creation of photo-realistic virtual faces modelling real, existing people. The current trend for modelling virtual humans is to rely on the acquisition of real data (from scans and motion-capture sessions). Consequently, research and applications on virtual humans have shown a growing interest in these virtual clones (agents with a familiar, or at least recognizable, appearance). Virtual clones are thus increasingly widespread in human-machine interfaces and in the audiovisual industry. Studying the perception of, and interaction with, virtual clones has therefore become necessary in order to approach the design and evaluation of this technology. Indeed, very few studies have addressed the evaluation of the fidelity of these virtual clones. The objective of this thesis is to explore this line of research by examining how the fidelity of a virtual face, the clone of a real person (whether known to us or not), is perceived. Our work addresses several research questions: What elements allow us to evaluate the resemblance of the virtual clone to its referent? Among the many rendering, animation and data-acquisition techniques offered by computer graphics, which combination ensures the highest degree of perceived fidelity?
Visual appearance, however, is only one of the components involved in recognizing familiar people. The other components include expressiveness, but also the processing of the knowledge we have about this person (for example, their particular way of appraising an emotional situation and expressing it through their face). Our contributions provide elements of an answer to these questions at several levels. We defined a conceptual framework identifying the main concepts relevant to the study of the fidelity of a virtual face. We also studied the visual aspect of fidelity through the exploration of different rendering techniques. In a further step, we studied the impact of familiarity on the judgment of fidelity. Finally, we proposed an individual computational model based on a cognitive approach to emotions that can guide the expressive animation of the virtual clone. This thesis opens perspectives for the design and improvement of virtual clones, and more generally of human-machine interfaces based on expressive agents.
Face identification plays a crucial role in our daily social interactions. Indeed, our behavior changes according to the identification of the person with whom we interact. Moreover, several studies in psychology and neuroscience have observed that our cognitive processing of familiar faces differs from that of unfamiliar faces. Creating a photorealistic, animated human-like face of a real person is now possible thanks to recent advances in computer graphics and 3D scanning systems. Recent rendering techniques are challenging our ability to distinguish between computer-generated faces and real human faces. Besides, the current trend in modelling virtual humans is to involve real data collected using scans and motion-capture systems. Research and applications in virtual humans have experienced a growing interest in so-called virtual clones (agents with a familiar, or at least recognizable, aspect). Virtual clones are therefore increasingly used in human-machine interfaces and in the audiovisual industry. Studies about the perception of, and interaction with, virtual clones are therefore required to better understand how we should design and evaluate this kind of technology. Indeed, very few studies have tried to evaluate virtual clones' fidelity with respect to the original human (hereafter called "the referent"). The main goal of this thesis is to explore this line of research. Our work raises several research questions: What are the features of the virtual clone that enable us to evaluate the resemblance between a virtual clone and its referent? Among the several rendering, animation and data-acquisition techniques offered by computer graphics, what is the best combination of techniques to ensure the highest level of perceived fidelity? However, visual appearance is not the only component involved in recognizing familiar people.
The other components include facial expressiveness but also the possible knowledge that we have about the referent (e.g. his particular way of assessing an emotional situation and expressing it through his face). Our contributions provide answers to these questions at several levels. We define a conceptual framework identifying the key concepts relevant to the study of the fidelity of a virtual face. We explore different rendering techniques. We describe an experimental study about the impact of familiarity on the judgment of fidelity. Finally, we propose a preliminary individual computational model based on a cognitive approach of emotions that could drive the animation of the virtual clone. This work opens avenues for the design and improvement of virtual clones, and more generally for human-machine interfaces based on expressive virtual agents.
45

Paleari, Marco. "Informatique Affective : Affichage, Reconnaissance, et Synthèse par Ordinateur des Émotions." Phd thesis, Télécom ParisTech, 2009. http://pastel.archives-ouvertes.fr/pastel-00005615.

Full text
Abstract:
Affective Computing concerns computation that relates to, arises from, or deliberately influences emotions, and finds its natural domain of application in high-level human-machine interaction. Affective computing can be divided into three main topics, namely: display, recognition, and synthesis of emotions. Building an intelligent machine capable of interacting naturally with its user necessarily involves these three phases. In this thesis we propose an architecture based mainly on Lisetti's "Multimodal Affective User Interface" model and on Scherer's psychological theory of emotions known as the "Component Process Theory". We therefore investigated techniques for the automatic, real-time recognition of emotions from facial expressions and vocal prosody. We also addressed the issues involved in generating expressions on different platforms, whether virtual or robotic agents. Finally, we proposed and developed an architecture for intelligent agents capable of simulating the human process of emotional appraisal as described by Scherer.
46

Weber, Marlene. "Automotive emotions : a human-centred approach towards the measurement and understanding of drivers' emotions and their triggers." Thesis, Brunel University, 2018. http://bura.brunel.ac.uk/handle/2438/16647.

Full text
Abstract:
The automotive industry is facing significant technological and sociological shifts, calling for an improved understanding of driver and passenger behaviours, emotions and needs, and a transformation of the traditional automotive design process. This research takes a human-centred approach to automotive research, investigating users' emotional states during automobile driving, with the goal of developing a framework for automotive emotion research and thus enabling the integration of technological advances into the driving environment. A literature review of human emotion and emotion in an automotive context was conducted, followed by three driving studies investigating emotion through Facial-Expression Analysis (FEA). An exploratory study investigated whether emotion elicitation can be applied in driving simulators, and whether FEA can detect the emotions triggered. The results gave confidence in the applicability of emotion elicitation in a lab-based environment to trigger emotional responses, and of FEA to detect them. An on-road driving study was conducted in a natural setting to investigate whether the natures and frequencies of emotion events could be automatically measured, and whether triggers could be assigned to them. Overall, 730 emotion events were detected during a total driving time of 440 minutes, and event triggers were assigned to 92% of the emotion events. A similar second on-road study was conducted in a partially controlled setting on a planned road circuit. In 840 minutes, 1947 emotion events were measured, and triggers were successfully assigned to 94% of those. The differences in the natures, frequencies and causes of emotions on different road types were investigated; comparison of emotion events for different roads demonstrated substantial variance in the natures, frequencies and triggers of emotions across road types. The results showed that emotions play a significant role during automobile driving.
The possibility of assigning triggers can be used to create a better understanding of causes of emotions in the automotive habitat. Both on-road studies were compared through statistical analysis to investigate influences of the different study settings. Certain conditions (e.g. driving setting, social interaction) showed significant influence on emotions during driving. This research establishes and validates a methodology for the study of emotions and their causes in the driving environment through which systems and factors causing positive and negative emotional effects can be identified. The methodology and results can be applied to design and research processes, allowing the identification of issues and opportunities in current automotive design to address challenges of future automotive design. Suggested future research includes the investigation of a wider variety of road types and situations, testing with different automobiles and the combination of multiple measurement techniques.
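As a quick sanity check on the figures above, the implied per-minute event rates and trigger counts can be recomputed; only numbers stated in the abstract are used, and the rounding convention is ours:

```python
# Emotion-event rates implied by the two on-road studies reported above:
# 730 events in 440 min (natural setting), 1947 events in 840 min (circuit).
natural_rate = 730 / 440    # about 1.66 emotion events per minute
circuit_rate = 1947 / 840   # about 2.32 emotion events per minute

# Trigger-assignment counts implied by the reported percentages.
natural_triggers = round(0.92 * 730)    # about 672 events with an assigned trigger
circuit_triggers = round(0.94 * 1947)   # about 1830 events with an assigned trigger
```

The partially controlled circuit thus produced roughly 40% more emotion events per minute than the natural setting.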
47

Yngström, Karl. "Hjälpmedel för att tydliggöra känslor hos personer med AST." Thesis, Malmö universitet, Fakulteten för teknik och samhälle (TS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20498.

Full text
Abstract:
Smartphones are very powerful communication tools and allow communication in new ways, and the use of technological aids to assist people with mental disabilities is on the rise. Autism spectrum condition is one such disability that manifests differently in different people, but a recurring trait is difficulty understanding emotions. Measuring emotions is not easily done, and at times the research field of emotion has been neglected because it was not considered sufficiently logical. This thesis uses Russell's model of emotion and core affect, which maps emotions along two crossed axes, activation and valence (positive – negative). The purpose of the study is to evaluate various methods for measuring and registering emotions in people with ASD in a simple and accessible way. This is done through existing models of emotion, using a smartphone as the tool, and is intended to help people with ASD, and the people around them, in daily life.
Smartphones are a powerful tool to facilitate communication in new ways, and research on using technology to assist people with various mental disabilities is a growing field. Autism Spectrum Disorder, or ASD, is one such disability, which manifests differently for different people, but one general theme is difficulty understanding emotion. Measuring emotion is not easily done, and for some time research into emotion has been overlooked in favor of more logical thought processes. This paper uses Russell's model of emotion and core affect, which maps emotion on two crossed axes, activation and valence (positive – negative). The purpose of this study is to evaluate various methods for measuring and registering emotion for people with ASD in a simple, cheap and accessible way. This is done based on existing models of emotion, using a smartphone as a tool, and should be helpful in the daily life of people with ASD and the people around them.
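Russell's core-affect model places an emotional state on the two crossed axes mentioned above. A minimal sketch of that mapping, using common illustrative quadrant labels rather than the thesis's own category set:

```python
# Minimal sketch of Russell's circumplex of core affect: a (valence,
# activation) reading is mapped to one of four quadrants. The labels
# are common textbook illustrations, not the thesis's categories.

def circumplex_quadrant(valence: float, activation: float) -> str:
    """valence and activation in [-1, 1]; returns a quadrant label."""
    if valence >= 0:
        return "excited/elated" if activation >= 0 else "calm/content"
    return "tense/distressed" if activation >= 0 else "sad/depressed"

print(circumplex_quadrant(0.6, 0.7))    # positive valence, high activation
print(circumplex_quadrant(-0.5, -0.2))  # negative valence, low activation
```

A self-report app built on this model only needs two sliders (one per axis), which is what makes the representation attractive for quick, low-effort registration.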
48

Esau, Natalia. "Emotionale Aspekte der Mensch-Roboter-Interaktion und ihre Realisierung in verhaltensbasierten Systemen /." Aachen : Shaker, 2009. http://d-nb.info/997696605/04.

Full text
49

BURSIC, SATHYA. "ON WIRING EMOTION TO WORDS: A BAYESIAN MODEL." Doctoral thesis, Università degli Studi di Milano, 2022. http://hdl.handle.net/2434/932589.

Full text
Abstract:
Language and emotion are deeply entangled. In this dissertation we present a theoretical model that addresses how language and emotions intertwine with one another. To this end, we draw on the several results achieved in emotion theory (at both the psychological and neurobiological levels) under the constructivist umbrella of the Conceptual Act Theory, and on an emerging theoretical framework for pragmatic inference, the Rational Speech Act framework. We connect these theories and spell out this connection in the language of probability, namely that of Bayesian probabilistic modelling. Our endeavour is addressed to those fields of computer science, such as artificial intelligence and machine learning, where, in spite of remarkable progress in the computational processing of language and affect, the study of their intersection is, in our view, at best in its infancy. We argue that any further step in this direction can only be afforded by reducing the gap between Affective Science and computational approaches. To pave the way, simulations of the proposed model are presented that account for well-known case studies in pragmatics. In brief, at a high-level abstract representation we consider two interacting agents-in-context, where each agent performs a conceptual act based on interoceptive and exteroceptive sensation in order to regulate their body budget. The agents communicate, performing communication acts that in turn regulate the agents' conceptual acts and vice versa; in this way they create, communicate and share categories, and even add new functions to the world. We implement this framework through two simulations of non-literal language use, namely hyperbole and irony, and a third dealing with politeness, a form of social reasoning. In addition, a fourth simulation concerns the assessment of the stochastic dynamics of the key component of the model, core affect.
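The Rational Speech Act framework that the abstract builds on can be illustrated with the standard textbook recursion of a literal listener, a pragmatic speaker, and a pragmatic listener. The two-utterance lexicon and uniform priors below are a toy illustration, far simpler than the dissertation's model:

```python
# Toy Rational Speech Act (RSA) recursion: literal listener L0, pragmatic
# speaker S1, pragmatic listener L1, in the language of Bayesian inference.
worlds = ["mild", "extreme"]
utterances = ["warm", "boiling"]
# Literal semantics: is utterance u literally true of world w?
meaning = {("warm", "mild"): 1.0, ("warm", "extreme"): 1.0,
           ("boiling", "mild"): 0.0, ("boiling", "extreme"): 1.0}
prior = {"mild": 0.5, "extreme": 0.5}

def normalize(d):
    z = sum(d.values())
    return {k: v / z for k, v in d.items()}

def L0(u):
    """Literal listener: P(w | u) proportional to [[u]](w) * P(w)."""
    return normalize({w: meaning[(u, w)] * prior[w] for w in worlds})

def S1(w, alpha=1.0):
    """Pragmatic speaker: P(u | w) proportional to L0(w | u) ** alpha."""
    return normalize({u: L0(u)[w] ** alpha for u in utterances})

def L1(u, alpha=1.0):
    """Pragmatic listener: P(w | u) proportional to S1(u | w) * P(w)."""
    return normalize({w: S1(w, alpha)[u] * prior[w] for w in worlds})

# Hearing "warm", the pragmatic listener strengthens toward the "mild"
# world: "boiling" would have been the better utterance for "extreme".
print(L1("warm"))
```

With these priors, `L1("warm")["mild"]` works out to 0.75, a simple implicature effect; the dissertation's simulations enrich this recursion with affective variables such as core affect.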
50

Poikolainen, Rosén Anton. "Words have power: Speech recognition in interactive jewelry : a case study with newcome LGBT+ immigrants." Thesis, Södertörns högskola, Medieteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:sh:diva-32992.

Full text
Abstract:
This paper addresses a design exploration of interactive jewelry conducted with newcome LGBT+ immigrants in Sweden, leading to a necklace named PoWo that is "powered" by the spoken word through a mobile application reacting to customizable keywords, which trigger LED lights in the necklace. Interactive jewelry is viewed in this paper as a medium with a simultaneous relation to wearer and spectator, thus affording use on the themes of symbolism, emotion, body and communication. These themes are demonstrated through specific use scenarios of the necklace relating to the participants of the design exploration, e.g. addressing consent, societal issues, meeting situations, and expressions of love and sexuality. The potential of speech-based interactive jewelry is investigated, e.g. finding that speech recognition in LED jewelry can act as an amplifier of spoken words, actions and meaning, and as a visible extension of the smartphone and the human body. In addition, the use qualities of visibility, ambiguity, continuity and fluency are discussed in relation to speech-based LED jewelry.
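The keyword-trigger mechanism described above can be sketched as a scanner over transcribed speech, mapping each configured keyword to an LED pattern. The keywords, pattern names, and mapping below are hypothetical examples, not PoWo's actual configuration:

```python
# Sketch of the keyword-trigger idea behind PoWo: a speech transcript is
# scanned for customizable keywords, each mapped to an LED pattern name.
# Keywords, pattern names, and the mapping are hypothetical examples.
import re

keyword_patterns = {"love": "pulse_pink", "yes": "steady_green"}

def detect_triggers(transcript: str) -> list:
    """Return LED patterns for every configured keyword in the transcript."""
    words = re.findall(r"[a-zA-Z']+", transcript.lower())
    return [keyword_patterns[w] for w in words if w in keyword_patterns]

print(detect_triggers("Yes, I love this necklace"))  # -> ['steady_green', 'pulse_pink']
```

In a real deployment the transcript would come from an on-device speech recognizer, and the returned pattern names would be sent to the necklace (e.g. over Bluetooth) to drive its LEDs.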