Dissertations / Theses on the topic 'Affective Computing'
Consult the top 50 dissertations / theses for your research on the topic 'Affective Computing.'
Galván, Suazo José Daniel, and Lucas Victor Manuel Segura. "Proyecto desarrollo de aplicaciones con affective computing." Bachelor's thesis, Universidad Peruana de Ciencias Aplicadas (UPC), 2017. http://hdl.handle.net/10757/622083.
Affective computing is an emerging field of research and development whose applications are of great interest across many areas of business today. This paper sets out the scope of the project: the development of affective computing demonstrations covering five recognition technologies: facial recognition, gait recognition, voice recognition, gesture recognition, and gaze control. Chapter 1 describes the project from a management perspective, stating the general objective, whose fulfilment is determined by the completion of specific objectives, each tied to success indicators; it closes with the project planning, detailing the scope and the management of time, human resources, communications, and risks. Chapter 2 presents the list of student outcomes and describes, point by point, how the project satisfied each outcome's criteria. Chapter 3 presents the theoretical framework, beginning with the definition of emotion detection, the principal component on which affective computing rests, followed by a definition of affective computing itself. Chapter 4 covers the state of the art, presenting predecessor projects and the current progress of solutions based on affective computing, and ends with conclusions. Chapter 5 explains the final product, documenting its description, user stories, interaction maps, and solution architecture. Finally, Chapter 6 documents the three proposed affective computing solutions.
Thompson, Nik. "Development of an open affective computing environment." PhD thesis, Murdoch University, 2012. https://researchrepository.murdoch.edu.au/id/eprint/13923/.
Becker-Asano, Christian. "WASABI: affect simulation for agents with believable interactivity." Heidelberg: Akademische Verlagsgesellschaft Aka, 2008. http://opac.nebis.ch/cgi-bin/showAbstract.pl?u20=9783898383196.
Reynolds, Carson Jonathan 1976. "Adversarial uses of affective computing and ethical implications." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/33881.
Full textPage 158 blank.
Includes bibliographical references (p. 141-145).
Much existing affective computing research focuses on systems designed to use information related to emotion to benefit users. Yet many technologies end up used in situations their designers did not anticipate and would not have intended. This thesis discusses several adversarial uses of affective computing: uses of systems with the goal of hindering some users. The approach is twofold: first, experimental observation of systems that collect affective signals and transmit them to an adversary; second, discussion of normative ethical judgments regarding adversarial uses of these same systems. The thesis examines three adversarial contexts: the Quiz Experiment, the Interview Experiment, and the Poker Experiment. In the Quiz Experiment, participants perform a tedious task in which they can increase their monetary reward by reporting that they solved more problems than they actually did. The Interview Experiment centers on a job interview in which some participants hide or distort information, interviewers are rewarded for hiring honest candidates, and interviewees are rewarded for being hired. In the Poker Experiment, subjects play a simple poker-like game against an adversary who has extra affective or game-state information.
These experiments extend existing work on the ethical implications of polygraphs by considering variables other than recognition rate (e.g., context or power relationships) and by using systems in which information is completely mediated by computers. In all three experiments it is hypothesized that participants using systems that sense and transmit affective information to an adversary will show degraded performance and significantly different ethical evaluations than those using comparable systems that do not. Analysis of the results shows a complex situation in which the context of using affective computing systems bears heavily on reports of ethical implications. The contributions of this thesis are these novel experiments, which solicit participant opinion about the ethical implications of actual affective computing systems, and dimensional metaethics, a procedure for anticipating ethical problems with affective computing systems.
Ph.D.
Bortz, Brennon Christopher. "Using Music and Emotion to Enable Effective Affective Computing." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/90888.
Doctor of Philosophy
The computing devices with which we interact daily continue to become ever smaller, more intelligent, and more pervasive. Not only are they becoming more intelligent, but some are developing awareness of a user's affective state. Affective computing (computing that in some way senses, expresses, or modifies affect) is still a field very much in its youth. While progress has been made, the field remains limited by the need for larger sets of diverse, naturalistic, and multimodal data. This dissertation contributes findings from a number of explorations of the relationships between strong reactions to music and the characteristics and self-reported affect of listeners. It demonstrates not only that such relationships exist, but takes steps toward automatically predicting whether a listener will exhibit such exceptional responses. Second, this work contributes a flexible strategy and functional system both for executing large-scale, distributed studies of psychophysiology and affect, and for synthesizing, managing, and disseminating the data collected through such efforts. Finally, and most importantly, this work presents the Emotion in Motion (EiM) database, a study of human affective and psychophysiological response to musical stimuli comprising over 23,000 participants and nearly 67,000 psychophysiological responses.
Radits, Markus. "The Affective PDF Reader." Thesis, Växjö University, School of Mathematics and Systems Engineering, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-7033.
The Affective PDF Reader is a PDF reader combined with affect recognition systems. The aim of the project is to explore ways of giving the reader of a PDF real-time visual feedback while reading, to influence the reading experience in a positive way. The visual feedback follows the analyzed emotional state of the person reading the text, captured and interpreted with a facial expression recognition system. Further enhancements would include analysis of voice as well as gaze tracking, so that the point of gaze can be used when rendering the visualizations. The idea of the Affective PDF Reader arose largely from admitting that the way we read text on computers, mostly with frozen and dozed-off faces, is an unsatisfactory state, a lonesome process, and poor communication. The work is also inspired by the significant progress in recognizing emotional states from video and audio signals and the new possibilities that arise from it. The prototype system provided visualizations of footprints in different shapes and colours, controlled by captured facial expressions, to enrich the textual content with affective information. The experience showed that visual feedback driven by facial expressions can bring another dimension to the reading experience if it is done in a frugal and non-intrusive way, and that users' involvement can be enhanced.
Anderson, Keith William John. "A real-time facial expression recognition system for affective computing." Thesis, Queen Mary, University of London, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.405823.
Villeda, Enrique Edgar León. "Towards affective pervasive computing : emotion detection in intelligent inhabited environments." Thesis, University of Essex, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.438154.
Yates, Heath. "Affective Intelligence in Built Environments." Diss., Kansas State University, 2018. http://hdl.handle.net/2097/38790.
Department of Computer Science
William H. Hsu
The contribution of this dissertation is the application of affective intelligence in built environments: the human-developed spaces where people live, work, and recreate daily. Built environments are known to influence individual affective responses, and their implications for human well-being and mental health call for new metrics to measure and detect how humans respond to them. Detecting arousal in built environments from biometric data and environmental characteristics via a machine-learning-centric approach provides a novel capability for measuring human responses to built environments. Work was also conducted on experimental design methodologies for multi-sensor fusion and affect detection in built environments. These contributions include new methodologies for applying supervised machine learning algorithms, such as logistic regression, random forests, and artificial neural networks, to the detection of arousal in built environments. Results show that a machine learning approach can not only detect arousal in built environments but also support the construction of novel explanatory models of the data.
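The supervised setup this abstract describes can be sketched in a few lines. This is an illustrative example only, not the dissertation's code: the feature names, synthetic data, and label rule are all hypothetical stand-ins for biometric and environmental measurements.

```python
# Hypothetical sketch: binary arousal detection from biometric +
# environmental features with the model families named in the abstract.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Hypothetical features: heart rate (bpm), electrodermal activity (uS),
# ambient noise (dB), local crowd density.
X = np.column_stack([
    rng.normal(75, 12, n),    # heart rate
    rng.normal(2.0, 0.8, n),  # EDA
    rng.normal(60, 15, n),    # ambient noise
    rng.normal(5, 3, n),      # crowd density
])
# Synthetic labels: arousal more likely when heart rate and EDA are elevated.
p = 1.0 / (1.0 + np.exp(-(0.08 * (X[:, 0] - 75) + 1.2 * (X[:, 1] - 2.0))))
y = (rng.random(n) < p).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
results = {}
for model in (LogisticRegression(max_iter=1000), RandomForestClassifier(random_state=0)):
    results[type(model).__name__] = model.fit(X_tr, y_tr).score(X_te, y_te)
print(results)
```

On real data, the interesting part is exactly what the dissertation studies: which environmental characteristics carry signal, and whether the learned model also explains the responses.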
Axelrod, Lesley Ann. "Emotional recognition in computing." Thesis, Brunel University, 2010. http://bura.brunel.ac.uk/handle/2438/5758.
Coots, Ian. "Deep Learning of Affective Content from Audio for Computing Movie Similarities." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-167976.
Berthelon, Franck. "Modélisation et détection des émotions à partir de données expressives et contextuelles." PhD thesis, Université Nice Sophia Antipolis, 2013. http://tel.archives-ouvertes.fr/tel-00917416.
Gamberini, Jacopo. "AFFECTIVE COMPUTING IN SMART EDUCATION: Stato dell'Arte e Sviluppo di un Prototipo." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020.
Moshkina, Lilia V. "An integrative framework of time-varying affective robotic behavior." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/39568.
Ketchum, Devin Kyle. "The Use of the CAfFEINE Framework in a Step-by-Step Assembly Guide." Thesis, Virginia Tech, 2020. http://hdl.handle.net/10919/96609.
Master of Science
The purpose of this thesis was to apply the CAfFEINE framework, proposed by Dr. Saha, in a real-world environment. The framework uses a user's physiological responses, such as heart rate, in a smart environment to give feedback to smart devices. For example, if Siri gave a user directions to someone's home and told the user to turn right when the user knew they needed to turn left, the user would have a physical reaction: their heart rate would increase. If the user were wearing a smartwatch, Siri could see the heart rate increase, recognize from past experience with that user that the information she gave was incorrect, and correct herself. My research focused on measuring user reactions to a smart service in a real-world situation, using a Tangram puzzle as a mock version of an industrial assembly task. Users followed on-screen instructions to assemble the Tangram puzzle while their reactions were recorded through a smartwatch and analyzed post-experiment. Based on the results of a Paced Stroop Test taken before the experiment, an algorithm predicted their stress levels for each service provided by the step-by-step instruction guide. However, the results did not turn out as expected, so the remainder of the research focused on why they did not support Dr. Saha's previous framework results.
Ayoub, Issa. "Multimodal Affective Computing Using Temporal Convolutional Neural Network and Deep Convolutional Neural Networks." Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/39337.
Romeo, Luca. "Applied Machine Learning for Health Informatics: Human Motion Analysis and Affective Computing Application." Doctoral thesis, Università Politecnica delle Marche, 2018. http://hdl.handle.net/11566/253031.
Monitoring quality of life and subjects' well-being is an open challenge in healthcare. Addressing it in the new era of Artificial Intelligence leads naturally to machine learning methods. The objectives and contributions of this thesis reflect research on two topics: (i) human motion analysis, the automatic monitoring and assessment of human movement during physical rehabilitation, and (ii) affective computing, the inference of a subject's affective state. On the first topic, the author presents an algorithm that extracts clinically relevant motion features from RGB-D skeleton-joint input and provides a score of the subject's performance. The approach is based on rules derived from clinicians' suggestions and on a machine learning algorithm (a Hidden Semi-Markov Model). Its reliability is tested on a dataset collected by the author, against a gold-standard algorithm and against clinical assessment. The results support the use of the proposed methodology for quantitatively assessing motor performance during physical rehabilitation. On the second topic, the author proposes a Multiple Instance Learning (MIL) framework for learning emotional responses in the presence of continuous and ambiguous labels, as is often the case with affective responses to external stimuli (e.g., multimedia interaction). The reliability of the MIL approach is investigated on a benchmark database and on a dataset, collected by the author, that is closer to real-world conditions. The results show that the methodology is consistent in predicting human affective responses.
Yacoubi, Alya. "Vers des agents conversationnels capables de réguler leurs émotions : un modèle informatique des tendances à l’action." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS378/document.
Conversational virtual agents with social behavior are often grounded in at least two disciplines: computer science and psychology. In most cases, psychological findings are converted into computational mechanisms to make agents look and behave believably. In this work, we aim to increase conversational agents' believability and make human-agent interaction more natural by modelling emotions. More precisely, we are interested in task-oriented conversational agents used as a customer-relationship channel to respond to users' requests. We propose an affective model for generating and controlling emotional responses during a task-oriented interaction. The model is based, on the one hand, on the psychological theory of Action Tendencies (AT) to generate emotional responses during the interaction; on the other hand, its emotional control mechanism is inspired by social emotion regulation in empirical psychology. Both mechanisms use the agent's goals, beliefs, and ideals. The model has been implemented in an agent architecture endowed with a natural language processing engine developed by the company DAVI. To confirm the relevance of our approach, we conducted several experimental studies. The first validated verbal expressions of action tendencies in a human-agent dialogue. In the second, we studied the impact of different emotion regulation strategies on how users perceive the agent; this study allowed us to design a social regulation algorithm based on theoretical and empirical findings. The third study evaluated emotional agents in real-time interactions. Our results show that the regulation process increases the credibility and perceived competence of agents and improves the interaction.
Our results highlight the need to consider two complementary emotional mechanisms: the generation and the regulation of emotional responses. They open perspectives on different ways of managing emotions and on their impact on how the agent is perceived.
Tsoukalas, Kyriakos. "On Affective States in Computational Cognitive Practice through Visual and Musical Modalities." Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/104069.
Doctor of Philosophy
This dissertation investigates the role of learners' affect during instructional activities of visual and musical computing. More specifically, learners' enjoyment, excitement, and motivation are measured before and after a computing activity offered in four distinct ways. The computing activities are based on a prototype instructional apparatus designed and fabricated for the practice of computational thinking. A study was performed using a virtual simulation accessible via a web browser. The study suggests that maintaining enjoyment during instructional activities is a more direct path to academic motivation than excitement.
Baltrušaitis, Tadas. "Automatic facial expression analysis." Thesis, University of Cambridge, 2014. https://www.repository.cam.ac.uk/handle/1810/245253.
Jaimes, Luis Gabriel. "On the Selection of Just-in-time Interventions." Scholar Commons, 2015. https://scholarcommons.usf.edu/etd/5506.
Feghoul, Kevin. "Deep learning for simulation in healthcare : Application to affective computing and surgical data science." Electronic Thesis or Diss., Université de Lille (2022-....), 2024. http://www.theses.fr/2024ULILS033.
In this thesis, we address various tasks within the fields of affective computing and surgical data science that have the potential to enhance medical simulation. Specifically, we focus on four key challenges: stress detection, emotion recognition, surgical skill assessment, and surgical gesture recognition. Simulation has become a crucial component of medical training, offering students the opportunity to gain experience and refine their skills in a safe, controlled environment. However, despite significant advancements, simulation-based training still faces important challenges that limit its full potential. Some of these challenges include ensuring realistic scenarios, addressing individual variations in learners' emotional responses, and, for certain types of simulations, such as surgical simulation, providing objective assessments. Integrating the monitoring of medical students' cognitive states, stress levels and emotional states, along with incorporating tools that provide objective and personalized feedback, especially for surgical simulations, could help address these limitations. In recent years, deep learning has revolutionized the way we solve complex problems across various disciplines, leading to significant advancements in affective computing and surgical data science. However, several domain-specific challenges remain. In affective computing, automatically recognizing stress and emotions is challenging due to difficulties in defining these states and the variability in their expression across individuals. Furthermore, the multimodal nature of stress and emotion expression introduces another layer of complexity, as effectively integrating diverse data sources remains a significant challenge.
In surgical data science, the variability in surgical techniques across practitioners, the dynamic nature of surgical environments, and the challenge of effectively integrating multiple modalities highlight ongoing challenges in surgical skill assessment and gesture recognition. The first part of this thesis introduces a novel Transformer-based multimodal framework for stress detection that leverages multiple fusion techniques. This framework integrates physiological signals from two sensors, with each sensor's data treated as a distinct modality. For emotion recognition, we propose a novel multimodal approach that employs a Graph Convolutional Network (GCN) to effectively fuse intermediate representations from multiple modalities, extracted using unimodal Transformer encoders. In the second part of this thesis, we introduce a new deep learning framework that combines a GCN with a Transformer encoder for surgical skill assessment, leveraging sequences of hand skeleton data. We evaluate our approach using two surgical simulation tasks that we have collected. Additionally, we propose a novel Transformer-based multimodal framework for surgical gesture recognition that incorporates an iterative multimodal refinement module to enhance the fusion of complementary information from different modalities. To address existing dataset limitations in surgical gesture recognition, we collected two new datasets specifically designed for this task, conducting unimodal and multimodal benchmarks on the first dataset and unimodal benchmarks on the second.
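The basic graph-convolution step underlying GCN-based skeleton models like the one described above can be sketched in numpy. This is a generic textbook GCN layer, not the thesis's architecture; the five-joint hand graph and feature sizes are toy assumptions.

```python
# One GCN propagation over a (toy) hand-skeleton graph: each joint's
# features are mixed with its neighbours' via a normalized adjacency,
# then linearly transformed.
import numpy as np

def gcn_layer(H, A, W):
    """relu( D^-1/2 (A + I) D^-1/2 @ H @ W )"""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Toy hand graph: 5 joints in a chain (wrist -> fingertip).
A = np.zeros((5, 5))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    A[i, j] = A[j, i] = 1.0

rng = np.random.default_rng(0)
H = rng.normal(size=(5, 3))   # per-joint features, e.g. (x, y, z) position
W = rng.normal(size=(3, 8))   # learnable weights
out = gcn_layer(H, A, W)
print(out.shape)              # one embedding per joint
```

In the framework described above, per-frame joint embeddings like these would then be fed as a sequence to a Transformer encoder.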
Vielzeuf, Valentin. "Apprentissage neuronal profond pour l'analyse de contenus multimodaux et temporels." Thesis, Normandie, 2019. http://www.theses.fr/2019NORMC229/document.
Our perception is by nature multimodal, i.e. it draws on many of our senses. To solve certain tasks, it is therefore relevant to use different modalities, such as sound or image. This thesis focuses on this notion in the context of deep learning, seeking to answer one question in particular: how should different modalities be merged within a deep neural network? We first study a concrete application: automatic emotion recognition in audio-visual content. This leads to various considerations on the modeling of emotions, and of facial expressions in particular; we thus propose an analysis of the facial expression representations learned by a deep neural network. We further observe that each multimodal problem seems to require its own fusion strategy. We therefore propose and validate two methods for automatically obtaining an efficient fusion architecture for a given multimodal problem: the first is based on a central fusion network and aims to preserve an easy interpretation of the adopted fusion strategy, while the second adapts neural architecture search to multimodal fusion, exploring a greater number of strategies and thus achieving better performance. Finally, we take a multimodal view of knowledge transfer, detailing a non-traditional method for transferring knowledge from several sources, i.e. from several pre-trained models. A more general neural representation is obtained from a single model, bringing together the knowledge contained in the pre-trained models and leading to state-of-the-art performance on a variety of facial analysis tasks.
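The fusion question posed above ("where do we merge?") can be made concrete with a toy example. This is not the thesis's architecture search; it only contrasts two standard options with random weights and arbitrary dimensions.

```python
# Toy contrast of two common multimodal fusion points, in plain numpy.
import numpy as np

rng = np.random.default_rng(0)
audio = rng.normal(size=16)   # audio-branch embedding (hypothetical size)
image = rng.normal(size=32)   # image-branch embedding (hypothetical size)

def dense(x, w, b):
    return np.maximum(w @ x + b, 0.0)   # relu layer

# Option 1: early/central fusion -- concatenate, then learn a joint layer.
w_joint = rng.normal(size=(8, 48))
central = dense(np.concatenate([audio, image]), w_joint, np.zeros(8))

# Option 2: late fusion -- per-modality heads, then average the scores.
w_a = rng.normal(size=(8, 16))
w_i = rng.normal(size=(8, 32))
late = 0.5 * (dense(audio, w_a, np.zeros(8)) + dense(image, w_i, np.zeros(8)))

print(central.shape, late.shape)
```

Central fusion lets the joint layer model cross-modal interactions; late fusion keeps the branches independent and easier to interpret, which is the trade-off the thesis's search methods navigate automatically.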
Abd, Gaus Yona Falinie. "Artificial intelligence system for continuous affect estimation from naturalistic human expressions." Thesis, Brunel University, 2018. http://bura.brunel.ac.uk/handle/2438/16348.
Hamieh, Salam. "Utilisation des méthodes de détection d'anomalies pour l'informatique affective." Electronic Thesis or Diss., Université Grenoble Alpes, 2024. http://www.theses.fr/2024GRALT017.
Recent technological advancements have paved the way for automation in various sectors, from education to autonomous driving, collaborative robots, and customer service. This has led to increasing interest in machine learning models for emotion recognition and interpretation. Nonetheless, efficient computer-based assessment of affective and mental states faces several significant challenges, including the difficulty of obtaining sufficient data, the intricacy of labeling, and the complexity of the task. One promising solution lies in anomaly detection, which has demonstrated its value in numerous domains. This thesis is dedicated to addressing the multifaceted challenges of affective computing by leveraging anomaly detection methods. One key challenge addressed is data scarcity, a pervasive issue when building machine learning models capable of accurately identifying rare mental states. We study unsupervised anomaly detection methods in two critical applications, visual distraction detection and psychotic relapse prediction, scenarios that represent demanding and sometimes perilous states for real-world data collection. The study comprises a comprehensive exploration of traditional and deep learning-based models, such as autoencoders, demonstrating their success in overcoming the challenges posed by unbalanced datasets. This success suggests wider future applications that will help us better understand rare, hard-to-collect mental and affective states in areas where sufficient data cannot be obtained. Furthermore, this research addresses the challenge of inter-individual variability in affective states, particularly in the context of patients with psychotic relapse.
The study provides a comparative analysis of the strengths and limitations of both global and personalized models. Personalization is one solution to this challenge, although gathering sufficient personal data, especially for relapse situations, is difficult. By employing anomaly detection, however, an individual's data can be used to model their healthy patterns and to flag deviations from that norm. The findings underscore personalization as an avenue for enhancing model precision, especially in scenarios with substantial inter-subject variability. The complexity of unbalanced datasets is another focus of this thesis, which explores feature selection methods tailored to such datasets. Leveraging state-of-the-art techniques, including autoencoders, the research advances novel strategies for feature selection in applications such as visual distraction detection and psychotic relapse prediction. Finally, the study introduces a novel solution for fusing information from multiple sources to enhance predictive accuracy in affective computing, incorporating an innovative data-difficulty indicator derived from an autoencoder's reconstruction error. The outcome is a set of multimodal continuous emotion recognition systems with superior performance, studied on the ULM TSST dataset for predicting arousal and valence among participants in stress-induced situations. In this thesis, we investigated various applications of anomaly detection methods in the affective computing domain. While these are initial steps showcasing the potential of the proposed approaches, they also lay the groundwork for further exploration in other applications and their variations.
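The core mechanism used throughout the abstract above, autoencoder reconstruction error as an anomaly score, can be sketched generically. This is a minimal illustration on synthetic data (using scikit-learn's MLPRegressor as a small autoencoder for brevity), not the thesis's models.

```python
# Anomaly detection via reconstruction error: train only on "normal"
# data, then flag inputs the model reconstructs poorly.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# "Healthy" training data: the model only ever sees normal patterns.
normal = rng.normal(0.0, 1.0, size=(400, 6))

# A small autoencoder: the network learns to reproduce its input
# through a narrow bottleneck.
ae = MLPRegressor(hidden_layer_sizes=(3,), max_iter=2000, random_state=0)
ae.fit(normal, normal)

def score(x):
    """Anomaly score = mean squared reconstruction error."""
    return np.mean((ae.predict(x) - x) ** 2, axis=1)

threshold = np.percentile(score(normal), 95)   # calibrated on normal data only
anomalous = rng.normal(4.0, 1.0, size=(20, 6)) # strongly deviating patterns
print("fraction flagged:", np.mean(score(anomalous) > threshold))
```

This is what makes the approach attractive for rare states like relapse: no labeled anomalous examples are needed at training time, only each individual's normal baseline.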
Reitberger, Wolfgang Heinrich. "Affective Dynamics in Responsive Media Spaces." Thesis, Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/4975.
Saha, Deba Pratim. "A Study of Methods in Computational Psychophysiology for Incorporating Implicit Affective Feedback in Intelligent Environments." Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/84469.
Ph. D.
Haglund, Sonja. "Färgens påverkan på mänsklig emotion vid gränssnittsdesign." Thesis, University of Skövde, School of Humanities and Informatics, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-856.
Today's technological society places high demands on people, not least in processing information. When designing systems, human-computer interaction (HCI) is now usually taken into account in order to achieve the highest possible usability. Affective computing, an evolved approach to HCI, argues for developing systems that can both perceive emotions and convey them to the user. The focus of this report is how a system can convey emotions through its colour scheme and thereby influence the user's emotional state. A quantitative study was conducted to find out how colours can be used in a system to convey emotional expressions to users. Furthermore, the study's results were compared with earlier theories of how colour affects human emotions, to determine whether those theories are suitable to apply in interface design. The results indicated agreement with the earlier theories, but with only one statistically significant difference, between blue and yellow, regarding pleasantness.
Jerčić, Petar. "Design and Evaluation of Affective Serious Games for Emotion Regulation Training." Doctoral thesis, Blekinge Tekniska Högskola, Institutionen för kreativa teknologier, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-10478.
Aranha, Renan Vinicius. "EasyAffecta: um framework baseado em Computação Afetiva para adaptação automática de jogos sérios para reabilitação motora." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/100/100131/tde-24072017-083504/.
The use of serious games in many activities, including health applications such as motor rehabilitation, has shown results that encourage the development of new applications in this scenario. Games can make these activities more engaging and enjoyable, and help patients carry out the steps of the rehabilitation process. In such applications, strategies for maintaining the user's motivation during the game are very important. In this research, we therefore investigated context adaptation in serious games using Affective Computing techniques. The proposal consists of a framework that lowers the cost, for programmers, of implementing affective adaptation in games, and that allows physiotherapists to configure the adaptations to be executed in the game according to the patient's profile. To verify the feasibility of the proposal, two games for motor rehabilitation and a version of the framework were implemented, enabling experiments with programmers, physiotherapists, and patients. The results allow us to conclude that the proposed approach tends to provide considerable social and technological impact.
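The configurable-adaptation idea described above can be sketched abstractly: a therapist maps detected affective states to game adjustments, and the game applies the rule for the current state. Everything here (state names, parameters, rules) is a hypothetical illustration, not the EasyAffecta API.

```python
# Hypothetical sketch of emotion-driven game adaptation rules.
from dataclasses import dataclass

@dataclass
class GameState:
    speed: float = 1.0
    obstacles: int = 5

# Rules a physiotherapist might configure for one patient profile.
RULES = {
    "frustrated": lambda g: GameState(speed=g.speed * 0.8, obstacles=g.obstacles - 1),
    "bored":      lambda g: GameState(speed=g.speed * 1.2, obstacles=g.obstacles + 2),
    "engaged":    lambda g: g,    # no change needed
}

def adapt(game, detected_emotion):
    """Apply the configured rule for the detected emotion, if any."""
    rule = RULES.get(detected_emotion)
    return rule(game) if rule else game

g = adapt(GameState(), "frustrated")
print(g.speed, g.obstacles)   # game eases off when frustration is detected
```

Keeping the rules as data rather than code is what lets a non-programmer (the physiotherapist) configure the adaptation per patient.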
Pampouchidou, Anastasia. "Automatic detection of visual cues associated to depression." Thesis, Bourgogne Franche-Comté, 2018. http://www.theses.fr/2018UBFCK054/document.
Full text
Depression is the most prevalent mood disorder worldwide, with a significant impact on well-being and functionality, and important personal, family and societal effects. The early and accurate detection of signs related to depression could have many benefits for both clinicians and affected individuals. The present work aimed at developing and clinically testing a methodology able to detect visual signs of depression and support clinician decisions. Several analysis pipelines were implemented, focusing on motion representation algorithms, including Local Curvelet Binary Patterns-Three Orthogonal Planes (LCBP-TOP), Local Curvelet Binary Patterns-Pairwise Orthogonal Planes (LCBP-POP), Landmark Motion History Images (LMHI), and Gabor Motion History Image (GMHI). These motion representation methods were combined with different appearance-based feature extraction algorithms, namely Local Binary Patterns (LBP), Histogram of Oriented Gradients (HOG), and Local Phase Quantization (LPQ), as well as Visual Geometry Group (VGG) features based on transfer learning from deep networks. The proposed methods were tested on two benchmark datasets, AVEC and the Distress Analysis Interview Corpus - Wizard of Oz (DAIC-WOZ), which were recorded from non-diagnosed individuals and annotated based on self-report depression assessment instruments. A novel dataset was also developed to include patients with a clinical diagnosis of depression (n=20) as well as healthy volunteers (n=45). Two different types of depression assessment were tested on the available datasets: categorical (classification) and continuous (regression). The MHI with VGG for the AVEC'14 benchmark dataset outperformed the state of the art with an 87.4% F1-score for binary categorical assessment.
For continuous assessment of self-reported depression symptoms, MHI combined with HOG and VGG performed at state-of-the-art levels on both the AVEC'14 dataset and our dataset, with Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE) of 10.59/7.46 and 10.15/8.48, respectively. The best performance of the proposed methodology was achieved in predicting self-reported anxiety symptoms in our dataset, with RMSE/MAE of 9.94/7.88. Results are discussed in relation to clinical and technical limitations and potential improvements in future work.
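Of the appearance descriptors named in this abstract, Local Binary Patterns are compact enough to illustrate. The following is a minimal sketch of the basic 8-neighbour LBP code and its normalised histogram descriptor; it is our own illustration, not the thesis's implementation, and the function names are ours:

```python
import numpy as np

def lbp_8(image):
    """Basic 8-neighbour LBP codes for the interior pixels: each
    neighbour >= centre contributes one bit of an 8-bit code."""
    c = image[1:-1, 1:-1]
    h, w = image.shape
    # Neighbours in clockwise order, each mapped to one bit position.
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        neigh = image[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= c).astype(np.uint8) << bit
    return codes

def lbp_histogram(image):
    """Normalised 256-bin histogram of LBP codes: a texture descriptor
    that can be fed to a classifier or regressor."""
    hist = np.bincount(lbp_8(image).ravel(), minlength=256)
    return hist / hist.sum()
```

In the TOP/POP variants the same coding is applied on planes through a video volume rather than a single image, which is how such static descriptors capture motion.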
Baveye, Yoann. "Automatic prediction of emotions induced by movies." Thesis, Ecully, Ecole centrale de Lyon, 2015. http://www.theses.fr/2015ECDL0035/document.
Full text
Never before have movies been as easily accessible to viewers, who can enjoy anywhere the almost unlimited potential of movies for inducing emotions. Thus, knowing in advance the emotions that a movie is likely to elicit in its viewers could help improve the accuracy of content delivery, video indexing or even summarization. However, transferring this expertise to computers is a complex task, due in part to the subjective nature of emotions. The present thesis work is dedicated to the automatic prediction of emotions induced by movies, based on the intrinsic properties of the audiovisual signal. To deal with this problem computationally, a video dataset annotated with the emotions induced in viewers is needed. However, existing datasets are not public due to copyright issues, or are of very limited size and content diversity. To answer this specific need, this thesis addresses the development of the LIRIS-ACCEDE dataset. The advantages of this dataset are threefold: (1) it is based on movies under Creative Commons licenses and thus can be shared without infringing copyright; (2) it is composed of 9,800 good-quality video excerpts with large content diversity, extracted from 160 feature films and short films; and (3) the 9,800 excerpts have been ranked through a pairwise video comparison protocol along the induced valence and arousal axes using crowdsourcing. The high inter-annotator agreement shows that the annotations are fully consistent, despite the large diversity of raters' cultural backgrounds. Three other experiments are also introduced in this thesis. First, affective ratings were collected for a subset of the LIRIS-ACCEDE dataset in order to cross-validate the crowdsourced annotations. The affective ratings also made it possible to learn Gaussian Processes for Regression, modeling the noisiness of measurements, to map the whole ranked LIRIS-ACCEDE dataset into the 2D valence-arousal affective space.
Second, continuous ratings for 30 movies were collected in order to develop temporally relevant computational models. Finally, a last experiment was performed to collect continuous physiological measurements for the 30 movies used in the second experiment. The correlation between both modalities strengthens the validity of the experiments' results. Armed with a dataset, this thesis presents a computational model to infer the emotions induced by movies. The framework builds on recent advances in deep learning and takes into account the relationship between consecutive scenes. It is composed of two fine-tuned Convolutional Neural Networks: one dedicated to the visual modality, using as input crops of key frames extracted from video segments, and a second dedicated to the audio modality through the use of audio spectrograms. The activations of the last fully connected layer of both networks are concatenated to feed a Long Short-Term Memory Recurrent Neural Network that learns the dependencies between consecutive video segments. The performance obtained by the model is compared to that of a baseline similar to previous work and shows very promising results, but reflects the complexity of such tasks. Indeed, the automatic prediction of emotions induced by movies is still a very challenging task which is far from being solved.
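The two-stream fusion described above can be sketched in miniature. Below is an illustrative NumPy toy (our own sketch, not the thesis's code): two per-segment embedding streams are concatenated, as the CNN activations are, and a plain tanh recurrence with random untrained weights stands in for the LSTM that models dependencies between consecutive segments:

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse(visual, audio):
    """Concatenate per-segment visual and audio embeddings along the
    feature axis, mimicking the fusion of the two CNNs' activations."""
    return np.concatenate([visual, audio], axis=1)

def recurrent_readout(seq, hidden=8):
    """Plain tanh recurrence over segments (stand-in for the LSTM),
    followed by a linear head predicting (valence, arousal)."""
    T, d = seq.shape
    Wx = 0.1 * rng.standard_normal((d, hidden))
    Wh = 0.1 * rng.standard_normal((hidden, hidden))
    Wo = 0.1 * rng.standard_normal((hidden, 2))
    h = np.zeros(hidden)
    for t in range(T):  # each step depends on the previous segment's state
        h = np.tanh(seq[t] @ Wx + h @ Wh)
    return h @ Wo

visual = rng.standard_normal((5, 16))  # 5 segments, 16-d "visual CNN" features
audio = rng.standard_normal((5, 12))   # 12-d "audio CNN" features
fused = fuse(visual, audio)            # shape (5, 28)
prediction = recurrent_readout(fused)  # 2-d: (valence, arousal)
```

Dimensions and weights here are arbitrary; the point is only the concatenate-then-recur structure of the pipeline.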
Eladhari, Mirjam Palosaari. "Characterising action potential in virtual game worlds applied with the mind module." Thesis, Teesside University, 2010. http://hdl.handle.net/10149/129791.
Full text
Tasooji, Reza. "Determining Correlation Between Video Stimulus and Electrodermal Activity." Thesis, Virginia Tech, 2018. http://hdl.handle.net/10919/84509.
Full text
Master of Science
Neto, Ary Fagundes Bressane. "Uma arquitetura para agentes inteligentes com personalidade e emoção." Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-28072010-121443/.
Full text
One of the main motivations of Artificial Intelligence in the context of digital entertainment systems is to create characters that are adaptable to new situations, unpredictable, fast learners, endowed with memory of past situations, and capable of a variety of consistent and convincing behavior over time. According to recent studies conducted in the fields of Neuroscience and Psychology, the ability to solve problems is not only related to the capacity to manipulate symbols, but also to the ability to explore the environment and to engage in social interaction, which can be expressed as emotional phenomena. The results of these studies confirm the key role that personality and emotions play in the activities of perception, attention, planning, reasoning, creativity, learning, memory and decision making. When modules for handling personality and emotion are incorporated into a theory of agents, it is possible to build Believable Agents. The main objective of this work is to develop and implement an intelligent agent architecture to build synthetic characters whose affective states influence their cognitive activities. To develop such an architecture, the BDI model (Beliefs, Desires and Intentions) was used as a basis, to which an Affective Module was added. The Affective Module consists of three sub-modules (Personality, Mood and Emotion), which influence the cognitive activities of perception, memory and decision making. Finally, two proofs of concept were built: a simulation of the "Iterated Prisoner's Dilemma" and a computerized version of the "Memory Game". The construction of these experiments allowed us to evaluate empirically the influence of personality, mood and emotion on the agents' cognitive activities and, consequently, on their behavior.
The results show that, using the proposed architecture, one can build agents with more consistent, adaptive and cooperative behaviors than agents built with architectures whose affective states do not influence their cognitive activities. The architecture also produces behavior closer to that of a human user than optimal or random behavior does. This evidence of success in the obtained results shows that agents built with the proposed architecture represent an advance towards the development of Believable Agents.
Pereira, Adriano. "AFFECTIVE-RECOMMENDER: UM SISTEMA DE RECOMENDAÇÃO SENSÍVEL AO ESTADO AFETIVO DO USUÁRIO." Universidade Federal de Santa Maria, 2012. http://repositorio.ufsm.br/handle/1/5406.
Full text
Pervasive computing systems seek to improve human-computer interaction through the use of variables of the user's situation that define the context. The explosion of the Internet and of information and communication technologies keeps increasing the number of items available for choice, imposing a cost on the user in the decision-making process. Among its goals, Affective Computing aims to identify the user's emotional/affective state during a computational interaction, in order to respond to it automatically. Recommender systems, in turn, support decision making by selecting and suggesting items in situations where there are large volumes of information, traditionally using users' preferences for selection and suggestion. This process can be improved by using context (physical, environmental, social), giving rise to context-aware recommender systems. Given the importance of emotions in our lives, and the possibility of handling them with Affective Computing, this work uses the user's affective context as a situation variable during the recommendation process, proposing the Affective-Recommender: a recommender system that uses the user's affective state to select and suggest items.
The system is modeled around four components: (i) a detector, which identifies the affective state using the multidimensional Pleasure, Arousal and Dominance model and the Self-Assessment Manikin instrument, asking the user to report how he or she feels; (ii) a recommender, which chooses and suggests items using a collaborative-filtering approach in which a user's preference for an item is taken to be his or her reaction, that is, the affective state detected after contact with the item; (iii) an application, which interacts with the user, displays the items of likely greatest interest as defined by the recommender, and requests that the state be identified whenever necessary; and (iv) a database, which stores the items available for suggestion and each user's preferences. As a use case and proof of concept, the Affective-Recommender was applied in an e-learning scenario, owing to the importance of personalization, achieved through recommendation, and of emotions in the learning process. The system was implemented on top of the Moodle virtual learning environment. To illustrate its operation, a usage scenario was set up and the recommendation process simulated. To assess the system's real-world applicability, it was deployed in three undergraduate classes at UFSM; access data were analysed and a questionnaire was applied to capture the students' impressions of reporting how they felt and of receiving recommendations. The results show that students were able to report their affective states, and that these states changed according to the item accessed, although the students did not perceive improvements from the recommendations, owing to the small amount of data available for processing and the short deployment period.
Iepsen, Edécio Fernando. "Ensino de algoritmos : detecção do estado afetivo de frustração para apoio ao processo de aprendizagem." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2013. http://hdl.handle.net/10183/78020.
Full text
This thesis presents research on detecting students who show signs of frustration in learning activities in the area of algorithms, in order to then assist them with proactive support actions. Our motivation for this work comes from students' difficulty in learning the concepts and techniques for building algorithms, one of the main factors behind the high dropout rates of computing courses. Intending to contribute to reducing such dropout, this research highlights the importance of considering students' affective states, trying to motivate them to study and work through their difficulties with the assistance of computer systems. For research validation purposes, a tool was built to: a) infer the student's affective state of frustration while solving algorithm exercises; b) upon detecting signs associated with frustration, provide resources to support student learning. The inference of frustration comes from the analysis of behavioral variables produced by the students' interactions with the tool. The support consists of displaying a tutorial with a step-by-step solution to the exercise in which the student shows difficulty, and recommending a new exercise whose complexity progresses more gradually through the concepts covered up to that point in the course. With these actions, our intention is to turn the student's frustration into a learning opportunity. Case studies were conducted with students of Algorithms at the Faculty of Technology Senac Pelotas in 2011 and 2012. Data mining techniques were used to identify patterns of student behavior. The experiment results showed that evidence such as a high number of unsuccessful attempts to compile a program, a large number of errors in a program, or the amount of time spent trying to solve an algorithm may be related to the student's frustration state.
Additionally, a pre and post-test comparison showed significant progress in students' learning.
Haines, Nathaniel. "Decoding facial expressions that produce emotion valence ratings with human-like accuracy." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1511257717736851.
Full text
Gnjatović, Milan. "Adaptive dialogue management in human-machine interaction." München Verl. Dr. Hut, 2009. http://d-nb.info/997723475/04.
Full text
Elkins, Aaron Chaim. "Vocalic Markers of Deception and Cognitive Dissonance for Automated Emotion Detection Systems." Diss., The University of Arizona, 2011. http://hdl.handle.net/10150/202930.
Full text
Karg, Michelle E. [Verfasser], Kolja [Akademischer Betreuer] Kühnlenz, and Gerhard [Akademischer Betreuer] Rigoll. "Pattern Recognition Algorithms for Gait Analysis with Application to Affective Computing / Michelle Karg. Gutachter: Gerhard Rigoll. Betreuer: Kolja Kühnlenz." München : Universitätsbibliothek der TU München, 2012. http://d-nb.info/1019589450/34.
Full text
Navarro, Sainz Adriana G. "An Exploratory Study: Personal Digital Technologies For Stress Care in Women." University of Cincinnati / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1543579225538012.
Full text
Delaborde, Agnès. "Modélisation du profil émotionnel de l’utilisateur dans les interactions parlées Humain-Machine." Thesis, Paris 11, 2013. http://www.theses.fr/2013PA112225/document.
Full text
Analysing and formalising the emotional aspect of Human-Machine Interaction is the key to a successful relation. Beyond isolated paralinguistic detection (emotion, disfluencies…), our aim is to provide the system with a dynamic emotional and interactional profile of the user, which can evolve throughout the interaction. This profile allows the machine's response strategy to be adapted, and can support long-term relationships. A multi-level processing of the emotional and interactional cues extracted from speech (LIMSI emotion detection tools) leads to the constitution of the profile. Low-level cues (F0, energy, etc.) are interpreted in terms of the expressed emotion, its strength, or the talkativeness of the speaker. These mid-level cues are processed by the system so as to determine, over the interaction sessions, the emotional and interactional profile of the user. The profile is made up of six dimensions: optimism, extroversion, emotional stability, self-confidence, affinity and dominance (based on the OCEAN personality model and interpersonal circumplex theories). The information derived from this profile could allow for a measurement of the speaker's engagement. The social behaviour of the system is adapted according to the profile, the current task state and the robot's behaviour. Fuzzy logic rules drive the constitution of the profile and the automatic selection of the robot's behaviour. These deterministic rules are implemented on a decision engine designed by a partner in the ROMEO project. We implemented the system on the humanoid robot NAO. The overriding issue dealt with in this thesis is the viable interpretation of the paralinguistic cues extracted from speech into a relevant emotional representation of the user. We deem it noteworthy to point out that multimodal cues could reinforce the profile's robustness.
So as to analyse the different parts of the emotional interaction loop between the user and the system, we collaborated in the design of several systems with different degrees of autonomy: a pre-scripted Wizard-of-Oz system, a semi-automated system, and a fully autonomous system. Using these systems allowed us to collect emotional data in robotic interaction contexts while controlling several emotion elicitation parameters. This thesis presents the results of these data collections, and offers an evaluation protocol for Human-Robot Interaction through systems with various degrees of autonomy.
Boukhris, Mehdi. "Modélisation et évaluation de la fidélité d'un clone virtuel." Thesis, Université Paris-Saclay (ComUE), 2015. http://www.theses.fr/2015SACLS177/document.
Full text
Face identification plays a crucial role in our daily social interactions. Indeed, our behavior changes according to the identification of the person with whom we interact. Moreover, several studies in Psychology and Neuroscience have observed that our cognitive processing of familiar faces differs from that of unfamiliar faces. Creating a photorealistic, animated human-like face of a real person is now possible thanks to recent advances in Computer Graphics and 3D scanning systems. Recent rendering techniques are challenging our ability to distinguish between computer-generated faces and real human faces. Besides, the current trend in modeling virtual humans is to involve real data collected using scans and motion capture systems. Research and applications involving virtual humans have seen a growing interest in so-called virtual clones (agents with a familiar, or at least recognizable, appearance). Virtual clones are therefore increasingly used in human-machine interfaces and in the audiovisual industry. Studies of the perception of, and interaction with, virtual clones are therefore required to better understand how we should design and evaluate this kind of technology. Indeed, very few studies have tried to evaluate virtual clones' fidelity with respect to the original human (hereafter called "the referent"). The main goal of this thesis is to explore this line of research. Our work raises several research questions: What features of the virtual clone enable us to evaluate the resemblance between a virtual clone and its referent? Among the many rendering, animation and data acquisition techniques offered by Computer Graphics, what is the best combination to ensure the highest level of perceived fidelity? However, visual appearance is not the only component involved in recognizing familiar people.
The other components include facial expressiveness, but also the knowledge we may have about the referent (e.g. his particular way of assessing an emotional situation and expressing it through his face). Our contributions provide answers to these questions at several levels. We define a conceptual framework identifying the key concepts relevant to the study of the fidelity of a virtual face. We explore different rendering techniques. We describe an experimental study on the impact of familiarity on the judgment of fidelity. Finally, we propose a preliminary individual computational model, based on a cognitive approach to emotions, that could drive the animation of the virtual clone. This work opens avenues for the design and improvement of virtual clones, and more generally for human-machine interfaces based on expressive virtual agents.
Paleari, Marco. "Informatique Affective : Affichage, Reconnaissance, et Synthèse par Ordinateur des Émotions." Phd thesis, Télécom ParisTech, 2009. http://pastel.archives-ouvertes.fr/pastel-00005615.
Full text
Weber, Marlene. "Automotive emotions : a human-centred approach towards the measurement and understanding of drivers' emotions and their triggers." Thesis, Brunel University, 2018. http://bura.brunel.ac.uk/handle/2438/16647.
Full text
Yngström, Karl. "Hjälpmedel för att tydliggöra känslor hos personer med AST." Thesis, Malmö universitet, Fakulteten för teknik och samhälle (TS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20498.
Full text
Smartphones are a powerful tool for facilitating communication in new ways, and research on using technology to assist people with various mental disabilities is a growing field. Autism Spectrum Disorder (ASD) is one such disability, which manifests differently in different people, but one general theme is difficulty in understanding emotion. Measuring emotion is not easily done, and for some time research into emotion was overlooked in favor of more logical thought processes. This paper uses Russell's model of emotion and core affect, which maps emotion along two crossed axes: activation and valence (positive – negative). The purpose of this study is to evaluate various methods for measuring and registering emotion for people with ASD in a simple, cheap and accessible way. This is done based on existing models of emotion, using a smartphone as a tool, and should be helpful in the daily life of people with ASD and those around them.
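Russell's two-axis model described above lends itself to a tiny sketch: mapping an (activation, valence) point to a coarse quadrant label. The labels and thresholds below are illustrative assumptions of ours, not taken from the thesis:

```python
def core_affect_quadrant(valence, activation):
    """Map a point in Russell's two-axis core-affect space (each axis
    in [-1, 1]) to a coarse quadrant label (labels are illustrative)."""
    if activation >= 0:
        return "elated/excited" if valence >= 0 else "tense/distressed"
    return "calm/relaxed" if valence >= 0 else "bored/depressed"
```

A self-report interface like the one the thesis discusses could collect the two axis values (e.g. from two sliders on a smartphone) and use such a mapping to display a label back to the user.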
Esau, Natalia. "Emotionale Aspekte der Mensch-Roboter-Interaktion und ihre Realisierung in verhaltensbasierten Systemen /." Aachen : Shaker, 2009. http://d-nb.info/997696605/04.
Full text
BURSIC, SATHYA. "ON WIRING EMOTION TO WORDS: A BAYESIAN MODEL." Doctoral thesis, Università degli Studi di Milano, 2022. http://hdl.handle.net/2434/932589.
Full text
Poikolainen, Rosén Anton. "Words have power: Speech recognition in interactive jewelry : a case study with newcome LGBT+ immigrants." Thesis, Södertörns högskola, Medieteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:sh:diva-32992.
Full text