Journal articles on the topic "Computational model of emotion"

To see other types of publications on this topic, follow the link: Computational model of emotion.

Format your source in APA, MLA, Chicago, Harvard, and other styles.

Consult the top 50 journal articles for your research on the topic "Computational model of emotion".

Next to every work in the list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, when these details are available in the metadata.

Browse journal articles across a wide range of disciplines and format your bibliography correctly.

1

Guojiang, Wang, and Teng Shaodong. "A Computational Model Based on Emotion Energy." International Journal of Signal Processing Systems 7, no. 2 (March 2019): 54–59. http://dx.doi.org/10.18178/ijsps.7.2.54-59.

2

Meftah, Imen Tayari, Nhan Le Thanh, and Chokri Ben Amar. "Multimodal Approach for Emotion Recognition Using a Formal Computational Model." International Journal of Applied Evolutionary Computation 4, no. 3 (July 2013): 11–25. http://dx.doi.org/10.4018/jaec.2013070102.

Abstract:
Emotions play a crucial role in human-computer interaction. They are generally expressed and perceived through multiple modalities such as speech, facial expressions, and physiological signals. Indeed, the complexity of emotions makes their acquisition very difficult and makes unimodal systems (i.e., those observing only one source of emotion) unreliable and often unfeasible in applications of high complexity. Moreover, the lack of a standard for modeling human emotions hinders the sharing of affective information between applications. In this paper, the authors present a multimodal approach to emotion recognition from many sources of information. The paper aims to provide a multimodal system for emotion recognition and exchange that will facilitate inter-system exchanges and improve the credibility of emotional interaction between users and computers. The authors elaborate a multimodal emotion recognition method from physiological data based on signal processing algorithms. Their method permits the recognition of emotions composed of several aspects, such as simulated and masked emotions. The method uses a new multidimensional model to represent emotional states based on an algebraic representation. The experimental results show that the proposed multimodal emotion recognition method improves recognition rates in comparison to the unimodal approach. Compared to state-of-the-art multimodal techniques, the proposed method gives good results, with 72% correct recognition.
3

Hudlicka, Eva. "Guidelines for Designing Computational Models of Emotions." International Journal of Synthetic Emotions 2, no. 1 (January 2011): 26–79. http://dx.doi.org/10.4018/jse.2011010103.

Abstract:
Rapid growth in computational modeling of emotion and cognitive-affective architectures occurred over the past 15 years. Emotion models and architectures are built to elucidate the mechanisms of emotions and enhance believability and effectiveness of synthetic agents and robots. Despite the many emotion models developed to date, a lack of consistency and clarity regarding what exactly it means to ‘model emotions’ persists. There are no systematic guidelines for development of computational models of emotions. This paper deconstructs the often vague term ‘emotion modeling’ by suggesting the view of emotion models in terms of two fundamental categories of processes: emotion generation and emotion effects. Computational tasks necessary to implement these processes are also identified. The paper addresses how computational building blocks provide a basis for the development of more systematic guidelines for affective model development. The paper concludes with a description of an affective requirements analysis and design process for developing affective computational models in agent architectures.
4

Sun, Ron, Joseph Allen, and Eric Werbin. "Modeling Emotion Contagion within a Computational Cognitive Architecture." Journal of Cognition and Culture 22, no. 1-2 (March 11, 2022): 60–89. http://dx.doi.org/10.1163/15685373-12340125.

Abstract:
The issue of emotion contagion has been gaining attention. Humans can share emotions, for example, through gestures, through speech, or even through online text via social media. There have been computational models trying to capture emotion contagion. However, these models are limited as they tend to represent agents in a very simplified way. There exist also more complex models of agents and their emotions, but they are not yet addressing emotion contagion. We use a more psychologically realistic and better validated model – the Clarion cognitive architecture – as the basis to model emotion and emotion contagion in a more psychologically realistic way. In particular, we use Clarion to capture and explain human data from typical human experiments on emotion contagion. This approach may open up avenues for more nuanced understanding of emotion contagion and more realistic capturing of its effects in different circumstances.
5

Talanov, Max, and Alexander Toschev. "Computational Emotional Thinking and Virtual Neurotransmitters." International Journal of Synthetic Emotions 5, no. 1 (January 2014): 1–8. http://dx.doi.org/10.4018/ijse.2014010101.

Abstract:
Turing's genius anticipated current research in the AI field by 65 years when he stated that the idea of intelligent machines "cannot be wholly ignored, because the idea of 'intelligence' is itself emotional rather than mathematical" (Turing, 1948). The authors' work is dedicated to the construction, or synthesis, of computational emotional thinking. They used three bases for their work: from AI, the six-level model of thinking described in the book "The Emotion Machine" (Minsky, 2007); from evolutionary psychology, the "Wheel of Emotions" model (Plutchik, 2001), used as the subjective perception model; and from neuroscience, Lovheim's neurotransmission-based "Cube of Emotions" (Lovheim, 2012), used as the objective model of the brain's emotional response. Based on the impact of neurotransmitters, the authors propose to model emotional computing systems. Overall, the presented work is a synthesis of several emotional/affective theories to produce a model of emotions and affective mechanisms that fits the six-level thinking architecture.
6

Scherer, Klaus R. "Emotions are emergent processes: they require a dynamic computational architecture." Philosophical Transactions of the Royal Society B: Biological Sciences 364, no. 1535 (December 12, 2009): 3459–74. http://dx.doi.org/10.1098/rstb.2009.0141.

Abstract:
Emotion is a cultural and psychobiological adaptation mechanism which allows each individual to react flexibly and dynamically to environmental contingencies. From this claim flows a description of the elements theoretically needed to construct a virtual agent with the ability to display human-like emotions and to respond appropriately to human emotional expression. This article offers a brief survey of the desirable features of emotion theories that make them ideal blueprints for agent models. In particular, the component process model of emotion is described, a theory which postulates emotion-antecedent appraisal on different levels of processing that drive response system patterning predictions. In conclusion, investing seriously in emergent computational modelling of emotion using a nonlinear dynamic systems approach is suggested.
7

Gratch, Jonathan, and Stacy Marsella. "Evaluating a Computational Model of Emotion." Autonomous Agents and Multi-Agent Systems 11, no. 1 (June 9, 2005): 23–43. http://dx.doi.org/10.1007/s10458-005-1081-1.

8

Jain, Shikha, and Krishna Asawa. "EMIA: Emotion Model for Intelligent Agent." Journal of Intelligent Systems 24, no. 4 (December 1, 2015): 449–65. http://dx.doi.org/10.1515/jisys-2014-0071.

Abstract:
Emotions play a significant role in human cognitive processes such as attention, motivation, learning, memory, and decision making. Many researchers have worked on incorporating emotions in a cognitive agent; however, each model has its own merits and demerits. Moreover, most studies on emotion focus on steady-state emotions rather than on emotion switching. Thus, in this article, a domain-independent computational model of emotions for intelligent agents is proposed that has modules for emotion elicitation, emotion regulation, and emotion transition. The model is built on well-known psychological theories such as appraisal theories of emotions, emotion regulation theory, and the multistore human memory model. The design of the model uses fuzzy logic to handle uncertain and subjective information. The main focus is on the primary emotions suggested by Ekman; however, the simultaneous elicitation of multiple emotions (called secondary emotions) is also supported by the model.
9

Fathalla, Rana. "Emotional Models." International Journal of Synthetic Emotions 11, no. 2 (July 2020): 1–18. http://dx.doi.org/10.4018/ijse.2020070101.

Abstract:
Emotion modeling has gained attention for almost two decades now due to the rapid growth of affective computing (AC). AC aims to detect and respond to the end-user's emotions by devices and computers. Despite the hard efforts directed at emotion modeling, with numerous attempts to build different models of emotions, emotion modeling remains an art, with a lack of consistency and clarity regarding the exact meaning of emotion modeling. This review deconstructs the vagueness of the term 'emotion modeling' by discussing the various types and categories of emotion modeling: computational models and their categories (emotion generation and emotion effects), and emotion representation models and their categories (categorical, dimensional, and componential models). The review also covers the applications associated with each type of emotion model: artificial intelligence and robotics architectures and human-computer interaction applications for the computational models, and emotion classification and affect-aware applications, such as video games and tutoring systems, for the emotion representation models.
10

Broekens, Joost. "Modeling the Experience of Emotion." International Journal of Synthetic Emotions 1, no. 1 (January 2010): 1–17. http://dx.doi.org/10.4018/jse.2010101601.

Abstract:
Affective computing has proven to be a viable field of research comprising a large number of multidisciplinary researchers, resulting in work that is widely published. The majority of this work consists of emotion recognition technology, computational modeling of causal factors of emotion, and emotion expression in virtual characters and robots. A smaller part is concerned with modeling the effects of emotion on cognition and behavior, formal modeling of cognitive appraisal theory, and models of emergent emotions. Part of the motivation for affective computing as a field is to better understand emotion through computational modeling. In psychology, a critical and neglected aspect of having emotions is the experience of emotion: what does the content of an emotional episode look like, how does this content change over time, and when do we call the episode emotional? Few modeling efforts in affective computing have these topics as a primary focus. The launch of a journal on synthetic emotions should motivate research initiatives in this direction, and this research should have a measurable impact on emotion research in psychology. In this article, I show that a good way to do so is to investigate the psychological core of what an emotion is: an experience. I present ideas on how computational modeling of emotion can help to better understand the experience of emotion, and provide evidence that several computational models of emotion already address the issue.
11

Rodríguez, Luis-Felipe, Félix Ramos, and Yingxu Wang. "Cognitive Computational Models of Emotions and Affective Behaviors." International Journal of Software Science and Computational Intelligence 4, no. 2 (April 2012): 41–63. http://dx.doi.org/10.4018/jssci.2012040103.

Abstract:
Emotions are one of the important subconscious mechanisms that influence human behaviors, attention, and decision making. The emotion process helps to determine how humans perceive their internal status and needs in order to form the consciousness of an individual. Emotions have been studied from multidisciplinary perspectives covering a wide range of empirical and psychological topics, such as understanding emotional processes, creating cognitive and computational models of emotions, and applications in computational intelligence. This paper presents a comprehensive survey of cognitive and computational models of emotions resulting from multidisciplinary studies. It explores how cognitive models serve as the theoretical basis of computational models of emotions. The mechanisms underlying affective behaviors are examined as important elements in the design of these computational models. A comparative analysis of current approaches is elaborated based on recent advances towards a coherent cognitive computational model of emotions, which leads to machine-simulated emotions for cognitive robots and autonomous agent systems in cognitive informatics and cognitive computing.
12

Kragel, Philip A., Marianne C. Reddan, Kevin S. LaBar, and Tor D. Wager. "Emotion schemas are embedded in the human visual system." Science Advances 5, no. 7 (July 2019): eaaw4358. http://dx.doi.org/10.1126/sciadv.aaw4358.

Abstract:
Theorists have suggested that emotions are canonical responses to situations ancestrally linked to survival. If so, then emotions may be afforded by features of the sensory environment. However, few computational models describe how combinations of stimulus features evoke different emotions. Here, we develop a convolutional neural network that accurately decodes images into 11 distinct emotion categories. We validate the model using more than 25,000 images and movies and show that image content is sufficient to predict the category and valence of human emotion ratings. In two functional magnetic resonance imaging studies, we demonstrate that patterns of human visual cortex activity encode emotion category–related model output and can decode multiple categories of emotional experience. These results suggest that rich, category-specific visual features can be reliably mapped to distinct emotions, and they are coded in distributed representations within the human visual system.
13

Korendo, Marta. "SIGNAL DETECTION APPROACH IN MODELING CONSCIOUSNESS – EMOTION INTERACTIONS." Acta Neuropsychologica 15, no. 1 (March 12, 2017): 89–96. http://dx.doi.org/10.5604/12321966.1238143.

Abstract:
Contemporary cognitive science attempts to provide computational models that describe how consciousness and emotion constitute adaptive behavior. Given the recent neurobiological view that highlights the fact that the cognitive and emotional regions of the brain work together to achieve conscious behavior, it was shown that signal-detection theory (SDT) can effectively capture the notion of the consciousness–emotion interactions that underlie emotional experience. In particular, I have demonstrated that the hierarchical SDT model is capable of estimating different levels of the hierarchical organization of emotional experience. I have also shown that the threshold SDT model predicts that the formation of emotion experience requires a discrete decision space, which implies that the neural representations of emotion are mediated by thresholds to be experienced consciously. The application of both computational SDT models seems to be a promising advance for studying consciousness–emotion interactions.
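As a concrete illustration of the signal-detection machinery invoked above, the sketch below computes the standard equal-variance SDT sensitivity (d′) and criterion from hit and false-alarm rates. The numbers and the fear-report scenario are invented for the example; the paper's hierarchical and threshold SDT models are more elaborate.

```python
from scipy.stats import norm

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Equal-variance SDT sensitivity: d' = z(H) - z(FA)."""
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

def criterion(hit_rate: float, false_alarm_rate: float) -> float:
    """Response criterion: c = -(z(H) + z(FA)) / 2."""
    return -0.5 * (norm.ppf(hit_rate) + norm.ppf(false_alarm_rate))

# Hypothetical observer: reports "I felt fear" on 80% of fear trials
# and on 20% of neutral trials.
print(d_prime(0.80, 0.20))    # ~1.68: moderate sensitivity to the emotion
print(criterion(0.80, 0.20))  # ~0.0: an unbiased reporting threshold
```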
14

Talanov, Max, and Alexander Toschev. "Appraisal, Coping and High Level Emotions Aspects of Computational Emotional Thinking." International Journal of Synthetic Emotions 6, no. 1 (January 2015): 24–39. http://dx.doi.org/10.4018/ijse.2015010102.

Abstract:
Turing's genius anticipated current research in the AI field by 65 years when he stated that the idea of intelligent machines "cannot be wholly ignored, because the idea of 'intelligence' is itself emotional rather than mathematical" (Turing, 1948). This is the second article dedicated to the foundations of emotional thinking. In the first article (Talanov & Toschev, 2014), the authors created an overall picture and proposed a framework for computational emotional thinking, built on three bases: from AI, the six-level model of thinking described in the book "The Emotion Machine" (Minsky, 2007); from evolutionary psychology, the "Wheel of Emotions" model (Plutchik, 2001); and from neuroscience, Lovheim's neurotransmission-based "Cube of Emotions" (Lovheim, 2012). Based on the impact of neurotransmitters, the authors proposed to model emotional computing systems. The current work addresses three aspects not described in the first article: appraisal (the algorithm and predicates by which an inbound stimulus is evaluated to trigger the proper emotional response), coping (the way humans deal with the emotional state triggered by stimulus appraisal and further thinking processes), and the impact of high-level emotions on the system and its computational processes.
15

Soman, Gayathri, M. V. Vivek, M. V. Judy, Elpiniki Papageorgiou, and Vassilis C. Gerogiannis. "Precision-Based Weighted Blending Distributed Ensemble Model for Emotion Classification." Algorithms 15, no. 2 (February 6, 2022): 55. http://dx.doi.org/10.3390/a15020055.

Abstract:
Focusing on emotion recognition, this paper addresses the task of emotion classification and its performance with respect to accuracy, by investigating the capabilities of a distributed ensemble model using precision-based weighted blending. Research on emotion recognition and classification concerns the detection of an individual's emotional state from various types of input features, such as textual data, facial expressions, vocal, gesture and physiological signal recognition, electrocardiogram (ECG), and electrodermography (EDG)/galvanic skin response (GSR). The extraction of effective emotional features from different types of input data, as well as the analysis of large volumes of real-time data, have become increasingly important for accurate classification. Considering the volume and variety of the examined problem, a machine learning model that works in a distributed manner is essential. In this direction, we propose a precision-based weighted blending distributed ensemble model for emotion classification. The suggested ensemble model works well in a distributed manner using Spark's resilient distributed datasets, which provide quick in-memory processing and perform iterative computations effectively. On the model validation set, weights are assigned to the different classifiers in the ensemble based on their precision values. Each weight determines the importance of the respective classifier's predictions, and a new model is built upon the derived weights; this model performs the final prediction on the test dataset. The results show that the proposed ensemble model is sufficiently accurate in differentiating between primary emotions (such as sadness, fear, and anger) and secondary emotions. The suggested ensemble model achieved accuracies of 76.2%, 99.4%, and 99.6% on the FER-2013, CK+, and FERG-DB datasets, respectively.
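The core of the described algorithm is the derivation of blending weights from validation-set precision. Below is a minimal sketch of that idea using scikit-learn, with a synthetic dataset standing in for the paper's facial-expression data; the Spark-based distributed execution described in the abstract is deliberately omitted.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Synthetic 3-class stand-in for an emotion dataset.
X, y = make_classification(n_samples=2000, n_classes=3, n_informative=8,
                           random_state=0)
X_train, X_hold, y_train, y_hold = train_test_split(X, y, test_size=0.4,
                                                    random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_hold, y_hold, test_size=0.5,
                                                random_state=0)

classifiers = [RandomForestClassifier(random_state=0),
               LogisticRegression(max_iter=1000),
               GaussianNB()]

# Weight each base classifier by its macro-averaged validation precision.
weights = []
for clf in classifiers:
    clf.fit(X_train, y_train)
    weights.append(precision_score(y_val, clf.predict(X_val), average="macro"))
weights = np.array(weights) / np.sum(weights)

# Blend class-probability outputs with the precision-based weights.
blended = sum(w * clf.predict_proba(X_test)
              for w, clf in zip(weights, classifiers))
y_pred = np.argmax(blended, axis=1)
print("blended accuracy:", np.mean(y_pred == y_test))
```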
16

Ullah, Nimat, Jan Treur, and Sander L. Koole. "A computational model for flexibility in emotion regulation." Procedia Computer Science 145 (2018): 572–80. http://dx.doi.org/10.1016/j.procs.2018.11.100.

17

Yu, Dong Mei. "Affective-Cognitive Reward Model Based on Emotional Interactions." Applied Mechanics and Materials 333-335 (July 2013): 1357–60. http://dx.doi.org/10.4028/www.scientific.net/amm.333-335.1357.

Abstract:
In this paper, the author presents a new computational model in which emotional interactions result both from intrinsic emotion and cognition and from extrinsic environmental stimulation (another agent); both parts play an important role in everyday life. We take six basic emotion states (happiness, surprise, anger, fear, sadness, disgust), and an agent updates its state depending on its current emotional and cognitive state and on encounters with another agent in a random environment. Finally, we design experiments to verify the effects of the affective-cognitive algorithm. The experimental results are in accordance with the principles of human emotion.
18

Khan, Qura-tul-ain, and Tahir Alyas. "Modeling Emotional Mutation and Evolvement Using Genetic Algorithm in Agency." Lahore Garrison University Research Journal of Computer Science and Information Technology 1, no. 2 (June 30, 2017): 52–61. http://dx.doi.org/10.54692/lgurjcsit.2017.010228.

Abstract:
The human mind has the ability to generate emotions based on the internal and external environment. These emotions are based on past experiences and the current situation. Mutation of emotions in humans is the change in the intensity of an emotion, and the more intense an emotion is, the greater its chances of persisting. In the mutative state, two emotions are crossed over, and from the new emotions only the fittest and strongest emotion survives. Emotional mutation and evolvement help the human mind in decision making and in generating responses. In an agency, this phenomenon of emotional modeling can be accomplished by mutation and evolvement for generating output. A genetic algorithm is a computational model inspired by the evolution of biological populations; by using the mutation and crossover operators of a genetic algorithm, the agency is able to generate output. This paper presents an algorithmic approach to emotional mutation and evolvement using a genetic algorithm for generating output in an agency.
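The abstract leaves the encoding and fitness unspecified; the sketch below is one plausible reading, in which an individual is a vector of emotion intensities, fitness rewards the dominant emotion's intensity ("the fittest and strongest emotion survives"), and uniform crossover plus intensity mutation drive the evolution.

```python
import random

EMOTIONS = ["joy", "anger", "fear", "sadness"]

def random_individual():
    # An individual: emotion intensities in [0, 1] (assumed encoding).
    return {e: random.random() for e in EMOTIONS}

def fitness(ind):
    # Assumed fitness: the intensity of the dominant emotion.
    return max(ind.values())

def crossover(a, b):
    # Uniform crossover: each intensity comes from either parent.
    return {e: random.choice((a[e], b[e])) for e in EMOTIONS}

def mutate(ind, rate=0.1, step=0.2):
    # Mutation perturbs intensities, clipped to [0, 1].
    return {e: min(1.0, max(0.0, v + random.uniform(-step, step)))
            if random.random() < rate else v
            for e, v in ind.items()}

population = [random_individual() for _ in range(20)]
for _ in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]  # selection of the fittest half
    children = [mutate(crossover(random.choice(survivors),
                                 random.choice(survivors)))
                for _ in range(10)]
    population = survivors + children

best = max(population, key=fitness)
print("dominant emotional response:", max(best, key=best.get))
```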
19

Hulliyah, Khodijah, et al. "Analysis of Emotion Recognition Model Using Electroencephalogram (EEG) Signals Based on Stimuli Text." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 3 (April 10, 2021): 1384–93. http://dx.doi.org/10.17762/turcomat.v12i3.910.

Abstract:
Recognizing emotions through a brain-wave approach with facial or vocal expression is widely used, but few studies use text stimuli. Therefore, this study analyzes an emotion recognition experiment that stimulates sentiment tones using EEG. The process of classifying emotions uses a random forest model, which is compared with two benchmark models, a support vector machine (SVM) and a decision tree. The raw data come from scraping Twitter data. Emotional annotation of the dataset was carried out manually using four classes: happiness, sadness, fear, and anger. The annotated dataset was then tested using an electroencephalogram (EEG) device attached to the participant's head to determine the brain waves appearing after reading the text. The results show that the random forest model has the highest accuracy, at 98%, slightly ahead of the decision tree at 88%, while the SVM performs much worse, at 32%. Furthermore, the agreement on angry emotions between manual annotation and the EEG device was high for all three models, with an average value above 90%, because reading with an angry expression is easier to perform.
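The model comparison reported here is easy to reproduce in outline with scikit-learn; the sketch below uses synthetic features as a stand-in for the EEG recordings and makes no attempt to reproduce the paper's accuracy figures.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for EEG features labeled with four emotions
# (happiness, sadness, fear, anger).
X, y = make_classification(n_samples=1000, n_features=32, n_informative=10,
                           n_classes=4, random_state=0)

models = {"random forest": RandomForestClassifier(random_state=0),
          "SVM": SVC(),
          "decision tree": DecisionTreeClassifier(random_state=0)}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```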
20

Bakkialakshmi, V. S., and Sudalaimuthu Thalavaipillai. "AMIGOS: a robust emotion detection framework through Gaussian ResiNet." Bulletin of Electrical Engineering and Informatics 11, no. 4 (August 1, 2022): 2142–50. http://dx.doi.org/10.11591/eei.v11i4.3783.

Abstract:
Affective computing is the study of the deep extraction of the emotional impacts that trigger humans for various reasons. Emotions directly reflect on human behaviour. The proposed analysis is inclined towards deep emotion extraction through a novel concept with low computation time; the focus here is on designing a robust analysis model. The AMIGOS dataset on affect and personality modelling is considered, and a novel Gaussian ResiNet (GRN) algorithm is evaluated. Changes in human emotions are the brain's response to the actions encountered. The features of the given physiological factors are considered for analysis; further, with GMM-ResiNet (GRN), a low-computation structure is used for classification. The GRN is created from the given dataset for similar feature validations. The system predicts the correlated relative data from the training and testing sets, and performance is measured using error rate (ER), algorithm computation time (ACT), full computation time (FCT), accuracy, and other metrics. The GRN is created and tested with processed data from the AMIGOS dataset. The model is validated against state-of-the-art approaches and achieves an accuracy of 92.6%.
21

Winters, R. Michael, and Marcelo M. Wanderley. "Sonification of Emotion: Strategies and results from the intersection with music." Organised Sound 19, no. 1 (February 26, 2014): 60–69. http://dx.doi.org/10.1017/s1355771813000411.

Abstract:
Emotion is a word not often heard in sonification, though advances in affective computing make the data type imminent. At times the relationship between emotion and sonification has been contentious due to an implied overlap with music. This paper clarifies the relationship, demonstrating how it can be mutually beneficial. After identifying contexts favourable to the auditory display of emotion, and the utility of its development to research in musical emotion, the current state of the field is addressed, reiterating the necessary conditions for sound to qualify as a sonification of emotion. With this framework, strategies for display are presented that use acoustic and structural cues designed to target select auditory-cognitive mechanisms of musical emotion. Two sonifications are then described that use these strategies to convey arousal and valence but differ in design methodology: one designed ecologically, the other computationally. Each model is sampled at 15-second intervals at 49 evenly distributed points on the AV space, and evaluated using a publicly available tool for computational music emotion recognition. The computational design performed 65 times better in this test, but the ecological design is argued to be more useful for emotional communication. Conscious of these limitations, computational design and evaluation are supported for future development.
22

Castellanos, Sergio, and Luis-Felipe Rodríguez. "A Flexible Scheme to Model the Cognitive Influence on Emotions in Autonomous Agents." International Journal of Cognitive Informatics and Natural Intelligence 12, no. 4 (October 2018): 81–100. http://dx.doi.org/10.4018/ijcini.2018100105.

Abstract:
Autonomous agents (AAs) are designed to embody natural intelligence by incorporating cognitive mechanisms that evaluate stimuli from an emotional perspective. Computational models of emotions (CMEs) implement mechanisms of human information processing in order to provide AAs with the capability to assign emotional values to perceived stimuli and to implement emotion-driven behaviors. However, a major challenge in the design of CMEs is how cognitive information is projected from the architecture of AAs. This article presents a cognitive model for CMEs based on appraisal theory, aimed at modeling the interactions between AAs' cognitive and affective processes. The proposed scheme explains the influence of AAs' cognition on emotions through fuzzy membership functions associated with appraisal dimensions. The computational simulation is designed in the context of an integrative framework to facilitate the development of CMEs capable of interacting with the cognitive components of AAs. The article presents a case study and an experiment that demonstrate the functionality of the proposed models.
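The scheme's key device, fuzzy membership functions over appraisal dimensions, can be sketched in a few lines. The appraisal dimension ("desirability"), the fuzzy sets, and their breakpoints below are illustrative assumptions, not the paper's definitions.

```python
def triangular(x: float, a: float, b: float, c: float) -> float:
    """Triangular fuzzy membership with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical fuzzy sets over the appraisal dimension "desirability",
# scaled to [-1, 1] (undesirable .. desirable).
def appraise(desirability: float) -> dict:
    return {
        "distress": triangular(desirability, -1.2, -1.0, 0.0),
        "neutral":  triangular(desirability, -0.5,  0.0, 0.5),
        "joy":      triangular(desirability,  0.0,  1.0, 1.2),
    }

# A mildly desirable stimulus partially activates "neutral" and "joy":
print(appraise(0.4))  # {'distress': 0.0, 'neutral': 0.2, 'joy': 0.4}
```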
23

Abro, Altaf H., Adnan Manzoor, Seyed Amin Tabatabaei, and Jan Treur. "A Computational Cognitive Model Integrating Different Emotion Regulation Strategies." Procedia Computer Science 71 (2015): 157–68. http://dx.doi.org/10.1016/j.procs.2015.12.187.

24

Bosse, Tibor, Matthijs Pontier, and Jan Treur. "A computational model based on Gross’ emotion regulation theory." Cognitive Systems Research 11, no. 3 (September 2010): 211–30. http://dx.doi.org/10.1016/j.cogsys.2009.10.001.

25

Shi, Xue Fei, and Tao Feng. "Emotion Intelligence Computation Based on Matrix Description of State Machine." Advanced Materials Research 717 (July 2013): 439–43. http://dx.doi.org/10.4028/www.scientific.net/amr.717.439.

Abstract:
The functional role of emotions has recently been fully recognized as essential for intelligent systems. In this paper, an emotion and behavior model is presented based on the similarity between primary emotions and a state machine. First, a two-layer emotional state generator based on brain science is introduced. The matrix description of a state machine is applied to construct the bottom level of the emotion generator; this method can improve the reactive performance of an intelligent system. A neural cell model named Lapicque is used to describe the transitions between emotion states. Experimental results presented at the end demonstrate the response advantage of our model.
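The matrix description of an emotion state machine can be illustrated with a row-stochastic transition matrix acting on a distribution over emotion states; the states and the probabilities below are invented for the example.

```python
import numpy as np

# Emotion states of the bottom-level generator (hypothetical ordering).
states = ["calm", "happy", "angry", "sad"]

# Row-stochastic transition matrix for one class of stimulus:
# T[i, j] = probability of moving from state i to state j.
T = np.array([[0.6, 0.3, 0.05, 0.05],
              [0.2, 0.7, 0.05, 0.05],
              [0.3, 0.1, 0.5,  0.1 ],
              [0.3, 0.1, 0.1,  0.5 ]])

# The current emotional state as a distribution over the states.
p = np.array([1.0, 0.0, 0.0, 0.0])  # the agent starts calm

# One stimulus step is a vector-matrix product; repeated stimuli iterate it.
for step in range(3):
    p = p @ T
    print(step + 1, dict(zip(states, np.round(p, 3))))
```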
26

Iovane, Gerardo, Iana Fominska, Riccardo Emanuele Landi, and Francesco Terrone. "Smart Sensing: An Info-Structural Model of Cognition for Non-Interacting Agents." Electronics 9, no. 10 (October 15, 2020): 1692. http://dx.doi.org/10.3390/electronics9101692.

Abstract:
This study explores an info-structural model of cognition for non-interacting agents affected by human sensation, perception, emotion, and affection. We do not analyze the neuroscientific or psychological debate concerning how the human mind works, but we underline the importance of modeling the above cognitive levels when designing artificial intelligence agents. Our aim was to start a reflection on the computational reproduction of intelligence, providing a methodological approach through which the aforementioned human factors in autonomous systems are enhanced. The presented model must be understood as part of a larger one, which also includes the concepts of attention, awareness, and consciousness. Experiments have been performed by providing visual stimuli to the proposed model, coupling the emotion cognitive level with a supervised learner to produce artificial emotional activity. For this purpose, performances with Random Forest and XGBoost have been compared and, with the latter algorithm, 85% accuracy and 92% coherency over predefined emotional episodes have been achieved. The model has also been tested on emotional episodes different from those used in the training phase, where a decrease in accuracy and coherency is observed. Furthermore, by decreasing the weight related to the emotion cognitive instances, the model reaches the same performances recorded during the evaluation phase. In general, the framework achieves a first emotional generalization responsiveness of 94% and presents an approximately constant relative frequency of the agent's displayed emotions.
27

Belhaj, Mouna, Fahem Kebair, and Lamjed Ben Said. "A Computational Model of Emotions for the Simulation of Human Emotional Dynamics in Emergency Situations." International Journal of Computer Theory and Engineering 6, no. 3 (2014): 227–33. http://dx.doi.org/10.7763/ijcte.2014.v6.867.

28

Liao, Yi-Jr, Wei-Chun Wang, Shanq-Jang Ruan, Yu-Hao Lee, and Shih-Ching Chen. "A Music Playback Algorithm Based on Residual-Inception Blocks for Music Emotion Classification and Physiological Information." Sensors 22, no. 3 (January 20, 2022): 777. http://dx.doi.org/10.3390/s22030777.

Abstract:
Music can have a positive effect on runners' performance and motivation. However, practical implementations of music intervention during exercise are mostly absent from the literature. Therefore, this paper designs a playback sequence system for joggers that considers music emotion and physiological signals. The playback sequence is implemented by a music selection module that combines artificial intelligence techniques with physiological data and emotional music. To let the system operate for a long time, this paper improves the model and the music selection module to achieve lower energy consumption. The proposed model obtains fewer FLOPs and parameters by using a logarithm-scaled Mel-spectrogram as its input features. The accuracy, computational complexity, trainable parameters, and inference time are evaluated on the Bi-modal, 4Q emotion, and Soundtrack datasets. The experimental results show that the proposed model outperforms that of Sarkar et al. and achieves competitive performance on the Bi-modal (84.91%), 4Q emotion (92.04%), and Soundtrack (87.24%) datasets. More specifically, the proposed model reduces the computational complexity and inference time while maintaining classification accuracy, compared to other models. Moreover, the proposed model is small for network training, so it can be deployed on mobiles and other devices with limited computing resources. This study designed the overall playback sequence system by considering the relationship between music emotion and physiological state during exercise; the system can be adopted directly during exercise to improve users' exercise efficiency.
29

Oh, SeungJun, and Dong-Keun Kim. "Comparative Analysis of Emotion Classification Based on Facial Expression and Physiological Signals Using Deep Learning." Applied Sciences 12, no. 3 (January 26, 2022): 1286. http://dx.doi.org/10.3390/app12031286.

Abstract:
This study aimed to classify emotion based on facial expression and physiological signals using deep learning and to compare the analyzed results. We asked 53 subjects to make facial expressions, expressing four types of emotion. Next, the emotion-inducing video was watched for 1 min, and the physiological signals were obtained. We defined four emotions as positive and negative emotions and designed three types of deep-learning models that can classify emotions. Each model used facial expressions and physiological signals as inputs, and a model in which these two types of input were applied simultaneously was also constructed. The accuracy of the model was 81.54% when physiological signals were used, 99.9% when facial expressions were used, and 86.2% when both were used. Constructing a deep-learning model with only facial expressions showed good performance. The results of this study confirm that the best approach for classifying emotion is using only facial expressions rather than data from multiple inputs. However, this is an opinion presented only in terms of accuracy without considering the computational cost, and it is suggested that physiological signals and multiple inputs be used according to the situation and research purpose.
30

Steephen, John E. "HED: A Computational Model of Affective Adaptation and Emotion Dynamics." IEEE Transactions on Affective Computing 4, no. 2 (April 2013): 197–210. http://dx.doi.org/10.1109/t-affc.2013.2.

31

Asma Sulaiman, Nor Anis, and Leelavathi Rajamanickam. "The State Art of Text Sentiment from Opinions to Emotion Mining." Journal of Engineering & Technological Advances 5, no. 2 (2020): 43–52. http://dx.doi.org/10.35934/segi.v5i2.43.

Abstract:
This study aims to analyse the feelings expressed by users in the text of comments posted on social media. Text mining and emotion mining can both be approached with techniques of natural language processing (NLP). Most previous text mining studies use an unsupervised technique and refer to Ekman's Emotion Model (EEM), which has restricted coverage of polarity shifters and negations and lacks emoticons. This study proposes a Naïve Bayes algorithm as a tool to produce users' emotion patterns. The most important contribution of this study is to link the theory of emotions with text sentiment through computational methods for classifying users' feelings from natural-language text. A general system framework for extracting opinions for emotion mining is then produced that can be used in any domain.
32

Beveridge, Scott, and Don Knox. "Popular music and the role of vocal melody in perceived emotion." Psychology of Music 46, no. 3 (June 30, 2017): 411–23. http://dx.doi.org/10.1177/0305735617713834.

Abstract:
The voice plays a crucial role in expressing emotion in popular music. However, the importance of the voice in this context has not been systematically assessed. This study investigates the emotional effect of vocal features in popular music. In particular, it focuses on nonverbal characteristics, including vocal melody and rhythm. To determine the efficacy of these features, they are used to construct a computational Music Emotion Recognition (MER) system. The system is based on the circumplex model that expresses emotion in terms of arousal and valence. Two independent studies were used to develop the system. The first study established models for predicting arousal and valence based on a range of acoustical and nonverbal vocal features. The second study was used for independent validation of these models. Results show that features describing rhythmic qualities of the vocal line produce emotion models with a high level of generalizability. In particular these models reliably predict emotional valence, a well-known issue in existing Music Emotion Recognition systems.
33

Ohira, Hideki. "Predictive Processing of Interoception, Decision-Making, and Allostasis." Psihologijske teme 29, no. 1 (2020): 1–16. http://dx.doi.org/10.31820/pt.29.1.1.

Abstract:
Emotional intelligence is composed of a set of emotional abilities, including recognition of emotional states in the self and others, the use of emotions to guide thoughts and behaviours, and emotion regulation. Previous studies have demonstrated that emotional intelligence is associated with mental health, social problem solving, interpersonal relationship quality, and academic and job performance. Although emotional intelligence has received much interest both in basic research fields and in applied and clinical fields, the mechanisms underlying its functions remain unclear. The aim of the present article was to consider the mechanisms of emotional intelligence using a computational approach. Recent theories of emotion in psychology and neuroscience have emphasized the importance of predictive processing. It has been proposed that the brain creates internal models that provide predictions for sensation and motor movement, and that perception and behaviors emerge from Bayesian computations rooted in these predictions. This theoretical framework has been expanded to include interoceptive perception of the internal body, explaining affect and decision-making as phenomena based on interoception. This perspective has implications for understanding issues of emotional intelligence.
34

Martin, Jean-Claude, Radoslaw Niewiadomski, Laurence Devillers, Stephanie Buisine, and Catherine Pelachaud. "Multimodal Complex Emotions: Gesture Expressivity and Blended Facial Expressions." International Journal of Humanoid Robotics 03, no. 03 (September 2006): 269–91. http://dx.doi.org/10.1142/s0219843606000825.

Abstract:
One of the challenges of designing virtual humans is the definition of appropriate models of the relation between realistic emotions and the coordination of behaviors in several modalities. In this paper, we present the annotation, representation and modeling of multimodal visual behaviors occurring during complex emotions. We illustrate our work using a corpus of TV interviews. This corpus has been annotated at several levels of information: communicative acts, emotion labels, and multimodal signs. We have defined a copy-synthesis approach to drive an Embodied Conversational Agent from these different levels of information. The second part of our paper focuses on a model of complex (superposition and masking of) emotions in facial expressions of the agent. We explain how the complementary aspects of our work on corpus and computational model is used to specify complex emotional behaviors.
35

Li, Bo, Duoyong Sun, Shuquan Guo, and Zihan Lin. "Agent Based Simulation of Group Emotions Evolution and Strategy Intervention in Extreme Events." Discrete Dynamics in Nature and Society 2014 (2014): 1–17. http://dx.doi.org/10.1155/2014/464190.

Abstract:
Agent-based simulation has become a prominent approach in computational modeling and analysis of public emergency management in social science research. Group emotion evolution, information diffusion, and collective behavior selection make the study of extreme incidents a complex-system problem, which requires new methods for incident management and strategy evaluation. This paper studies group emotion evolution and the effectiveness of intervention strategies using agent-based simulation. Employing a computational experimentation methodology, we construct group emotion evolution as a complex system and test the effects of three strategies. In addition, an events-chain model is proposed to capture the accumulated influence of temporally successive events. Each strategy is examined through three simulation experiments, including two constructed scenarios and a real case study. We show how various strategies impact group emotion evolution in terms of complex emergence and emotion accumulation in extreme events. The paper also provides an effective method for using agent-based simulation to study complex collective behavior evolution in extreme incidents, emergency, and security domains.
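A toy sketch of group emotion evolution with a damping intervention, under strong simplifying assumptions (a fully connected population whose members drift toward the group mean); the paper's events-chain model and its three concrete strategies are not reproduced here.

```python
import random

class Agent:
    def __init__(self, emotion: float, susceptibility: float):
        self.emotion = emotion                # e.g. anger level in [0, 1]
        self.susceptibility = susceptibility  # openness to contagion

def step(agents, calming=0.0):
    """One contagion round; `calming` models an intervention that
    damps every agent's emotion by a fixed fraction."""
    mean = sum(a.emotion for a in agents) / len(agents)
    for a in agents:
        a.emotion += a.susceptibility * (mean - a.emotion)  # drift to mean
        a.emotion = max(0.0, a.emotion * (1.0 - calming))

random.seed(1)
crowd = [Agent(random.random(), random.uniform(0.1, 0.5)) for _ in range(100)]
for _ in range(20):
    step(crowd, calming=0.02)
print("mean anger after intervention:",
      sum(a.emotion for a in crowd) / len(crowd))
```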
36

Pan, Bo, and Wei Zheng. "Emotion Recognition Based on EEG Using Generative Adversarial Nets and Convolutional Neural Network." Computational and Mathematical Methods in Medicine 2021 (October 11, 2021): 1–11. http://dx.doi.org/10.1155/2021/2520394.

Abstract:
Emotion recognition plays an important role in the field of human-computer interaction (HCI). Automatic emotion recognition based on EEG is an important topic in brain-computer interface (BCI) applications. Currently, deep learning has been widely used in the field of EEG emotion recognition and has achieved remarkable results. However, due to the cost of data collection, most EEG datasets have only a small amount of EEG data, and the sample categories are unbalanced in these datasets. These problems will make it difficult for the deep learning model to predict the emotional state. In this paper, we propose a new sample generation method using generative adversarial networks to solve the problem of EEG sample shortage and sample category imbalance. In experiments, we explore the performance of emotion recognition with the frequency band correlation and frequency band separation computational models before and after data augmentation on standard EEG-based emotion datasets. Our experimental results show that the method of generative adversarial networks for data augmentation can effectively improve the performance of emotion recognition based on the deep learning model. And we find that the frequency band correlation deep learning model is more conducive to emotion recognition.
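The augmentation idea, training a GAN on scarce minority-class EEG samples and then sampling synthetic ones, can be sketched in a few lines of PyTorch. The architecture, the feature dimensionality, and the random stand-in "EEG" data below are placeholders, not the paper's setup.

```python
import torch
import torch.nn as nn

FEATURES, LATENT = 160, 32  # e.g. flattened band-power features per sample

G = nn.Sequential(nn.Linear(LATENT, 128), nn.ReLU(),
                  nn.Linear(128, FEATURES))
D = nn.Sequential(nn.Linear(FEATURES, 128), nn.LeakyReLU(0.2),
                  nn.Linear(128, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

real_data = torch.randn(512, FEATURES)  # stand-in for minority-class EEG

for epoch in range(200):
    real = real_data[torch.randint(0, 512, (64,))]
    fake = G(torch.randn(64, LATENT))

    # Discriminator: push real samples toward 1, generated toward 0.
    loss_d = (bce(D(real), torch.ones(64, 1)) +
              bce(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool the discriminator (generated samples toward 1).
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# Synthetic minority-class samples to rebalance the training set:
augmented = G(torch.randn(256, LATENT)).detach()
```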
37

Karyotis, Charalampos, Faiyaz Doctor, Rahat Iqbal, Anne James, and Victor Chang. "A fuzzy computational model of emotion for cloud based sentiment analysis." Information Sciences 433-434 (April 2018): 448–63. http://dx.doi.org/10.1016/j.ins.2017.02.004.

38

Huang, Yucheng, Rui Song, Fausto Giunchiglia, and Hao Xu. "A Multitask Learning Framework for Abuse Detection and Emotion Classification." Algorithms 15, no. 4 (March 28, 2022): 116. http://dx.doi.org/10.3390/a15040116.

Abstract:
The rapid development of online social media makes abuse detection a hot topic in the field of emotional computing. However, most natural language processing (NLP) methods only focus on linguistic features of posts and ignore the influence of users’ emotions. To tackle the problem, we propose a multitask framework combining abuse detection and emotion classification (MFAE) to expand the representation capability of the algorithm on the basis of the existing pretrained language model. Specifically, we use bidirectional encoder representation from transformers (BERT) as the encoder to generate sentence representation. Then, we used two different decoders for emotion classification and abuse detection, respectively. To further strengthen the influence of the emotion classification task on abuse detection, we propose a cross-attention (CA) component in the decoder, which further improves the learning effect of our multitask learning framework. Experimental results on five public datasets show that our method is superior to other state-of-the-art methods.
39

Smith, Geneva, and Jacques Carette. "Design Foundations for Emotional Game Characters." Eludamos: Journal for Computer Game Culture 10, no. 1 (April 21, 2020): 109–40. http://dx.doi.org/10.7557/23.6175.

Abstract:
Recent Computer Role Playing Games such as Bethesda's The Elder Scrolls V: Skyrim and Nintendo's The Legend of Zelda: Breath of the Wild have entranced us with their expansive, complex worlds. However, the Non-Player Characters (NPCs) in these games remain stale and lackluster outside of scripted events. This is, in part, because game engines generally do not simulate emotions in their NPCs while they wander in the world. Wouldn't these games be much more interesting, potentially even more re-playable, if NPCs reacted more appropriately to the situations they find themselves in?

To be able to do this, designers need an engine that models emotion, based on inputs available in the game world and on other designer-defined character elements such as personality, goals, and mood. A full-fledged cognitive architecture could fulfill this task, but it would likely be much too inefficient for use in a real-time environment like a game.

There are many psychological models of emotion, but only a few have been explored for video game applications. A game requires an emotion engine which generates believable results to enhance NPC agency and player engagement. Unlike AI agents and simulations of cognitive psychology theories, an emotion engine for games does not need to be correct or even justifiable. This enables the exploration of a variety of emotion theories that have not been actively considered for games. One such theory is Plutchik's psychoevolutionary synthesis. He proposes a method of organizing emotions into a cone, where the intensity of an emotion increases as one moves up the sides. It also postulates that primary emotions in the model can be arranged in opposing pairs and that other emotions can be composed from the primary emotions and their intensities. This allows for greater flexibility in the number and type of emotions to include, whereas most models that have been used before define a closed set of emotion types, a serious constraint on designers' freedom. A second theory, Lazarus's cognitive appraisal, better describes emotion elicitation and behaviour selection, and appears to integrate well with Plutchik's work.

An emotion engine based on simplified versions of psychoevolutionary synthesis and cognitive appraisal is an understudied approach towards emotional NPCs. Together with readily identifiable elements of emotion processing, such as attention and action selection, an engine can be designed and customized to meet the needs of game designers with minimal impact on computational resources.

We will present an overview of some existing cognitive architectures and emotion engines, followed by a description of key elements in psychoevolutionary synthesis and cognitive appraisal. Next we list some requirements for an emotion engine for NPCs and how our selected emotion theories meet them. Finally, we propose a design and a collection of game-oriented test scenarios to illustrate how our design handles various facets of NPC emotional responses.
40

Osuna, Enrique, Sergio Castellanos, Jonathan Hernando Rosales, and Luis-Felipe Rodríguez. "An Interoperable Framework for Computational Models of Emotion." International Journal of Cognitive Informatics and Natural Intelligence 16, no. 1 (January 2022): 1–15. http://dx.doi.org/10.4018/ijcini.296257.

Abstract:
Computational models of emotion (CMEs) are software systems designed to emulate specific aspects of the human emotion process. The underlying components of CMEs interact with cognitive components of cognitive agent architectures to produce realistic behaviors in intelligent agents. However, in contemporary CMEs, the interaction between affective and cognitive components occurs in an ad hoc manner, which leads to difficulties when new affective or cognitive components are added to a CME. This paper presents a framework that facilitates taking into account, within CMEs, the cognitive information generated by cognitive components implemented in cognitive agent architectures. The framework is designed to allow researchers to define how cognitive information biases the internal workings of affective components. It is inspired by software interoperability practices to enable the communication and interpretation of cognitive information and to standardize the cognitive-affective communication process by ensuring that semantic communication channels are used to modulate the affective mechanisms of CMEs.
41

Anvarjon, Tursunov, Mustaqeem, and Soonil Kwon. "Deep-Net: A Lightweight CNN-Based Speech Emotion Recognition System Using Deep Frequency Features." Sensors 20, no. 18 (September 12, 2020): 5212. http://dx.doi.org/10.3390/s20185212.

Abstract:
Artificial intelligence (AI) and machine learning (ML) are employed to make systems smarter. Today, a speech emotion recognition (SER) system evaluates the emotional state of the speaker by investigating his/her speech signal. Emotion recognition is a challenging task for a machine; in addition, making the machine smart enough to recognize emotions efficiently is equally challenging. The speech signal is quite hard to examine using signal processing methods because it consists of different frequencies and features that vary according to emotions such as anger, fear, sadness, happiness, boredom, disgust, and surprise. Even though different algorithms are being developed for SER, the success rates remain very low, depending on the languages, the emotions, and the databases. In this paper, we propose a new lightweight, effective SER model with low computational complexity and high recognition accuracy. The suggested method uses a convolutional neural network (CNN) to learn deep frequency features by using a plain rectangular filter with a modified pooling strategy that has more discriminative power for SER. The proposed CNN model was trained on the frequency features extracted from the speech data and was then tested to predict the emotions. The proposed SER model was evaluated over two benchmarks, the Interactive Emotional Dyadic Motion Capture (IEMOCAP) and the Berlin Emotional Speech Database (EMO-DB) speech datasets, obtaining 77.01% and 92.02% recognition accuracy, respectively. The experimental results demonstrate that the proposed CNN-based SER system can achieve better recognition performance than state-of-the-art SER systems.
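A sketch of a small CNN with rectangular (non-square) convolution kernels over a log-Mel spectrogram, in PyTorch. The kernel shapes, layer sizes, and seven-class output are illustrative guesses at the "plain rectangular filter" idea, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class PlainRectCNN(nn.Module):
    """Rectangular kernels: tall in frequency, narrow in time."""
    def __init__(self, n_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(9, 3), padding=(4, 1)), nn.ReLU(),
            nn.MaxPool2d((2, 2)),
            nn.Conv2d(16, 32, kernel_size=(9, 3), padding=(4, 1)), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x):            # x: (batch, 1, n_mels, frames)
        return self.classifier(self.features(x).flatten(1))

model = PlainRectCNN()
dummy = torch.randn(8, 1, 128, 256)  # 8 utterances, 128 Mel bins, 256 frames
print(model(dummy).shape)            # torch.Size([8, 7])
```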
42

Wang, Jianmin, Yujia Liu, Yuxi Wang, Jinjing Mao, Tianyang Yue, and Fang You. "SAET: The Non-Verbal Measurement Tool in User Emotional Experience." Applied Sciences 11, no. 16 (August 17, 2021): 7532. http://dx.doi.org/10.3390/app11167532.

Abstract:
In this paper, the development process and validation of a self-assessment emotion tool (SAET) are described, which establishes an emotion-assessment method to improve pictorial expression design. The tool is based on an emotion set derived from the rules of emotional cognition in the OCC model proposed by Ortony, Clore, and Collins, and the emotion set and expression design are validated by numerical computation in the pleasure-arousal-dominance (PAD) dimensional space and by cognitive assessment of emotion words. The SAET consists of twenty images that display a cartoon figure expressing ten positive and ten negative emotions. The instrument can be used during interactions with visual interfaces such as websites, posters, cell phones, and vehicles, and allows participants to select interface elements that elicit specific emotions. Experimental results show the validity of this tool in terms of both semantic discrimination of emotions and quantitative numerical validation.
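The PAD-space computation can be illustrated as a nearest-neighbor lookup of emotion words in pleasure-arousal-dominance coordinates; the coordinates below are rough illustrative values, not the paper's measured data.

```python
import numpy as np

# Illustrative PAD coordinates (pleasure, arousal, dominance) in [-1, 1].
PAD = {
    "joy":     np.array([ 0.8,  0.5,  0.4]),
    "anger":   np.array([-0.5,  0.6,  0.3]),
    "fear":    np.array([-0.6,  0.6, -0.4]),
    "sadness": np.array([-0.6, -0.3, -0.3]),
    "calm":    np.array([ 0.4, -0.5,  0.2]),
}

def closest_emotion(p: float, a: float, d: float) -> str:
    """Nearest emotion word in PAD space by Euclidean distance."""
    point = np.array([p, a, d])
    return min(PAD, key=lambda w: np.linalg.norm(PAD[w] - point))

# A pictorial expression rated unpleasant, aroused, and submissive:
print(closest_emotion(-0.55, 0.5, -0.35))  # -> "fear"
```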
43

Liu, Xuan, Kokil Jaidka, and Niyati Chayya. "The Psychology of Semantic Spaces: Experiments with Positive Emotion (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 13007–8. http://dx.doi.org/10.1609/aaai.v36i11.21640.

Abstract:
Psychological concepts can help computational linguists to better model the latent semantic spaces of emotions, and understand the underlying states motivating the sharing or suppressing of emotions. This abstract applies the understanding of agency and social interaction in the happiness semantic space to its role in positive emotion. First, BERT-based fine-tuning yields an expanded seed set to understand the vocabulary of the latent space. Next, results benchmarked against many emotion datasets suggest that the approach is valid, robust, offers an improvement over direct prediction, and is useful for downstream predictive tasks related to psychological states.
44

Hassan, Awais, Maida Shahid, Faisal Hayat, Jehangir Arshad, Mujtaba Hussain Jaffery, Ateeq Ur Rehman, Kalim Ullah, Seada Hussen, and Habib Hamam. "Improving the Survival Time of Multiagents in Social Dilemmas through Neurotransmitter-Based Deep Q-Learning Model of Emotions." Journal of Healthcare Engineering 2022 (January 25, 2022): 1–15. http://dx.doi.org/10.1155/2022/3449433.

Abstract:
In multiagent systems, social dilemmas often arise whenever there is competition over limited resources. The major challenge is to establish cooperation among intelligent virtual agents for resolving social dilemmas. In humans, personality and emotions are the primary factors that lead them toward a cooperative environment. To make agents cooperate, they have to become more like humans, that is, believable. We therefore hypothesize that emotions expressed according to personality give rise to believability, and that introducing believability into agents through emotions improves their survival rate in social dilemma situations. Existing research has introduced various computational models of emotion in virtual agents, but none model emotions through neurotransmitters. We propose a neurotransmitter-based deep Q-learning computational model for multiagent systems that is a suitable choice for emotion modeling and, hence, believability. The proposed model regulates agents' emotions by controlling virtual neurotransmitters (dopamine and oxytocin) according to each agent's personality, which is introduced using the OCEAN model. To evaluate the proposed system, we simulated a survival scenario with limited food resources in experiments that vary the numbers of selfish agents (higher neuroticism trait) and selfless agents (higher agreeableness trait). Experimental results show that adding selfless agents to the scenario leads the agents to develop cooperation, and their collective survival time increases. Thus, to resolve social dilemma problems in virtual agents, we can make agents believable through the proposed neurotransmitter-based emotional model. This work may help in developing nonplayer characters (NPCs) in games.
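A tabular sketch of the modulation idea follows (the paper itself uses a deep Q-network): a "dopamine" level derived from neuroticism scales sensitivity to reward, and an "oxytocin" level derived from agreeableness adds an intrinsic bonus for cooperative actions. The trait-to-neurotransmitter mappings here are assumptions for illustration, not the paper's equations.

```python
# A minimal sketch, not the paper's implementation: Q-learning whose
# update is modulated by virtual neurotransmitter levels set from
# OCEAN-style personality traits.
import random
from collections import defaultdict

class EmotionalQAgent:
    def __init__(self, neuroticism=0.2, agreeableness=0.8,
                 alpha=0.1, gamma=0.95, epsilon=0.1):
        self.dopamine = 1.0 - neuroticism      # assumed mapping
        self.oxytocin = agreeableness          # assumed mapping
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q = defaultdict(float)            # (state, action) -> value

    def act(self, state, actions):
        if random.random() < self.epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: self.q[(state, a)])

    def learn(self, state, action, reward, next_state, actions, cooperative):
        # Dopamine scales reward sensitivity; oxytocin makes
        # cooperative actions intrinsically rewarding.
        shaped = self.dopamine * reward + (self.oxytocin if cooperative else 0.0)
        best_next = max(self.q[(next_state, a)] for a in actions)
        td_error = shaped + self.gamma * best_next - self.q[(state, action)]
        self.q[(state, action)] += self.alpha * td_error
```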
45

Castellanos, Sergio, Luis-Felipe Rodríguez, Luis A. Castro, and J. Octavio Gutierrez-Garcia. "A computational model of emotion assessment influenced by cognition in autonomous agents." Biologically Inspired Cognitive Architectures 25 (August 2018): 26–36. http://dx.doi.org/10.1016/j.bica.2018.07.007.

46

De SILVA, P. R. "A Computational Model for Recognizing Emotion with Intensity for Machine Vision Applications." IEICE Transactions on Information and Systems E89-D, no. 7 (July 1, 2006): 2171–79. http://dx.doi.org/10.1093/ietisy/e89-d.7.2171.

47

Gupta, Shakuntala, Ramakrishna Biswal, and Supratim Gupta. "Toward Computational Model of Emotion from Individual Difference in Perceiving Facial Expressions." Psychological Studies 63, no. 3 (August 4, 2018): 266–75. http://dx.doi.org/10.1007/s12646-018-0459-5.

48

Luo, Xiangfeng, and Yawen Yi. "Topic-Specific Emotion Mining Model for Online Comments." Future Internet 11, no. 3 (March 24, 2019): 79. http://dx.doi.org/10.3390/fi11030079.

Abstract:
Nowadays, massive numbers of texts are generated on the web, containing a variety of viewpoints, attitudes, and emotions toward products and services. Mining subjective information from online comments is vital for enterprises to improve their products or services and for consumers to make purchase decisions. Various effective methods, the mainstream one being the topic model, have been put forward to solve this problem. Although most topic models can mine topic-level emotion from product comments, they consider neither interword relations nor adaptive determination of the number of topics, which leads to poor comprehensibility, long computation times, and low accuracy. To solve these problems, this paper proposes an unsupervised Topic-Specific Emotion Mining Model (TSEM), which adds a correspondence between aspect words and opinion words so that each comment is expressed as a bag of aspect–opinion pairs. On the one hand, the richer semantic information obtained by adding interword relationships enhances the comprehensibility of the results; on the other hand, the reduced text dimensionality cuts computation time. In addition, the number of topics in the model is determined adaptively by calculating perplexity, improving topic-level emotion accuracy. Experiments on Taobao commodity comments achieve better results than baseline models in terms of accuracy, computation time, and comprehensibility. The proposed model can therefore be effectively applied to online comment emotion mining tasks.
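The adaptive choice of topic number by perplexity can be sketched with gensim's LDA as follows; the toy aspect–opinion documents and the scanned range of k are illustrative, and TSEM's own inference differs from plain LDA.

```python
# A minimal sketch, not TSEM itself: pick the number of topics by
# minimizing perplexity over a bag of aspect-opinion pairs.
from gensim import corpora
from gensim.models import LdaModel

docs = [["screen_bright", "battery_poor"],
        ["battery_good", "price_cheap"],
        ["screen_dim", "price_high"]]          # toy aspect-opinion pairs
dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]

best_k, best_perplexity = None, float("inf")
for k in range(2, 6):
    lda = LdaModel(corpus, num_topics=k, id2word=dictionary, random_state=0)
    # gensim's log_perplexity returns a per-word bound; perplexity = 2^(-bound)
    perplexity = 2 ** (-lda.log_perplexity(corpus))   # lower is better
    if perplexity < best_perplexity:
        best_k, best_perplexity = k, perplexity
print("chosen number of topics:", best_k)
```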
49

Rairán-Antolines, José Danilo. "Modelo circumplejo del afecto aplicado al control de sistemas dinámicos." Respuestas 14, no. 1 (May 5, 2016): 5–15. http://dx.doi.org/10.22463/0122820x.521.

Abstract:
This paper proposes the design of a new control strategy for dynamic systems based on human emotions. Since psychologists and neuroscientists have demonstrated that emotions are indispensable in decision-making, and decision-making is the work of every controller, the authors seek to emulate emotions in the controller design. Four areas of application for this approach, in which autonomy and adaptability are essential, are discussed. Eight computational models of emotion are referenced; because these involve cognition, the paper concludes that adjustments are needed before one of them can be included in a control strategy. As its contribution, the article therefore proposes the use of the Circumplex Model of Affect, which labels the combination of two variables as a human emotion. Keywords: human emotion models, decision-making processes, Circumplex Model of Affect, autonomous control.
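The core of the proposal, reading a pair of controller variables as a point in the circumplex, can be sketched as follows. The octant labels follow Russell's circumplex; which controller signals map to valence and arousal is an assumption of this illustration.

```python
# A minimal sketch: label a (valence, arousal) pair with the nearest
# octant of Russell's circumplex model of affect.
import math

# Octant labels at 0°, 45°, ..., 315° around the circumplex.
OCTANTS = ["pleased", "excited", "aroused", "distressed",
           "sad", "depressed", "sleepy", "relaxed"]

def circumplex_label(valence: float, arousal: float) -> str:
    # Shift by half a bin so each label sits at the centre of its octant.
    angle = (math.degrees(math.atan2(arousal, valence)) + 22.5) % 360
    return OCTANTS[int(angle // 45)]

print(circumplex_label(0.8, 0.2))   # -> pleased
print(circumplex_label(-0.5, 0.7))  # -> distressed
```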
50

Chen, Lijiang, Jie Ren, Xia Mao, and Qi Zhao. "Electroglottograph-Based Speech Emotion Recognition via Cross-Modal Distillation." Applied Sciences 12, no. 9 (April 25, 2022): 4338. http://dx.doi.org/10.3390/app12094338.

Abstract:
Speech emotion recognition (SER) is an important component of affective computing and signal processing. Recently, many works have applied abundant acoustic features and complex model architectures to enhance performance, but at the cost of model portability. To address this problem, we propose a model that utilizes only the fundamental frequency from electroglottograph (EGG) signals, a type of physiological signal that directly reflects the movement of the vocal folds. Under the assumption that different acoustic features share similar representations of the internal emotional state, we propose cross-modal emotion distillation (CMED) to train the EGG-based SER model by transferring robust speech emotion representations from a log-Mel-spectrogram-based model. Using cross-modal emotion distillation, we increase recognition accuracy from 58.98% to 66.80% on the S70 subset of the Chinese Dual-mode Emotional Speech Database (CDESD, 7 classes) and from 32.29% to 42.71% on the EMO-DB (7 classes) dataset, which shows that the proposed method achieves results comparable to human subjective evaluation and realizes a trade-off between model complexity and performance.
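The distillation objective described here can be sketched with the standard softened-KL formulation, in which a frozen teacher trained on log-Mel spectrograms guides the EGG-based student. The temperature, loss weighting, and 7-class logits below are assumptions; the paper's exact CMED loss may differ.

```python
# A minimal sketch of a cross-modal knowledge-distillation loss.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, lam=0.5):
    # Soft targets: student matches the teacher's softened distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary cross-entropy on the emotion labels.
    hard = F.cross_entropy(student_logits, labels)
    return lam * soft + (1.0 - lam) * hard

s = torch.randn(8, 7, requires_grad=True)   # student (EGG) logits
t = torch.randn(8, 7)                       # teacher (log-Mel) logits, frozen
y = torch.randint(0, 7, (8,))               # emotion labels
loss = distillation_loss(s, t, y)
loss.backward()
```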
