
Journal articles on the topic 'Emotion-aware Systems'


Consult the top 50 journal articles for your research on the topic 'Emotion-aware Systems.'


1

Zhang, Yin, Yongfeng Qian, Di Wu, M. Shamim Hossain, Ahmed Ghoneim, and Min Chen. "Emotion-Aware Multimedia Systems Security." IEEE Transactions on Multimedia 21, no. 3 (2019): 617–24. http://dx.doi.org/10.1109/tmm.2018.2882744.

2

Jotsov, Vladimir S. "Emotion-Aware Education and Research Systems." Issues in Informing Science and Information Technology 6 (2009): 779–94. http://dx.doi.org/10.28945/1097.

3

Ayata, Değer, Yusuf Yaslan, and Mustafa E. Kamasak. "Emotion Recognition from Multimodal Physiological Signals for Emotion Aware Healthcare Systems." Journal of Medical and Biological Engineering 40, no. 2 (2020): 149–57. http://dx.doi.org/10.1007/s40846-019-00505-7.

Abstract:
Purpose: The purpose of this paper is to propose a novel emotion recognition algorithm from multimodal physiological signals for emotion-aware healthcare systems. In this work, physiological signals are collected from respiratory belt (RB), photoplethysmography (PPG), and fingertip temperature (FTT) sensors. These signals are used because their collection has become easy with advances in ergonomic wearable technologies. Methods: Arousal and valence levels are recognized from the fused physiological signals using the relationship between physiological signals and emotions. This recognition is performed using various machine learning methods such as random forest, support vector machine, and logistic regression, and the performance of these methods is studied. Results: Using decision-level fusion, the accuracy improved from 69.86 to 73.08% for arousal, and from 69.53 to 72.18% for valence. The results indicate that using multiple sources of physiological signals and their fusion increases the accuracy of emotion recognition. Conclusion: This study demonstrated a framework for emotion recognition using multimodal physiological signals from a respiratory belt, photoplethysmography, and fingertip temperature. It is shown that decision-level fusion from multiple classifiers (one per signal source) improved the accuracy of emotion recognition for both the arousal and valence dimensions.
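
As a rough illustration of the decision-level fusion described in this abstract, the sketch below trains one classifier per physiological signal and averages their predicted class probabilities before the final arousal/valence decision. The feature arrays, labels, and classifier choices are placeholders, not the authors' pipeline.

```python
# Sketch of decision-level fusion for emotion recognition from multiple
# physiological signals (one classifier per signal source), assuming
# pre-extracted feature matrices. Not the paper's implementation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
# Placeholder features per modality: respiratory belt, PPG, fingertip temperature.
X_rb = rng.normal(size=(n, 8))
X_ppg = rng.normal(size=(n, 12))
X_ftt = rng.normal(size=(n, 4))
y = rng.integers(0, 2, size=n)   # e.g. low/high arousal labels

models = {
    "rb":  RandomForestClassifier(n_estimators=100, random_state=0).fit(X_rb, y),
    "ppg": SVC(probability=True, random_state=0).fit(X_ppg, y),
    "ftt": LogisticRegression(max_iter=1000).fit(X_ftt, y),
}

def fuse_predict(x_rb, x_ppg, x_ftt):
    # Decision-level fusion: average the per-signal class probabilities,
    # then take the most likely class.
    probs = (models["rb"].predict_proba(x_rb)
             + models["ppg"].predict_proba(x_ppg)
             + models["ftt"].predict_proba(x_ftt)) / 3.0
    return probs.argmax(axis=1)

print(fuse_predict(X_rb[:5], X_ppg[:5], X_ftt[:5]))
```
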
4

Karamimehr, Zahra, Mohammad Mehdi Sepehri, Soheil Sibdari, Toktam Khatibi, and Hassan Aghajani. "Personalised emotion-aware e-learning systems with interventions." International Journal of Smart Technology and Learning 3, no. 3/4 (2023): 187–211. http://dx.doi.org/10.1504/ijsmarttl.2023.136909.

5

Aghajani, Hassan, Toktam Khatibi, Zahra Karamimehr, Soheil Sibdari, and Mohammad Mehdi Sepehri. "Personalised emotion-aware e-learning systems with interventions." International Journal of Smart Technology and Learning 3, no. 3/4 (2023): 187–211. http://dx.doi.org/10.1504/ijsmarttl.2023.10062511.

6

Zainab, Hira. "Emotion-Aware AI: Harnessing VPSYC, EEG, and Visual Recognition for Mental Health Insights." AlgoVista: Journal of AI & Computer Science 2, no. 1 (2025): 9–17. https://doi.org/10.70445/avjcs.2.1.2025.9-17.

Abstract:
Artificial intelligence that senses human emotions is changing mental healthcare through advanced technology systems. Our team combines Virtual Psychological Assessments with EEG and visual recognition technologies to improve how mental health information is acquired. When emotion-aware AI systems use multiple approaches, they produce better emotional state insights to help patients get more suitable therapy. These systems connect psychological assessments to physiological signals in new ways that help us better understand our emotions. Emotion-aware AI technology helps us solve mental health problems worldwide by making care more accessible and early detection easier. Through detailed research this paper explains how emotion-aware AI transforms mental healthcare operations and identifies the technology's benefits and potential issues.
7

Mwaita, Kevin Fred, Rahul Bhaumik, Aftab Ahmed, Adwait Sharma, Antonella De Angeli, and Michael Haller. "Emotion-Aware In-Car Feedback: A Comparative Study." Multimodal Technologies and Interaction 8, no. 7 (2024): 54. http://dx.doi.org/10.3390/mti8070054.

Abstract:
We investigate personalised feedback mechanisms to help drivers regulate their emotions, aiming to improve road safety. We systematically evaluate driver-preferred feedback modalities and their impact on emotional states. Using unobtrusive vision-based emotion detection and self-labeling, we captured the emotional states and feedback preferences of 21 participants in a simulated driving environment. Results show that in-car feedback systems effectively influence drivers’ emotional states, with participants reporting positive experiences and varying preferences based on their emotions. We also developed a machine learning classification system using facial marker data to demonstrate the feasibility of our approach for classifying emotional states. Our contributions include design guidelines for tailored feedback systems, a systematic analysis of user reactions across three feedback channels with variations, an emotion classification system, and a dataset with labeled face landmark annotations for future research.
8

Wei, Wei, Jiayi Liu, Xianling Mao, et al. "Target-guided Emotion-aware Chat Machine." ACM Transactions on Information Systems 39, no. 4 (2021): 1–24. http://dx.doi.org/10.1145/3456414.

Abstract:
The consistency of a response to a given post at the semantic level and emotional level is essential for a dialogue system to deliver humanlike interactions. However, this challenge is not well addressed in the literature, since most of the approaches neglect the emotional information conveyed by a post while generating responses. This article addresses this problem and proposes a unified end-to-end neural architecture, which is capable of simultaneously encoding the semantics and the emotions in a post and leveraging target information to generate more intelligent responses with appropriately expressed emotions. Extensive experiments on real-world data demonstrate that the proposed method outperforms the state-of-the-art methods in terms of both content coherence and emotion appropriateness.
9

Vijayakumar, T. "Enhancing User Experience through Emotion-Aware Interfaces: A Multimodal Approach." Journal of Innovative Image Processing 6, no. 1 (2024): 27–39. http://dx.doi.org/10.36548/jiip.2024.1.003.

Abstract:
The ability of a system or entity—such as an artificial intelligence system, computer program, or interface—to identify, comprehend, and react to human emotions is known as emotion awareness. This idea is especially pertinent in human-computer interaction, where the aim is to develop more intuitive and sympathetic systems that can understand and adjust to users' emotional states. Improving user experience with emotion-aware interfaces is a multifaceted problem that calls for a multimodal strategy. Through the integration of several modalities, such as auditory, haptic, and visual feedback, interface designers can develop systems that not only react to user inputs but also identify and adjust to the emotional states of users. This research explains how users interact in the multimodal domain of emotion awareness and then explores the user's experience with emotion-aware interfaces across these modalities.
10

Fu, Yujun, Hong Va Leong, Grace Ngai, Michael Xuelin Huang, and Stephen C. F. Chan. "Physiological mouse: toward an emotion-aware mouse." Universal Access in the Information Society 16, no. 2 (2016): 365–79. http://dx.doi.org/10.1007/s10209-016-0469-9.

11

Kadiri, Sudarsana Reddy, and Paavo Alku. "Subjective Evaluation of Basic Emotions from Audio–Visual Data." Sensors 22, no. 13 (2022): 4931. http://dx.doi.org/10.3390/s22134931.

Abstract:
Understanding of the perception of emotions or affective states in humans is important to develop emotion-aware systems that work in realistic scenarios. In this paper, the perception of emotions in naturalistic human interaction (audio–visual data) is studied using perceptual evaluation. For this purpose, a naturalistic audio–visual emotion database collected from TV broadcasts such as soap-operas and movies, called the IIIT-H Audio–Visual Emotion (IIIT-H AVE) database, is used. The database consists of audio-alone, video-alone, and audio–visual data in English. Using data of all three modes, perceptual tests are conducted for four basic emotions (angry, happy, neutral, and sad) based on category labeling and for two dimensions, namely arousal (active or passive) and valence (positive or negative), based on dimensional labeling. The results indicated that the participants’ perception of emotions was remarkably different between the audio-alone, video-alone, and audio–video data. This finding emphasizes the importance of emotion-specific features compared to commonly used features in the development of emotion-aware systems.
12

Chen, Xinlei, Yulei Zhao, and Yong Li. "QoE-Aware wireless video communications for emotion-aware intelligent systems: A multi-layered collaboration approach." Information Fusion 47 (May 2019): 1–9. http://dx.doi.org/10.1016/j.inffus.2018.06.007.

13

Li, Chenghao, Kah Phooi Seng, and Li-Minn Ang. "Gait-To-Gait Emotional Human–Robot Interaction Utilizing Trajectories-Aware and Skeleton-Graph-Aware Spatial–Temporal Transformer." Sensors 25, no. 3 (2025): 734. https://doi.org/10.3390/s25030734.

Abstract:
The emotional response of robotics is crucial for promoting the socially intelligent level of human–robot interaction (HRI). The development of machine learning has extensively stimulated research on emotional recognition for robots. Our research focuses on emotional gaits, a type of simple modality that stores a series of joint coordinates and is easy for humanoid robots to execute. However, a limited amount of research investigates emotional HRI systems based on gaits, indicating an existing gap in human emotion gait recognition and robotic emotional gait response. To address this challenge, we propose a Gait-to-Gait Emotional HRI system, emphasizing the development of an innovative emotion classification model. In our system, the humanoid robot NAO can recognize emotions from human gaits through our Trajectories-Aware and Skeleton-Graph-Aware Spatial–Temporal Transformer (TS-ST) and respond with pre-set emotional gaits that reflect the same emotion as the human presented. Our TS-ST outperforms the current state-of-the-art human-gait emotion recognition model applied to robots on the Emotion-Gait dataset.
14

Sarah, Zaheer. "Designing Emotion-Aware UX: Leveraging Sentiment Analysis to Adapt Digital Experiences." International Journal of Leading Research Publication 4, no. 8 (2023): 1–12. https://doi.org/10.5281/zenodo.15259144.

Abstract:
This work integrates sentiment analysis and affective computing into UX design, highlighting how intelligent systems can adjust in real time to users' emotional states. Using multimodal data such as facial expressions, tone of voice, physiological signals, and interaction behavior, these systems dynamically adjust content, interface feedback, and engagement mechanisms to enhance personalization. The intersection of artificial intelligence, affective computing, and human-computer interaction holds great promise for the development of emotionally intelligent applications across different industries. Real-time emotion detection has the potential to personalize experiences in customer service, healthcare, education platforms, and entertainment. This paper also explores the ethical issues, including privacy, data consent, and algorithmic bias, that accompany emotionally adaptive technologies. We introduce an emotion-aware design taxonomy, assess current AI models for affect recognition, and suggest design principles for emotion-sensitive UX interfaces. We explain how sentiment-based UI adjustments enhance satisfaction, minimize user frustration, and create emotional bonding. We also discuss how emotion-aware interfaces can assist vulnerable groups by detecting distress and providing the right support. While the advantages are numerous, we highlight the importance of transparency and ethical AI practices. The future of UX lies in emotionally adaptive systems that acknowledge and respect user emotions. By integrating emotion awareness into fundamental UX frameworks, developers can attain greater engagement and more human technology experiences. This work aims to bridge cognitive science and design thinking for the next generation of responsive, ethical, and empathetic interfaces.
15

Dendy K Pramudito. "Utilization of Computer Vision Technology for Human Emotion Detection and Recognition in the Development of a More Responsive Human-Computer Interaction System." Journal of Information Systems Engineering and Management 10, no. 51s (2025): 446–53. https://doi.org/10.52783/jisem.v10i51s.10411.

Abstract:
In recent years, the integration of emotion recognition technologies into Human-Computer Interaction (HCI) systems has emerged as a critical advancement in the pursuit of more responsive and human-centered digital experiences. This study investigates the utilization of computer vision technology for detecting and recognizing human emotions to enhance the adaptability and empathy of HCI systems. Employing a qualitative research approach through a structured literature review and library research method, this paper synthesizes findings from selected peer-reviewed studies published over the past five years. The review highlights key developments in deep learning-based emotion recognition, with particular emphasis on facial expression analysis, body gesture interpretation, and multimodal data integration. Advanced computer vision techniques, such as convolutional neural networks (CNNs) and transformer-based models, are shown to significantly improve accuracy in identifying emotional states. Additionally, this study discusses current challenges, including cultural biases, data privacy concerns, and real-world implementation limitations. It also explores the ethical implications of emotion-aware systems and underscores the necessity for inclusive, transparent, and context-aware AI design. By analyzing and interpreting these trends and challenges, this research offers valuable insights for future innovations in emotion-sensitive HCI. The study concludes that emotionally intelligent interfaces, when ethically developed and inclusively trained, can redefine digital interactions across various domains such as education, healthcare, and customer service. Recommendations are proposed for future research to address existing gaps and enhance the practical applicability of emotion recognition systems.
16

Ishanka, U. A. Piumi, and Takashi Yukawa. "The Prefiltering Techniques in Emotion Based Place Recommendation Derived by User Reviews." Applied Computational Intelligence and Soft Computing 2017 (2017): 1–10. http://dx.doi.org/10.1155/2017/5680398.

Abstract:
Context-aware recommendation systems attempt to address the challenge of identifying products or items that have the greatest chance of meeting user requirements by adapting to current contextual information. Many such systems have been developed in domains such as movies, books, and music, and emotion is a contextual parameter that has already been used in those fields. This paper focuses on the use of emotion as a contextual parameter in a tourist destination recommendation system. We developed a new corpus that incorporates the emotion parameter by employing semantic analysis techniques for destination recommendation. We review the effectiveness of incorporating emotion in a recommendation process using prefiltering techniques and show that the use of emotion as a contextual parameter for location recommendation in conjunction with collaborative filtering increases user satisfaction.
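
A minimal sketch of the pre-filtering idea described above: ratings are first restricted to those logged under the target emotion, and a simple user-based collaborative-filtering score is then computed on the filtered subset. The toy data, column names, and similarity choice are illustrative assumptions, not the paper's corpus or algorithm.

```python
# Sketch of contextual pre-filtering with emotion as the context, followed by a
# simple user-based collaborative-filtering score. All data are placeholders.
import pandas as pd
import numpy as np

ratings = pd.DataFrame({
    "user":        ["u1", "u1", "u2", "u2", "u3", "u3", "u3"],
    "destination": ["beach", "museum", "beach", "temple", "museum", "temple", "beach"],
    "emotion":     ["joy", "calm", "joy", "joy", "calm", "joy", "joy"],
    "rating":      [5, 3, 4, 4, 2, 5, 5],
})

def recommend(target_user, target_emotion, top_n=2):
    # Pre-filtering step: keep only ratings made under the target emotion.
    ctx = ratings[ratings["emotion"] == target_emotion]
    matrix = ctx.pivot_table(index="user", columns="destination", values="rating")

    # Cosine similarity between the target user and every other user (NaN -> 0).
    filled = matrix.fillna(0.0)
    target = filled.loc[target_user].to_numpy()
    sims = filled.drop(index=target_user).apply(
        lambda row: np.dot(row, target) / (np.linalg.norm(row) * np.linalg.norm(target) + 1e-9),
        axis=1,
    )

    # Score unseen destinations by similarity-weighted neighbour ratings.
    seen = matrix.loc[target_user].dropna().index
    scores = {}
    for dest in matrix.columns.difference(seen):
        neigh = matrix[dest].dropna().index.intersection(sims.index)
        if len(neigh):
            scores[dest] = float(np.average(matrix.loc[neigh, dest], weights=sims[neigh] + 1e-9))
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

print(recommend("u1", "joy"))
```
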
17

Arya, Kumar Goypal, G. Astagi Harshavardhan, V. Bharath, Bhardwaj M. S. Krishna, Rajesh Kumar Saha, and N. R. Deepak. "Implementation of Emotions in Artificial Intelligence." Journal of Research in Artificial Neural Network Systems 1, no. 1 (2025): 15–20. https://doi.org/10.5281/zenodo.14864276.

Abstract:
The integration of emotions into Artificial Intelligence (AI) systems has gained increasing attention in recent years, with applications ranging from human-computer interaction to healthcare. The concept of Emotion AI, or Affective Computing, allows AI systems to recognize, simulate, and respond to human emotions, making interactions more natural and empathetic. This paper explores the state-of-the-art methods used in emotion recognition, the diverse applications of emotion-aware AI, and the challenges and ethical concerns associated with its development. The research focuses on techniques such as facial expression recognition, speech emotion detection, sentiment analysis, and deep learning models, all emerging after 2022. Moreover, this paper discusses the transformative impact of emotion AI in sectors like healthcare, robotics, education, and customer service. Finally, it addresses the ethical issues of privacy, manipulation, and the humanization of AI.
18

Moreira, Mário W. L., Joel J. P. C. Rodrigues, Neeraj Kumar, Kashif Saleem, and Igor V. Illin. "Postpartum depression prediction through pregnancy data analysis for emotion-aware smart systems." Information Fusion 47 (May 2019): 23–31. http://dx.doi.org/10.1016/j.inffus.2018.07.001.

19

Wang, Shu, Chonghuan Xu, Austin Shijun Ding, and Zhongyun Tang. "A Novel Emotion-Aware Hybrid Music Recommendation Method Using Deep Neural Network." Electronics 10, no. 15 (2021): 1769. http://dx.doi.org/10.3390/electronics10151769.

Abstract:
Emotion-aware music recommendation has gained increasing attention in recent years, as music has the ability to regulate human emotions. Exploiting emotional information has the potential to improve recommendation performance. However, conventional studies identified emotion as discrete representations and could not predict users' emotional states at time points when no user activity data exist, let alone account for the influence of social events. In this study, we proposed an emotion-aware music recommendation method using deep neural networks (emoMR). We modeled a representation of music emotion using low-level audio features and music metadata, and modeled the users' emotion states using an artificial emotion generation model with endogenous and exogenous factors capable of expressing the influences posed by events on emotions. The two models were trained using a designed deep neural network architecture (emoDNN) to predict the music emotions for the music and the music emotion preferences for the users in a continuous form. Based on the models, we proposed a hybrid approach combining content-based and collaborative filtering for generating emotion-aware music recommendations. Experiment results show that emoMR performs better in the metrics of Precision, Recall, F1, and HitRate than the other baseline algorithms. We also tested the performance of emoMR on two major events (the death of Yuan Longping and the Coronavirus Disease 2019 (COVID-19) cases in Zhejiang). Results show that emoMR takes advantage of event information and outperforms the other baseline algorithms.
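
A toy sketch of the hybrid idea outlined in the abstract: a content-based term derived from the match between a track's predicted emotion and the user's predicted emotional preference (both in continuous form) is blended with a collaborative-filtering score. The emotion predictors (emoDNN in the paper) are replaced here by fixed placeholder vectors, and the weighting is arbitrary.

```python
# Toy sketch of a hybrid emotion-aware recommendation score: blend a
# content-based term (match between a track's predicted valence/arousal and the
# user's predicted emotion preference) with a collaborative-filtering term.
import numpy as np

track_emotion = {             # predicted (valence, arousal) per track -- placeholders
    "track_a": np.array([0.8, 0.6]),
    "track_b": np.array([-0.4, 0.2]),
    "track_c": np.array([0.1, -0.7]),
}
user_pref = np.array([0.7, 0.5])      # user's current predicted emotion preference
collab_score = {"track_a": 0.55, "track_b": 0.80, "track_c": 0.40}   # placeholder CF scores

def hybrid_scores(alpha=0.6):
    """alpha weights the emotion (content-based) term against the CF term."""
    out = {}
    for track, emo in track_emotion.items():
        # Content term: closeness between track emotion and user preference,
        # normalised by the maximum possible distance in [-1, 1]^2.
        content = 1.0 - np.linalg.norm(emo - user_pref) / np.linalg.norm([2.0, 2.0])
        out[track] = alpha * content + (1 - alpha) * collab_score[track]
    return sorted(out.items(), key=lambda kv: kv[1], reverse=True)

print(hybrid_scores())
```
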
20

Fathalla, Rana. "Emotional Models." International Journal of Synthetic Emotions 11, no. 2 (2020): 1–18. http://dx.doi.org/10.4018/ijse.2020070101.

Abstract:
Emotion modeling has gained attention for almost two decades now due to the rapid growth of affective computing (AC). AC aims to detect and respond to the end-user's emotions by devices and computers. Despite the considerable effort directed at emotion modeling, with numerous attempts to build different models of emotions, emotion modeling remains an art with a lack of consistency and clarity regarding the exact meaning of the term. This review deconstructs the vagueness of the term 'emotion modeling' by discussing the various types and categories of emotion modeling, including computational models and their categories (emotion generation and emotion effects) and emotion representation models and their categories (categorical, dimensional, and componential models). The review also covers the applications associated with each type of emotion model: artificial intelligence, robotics architecture, and computer-human interaction applications for the computational models, and emotion classification and affect-aware applications such as video games and tutoring systems for the emotion representation models.
21

Li, Xiangju, Shi Feng, Daling Wang, and Yifei Zhang. "Context-aware emotion cause analysis with multi-attention-based neural network." Knowledge-Based Systems 174 (June 2019): 205–18. http://dx.doi.org/10.1016/j.knosys.2019.03.008.

22

Jeevarathinam, A. "Audio Based Speech Emotion Prediction Using CNN Algorithm." International Journal of Scientific Research in Engineering and Management 09, no. 03 (2025): 1–9. https://doi.org/10.55041/ijsrem42722.

Abstract:
Emotion recognition plays a pivotal role in advancing Human-Computer Interaction (HCI) by enabling systems to understand and respond to human emotional states. Among various modalities, Speech Emotion Recognition (SER) stands out as a non-invasive, cost-effective, and temporally efficient approach for detecting emotions. This study leverages the RAVDESS dataset, which provides high-quality audio recordings for emotion classification. The proposed methodology involves preprocessing audio signals to remove noise, extracting temporal and spectral features in both time and frequency domains, and implementing machine learning models for multi-class emotion classification. The study evaluates and compares the performance of several classification models, including Random Forest (RF), Multilayer Perceptron (MLP), Support Vector Machine (SVM), Convolutional Neural Networks (CNN), and Decision Trees (DT). Experimental results demonstrate promising accuracy levels, highlighting the potential of these machine learning techniques in SER. This research contributes to the integration of SER within Brain-Computer Interfaces (BCI) and other emotion-aware applications, paving the way for enhanced interactive systems.
Keywords: Speech Emotion Recognition, Machine Learning, Random Forest, Multilayer Perceptron, Support Vector Machine, Convolutional Neural Networks, Decision Tree, RAVDESS Dataset
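
The pipeline described above can be sketched roughly as follows: summarize each clip with spectral features (here MFCC statistics via librosa) and compare several scikit-learn classifiers with cross-validation. Random waveforms stand in for RAVDESS clips so the sketch runs end to end; it is not the paper's implementation.

```python
# Sketch of a speech-emotion-recognition pipeline: per-clip MFCC summary
# features plus a comparison of several classifiers. Requires librosa and
# scikit-learn; the dataset below is synthetic.
import numpy as np
import librosa
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

def clip_features(y, sr, n_mfcc=13):
    """Summarise a waveform with per-coefficient MFCC means and standard deviations."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # shape: (n_mfcc, frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# For a real run, load RAVDESS clips instead, e.g.:
#   y, sr = librosa.load("ravdess/Actor_01/03-01-03-01-01-01-01.wav", sr=22050)
# Here, random one-second waveforms stand in for speech so the sketch executes.
rng = np.random.default_rng(0)
sr = 22050
X = np.array([clip_features(rng.normal(size=sr), sr) for _ in range(40)])
labels = rng.choice(["happy", "sad", "angry", "neutral"], size=40)

for name, clf in [("Random Forest", RandomForestClassifier(n_estimators=200, random_state=0)),
                  ("SVM", SVC()),
                  ("MLP", MLPClassifier(max_iter=1000, random_state=0))]:
    scores = cross_val_score(clf, X, labels, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.2f}")
```
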
23

Yu, Qiao, Wenjing Xiao, Sheng Jiang, Mohammed F. Alhamid, Ghulam Muhammad, and M. Shamim Hossain. "Emotion-aware mobile edge computing system: A case study." Computers & Electrical Engineering 92 (June 2021): 107120. http://dx.doi.org/10.1016/j.compeleceng.2021.107120.

24

Pasupuleti, Murali Krishna. "End-to-End Emotion Recognition from Raw Audio: Speaker-Aware, Noise-Resilient, and Multimodal Adaptive Learning Approaches." International Journal of Academic and Industrial Research Innovations(IJAIRI) 05, no. 04 (2025): 377–87. https://doi.org/10.62311/nesx/rp3525.

Abstract:
Emotion recognition from speech is a cornerstone of next-generation human-computer interaction, social robotics, and healthcare technologies. While traditional approaches have relied heavily on handcrafted acoustic features like Mel-Frequency Cepstral Coefficients (MFCCs), recent advances in deep learning have shifted the paradigm toward end-to-end models that process raw audio waveforms directly. However, significant challenges remain, including speaker variability, environmental noise, and the limited contextual understanding inherent in unimodal systems. This paper proposes a comprehensive hybrid framework that integrates speaker-aware modeling, noise-resilient architectures, and adaptive multimodal learning, combining audio, text, and video modalities. By critically synthesizing recent empirical findings and through scientific modeling, we offer novel interpretations and propose scalable solutions that enhance accuracy, robustness, and real-world applicability in noisy, speaker-diverse environments.
Keywords: Emotion Recognition, Raw Audio Processing, End-to-End Deep Learning, Speaker-Aware Models, Noise-Resilient Learning, Multimodal Fusion, Audiovisual Sentiment Analysis, Deep Neural Networks, Human-Computer Interaction, Adaptive Multimodal Systems
25

Ghoniem, Algarni, and Shaalan. "Multi-Modal Emotion Aware System Based on Fusion of Speech and Brain Information." Information 10, no. 7 (2019): 239. http://dx.doi.org/10.3390/info10070239.

Abstract:
In multi-modal emotion-aware frameworks, it is essential to estimate the emotional features and then fuse them to different degrees, typically following either a feature-level or decision-level strategy. While features from several modalities may enhance the classification performance, they may also exhibit high dimensionality and make the learning process complex for the most commonly used machine learning algorithms. To overcome issues of feature extraction and multi-modal fusion, hybrid fuzzy-evolutionary computation methodologies are employed to demonstrate an ultra-strong capability of feature learning and dimensionality reduction. This paper proposes a novel multi-modal emotion-aware system by fusing speech with EEG modalities. Firstly, a mixed feature set of speaker-dependent and speaker-independent characteristics is estimated from the speech signal. Further, EEG is utilized as an inner channel complementing speech for more authoritative recognition, by extracting multiple features belonging to the time, frequency, and time–frequency domains. For classifying unimodal data of either speech or EEG, a hybrid fuzzy c-means–genetic algorithm–neural network model is proposed, whose fitness function finds the optimal fuzzy cluster number that reduces the classification error. To fuse speech with EEG information, a separate classifier is used for each modality, and the output is computed by integrating their posterior probabilities. Results show the superiority of the proposed model, where the overall performance in terms of average accuracy rates is 98.06%, 97.28%, and 98.53% for EEG, speech, and multi-modal recognition, respectively. The proposed model is also applied to two public databases for speech and EEG, namely SAVEE and MAHNOB, which achieve accuracies of 98.21% and 98.26%, respectively.
26

Qian, Yongfeng, Yin Zhang, Xiao Ma, Han Yu, and Limei Peng. "EARS: Emotion-aware recommender system based on hybrid information fusion." Information Fusion 46 (March 2019): 141–46. http://dx.doi.org/10.1016/j.inffus.2018.06.004.

27

Faria, Diego Resende, Amie Louise Godkin, and Pedro Paulo da Silva Ayrosa. "Advancing Emotionally Aware Child–Robot Interaction with Biophysical Data and Insight-Driven Affective Computing." Sensors 25, no. 4 (2025): 1161. https://doi.org/10.3390/s25041161.

Abstract:
This paper investigates the integration of affective computing techniques using biophysical data to advance emotionally aware machines and enhance child–robot interaction (CRI). By leveraging interdisciplinary insights from neuroscience, psychology, and artificial intelligence, the study focuses on creating adaptive, emotion-aware systems capable of dynamically recognizing and responding to human emotional states. Through a real-world CRI pilot study involving the NAO robot, this research demonstrates how facial expression analysis and speech emotion recognition can be employed to detect and address negative emotions in real time, fostering positive emotional engagement. The emotion recognition system combines handcrafted and deep learning features for facial expressions, achieving an 85% classification accuracy during real-time CRI, while speech emotions are analyzed using acoustic features processed through machine learning models with an 83% accuracy rate. Offline evaluation of the combined emotion dataset using a Dynamic Bayesian Mixture Model (DBMM) achieved a 92% accuracy for facial expressions, and the multilingual speech dataset yielded 98% accuracy for speech emotions using the DBMM ensemble. Observations from psychological and technological aspects, coupled with statistical analysis, reveal the robot’s ability to transition negative emotions into neutral or positive states in most cases, contributing to emotional regulation in children. This work underscores the potential of emotion-aware robots to support therapeutic and educational interventions, particularly for pediatric populations, while setting a foundation for developing personalized and empathetic human–machine interactions. These findings demonstrate the transformative role of affective computing in bridging the gap between technological functionality and emotional intelligence across diverse domains.
28

Karishma, Vijay Kankariya, and Prashant M. Yawalkar. "A Deep Learning-Based System for Recommending Movies Based on Emotions." Journal of the Maharaja Sayajirao University of Baroda 59, no. 1 (I) (2025): 454–60. https://doi.org/10.5281/zenodo.15277022.

Abstract:
In the era of intelligent entertainment, conventional movie recommendation systems often fail to capture the dynamic emotional preferences of users. This paper proposes a novel emotion-based movie recommendation system that utilizes facial expression recognition to detect a user's real-time emotional state and generate personalized movie suggestions accordingly. Using convolutional neural networks (CNNs) for facial emotion classification, the system maps detected emotions to appropriate movie genres through a predefined emotion-genre mapping model. The proposed system enhances user satisfaction by adapting to momentary emotional contexts, unlike static, history-based methods. Experimental results demonstrate that the emotion-aware approach significantly improves recommendation relevance and user engagement. Future extensions may include multimodal emotion detection and integration with user profiles for deeper personalization.
Keywords: Convolutional Neural Networks (CNNs), VGG16, ResNet, Movie Recommendation, Emotions
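
The predefined emotion-genre mapping mentioned in the abstract can be as simple as a lookup table applied to the CNN's predicted emotion label; the mapping and catalogue below are illustrative placeholders, not the paper's model.

```python
# Illustrative emotion-to-genre mapping applied to the facial-emotion label
# predicted by a CNN. The mapping and catalogue are placeholders.
EMOTION_TO_GENRES = {
    "happy":     ["comedy", "adventure"],
    "sad":       ["drama", "feel-good"],
    "angry":     ["action", "thriller"],
    "fearful":   ["family", "animation"],
    "neutral":   ["documentary", "mystery"],
    "surprised": ["sci-fi", "fantasy"],
}

CATALOGUE = {
    "comedy": ["Movie A", "Movie B"],
    "drama":  ["Movie C"],
    "action": ["Movie D"],
    # ... remaining genres and titles
}

def recommend_movies(predicted_emotion, k=5):
    """Map the detected emotion to genres, then collect titles from those genres."""
    titles = []
    for genre in EMOTION_TO_GENRES.get(predicted_emotion, ["drama"]):
        titles.extend(CATALOGUE.get(genre, []))
    return titles[:k]

print(recommend_movies("happy"))
```
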
29

Schuller, Björn. "Responding to uncertainty in emotion recognition." Journal of Information, Communication and Ethics in Society 17, no. 3 (2019): 299–303. http://dx.doi.org/10.1108/jices-07-2019-0080.

Abstract:
Purpose: Uncertainty is an under-respected issue when it comes to the automatic assessment of human emotion by machines. The purpose of this paper is to highlight the existing approaches towards such measurement of uncertainty and to identify further research needs. Design/methodology/approach: The discussion is based on a literature review. Findings: Technical solutions towards the measurement of uncertainty in automatic emotion recognition (AER) exist but need to be extended to respect a range of so far underrepresented sources of uncertainty. These then need to be integrated into systems available to general users. Research limitations/implications: Not all sources of uncertainty in AER, including emotion representation and annotation, can be touched upon in this communication. Practical implications: AER systems shall be enhanced by more meaningful and complete information provision on the uncertainty underlying their estimates. Limitations of their applicability should be communicated to users. Social implications: Users of automatic emotion recognition technology will become aware of its limitations, potentially leading to fairer usage in crucial application contexts. Originality/value: There is no previous discussion including the technical viewpoint on extended uncertainty measurement in automatic emotion recognition.
30

Xiu, Taiyu, Yin Sun, Xuan Zhang, et al. "The Analysis of Emotion-Aware Personalized Recommendations via Multimodal Data Fusion in the Field of Art." Journal of Organizational and End User Computing 37, no. 1 (2025): 1–29. https://doi.org/10.4018/joeuc.368008.

Abstract:
This paper proposes an emotion-aware personalized recommendation system (EPR-IoT) based on IoT data and multimodal emotion fusion, aiming to address the limitations of traditional recommendation systems in capturing users' emotional states of artistic product consumption in real time. With the proliferation of smart devices, physiological signals such as heart rate and skin conductance—which are strongly correlated with emotional states—provide new opportunities for emotion recognition. For example, an increase in heart rate is typically associated with emotions like anxiety, anger, or fear, while a decrease is linked to emotional states like relaxation or joy. Similarly, skin conductance rises with emotional arousal, particularly during stress or fear. These physiological signals, combined with text, speech, and video data of art products, are fused to construct an art emotion-driven recommendation model capable of dynamically adjusting the recommended content.
31

Lee, Seungheyon, Sooeon Lee, Yumin Choi, Junggab Son, Paolo Bellavista, and Hyunbum Kim. "Cooperative Obstacle-Aware Surveillance for Virtual Emotion Intelligence with Low Energy Configuration." Drones 7, no. 3 (2023): 159. http://dx.doi.org/10.3390/drones7030159.

Abstract:
In this article, we introduce a cooperative obstacle-aware surveillance system for virtual emotion intelligence, supported by a low-energy configuration that minimizes wasted communication cost in a self-sustainable network with 6G components. We formally define the main research problem, whose goal is to minimize the wasted communication range of system members while satisfying the required detection accuracy for a given number of obstacles when the requested number of obstacle-aware, low-energy surveillance barriers are built in the self-sustainable network. To solve the problem, we design and implement two different approaches and evaluate them thoroughly through extensive simulations; their performance is then demonstrated with numerical outcomes and detailed discussion.
32

Leung, John Kalung, Igor Griva, and William G. Kennedy. "Applying the Affective Aware Pseudo Association Method to Enhance the Top-N Recommendations Distribution to Users in Group Emotion Recommender Systems." International Journal on Natural Language Computing 10, no. 1 (2021): 1–20. http://dx.doi.org/10.5121/ijnlc.2021.10101.

Abstract:
Recommender Systems are a subclass of information retrieval systems, or more succinctly, a class of information filtering systems, that seek to predict how closely a recommended item matches a user's preference. A common approach for making recommendations for a user group is to extend Personalized Recommender Systems' capability. This approach gives the impression that group recommendations are retrofits of Personalized Recommender Systems. Moreover, such an approach does not take the dynamics of group emotion and individual emotion into consideration when making top-N recommendations. Recommending items to a group of two or more users raises unique challenges in group behaviors that influence group decision-making, which researchers only partially understand. This study applies the Affective Aware Pseudo Association Method in studying group formation and dynamics in group decision making. The method shows its adaptability to changes in the group's mood when making recommendations.
33

Hasan, Moin. "Intelligent Monitoring System Using Facial Expression." International Journal of Scientific Research in Engineering and Management 09, no. 04 (2025): 1–9. https://doi.org/10.55041/ijsrem46450.

Abstract:
Facial Emotion Recognition (FER) has emerged as a significant component in the creation of emotionally intelligent systems, attempting to bridge the communication gap between humans and technology. This project presents a real-time face emotion detection web application that uses live camera feeds to determine users' emotional states and dynamically improves user engagement according to mood. To provide a smooth and responsive experience, the system makes use of the DeepFace framework, OpenCV for video processing, and Flask for backend administration. When the application detects an emotion, such as happiness, sorrow, anger, surprise, fear, or neutrality, it instantly modifies the web interface's background color to match the user's mood and recommends a carefully chosen playlist of YouTube songs that are appropriate for the emotion. The user experience can be further customized with optional features like facial recognition and predicted age detection. The suggested method provides a dynamic and immersive platform with real-time input, thereby addressing the shortcomings of conventional static emotion analysis systems. Through the integration of visual, aural, and interactive components, the program improves emotional engagement and shows how emotion-aware services may be used in a variety of industries, including customer service, entertainment, mental wellness, and adaptive learning. Because the system emphasizes accessibility, simplicity, and computational economy, it can function properly even on common consumer hardware without the need for expensive GPUs. This work demonstrates the potential for developing sympathetic human-computer interfaces that react to users' current states both rationally and emotionally. Possible future advances include embedding direct multimedia playback, enabling multi-emotion detection per frame, and expanding the application to mobile and edge computing platforms.
Keywords: Facial Emotion Detection, DeepFace, OpenCV, Flask, Real-Time Processing, Mood-Based Adaptation, Human-Computer Interaction
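
A minimal sketch of the real-time loop the abstract describes, assuming OpenCV for frame capture and DeepFace for per-frame emotion analysis. The mood-to-colour/playlist mapping is illustrative, and DeepFace's return format varies between versions, so both common forms are handled.

```python
# Minimal sketch of a webcam loop that detects the dominant facial emotion with
# DeepFace and maps it to a background colour and a playlist query. The mapping
# is a placeholder; a camera is required for this to produce frames.
import cv2
from deepface import DeepFace

MOOD_STYLE = {            # illustrative mood -> (BGR colour, playlist query)
    "happy":   ((0, 200, 255), "upbeat pop"),
    "sad":     ((180, 120, 60), "calming acoustic"),
    "angry":   ((60, 60, 200), "relaxing instrumental"),
    "neutral": ((128, 128, 128), "lo-fi focus"),
}

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    try:
        result = DeepFace.analyze(frame, actions=["emotion"], enforce_detection=False)
        if isinstance(result, list):      # newer DeepFace versions return a list of faces
            result = result[0]
        emotion = result["dominant_emotion"]
    except Exception:
        emotion = "neutral"

    colour, playlist = MOOD_STYLE.get(emotion, MOOD_STYLE["neutral"])
    cv2.rectangle(frame, (0, 0), (frame.shape[1], 40), colour, -1)
    cv2.putText(frame, f"{emotion} -> {playlist}", (10, 28),
                cv2.FONT_HERSHEY_SIMPLEX, 0.8, (255, 255, 255), 2)
    cv2.imshow("emotion-aware monitor", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```
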
34

Leung, John Kalung, Igor Griva, and William G. Kennedy. "Applying the Affective Aware Pseudo Association Method to Enhance the Top-N Recommendations Distribution to Users in Group Emotion Recommender Systems." International Journal on Natural Language Computing (IJNLC) 10, no. 1 (2021): 1–20. https://doi.org/10.5281/zenodo.4607313.

Abstract:
Recommender Systems are a subclass of information retrieval systems, or more succinctly, a class of information filtering systems, that seek to predict how closely a recommended item matches a user's preference. A common approach for making recommendations for a user group is to extend Personalized Recommender Systems' capability. This approach gives the impression that group recommendations are retrofits of Personalized Recommender Systems. Moreover, such an approach does not take the dynamics of group emotion and individual emotion into consideration when making top-N recommendations. Recommending items to a group of two or more users raises unique challenges in group behaviors that influence group decision-making, which researchers only partially understand. This study applies the Affective Aware Pseudo Association Method in studying group formation and dynamics in group decision making. The method shows its adaptability to changes in the group's mood when making recommendations.
35

Firdaus, Mauajama, Hardik Chauhan, Asif Ekbal, and Pushpak Bhattacharyya. "More the Merrier: Towards Multi-Emotion and Intensity Controllable Response Generation." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 14 (2021): 12821–29. http://dx.doi.org/10.1609/aaai.v35i14.17517.

Abstract:
The focus on conversational systems has recently shifted towards creating engaging agents by inculcating emotions into them. Human emotions are highly complex as humans can express multiple emotions with varying intensity in a single utterance, whereas the conversational agents convey only one emotion in their responses. To infuse human-like behaviour in the agents, we introduce the task of multi-emotion controllable response generation with the ability to express different emotions with varying levels of intensity in an open-domain dialogue system. We introduce a Multiple Emotion Intensity aware Multi-party Dialogue (MEIMD) dataset having 34k conversations taken from 8 different TV Series. We finally propose a Multiple Emotion with Intensity-based Dialogue Generation (MEI-DG) framework. The system employs two novel mechanisms: viz. (i) determining the trade-off between the emotion and generic words, while focusing on the intensity of the desired emotions; and (ii) computing the amount of emotion left to be expressed, thereby regulating the generation accordingly. The detailed evaluation shows that our proposed approach attains superior performance compared to the baseline models.
36

Ozpinar, Alper, Ersin Alpan, and Taner Celik. "Enhancing IVR Systems in Mobile Banking with Emotion Analysis for Adaptive Dialogue Flows and Seamless Transition to Human Assistance." Orclever Proceedings of Research and Development 3, no. 1 (2023): 592–605. http://dx.doi.org/10.56038/oprd.v3i1.382.

Abstract:
This study introduces an advanced approach to improving Interactive Voice Response (IVR) systems for mobile banking by integrating emotion analysis with a fusion of specialized datasets. Utilizing the RAVDESS, CREMA-D, TESS, and SAVEE datasets, this research exploits a diverse array of emotional speech and song samples to analyze customer sentiment in call center interactions. These datasets provide a multi-modal emotional context that significantly enriches the IVR experience. The cornerstone of our methodology is the implementation of Mel-Frequency Cepstral Coefficient (MFCC) extraction. The MFCCs, extracted from audio inputs, form a 2D array where time and cepstral coefficients create a structure that closely resembles an image. This format is particularly suitable for Convolutional Neural Networks (CNNs), which excel in interpreting such 'image-like' data for emotion recognition, hence enhancing the system's responsiveness to emotional cues. The proposed system's architecture is designed to modify dialogue flows dynamically, informed by the emotional tone of customer interactions. This innovation not only improves customer engagement but also ensures a seamless handover to human operators when the situation calls for a personal touch, optimizing the balance between automated efficiency and human empathy. The results of this research demonstrate the potential of emotion-aware IVR systems to anticipate and meet customer needs more effectively, paving the way for a new standard in user-centric banking services.
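
A short sketch of the MFCC-extraction step described above: the coefficients form a 2D time-by-coefficient array that can be padded or cropped to a fixed, image-like shape before being fed to a CNN. librosa is assumed, and the target shape is an arbitrary choice, not the paper's configuration.

```python
# Sketch of MFCC extraction producing an image-like 2D array for a CNN.
# Requires librosa; the fixed target width (number of frames) is arbitrary.
import numpy as np
import librosa

def mfcc_image(path, n_mfcc=40, target_frames=128, sr=16000):
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # shape: (n_mfcc, frames)
    # Pad with zeros or crop along the time axis to a fixed width,
    # so every clip yields the same "image" shape for the CNN.
    if mfcc.shape[1] < target_frames:
        mfcc = np.pad(mfcc, ((0, 0), (0, target_frames - mfcc.shape[1])))
    else:
        mfcc = mfcc[:, :target_frames]
    # Add a channel axis: (n_mfcc, target_frames, 1), ready for a 2D CNN input.
    return mfcc[..., np.newaxis]

# Example with a hypothetical clip from one of the fused emotional-speech corpora:
# x = mfcc_image("ravdess/Actor_01/03-01-05-01-01-01-01.wav")
# print(x.shape)   # (40, 128, 1)
```
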
37

Mousa, Ali, and Asia Mahdi Nasser. "ENHANCING HUMAN-ROBOT INTERACTION THROUGH GROUP EMOTION RECOGNITION." Iraqi Journal for Computers and Informatics 49, no. 2 (2023): 111–19. http://dx.doi.org/10.25195/ijci.v49i2.444.

Abstract:
This article explores the field of Human-Robot Interaction (HRI), focusing on the complicated relationship between emotions, decision-making, and robot behaviors. Emotions are essential to effective communication and interaction, requiring the development of emotion recognition systems in robots. The article explores both individual and group emotion recognition, including microexpressions and macroexpressions. Group emotion dynamics, encompassing phenomena like emotional contagion, convergence, and social influence, are examined to understand how emotions combine within collective settings. A concept, Group Emotion Recognition (GER), is introduced, providing a framework for recognizing emotions within groups. GER involves proximity metrics, emotion classification, and entropy-based analysis to quantify emotion diversity. The article also outlines how GER can enhance user engagement, personalize interactions, improve group dynamics, and foster social acceptance in various human-robot interaction scenarios. Decision-making based on GER, driven by positive or negative emotion labels, is discussed, highlighting the adaptability and sensitivity required for effective human-robot interactions. Ethical considerations regarding the use of emotion recognition technology are addressed throughout the article, emphasizing responsible implementation. Overall, this work lays a solid foundation for advancing the field of HRI by integrating emotion recognition and decision-making to create emotionally intelligent and socially aware robots.
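
The entropy-based analysis mentioned above can be sketched as Shannon entropy over the distribution of emotion labels detected within a group, with a simple majority-valence rule driving a decision; the labels and valence mapping below are illustrative assumptions.

```python
# Sketch of entropy-based group-emotion analysis: Shannon entropy of the label
# distribution quantifies emotion diversity, and a simple majority-valence rule
# yields a positive/negative group label. Labels and mapping are illustrative.
from collections import Counter
import math

POSITIVE = {"happy", "surprised"}                     # illustrative valence mapping
NEGATIVE = {"angry", "sad", "fearful", "disgusted"}

def group_emotion_summary(labels):
    counts = Counter(labels)
    total = len(labels)
    probs = [c / total for c in counts.values()]
    entropy = -sum(p * math.log2(p) for p in probs)   # 0 bits = unanimous group
    pos = sum(counts[e] for e in POSITIVE if e in counts)
    neg = sum(counts[e] for e in NEGATIVE if e in counts)
    group_label = "positive" if pos >= neg else "negative"
    return {"entropy_bits": round(entropy, 3),
            "group_label": group_label,
            "counts": dict(counts)}

print(group_emotion_summary(["happy", "happy", "sad", "neutral", "happy"]))
```
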
38

Bravo, Luis, Ciro Rodriguez, Pedro Hidalgo, and Cesar Angulo. "A Systematic Review on Artificial Intelligence-Based Multimodal Dialogue Systems Capable of Emotion Recognition." Multimodal Technologies and Interaction 9, no. 3 (2025): 28. https://doi.org/10.3390/mti9030028.

Abstract:
In the current context, the use of technologies in applications for multimodal dialogue systems with computers and emotion recognition through artificial intelligence continues to grow rapidly. Consequently, it is challenging for researchers to identify gaps, propose new models, and increase user satisfaction. The objective of this study is to explore and analyze potential applications based on artificial intelligence for multimodal dialogue systems incorporating emotion recognition. The methodology used in selecting papers is in accordance with PRISMA and identifies 13 scientific articles whose research proposals are generally focused on convolutional neural networks (CNNs), Long Short-Term Memory (LSTM), GRU, and BERT. The research results identify the proposed models as Mindlink-Eumpy, RHPRnet, Emo Fu-Sense, 3FACRNNN, H-MMER, TMID, DKMD, and MatCR. The datasets used are DEAP, MAHNOB-HCI, SEED-IV, SEED-V, AMIGOS, and DREAMER. In addition, the metrics achieved by the models are presented. It is concluded that emotion recognition models such as Emo Fu-Sense, 3FACRNNN, and H-MMER obtain outstanding results, with their accuracy ranging from 92.62% to 98.19%, and multimodal dialogue models such as TMID and the scene-aware model with BLEU4 metrics obtain values of 51.59% and 29%, respectively.
39

Ateeque Ahmed. "Enhancing Human-Robot Collaboration through Multimodal Emotion and Context-Aware Object Detection." Journal of Information Systems Engineering and Management 10, no. 51s (2025): 745–58. https://doi.org/10.52783/jisem.v10i51s.10448.

Abstract:
In the evolving landscape of human-robot interaction, the ability of robots to perceive and respond to human emotions and surrounding objects is essential for effective collaboration. This study proposes an integrated framework that combines multimodal emotion detection and context-aware object recognition to enhance the intuitiveness and responsiveness of human-robot collaboration. The approach utilizes visual (facial expressions) and auditory (speech tone) cues for emotion detection, while simultaneously identifying and interpreting relevant objects in the environment using computer vision and contextual data. An advanced fusion algorithm synchronizes the robot's estimate of the user's emotional state with its environmental understanding, allowing it to make adaptive decisions in real time. For instance, identifying a hazardous object while also detecting a user's stress can prompt the robot to change its behavior, for example by keeping a safe distance or changing its task strategy. Through the integration of these technologies, the robot becomes more situationally aware and offers more personalized, humanlike, and efficient interactions. The research aims to show that multimodal, context-aware systems can shift human-robot collaboration from reactive automation to proactive cooperation. The findings support deploying intelligent robots in collaborations that require emotional sensitivity and context awareness in healthcare, manufacturing, customer service, and domestic environments.
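
The adaptive behaviour described above (for example, a hazardous object plus user stress leading to a safer strategy) can be sketched as a small fusion rule over the two perception outputs. The thresholds, object list, and actions are illustrative, not the study's algorithm.

```python
# Illustrative fusion rule combining an emotion estimate with detected object
# context to pick a robot behaviour. Thresholds, labels, and actions are
# placeholders for the kind of decision logic the framework describes.
HAZARDOUS = {"knife", "hot_pan", "power_tool"}

def choose_behaviour(stress_level, detected_objects, current_task="handover"):
    """stress_level in [0, 1] from the emotion module; detected_objects from vision."""
    hazard_present = any(obj in HAZARDOUS for obj in detected_objects)
    if hazard_present and stress_level > 0.6:
        return {"action": "pause_task", "distance_m": 1.5, "note": "high stress near hazard"}
    if hazard_present:
        return {"action": current_task, "distance_m": 1.0, "note": "hazard: keep safe distance"}
    if stress_level > 0.6:
        return {"action": "slow_down", "distance_m": 0.8, "note": "user stressed"}
    return {"action": current_task, "distance_m": 0.5, "note": "nominal"}

print(choose_behaviour(0.75, ["knife", "cup"]))
```
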
40

Yang, Yi, Hao Feng, Yiming Cheng, and Zhu Han. "Emotion-Aware Scene Adaptation: A Bandwidth-Efficient Approach for Generating Animated Shorts." Sensors 24, no. 5 (2024): 1660. http://dx.doi.org/10.3390/s24051660.

Abstract:
Semantic communication technology in the 6G wireless system focuses on semantic extraction in communication, that is, only the inherent meaning of the intention in the information. Existing technologies still have challenges in extracting emotional perception in the information, high compression rates, and privacy leakage due to knowledge sharing in communication. Large-scale generative-model technology could rapidly generate multimodal information according to user requirements. This paper proposes an approach that leverages large-scale generative models to create animated short films that are semantically and emotionally similar to real scenes and characters. The visual content of the data source is converted into text expression through semantic understanding technology; emotional clues from the data source media are added to the text form through reinforcement learning technology; and finally, a large-scale generative model is used to generate visual media, which is consistent with the semantics of the data source. This paper develops a semantic communication process with distinct modules and assesses the enhancements garnered from incorporating an emotion enhancement module. This approach facilitates the expedited generation of broad media forms and volumes according to the user’s intention, thereby enabling the creation of generated multimodal media within applications in the metaverse and in intelligent driving systems.
41

Kiran, B. Kranthi. "Emotion Based Music Recommendation System using VGG16-CNN Architecture." International Journal for Research in Applied Science and Engineering Technology 12, no. 6 (2024): 592–96. http://dx.doi.org/10.22214/ijraset.2024.63181.

Abstract:
This research introduces an Emotion-based Music Recommendation System (EMRS) using Convolutional Neural Networks (CNNs) to analyze facial expressions and recommend music tailored to individual emotional states. Unlike traditional systems, EMRS prioritizes facial expression analysis for personalization. CNNs, trained on a diverse dataset of emotional expressions linked to music, extract key emotional features. EMRS leverages this analysis to intuitively suggest music that can potentially aid in emotional regulation. This research not only advances personalized music recommendation but also opens doors for emotion-aware technology with applications in mental healthcare for emotional imbalance and trauma. EMRS has the potential to serve as a complementary tool for therapists and individuals seeking emotional well-being through music.
42

Moeez, Rajjan, Deore Prajwal, Mohite Yashraj, and Desai Yash. "Harmonic Fusion: AI-Driven Music Personalization via Emotion-Enhanced Facial Expression Recognition Using Python, OpenCV, TensorFlow, and Flask." 8, no. 12 (2023): 7. https://doi.org/10.5281/zenodo.10427665.

Abstract:
The rise of big data in recent years has drawn considerable attention to deep learning. Convolutional Neural Networks (CNNs), a key component of deep learning, have demonstrated their worth, particularly in the field of facial recognition [3]. This research presents a novel technique that combines CNN-based micro-expression detection with an autonomous music recommendation system [3] [1]. The algorithm detects subtle facial micro-expressions and then selects music that matches the emotional states represented by these expressions. Our micro-expression recognition model performs well on the FER2013 dataset, with a recognition rate of 62.1% [3]. After the specific facial emotion has been identified, we use a content-based music recommendation algorithm to extract song feature vectors and then apply the cosine similarity algorithm to recommend music [3]. Beyond improving music recommendation systems, this study also investigates how such systems may assist us in managing our emotions [2] [1]. The findings offer considerable promise, pointing to prospects for incorporating emotion-aware music recommendation algorithms into many facets of our lives.
Keywords: Deep Learning, Facial Micro-Expression Recognition, Convolutional Neural Network (CNN), FER2013 Dataset, Music Recommendation Algorithm, Emotion Recognition, Emotion Recognition in Conversation (ERC), Recommender Systems, Music Information Retrieval, Artificial Neural Networks, Multi-Layer Neural Network
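
The cosine-similarity step the abstract mentions amounts to ranking songs whose feature vectors are closest to a target profile derived from the detected emotion; all vectors below are placeholders rather than features from the paper.

```python
# Sketch of the cosine-similarity recommendation step: rank songs by the cosine
# similarity between each song's feature vector and a target vector derived
# from the detected emotion. All vectors here are illustrative placeholders.
import numpy as np

song_features = {                     # hypothetical per-song feature vectors
    "song_1": np.array([0.9, 0.1, 0.3]),
    "song_2": np.array([0.2, 0.8, 0.5]),
    "song_3": np.array([0.7, 0.3, 0.9]),
}
emotion_profiles = {                  # hypothetical emotion -> target feature vector
    "happy": np.array([0.8, 0.2, 0.4]),
    "sad":   np.array([0.1, 0.9, 0.3]),
}

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def recommend(emotion, k=2):
    target = emotion_profiles[emotion]
    ranked = sorted(song_features, key=lambda s: cosine(song_features[s], target), reverse=True)
    return ranked[:k]

print(recommend("happy"))
```
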
43

Klein, Maike. "Robotic affective abilities." A Peer-Reviewed Journal About 8, no. 1 (2019): 34–44. http://dx.doi.org/10.7146/aprja.v8i1.115413.

Abstract:
Within both popular media and (some) scientific contexts, affective and ‘emotional’ machines are assumed to already exist. The aim of this paper is to draw attention to some of the key conceptual and theoretical issues raised by the ostensible affectivity. My investigation starts with three robotic encounters: a robot arm, the first (according to media) ‘emotional’ robot, Pepper, and Mako, a robotic cat. To make sense of affectivity in these encounters, I discuss emotion theoretical implications for affectivity in human-machine-interaction. Which theories have been implemented in the creation of the encountered robots? Being aware that in any given robot, there is no strict implementation of one single emotion theory, I will focus on two commonly used emotion theories: Russell and Mehrabian’s Three-Factor Theory of Emotion (the computational models derived from that theory are known as PAD models) and Ekman’s Basic Emotion Theory. An alternative way to approach affectivity in artificial systems is the Relational Approach of Damiano et al. which emphasizes human-robot-interaction in social robotics. In considering this alternative I also raise questions about the possibility of affectivity in robot-robot-relations.
APA, Harvard, Vancouver, ISO, and other styles
44

Mercado-Diaz, Luis R., Yedukondala Rao Veeranki, Edward W. Large, and Hugo F. Posada-Quintero. "Fractal Analysis of Electrodermal Activity for Emotion Recognition: A Novel Approach Using Detrended Fluctuation Analysis and Wavelet Entropy." Sensors 24, no. 24 (2024): 8130. https://doi.org/10.3390/s24248130.

Full text
Abstract:
The field of emotion recognition from physiological signals is a growing area of research with significant implications for both mental health monitoring and human–computer interaction. This study introduces a novel approach to detecting emotional states based on fractal analysis of electrodermal activity (EDA) signals. We employed detrended fluctuation analysis (DFA), Hurst exponent estimation, and wavelet entropy calculation to extract fractal features from EDA signals obtained from the CASE dataset, which contains physiological recordings and continuous emotion annotations from 30 participants. The analysis revealed significant differences in fractal features across five emotional states (neutral, amused, bored, relaxed, and scared), particularly those derived from wavelet entropy. A cross-correlation analysis showed robust correlations between fractal features and both the arousal and valence dimensions of emotion, challenging the conventional view of EDA as a predominantly arousal-indicating measure. The application of machine learning for emotion classification using fractal features achieved a leave-one-subject-out accuracy of 84.3% and an F1 score of 0.802, surpassing the performance of previous methods on the same dataset. This study demonstrates the potential of fractal analysis in capturing the intricate, multi-scale dynamics of EDA signals for emotion recognition, opening new avenues for advancing emotion-aware systems and affective computing applications.
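As a hedged sketch of the kind of fractal feature this abstract describes, the code below estimates a detrended fluctuation analysis (DFA) scaling exponent for a one-dimensional signal; the window sizes and the synthetic stand-in for an EDA recording are arbitrary choices, not the authors' preprocessing or parameters.

```python
# Basic DFA: integrate the mean-centred signal, detrend it window by window,
# and fit the log-log slope of fluctuation vs. window size.
import numpy as np

def dfa_alpha(signal: np.ndarray, scales=(16, 32, 64, 128, 256)) -> float:
    """Estimate the DFA scaling exponent (slope of log F(n) vs log n)."""
    x = np.asarray(signal, dtype=float)
    profile = np.cumsum(x - x.mean())          # integrated, mean-centred profile
    flucts = []
    for n in scales:
        n_windows = len(profile) // n
        rms = []
        for w in range(n_windows):
            seg = profile[w * n:(w + 1) * n]
            t = np.arange(n)
            coeffs = np.polyfit(t, seg, 1)     # local linear trend
            rms.append(np.sqrt(np.mean((seg - np.polyval(coeffs, t)) ** 2)))
        flucts.append(np.mean(rms))
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return float(slope)

# Synthetic stand-in for an EDA recording; white noise should give alpha near 0.5.
rng = np.random.default_rng(0)
print(round(dfa_alpha(rng.standard_normal(4096)), 2))
```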
APA, Harvard, Vancouver, ISO, and other styles
45

Mohammad Nassr, Rasheed, Alia Ahmed Aldossary, and Haidawati Mohamad Nasir. "Students’ Intention to Use Emotion-Aware Virtual Learning Environment: Does a Lecturer’s Interaction Make a Difference?" Malaysian Journal of Learning and Instruction 18, no. 1 (2021): 183–218. http://dx.doi.org/10.32890/mjli2021.18.1.8.

Full text
Abstract:
Purpose – This study explored students’ perspective of using an emotion-aware Virtual Learning Environment (VLE) in Malaysia’s higher education institutions. The purpose is to investigate the relationships among dimensions of the Technology Readiness Index (TRI), attitude, intention to use VLE, and lecturer interaction. The outcomes concerned the emotions involved in the educational process of Malaysia’s higher education institutions. Methodology – Quantitative data were collected via an online survey from 260 students. An empirical analysis was then conducted using structural equation modelling (Smart PLS) in two phases: (1) examining the direct effect of students’ attitude on VLE adoption intention and (2) examining the indirect effect of constructs using lecturer interaction as a mediator. Findings – The findings revealed a significant mediating role of lecturer interaction on the relationship between attitude and intention to use VLE across the student cohort. Inhibitors, such as insecurity and discomfort, were less significant in affecting students’ attitude towards the emotion-aware VLE. The results indicate that students are motivated to use VLE when lecturers understand their emotions and react accordingly. Significance – This is one of the studies pertaining to emotions in VLE and lecturer interaction in higher education institutions. The results facilitate an understanding of the pedagogical role of lecturer interaction as a practical learning motivation. It is of particular interest to curriculum and e-learning stakeholders looking to improve students’ interactions with VLE systems. Apart from extending the current literature, this study has significant practical implications for education management in higher learning institutions. Keywords: Emotion-Aware VLE, Technology Readiness Index (TRI), Attitude, Intention to Use, Lecturer Interaction, online learning, Smart PLS, Higher Education.
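The study's mediation result can be illustrated with a simple product-of-coefficients check; the sketch below uses ordinary least squares on synthetic data. The study itself used Smart PLS, and the variable relationships and effect sizes here are assumptions for illustration only.

```python
# Sketch of a mediation check: attitude -> lecturer interaction -> intention to use.
# Synthetic data; coefficients below are invented, not the study's estimates.
import numpy as np

rng = np.random.default_rng(1)
n = 260                                    # sample size mirroring the survey
attitude = rng.normal(size=n)
interaction = 0.6 * attitude + rng.normal(scale=0.8, size=n)               # a path
intention = 0.5 * interaction + 0.2 * attitude + rng.normal(scale=0.8, size=n)

def ols(y, X):
    """Least-squares slope coefficients for y ~ X (intercept included, then dropped)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta[1:]

a = ols(interaction, attitude.reshape(-1, 1))[0]                 # attitude -> mediator
b = ols(intention, np.column_stack([interaction, attitude]))[0]  # mediator -> outcome
print(f"indirect (mediated) effect a*b = {a * b:.2f}")
```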
APA, Harvard, Vancouver, ISO, and other styles
46

Wang, Jhing-Fa, Bo-Wei Chen, Wei-Kang Fan, and Chih-Hung Li. "Emotion-Aware Assistive System for Humanistic Care Based on the Orange Computing Concept." Applied Computational Intelligence and Soft Computing 2012 (2012): 1–8. http://dx.doi.org/10.1155/2012/183610.

Full text
Abstract:
Mental care has become crucial with the rapid growth of economy and technology. However, recent movements, such as green technologies, place more emphasis on environmental issues than on mental care. Therefore, this study presents an emerging technology called orange computing for mental care applications. Orange computing refers to health, happiness, and physiopsychological care computing, which focuses on designing algorithms and systems for enhancing body and mind balance. The representative color of orange computing originates from a harmonic fusion of passion, love, happiness, and warmth. A case study on a human-machine interactive and assistive system for emotion care was conducted in this study to demonstrate the concept of orange computing. The system can detect emotional states of users by analyzing their facial expressions, emotional speech, and laughter in a ubiquitous environment. In addition, the system can provide corresponding feedback to users according to the results. Experimental results show that the system can achieve an accurate audiovisual recognition rate of 81.8% on average, thereby demonstrating the feasibility of the system. Compared with traditional questionnaire-based approaches, the proposed system can offer real-time analysis of emotional status more efficiently.
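A minimal sketch of decision-level fusion in the spirit of the audiovisual system described above: per-modality class probabilities from facial-expression, speech, and laughter classifiers are combined with fixed weights. All probabilities, weights, and class labels are invented for the example and do not reflect the paper's models.

```python
# Weighted decision-level fusion of per-modality emotion scores (illustrative values).
import numpy as np

CLASSES = ["happy", "neutral", "sad"]

# Hypothetical posterior probabilities from three independent classifiers.
face_probs   = np.array([0.70, 0.20, 0.10])
speech_probs = np.array([0.55, 0.30, 0.15])
laugh_probs  = np.array([0.80, 0.15, 0.05])

# Per-modality weights, e.g., proportional to each classifier's validation accuracy.
weights = np.array([0.40, 0.35, 0.25])

fused = weights @ np.vstack([face_probs, speech_probs, laugh_probs])
print(CLASSES[int(np.argmax(fused))])   # -> 'happy'
```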
APA, Harvard, Vancouver, ISO, and other styles
47

Venkateswarlu, S. China. "Speech Emotion Recognition using Machine Learning." International Journal of Scientific Research in Engineering and Management 9, no. 5 (2025): 1–9. https://doi.org/10.55041/ijsrem48705.

Full text
Abstract:
Speech signals are considered one of the most effective means of communication between human beings. Many researchers have proposed different methods or systems to identify emotions from speech signals. Here, various features of speech are used to classify emotions; features such as pitch, tone, and intensity are essential for classification. A large number of datasets are available for speech emotion recognition. First, features are extracted from emotional speech, and then the emotions are classified on that basis. Different classifiers are used to classify emotions such as Happy, Sad, Anger, Surprise, and Neutral, and there are further approaches based on machine learning algorithms for identifying emotions. Speech Emotion Recognition is a current research topic because of its wide range of applications, and it remains a challenge in the field of speech processing. We have carried out a brief study on speech emotion analysis along with emotion recognition. Speech Emotion Recognition (SER) can be defined as the extraction of the emotional state of the speaker from his or her speech signal. There are a few universal emotions, including Neutral and Anger, and we have worked on different tools to be used in SER. SER is difficult because emotions are subjective and annotating audio is a challenging task. Emotion recognition is the part of speech recognition that is gaining popularity, and the need for it is increasing enormously. We have classified the different types of emotions to be detected from speech. Key Words: Speech Emotion Recognition, Affective Computing, Machine Learning, Deep Learning, Audio Signal Processing, Emotion Classification, Feature Extraction, Prosodic Features, Spectral Features, Mel-Frequency Cepstral Coefficients (MFCCs), Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), Long Short-Term Memory (LSTM), Attention Mechanisms, Multimodal Emotion Recognition, Speaker-Independent SER, Real-Time Emotion Detection, Noise-Robust Emotion Recognition, Data Augmentation, Emotion-Aware Applications
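A hedged sketch of the typical MFCC-plus-classifier pipeline this abstract surveys, using librosa for feature extraction and scikit-learn for classification. The synthetic tones stand in for labelled speech clips, and the labels and hyper-parameters are placeholders rather than any benchmark setup.

```python
# Extract MFCC features per clip, then train a small SVM emotion classifier.
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

SR = 16000

def mfcc_features(y: np.ndarray, sr: int = SR, n_mfcc: int = 13) -> np.ndarray:
    """Summarise a clip as the mean of its MFCC frames."""
    mfcc = librosa.feature.mfcc(y=y.astype(float), sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

def tone(freq: float, seconds: float = 1.0) -> np.ndarray:
    """Synthetic sine tone standing in for a speech recording."""
    t = np.linspace(0, seconds, int(SR * seconds), endpoint=False)
    return np.sin(2 * np.pi * freq * t)

# Toy 'corpus': different tones pretending to be differently labelled utterances.
X = np.vstack([mfcc_features(tone(f)) for f in (220, 440, 880)])
labels = ["sad", "neutral", "happy"]

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, labels)
print(clf.predict([mfcc_features(tone(450))]))   # e.g., ['neutral']
```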
APA, Harvard, Vancouver, ISO, and other styles
48

Zhao, Yingying, Yuhu Chang, Yutian Lu, et al. "Do Smart Glasses Dream of Sentimental Visions?" Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 6, no. 1 (2022): 1–29. http://dx.doi.org/10.1145/3517250.

Full text
Abstract:
Emotion recognition in smart eyewear devices is valuable but challenging. One key limitation of previous works is that expression-related information, such as facial or eye images, is considered the only evidence of emotion. However, emotional status is not isolated; it is tightly associated with people's visual perceptions, especially those with emotional implications. Yet little work has examined such associations to better illustrate the causes of emotions. In this paper, we study the emotionship analysis problem in eyewear systems, an ambitious task that requires classifying the user's emotions and semantically understanding their potential causes. To this end, we describe EMOShip, a deep-learning-based eyewear system that can automatically detect the wearer's emotional status and simultaneously analyze its associations with semantic-level visual perception. Experimental studies with 20 participants demonstrate that, thanks to its awareness of emotionship, EMOShip achieves superior emotion recognition accuracy compared to existing methods (80.2% vs. 69.4%) and provides a valuable understanding of the causes of emotions. Pilot studies with 20 additional participants further motivate the potential use of EMOShip to empower emotion-aware applications, such as emotionship self-reflection and emotionship life-logging.
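As a purely conceptual illustration of the "emotionship" idea (pairing an emotion prediction with the visual semantics observed at the same moment), the sketch below defines a simple life-logging record; it is not EMOShip's architecture, and all field names are assumptions.

```python
# Conceptual 'emotionship' life-logging record: emotion + co-occurring scene semantics.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class EmotionshipRecord:
    emotion: str                      # e.g., output of an expression classifier
    confidence: float
    scene_tags: list[str]             # e.g., output of a scene/object recogniser
    timestamp: datetime = field(default_factory=datetime.now)

    def summary(self) -> str:
        tags = ", ".join(self.scene_tags) or "unknown scene"
        return f"{self.timestamp:%H:%M} felt {self.emotion} ({self.confidence:.0%}) while seeing: {tags}"

log = [EmotionshipRecord("joy", 0.82, ["beach", "friends"]),
       EmotionshipRecord("anxiety", 0.64, ["crowded street"])]
for rec in log:
    print(rec.summary())
```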
APA, Harvard, Vancouver, ISO, and other styles
49

Xie, Xiaoling, and Zeming Fang. "Multi-Modal Emotional Understanding in AI Virtual Characters: Integrating Micro-Expression-Driven Feedback within Context-Aware Facial Micro-Expression Processing Systems." Journal of Wireless Mobile Networks, Ubiquitous Computing, and Dependable Applications 15, no. 3 (2024): 474–500. http://dx.doi.org/10.58346/jowua.2024.i3.031.

Full text
Abstract:
To engage users, AI virtual characters must comprehend emotions. The paper develops and evaluates Chinese-specific context-aware facial micro-expression processing algorithms and feedback mechanisms to improve AI virtual characters' multi-modal emotional comprehension in Chinese culture. Specialized algorithms were used to collect and evaluate Chinese micro-expressions and to assess AI virtual characters' emotional comprehension in user interactions. Chinese participants of various ages, genders, and locations were recruited for micro-expression recognition to ensure cultural inclusion. A comprehensive method collected quantitative and qualitative data: interview and AI virtual character feedback were integrated with quantitative indicators such as emotion recognition accuracy, user engagement, and micro-expression intensity. The study found that demographics affect emotion recognition accuracy and that age-, gender-, and location-specific virtual avatars increase emotional resonance. It also demonstrated that context influences micro-expression interpretation, particularly in distinguishing surprise and grief between urban and rural settings. Textual micro-expression recommendations and real-time adjustments of AI character expressions increased accuracy and improved the user experience. Micro-expressions, visual and auditory cues, and physiological reactions are linked, requiring multimodal signal processing for emotional awareness. Interactive virtual assistance, gaming, and education may benefit from culturally appropriate AI characters. Deep learning, multimodal fusion, and explainable AI are applied to the theory and technique of AI emotional interaction. Finally, by drawing on Chinese cultural intricacies, our study improves the multi-modal emotional comprehension of AI virtual characters. Culturally sensitive AI personalities and emotional AI technologies are developed using context-aware facial micro-expression analysis algorithms and feedback systems.
APA, Harvard, Vancouver, ISO, and other styles
50

Peng, Zhao, Run Zong Fu, Han Peng Chen, Kaede Takahashi, Yuki Tanioka, and Debopriyo Roy. "AI Applications in Emotion Recognition: A Bibliometric Analysis." SHS Web of Conferences 194 (2024): 03005. http://dx.doi.org/10.1051/shsconf/202419403005.

Full text
Abstract:
This paper conducts a preliminary exploration of Artificial Intelligence (AI) for emotion recognition, particularly in its business applications. Employing adaptive technologies like machine learning algorithms and computer vision, AI systems analyze human emotions through facial expressions, speech patterns, and physiological signals. Ethical considerations and responsible deployment of these technologies are emphasized through an intensive literature review. The study employs a comprehensive bibliometric analysis, utilizing tools such as VOSviewer, to trace the evolution of emotion-aware AI in business. Three key steps involve surveying the literature on emotion analysis, summarizing information on emotion in various contexts, and categorizing methods based on their areas of expertise. Comparative studies on emotion datasets reveal advancements in model fusion methods, exceeding human accuracy and enhancing applications in customer service and market research. The bibliometric analysis sheds light on a shift towards sophisticated, multimodal approaches in emotion recognition research, addressing challenges such as imbalanced datasets and interpretability issues. Visualizations depict keyword distributions in research papers, emphasizing the significance of “emotion recognition” and “deep learning.” The study concludes by offering insights gained from network visualization, showcasing core keywords and their density in research papers. Based on the literature, a SWOT analysis is also conducted to identify the strengths, weaknesses, opportunities, and threats associated with applying emotion recognition to business. Strengths include the technology’s high accuracy and real-time analysis capabilities, enabling diverse applications such as customer service and product quality improvement. However, weaknesses include data bias affecting the AI model’s quality and challenges in processing complex emotional expressions. Opportunities lie in the increasing number of studies, market size, and improving research outcomes, while threats include privacy concerns and growing competition.
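The keyword analysis the study reports rests on co-occurrence counting of the kind VOSviewer visualises; the sketch below builds such counts from a tiny invented set of keyword lists, which are placeholders rather than the study's corpus.

```python
# Count how often pairs of keywords co-occur across papers (basis of a bibliometric map).
from itertools import combinations
from collections import Counter

papers = [
    {"emotion recognition", "deep learning", "facial expression"},
    {"emotion recognition", "speech", "deep learning"},
    {"customer service", "emotion recognition"},
]

cooccurrence = Counter()
for keywords in papers:
    for pair in combinations(sorted(keywords), 2):
        cooccurrence[pair] += 1

for (a, b), count in cooccurrence.most_common(3):
    print(f"{a} -- {b}: {count}")
```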
APA, Harvard, Vancouver, ISO, and other styles
