Academic literature on the topic 'Emotional arousal datasets'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Emotional arousal datasets.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Emotional arousal datasets"

1. Ke, Jin, Hayoung Song, Zihan Bai, Monica D. Rosenberg, and Yuan Chang Leong. "Dynamic Brain Connectivity Predicts Emotional Arousal during Naturalistic Movie-Watching." PLOS Computational Biology 21, no. 4 (2025): e1012994. https://doi.org/10.1371/journal.pcbi.1012994.

Abstract:
Human affective experience varies along the dimensions of valence (positivity or negativity) and arousal (high or low activation). It remains unclear how these dimensions are represented in the brain and whether the representations are shared across different individuals and diverse situational contexts. In this study, we first utilized two publicly available functional MRI datasets of participants watching movies to build predictive models of moment-to-moment emotional arousal and valence from dynamic functional brain connectivity. We tested the models by predicting emotional arousal and vale

2. Skaramagkas, Vasileios, Emmanouil Ktistakis, Dimitris Manousos, et al. "eSEE-d: Emotional State Estimation Based on Eye-Tracking Dataset." Brain Sciences 13, no. 4 (2023): 589. https://doi.org/10.3390/brainsci13040589.

Abstract:
Affective state estimation is a research field that has gained increased attention from the research community in the last decade. Two of the main catalysts for this are the advancement in the data analysis using artificial intelligence and the availability of high-quality video. Unfortunately, benchmarks and public datasets are limited, thus making the development of new methodologies and the implementation of comparative studies essential. The current work presents the eSEE-d database, which is a resource to be used for emotional State Estimation based on Eye-tracking data. Eye movements of

3. Hariyady, Hariyady, Ag Asri Ag Ibrahim, Jason Teo, et al. "Harmonizing Emotion and Sound: A Novel Framework for Procedural Sound Generation Based on Emotional Dynamics." JOIV: International Journal on Informatics Visualization 8, no. 4 (2024): 2479. https://doi.org/10.62527/joiv.8.4.3101.

Abstract:
The present work proposes a novel framework for emotion-driven procedural sound generation, termed SONEEG. The framework merges emotional recognition with dynamic sound synthesis to enhance user schooling in interactive digital environments. The framework uses physiological and emotional data to generate emotion-adaptive sound, leveraging datasets like DREAMER and EMOPIA. The primary innovation of this framework is the ability to capture emotions dynamically since we can map them onto a circumplex model of valence and arousal for precise classification. The framework adopts a Transformer-based

4. Wirawan, I. Made Agus, Retantyo Wardoyo, Danang Lelono, and Sri Kusrohmaniah. "Modified Weighted Mean Filter to Improve the Baseline Reduction Approach for Emotion Recognition." Emerging Science Journal 6, no. 6 (2022): 1255–73. https://doi.org/10.28991/esj-2022-06-06-03.

Abstract:
Participants' emotional reactions are strongly influenced by several factors such as personality traits, intellectual abilities, and gender. Several studies have examined the baseline reduction approach for emotion recognition using electroencephalogram signal patterns containing external and internal interferences, which prevented it from representing participants’ neutral state. Therefore, this study proposes two solutions to overcome this problem. Firstly, it offers a modified weighted mean filter method to eliminate the interference of the electroencephalogram baseline signal. Secondly, it

5. Shinohara, Shuji, Hiroyuki Toda, Mitsuteru Nakamura, et al. "Evaluation of the Severity of Major Depression Using a Voice Index for Emotional Arousal." Sensors 20, no. 18 (2020): 5041. https://doi.org/10.3390/s20185041.

Abstract:
Recently, the relationship between emotional arousal and depression has been studied. Focusing on this relationship, we first developed an arousal level voice index (ALVI) to measure arousal levels using the Interactive Emotional Dyadic Motion Capture database. Then, we calculated ALVI from the voices of depressed patients from two hospitals (Ginza Taimei Clinic (H1) and National Defense Medical College hospital (H2)) and compared them with the severity of depression as measured by the Hamilton Rating Scale for Depression (HAM-D). Depending on the HAM-D score, the datasets were classified into

6. Shang, Yunrui, Qi Peng, Zixuan Wu, and Yinhua Liu. "Music-Induced Emotion Flow Modeling by ENMI Network." PLOS ONE 19, no. 10 (2024): e0297712. https://doi.org/10.1371/journal.pone.0297712.

Abstract:
The relation between emotions and music is substantial because music as an art can evoke emotions. Music emotion recognition (MER) studies the emotions that music brings in the effort to map musical features to the affective dimensions. This study conceptualizes the mapping of music and emotion as a multivariate time series regression problem, with the aim of capturing the emotion flow in the Arousal-Valence emotional space. The Efficient Net-Music Informer (ENMI) Network was introduced to address this phenomenon. The ENMI was used to extract Mel-spectrogram features, complementing the time se

7. Galvão, Filipe, Soraia M. Alarcão, and Manuel J. Fonseca. "Predicting Exact Valence and Arousal Values from EEG." Sensors 21, no. 10 (2021): 3414. https://doi.org/10.3390/s21103414.

Abstract:
Recognition of emotions from physiological signals, and in particular from electroencephalography (EEG), is a field within affective computing gaining increasing relevance. Although researchers have used these signals to recognize emotions, most of them only identify a limited set of emotional states (e.g., happiness, sadness, anger, etc.) and have not attempted to predict exact values for valence and arousal, which would provide a wider range of emotional states. This paper describes our proposed model for predicting the exact values of valence and arousal in a subject-independent scenario. T

8. Okumuş, Hatice, and Ebru Ergün. "Development of a Ternary Levels Emotion Classification Model Utilizing Electroencephalography Data Set." Konya Journal of Engineering Sciences 13, no. 2 (2025): 607–23. https://doi.org/10.36306/konjes.1649691.

Abstract:
Electroencephalogram (EEG)-based emotion recognition has gained increasing attention due to its potential in objectively assessing affective states. However, many existing studies rely on limited datasets and focus on binary classification or narrow feature sets, limiting the granularity and generalizability of their findings. To address these challenges, this study explores a ternary classification framework for both valence and arousal dimensions—dividing each into low, medium, and high levels—to capture a broader spectrum of emotional responses. EEG recordings from ten randomly selected par

9. Yuvaraj, Rajamanickam, Prasanth Thagavel, John Thomas, Jack Fogarty, and Farhan Ali. "Comprehensive Analysis of Feature Extraction Methods for Emotion Recognition from Multichannel EEG Recordings." Sensors 23, no. 2 (2023): 915. https://doi.org/10.3390/s23020915.

Abstract:
Advances in signal processing and machine learning have expedited electroencephalogram (EEG)-based emotion recognition research, and numerous EEG signal features have been investigated to detect or characterize human emotions. However, most studies in this area have used relatively small monocentric data and focused on a limited range of EEG features, making it difficult to compare the utility of different sets of EEG features for emotion recognition. This study addressed that by comparing the classification accuracy (performance) of a comprehensive range of EEG feature sets for identifying em

10. Lavezzo, Laura, Andrea Gargano, Enzo Pasquale Scilingo, and Mimma Nardelli. "Zooming into the Complex Dynamics of Electrodermal Activity Recorded during Emotional Stimuli: A Multiscale Approach." Bioengineering 11, no. 6 (2024): 520. https://doi.org/10.3390/bioengineering11060520.

Abstract:
Physiological phenomena exhibit complex behaviours arising at multiple time scales. To investigate them, techniques derived from chaos theory were applied to physiological signals, providing promising results in distinguishing between healthy and pathological states. Fractal-like properties of electrodermal activity (EDA), a well-validated tool for monitoring the autonomic nervous system state, have been reported in previous literature. This study proposes the multiscale complexity index of electrodermal activity (MComEDA) to discern different autonomic responses based on EDA signals. This met

Book chapters on the topic "Emotional arousal datasets"

1. Jha, Sonu Kumar, Somaraju Suvvari, and Mukesh Kumar. "Exploring the Impact of KNN and MLP Classifiers on Valence-Arousal Emotion Recognition Using EEG: An Analysis of DEAP Dataset and EEG Band Representations." In Communications in Computer and Information Science. Springer Nature Switzerland, 2024. https://doi.org/10.1007/978-3-031-70906-7_1.

2. Verma, Gyanendra K. "Emotions Modelling in 3D Space." In Multimodal Affective Computing: Affective Information Representation, Modelling, and Analysis. Bentham Science Publishers, 2023. https://doi.org/10.2174/9789815124453123010013.

Abstract:
In this study, we have discussed emotion representation in two- and three-dimensional space. The three-dimensional space is based on the three emotion primitives, i.e., valence, arousal, and dominance. The multimodal cues used in this study are EEG, physiological signals, and video (under limitations). Due to the limited emotional content in videos from the DEAP database, we have considered only three classes of emotions, i.e., happy, sad, and terrible. The wavelet transform, a classical transform, was employed for multi-resolution analysis of signals to extract features. We have evaluated th

3. Xu, Liang, Zongyang Yun, Zaoyi Sun, Xin Wen, Xianan Qin, and Xiuying Qian. "PSIC3839: Predicting the Overall Emotion and Depth of Entire Songs." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2022. https://doi.org/10.3233/faia220004.

Abstract:
Music emotion recognition (MER) studies have made great progress in detecting the emotions of music segments and analyzing the emotional dynamics of songs. The overall emotion and depth information of entire songs may be more suitable for real-life applications in certain scenarios. This study focuses on recognizing the overall emotion and depth of entire songs. First, we constructed a public dataset containing 3839 popular songs in China (PSIC3839) by conducting an online experiment to collect the arousal, valence, and depth annotation of each song. Second, we used handcrafted feature-based m

4. Fang, Taosong, Bo Xu, and Linlin Zong. "Emotion-Enhanced Multi-Modal Persuasive Techniques Detection Using Split Features." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2022. https://doi.org/10.3233/faia220382.

Abstract:
The persuasive techniques in propaganda campaigns impact the Internet environment and our society. Detecting persuasive techniques has aroused broad attention in natural language processing field. In this paper, we propose a novel emotion-enhanced and multi-level representation learning approach for multi-modal persuasive techniques detection. To consider the emotional factors used in persuasive techniques, we embed the text and images using different networks, and use a fully connected emotion enhanced layer to fuse multi-modal embedding, where the type and strength of emotions are incorporat

5. Jeswani, Jahanvi, Praveen Kumar Govarthan, Abirami Selvaraj, et al. "Low Valence Low Arousal Stimuli: An Effective Candidate for EEG-Based Biometrics Authentication System." In Caring is Sharing – Exploiting the Value in Data for Health and Innovation. IOS Press, 2023. https://doi.org/10.3233/shti230114.

Abstract:
Electroencephalography (EEG) has recently gained popularity in user authentication systems since it is unique and less impacted by fraudulent interceptions. Although EEG is known to be sensitive to emotions, understanding the stability of brain responses to EEG-based authentication systems is challenging. In this study, we compared the effect of different emotion stimuli for the application in the EEG-based biometrics system (EBS). Initially, we pre-processed audio-visual evoked EEG potentials from the ‘A Database for Emotion Analysis using Physiological Signals’ (DEAP) dataset. A total of 21

Conference papers on the topic "Emotional arousal datasets"

1. Zamani, Farhad, and Retno Wulansari. "Emotion Classification Using 1D-CNN and RNN Based on DEAP Dataset." In 10th International Conference on Natural Language Processing (NLP 2021). Academy and Industry Research Collaboration Center (AIRCC), 2021. https://doi.org/10.5121/csit.2021.112328.

Abstract:
Recently, emotion recognition began to be implemented in the industry and human resource field. Once we can perceive the emotional state of an employee, the employer could gain benefits from it, as they could improve the quality of decision-making regarding their employee. Hence, this subject would become an embryo for emotion recognition tasks in the human resource field. In fact, emotion recognition has become an important topic of research, especially one based on physiological signals, such as EEG. One of the reasons is due to the availability of EEG datasets that can be widely u

2. Donnelly, Patrick, and Shaurya Gaur. "Mood Dynamic Playlist: Interpolating a Musical Path between Emotions Using a KNN Algorithm." In Human Interaction and Emerging Technologies (IHIET-AI 2022): Artificial Intelligence and Future Applications. AHFE International, 2022. https://doi.org/10.54941/ahfe100893.

Abstract:
We often listen to music for its power to change our emotions. Whether selecting music for concentration, tunes for dancing, or lullabies for falling asleep, people often select music based on their desired mood or activity. We propose a method for automatically generating musical playlists that takes the listener on an emotional journey. We represent a playlist as a path of songs through the arousal-valence circumplex space, using existing datasets of songs annotated with affect values. Given a starting and desired affective state, we employ a K-nearest neighbor approach to choose songs th

3. Lorenzo Bautista, John, Yun Kyung Lee, Seungyoon Nam, Chanki Park, and Hyun Soon Shin. "Utilizing Dimensional Emotion Representations in Speech Emotion Recognition." In AHFE 2023 Hawaii Edition. AHFE International, 2023. https://doi.org/10.54941/ahfe1004283.

Abstract:
Speech is a natural way of communication amongst humans and advancements in speech emotion recognition (SER) technology allow further improvement of human-computer interactions (HCI) with speech by understanding human emotions. SER systems are traditionally focused on categorizing emotions into discrete classes. However, discrete classes often overlook some subtleties between each emotion as they are prone to individual differences and cultures. In this study, we focused on the use of dimensional emotional values: valence, arousal, and dominance as outputs for an SER instead of the traditional

4. Kisu, Keisuke, Motohiro Kozawa, and Keiichi Watanuki. "Generating Paintings Eliciting Specific Emotions Using Machine Learning for Application in Painting Therapy." In 15th International Conference on Applied Human Factors and Ergonomics (AHFE 2024). AHFE International, 2024. https://doi.org/10.54941/ahfe1004673.

Abstract:
This study aims to label paintings based on biometric information and generate paintings that elicit specific emotions using machine learning. To create the dataset, we conduct experiments with eight participants using multi physiological measurement sensors. We focus on the arousal axis of emotion and use the skin conductance response as a measure of arousal. The results suggest that machine learning may be effective in generating paintings that elicit emotions because features related to arousal, such as brightness and color, can be appropriately learned.

5. Guder, Larissa, João Paulo Aires, Felipe Meneguzzi, and Dalvan Griebler. "Dimensional Speech Emotion Recognition from Bimodal Features." In Simpósio Brasileiro de Computação Aplicada à Saúde. Sociedade Brasileira de Computação - SBC, 2024. https://doi.org/10.5753/sbcas.2024.2779.

Abstract:
Considering the human-machine relationship, affective computing aims to allow computers to recognize or express emotions. Speech Emotion Recognition is a task from affective computing that aims to recognize emotions in an audio utterance. The most common way to predict emotions from the speech is using pre-determined classes in the offline mode. In that way, emotion recognition is restricted to the number of classes. To avoid this restriction, dimensional emotion recognition uses dimensions such as valence, arousal, and dominance to represent emotions with higher granularity. Existing approach

6. Guder, Larissa, João Paulo Aires, and Dalvan Griebler. "Dimensional Speech Emotion Recognition: A Bimodal Approach." In Anais Estendidos do Simpósio Brasileiro de Sistemas Multimídia e Web. Sociedade Brasileira de Computação - SBC, 2024. https://doi.org/10.5753/webmedia_estendido.2024.244402.

Abstract:
Considering the human-machine relationship, affective computing aims to allow computers to recognize or express emotions. Speech Emotion Recognition is a task from affective computing that aims to recognize emotions in an audio utterance. The most common way to predict emotions from the speech is using pre-determined classes in the offline mode. In that way, emotion recognition is restricted to the number of classes. To avoid this restriction, dimensional emotion recognition uses dimensions such as valence, arousal, and dominance, which can represent emotions with higher granularity. Existing

7. Yang, Yong, Chuan Liu, and Qingshan Wu. "Cross-Dataset Facial Expression Recognition Based on Arousal-Valence Emotion Model and Transfer Learning Method." In 2017 International Conference on Mechanical, Electronic, Control and Automation Engineering (MECAE 2017). Atlantis Press, 2017. https://doi.org/10.2991/mecae-17.2017.24.

8. Tuan Nguyen Thien, Minh, Minh Anh Nguyen Duc, and Kenneth Y. T. Lim. "Designing for the Investigation of Microclimate Stressors and Physiological and Neurological Responses from the Perspective of Maker Culture." In 10th International Conference on Human Interaction and Emerging Technologies (IHIET 2023). AHFE International, 2023. https://doi.org/10.54941/ahfe1004042.

Abstract:
The 2021 United Nations Climate Change Conference (COP26) resulted in the Glasgow Climate Pact. Initial work in the study reported in this paper investigated relationships between environment and physiological measurements using smartwatches, and self-designed bespoke environmental modules which are wearable around the waist. Data from this initial phase was analysed with a Random Forest regression model. The next phase of this project involves neurophysiological measurement, specifically electroencephalography (EEG). EEG was introduced to the model to explore how the changes in environmental