Journal articles on the topic 'Emotions in music'

Consult the top 50 journal articles for your research on the topic 'Emotions in music.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Rivas, Marcelo B. S., Agnes F. C. Cruvinel, Daniele P. Sacardo, Daniel U. C. Schubert, Mariana Bteshe, and Marco A. de Carvalho-Filho. "All You Need Is Music: Supporting Medical Students’ Emotional Development With a Music-Based Pedagogy." Academic Medicine 99, no. 7 (March 22, 2024): 741–44. http://dx.doi.org/10.1097/acm.0000000000005709.

Abstract:
Problem: Although the practice of medicine is often emotionally challenging, medical curricula seldom systematically address the emotional development of medical students. To fill this gap, the authors developed and evaluated an innovative pedagogical activity based on music to nurture medical students' emotional development. The authors believe that the metaphoric nature of music offers an efficient venue for exploring emotion perception, expression, and regulation. Approach: The pedagogical activity Emotions in Medicine was carried out throughout 2020 and 2021 and consisted of 4 encounters exploring: (1) emotion perception, (2) emotion expression, (3) emotion regulation, and (4) the role of emotions in medical practice. During all encounters, the authors used music to evoke students' emotions and focused the discussions on the relevance of emotions for meaningful medical practice. Emotional intelligence before and after the workshop was tested using the Schutte Self-Report Emotional Intelligence Test (SSEIT), a validated psychometric scale. Outcomes: The workshop facilitated emotional connection among students and created a safe space to explore the role of emotions in medical practice. The mean total pretest SSEIT score was 110 (SD = 14.2); it increased to 116.8 (SD = 16.1) in the posttest (P < .001). This increase held across the test's 4 dimensions: (1) perception of emotions, (2) management of own emotions, (3) management of others' emotions, and (4) use of emotions. Next Steps: Music can be an active tool to explore the role of emotions in medical practice. It fosters students' capacity to identify and reflect on emotions while exploring their role in patient care. Further (qualitative) research is needed to explore the mechanisms by which music facilitates learning emotion perception, expression, and regulation.
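As an illustration of the pre/post comparison reported above, here is a minimal sketch of a paired-samples t-test on SSEIT-style scores. The sample size, gain, and spread are made-up stand-ins, not the study's data or analysis code.

```python
# A minimal sketch, on synthetic numbers, of a paired pre/post SSEIT
# comparison like the one reported above; not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_students = 53                                  # hypothetical sample size
pre = rng.normal(110, 14.2, n_students)          # pretest SSEIT scores
post = pre + rng.normal(6.8, 8.0, n_students)    # posttest gain around 6.8

t, p = stats.ttest_rel(post, pre)                # paired-samples t-test
print(f"pre={pre.mean():.1f}, post={post.mean():.1f}, t={t:.2f}, p={p:.2g}")
```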
2

Oyeniyi, Gabriel Ademola. "EMOTIONAL SOUNDTRACK: INFLUENCE OF MUSIC COMPOSERS ON AUDIENCE EMOTION." Shodh Sari-An International Multidisciplinary Journal 03, no. 01 (January 1, 2024): 394–410. http://dx.doi.org/10.59231/sari7678.

Abstract:
Music has the unrivalled ability to elicit emotions and change human experiences. This study explores the complex interaction between music composition, attendance behaviour, and mood during musical events. Music’s significant effect on human emotions has been the focus of much study and intrigue. Music composers use the emotional power of music to elicit profound reactions from their audiences. In the context of soundtracks, this study explores the complex link between music creators and listeners’ emotions. This study investigates the methods, plans, and underlying psychological processes that composers use to affect the emotional states of their audience by carefully examining previous research and studies on music, emotion, and soundtracks. The study’s methodology is based on an extensive literature assessment, empirical investigations, and theoretical frameworks exploring the mutual relationship between music and emotion. It examines how different musical components, including instrumentation, rhythm, melody, and harmony, can influence listeners’ emotions over time. Furthermore, the study explores how the musical element interacts with contextual factors in cinematic narratives and visual cues to enhance emotional engagement. The study’s findings demonstrate how emotions and music interact with soundtracks. Composers utilize various strategies to alter and shape the listener’s emotions to fit music to the intended emotional arc of the lyrics, pitch, storyline, and melodies. This study synthesizes the plethora of data and ideas from studies on music, emotion, and soundtrack to further the understanding of the complicated relationship between music and emotion. It also highlights the artistic and mental prowess of composers who employ music as a powerful instrument to evoke intense emotional reactions in their listeners. By shedding further light on the impact composers have on the emotional landscape of music and the cinematic experience, this study contributes to the ongoing discussion regarding the art and science of music composition and how it impacts audience emotion.
3

Zhou, Tie Hua, Wenlong Liang, Hangyu Liu, Ling Wang, Keun Ho Ryu, and Kwang Woo Nam. "EEG Emotion Recognition Applied to the Effect Analysis of Music on Emotion Changes in Psychological Healthcare." International Journal of Environmental Research and Public Health 20, no. 1 (December 26, 2022): 378. http://dx.doi.org/10.3390/ijerph20010378.

Abstract:
Music therapy is increasingly being used to promote physical health. Emotion semantic recognition is more objective and provides direct awareness of the real emotional state based on electroencephalogram (EEG) signals. Therefore, we proposed a music therapy method to carry out emotion semantic matching between the EEG signal and music audio signal, which can improve the reliability of emotional judgments, and, furthermore, deeply mine the potential influence correlations between music and emotions. Our proposed EER model (EEG-based Emotion Recognition Model) could identify 20 types of emotions based on 32 EEG channels, and the average recognition accuracy was above 90% and 80%, respectively. Our proposed music-based emotion classification model (MEC model) could classify eight typical emotion types of music based on nine music feature combinations, and the average classification accuracy was above 90%. In addition, the semantic mapping was analyzed according to the influence of different music types on emotional changes from different perspectives based on the two models, and the results showed that the joy type of music video could improve fear, disgust, mania, and trust emotions into surprise or intimacy emotions, while the sad type of music video could reduce intimacy to the fear emotion.
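For readers unfamiliar with EEG-based emotion recognition, the following is a hedged sketch of one common approach: band-power features from 32 channels fed to a generic classifier. This is an assumed, simplified pipeline on synthetic signals, not the authors' EER model.

```python
# Assumed, simplified EEG emotion pipeline (not the paper's EER model):
# per-channel band power as features, then an off-the-shelf classifier.
import numpy as np
from scipy.signal import welch
from sklearn.ensemble import RandomForestClassifier

def band_power(eeg, fs=128, band=(8, 13)):
    """Mean power in a frequency band (e.g., alpha) per channel."""
    freqs, psd = welch(eeg, fs=fs, axis=-1)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[..., mask].mean(axis=-1)

# eeg_trials: (n_trials, 32 channels, n_samples); labels: emotion ids
rng = np.random.default_rng(1)
eeg_trials = rng.standard_normal((200, 32, 1024))
labels = rng.integers(0, 20, 200)            # 20 hypothetical emotion types

bands = [(4, 8), (8, 13), (13, 30), (30, 45)]    # theta, alpha, beta, gamma
X = np.hstack([band_power(eeg_trials, band=b) for b in bands])
clf = RandomForestClassifier(n_estimators=200).fit(X, labels)
```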
4

Vuoskoski, Jonna K., and Tuomas Eerola. "Measuring music-induced emotion." Musicae Scientiae 15, no. 2 (July 2011): 159–73. http://dx.doi.org/10.1177/1029864911403367.

Abstract:
Most previous studies investigating music-induced emotions have applied emotion models developed in other fields to the domain of music. The aim of this study was to compare the applicability of music-specific and general emotion models – namely the Geneva Emotional Music Scale (GEMS), and the discrete and dimensional emotion models – in the assessment of music-induced emotions. A related aim was to explore the role of individual difference variables (such as personality and mood) in music-induced emotions, and to discover whether some emotion models reflect these individual differences more strongly than others. One hundred and forty-eight participants listened to 16 film music excerpts and rated the emotional responses evoked by the music excerpts. Intraclass correlations and Cronbach alphas revealed that the overall consistency of ratings was the highest in the case of the dimensional model. The dimensional model also outperformed the other two models in the discrimination of music excerpts, and principal component analysis revealed that 89.9% of the variance in the mean ratings of all the scales (in all three models) was accounted for by two principal components that could be labelled as valence and arousal. Personality-related differences were the most pronounced in the case of the discrete emotion model. Personality, mood, and the emotion model used were also associated with the intensity of experienced emotions. Implications for future music and emotion studies are raised concerning the selection of an appropriate emotion model when measuring music-induced emotions.
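The dimensional finding above (two components explaining 89.9% of rating variance) comes from a standard principal component analysis. A hedged sketch of that kind of computation, on synthetic stand-in ratings rather than the study's data:

```python
# Sketch of PCA over mean emotion-scale ratings of music excerpts
# (synthetic stand-in data, not the study's ratings).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
# rows = 16 excerpts, columns = rating scales pooled from all three models
mean_ratings = rng.random((16, 20))

pca = PCA(n_components=2)
scores = pca.fit_transform(mean_ratings)         # excerpt coordinates
explained = pca.explained_variance_ratio_.sum()
print(f"two components explain {explained:.1%} of variance")
# with real rating data, the two components can often be interpreted
# as valence and arousal, as the abstract above reports
```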
5

London, Justin. "Some theories of emotion in music and their implications for research in music psychology." Musicae Scientiae 5, no. 1_suppl (September 2001): 23–36. http://dx.doi.org/10.1177/10298649020050s102.

Abstract:
Work in musical aesthetics on musical meaning is relevant to psychological research on musical expressions of emotion. Distinctions between simple emotions, higher emotions, and moods are given, and arguments as to what kinds of emotions or moods music might be able to express (given music's semantic capacities and limitations) are summarized. Next, the question as to how music might express these emotions and moods is considered. The paper concludes with a number of cautionary points for researchers in the psychology of musical emotion: (1) musical expression always involves sonic properties, which must be taken into account. (2) If one uses “real world” musical stimuli, one may be faced with associative interference. (3) Context will often individuate emotional expression, transforming a simple emotion to a higher emotion by providing an intentional object. (4) There is not a simple linear relationship between intensity of a musical parameter and the intensity of an emotional expression. (5) Some perfectly good musical expressions of emotion may not arouse those emotions in the listener, yet it would be incorrect to call such passages “inexpressive.” (6) Any emotions aroused by listening to music, while similar to emotions that occur in non-musical contexts, will nonetheless have a number of important differences.
6

Vuoskoski, Jonna K., and Tuomas Eerola. "Measuring Music-Induced Emotion: A Comparison of Emotion Models, Personality Biases, and Intensity of Experiences." Musicae Scientiae 15, no. 2 (July 2011): 159–73. http://dx.doi.org/10.1177/102986491101500203.

Abstract:
Most previous studies investigating music-induced emotions have applied emotion models developed in other fields to the domain of music. The aim of this study was to compare the applicability of music-specific and general emotion models – namely the Geneva Emotional Music Scale (GEMS), and the discrete and dimensional emotion models – in the assessment of music-induced emotions. A related aim was to explore the role of individual difference variables (such as personality and mood) in music-induced emotions, and to discover whether some emotion models reflect these individual differences more strongly than others. One hundred and forty-eight participants listened to 16 film music excerpts and rated the emotional responses evoked by the music excerpts. Intraclass correlations and Cronbach alphas revealed that the overall consistency of ratings was the highest in the case of the dimensional model. The dimensional model also outperformed the other two models in the discrimination of music excerpts, and principal component analysis revealed that 89.9% of the variance in the mean ratings of all the scales (in all three models) was accounted for by two principal components that could be labelled as valence and arousal. Personality-related differences were the most pronounced in the case of the discrete emotion model. Personality, mood, and the emotion model used were also associated with the intensity of experienced emotions. Implications for future music and emotion studies are raised concerning the selection of an appropriate emotion model when measuring music-induced emotions.
7

Rauduvaitė, Asta, and Zhiyu Yao. "THE ROLE OF EMOTIONS IN MUSIC EDUCATION: THEORETICAL INSIGHTS." SOCIETY. INTEGRATION. EDUCATION. Proceedings of the International Scientific Conference 1 (July 3, 2023): 491–502. http://dx.doi.org/10.17770/sie2023vol1.7078.

Abstract:
Emotional expression has been the focus of teachers and educational researchers, as it can result in an improvement in cognitive performance. In specific settings, personal and emotional experiences can provide a stepping stone to developmental and learning processes. Emotions significantly influence learners' learning and play a crucial role in quality teaching, educational reform, and learner-teacher interaction. The inherent social and communicative nature of music makes group training an excellent tool for increasing the coordination of behaviour, affect, and mental states among children. This paper aims to explore the literature on various aspects of the concept of emotions in the context of music education, with the main focus on opportunities for experiencing and expressing emotion in music education, learners' positive emotion experiences in music education, teaching to generate positive emotion outcomes, and the benefits of a greater emphasis on emotions in music education. The results of the theoretical analysis indicate that music education has a particularly positive effect on identifying emotions, emotion regulation, emotion recognition, improved learning, and self-expression.
8

Silva, I. Carneiro, A. Gouveia, G. Dalagna, J. M. Oliveira, P. Carvalho, R. Costa, and J. Gama. "Music and emotion." European Psychiatry 64, S1 (April 2021): S671—S672. http://dx.doi.org/10.1192/j.eurpsy.2021.2018.

Abstract:
Introduction: Music has been said to be emotion's language, and research confirms a link between music structure and triggered emotions. Objectives: To assess the relationship between selected music excerpts and the emotions they trigger, so that the excerpts can be used in future research. Methods: An anonymous study was performed in April 2019 on 65 subjects of both sexes, aged 19–33 (mean = 21.09; SD = 3.05). Subjects listened to 4 music excerpts, believed to be related either to excitement or to calmness, and answered a questionnaire on the emotions triggered by each exposure. Results: For the music excerpts believed to induce excitement, 80% of the subjects reported exciting emotions, 78% enjoyed the music, and 78% did not know it. For those believed to induce calmness, 69% of the subjects reported calm emotions, 84% enjoyed the music, and 62% did not know it. In one excerpt related to calmness, we observed an association between knowing the music and the emotion triggered (p = 0.027). The triggered emotion responses were independent of liking the music (p > 0.05). Conclusions: In our study, independent of liking the music, participants reported perceiving the expected emotions triggered by the musical excerpts, showing this to be a phenomenon related to music structure. Calmness perception may also be influenced by previous knowledge of the music and related experiences. The role of individual perceptions will be examined in subsequent studies. Disclosure: No significant relationships.
9

Syarifani, Nara. "Implikasi Music therapy sebagai bentuk katarsis dan relaksasi emosi (Implications of Music Therapy as a Form of Catharsis and Emotional Relaxation)." Happiness: Journal of Psychology and Islamic Science 8, no. 1 (June 14, 2024): 1–11. http://dx.doi.org/10.30762/happiness.v8i1.2136.

Abstract:
Humans have a fundamental need to express and manage their emotions. Music therapy, a therapeutic intervention that utilizes music, offers significant potential in helping individuals achieve emotional balance. Research suggests that music therapy can serve as a form of emotional catharsis and relaxation, providing benefits to individuals' mental health and well-being. This research used a comprehensive literature review to analyse a range of scientific studies examining the effectiveness of music therapy in the context of emotional catharsis and relaxation. Data were collected from various sources, including academic journals, research databases, and scholarly books. The literature review showed that music therapy has significant potential in facilitating the processes of emotional catharsis and relaxation. Through various mechanisms, such as emotional stimulation, stress reduction, and improved emotion regulation, music therapy can help individuals express and manage their emotions in a healthy manner. Research shows that music therapy can provide positive mental health benefits, such as the reduction of anxiety, depression, and trauma, as well as improved well-being and quality of life. As a form of emotional catharsis and relaxation, music therapy offers significant potential to improve the mental health and well-being of individuals.
10

Cook, Terence, Ashlin R. K. Roy, and Keith M. Welker. "Music as an emotion regulation strategy: An examination of genres of music and their roles in emotion regulation." Psychology of Music 47, no. 1 (October 26, 2017): 144–54. http://dx.doi.org/10.1177/0305735617734627.

Abstract:
Research suggests that people frequently use music to regulate their emotions. However, little is known about what kinds of music may regulate affective states. To investigate this, we examined how the music preferences of 794 university students were associated with their use of music to regulate emotions. We found that preferences for pop, rap/hip-hop, soul/funk, and electronica/dance music were positively associated with using music to increase emotional arousal. Soul/funk music preferences were also positively associated with using music for up-regulating positive emotionality and down-regulating negative emotionality. More broadly, energetic and rhythmic music was positively associated with using all examined forms of musical emotion regulation, suggesting this dimension of music is especially useful in modulating emotions. These results highlight the potential use of music as a tool for emotion regulation. Future research can extend our findings by examining the efficacy of different types of music at modulating emotional states.
11

Trevor, Caitlyn, Marina Renner, and Sascha Frühholz. "Acoustic and structural differences between musically portrayed subtypes of fear." Journal of the Acoustical Society of America 153, no. 1 (January 2023): 384–99. http://dx.doi.org/10.1121/10.0016857.

Abstract:
Fear is a frequently studied emotion category in music and emotion research. However, research in music theory suggests that music can convey finer-grained subtypes of fear, such as terror and anxiety. Previous research on musically expressed emotions has neglected to investigate subtypes of fearful emotions. This study seeks to fill this gap in the literature. To that end, 99 participants rated the emotional impression of short excerpts of horror film music predicted to convey terror and anxiety, respectively. Then, the excerpts that most effectively conveyed these target emotions were analyzed descriptively and acoustically to demonstrate the sonic differences between musically conveyed terror and anxiety. The results support the hypothesis that music conveys terror and anxiety with markedly different musical structures and acoustic features. Terrifying music has a brighter, rougher, harsher timbre, is musically denser, and may be faster and louder than anxious music. Anxious music has a greater degree of loudness variability. Both types of fearful music tend towards minor modalities and are rhythmically unpredictable. These findings further support the application of emotional granularity in music and emotion research.
12

Burger, Birgitta, Suvi Saarikallio, Geoff Luck, Marc R. Thompson, and Petri Toiviainen. "Relationships Between Perceived Emotions in Music and Music-induced Movement." Music Perception 30, no. 5 (December 2012): 517–33. http://dx.doi.org/10.1525/mp.2013.30.5.517.

Abstract:
Listening to music makes us move in various ways. Several factors can affect the characteristics of these movements, including individual factors and musical features. Additionally, music-induced movement may also be shaped by the emotional content of the music, since emotions are an important element of musical expression. This study investigates possible relationships between emotional characteristics of music and music-induced, quasi-spontaneous movement. We recorded music-induced movement of 60 individuals, and computationally extracted features from the movement data. Additionally, the emotional content of the stimuli was assessed in a perceptual experiment. A subsequent correlational analysis revealed characteristic movement features for each emotion, suggesting that the body reflects emotional qualities of music. The results show similarities to movements of professional musicians and dancers, and to emotion-specific nonverbal behavior in general, and could furthermore be linked to notions of embodied music cognition. The valence and arousal ratings were subsequently projected onto polar coordinates to further investigate connections between the emotions of Russell's (1980) circumplex model and the movement features.
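The final step mentioned above, projecting valence and arousal ratings onto polar coordinates, amounts to a simple coordinate transform. A minimal sketch with synthetic ratings (not the study's data):

```python
# Projecting valence/arousal ratings onto polar coordinates, as
# described above (synthetic per-stimulus ratings, not the study data).
import numpy as np

rng = np.random.default_rng(3)
valence = rng.uniform(-1, 1, 30)      # per-stimulus mean valence ratings
arousal = rng.uniform(-1, 1, 30)      # per-stimulus mean arousal ratings

angle = np.arctan2(arousal, valence)  # position on the circumplex circle
radius = np.hypot(valence, arousal)   # distance from neutral = intensity
print(np.degrees(angle[:5]), radius[:5])
```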
13

Janowski, Maciej, and Maria Chełkowska-Zacharewicz. "What do we actually measure as music-induced emotions?" Roczniki Psychologiczne 22, no. 4 (June 29, 2020): 373–403. http://dx.doi.org/10.18290/rpsych.2019.22.4-5.

Abstract:
The paper presents the results of a systematic review of 61 empirical studies in which emotions in response to music were measured. The analysis of each study was focused on the measurement of emotion components and the conceptualization of emotion both in hypothesis and discussion. The review does not support the claim that music evokes the same emotional reactions as life events do, especially modal emotions. Notably, neither a high intensity of feelings, nor intentionality were confirmed in relation to musical experiences, the emergence of specific action tendencies, or specific physiological changes. Based on the obtained results, it is recommended to use the terms “affect” or “music emotions” with reference to emotions experienced in reaction to music and to abandon the term “emotions” as misleading.
14

Tabei, Ken-ichi. "Inferior Frontal Gyrus Activation Underlies the Perception of Emotions, While Precuneus Activation Underlies the Feeling of Emotions during Music Listening." Behavioural Neurology 2015 (2015): 1–6. http://dx.doi.org/10.1155/2015/529043.

Abstract:
While music triggers many physiological and psychological reactions, the underlying neural basis of perceived and experienced emotions during music listening remains poorly understood. Therefore, using functional magnetic resonance imaging (fMRI), I conducted a comparative study of the different brain areas involved in perceiving and feeling emotions during music listening. I measured fMRI signals while participants assessed the emotional expression of music (perceived emotion) and their emotional responses to music (felt emotion). I found that cortical areas including the prefrontal, auditory, cingulate, and posterior parietal cortices were consistently activated by the perceived and felt emotional tasks. Moreover, activity in the inferior frontal gyrus increased more during the perceived emotion task than during a passive listening task. In addition, the precuneus showed greater activity during the felt emotion task than during a passive listening task. The findings reveal that the bilateral inferior frontal gyri and the precuneus are important areas for the perception of the emotional content of music as well as for the emotional response evoked in the listener. Furthermore, I propose that the precuneus, a brain region associated with self-representation, might be involved in assessing emotional responses.
15

Wang, Xiao. "Research of Music Retrieval System Based on Emotional Music Template." Applied Mechanics and Materials 644-650 (September 2014): 3020–23. http://dx.doi.org/10.4028/www.scientific.net/amm.644-650.3020.

Abstract:
Traditional music retrieval systems based on textual descriptions cannot meet users' demand for intelligent retrieval, which has motivated content-based music retrieval methods. This paper introduces emotional needs into retrieval and studies emotion-based music retrieval methods. It first constructs a music emotion space to capture the user's emotions; it then proposes an emotional music template library, based on a definition of the emotional music model, to match templates to users' emotional needs; finally, on this basis, it advances a music retrieval system model based on emotional music templates, exploring an effective emotion-based retrieval method.
16

Khusna, Febriana Aminatul, and Sekar Lathifatul Aliyah. "Emotions Evoked from “Too Good at Goodbyes” Song by Sam Smith." Jambura Journal of English Teaching and Literature 1, no. 2 (December 30, 2020): 101–12. http://dx.doi.org/10.37905/jetl.v1i2.7309.

Abstract:
Music is one means of expressing the soul. Sad-genre music carries high emotional pressure, and this high emotional pressure can trigger emotions in someone who hears it. According to Sloboda and Juslin (2001), music can induce emotions in its listeners and is perceived by listeners as expressive of emotion. The aim of this study was to investigate the influence of sad-genre music on emotions, using the song "Too Good at Goodbyes" by Sam Smith. The researchers found that the majority of respondents revealed that sad music can trigger emotions in the soul. Descriptive qualitative research with a questionnaire method was used in this study to obtain valid data from the participants.
17

Xu, Xin, Hui Guan, Zhen Liu, and Bo Jun Wang. "EEG-Based Music Mood Analysis and Applications." Advanced Materials Research 712-715 (June 2013): 2726–30. http://dx.doi.org/10.4028/www.scientific.net/amr.712-715.2726.

Abstract:
Music is known to be a powerful elicitor of emotions. Music with different moods induces various emotions, each of which corresponds to a certain pattern of EEG signals. In this paper, based on current music mood categories, we discuss how music belonging to different mood types affects the pattern of EEG activity. We review several studies verifying that certain EEG characteristics differ from one another when induced by different types of music. Such differences make it possible to recognize emotion through EEG signals. We also introduce some applications of emotional music, such as the improvement of human emotions and the adjuvant treatment of diseases.
18

Varade, Apurva. "A Review on Life Cycle Assessment of Solar PV Panel." International Journal for Research in Applied Science and Engineering Technology 9, no. VI (June 14, 2021): 941–44. http://dx.doi.org/10.22214/ijraset.2021.35134.

Abstract:
Humans tend to connect the music they hear to the emotion they are feeling. Song playlists, though, are at times too large to sort out manually. It would be helpful if the music player were "smart enough" to sort the music based on the current emotional state of the user. The main idea of this project is to automatically play songs based on the emotions of the user: based on the detected emotion, music is played from a predefined playlist. It aims to deliver user-preferred music with emotional attentiveness. In the existing system, the user has to manually select songs, randomly played songs may not match the user's mood, and the user has to classify the songs into various emotions and manually choose a particular emotion to play them. These difficulties can be avoided by using our project, a novel approach that automatically plays songs based on the emotions of the user. It recognizes the user's facial emotions and plays songs matching that emotion. The emotions are recognized using a machine learning method, the Support Vector Machine (SVM) algorithm. The human face is an important organ of an individual's body, and it plays an especially important role in conveying an individual's behaviours and emotional state.
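As a rough illustration of the SVM-based design this abstract describes, here is a hedged sketch that maps a predicted facial-emotion label to a playlist. The feature vectors, labels, and playlist names are hypothetical stand-ins, not the project's code.

```python
# Hedged sketch (assumed pipeline): classify a face's emotion with an
# SVM and map the predicted label to a playlist. Data is synthetic.
import numpy as np
from sklearn.svm import SVC

EMOTION_PLAYLISTS = {0: "happy_songs", 1: "sad_songs", 2: "angry_songs"}

rng = np.random.default_rng(4)
X_train = rng.random((300, 128))    # stand-in facial feature vectors
y_train = rng.integers(0, 3, 300)   # stand-in emotion labels

clf = SVC(kernel="rbf").fit(X_train, y_train)

def playlist_for(face_features: np.ndarray) -> str:
    """Predict the user's emotion and return the matching playlist."""
    emotion = int(clf.predict(face_features.reshape(1, -1))[0])
    return EMOTION_PLAYLISTS[emotion]

print(playlist_for(rng.random(128)))
```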
19

Robazza, Claudio, Cristina Macaluso, and Valentina D'Urso. "Emotional Reactions to Music by Gender, Age, and Expertise." Perceptual and Motor Skills 79, no. 2 (October 1994): 939–44. http://dx.doi.org/10.2466/pms.1994.79.2.939.

Abstract:
Fragments of classical music were submitted to 80 subjects, 40 children 9 to 10 years old and 40 adults 19 to 29 years old, divided into eight groups of ten, to induce feelings of happiness, sadness, anger, and fear. The task required linking each piece of music to one emotion and identifying at the same time the intensity of the emotional response on a scale of 1 to 3. The goal was to study how gender, age, and exposure or expertise related to emotional perceptions of music. Analysis showed (a) experts in music and nonexperts ascribed similar emotions to pieces of music, (b) there was no difference in emotional response to music by gender, although women linked stronger emotions of anger to music than girls did, (c) children perceived greater feelings of happiness and lesser feelings of anger in music than adults, and (d) emotions of anger and fear in music were often confused with one another.
20

Churi, Praharsh. "Emotion-Based Music Generation Based on Nava Rasas in Indian Classical Music." International Journal for Research in Applied Science and Engineering Technology 11, no. 11 (November 30, 2023): 2003–7. http://dx.doi.org/10.22214/ijraset.2023.56981.

Abstract:
A crucial component of human expression, music has the astonishing power to provoke a wide range of emotions. In this study, we describe a revolutionary method for creating music that integrates Indian classical music, computer vision, and emotion analysis. In our project, named "Emotion-Based Music Generation," we use MediaPipe for facial expression recognition, Keras for music creation, OpenCV for real-time webcam access, and Streamlit-WebRTC for web application development. Based on the Nava Rasas in Indian classical music, the technique isolates nine fundamental emotions and develops musical compositions accordingly. These technologies are used to create an enjoyable and interactive system that allows users to explore the emotional spectrum of music.
21

He, Jing-Xian, Li Zhou, Zhen-Tao Liu, and Xin-Yue Hu. "Digital Empirical Research of Influencing Factors of Musical Emotion Classification Based on Pleasure-Arousal Musical Emotion Fuzzy Model." Journal of Advanced Computational Intelligence and Intelligent Informatics 24, no. 7 (December 20, 2020): 872–81. http://dx.doi.org/10.20965/jaciii.2020.p0872.

Abstract:
In recent years, with further breakthroughs in artificial intelligence theory and technology and the continued expansion of the Internet, future artificial intelligence development has highlighted the recognition of human emotions and the satisfaction of human psychological needs, in addition to the accomplishment of physical tasks. Musical emotion classification is an important research topic in artificial intelligence, and its key premise is to construct a musical emotion model that conforms to the characteristics of music emotion recognition. Currently, three types of music emotion classification models are available: discrete category, continuous dimensional, and music emotion-specific models. The pleasure-arousal music emotion fuzzy model, which covers a wide range of emotions compared with other models, is selected as the emotional classification system in this study to investigate the influencing factors of musical emotion classification. Two representative emotional attributes, i.e., speed and strength, are used as variables. Based on test experiments involving music and non-music majors combined with questionnaire results, the relationship between music properties and emotional changes under the pleasure-arousal model is revealed quantitatively.
22

Resnicow, Joel E., Peter Salovey, and Bruno H. Repp. "Is Recognition of Emotion in Music Performance an Aspect of Emotional Intelligence?" Music Perception 22, no. 1 (2004): 145–58. http://dx.doi.org/10.1525/mp.2004.22.1.145.

Abstract:
Expression of emotion in music performance is a form of nonverbal communication to which people may be differentially receptive. The recently developed Mayer-Salovey-Caruso Emotional Intelligence Test assesses individual differences in the ability to identify, understand, reason with, and manage emotions using hypothetical scenarios that are conveyed pictorially or in writing. The test currently does not include musical or spoken items. We asked 24 undergraduates to complete both that test and a listening test in which they tried to identify the intended emotions in performances of classical piano music. Emotional intelligence and emotion recognition in the music task were significantly correlated (r = .54), which suggests that identification of emotion in music performance draws on some of the same sensibilities that make up everyday emotional intelligence.
23

Farmer, Eliot, Crescent Jicol, and Karin Petrini. "Musicianship Enhances Perception But Not Feeling of Emotion From Others’ Social Interaction Through Speech Prosody." Music Perception 37, no. 4 (March 11, 2020): 323–38. http://dx.doi.org/10.1525/mp.2020.37.4.323.

Abstract:
Music expertise has been shown to enhance emotion recognition from speech prosody. Yet, it is currently unclear whether music training enhances the recognition of emotions through other communicative modalities such as vision and whether it enhances the feeling of such emotions. Musicians and nonmusicians were presented with visual, auditory, and audiovisual clips consisting of the biological motion and speech prosody of two agents interacting. Participants judged as quickly as possible whether the expressed emotion was happiness or anger, and subsequently indicated whether they also felt the emotion they had perceived. Measures of accuracy and reaction time were collected from the emotion recognition judgements, while yes/no responses were collected as indication of felt emotions. Musicians were more accurate than nonmusicians at recognizing emotion in the auditory-only condition, but not in the visual-only or audiovisual conditions. Although music training enhanced recognition of emotion through sound, it did not affect the felt emotion. These findings indicate that emotional processing in music and language may use overlapping but also divergent resources, or that some aspects of emotional processing are less responsive to music training than others. Hence music training may be an effective rehabilitative device for interpreting others’ emotion through speech.
24

Juslin, Patrik N., Laura S. Sakka, Gonçalo T. Barradas, and Olivier Lartillot. "Emotions, Mechanisms, and Individual Differences in Music Listening." Music Perception 40, no. 1 (September 1, 2022): 55–86. http://dx.doi.org/10.1525/mp.2022.40.1.55.

Abstract:
Emotions have been found to play a paramount role in both everyday music experiences and health applications of music, but the applicability of musical emotions depends on: 1) which emotions music can induce, 2) how it induces them, and 3) how individual differences may be explained. These questions were addressed in a listening test, where 44 participants (aged 19–66 years) reported both felt emotions and subjective impressions of emotion mechanisms (Mec Scale), while listening to 72 pieces of music from 12 genres, selected using a stratified random sampling procedure. The results showed that: 1) positive emotions (e.g., happiness) were more prevalent than negative emotions (e.g., anger); 2) Rhythmic entrainment was the most and Brain stem reflex the least frequent of the mechanisms featured in the BRECVEMA theory; 3) felt emotions could be accurately predicted based on self-reported mechanisms in multiple regression analyses; 4) self-reported mechanisms predicted felt emotions better than did acoustic features; and 5) individual listeners showed partly different emotion-mechanism links across stimuli, which may help to explain individual differences in emotional responses. Implications for future research and applications of musical emotions are discussed.
25

Ma, Jiajia. "Emotional Expression and Analysis in Music Performance Based on Edge Computing." Mobile Information Systems 2022 (September 5, 2022): 1–12. http://dx.doi.org/10.1155/2022/4856977.

Abstract:
The expression of emotion in music performance is the soul of music: the emotion revealed by the performer during the performance can bring emotional resonance to the audience. The emotions music expresses, such as joy, anger, and sadness, are the meaning of music's existence; music without emotion is lifeless. The music itself, however, has no emotion at all; it is just regular sound, so the emotional reading of a music performance is very important. Music performance is an interpretation of music, and it is one of the most important media for human emotional information and communication. Through the appreciation of works that express the author's emotions, different forms of performance with musical instruments, dance, and singing bring emotional resonance to the audience. Edge computing is a core technology and edge node of the Internet of everything in the new era, and it is constantly innovating with the rapid development of computers and the great changes they have brought. Nowadays, people's demand for emotional information processing of music performances has also increased, and research attention to music performance and its application technology has grown rapidly, so the requirements for human-machine interaction are getting higher and higher. With the increasing maturity of multimedia and communication technologies, there is an increasing expectation of using computers to express human thoughts and emotions. Combining the two, the expression and analysis of musical emotion through edge computing has also seen new developments; for example, people upload or share music and dance videos with their friends through WeChat, QQ, Douyin, etc., which greatly enriches people's emotional world. The analysis and judgment of musical emotion is a subject jointly developed by musicology and psychological research, and the goals of music emotion research can also be achieved with tools from computer science and artificial intelligence. With the advancement of science and technology and the vigorous development of computer applications, the emotional expression and analysis of music can now be carried out with the help of computers. However, the amount of data generated is extremely large, and using edge servers for data processing can improve the efficiency of analysis and processing to meet people's needs.
26

Wang, Shu, Chonghuan Xu, Austin Shijun Ding, and Zhongyun Tang. "A Novel Emotion-Aware Hybrid Music Recommendation Method Using Deep Neural Network." Electronics 10, no. 15 (July 24, 2021): 1769. http://dx.doi.org/10.3390/electronics10151769.

Abstract:
Emotion-aware music recommendation has gained increasing attention in recent years, as music comes with the ability to regulate human emotions. Exploiting emotional information has the potential to improve recommendation performance. However, conventional studies identified emotion as discrete representations, and could not predict users' emotional states at time points when no user activity data exists, let alone account for the influences posed by social events. In this study, we proposed an emotion-aware music recommendation method using deep neural networks (emoMR). We modeled a representation of music emotion using low-level audio features and music metadata, and modeled the users' emotion states using an artificial emotion generation model with endogenous and exogenous factors capable of expressing the influences posed by events on emotions. The two models were trained using a designed deep neural network architecture (emoDNN) to predict the music emotions for the music and the music emotion preferences for the users in a continuous form. Based on the models, we proposed a hybrid approach of combining content-based and collaborative filtering for generating emotion-aware music recommendations. Experiment results show that emoMR performs better in the metrics of Precision, Recall, F1, and HitRate than the other baseline algorithms. We also tested the performance of emoMR on two major events (the death of Yuan Longping and the Coronavirus Disease 2019 (COVID-19) cases in Zhejiang). Results show that emoMR takes advantage of event information and outperforms the other baseline algorithms.
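The hybrid content-plus-collaborative idea in this abstract can be illustrated with a simple score blend. The following sketch is an assumption-laden stand-in, not the authors' emoMR/emoDNN implementation; the weighting scheme and vector representations are invented for illustration.

```python
# A minimal sketch of a hybrid recommendation score: blend an
# emotion-similarity (content) score with a collaborative-filtering
# score. Not the authors' emoMR/emoDNN implementation.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def hybrid_score(user_emotion: np.ndarray,
                 track_emotion: np.ndarray,
                 cf_score: float,
                 alpha: float = 0.5) -> float:
    """Weighted blend of emotion match and collaborative-filtering score."""
    content_score = cosine(user_emotion, track_emotion)
    return alpha * content_score + (1 - alpha) * cf_score

# user/track emotions as continuous valence-arousal vectors (hypothetical)
print(hybrid_score(np.array([0.7, 0.2]), np.array([0.6, 0.3]), cf_score=0.8))
```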
27

Shaik, Mrs Shammi. "EMOTION BASED MUSIC PLAYER." INTERANTIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 04 (April 24, 2024): 1–5. http://dx.doi.org/10.55041/ijsrem31541.

Abstract:
Recent studies suggest that humans have a strong bond with music and it plays a crucial role in shaping brain function. It is common for individuals to spend around four hours daily immersing themselves in music that aligns with their emotions and preferences. This initiative aims to create an application that utilizes facial expressions to suggest songs that match the user's mood. Facial cues are vital for non-verbal communication. The Emotion-based music player introduces a groundbreaking idea of selecting music for users based on their emotions. The system uses facial recognition technology to understand the user's emotions and selects music that fits their mood. Computer vision is a versatile tool that enables computers to interpret images and videos effectively. By analyzing facial expressions, the system can determine the user's emotional state and suggest a playlist accordingly, saving time and effort.
Key Words: Music, Emotions, Facial Recognition, Computer Vision
28

Gabrielsson, Alf. "Emotion perceived and emotion felt: Same or different?" Musicae Scientiae 5, no. 1_suppl (September 2001): 123–47. http://dx.doi.org/10.1177/10298649020050s105.

Abstract:
A distinction is made between emotion perception, that is, to perceive emotional expression in music without necessarily being affected oneself, and emotion induction, that is, listeners' emotional response to music. This distinction is not always observed, either in everyday conversation about emotions or in scientific papers. Empirical studies of emotion perception are briefly reviewed with regard to listener agreement concerning expressed emotions, followed by a selective review of empirical studies on emotional response to music. Possible relationships between emotion perception and emotional response are discussed and exemplified: positive relationship, negative relationship, no systematic relationship and no relationship. It is emphasised that both emotion perception and, especially, emotional response are dependent on an interplay between musical, personal, and situational factors. Some methodological questions and suggestions for further research are discussed.
29

Jiddy Abdillah, Ibnu Asror, and Yanuar Firdaus Arie Wibowo. "Emotion Classification of Song Lyrics using Bidirectional LSTM Method with GloVe Word Representation Weighting." Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) 4, no. 4 (August 20, 2020): 723–29. http://dx.doi.org/10.29207/resti.v4i4.2156.

Abstract:
The rapid change of the music market from analog to digital has caused a rapid increase in the amount of music spread throughout the world, because music has become easier to make and sell. The amount of music available has changed the way people find music, one way being by the emotion of the song. Music emotion recognition and recommendation help music listeners find songs in accordance with their emotions. Therefore, emotion classification is needed to determine the emotion of a song. The emotion classification of a song is largely based on feature extraction and learning from the available datasets. Various learning algorithms have been used to classify song emotions, producing different accuracies. In this study, the Bidirectional Long Short-Term Memory (Bi-LSTM) deep learning method with word weighting using GloVe is used to classify a song's emotion from its lyrics. The results show that the Bi-LSTM model with a dropout layer and activity regularization can produce an accuracy of 91.08%. Dropout, activity regularization, and learning rate decay parameters can reduce the difference between training loss and validation loss by 0.15.
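A hedged Keras sketch of the architecture family this abstract describes, pretrained GloVe embeddings feeding a bidirectional LSTM with dropout and activity regularization, is shown below. The layer sizes, sequence length, and hyperparameters are assumptions, not the paper's configuration, and the GloVe matrix is a random stand-in for loaded vectors.

```python
# Assumed Bi-LSTM lyric-emotion classifier sketch (not the paper's exact
# model): frozen GloVe-style embeddings -> Bi-LSTM -> dropout -> softmax.
import numpy as np
from tensorflow.keras import layers, models, regularizers

VOCAB, EMB_DIM, N_EMOTIONS = 20000, 100, 4
glove_matrix = np.random.rand(VOCAB, EMB_DIM).astype("float32")  # stand-in

embedding = layers.Embedding(VOCAB, EMB_DIM, trainable=False)
embedding.build((None,))                  # create weights, then load GloVe
embedding.set_weights([glove_matrix])

model = models.Sequential([
    embedding,
    layers.Bidirectional(layers.LSTM(
        64, activity_regularizer=regularizers.l2(1e-4))),
    layers.Dropout(0.5),                  # shrinks the train/validation gap
    layers.Dense(N_EMOTIONS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# lyrics arrive as padded token-id sequences, e.g. shape (batch, 200)
dummy = np.zeros((2, 200), dtype="int32")
print(model(dummy).shape)                 # (2, N_EMOTIONS)
```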
30

Lauria, Federico. "Affective Responses to Music: An Affective Science Perspective." Philosophies 8, no. 2 (February 23, 2023): 16. http://dx.doi.org/10.3390/philosophies8020016.

Abstract:
Music has strong emotional powers. How are we to understand affective responses to music? What does music teach us about emotions? Why are musical emotions important? Despite the rich literature in philosophy and the empirical sciences, particularly psychology and neuroscience, little attention has been paid to integrating these approaches. This extensive review aims to redress this imbalance and establish a mutual dialogue between philosophy and the empirical sciences by presenting the main philosophical puzzles from an affective science perspective. The chief problem is contagion. Sometimes, listeners perceive music as expressing some emotion and this elicits the same emotion in them. Contagion is perplexing because it collides with the leading theory of emotions as experiences of values. This article mostly revolves around the critical presentation of the philosophical solutions to this problem in light of recent developments in emotion theory and affective science. It also highlights practical issues, particularly the role of musical emotions in well-being and health, by tackling the paradox of sad music, i.e., the question of why people enjoy sad music. It thus bridges an important gap between theoretical and real-life issues as well as between philosophical and empirical investigations on affective responses to music.
31

Rucsanda, Mădălina Dana, Ana-Maria Cazan, and Camelia Truța. "Musical performance and emotions in children: The case of musical competitions." Psychology of Music 48, no. 4 (November 19, 2018): 480–94. http://dx.doi.org/10.1177/0305735618810791.

Abstract:
Emotion is a condition that facilitates or inhibits music performance. Our research aimed to explore emotions of young musicians performing in music competitions. We tried to highlight the possible differences in terms of emotions between young singers who obtained prizes in musical competitions and those who did not. Another aim of the study was to explore the relationship between pre-competition emotions and music performance, focusing on the mediating role of singing experience. The sample consisted of 146 participants in international music competitions for young musicians. A nonverbal pictorial assessment technique measuring the valence, arousal and dominance dimensions of emotions was administered just before and immediately after each participant’s performance in the competition. Our study revealed that negative emotions were associated with lower performance quality while positive emotions, low arousal and increased dominance were associated with higher performance quality. Experienced young singers reported more positive emotions, low arousal and high dominance. Our results also revealed that experience in music competitions could mediate the associations between emotions and music performance in competition. The implications of the results support the inclusion of psychological/emotional training in music education of young singers.
32

Cespedes-Guevara, Julian, and Nicola Dibben. "The Role of Embodied Simulation and Visual Imagery in Emotional Contagion with Music." Music & Science 5 (January 2022): 205920432210938. http://dx.doi.org/10.1177/20592043221093836.

Abstract:
Emotional contagion has been explained as arising from embodied simulation. The two most accepted theories of music-induced emotions presume a mechanism of internal mimicry: the BRECVEMA framework proposes that the melodic aspect of music elicits internal mimicry leading to the induction of basic emotions in the listener, and the Multifactorial Process Model proposes that the observation or imagination of motor expressions of the musicians elicits muscular and neural mimicry, and emotional contagion. Two behavioral studies investigated whether, and to what extent, mimicry is responsible for emotion contagion, and second, to what extent context for affective responses in the form of visual imagery moderates emotional responses. Experiment 1 tested whether emotional contagion is influenced by mimicry by manipulating explicit vocal and motor mimicry. In one condition, participants engaged in mimicry of the melodic aspects of the music by singing along with the music, and in another, participants engaged in mimicry of the musician’s gestures when producing the music, by playing along (“air guitar”-style). The experiment did not find confirmatory evidence for either hypothesized simulation mechanism, but it did provide evidence of spontaneous visual imagery consistent with the induced and perceived emotions. Experiment 2 used imagined rather than performed mimicry, but found no association between imagined motor simulation and emotional intensity. Emotional descriptions read prior to hearing the music influenced the type of perceived and induced emotions and support the prediction that visual imagery and associated semantic knowledge shape listeners’ affective experiences with music. The lack of evidence for the causal role of embodied simulation suggests that current theorization of emotion contagion by music needs refinement to reduce the role of simulation relative to other mechanisms. Evidence for induction of affective states that can be modulated by contextual and semantic associations suggests a model of emotion induction consistent with constructionist accounts.
33

Ji, Shulei, and Xinyu Yang. "MusER: Musical Element-Based Regularization for Generating Symbolic Music with Emotion." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 11 (March 24, 2024): 12821–29. http://dx.doi.org/10.1609/aaai.v38i11.29178.

Abstract:
Generating music with emotion is an important task in automatic music generation, in which emotion is evoked through a variety of musical elements (such as pitch and duration) that change over time and collaborate with each other. However, prior research on deep learning-based emotional music generation has rarely explored the contribution of different musical elements to emotions, let alone the deliberate manipulation of these elements to alter the emotion of music, which is not conducive to fine-grained element-level control over emotions. To address this gap, we present a novel approach employing musical element-based regularization in the latent space to disentangle distinct elements, investigate their roles in distinguishing emotions, and further manipulate elements to alter musical emotions. Specifically, we propose a novel VQ-VAE-based model named MusER. MusER incorporates a regularization loss to enforce the correspondence between the musical element sequences and the specific dimensions of latent variable sequences, providing a new solution for disentangling discrete sequences. Taking advantage of the disentangled latent vectors, a two-level decoding strategy that includes multiple decoders attending to latent vectors with different semantics is devised to better predict the elements. By visualizing latent space, we conclude that MusER yields a disentangled and interpretable latent space and gain insights into the contribution of distinct elements to the emotional dimensions (i.e., arousal and valence). Experimental results demonstrate that MusER outperforms the state-of-the-art models for generating emotional music in both objective and subjective evaluation. Besides, we rearrange music through element transfer and attempt to alter the emotion of music by transferring emotion-distinguishable elements.
34

Ma’rof, Aini Azeqa, Zheng Danhe, and Zeinab Zaremohzzabieh. "Gender Differences in the Function of Music for Emotion Regulation Development in Everyday Life: An Experience Sampling Method Study." Malaysian Journal of Music 12, no. 2 (December 29, 2023): 76–94. http://dx.doi.org/10.37134//mjm.vol12.2.5.2023.

Abstract:
The present study employed experience sampling methodology (ESM) to examine the role of music in regulating emotions and the potential differences in music usage for emotion regulation between men and women in everyday life. The study spanned over seven days, including both weekdays and weekends, during which 28 participants (14 men and 14 women) were asked to complete a brief questionnaire 21 times a day. The questionnaire aimed to document instances of music listening in the past three hours, resulting in a total of 588 questionnaires being sent and 264 instances of music listening being analysed. Results indicate that listening to music in daily life may have a positive impact on emotion regulation and suggest possible differences in music usage between men and women for this purpose. The study's primary findings include: (1) Relaxation was the most commonly used strategy for regulating emotions with music; (2) Four primary mechanisms of music usage for emotion regulation, including emotion type, familiarity, and content of music, were found to be essential; (3) Listening to music was an effective emotion regulation strategy, particularly for regulating happiness and peacefulness; (4) Men were more likely to use music for active coping and to consider the type and content of music when selecting music; and (5) Music appeared to regulate the intensity of emotions similarly for both men and women, although men tended to report higher emotional intensity.
35

Reschke-Hernández, Alaine E., Amy M. Belfi, Edmarie Guzmán-Vélez, and Daniel Tranel. "Hooked on a Feeling: Influence of Brief Exposure to Familiar Music on Feelings of Emotion in Individuals with Alzheimer’s Disease." Journal of Alzheimer's Disease 78, no. 3 (November 24, 2020): 1019–31. http://dx.doi.org/10.3233/jad-200889.

Full text
Abstract:
Background: Research has indicated that individuals with Alzheimer’s-type dementia (AD) can experience prolonged emotions, even when they cannot recall the eliciting event. Less is known about whether music can modify the emotional state of individuals with AD and whether emotions evoked by music linger in the absence of a declarative memory for the eliciting event. Objective: We examined the effects of participant-selected recorded music on self-reported feelings of emotion in individuals with AD, and whether these feelings persisted irrespective of declarative memory for the emotion-inducing stimuli. Methods: Twenty participants with AD and 19 healthy comparisons (HCs) listened to two 4.5-minute blocks of self-selected music that aimed to induce either sadness or happiness. Participants reported their feelings at baseline and three times post-induction and completed recall and recognition tests for the music selections after each induction. Results: Participants with AD had impaired memory for music selections compared to HCs. Both groups reported elevated sadness and negative affect after listening to sad music and increased happiness and positive affect after listening to happy music, relative to baseline. Sad/negative and happy/positive emotions endured up to 20 minutes post-induction. Conclusion: Brief exposure to music can induce strong and lingering emotions in individuals with AD. These findings extend the intriguing phenomenon whereby lasting emotions can be prompted by stimuli that are not remembered declaratively. Our results underscore the utility of familiar music for inducing emotions in individuals with AD and may ultimately inform strategies for using music listening as a therapeutic tool with this population.
APA, Harvard, Vancouver, ISO, and other styles
36

Kaimal, Janhavi, Payal Taskar, Pallavi Patil, and Pranita Mane. "Real Time Emotion Based Music Player." International Journal for Research in Applied Science and Engineering Technology 12, no. 4 (April 30, 2024): 5601–7. http://dx.doi.org/10.22214/ijraset.2024.61351.

Full text
Abstract:
In this paper, we present the development and implementation of an Emotion-based Music Player System utilizing facial emotion data, Convolutional Neural Networks (CNN), Flask, OpenCV for face detection, and the Spotify API for music playlist integration. The system aims to provide users with personalized music recommendations based on their current emotional state, detected through real-time analysis of facial expressions. The proposed system consists of several key components, including data collection, preprocessing, CNN-based emotion classification, integration with Flask for web application development, and real-time emotion detection using OpenCV. The trained CNN model classifies the emotions anger, disgust, fear, happiness, neutrality, sadness, and surprise. Results from the implementation demonstrate the system's effectiveness in detecting emotions from facial expressions and providing corresponding music recommendations. The CNN model achieved a final accuracy of 62.44% after 10 epochs of training. Real-time emotion detection using OpenCV successfully identifies facial expressions, allowing for dynamic adjustments to the music playlist. Overall, the Emotion-based Music Player System presents a novel approach to enhancing user experience by leveraging facial emotion data and machine learning techniques to deliver personalized music recommendations tailored to individual emotional states.
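As a rough illustration of the pipeline this abstract describes, the sketch below combines OpenCV face detection with a pre-trained Keras CNN and maps the predicted label to a playlist. The model file name, 48x48 input size, and playlist mapping are assumptions; the paper's Flask web layer and Spotify integration are omitted.

```python
# Minimal sketch: webcam frame -> face detection -> CNN emotion -> playlist.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

EMOTIONS = ["anger", "disgust", "fear", "happiness",
            "neutrality", "sadness", "surprise"]
PLAYLISTS = {e: f"playlist_for_{e}" for e in EMOTIONS}  # placeholder mapping

model = load_model("emotion_cnn.h5")  # hypothetical trained model file
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
ret, frame = cap.read()
if ret:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    for (x, y, w, h) in faces:
        roi = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
        probs = model.predict(roi.reshape(1, 48, 48, 1), verbose=0)[0]
        emotion = EMOTIONS[int(np.argmax(probs))]
        print(f"Detected {emotion}; queueing {PLAYLISTS[emotion]}")
cap.release()
```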
APA, Harvard, Vancouver, ISO, and other styles
37

Li, Yinsheng, and Wei Zheng. "Emotion Recognition and Regulation Based on Stacked Sparse Auto-Encoder Network and Personalized Reconfigurable Music." Mathematics 9, no. 6 (March 10, 2021): 593. http://dx.doi.org/10.3390/math9060593.

Full text
Abstract:
Music can regulate and improve the emotions of the brain. Traditional emotion regulation approaches often use complete pieces of music. However, complete pieces vary in pitch, volume, and other dynamics over time; an individual's emotions may likewise pass through multiple states; and music preference varies from person to person. Traditional music regulation methods therefore suffer from long duration, variable emotional states, and poor adaptability. In view of these problems, we use different music processing methods and stacked sparse auto-encoder neural networks to identify and regulate the emotional state of the brain in this paper. We construct a multi-channel EEG sensor network, segment brainwave signals and the corresponding music separately, and build a personalized reconfigurable music-EEG library. Seventeen features are extracted from the EEG signal as joint features, and a stacked sparse auto-encoder neural network is used to classify the emotions, in order to establish a music emotion evaluation index. According to the goal of emotional regulation, music fragments are selected from the personalized reconfigurable music-EEG library, then reconstructed and combined for emotional adjustment. The results show that, compared with complete music, the reconfigurable combined music was less time-consuming for emotional regulation (76.29% less), and the number of irrelevant emotional states was reduced by 69.92%. In terms of adaptability to different participants, the reconfigurable music improved the recognition rate of emotional states by 31.32%.
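A minimal Keras sketch of a stacked sparse auto-encoder classifier over 17-dimensional EEG feature vectors, in the spirit of this abstract, is shown below. Layer sizes, the L1 sparsity weight, and the four-class output are assumptions, and the greedy layer-wise pretraining a full stacked auto-encoder would use is omitted for brevity.

```python
# Minimal sketch: sparse encoding layers stacked into an emotion classifier.
import numpy as np
from tensorflow.keras import layers, models, regularizers

def sparse_encoder(hidden_dim):
    # L1 activity regularization pushes hidden activations toward sparsity
    return layers.Dense(hidden_dim, activation="relu",
                        activity_regularizer=regularizers.l1(1e-4))

model = models.Sequential([
    layers.Input(shape=(17,)),              # 17 joint EEG features
    sparse_encoder(12),                     # first sparse encoding layer
    sparse_encoder(8),                      # second (stacked) encoding layer
    layers.Dense(4, activation="softmax"),  # e.g., four emotional states
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# toy data just to show the training call
X = np.random.rand(64, 17).astype("float32")
y = np.random.randint(0, 4, size=64)
model.fit(X, y, epochs=2, batch_size=16, verbose=0)
```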
APA, Harvard, Vancouver, ISO, and other styles
38

Nand, Rugwed. "Mood Music Recommendation Using Emotion Detection." International Journal for Research in Applied Science and Engineering Technology 11, no. 5 (May 31, 2023): 3423–27. http://dx.doi.org/10.22214/ijraset.2023.52385.

Full text
Abstract:
The proposed mood music recommendation system leverages deep learning and emotion detection technology to provide personalized and dynamic music recommendations. The system uses modalities such as facial expressions, speech patterns, and voice tone to detect the user's emotional state, then recommends songs and playlists aligned with that mood, thereby enhancing the overall listening experience. A key advantage of the proposed system is its adaptability to changes in the user's emotional state: as the user's mood changes, the system dynamically adjusts its recommendations so that the music remains appropriate to the current emotional state. This adaptability is crucial because emotions can be unpredictable and constantly changing. By providing music tailored to the user's current mood, the system can help users manage their emotions and improve their well-being. Overall, the proposed system offers a highly personalized and dynamic music listening experience and represents a step towards intelligent and adaptive music recommendation systems.
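The "dynamic adjustment" idea can be sketched without committing to any particular detector: smooth the stream of emotion estimates so the playlist tracks mood without flapping on every noisy prediction. A minimal pure-Python sketch follows, with an assumed smoothing factor and playlist mapping.

```python
# Minimal sketch: exponential smoothing of detected emotions -> playlist.
from collections import defaultdict

PLAYLISTS = {"happy": "upbeat_mix", "sad": "comfort_mix",
             "angry": "calm_down_mix", "neutral": "daily_mix"}

class MoodTracker:
    def __init__(self, alpha=0.3):
        self.alpha = alpha                 # weight of the newest reading
        self.scores = defaultdict(float)   # smoothed score per emotion

    def update(self, detected_probs):
        """detected_probs: dict of emotion -> probability from any detector."""
        for emotion in PLAYLISTS:
            new = detected_probs.get(emotion, 0.0)
            self.scores[emotion] = (self.alpha * new
                                    + (1 - self.alpha) * self.scores[emotion])
        mood = max(self.scores, key=self.scores.get)
        return mood, PLAYLISTS[mood]

tracker = MoodTracker()
print(tracker.update({"happy": 0.7, "neutral": 0.3}))  # ('happy', 'upbeat_mix')
```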
APA, Harvard, Vancouver, ISO, and other styles
39

Bağrıaçık, Belgin, Ayça Konik-Köksal, and Hamit Coşkun. "Examination of Special Talent Students’ Immediate Emotions Regarding Music with Different Emotions." Shanlax International Journal of Education 12, no. 3 (June 1, 2024): 61–71. http://dx.doi.org/10.34293/education.v12i3.7172.

Full text
Abstract:
Gifted individuals are more advanced than their peers in cognitive, affective, psychomotor, or creative areas. This study examined how gifted students' immediate emotional states change in response to music expressing different emotions. The sample consisted of 122 students studying at Adana BİLSEM, whose moods were measured after listening to different pieces of music. The findings showed significant differences between the emotions that students felt with different music: students felt more positive emotions overall, and girls reported feeling the emotions more strongly than boys. The results can be used to better understand gifted students and to develop musical activities that help them manage their emotions.
APA, Harvard, Vancouver, ISO, and other styles
40

Lian, Jue. "An artificial intelligence-based classifier for musical emotion expression in media education." PeerJ Computer Science 9 (July 14, 2023): e1472. http://dx.doi.org/10.7717/peerj-cs.1472.

Full text
Abstract:
Music can serve as a potent tool for conveying emotions and regulating learners' moods, while the systematic application of emotional assessment can help to improve teaching efficiency. However, existing music emotion analysis methods based on Artificial Intelligence (AI) rely primarily on pre-marked content, such as lyrics, and fail to adequately account for the perception, transmission, and recognition of music signals. To address this limitation, this study first employs sound-level segmentation, data frame processing, and threshold determination to enable intelligent segmentation and recognition of notes. Next, based on the extracted audio features, a Radial Basis Function (RBF) model is used to construct a music emotion classifier. Finally, correlation feedback is used to label the classification results further and train the classifier. The study compares the music emotion classification method commonly used in Chinese music education with the Hevner emotion model and identifies four emotion categories (Quiet, Happy, Sad, and Excited) to classify performers' emotions. The testing results show that audio feature recognition takes a mere 0.004 min (about 0.24 seconds), with an accuracy rate of over 95%. Furthermore, classifying performers' emotions based on audio features is consistent with conventional human cognition.
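One common way to realize an RBF-style classifier is sketched below: k-means centroids act as RBF prototypes, Gaussian activations form the feature map, and a linear classifier separates the four categories. The feature dimensionality, number of prototypes, and gamma are illustrative rather than the paper's settings, and the data are synthetic.

```python
# Minimal sketch of an RBF classifier over extracted audio features.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def rbf_features(X, centers, gamma=1.0):
    # Gaussian activation of each sample against each prototype center
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.random((200, 20))            # toy 20-dim audio feature vectors
y = rng.integers(0, 4, size=200)     # 0..3 = Quiet/Happy/Sad/Excited

centers = KMeans(n_clusters=10, n_init=10,
                 random_state=0).fit(X).cluster_centers_
clf = LogisticRegression(max_iter=1000).fit(rbf_features(X, centers), y)
print(clf.predict(rbf_features(X[:5], centers)))
```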
APA, Harvard, Vancouver, ISO, and other styles
41

Juslin, Patrik N., Simon Liljeström, Petri Laukka, Daniel Västfjäll, and Lars-Olov Lundqvist. "Emotional Reactions to Music in a Nationally Representative Sample of Swedish Adults: Prevalence and Causal Influences." Musicae Scientiae 15, no. 2 (July 2011): 174–207. http://dx.doi.org/10.1177/102986491101500204.

Full text
Abstract:
Empirical studies have indicated that listeners value music primarily for its ability to arouse emotions. Yet little is known about which emotions listeners normally experience when listening to music, or about the causes of these emotions. The goal of this study was therefore to explore the prevalence of emotional reactions to music in everyday life and how this is influenced by various factors in the listener, the music, and the situation. A self-administered mail questionnaire was sent to a random and nationally representative sample of 1,500 Swedish citizens between the ages of 18 and 65, and 762 participants (51%) responded to the questionnaire. Thirty-two items explored both musical emotions in general (semantic estimates) and the most recent emotion episode featuring music for each participant (episodic estimates). The results revealed several variables (e.g., personality, age, gender, listener activity) that were correlated with particular emotions. A multiple discriminant analysis indicated that three of the most common emotion categories in a set of musical episodes (i.e., happiness, sadness, nostalgia) could be predicted with a mean accuracy of 70% correct based on data obtained from the questionnaire. The results may inform theorizing about musical emotions and guide the selection of causal variables for manipulation in future experiments.
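For readers unfamiliar with the technique, a multiple discriminant analysis of this kind can be reproduced in outline with scikit-learn. The sketch below uses synthetic placeholder data, not the study's, and the predictor set is illustrative of the listener and situation variables the abstract mentions.

```python
# Minimal sketch: discriminant analysis predicting emotion categories.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))      # six listener/situation predictors
y = rng.integers(0, 3, size=300)   # 0=happiness, 1=sadness, 2=nostalgia

lda = LinearDiscriminantAnalysis()
acc = cross_val_score(lda, X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")  # the study reported ~70%
```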
APA, Harvard, Vancouver, ISO, and other styles
43

Aswin Jeba Mahir A, Dinesh K, Arjun R, and Dheenadhayalan A. "Song Recommendation System Based on Facial Emotion." International Research Journal on Advanced Engineering and Management (IRJAEM) 2, no. 05 (May 18, 2024): 1466–68. http://dx.doi.org/10.47392/irjaem.2024.0197.

Full text
Abstract:
This research aims to enhance the user experience in music consumption by incorporating real-time facial emotion analysis. Emotions play a fundamental role in shaping individual preferences, and leveraging facial expressions as a means of understanding users' emotional states can contribute significantly to personalized music recommendations. The proposed system begins by capturing real-time facial expressions using a webcam or by analyzing static images. These facial expressions are then processed by a CNN-based emotion recognition model trained to classify emotions such as happiness, sadness, and anger. The CNN model extracts high-level features from facial images, enabling accurate emotion recognition. Using the detected emotional state as input, the system employs a recommendation algorithm tailored to the user's current emotional state to suggest relevant music or videos from YouTube.
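The recommendation step might look like the sketch below, which turns the detected emotion label into a YouTube Data API search. The per-emotion query phrasing is an assumption, a real API key is required, and the emotion label would come from a recognition model like the one described above.

```python
# Minimal sketch: emotion label -> YouTube Data API search query.
from googleapiclient.discovery import build  # pip install google-api-python-client

QUERIES = {"happiness": "upbeat feel good songs",
           "sadness": "soothing comforting songs",
           "anger": "calming instrumental music"}

def recommend(emotion, api_key, max_results=5):
    youtube = build("youtube", "v3", developerKey=api_key)
    response = youtube.search().list(
        part="snippet", q=QUERIES.get(emotion, "popular music"),
        type="video", maxResults=max_results,
    ).execute()
    return [item["snippet"]["title"] for item in response["items"]]

# titles = recommend("happiness", api_key="YOUR_API_KEY")
```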
APA, Harvard, Vancouver, ISO, and other styles
44

Kopec, Justin, Ashleigh Hillier, and Alice Frye. "The Valency of Music Has Different Effects on the Emotional Responses of Those with Autism Spectrum Disorders and a Comparison Group." Music Perception 31, no. 5 (June 2014): 436–43. http://dx.doi.org/10.1525/mp.2014.31.5.436.

Full text
Abstract:
Emotion perception deficits are commonly observed in individuals with autism spectrum disorders (ASD). Numerous studies have documented deficits in emotional recognition of social stimuli among those with ASD, such as faces and voices, while far fewer have investigated emotional recognition of nonsocial stimuli in this population. In this study, participants with ASD and a comparison group of typically developing (TD) control participants listened to song clips that varied in levels of pleasantness (valence) and arousal. Participants then rated emotions they felt or perceived in the music, using a list of eight emotion words for each song. Results showed that individuals with ASD gave significantly lower ratings of negative emotions in both the felt and perceived categories compared to TD controls, but did not show significant differences in ratings of positive emotions. These findings suggest that deficits in processing emotions in music among those with ASD may be valence specific.
APA, Harvard, Vancouver, ISO, and other styles
45

Patel, Jigna, Ali Asgar Padaria, Aryan Mehta, Aaryan Chokshi, Jitali Dineshkumar Patel, and Rupal Kapdi. "ConCollA - A Smart Emotion-based Music Recommendation System for Drivers." Scalable Computing: Practice and Experience 24, no. 4 (November 17, 2023): 919–39. http://dx.doi.org/10.12694/scpe.v24i4.2467.

Full text
Abstract:
A music recommender system is an information retrieval system that suggests customized music to users based on their previous preferences and experiences with music. Existing systems often overlook the emotional state of the driver; we propose a hybrid music recommendation system, ConCollA, to provide a personalized experience based on user emotions. By incorporating facial expression recognition, ConCollA identifies the driver's emotions using a convolutional neural network (CNN) model and suggests music tailored to their emotional state. ConCollA combines collaborative filtering, a novel content-based recommendation method named Mood Adjusted Average Similarity (MAAS), and the apriori algorithm to generate personalized music recommendations. The performance of ConCollA is assessed using various evaluation parameters. The results show that the proposed emotion-aware model outperforms a purely collaborative recommender system.
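Since MAAS itself is the paper's contribution, the sketch below shows only a generic weighted hybrid in the same spirit: a collaborative-filtering score and a content score are blended, and tracks tagged with the detected mood receive a boost. All scores, tags, and weights are toy values, not the paper's formulation.

```python
# Minimal sketch of a mood-boosted hybrid recommendation score.
TRACK_MOODS = {"t1": "happy", "t2": "sad", "t3": "happy"}   # toy mood tags
cf_scores = {"t1": 0.6, "t2": 0.9, "t3": 0.4}               # from user-user CF
content_scores = {"t1": 0.5, "t2": 0.3, "t3": 0.8}          # from audio features

def hybrid_rank(mood, w_cf=0.5, w_content=0.5, mood_boost=0.2):
    scores = {}
    for t in cf_scores:
        s = w_cf * cf_scores[t] + w_content * content_scores[t]
        if TRACK_MOODS[t] == mood:
            s += mood_boost       # mood adjustment of the blended score
        scores[t] = s
    return sorted(scores, key=scores.get, reverse=True)

print(hybrid_rank("happy"))  # ['t3', 't1', 't2']
```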
APA, Harvard, Vancouver, ISO, and other styles
46

Netravathi K S, Bibi Haleema N, Priyanka, Madhushree, and Priyanka R V. "Emotion-Based Music Player." International Research Journal on Advanced Engineering and Management (IRJAEM) 2, no. 04 (April 22, 2024): 1149–56. http://dx.doi.org/10.47392/irjaem.2024.0152.

Full text
Abstract:
Songs have always been a popular medium for communicating and understanding human emotions, and reliable emotion-based categorization systems can be quite helpful in understanding this relevance. However, results of research on emotion-based music classification have been modest. Here, we introduce EMP, a cross-platform emotional music player that plays songs in accordance with the user's feelings at the time. EMP provides intelligent mood-based music playback by incorporating emotion-context reasoning into an adaptive music engine, fostering deeper connections between emotions and musical experiences. The player is composed of three modules: an emotion module, a classification module, and a queue-based module. The emotion module analyses a picture of the user's face and uses the VGG16 architecture to detect their mood with a precision exceeding 95%. The music classification module achieves strong results by utilizing aural criteria to classify music into seven mood groups. The queue module plays songs directly from the mapped folders in the order they are stored, ensuring alignment with the user's mood and preferences.
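The queue-based module is straightforward to sketch. Below, a hypothetical folder layout maps each mood to a directory and songs are queued in stored order; the play call is a stand-in for a real audio backend, and the VGG16 emotion module is represented only by its output label.

```python
# Minimal sketch of a mood-mapped, queue-based playback module.
from collections import deque
from pathlib import Path

MOOD_DIRS = {mood: Path("music") / mood
             for mood in ["happy", "sad", "angry", "calm",
                          "fearful", "surprised", "neutral"]}

def build_queue(mood):
    folder = MOOD_DIRS[mood]
    # sorted() approximates the "stored order" the abstract specifies
    return deque(sorted(folder.glob("*.mp3")))

def play_next(queue):
    if queue:
        track = queue.popleft()
        print(f"now playing: {track.name}")  # stand-in for a real audio call
        return track
    return None

queue = build_queue("happy")   # assumes ./music/happy/*.mp3 exists
play_next(queue)
```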
APA, Harvard, Vancouver, ISO, and other styles
47

Susino, Marco, and Emery Schubert. "Cultural stereotyping of emotional responses to music genre." Psychology of Music 47, no. 3 (March 10, 2018): 342–57. http://dx.doi.org/10.1177/0305735618755886.

Full text
Abstract:
This study investigated whether emotional responses to a music genre could be predicted by stereotypes of the culture with which the music genre is associated. A two-part study was conducted. Participants listened to music samples from eight distinct genres: Fado, Koto, Heavy Metal, Hip Hop, Pop, Samba, Bolero, and Western Classical. They also described their spontaneous associations with the music and their spontaneous associations with the music’s related cultures: Portuguese, Japanese, Heavy Metal, Hip Hop, Pop, Brazilian, Cuban, and Western culture, respectively. Results indicated that a small number of specific emotions reported for a music genre were the same as stereotypical emotional associations of the corresponding culture. These include peace and calm for Koto music and Japanese culture, and anger and aggression for Heavy Metal music and culture. We explain these results through the stereotype theory of emotion in music (STEM), where an emotion filter is activated that simplifies the assessment process for a music genre that is not very familiar to the listener. Listeners familiar with a genre reported fewer stereotyped emotions than less familiar listeners. The study suggests that stereotyping competes with the psychoacoustic cues in the expression of emotion.
APA, Harvard, Vancouver, ISO, and other styles
48

Ali, S. Omar, and Zehra F. Peynircioğlu. "Intensity of Emotions Conveyed and Elicited by Familiar and Unfamiliar Music." Music Perception 27, no. 3 (February 1, 2010): 177–82. http://dx.doi.org/10.1525/mp.2010.27.3.177.

Full text
Abstract:
We replicated previous findings and demonstrated that familiarity with musical stimuli increased 'liking' or 'preference' for the stimuli. We also demonstrated that familiarity increased the intensity of emotional responses to music, but only when the stimuli were made highly familiar through en masse repetitions (Experiment 3) rather than through interspersed repetitions (Experiment 1). In addition, intensity ratings were higher when participants were asked to judge the emotion conveyed by the music than when they were asked to judge the emotion elicited by the same music (Experiments 2 and 3). Finally, positive emotions (i.e., happy and calm) were rated higher compared with negative emotions (i.e., sad and angry) for both types of ratings (i.e., conveyed or elicited). The findings suggest that familiarity plays a role in modulating a listener's emotional response to music.
APA, Harvard, Vancouver, ISO, and other styles
49

Xu, Xiujun. "Influence of Music Intervention on Emotional Control and Mental Health Management Self-efficacy of College Students." International Journal of Emerging Technologies in Learning (iJET) 16, no. 20 (October 25, 2021): 134. http://dx.doi.org/10.3991/ijet.v16i20.26511.

Full text
Abstract:
Music can induce strong emotions and psychological changes. The emotional control and mental health management of college students are greatly affected by gender, family background, grade, and other factors. Through music intervention, this paper explores how music-induced emotions influence college students' emotional control and their self-efficacy in mental health management. The results show that positive music promotes the control of positive emotions, as it gives full play to active emotions; negative music both controls and intensifies students' negative emotions; and music intervention significantly affects the pleasantness dimension of students' mental health but not the arousal dimension. These findings lay a basis for further studies on the emotional control and mental health management of college students.
APA, Harvard, Vancouver, ISO, and other styles
50

Juslin, Patrik N., and Daniel Västfjäll. "Emotional responses to music: The need to consider underlying mechanisms." Behavioral and Brain Sciences 31, no. 5 (October 2008): 559–75. http://dx.doi.org/10.1017/s0140525x08005293.

Full text
Abstract:
Research indicates that people value music primarily because of the emotions it evokes. Yet, the notion of musical emotions remains controversial, and researchers have so far been unable to offer a satisfactory account of such emotions. We argue that the study of musical emotions has suffered from a neglect of underlying mechanisms. Specifically, researchers have studied musical emotions without regard to how they were evoked, or have assumed that the emotions must be based on the “default” mechanism for emotion induction, a cognitive appraisal. Here, we present a novel theoretical framework featuring six additional mechanisms through which music listening may induce emotions: (1) brain stem reflexes, (2) evaluative conditioning, (3) emotional contagion, (4) visual imagery, (5) episodic memory, and (6) musical expectancy. We propose that these mechanisms differ regarding such characteristics as their information focus, ontogenetic development, key brain regions, cultural impact, induction speed, degree of volitional influence, modularity, and dependence on musical structure. By synthesizing theory and findings from different domains, we are able to provide the first set of hypotheses that can help researchers to distinguish among the mechanisms. We show that failure to control for the underlying mechanism may lead to inconsistent or non-interpretable findings. Thus, we argue that the new framework may guide future research and help to resolve previous disagreements in the field. We conclude that music evokes emotions through mechanisms that are not unique to music, and that the study of musical emotions could benefit the emotion field as a whole by providing novel paradigms for emotion induction.
APA, Harvard, Vancouver, ISO, and other styles