
Journal articles on the topic 'Speech emotion recognition'


Consult the top 50 journal articles for your research on the topic 'Speech emotion recognition.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

A, Prof Swethashree. "Speech Emotion Recognition." International Journal for Research in Applied Science and Engineering Technology 9, no. 8 (2021): 2637–40. http://dx.doi.org/10.22214/ijraset.2021.37375.

Abstract:
Speech Emotion Recognition, abbreviated as SER, is the act of trying to identify a person's feelings and affective state from speech. This is possible because tone and voice quality often reflect a speaker's underlying emotions. Emotion recognition has been a fast-growing field of research in recent years. Unlike humans, machines do not have the power to comprehend and express emotions, but human communication with the computer can be improved by using automatic emotion recognition, accordingly reducing the need for human intervention. In this project, basic emotions such as peace,
2

Venkateswarlu, Dr S. China. "Speech Emotion Recognition using Machine Learning." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 09, no. 05 (2025): 1–9. https://doi.org/10.55041/ijsrem48705.

Abstract:
Speech signals are considered the most effective means of communication between human beings. Many researchers have found different methods or systems to identify emotions from speech signals. Here, various features of speech are used to classify emotions. Features like pitch, tone, and intensity are essential for classification. A large number of datasets are available for speech emotion recognition. First, features are extracted from the emotional speech, and then another important part is the classification of emotions based upon speech. Hence, different classif
3

Alexander, Jessica M., and Fernando Llanos. "High-arousal emotional speech enhances speech intelligibility and emotion recognition in noise." Journal of the Acoustical Society of America 157, no. 6 (2025): 4085–96. https://doi.org/10.1121/10.0036812.

Abstract:
Prosodic and voice quality modulations of the speech signal offer acoustic cues to the emotional state of the speaker. In quiet, listeners are highly adept at identifying not only a speaker's words but also the underlying emotional context. Given that distinct vocal emotions possess varying acoustic characteristics, background noise level may differentially impact speech recognition, emotion recognition, or their interaction. To investigate this question, we assessed the effects of three emotional speech styles (angry, happy, neutral) on speech intelligibility and emotion recognition across fo
4

Werner, S., and G. N. Petrenko. "Speech Emotion Recognition: Humans vs Machines." Discourse 5, no. 5 (2019): 136–52. http://dx.doi.org/10.32603/2412-8562-2019-5-5-136-152.

Abstract:
Introduction. The study focuses on emotional speech perception and speech emotion recognition using prosodic clues alone. Theoretical problems of defining prosody, intonation and emotion along with the challenges of emotion classification are discussed. An overview of acoustic and perceptional correlates of emotions found in speech is provided. Technical approaches to speech emotion recognition are also considered in the light of the latest emotional speech automatic classification experiments. Methodology and sources. The typical “big six” classification commonly used in technical applications
5

S, Abhimanue, and Dr Jyothish K. John. "Survey on Speech Emotion Recognition with Expressive Speech Synthesis." International Scientific Journal of Engineering and Management 04, no. 03 (2025): 1–7. https://doi.org/10.55041/isjem02527.

Abstract:
Emotion plays a key role in identifying the state of a person, that is, whether they are angry, sad, happy, etc. The paper presents an integrated framework that recognizes emotions from speech, generates emotionally aware responses, and synchronizes facial expressions to provide an animated video response. The system provides real-time, empathetic interactions for emotional support. It focuses on identifying the emotion of the person, especially to know if the person is depressed or having a hard time, so that it can provide emotional support to them, to overcome the feeling of distress and is
6

Tank, Vishal P., and S. K. Hadia. "Creation of speech corpus for emotion analysis in Gujarati language and its evaluation by various speech parameters." International Journal of Electrical and Computer Engineering (IJECE) 10, no. 5 (2020): 4752. http://dx.doi.org/10.11591/ijece.v10i5.pp4752-4758.

Abstract:
In the last couple of years emotion recognition has proven its significance in the area of artificial intelligence and man-machine communication. Emotion recognition can be done using speech and image (facial expression); this paper deals with SER (speech emotion recognition) only. For emotion recognition, an emotional speech database is essential. In this paper we propose an emotional database developed in Gujarati, one of the official languages of India. The proposed speech corpus covers six emotional states: sadness, surprise, anger, disgust, fear, and happiness. To obse
7

Vishal, P. Tank, and K. Hadia S. "Creation of speech corpus for emotion analysis in Gujarati language and its evaluation by various speech parameters." International Journal of Electrical and Computer Engineering (IJECE) 10, no. 5 (2020): 4752–58. https://doi.org/10.11591/ijece.v10i5.pp4752-4758.

Abstract:
In the last couple of years emotion recognition has proven its significance in the area of artificial intelligence and man-machine communication. Emotion recognition can be done using speech and image (facial expression); this paper deals with SER (speech emotion recognition) only. For emotion recognition, an emotional speech database is essential. In this paper we propose an emotional database developed in Gujarati, one of the official languages of India. The proposed speech corpus covers six emotional states: sadness, surprise, anger, disgust, fear, and happiness. T
8

Huang, Chengwei, Guoming Chen, Hua Yu, Yongqiang Bao, and Li Zhao. "Speech Emotion Recognition under White Noise." Archives of Acoustics 38, no. 4 (2013): 457–63. http://dx.doi.org/10.2478/aoa-2013-0054.

Abstract:
Speakers' emotional states are recognized from speech signals with additive white Gaussian noise (AWGN). The influence of white noise on a typical emotion recognition system is studied. The emotion classifier is implemented with a Gaussian mixture model (GMM). A Chinese speech emotion database is used for training and testing, which includes nine emotion classes (e.g. happiness, sadness, anger, surprise, fear, anxiety, hesitation, confidence and neutral state). Two speech enhancement algorithms are introduced for improved emotion classification. In the experiments, the Gaussian mixture
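The experimental setup this abstract describes, corrupting clean speech with additive white Gaussian noise, can be sketched in a few lines. The helper below is illustrative only (not the paper's code): it scales Gaussian noise so that a target SNR in dB is reached, using a pure tone as a stand-in for a speech signal.

```python
import numpy as np

def add_awgn(signal, snr_db, rng=None):
    """Add white Gaussian noise scaled to a target SNR (dB). Hypothetical helper."""
    rng = np.random.default_rng(0) if rng is None else rng
    p_signal = np.mean(signal ** 2)                      # average signal power
    p_noise = p_signal / (10 ** (snr_db / 10))           # noise power for target SNR
    noise = rng.normal(0.0, np.sqrt(p_noise), size=signal.shape)
    return signal + noise

t = np.linspace(0, 1, 8000, endpoint=False)
clean = np.sin(2 * np.pi * 440 * t)                      # toy stand-in for speech
noisy = add_awgn(clean, snr_db=10)

# measured SNR should land close to the 10 dB target
measured = 10 * np.log10(np.mean(clean ** 2) / np.mean((noisy - clean) ** 2))
```

In a study like this one, the same classifier would then be evaluated on `noisy` versions of the corpus at several SNR levels.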
9

Reddy, Dr N. V. Rajasekhar. "Speech Emotion Recognition Using Convolutional Neural Networks." International Journal for Research in Applied Science and Engineering Technology 12, no. 8 (2024): 30–36. http://dx.doi.org/10.22214/ijraset.2024.63859.

Abstract:
Speech is a powerful way to express our thoughts and feelings. It can give us valuable insights into human emotions. Speech emotion recognition (SER) is a crucial tool used in various fields like human-computer interaction (HCI), medical diagnosis, and lie detection. However, understanding emotions from speech is challenging. This research aims to address this challenge. It uses multiple datasets, including CREMA-D, RAVDESS, TESS, and SAVEE, to identify different emotional states. The researchers reviewed existing literature to inform their methodology. They used spectrograms and mel
10

Morgan, Shae D. "Comparing Emotion Recognition and Word Recognition in Background Noise." Journal of Speech, Language, and Hearing Research 64, no. 5 (2021): 1758–72. http://dx.doi.org/10.1044/2021_jslhr-20-00153.

Abstract:
Purpose Word recognition in quiet and in background noise has been thoroughly investigated in previous research to establish segmental speech recognition performance as a function of stimulus characteristics (e.g., audibility). Similar methods to investigate recognition performance for suprasegmental information (e.g., acoustic cues used to make judgments of talker age, sex, or emotional state) have not been performed. In this work, we directly compared emotion and word recognition performance in different levels of background noise to identify psychoacoustic properties of emotion recognition
11

Srinidhi, Ponnaluri. "Speech Based Emotion Recognition." International Journal for Research in Applied Science and Engineering Technology 10, no. 6 (2022): 3160–65. http://dx.doi.org/10.22214/ijraset.2022.44583.

Abstract:
In this paper we present our final-year project on Speech Emotion Recognition. Speech emotion detection is a hot research topic today, with the goal of improving human-machine interaction. Currently, the majority of research in this field relies on discriminator extraction to classify emotions into several categories. The majority of the present research focuses on the utterance of words that are employed in language-dependent lexical analysis for emotion detection. This study employs strategies to classify emotions into five categories: anger, calm, anxiety, happiness,
12

Asghar, Awais, Sarmad Sohaib, Saman Iftikhar, Muhammad Shafi, and Kiran Fatima. "An Urdu speech corpus for emotion recognition." PeerJ Computer Science 8 (May 9, 2022): e954. http://dx.doi.org/10.7717/peerj-cs.954.

Abstract:
Emotion recognition from acoustic signals plays a vital role in the field of audio and speech processing. Speech interfaces offer humans an informal and comfortable means to communicate with machines. Emotion recognition from speech signals has a variety of applications in the area of human computer interaction (HCI) and human behavior analysis. In this work, we develop the first emotional speech database of the Urdu language. We also develop the system to classify five different emotions: sadness, happiness, neutral, disgust, and anger using different machine learning algorithms. The Mel Freq
13

Reddy, C. Karthik, T. Venkat Teja, Imtiyaz ., and Dr Navnath D. Kale. "Speech Based Emotion Recognition." International Journal for Research in Applied Science and Engineering Technology 11, no. 3 (2023): 495–500. http://dx.doi.org/10.22214/ijraset.2023.49448.

Abstract:
Speech Emotion Recognition is the final-year project that we are showcasing in this paper. Speech and emotion recognition is a current research hot topic with the aim of enhancing human-machine interaction. In order to categorise emotions into different groups, discriminator extraction is now used in the bulk of this field of study. The majority of the current study concentrates on the words used in language-dependent lexical analysis for detecting emotions when they are spoken. This study uses a Convolutional Neural Network machine learning method to classify emotions into five categories
14

Wang, Yu Tai, Jie Han, Xiao Qing Jiang, Jing Zou, and Hui Zhao. "Study of Speech Emotion Recognition Based on Prosodic Parameters and Facial Expression Features." Applied Mechanics and Materials 241-244 (December 2012): 1677–81. http://dx.doi.org/10.4028/www.scientific.net/amm.241-244.1677.

Abstract:
The present status of speech emotion recognition is introduced in the paper. Emotional databases of Chinese speech and facial expressions were established, with noise stimuli and movies used to evoke subjects' emotions. For different emotional states, we analyzed single-mode speech emotion recognition based on prosodic features and the geometric features of facial expression. Then, we discussed bimodal emotion recognition using a Gaussian Mixture Model. The experimental results show that the bimodal emotion recognition rate combined with facial expression is about 6% higher t
15

Zhao, Hui, Yu Tai Wang, and Xing Hai Yang. "Emotion Detection System Based on Speech and Facial Signals." Advanced Materials Research 459 (January 2012): 483–87. http://dx.doi.org/10.4028/www.scientific.net/amr.459.483.

Abstract:
This paper introduces the present status of speech emotion detection. In order to improve the emotion recognition rate of a single mode, a bimodal fusion method based on speech and facial expression is proposed. First, we establish an emotional database including speech and facial expressions. For five emotions (calm, happy, surprise, anger, sad), we extract ten speech parameters and use the PCA method to detect the speech emotion. Then we analyze bimodal emotion detection fusing facial expression information. The experimental results show that the emotion recognition rate with bimodal fu
16

Kumar, Balbant. "Speech Emotion Recognition using CNN." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 09, no. 04 (2025): 1–9. https://doi.org/10.55041/ijsrem45881.

Abstract:
Speech Emotion Recognition (SER) is a growing area in affective computing that aims to detect and understand human emotions through speech signals. It finds extensive use in human-computer interaction, virtual assistants, mental health tracking, and automating customer service. This project introduced a deep learning method for SER utilizing Convolutional Neural Networks (CNNs). The system extracts mel-frequency cepstral coefficients (MFCCs) and spectrograms from raw audio inputs and transforms speech signals into two-dimensional images. These images were then processed by a CNN frame
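The transformation this abstract describes, turning a raw waveform into a two-dimensional image a CNN can consume, amounts to computing a spectrogram. A minimal numpy sketch follows; the frame length and hop size are illustrative choices, not parameters taken from the paper.

```python
import numpy as np

def stft_magnitude(signal, frame_len=256, hop=128):
    """Magnitude spectrogram via a basic short-time Fourier transform."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    # one row per frame, one column per frequency bin
    return np.abs(np.fft.rfft(frames, axis=1))

t = np.linspace(0, 1, 4096, endpoint=False)
wave = np.sin(2 * np.pi * 440 * t)          # toy stand-in for a speech clip
spec = stft_magnitude(wave)                  # 2-D "image" a CNN could consume
```

In a real SER pipeline the rows would typically be mapped onto a mel scale (or replaced by MFCCs) before being fed to the network.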
17

Prasad, Dr Kanakam Siva Rama, N. Srinivasa Rao, and B. Sravani. "Advanced Model Implementation to Recognize Emotion Based Speech with Machine Learning." International Journal of Innovative Research in Engineering & Management 9, no. 6 (2022): 47–54. http://dx.doi.org/10.55524/ijirem.2022.9.6.8.

Abstract:
Emotions are essential in developing interpersonal relationships. Emotions make empathizing with others' problems easy and lead to better communication without misunderstandings. Humans possess the natural ability to understand others' emotions from their speech, hand gestures, facial expressions, etc., and react accordingly, but it is impossible for machines to extract and understand emotions unless they are trained to do so. Speech Emotion Recognition is one step towards it; SER uses ML algorithms to forecast the emotion behind a speech. The features, which include MEL, MFCC, and Chroma of a
18

G, Apeksha. "Speech Emotion Recognition Using ANN." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 05 (2024): 1–5. http://dx.doi.org/10.55041/ijsrem32584.

Abstract:
Speech is the most effective means of communication, and recognizing the emotions in speech is a crucial task. In this paper we use an Artificial Neural Network to recognize the emotions in speech. Hence, providing an efficient and accurate technique for speech-based emotion recognition is also an important task. This study is focused on seven basic human emotions (angry, disgust, fear, happy, neutral, surprise, sad). The training and validation accuracy and loss can be seen in a graph while training on the dataset. A confusion matrix for the model is created accordingly. The se
19

T, Ramya. "Speech Emotion Recognition." International Journal for Research in Applied Science and Engineering Technology 9, no. 1 (2021): 746–49. http://dx.doi.org/10.22214/ijraset.2021.32740.

20

Schuller, Björn W. "Speech emotion recognition." Communications of the ACM 61, no. 5 (2018): 90–99. http://dx.doi.org/10.1145/3129340.

21

Kambale, Prof Jagdish, Abhijeet Khedkar, Prasad Patil, and Tejas Sonone. "Speech Emotion Recognition Using Deep Learning." International Journal for Research in Applied Science and Engineering Technology 11, no. 5 (2023): 4829–33. http://dx.doi.org/10.22214/ijraset.2023.49703.

Abstract:
Due to different technical developments, speech signals have evolved into a kind of human-machine communication in the digital age. Recognizing the emotions of the person behind his or her speech is a crucial part of Human-Computer Interaction (HCI). Many methods, including numerous well-known speech analysis and classification algorithms, have been employed to extract emotions from signals in the literature on voice emotion recognition (SER). Speech Emotion Recognition (SER) approaches have become obsolete as the Deep Learning concept has come into play. In this paper, the algorithm
22

Kane, Pushkar. "Attention-based Speech Emotion Recognition Approach for Medical Application." International Journal for Research in Applied Science and Engineering Technology 11, no. 5 (2023): 3863–71. http://dx.doi.org/10.22214/ijraset.2023.52422.

Abstract:
Presently AI is used in various medical fields. Mental health is an important part of the overall health of a person and speech is the primary form for expression of emotion. Thus, Speech Emotion Recognition can be used to understand emotions of the person and help doctors focus on the cure. Speech Emotion recognition is analysis and classification of speech signals to detect the underlying emotions. This paper proposes a model for speech emotion recognition using an attention mechanism. The model is developed using Mel-frequency cepstral coefficients (MFCCs), and a combination of 2D
23

Xian, Weijia. "Speech Emotion Recognition Application for Education." BCP Education & Psychology 7 (November 7, 2022): 378–83. http://dx.doi.org/10.54691/bcpep.v7i.2691.

Abstract:
Based on convolutional neural networks, a speech recognition application capable of analyzing human emotions is designed. This speech emotion recognition can better assist teachers to understand students' emotional status in the learning process and enable them to improve their teaching methods with the help of the system, thus achieving the goal of improving students' learning efficiency. The application is based on PAD dimension, convolutional neural network to extract deep speech emotion features, and Least squares support vector machine for emotion recognition, thus improving the recogniti
24

Youddha, Beer Singh. "A Review on Emotional Speech Databases." i-manager's Journal on Computer Science 10, no. 3 (2022): 27. http://dx.doi.org/10.26634/jcom.10.3.19103.

Abstract:
Due to its numerous practical applications, human emotion recognition from speech is now a challenging and demanding research subject for scientists. Speech databases, speech features, and classifiers are the important factors for recognizing emotions from speech. The availability of suitable emotional speech databases is the first step for Speech Emotion Recognition (SER). This paper presents a comprehensive literature review of emotional speech databases. The availability of appropriate emotional speech databases across emotions and languages is summarized. A total of 26 papers for the
25

Quan, Changqin, Bin Zhang, Xiao Sun, and Fuji Ren. "A combined cepstral distance method for emotional speech recognition." International Journal of Advanced Robotic Systems 14, no. 4 (2017): 172988141771983. http://dx.doi.org/10.1177/1729881417719836.

Abstract:
Affective computing is not only the direction of reform in artificial intelligence but also an exemplification of advanced intelligent machines. Emotion is the biggest difference between human and machine. If the machine behaves with emotion, then the machine will be accepted by more people. Voice is the most natural manner of daily communication and can be easily understood and accepted. The recognition of emotional voice is an important field of artificial intelligence. However, in recognition of emotions, there often exists the phenomenon that two emotions are particularly vulnerable to co
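The paper's combined cepstral distance method is not fully specified in this excerpt, but its basic building block, a distance between cepstral coefficient vectors, can be sketched as follows. The real cepstrum is computed as the inverse FFT of the log magnitude spectrum; the frame length and number of coefficients below are illustrative, not the paper's settings.

```python
import numpy as np

def real_cepstrum(frame):
    """Real cepstrum: inverse FFT of the log magnitude spectrum."""
    spectrum = np.fft.rfft(frame)
    log_mag = np.log(np.abs(spectrum) + 1e-12)   # small floor avoids log(0)
    return np.fft.irfft(log_mag)

def cepstral_distance(a, b, n_coeffs=13):
    """Euclidean distance between low-order cepstral coefficients of two frames."""
    ca = real_cepstrum(a)[:n_coeffs]
    cb = real_cepstrum(b)[:n_coeffs]
    return float(np.linalg.norm(ca - cb))

t = np.linspace(0, 1, 1024, endpoint=False)
tone_a = np.sin(2 * np.pi * 100 * t)
tone_b = np.sin(2 * np.pi * 200 * t)

d_same = cepstral_distance(tone_a, tone_a)   # identical frames give zero distance
d_diff = cepstral_distance(tone_a, tone_b)   # distinct spectra give a positive distance
```

A distance of this kind could then be used, for instance, to compare an utterance's cepstral profile against per-emotion reference templates.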
26

Kothuri, Jhansi. "Speech Emotion Recognition: An LSTM Approach." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 09, no. 04 (2025): 1–9. https://doi.org/10.55041/ijsrem45580.

Abstract:
This paper presents a novel approach to Speech Emotion Recognition (SER) utilizing a Long Short-Term Memory (LSTM) network to classify emotions from audio inputs in real-time. The primary goal of this research is to accurately identify various emotions, including happiness, sadness, anger, fear, and surprise, enhancing user experience in applications such as human-computer interaction, virtual assistants, and mental health monitoring. The methodology involves a comprehensive process that begins with the preprocessing of audio signals to ensure clarity and consistency. This is follow
27

K, Deepak. "Improving Speech Recognition with Convolutional Neural Networks." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 04 (2024): 1–5. http://dx.doi.org/10.55041/ijsrem30472.

Abstract:
This project explores advanced techniques in speech recognition, focusing on emotion identification using Convolutional Neural Networks for improved accuracy and real-time processing efficiency. Emotion recognition from speech signals plays a crucial role in various applications, including human-computer interaction, customer service, mental health monitoring, and entertainment. This project proposes an innovative approach to emotion recognition using Convolutional Neural Networks (CNNs) applied to speech data. By leveraging advanced deep learning techniques, the proposed system aims to accura
28

G, Apeksha. "Speech Emotion Recognition Using Machine Learning." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 05 (2024): 1–5. http://dx.doi.org/10.55041/ijsrem32388.

Abstract:
Speech is the most effective means of communication, and recognizing the emotions in speech is a crucial task. In this paper we use an Artificial Neural Network to recognize the emotions in speech. Hence, providing an efficient and accurate technique for speech-based emotion recognition is also an important task. This study is focused on seven basic human emotions (angry, disgust, fear, happy, neutral, surprise, sad). The training and validation accuracy and loss can be seen in a graph while training on the dataset. A confusion matrix for the model is created accordingly. The se
29

Akash, Raghav, and C. Lakshmi Dr. "Speech Emotion Recognition using Deep Learning." International Journal of Innovative Science and Research Technology 7, no. 9 (2022): 1595–600. https://doi.org/10.5281/zenodo.7215574.

Abstract:
The goal of the project is to detect the speaker's emotions while he or she speaks. Speech produced under a condition of fear, rage or delight, for example, becomes very loud and fast, with a larger and more varied pitch range; however, in a moment of grief or tiredness, speech is slow and low-pitched. Voice and speech patterns can be used to detect human emotions, which can help improve human-machine interactions. We apply Deep Neural Networks (CNN), Support Vector Machine, and MLP classification based on auditory data for emotion produced by speech, such as Mel Frequency Cepstral Coefficie
30

Kanwal, Sofia, Sohail Asghar, and Hazrat Ali. "Feature selection enhancement and feature space visualization for speech-based emotion recognition." PeerJ Computer Science 8 (November 4, 2022): e1091. http://dx.doi.org/10.7717/peerj-cs.1091.

Abstract:
Robust speech emotion recognition relies on the quality of the speech features. We present a speech feature enhancement strategy that improves speech emotion recognition. We used the INTERSPEECH 2010 challenge feature set. We identified subsets from the feature set and applied principal component analysis to the subsets. Finally, the features are fused horizontally. The resulting feature set is analyzed using t-distributed stochastic neighbour embedding (t-SNE) before the application of the features for emotion recognition. The method is compared with the state-of-the-art methods used in the literature. The
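The subset-wise PCA and horizontal fusion this abstract outlines can be illustrated with plain numpy. The subset sizes and component counts below are made up for the example; the paper uses the INTERSPEECH 2010 feature groups, which are not reproduced here.

```python
import numpy as np

def pca_reduce(X, k):
    """Project a feature matrix X (samples x features) onto its top-k principal components."""
    Xc = X - X.mean(axis=0)                      # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                         # scores on the leading components

rng = np.random.default_rng(0)
# two hypothetical feature subsets (e.g. a prosodic group and a spectral group)
subset_a = rng.normal(size=(100, 20))
subset_b = rng.normal(size=(100, 30))

# reduce each subset separately, then fuse the reduced blocks horizontally
fused = np.hstack([pca_reduce(subset_a, 5), pca_reduce(subset_b, 8)])
```

The fused matrix (here 100 samples by 13 features) is what would then be handed to a classifier, or to t-SNE for visualization as the paper describes.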
31

Sondawale, Shweta. "Face and Speech Emotion Recognition System." International Journal for Research in Applied Science and Engineering Technology 12, no. 4 (2024): 5621–28. http://dx.doi.org/10.22214/ijraset.2024.61278.

Abstract:
Emotions serve as the cornerstone of human communication, facilitating the expression of one's inner thoughts and feelings to others. Speech Emotion Recognition (SER) represents a pivotal endeavour aimed at deciphering the emotional nuances embedded within a speaker's voice signal. Universal emotions such as neutrality, anger, happiness, and sadness form the basis of this recognition process, allowing for the identification of fundamental emotional states. To achieve this, spectral and prosodic features are leveraged, each offering unique insights into the emotional content of speech
32

Khedkar, Shilpa. "Activity Recommendation System Based on Emotion Recognition." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 09, no. 06 (2025): 1–9. https://doi.org/10.55041/ijsrem49641.

Abstract:
An Emotion Recognition-Based Activity Recommendation System aims at providing users with adequate activity recommendations based on their emotional states, using the latest developments in emotion recognition technology. This project applies speech emotion recognition techniques, focusing on current state-of-the-art methods in Triangular Region Cut-Mix augmentation to enhance the accuracy of emotion classification while preserving audio spectrogram information related to key emotions. Furthermore, it involves a dual learning framework integr
33

Byun, Sung-Woo, and Seok-Pil Lee. "A Study on a Speech Emotion Recognition System with Effective Acoustic Features Using Deep Learning Algorithms." Applied Sciences 11, no. 4 (2021): 1890. http://dx.doi.org/10.3390/app11041890.

Abstract:
The goal of the human interface is to recognize the user’s emotional state precisely. In the speech emotion recognition study, the most important issue is the effective parallel use of the extraction of proper speech features and an appropriate classification engine. Well defined speech databases are also needed to accurately recognize and analyze emotions from speech signals. In this work, we constructed a Korean emotional speech database for speech emotion analysis and proposed a feature combination that can improve emotion recognition performance using a recurrent neural network model. To i
34

Wen, Suyun. "An Analysis of Emotional Responses of Students in Bilingual Classes and Adjustment Strategies." International Journal of Emerging Technologies in Learning (iJET) 18, no. 01 (2023): 100–114. http://dx.doi.org/10.3991/ijet.v18i01.37125.

Abstract:
Students’ willingness to participate in bilingual communication is greatly influenced by their positive emotions in the bilingual class. The automatic recognition of students’ emotional state in bilingual class can assist teachers to correctly master the laws of students’ emotional changes during the bilingual learning process as fast as possible. However, the speech emotion features extracted by existing speech emotion recognition models are not universal and not suitable for bilingual speech emotion recognition, and the accuracy needs to be improved. To cope with these issues, this paper aim
35

Kumar, K. Ashok, and Dr J. L. Mazher Iqbal. "Machine Learning Based Emotion Recognition using Speech Signal." International Journal of Engineering and Advanced Technology 9, no. 1s5 (2019): 295–302. http://dx.doi.org/10.35940/ijeat.a1068.1291s519.

Abstract:
A challenging task in computer-aided services (CAS) is recognizing emotion from speech signals. In SER (speech emotion recognition), several schemes have been used for extracting emotions from the signals, comprising various classification and speech analysis methods. This manuscript presents an outline of those methods and explores some contemporary literature where existing models have been used for emotion recognition based on speech. This literature review presents contributions made towards speech emotion recognition and the features extracted for determining emotions.
36

E.S, Pallavi. "Speech Emotion Recognition Based on Machine Learning." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 05 (2024): 1–5. http://dx.doi.org/10.55041/ijsrem33995.

Abstract:
Speech is the most effective means of communication, and recognizing the emotions in speech is a crucial task. In this paper we use an Artificial Neural Network to recognize the emotions in speech. Hence, providing an efficient and accurate technique for speech-based emotion recognition is also an important task. This study is focused on seven basic human emotions (angry, disgust, fear, happy, neutral, surprise, sad). The training and validation accuracy and loss can be seen in a graph while training on the dataset. A confusion matrix for the model is created accordingly. The fea
37

Caballero-Morales, Santiago-Omar. "Recognition of Emotions in Mexican Spanish Speech: An Approach Based on Acoustic Modelling of Emotion-Specific Vowels." Scientific World Journal 2013 (2013): 1–13. http://dx.doi.org/10.1155/2013/162093.

Full text
Abstract:
An approach for the recognition of emotions in speech is presented. The target language is Mexican Spanish, and for this purpose a speech database was created. The approach consists in the phoneme acoustic modelling of emotion-specific vowels. For this, a standard phoneme-based Automatic Speech Recognition (ASR) system was built with Hidden Markov Models (HMMs), where different phoneme HMMs were built for the consonants and emotion-specific vowels associated with four emotional states (anger, happiness, neutral, sadness). Then, estimation of the emotional state from a spoken sentence is perfor…
APA, Harvard, Vancouver, ISO, and other styles
38

Prathibha, Dr G., Y. Kavya, P. Vinay Jacob, and L. Poojita. "Speech Emotion Recognition Using Deep Learning." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 07 (2024): 1–13. http://dx.doi.org/10.55041/ijsrem36262.

Full text
Abstract:
Speech is one of the primary forms of expression and is important for Emotion Recognition. Emotion Recognition is helpful to derive various useful insights about the thoughts of a person. Automatic speech emotion recognition is an active field of study in Artificial intelligence and Machine learning, which aims to generate machines that communicate with people via speech. In this work, deep learning algorithms such as Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN) are explored to extract features and classify emotions such as calm, happy, fearful, disgust, angry, neutral…
APA, Harvard, Vancouver, ISO, and other styles
39

Zheng, Chunjun, Chunli Wang, and Ning Jia. "An Ensemble Model for Multi-Level Speech Emotion Recognition." Applied Sciences 10, no. 1 (2019): 205. http://dx.doi.org/10.3390/app10010205.

Full text
Abstract:
Speech emotion recognition is a challenging and widely examined research topic in the field of speech processing. The accuracy of existing models in speech emotion recognition tasks is not high, and the generalization ability is not strong. Since the feature set and model design of effective speech directly affect the accuracy of speech emotion recognition, research on features and models is important. Because emotional expression is often correlated with the global features, local features, and model design of speech, it is often difficult to find a universal solution for effective speech emo…
APA, Harvard, Vancouver, ISO, and other styles
40

Rastogi, Rohit, Tushar Anand, Shubham Kumar Sharma, and Sarthak Panwar. "Emotion Detection via Voice and Speech Recognition." International Journal of Cyber Behavior, Psychology and Learning 13, no. 1 (2023): 1–24. http://dx.doi.org/10.4018/ijcbpl.333473.

Full text
Abstract:
Emotion detection from voice signals is needed for human-computer interaction (HCI), which is a difficult challenge. In the literature on speech emotion recognition, various well-known speech analysis and classification methods have been used to extract emotions from signals. Deep learning strategies have recently been proposed as a workable alternative to conventional methods and discussed. Several recent studies have employed these methods to identify speech-based emotions. The review examines the databases used, the emotions collected, and the contributions to speech emotion recognition. Th…
APA, Harvard, Vancouver, ISO, and other styles
41

Verma, Teena, Sahil Niranjan, Abhinav K. Gupt, Vinay Kumar, and Yash Vashist. "Emotional Recognition Using Facial Expressions and Speech Analysis." International Journal of Engineering Applied Sciences and Technology 6, no. 7 (2021): 176–80. http://dx.doi.org/10.33564/ijeast.2021.v06i07.028.

Full text
Abstract:
Emotional recognition can be made from many sources, including text, speech, hand and body language, and facial expressions. Currently, most sensory systems use only one of these sources. People's feelings change every second, and one method used for emotional recognition may not reflect emotions in the right way. This research recommends understanding and exploring people's feelings through several similar channels: speech and face. We have chosen to explore audio and video inputs to develop an ensemble model that gathers the information from all these sources and displays it in a clear and int…
APA, Harvard, Vancouver, ISO, and other styles
42

Liang, Boyuan. "Research On Emotion Management Based on Speech Analysis for Nursing Homes." Highlights in Science, Engineering and Technology 81 (January 26, 2024): 646–54. http://dx.doi.org/10.54097/75tss921.

Full text
Abstract:
Speech emotion recognition is an important research area in artificial intelligence aimed at identifying the emotional states of speakers. This paper provides an overview of the current state and key algorithms of speech emotion recognition, focusing particularly on the emotional well-being of elderly residents in nursing homes. Firstly, the paper introduces the background and application areas of speech emotion recognition, emphasizing its significance in human-computer interaction, psychological health monitoring, and emotional intelligent systems. The paper extensively discusses methods of…
APA, Harvard, Vancouver, ISO, and other styles
43

Manish, Goswami, Parate Aditya, Kapde Nisarga, Singh Shashwat, Gupta Nitiksha, and Surjuse Meena. "Development of a model for detecting emotions using CNN and LSTM." i-manager’s Journal on Software Engineering 19, no. 1 (2024): 17. https://doi.org/10.26634/jse.19.1.21324.

Full text
Abstract:
This paper presents the development of a real-time deep learning system for emotion recognition using both speech and facial inputs. For speech emotion recognition, three significant datasets (SAVEE, the Toronto Emotion Speech Set (TESS), and CREMA-D) were utilized, comprising over 75,000 samples that represent a spectrum of emotions: Anger, Sadness, Fear, Disgust, Calm, Happiness, Neutral, and Surprise, mapped to numerical labels from 1 to 8. The system identifies emotions from live speech inputs and pre-recorded audio files using a Long Short-Term Memory (LSTM) network, which is particularly effe…
APA, Harvard, Vancouver, ISO, and other styles
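The LSTM network named in the abstract above processes an audio feature sequence one step at a time while carrying a cell state. As a rough illustration only (scalar gates, arbitrary hypothetical weights, not the authors' trained model), a single LSTM step can be sketched as:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h, c, W):
    """One LSTM time step with scalar input and state (toy dimensions).
    W maps each gate name to (w_x, w_h, bias); all values are illustrative."""
    i = sigmoid(W["i"][0] * x + W["i"][1] * h + W["i"][2])   # input gate
    f = sigmoid(W["f"][0] * x + W["f"][1] * h + W["f"][2])   # forget gate
    o = sigmoid(W["o"][0] * x + W["o"][1] * h + W["o"][2])   # output gate
    g = math.tanh(W["g"][0] * x + W["g"][1] * h + W["g"][2]) # candidate value
    c_new = f * c + i * g          # cell state carries long-term context
    h_new = o * math.tanh(c_new)   # hidden state is the step's output
    return h_new, c_new

# Hypothetical weights; a real SER model learns these from data.
W = {"i": (0.5, 0.1, 0.0), "f": (0.3, 0.2, 1.0),
     "o": (0.4, 0.1, 0.0), "g": (1.0, 0.5, 0.0)}
h, c = 0.0, 0.0
for x in [0.2, -0.1, 0.4]:  # stand-in for a per-frame audio feature
    h, c = lstm_step(x, h, c, W)
```

In a real system, x would be a vector of per-frame features, the weights would be matrices, and the final hidden state would feed a classifier over the eight emotion labels.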
44

N, Tejashwini, V. Kaveri A, Keerthana P, Rajneesh Kumar, and M. Kavya C. "Multimodal Emotion Recognition." Perspectives in Communication, Embedded-systems and Signal-processing - PiCES 4, no. 8 (2020): 194–98. https://doi.org/10.5281/zenodo.4419690.

Full text
Abstract:
Recognizing different human emotions by a system has been a burning issue over the last decade. The interaction between individuals and computers will become increasingly natural if computers can perceive and react to human non-verbal communication, for example, emotions. Although a few methodologies have been proposed to recognize human emotions based on facial expressions, speech, or text, relatively limited work has combined these three modalities with other modalities to improve the capabilities of the emotion recognition system. This paper describes the qualities and the restrictions of systems based on o…
APA, Harvard, Vancouver, ISO, and other styles
45

B, Chakradhar. "Machine Learning Based Speech Emotion Recognition System." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 04 (2024): 1–5. http://dx.doi.org/10.55041/ijsrem32211.

Full text
Abstract:
In the last decade, there has been significant research into Automatic Speech Emotion Recognition (SER). The primary goal of SER is to improve human-machine interfaces. It can also monitor someone's psychological state for lie detection applications. Recently, speech emotion recognition has found uses in medicine and forensics. This paper recognizes 7 emotions using pitch and prosody features. The majority of speech features used here are in the time domain. A Support Vector Machine (SVM) classifier categorizes the emotions. The Berlin emotional database was used for this task. A good recognit…
APA, Harvard, Vancouver, ISO, and other styles
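The pitch and prosody features named in this abstract are typically computed per frame in the time domain. A minimal pure-Python sketch (my own illustration, not the paper's feature set) of three such features: short-time energy, zero-crossing rate, and an autocorrelation F0 estimate:

```python
import math

def frame_features(samples, sr, fmin=50, fmax=400):
    """Time-domain prosody features for one frame: energy, ZCR, F0 estimate.
    fmin/fmax bound the plausible speech pitch range (illustrative defaults)."""
    n = len(samples)
    energy = sum(s * s for s in samples) / n                       # short-time energy
    zcr = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0) / (n - 1)
    # Autocorrelation pitch search: the lag with the strongest positive
    # correlation corresponds to one pitch period.
    lo, hi = int(sr / fmax), int(sr / fmin)
    best_lag, best_r = 0, 0.0
    for lag in range(lo, min(hi, n - 1)):
        r = sum(samples[i] * samples[i + lag] for i in range(n - lag))
        if r > best_r:
            best_r, best_lag = r, lag
    f0 = sr / best_lag if best_lag else 0.0
    return energy, zcr, f0

# A 200 Hz sine sampled at 8 kHz stands in for a voiced speech frame.
sr = 8000
tone = [math.sin(2 * math.pi * 200 * t / sr) for t in range(800)]
energy, zcr, f0 = frame_features(tone, sr)
```

Frame-level features like these are then aggregated over an utterance (means, ranges, contours) and passed to the SVM classifier.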
46

Farmer, Eliot, Crescent Jicol, and Karin Petrini. "Musicianship Enhances Perception But Not Feeling of Emotion From Others’ Social Interaction Through Speech Prosody." Music Perception 37, no. 4 (2020): 323–38. http://dx.doi.org/10.1525/mp.2020.37.4.323.

Full text
Abstract:
Music expertise has been shown to enhance emotion recognition from speech prosody. Yet, it is currently unclear whether music training enhances the recognition of emotions through other communicative modalities such as vision and whether it enhances the feeling of such emotions. Musicians and nonmusicians were presented with visual, auditory, and audiovisual clips consisting of the biological motion and speech prosody of two agents interacting. Participants judged as quickly as possible whether the expressed emotion was happiness or anger, and subsequently indicated whether they also felt the…
APA, Harvard, Vancouver, ISO, and other styles
47

Vicsi, Klára, and Dávid Sztahó. "Recognition of Emotions on the Basis of Different Levels of Speech Segments." Journal of Advanced Computational Intelligence and Intelligent Informatics 16, no. 2 (2012): 335–40. http://dx.doi.org/10.20965/jaciii.2012.p0335.

Full text
Abstract:
Emotions play a very important role in human-human and human-machine communication. They can be expressed by voice, bodily gestures, and facial movements. People’s acceptance of any kind of intelligent device depends, to a large extent, on how the device reflects emotions. This is the reason why automatic emotion recognition is a recent research topic. In this paper we deal with automatic emotion recognition from human voice. Numerous papers in this field deal with database creation and with the examination of acoustic features appropriate for such recognition, but only a few attempts were made…
APA, Harvard, Vancouver, ISO, and other styles
48

Hazra, Sumon Kumar, Romana Rahman Ema, Syed Md Galib, Shalauddin Kabir, and Nasim Adnan. "Emotion recognition of human speech using deep learning method and MFCC features." Radioelectronic and Computer Systems, no. 4 (November 29, 2022): 161–72. http://dx.doi.org/10.32620/reks.2022.4.13.

Full text
Abstract:
Subject matter: Speech emotion recognition (SER) is an ongoing, interesting research topic. Its purpose is to establish interactions between humans and computers through speech and emotion. To recognize speech emotions, five deep learning models are used in this paper: Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), Artificial Neural Network, Multi-Layer Perceptron, and a merged CNN-LSTM network. The Toronto Emotional Speech Set (TESS), Surrey Audio-Visual Expressed Emotion (SAVEE), and Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) datasets were used for this…
APA, Harvard, Vancouver, ISO, and other styles
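The MFCC features used in this study rest on the mel scale, which spaces filterbank centers to mimic human pitch perception. A small sketch of the standard Hz/mel conversion and filter-edge placement (illustrative helper names, not code from the paper):

```python
import math

def hz_to_mel(f):
    """Standard mel-scale mapping: roughly linear below 1 kHz, log above."""
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    """Inverse of hz_to_mel."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filter_edges(n_filters, fmin, fmax, sr, n_fft):
    """FFT bin indices of the edge/center points for a triangular mel
    filterbank: n_filters + 2 points, equally spaced on the mel scale."""
    m_lo, m_hi = hz_to_mel(fmin), hz_to_mel(fmax)
    mels = [m_lo + i * (m_hi - m_lo) / (n_filters + 1)
            for i in range(n_filters + 2)]
    return [int((n_fft + 1) * mel_to_hz(m) / sr) for m in mels]

# 26 triangular filters from 0 Hz to the 4 kHz Nyquist of 8 kHz audio.
edges = mel_filter_edges(26, 0.0, 4000.0, 8000, 512)
```

A full MFCC pipeline then applies triangular filters at these bins to a frame's power spectrum, takes logs of the filter energies, and applies a DCT to decorrelate the resulting coefficients.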
49

Shirbhate, Tanvi, Devashish Deshmukh, Chetan Rajurkar, Sayali Sagane, and Prof. (Dr) Anup W. Burange. "Speech Emotion Recognition Using Machine Learning." International Journal of Ingenious Research, Invention and Development (IJIRID) 3, no. 2 (2024): 101–9. https://doi.org/10.5281/zenodo.11049046.

Full text
Abstract:
Language is the most important medium of communication. Emotions play an important role in human life. Recognizing emotion in speech is both important and challenging because we are dealing with human-computer interaction. Speech Emotion Recognition (SER) has many applications, and a lot of research has focused on this interest in recent years. Speech Emotion Recognition (SER) has become an important collaboration at the intersection of music processing and machine learning. The goal of the system is to identify and classify emotions in speech, leading to human-computer applications, psych…
APA, Harvard, Vancouver, ISO, and other styles
50

Nam, Youngja, and Chankyu Lee. "Cascaded Convolutional Neural Network Architecture for Speech Emotion Recognition in Noisy Conditions." Sensors 21, no. 13 (2021): 4399. http://dx.doi.org/10.3390/s21134399.

Full text
Abstract:
Convolutional neural networks (CNNs) are a state-of-the-art technique for speech emotion recognition. However, CNNs have mostly been applied to noise-free emotional speech data, and limited evidence is available for their applicability in emotional speech denoising. In this study, a cascaded denoising CNN (DnCNN)–CNN architecture is proposed to classify emotions from Korean and German speech in noisy conditions. The proposed architecture consists of two stages. In the first stage, the DnCNN exploits the concept of residual learning to perform denoising; in the second stage, the CNN performs th…
APA, Harvard, Vancouver, ISO, and other styles