Journal articles on the topic 'Text emotion recognition'

Consult the top 50 journal articles for your research on the topic 'Text emotion recognition.'


1

Sahoo, Sipra. "Emotion Recognition from Text." International Journal for Research in Applied Science and Engineering Technology 6, no. 3 (March 31, 2018): 237–43. http://dx.doi.org/10.22214/ijraset.2018.3038.

2

Deng, Jiawen, and Fuji Ren. "Hierarchical Network with Label Embedding for Contextual Emotion Recognition." Research 2021 (January 6, 2021): 1–9. http://dx.doi.org/10.34133/2021/3067943.

Abstract:
Emotion recognition has been widely used in applications such as mental health monitoring and emotional management, and it is usually treated as a text classification task. It is, however, a more complex problem: the relations among the emotions expressed in a text are non-negligible. In this paper, a hierarchical model with label embedding is proposed for contextual emotion recognition. Specifically, a hierarchical model is utilized to learn the emotional representation of a given sentence based on its contextual information, and a label embedding matrix is trained by joint learning so that emotion correlations contribute to the final prediction. Comparison experiments are conducted on the Chinese emotional corpus Ren-CECps, and the results indicate that the approach performs well on the textual emotion recognition task.
3

Fujisawa, Akira, Kazuyuki Matsumoto, Minoru Yoshida, and Kenji Kita. "Emotion Estimation Method Based on Emoticon Image Features and Distributed Representations of Sentences." Applied Sciences 12, no. 3 (January 25, 2022): 1256. http://dx.doi.org/10.3390/app12031256.

Abstract:
This paper proposes an emotion recognition method for tweets containing emoticons using their emoticon image and language features. Some of the existing methods register emoticons and their facial expression categories in a dictionary and use them, while other methods recognize emoticon facial expressions based on the various elements of the emoticons. However, highly accurate emotion recognition cannot be performed unless the recognition is based on a combination of the features of sentences and emoticons. Therefore, we propose a model that recognizes emotions by extracting the shape features of emoticons from their image data and applying the feature vector input that combines the image features with features extracted from the text of the tweets. Based on evaluation experiments, the proposed method is confirmed to achieve high accuracy and shown to be more effective than methods that use text features only.
4

Liu, Changxiu, S. Kirubakaran, and Alfred Daniel J. "Deep Learning Approach for Emotion Recognition Analysis in Text Streams." International Journal of Technology and Human Interaction 18, no. 2 (April 1, 2022): 1–21. http://dx.doi.org/10.4018/ijthi.313927.

Abstract:
Social media sites employ various approaches to track feelings, from diagnosing neurological problems such as fear in individuals to assessing a population's public sentiment. One essential obstacle for automatic emotion recognition is that the task varies with fluctuating constraints, language, and shifts in interpretation. Therefore, in this paper, a deep learning-based emotion recognition (DL-EM) system is proposed to describe the various relational effects in emotional groups. A soft classification method is suggested to quantify the tendency of a message and allocate it to each emotional class. A supervised framework for emotion recognition in streaming text messages is developed and tested. Its two major activities are an offline training task and an interactive emotion classification technique: the first builds templates from text responses to describe sentiment, and the second implements a two-stage framework to identify emotions in live streams of text messages for dedicated emotion monitoring.
5

Hatem, Ahmed Samit, and Abbas M. Al-Bakry. "The Information Channels of Emotion Recognition: A Review." Webology 19, no. 1 (January 20, 2022): 927–41. http://dx.doi.org/10.14704/web/v19i1/web19064.

Abstract:
Humans are emotional beings. When we express emotions, we frequently use several modalities, whether overtly (e.g., speech, facial expressions) or implicitly (e.g., body language, text). Emotion recognition has lately piqued the interest of many researchers, and various techniques have been studied. This article gives a review of emotion recognition. The survey covers the single and multiple sources of data, or information channels, that may be utilized to identify emotions, and includes a literature analysis of current studies on each information channel, the techniques employed, and the findings obtained. Finally, some of the present emotion recognition problems and recommendations for future work are mentioned.
6

Bharti, Santosh Kumar, S. Varadhaganapathy, Rajeev Kumar Gupta, Prashant Kumar Shukla, Mohamed Bouye, Simon Karanja Hingaa, and Amena Mahmoud. "Text-Based Emotion Recognition Using Deep Learning Approach." Computational Intelligence and Neuroscience 2022 (August 23, 2022): 1–8. http://dx.doi.org/10.1155/2022/2645381.

Abstract:
Sentiment analysis is a method to identify people’s attitudes, sentiments, and emotions towards a given goal, such as people, activities, organizations, services, subjects, and products. Emotion detection is a subset of sentiment analysis as it predicts the unique emotion rather than just stating positive, negative, or neutral. In recent times, many researchers have already worked on speech and facial expressions for emotion recognition. However, emotion detection in text is a tedious task as cues are missing, unlike in speech, such as tonal stress, facial expression, pitch, etc. To identify emotions from text, several methods have been proposed in the past using natural language processing (NLP) techniques: the keyword approach, the lexicon-based approach, and the machine learning approach. However, there were some limitations with keyword- and lexicon-based approaches as they focus on semantic relations. In this article, we have proposed a hybrid (machine learning + deep learning) model to identify emotions in text. Convolutional neural network (CNN) and Bi-GRU were exploited as deep learning techniques. Support vector machine is used as a machine learning approach. The performance of the proposed approach is evaluated using a combination of three different types of datasets, namely, sentences, tweets, and dialogs, and it attains an accuracy of 80.11%.
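The hybrid pipeline described above (deep networks for feature learning, an SVM for the final decision) lends itself to a compact illustration. The sketch below is ours, not the authors' code: the vocabulary size, dimensions, emotion count, and dummy data are all illustrative assumptions.

```python
# Hypothetical sketch of a hybrid deep-feature + SVM text emotion classifier,
# loosely in the spirit of the CNN/Bi-GRU + SVM approach described above.
import torch
import torch.nn as nn
from sklearn.svm import SVC

class CnnBiGruEncoder(nn.Module):
    """Encodes a token-id sequence into a fixed-size feature vector."""
    def __init__(self, vocab_size=10000, emb_dim=128, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, hidden, kernel_size=3, padding=1)
        self.gru = nn.GRU(hidden, hidden, batch_first=True, bidirectional=True)

    def forward(self, ids):                            # ids: (batch, seq_len)
        x = self.emb(ids).transpose(1, 2)              # (batch, emb_dim, seq_len)
        x = torch.relu(self.conv(x)).transpose(1, 2)   # (batch, seq_len, hidden)
        _, h = self.gru(x)                             # h: (2, batch, hidden)
        return torch.cat([h[0], h[1]], dim=1)          # (batch, 2 * hidden)

# Deep features are extracted first; an SVM then makes the final decision.
encoder = CnnBiGruEncoder()
ids = torch.randint(0, 10000, (32, 50))    # dummy batch: 32 texts of 50 tokens
labels = torch.randint(0, 6, (32,))        # dummy labels for 6 emotion classes
with torch.no_grad():
    feats = encoder(ids).numpy()
svm = SVC(kernel="rbf").fit(feats, labels.numpy())
print(svm.predict(feats[:3]))
```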
7

Quan, Changqin, and Fuji Ren. "Visualizing Emotions from Chinese Blogs by Textual Emotion Analysis and Recognition Techniques." International Journal of Information Technology & Decision Making 15, no. 01 (January 2016): 215–34. http://dx.doi.org/10.1142/s0219622014500710.

Abstract:
Research on blog emotion analysis and recognition has become increasingly important in recent years. In this study, based on the Chinese blog emotion corpus Ren-CECps, we analyze and compare blog emotion visualization at different text levels: word, sentence, and paragraph. A blog emotion visualization system is then designed for practical applications. Machine learning methods are applied to implement blog emotion recognition at the different textual levels. Built on this emotion recognition engine, the visualization interface provides a more intuitive display of emotions in blogs, detecting bloggers' emotions and capturing emotional changes rapidly. In addition, we evaluated the performance of sentence emotion recognition by comparing five classification algorithms under different schemas, which demonstrates the effectiveness of the Complement Naive Bayes model for sentence emotion recognition. The system can recognize multi-label emotions in blogs, providing a richer and more detailed expression of emotion.
8

Huang, Yuxin. "Research on Lovelorn Emotion Recognition Based on Ernie Tiny." Frontiers in Computing and Intelligent Systems 2, no. 2 (January 2, 2023): 66–69. http://dx.doi.org/10.54097/fcis.v2i2.4145.

Abstract:
Topics related to sentiment classification and emotion recognition are an important part of Natural Language Processing research and can be used to analyze users' sentiment tendencies towards brands, understand the public's attitudes and opinions on public events, and detect users' mental health, among other applications. Past research has usually been based on positive and negative emotions or on multi-category emotions such as happiness, anger, and sadness, while there has been little research on recognizing the specific emotion of being lovelorn. This study aims to identify lovelorn emotion in text, using the pretrained deep learning model ERNIE Tiny trained on a dataset consisting of 5008 pieces of Chinese lovelorn text crawled from the social media platform Weibo and 4998 pieces of ordinary text extracted from an existing available dataset. The results show that ERNIE Tiny performs well in classifying whether a text contains lovelorn emotion, with an F1 score of 0.941929, a precision of 0.942300, and a recall of 0.941928 on the test set.
9

Zhang, Ziheng. "Review of text emotion detection." Highlights in Science, Engineering and Technology 12 (August 26, 2022): 213–21. http://dx.doi.org/10.54097/hset.v12i.1456.

Abstract:
Emotion is one of the essential characteristics of being human, and people add their own emotions when writing essays or reports. Text emotion detection identifies the leading emotional tone of a text. It is a new research field related to sentiment analysis and a subdomain of NLP: emotion analysis detects and identifies emotion types, such as anger, happiness, or sadness, through textual expression. For some applications, the technology could help large companies' Chinese and Russian data analysts gauge public opinion, conduct nuanced market research, and understand product reputation. At present, text emotion is one of the most studied fields in the literature, but it remains difficult because it involves deep neural networks and requires the application of psychological knowledge. In this article, we discuss the concept of text emotion detection and introduce and analyze its main methods. In addition, this paper discusses the advantages and weaknesses of the technology, as well as future research directions and problems to be solved.
10

Su, Sheng-Hsiung, Hao-Chiang Koong Lin, Cheng-Hung Wang, and Zu-Ching Huang. "Multi-Modal Affective Computing Technology Design the Interaction between Computers and Human of Intelligent Tutoring Systems." International Journal of Online Pedagogy and Course Design 6, no. 1 (January 2016): 13–28. http://dx.doi.org/10.4018/ijopcd.2016010102.

Abstract:
In this paper, the authors use emotion recognition in two ways: facial expression recognition and emotion recognition from text. This dual-mode operation not only strengthens the recognition results but also increases the number of emotion types that can be recognized, so that learning situations can be handled smoothly. Facial expressions are identified through trained image processing, while emotion in text is identified through emotional keywords, syntax, semantics, and logical calculus. The system identifies learners' emotions and learning situations, chooses appropriate instructional strategies and curriculum content, and communicates between user and system through agents so that learners can learn well. The study uses triangulated evaluation methods: observation, questionnaires, and interviews. The experiment groups subjects by their level of art awareness (art vs. non-art) and compares three approaches (traditional teaching, the affective tutoring system, and a course website without emotional factors) to obtain, analyze, and evaluate the data.
11

Cai, Linqin, Yaxin Hu, Jiangong Dong, and Sitong Zhou. "Audio-Textual Emotion Recognition Based on Improved Neural Networks." Mathematical Problems in Engineering 2019 (December 31, 2019): 1–9. http://dx.doi.org/10.1155/2019/2593036.

Abstract:
With the rapid development of social media, single-modal emotion recognition can hardly satisfy the demands of current emotion recognition systems. Aiming to optimize system performance, a multimodal emotion recognition model using speech and text is proposed in this paper. Considering the complementarity between the different modes, a CNN (convolutional neural network) and LSTM (long short-term memory) were combined in the form of binary channels to learn acoustic emotion features, while an effective Bi-LSTM (bidirectional long short-term memory) network was used to capture the textual features. Furthermore, a deep neural network was applied to learn and classify the fused features, with the final emotional state determined by the output of both the speech and text emotion analyses. Multimodal fusion experiments were carried out to validate the proposed model on the IEMOCAP database. In comparison with the single modalities, the overall recognition accuracy increased by 6.70% over text alone and by 13.85% over speech alone. Experimental results show that the recognition accuracy of our multimodal model is higher than that of the single modalities and outperforms other published multimodal models on the test datasets.
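As a rough illustration of the late-fusion step described above, the sketch below concatenates an acoustic feature vector and a textual feature vector and classifies the result with a small dense network. All dimensions and the random inputs are made-up placeholders, not the paper's architecture.

```python
# Schematic late-fusion classifier: audio and text features are concatenated
# and mapped to emotion classes by a small feed-forward network.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, audio_dim=128, text_dim=256, n_classes=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(audio_dim + text_dim, 128), nn.ReLU(),
            nn.Linear(128, n_classes))

    def forward(self, audio_feats, text_feats):
        return self.net(torch.cat([audio_feats, text_feats], dim=1))

model = FusionClassifier()
logits = model(torch.randn(8, 128), torch.randn(8, 256))  # batch of 8 samples
print(logits.shape)                                       # torch.Size([8, 4])
```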
12

Batbaatar, Erdenebileg, Meijing Li, and Keun Ho Ryu. "Semantic-Emotion Neural Network for Emotion Recognition From Text." IEEE Access 7 (2019): 111866–78. http://dx.doi.org/10.1109/access.2019.2934529.

13

BianBian, Jiao, R. Leelavathi, N. Lohgheswary, and Z. M. Nopiah. "Emotion Recognition and Analysis Of Netizens Based On Micro-Blog During Covid-19 Epidemic." Jurnal Kejuruteraan si5, no. 2 (November 30, 2022): 177–89. http://dx.doi.org/10.17576/jkukm-2022-si5(2)-19.

Abstract:
This research concerns emotion recognition and analysis of Micro-blog short texts. Emotion recognition is an important text classification task in Natural Language Processing. The data come from 100K Micro-blog records related to the COVID-19 theme collected by the DataFountain platform; the data were manually labeled with a negative, positive, or neutral emotional tendency. The empirical part adopts a dictionary-based emotion recognition method and machine learning emotion recognition methods. The algorithms used include support vector machine and naive Bayes based on TF-IDF, and support vector machine and LSTM based on word2vec. The five sets of results are compared. Combined with statistical analysis methods, the emotions of netizens in the early stage of the epidemic are analyzed for public opinion. This research uses machine learning algorithms combined with statistical analysis to analyze current events in real time, which is of great significance for the introduction and implementation of national policies.
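For readers who want to reproduce the general shape of the TF-IDF baselines compared in this study, a minimal scikit-learn version looks like the following; the toy texts and three-way labels are placeholders, not the authors' Weibo data.

```python
# TF-IDF features with SVM and naive Bayes classifiers, two of the baseline
# combinations mentioned above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["masks sold out again", "volunteers delivered food", "case numbers unchanged"]
labels = ["negative", "positive", "neutral"]

for clf in (LinearSVC(), MultinomialNB()):
    model = make_pipeline(TfidfVectorizer(), clf)
    model.fit(texts, labels)
    print(type(clf).__name__, model.predict(["volunteers helped a family today"]))
```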
14

Lim, Myung-Jin, Moung-Ho Yi, and Ju-Hyun Shin. "Intrinsic Emotion Recognition Considering the Emotional Association in Dialogues." Electronics 12, no. 2 (January 8, 2023): 326. http://dx.doi.org/10.3390/electronics12020326.

Abstract:
Computer communication via text messaging or Social Networking Services (SNS) has become increasingly popular, and many studies analyze user information or opinions and recognize emotions from large amounts of data. Current methods for emotion recognition in dialogues rely on analyzing emotion keywords or vocabulary, and dialogue data are mostly classified with a single emotion. Datasets labeled with multiple emotions have recently emerged, but most of them are in English. Accurate emotion recognition requires a method for recognizing multiple emotions in one sentence, and multi-emotion recognition research on Korean dialogue datasets is also needed. Since dialogues are exchanges between speakers, one's feelings may be changed by the words of others, and feelings, once generated, may last for a long period of time. Emotions are expressed not only through vocabulary but also indirectly through the dialogue. To improve the performance of emotion recognition, it is necessary to analyze Emotional Association in Dialogues (EAD) to effectively reflect the various factors that induce emotions. Therefore, in this paper, we propose a more accurate emotion recognition method to overcome the limitations of single-emotion recognition. We implement Intrinsic Emotion Recognition (IER) to understand the meaning of a dialogue and recognize complex emotions. In addition, conversations are classified according to their characteristics, and the correlations between IER results are analyzed to derive the Emotional Association in Dialogues and apply it. To verify the usefulness of the proposed technique, IER with EAD is tested and evaluated. The evaluation shows that the proposed method achieves the best Micro-F1 performance, at 74.8%. Using IER with the proposed EAD can improve the accuracy and performance of emotion recognition in dialogues.
15

Pólya, Tibor, and István Csertő. "Emotion Recognition Based on the Structure of Narratives." Electronics 12, no. 4 (February 11, 2023): 919. http://dx.doi.org/10.3390/electronics12040919.

Abstract:
One important application of natural language processing (NLP) is the recognition of emotions in text. Most current emotion analyzers use a set of linguistic features such as emotion lexicons, n-grams, word embeddings, and emoticons. This study proposes a new strategy to perform emotion recognition, which is based on the homologous structure of emotions and narratives. It is argued that emotions and narratives share both a goal-based structure and an evaluation structure. The new strategy was tested in an empirical study with 117 participants who recounted two narratives about their past emotional experiences, including one positive and one negative episode. Immediately after narrating each episode, the participants reported their current affective state using the Affect Grid. The goal-based structure and evaluation structure of the narratives were analyzed with a hybrid method. First, a linguistic analysis of the texts was carried out, including tokenization, lemmatization, part-of-speech tagging, and morphological analysis. Second, an extensive set of rule-based algorithms was used to analyze the goal-based structure of, and evaluations in, the narratives. Third, the output was fed into machine learning classifiers of narrative structural features that previously proved to be effective predictors of the narrator’s current affective state. This hybrid procedure yielded a high average F1 score (0.72). The results are discussed in terms of the benefits of employing narrative structure analysis in NLP-based emotion recognition.
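The first, linguistic stage of such a hybrid pipeline can be illustrated with an off-the-shelf NLP library. This is a generic stand-in under our own assumptions (spaCy and its small English model), not the toolchain or rule set used in the study.

```python
# Tokenization, lemmatization, POS tagging and morphological analysis,
# i.e. the preprocessing steps named in the abstract above.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes this model has been installed
doc = nlp("I finally passed the exam and called my mother.")
for tok in doc:
    print(tok.text, tok.lemma_, tok.pos_, tok.morph)
```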
16

Hirt, Franziska, Egon Werlen, Ivan Moser, and Per Bergamin. "Measuring emotions during learning: lack of coherence between automated facial emotion recognition and emotional experience." Open Computer Science 9, no. 1 (December 13, 2019): 308–17. http://dx.doi.org/10.1515/comp-2019-0020.

Abstract:
Measuring emotions non-intrusively via affective computing provides a promising source of information for adaptive learning and intelligent tutoring systems. Using non-intrusive, simultaneous measures of emotions, such systems could steadily adapt to students' emotional states. One drawback, however, is the lack of evidence on how such modern measures of emotions relate to traditional self-reports. The aim of this study was to compare a prominent area of affective computing, facial emotion recognition, to students' self-reports of interest, boredom, and valence. We analyzed different types of aggregation of the simultaneous facial emotion recognition estimates and compared them to self-reports collected after reading a text. Analyses of 103 students revealed no relationship between the aggregated facial emotion recognition estimates of the software FaceReader and self-reports. Irrespective of the type of aggregation of the facial emotion recognition estimates, neither the epistemic emotions (i.e., boredom and interest) nor the estimates of valence predicted the respective self-report measure. We conclude that assumptions about the subjective experience of emotions cannot necessarily be transferred to other emotional components, such as those estimated by affective computing. We advise waiting for more comprehensive evidence on the predictive validity of facial emotion recognition for learning before relying on it in educational practice.
17

Deshmukh, Shrikala Madhav. "Mood Enhancing Music Player Based on Speech Emotion Recognition and Text Emotion Recognition." International Journal of Emerging Trends in Engineering Research 8, no. 6 (June 25, 2020): 2770–73. http://dx.doi.org/10.30534/ijeter/2020/90862020.

18

Zhao, Juntao. "Multichannel Fusion Based on Modified CNN for Image Emotion Recognition." 電腦學刊 33, no. 1 (February 2022): 13–19. http://dx.doi.org/10.53106/199115992022023301002.

Abstract:
Social media networks are an integral part of people's daily lives. Users share images and texts to express their emotions and opinions, and analyzing this content can help understand and predict user behavior for marketing, public opinion monitoring, and personalized recommendation. Weibo, WeChat, and other social media are important channels of self-expression, and images are more intuitive than text, so more scholars have begun to study image emotion analysis. At present, image emotion analysis methods pay little attention to the influence of salient objects and faces on image emotion expression. Therefore, we propose a multichannel fusion method based on a modified CNN for image emotion recognition. First, the salient target and face regions are detected in the whole image. Then a feature pyramid is used to improve the CNN that recognizes the salient target's emotion, and a weighted-loss CNN for emotion recognition is constructed with a multi-layer supervision module. Finally, the salient-target emotion, the face emotion, and the emotion recognized directly from the whole image are fused to obtain the final emotion classification. Experimental results show that the proposed method improves the accuracy of image emotion recognition.
19

S R, Adarsh. "Enhancement of Text Based Emotion Recognition Performances Using Word Clusters." International Journal of Research -GRANTHAALAYAH 7, no. 1 (January 31, 2019): 238–50. http://dx.doi.org/10.29121/granthaalayah.v7.i1.2019.1051.

Abstract:
Human Computer Interaction (HCI) research studies the use of computer technology, focusing on the interfaces between human users and computers. Expressing emotion in text is challenging, as it is produced in plain text and short messaging language as well. This paper gives an overview of emotion recognition from various texts and describes emotion detection methodologies applying a machine learning approach (MLA). It addresses the problem of feature sparseness and largely improves emotion recognition performance on short texts by achieving three aims: (i) representing short texts with word cluster features, (ii) presenting a novel word clustering algorithm, and (iii) using a new feature weighting scheme for emotion classification. Experiments were performed on a publicly available dataset, classifying emotions with different features and weighting schemes. Using word clusters in place of unigrams as features, the micro-averages of accuracy improved by more than three percentage points, suggesting that the overall accuracy of the text emotion classifier improved. All macro-averages improved by more than one percentage point, suggesting that word cluster features can advance the generalization ability of the emotion classifier. The experimental results suggest that word cluster features and the proposed weighting scheme can moderately resolve the problems of emotion recognition performance and feature sparseness.
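The word-cluster idea can be sketched as follows: assign each vocabulary word to a cluster learned from word vectors, then represent a text by cluster counts instead of unigram counts. The clustering algorithm and the weighting scheme in the paper differ; the random vectors here are stand-ins for real embeddings.

```python
# Bag-of-clusters features: words are mapped to cluster ids and each text
# becomes a vector of cluster frequencies.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
vocab = ["happy", "joyful", "sad", "tearful", "angry", "furious"]
vectors = rng.normal(size=(len(vocab), 50))   # stand-in word embeddings

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(vectors)
word2cluster = dict(zip(vocab, kmeans.labels_))

def cluster_features(tokens, n_clusters=3):
    """Return the bag-of-clusters vector for one tokenized text."""
    feats = np.zeros(n_clusters)
    for t in tokens:
        if t in word2cluster:
            feats[word2cluster[t]] += 1
    return feats

print(cluster_features(["happy", "joyful", "sad"]))
```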
20

Yi, Moung Ho, Myung Jin Lim, and Ju Hyun Shin. "Multi-Emotion Recognition Model with Text and Speech Ensemble." Korean Institute of Smart Media 11, no. 8 (September 30, 2022): 65–72. http://dx.doi.org/10.30693/smj.2022.11.8.65.

Abstract:
Due to COVID-19, counseling has moved from face-to-face to non-face-to-face settings, and the importance of non-face-to-face counseling is increasing. Its advantage is that consultations can take place online anytime, anywhere, and safely during the pandemic. However, it is difficult to understand the client's state of mind because non-verbal expressions are hard to convey. It is therefore important to recognize emotions by accurately analyzing text and voice in order to understand the client well during non-face-to-face counseling. In this paper, text data are vectorized using FastText after separating consonants, and voice data are vectorized by extracting features using log-Mel spectrograms and MFCCs. We propose a multi-emotion recognition model that recognizes five emotions from the vectorized data using an LSTM model. Multi-emotion recognition is evaluated using RMSE. In the experiments, the RMSE of the proposed model was 0.2174, the lowest error compared to models using text or voice data alone.
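On the audio side, the two representations named above (log-Mel spectrogram and MFCC) are standard and easy to extract; a hedged sketch with librosa follows. The file name is a placeholder, and the text-side FastText step is omitted.

```python
# Extracting a log-Mel spectrogram and MFCCs from one utterance.
import librosa

y, sr = librosa.load("utterance.wav", sr=16000)             # placeholder path
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
log_mel = librosa.power_to_db(mel)                          # (64, n_frames)
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40)          # (40, n_frames)
print(log_mel.shape, mfcc.shape)
```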
21

Verma, Teena, Sahil Niranjan, Abhinav K. Gupt, Vinay Kumar, and Yash Vashist. "Emotional Recognition Using Facial Expressions and Speech Analysis." International Journal of Engineering Applied Sciences and Technology 6, no. 7 (November 1, 2021): 176–80. http://dx.doi.org/10.33564/ijeast.2021.v06i07.028.

Abstract:
Emotions can be recognized from many sources, including text, speech, hand gestures, body language, and facial expressions, yet most current sensory systems use only one of these sources. People's feelings change every second, and a method that processes a single source may not reflect emotions correctly. This research therefore explores people's feelings through two complementary channels, speech and face. We use audio and video inputs to develop an ensemble model that gathers information from all of these sources and displays it in a clear and interpretable way. By improving emotion recognition accuracy, the proposed multisensory emotion recognition system can help improve the naturalness of human-computer interaction.
22

Poorna, S. S., S. Devika Nair, Arsha Narayan, Veena Prasad, Parsi Sai Himaja, Suraj S. Kamath, and G. J. Nair. "Bimodal Emotion Recognition Using Audio and Facial Features." Journal of Computational and Theoretical Nanoscience 17, no. 1 (January 1, 2020): 189–94. http://dx.doi.org/10.1166/jctn.2020.8649.

Abstract:
A multimodal emotion recognition system using speech and facial images is proposed. For this purpose, a video database was developed containing emotions in three affective states: anger, sadness, and happiness. The audio and the snapshots of facial expressions acquired from the videos constitute the bimodal input for recognizing emotions. The spoken sentences in the database include both text-dependent and text-independent sentences in the Malayalam language. The audio features, obtained by short-time processing of speech, are energy, zero-crossing count, pitch, and Mel Frequency Cepstral Coefficients. For facial expressions, landmark features of the face (eyebrows, eyes, and mouth) obtained using the Viola-Jones algorithm are used. The supervised learning methods K-Nearest Neighbor and Artificial Neural Network are used for emotion analysis. System performance is evaluated for three cases: audio features alone, facial features alone, and both feature sets together. Further, the effect of text-dependent versus text-independent audio is analyzed. The analysis shows that text-independent videos utilizing both modalities with K-Nearest Neighbor (highest accuracy 82.78%) are the most effective for recognizing emotions in the database considered.
23

Kalyani, BJD, Kopparthi Praneeth Sai, N. M. Deepika, Shaik Shahanaz, and G. Lohitha. "Smart Multi-Model Emotion Recognition System with Deep learning." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 1 (February 6, 2023): 139–44. http://dx.doi.org/10.17762/ijritcc.v11i1.6061.

Abstract:
Emotion recognition has added a new dimension to sentiment analysis. This paper presents a multi-modal human emotion recognition web application that considers three traits (speech, text, and facial expressions) to extract and analyze the emotions of people who are giving interviews. With the rapid development of machine learning, artificial intelligence, and deep learning, emotion recognition is getting more attention from researchers: machines can be said to be intelligent only if they are able to perform human-like recognition or sentiment analysis. Emotion recognition helps in spam call detection, blackmail call detection, customer service, lie detection, audience engagement, and spotting suspicious behavior. The focus of this paper is facial expression analysis carried out using deep learning approaches together with speech signals and input text.
24

Islam, Juyana, M. A. H. Akhand, Md Ahsan Habib, Md Abdus Samad Kamal, and Nazmul Siddique. "Recognition of Emotion from Emoticon with Text in Microblog Using LSTM." Advances in Science, Technology and Engineering Systems Journal 6, no. 3 (June 2021): 347–54. http://dx.doi.org/10.25046/aj060340.

25

Azam, Nazish, Tauqir Ahmad, and Nazeef Ul Haq. "Automatic emotion recognition in healthcare data using supervised machine learning." PeerJ Computer Science 7 (December 15, 2021): e751. http://dx.doi.org/10.7717/peerj-cs.751.

Abstract:
Human feelings are fundamental to perceiving the conduct and state of mind of an individual. A healthy emotional state is one significant factor in improving quality of life, while poor emotional health can lead to social or psychological problems. Recognizing or detecting feelings in online healthcare data gives important and helpful information regarding the emotional state of patients. Detecting a patient's emotion about a specific disease using text from online sources is a challenging task. In this paper, we propose a method for the automatic detection of patients' emotions in healthcare data using supervised machine learning approaches. For this purpose, we created a new dataset named EmoHD, comprising 4,202 text samples across eight disease classes and six emotion classes, gathered from different online resources. We used six supervised machine learning models based on different feature engineering techniques and performed a detailed comparison of the six algorithms using different feature vectors on our dataset. We achieved the highest accuracy, 87%, using a multilayer perceptron, compared with other state-of-the-art models. Moreover, we use the emotional guidance scale to show that there is a link between negative emotion and psychological health issues. Our proposed work will be helpful for automatically detecting a patient's emotion during disease and for preventing extreme acts such as suicide, mental disorders, or psychological health issues. The implementation details are publicly available at: https://bit.ly/2NQeGET.
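A toy reconstruction of the winning configuration reported above (text features fed to a multilayer perceptron) can be written in a few lines of scikit-learn; the two invented sentences stand in for the EmoHD samples, and the exact feature engineering of the paper is not reproduced.

```python
# TF-IDF features classified by an MLP, mirroring the best-performing
# model family named in the abstract.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

texts = ["I fear what the diagnosis will say", "Feeling hopeful after the treatment"]
emotions = ["fear", "joy"]

model = make_pipeline(
    TfidfVectorizer(),
    MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0))
model.fit(texts, emotions)
print(model.predict(["hopeful about my recovery"]))
```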
26

Hulliyah, Khodijah, et al. "Analysis of Emotion Recognition Model Using Electroencephalogram (EEG) Signals Based on Stimuli Text." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 3 (April 10, 2021): 1384–93. http://dx.doi.org/10.17762/turcomat.v12i3.910.

Abstract:
Recognizing emotions through brain waves with facial or sound stimuli is widely studied, but few studies use text stimuli. Therefore, this study aims to analyze an emotion recognition experiment that uses sentiment-toned text as the stimulus, measured with EEG. The emotion classification process uses a random forest model, compared with two benchmark models, a support vector machine and a decision tree. The raw data come from scraping Twitter data, and the emotional annotation of the dataset was carried out manually with four classes: happiness, sadness, fear, and anger. The annotated dataset was tested using an electroencephalogram (EEG) device attached to the participant's head to determine the brain waves appearing after reading the text. The results showed that the random forest model has the highest accuracy, at 98%, slightly above the decision tree at 88%, while the SVM performed poorly at 32%. Furthermore, the agreement on angry emotion between manual annotation and the EEG device was high across the three models, with an average value above 90%, because reading with an angry expression is easier to perform.
27

Guo, Jia. "Deep learning approach to text analysis for human emotion detection from big data." Journal of Intelligent Systems 31, no. 1 (January 1, 2022): 113–26. http://dx.doi.org/10.1515/jisys-2022-0001.

Abstract:
Emotion recognition has arisen as an essential field of study that can expose a variety of valuable inputs. Emotion can be articulated in several observable ways, such as speech, facial expressions, written text, and gestures. Emotion recognition in a text document is fundamentally a content-based classification issue, combining notions from natural language processing (NLP) and deep learning. Hence, in this study, deep learning assisted semantic text analysis (DLSTA) is proposed for human emotion detection using big data. Emotion detection from textual sources can be done using notions of natural language processing. Word embeddings are extensively utilized for several NLP tasks, such as machine translation, sentiment analysis, and question answering. NLP techniques improve the performance of learning-based methods by incorporating the semantic and syntactic features of the text. The numerical outcomes demonstrate that the suggested method achieves a markedly superior human emotion detection rate of 97.22% and a classification accuracy of 98.02% compared with different state-of-the-art methods, and can be further enhanced with other emotional word embeddings.
28

Płaza, Mirosław, Robert Kazała, Zbigniew Koruba, Marcin Kozłowski, Małgorzata Lucińska, Kamil Sitek, and Jarosław Spyrka. "Emotion Recognition Method for Call/Contact Centre Systems." Applied Sciences 12, no. 21 (October 28, 2022): 10951. http://dx.doi.org/10.3390/app122110951.

Abstract:
Nowadays, one of the important aspects of research on call/contact centre (CC) systems is how to automate their operations. Process automation is influenced by the continuous development in the implementation of virtual assistants. The effectiveness of virtual assistants depends on numerous factors. One of the most important is correctly recognizing the intent of clients conversing with the machine. Recognizing intentions is not an easy process, as often the client’s actual intentions can only be correctly identified after considering the client’s emotional state. When it comes to human–machine communication, the ability of a virtual assistant to recognize the client’s emotional state would greatly improve its effectiveness. This paper proposes a new method for recognizing interlocutors’ emotions dedicated directly to contact centre systems. The developed method provides opportunities to determine emotional states in text and voice channels. It provides opportunities to explore both the client’s and the agent’s emotional states. Information about agents’ emotions can be used to build their behavioural profiles, which is also applicable in contact centres. In addition, the paper explored the possibility of emotion assessment based on automatic transcriptions of recordings, which also positively affected emotion recognition performance in the voice channel. The research used actual conversations that took place during the operation of a large, commercial contact centre. The proposed solution makes it possible to recognize the emotions of customers contacting the hotline and agents handling these calls. Using this information in practical applications can increase the efficiency of agents’ work, efficiency of bots used in CC and increase customer satisfaction.
29

Douiji, Yasmina, Hajar Mousannif, and Hassan Al Moatassime. "Using YouTube Comments for Text-based Emotion Recognition." Procedia Computer Science 83 (2016): 292–99. http://dx.doi.org/10.1016/j.procs.2016.04.128.

30

Jain, Vybhav, S. B. Rajeshwari, and Jagadish S. Kallimani. "Emotion Analysis from Human Voice Using Various Prosodic Features and Text Analysis." Journal of Computational and Theoretical Nanoscience 17, no. 9 (July 1, 2020): 4244–47. http://dx.doi.org/10.1166/jctn.2020.9055.

Abstract:
Emotion analysis is a dynamic field of research that aims to provide methods for recognizing a person's emotions from their voice alone; it is more famously known as the Speech Emotion Recognition (SER) problem. The problem has been studied for more than a decade, with results coming from either voice analysis or text analysis. Individually, both methods have shown good accuracy, but using them in unison has shown much better results than either one considered individually. When people of different age groups are talking, it is important to understand the emotions behind what they say, as this in turn helps us react better. To achieve this, the paper implements a model that performs emotion analysis based on both tone and text analysis. The prosodic features of the tone are analyzed, and the speech is then converted to text. Once the text has been extracted from the speech, sentiment analysis is performed on it to further improve the accuracy of the emotion recognition.
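The two-stage idea (transcribe the speech, then run sentiment analysis on the transcript) can be sketched with common Python libraries. The library choices below (SpeechRecognition with Google's free API, NLTK's VADER) and the file path are our assumptions, not necessarily the authors' tools.

```python
# Speech -> text -> sentiment, the second stage of the approach above.
import speech_recognition as sr
from nltk.sentiment import SentimentIntensityAnalyzer  # needs the vader_lexicon resource

recognizer = sr.Recognizer()
with sr.AudioFile("speech_sample.wav") as source:      # placeholder audio file
    audio = recognizer.record(source)
text = recognizer.recognize_google(audio)              # online transcription

scores = SentimentIntensityAnalyzer().polarity_scores(text)
print(text, scores)                                    # 'compound' lies in [-1, 1]
```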
31

Zygadło, Artur, Marek Kozłowski, and Artur Janicki. "Text-Based Emotion Recognition in English and Polish for Therapeutic Chatbot." Applied Sciences 11, no. 21 (October 29, 2021): 10146. http://dx.doi.org/10.3390/app112110146.

Abstract:
In this article, we present the results of our experiments on sentiment and emotion recognition for English and Polish texts, aiming to work in the context of a therapeutic chatbot. We created a dedicated dataset by adding samples of neutral texts to an existing English-language emotion-labeled corpus. Next, using neural machine translation, we developed a Polish version of the English database. A bilingual, parallel corpus created in this way, named CORTEX (CORpus of Translated Emotional teXts), labeled with three sentiment polarity classes and nine emotion classes, was used for experiments on classification. We employed various classifiers: Naïve Bayes, Support Vector Machines, fastText, and BERT. The results obtained were satisfactory: we achieved the best scores for the BERT-based models, which yielded accuracy of over 90% for sentiment (3-class) classification and almost 80% for emotion (9-class) classification. We compared the results for both languages and discussed the differences. Both the accuracy and the F1-scores for Polish turned out to be slightly inferior to those for English, with the highest difference visible for BERT.
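A minimal scaffold for the strongest model family in this comparison, a BERT sequence classifier, is shown below; the multilingual checkpoint and three-way label count are illustrative, the classification head is untrained until fine-tuned on a corpus such as CORTEX, and the corpus itself is not loaded here.

```python
# BERT-based sentiment/emotion classification scaffold (pre-fine-tuning).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "bert-base-multilingual-cased"     # covers both English and Polish
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=3)

batch = tok(["I feel much better today."], return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**batch).logits        # meaningful only after fine-tuning
print(logits.softmax(dim=-1))
```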
32

ElBedwehy, Mona Nagy, G. M. Behery, and Reda Elbarougy. "Emotional Speech Recognition Based on Weighted Distance Optimization System." International Journal of Pattern Recognition and Artificial Intelligence 34, no. 11 (February 19, 2020): 2050027. http://dx.doi.org/10.1142/s0218001420500275.

Abstract:
Human emotion plays a major role in expressing feelings through speech. Emotional speech recognition is an important research field in human-computer interaction; ultimately, endowing machines with the ability to perceive users' emotions will enable more intuitive and reliable interaction. Researchers have presented many models to recognize human emotion from speech. One of the well-known models is the Gaussian mixture model (GMM). Nevertheless, GMM may sometimes have one or more components with ill-conditioned or singular covariance matrices when the number of features is high and some features are correlated. In this research, a new system based on weighted distance optimization (WDO) has been developed for recognizing emotional speech. The main purpose of the WDO system (WDOS) is to address the GMM shortcomings and increase recognition accuracy. We found that WDOS achieves considerable success through a comparative study of all emotional states and the characteristics of individual emotional states. WDOS has a superior performance accuracy of 86.03% for the Japanese language, improving Japanese emotion recognition accuracy by 18.43% compared with GMM and k-means.
33

Byun, Sung-Woo, Ju-Hee Kim, and Seok-Pil Lee. "Multi-Modal Emotion Recognition Using Speech Features and Text-Embedding." Applied Sciences 11, no. 17 (August 28, 2021): 7967. http://dx.doi.org/10.3390/app11177967.

Abstract:
Recently, intelligent personal assistants, chatbots, and AI speakers have been used more broadly as communication interfaces, and the demand for more natural interaction has increased as well. Humans can express emotions in various ways, such as voice tone or facial expressions, so multimodal approaches to recognizing human emotions have been studied. In this paper, we propose an emotion recognition method that delivers higher accuracy by using both speech and text data, exploiting the strengths of each. We extracted 43 feature vectors, such as spectral features, harmonic features, and MFCCs, from the speech datasets, and 256-dimensional embedding vectors from the transcripts using a pre-trained Tacotron encoder. The acoustic feature vectors and embedding vectors were fed into separate deep learning models, each producing a probability for the predicted output classes. The results show that the proposed model performs more accurately than previous work.
34

Li, Qingbiao, Chunhua Wu, Zhe Wang, and Kangfeng Zheng. "Hierarchical Transformer Network for Utterance-Level Emotion Recognition." Applied Sciences 10, no. 13 (June 28, 2020): 4447. http://dx.doi.org/10.3390/app10134447.

Abstract:
While there have been significant advances in detecting emotions in text, in the field of utterance-level emotion recognition (ULER), there are still many problems to be solved. In this paper, we address some challenges in ULER in dialog systems. (1) The same utterance can deliver different emotions when it is in different contexts. (2) Long-range contextual information is hard to effectively capture. (3) Unlike the traditional text classification problem, for most datasets of this task, they contain inadequate conversations or speech. (4) To better model the emotional interaction between speakers, speaker information is necessary. To address the problems of (1) and (2), we propose a hierarchical transformer framework (apart from the description of other studies, the “transformer” in this paper usually refers to the encoder part of the transformer) with a lower-level transformer to model the word-level input and an upper-level transformer to capture the context of utterance-level embeddings. For problem (3), we use bidirectional encoder representations from transformers (BERT), a pretrained language model, as the lower-level transformer, which is equivalent to introducing external data into the model and solves the problem of data shortage to some extent. For problem (4), we add speaker embeddings to the model for the first time, which enables our model to capture the interaction between speakers. Experiments on three dialog emotion datasets, Friends, EmotionPush, and EmoryNLP, demonstrate that our proposed hierarchical transformer network models obtain competitive results compared with the state-of-the-art methods in terms of the macro-averaged F1-score (macro-F1).
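The two-level design described above can be condensed into a few lines: a pretrained BERT encodes each utterance, and a small upper-level transformer contextualizes the sequence of utterance vectors. Dimensions are kept at BERT's defaults, and the speaker-embedding detail is omitted; this is a simplified reading, not the authors' implementation.

```python
# Hierarchical encoding: utterance vectors from BERT, dialog context from
# an upper-level transformer encoder.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")
upper = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=768, nhead=8, batch_first=True),
    num_layers=2)

dialog = ["How did it go?", "I failed again.", "Oh no, I'm so sorry."]
batch = tok(dialog, return_tensors="pt", padding=True)
with torch.no_grad():
    utt_vecs = bert(**batch).last_hidden_state[:, 0]  # [CLS] vector per utterance
    context = upper(utt_vecs.unsqueeze(0))            # (1, n_utterances, 768)
print(context.shape)
```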
35

Chaudhari, Aayushi, Chintan Bhatt, Achyut Krishna, and Carlos M. Travieso-González. "Facial Emotion Recognition with Inter-Modality-Attention-Transformer-Based Self-Supervised Learning." Electronics 12, no. 2 (January 5, 2023): 288. http://dx.doi.org/10.3390/electronics12020288.

Abstract:
Emotion recognition is a very challenging research field due to its complexity, as individual differences in cognitive–emotional cues involve a wide variety of ways, including language, expressions, and speech. If we use video as the input, we can acquire a plethora of data for analyzing human emotions. In this research, we use features derived from separately pretrained self-supervised learning models to combine text, audio (speech), and visual data modalities. The fusion of features and representation is the biggest challenge in multimodal emotion classification research. Because of the large dimensionality of self-supervised learning characteristics, we present a unique transformer and attention-based fusion method for incorporating multimodal self-supervised learning features that achieved an accuracy of 86.40% for multimodal emotion classification.
36

Atmaja, Bagus Tris, and Akira Sasou. "Effects of Data Augmentations on Speech Emotion Recognition." Sensors 22, no. 16 (August 9, 2022): 5941. http://dx.doi.org/10.3390/s22165941.

Abstract:
Data augmentation techniques have recently gained more adoption in speech processing, including speech emotion recognition. Although more data tend to be more effective, there may be a trade-off in which more data will not provide a better model. This paper reports experiments on investigating the effects of data augmentation in speech emotion recognition. The investigation aims at finding the most useful type of data augmentation and the number of data augmentations for speech emotion recognition in various conditions. The experiments are conducted on the Japanese Twitter-based emotional speech and IEMOCAP datasets. The results show that for speaker-independent data, two data augmentations with glottal source extraction and silence removal exhibited the best performance among others, even with more data augmentation techniques. For the text-independent data (including speaker and text-independent), more data augmentations tend to improve speech emotion recognition performances. The results highlight the trade-off between the number of data augmentations and the performance of speech emotion recognition showing the necessity to choose a proper data augmentation technique for a specific condition.
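The specific augmentations studied in the paper (e.g., glottal source extraction and silence removal) are specialized, but the general mechanism of waveform-level augmentation is easy to demonstrate. The sketch below applies two common generic augmentations, additive noise and pitch shifting, to a synthetic tone; it illustrates the category, not the paper's techniques.

```python
# Two simple waveform augmentations applied to a synthetic 1-second tone.
import librosa
import numpy as np

sr = 16000
t = np.linspace(0, 1.0, sr, endpoint=False)
y = 0.5 * np.sin(2 * np.pi * 220.0 * t)                     # stand-in "speech"

noisy = y + 0.005 * np.random.randn(len(y))                 # additive Gaussian noise
shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=2)  # up two semitones
print(y.shape, noisy.shape, shifted.shape)
```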
37

Md Saad, Mastura, Nursuriati Jamil, and Raseeda Hamzah. "Evaluation of Support Vector Machine and Decision Tree for Emotion Recognition of Malay Folklores." Bulletin of Electrical Engineering and Informatics 7, no. 3 (September 1, 2018): 479–86. http://dx.doi.org/10.11591/eei.v7i3.1279.

Abstract:
In this paper, the performance of Support Vector Machine (SVM) and Decision Tree (DT) classifiers in recognizing emotions in Malay folklores is presented. This work continues our storytelling speech synthesis work by adding emotions for more natural storytelling. A total of 100 documents from children's short stories were collected and used as the dataset for the text-based emotion recognition experiment. Term Frequency-Inverse Document Frequency (TF-IDF) features are extracted from the text documents and classified using SVM and DT. Four common emotions, namely happy, angry, fearful, and sad, are classified using the two classifiers. Results showed that DT outperformed SVM by more than 22.2% in accuracy. However, the overall emotion recognition rate is only moderate, suggesting that improvement is needed in future work. The accuracy of the emotion recognition should be improved in future studies by using semantic feature extractors or by incorporating deep learning for classification.
38

Sorinas, Jennifer, Maria Dolores Grima, Jose Manuel Ferrandez, and Eduardo Fernandez. "Identifying Suitable Brain Regions and Trial Size Segmentation for Positive/Negative Emotion Recognition." International Journal of Neural Systems 29, no. 02 (February 21, 2019): 1850044. http://dx.doi.org/10.1142/s0129065718500442.

Abstract:
The development of suitable EEG-based emotion recognition systems has become a main target in the last decades for Brain Computer Interface (BCI) applications. However, there are scarce algorithms and procedures for real-time classification of emotions. The present study aims to investigate the feasibility of real-time emotion recognition implementation by the selection of parameters such as the appropriate time window segmentation, target bandwidths, and cortical regions. We recorded the EEG-neural activity of 24 participants while they were looking at and listening to an audiovisual database composed of positive and negative emotional video clips. We tested 12 different temporal window sizes, 6 ranges of frequency bands, and 60 electrodes located along the entire scalp. Our results showed a correct classification of 86.96% for positive stimuli; the correct classification for negative stimuli was slightly lower (80.88%). The best time window size, from the tested 1 s to 12 s segments, was 12 s. Although more studies are still needed, these preliminary results provide a reliable way to develop accurate EEG-based emotion classification.
39

Schmidt, Thomas, Miriam Schlindwein, Katharina Lichtner, and Christian Wolff. "Investigating the Relationship Between Emotion Recognition Software and Usability Metrics." i-com 19, no. 2 (August 26, 2020): 139–51. http://dx.doi.org/10.1515/icom-2020-0009.

Abstract:
Due to progress in affective computing, various forms of general-purpose sentiment/emotion recognition software have become available. However, the application of such tools in usability engineering (UE) for measuring the emotional state of participants is rarely employed. We investigate if the application of sentiment/emotion recognition software is beneficial for gathering objective and intuitive data that can predict usability similar to traditional usability metrics. We present the results of a UE project examining this question for the three modalities text, speech and face. We perform a large-scale usability test (N = 125) with a counterbalanced within-subject design with two websites of varying usability. We have identified a weak but significant correlation between text-based sentiment analysis on the text acquired via thinking aloud and SUS scores, as well as a weak positive correlation between the proportion of neutrality in users' voice and SUS scores. However, for the majority of the output of emotion recognition software, we could not find any significant results. Emotion metrics could not be used to successfully differentiate between two websites of varying usability. Regression models, either unimodal or multimodal, could not predict usability metrics. We discuss reasons for these results and how to continue research with more sophisticated methods.
40

Ouyang, Wensi. "Design of Semantic Matching Model of Folk Music in Occupational Therapy Based on Audio Emotion Analysis." Occupational Therapy International 2022 (June 18, 2022): 1–10. http://dx.doi.org/10.1155/2022/6841445.

Full text
Abstract:
The main semantic symbol systems for people to express their emotions include natural language and music. The analysis and establishment of semantic association between language and music is helpful to provide more accurate retrieval and recommendation services for text and music. Existing researches mainly focus on the surface symbolic features and association of natural language and music, which limits the performance and interpretability of applications based on semantic association of natural language and music. Emotion is the main meaning of music expression, and the semantic range of text expression includes emotion. In this paper, the semantic features of music are extracted from audio features, and the semantic matching model of audio emotion analysis is constructed to analyze ethnic music audio emotion through feature extraction ability of deep structure. The model is based on the framework of emotional semantic matching technology and realizes the emotional semantic matching of music fragments and words through semantic emotional recognition algorithm. Multiple experiments show that when W = 0.65 , the recognition rate of multichannel fusion model is 88.42%, and the model can reasonably realize audio emotion analysis. When the spatial dimension of music data changes, the classification accuracy reaches the highest when the spatial dimension is 25. Analysing the semantic association of audio promotes the application of folk music in occupational therapy.
APA, Harvard, Vancouver, ISO, and other styles
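
The W = 0.65 result above suggests a weighted late fusion of two channels. A minimal sketch of that fusion step follows, with illustrative channel outputs and an assumed three-emotion label set; the channel models themselves are stand-ins, not the paper's networks.

```python
# Sketch of weighted two-channel late fusion; probabilities are placeholders.
import numpy as np

def fuse(audio_probs: np.ndarray, text_probs: np.ndarray, w: float = 0.65) -> np.ndarray:
    """Weight the audio channel by w and the text channel by 1 - w."""
    return w * audio_probs + (1.0 - w) * text_probs

audio_probs = np.array([0.7, 0.2, 0.1])   # e.g. happy / sad / calm, assumed labels
text_probs = np.array([0.5, 0.3, 0.2])
print(fuse(audio_probs, text_probs).argmax())  # index of the fused emotion
```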
41

Solanki, Ms Pinal. "A Study on Emotion Detection & Classification from Text using Machine Learning." Journal of Artificial Intelligence, Machine Learning and Neural Network, no. 22 (March 26, 2022): 40–46. http://dx.doi.org/10.55529/jaimlnn.22.40.46.

Full text
Abstract:
People use online social networks to share their opinions and thoughts on a variety of subjects and topics with their friends, family, and relations through text, photographs, audio, and video messages and posts. On specific social, national, and global topics, they can share their thoughts, mental states, moments, and viewpoints. Given the variety of communication options available, text is one of the most popular mediums of communication on social media. The study described here aims to detect and analyse the sentiment and emotion expressed by people in their messages, and then use that information to generate suggestions. The authors collected comments and replies on a few specific topics and created a dataset containing text, sentiment, emotion, and other data. Emotion identification from text is a young research topic closely related to sentiment analysis. Anger, disgust, fear, happiness, sadness, and surprise are examples of emotions that may be detected and understood from the expression of texts using emotion analysis. Emotion detection focuses on feature extraction and word recognition, because pre-processing techniques improve the accuracy of classification.
APA, Harvard, Vancouver, ISO, and other styles
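
A minimal sketch of the preprocess, feature-extract, classify pipeline the abstract outlines, using a TF-IDF bag of words and a linear classifier. The six labels follow the abstract; the training texts, features, and model choice are illustrative assumptions.

```python
# Toy text emotion classification pipeline; data and model are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["I can't believe this happened!", "This is wonderful news",
         "I'm so scared of tomorrow", "That smell is revolting",
         "I miss her every day", "How dare you speak to me like that"]
labels = ["surprise", "happiness", "fear", "disgust", "sadness", "anger"]

model = make_pipeline(
    TfidfVectorizer(lowercase=True, stop_words="english"),  # preprocessing + features
    LogisticRegression(max_iter=1000),                      # classifier
)
model.fit(texts, labels)
print(model.predict(["What a lovely surprise to see you"]))
```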
42

Qin, Yu Qiang, and Xue Ying Zhang. "HMM-Based Speaker Emotional Recognition Technology for Speech Signal." Advanced Materials Research 230-232 (May 2011): 261–65. http://dx.doi.org/10.4028/www.scientific.net/amr.230-232.261.

Full text
Abstract:
In emotion classification of speech signals, the popular features employed are statistics of fundamental frequency, energy contour, duration of silence, and voice quality. However, the performance of systems employing these features degrades substantially when more than two categories of emotion are to be classified. In this paper, a text-independent method of emotion classification of speech is proposed. The proposed method uses short-time log frequency power coefficients (LFPC) to represent the speech signals and a discrete Hidden Markov Model (HMM) as the classifier. The category labels used are the archetypal emotions of anger, joy, sadness, and neutral. Results show that the proposed system yields an average accuracy of 82.55% and a best accuracy of 94.4% in the classification of the 4 emotions. Results also reveal that LFPC is a better choice of feature parameters for emotion classification than the traditional feature parameters.
APA, Harvard, Vancouver, ISO, and other styles
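
A sketch of the per-emotion HMM decision rule the paper describes: train one model per emotion and classify by maximum log-likelihood. A Gaussian HMM from the third-party hmmlearn package stands in for the paper's discrete HMM, and random vectors stand in for LFPC frame features; shapes and counts are assumptions.

```python
# Per-emotion HMM classification sketch; data and model settings are assumed.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
EMOTIONS = ["anger", "joy", "sadness", "neutral"]
N_FEATS = 12  # LFPC dimensionality, assumed

models = {}
for i, emo in enumerate(EMOTIONS):
    # Placeholder training data: 20 utterances of 50 frames each, shifted
    # per class so the toy models are separable.
    seqs = rng.standard_normal((20 * 50, N_FEATS)) + i
    model = GaussianHMM(n_components=4, covariance_type="diag", n_iter=20)
    model.fit(seqs, lengths=[50] * 20)
    models[emo] = model

test = rng.standard_normal((50, N_FEATS)) + 2  # should resemble "sadness"
print(max(EMOTIONS, key=lambda e: models[e].score(test)))
```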
43

Salam, Shaikh Abdul, and Rajkumar Gupta. "Emotion Detection and Recognition from Text using Machine Learning." International Journal of Computer Sciences and Engineering 6, no. 6 (June 30, 2018): 341–45. http://dx.doi.org/10.26438/ijcse/v6i6.341345.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Alswaidan, Nourah, and Mohamed El Bachir Menai. "Hybrid Feature Model for Emotion Recognition in Arabic Text." IEEE Access 8 (2020): 37843–54. http://dx.doi.org/10.1109/access.2020.2975906.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Zatarain-Cabada, Ramón, María Lucia Barrón-Estrada, Jorge García-Lizárraga, Gilberto Muñoz-Sandoval, and José Mario Ríos-Félix. "Java Tutoring System with Facial and Text Emotion Recognition." Research in Computing Science 106, no. 1 (December 31, 2015): 49–58. http://dx.doi.org/10.13053/rcs-106-1-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Wu, Chenjian, Chengwei Huang, and Hong Chen. "Text-independent speech emotion recognition using frequency adaptive features." Multimedia Tools and Applications 77, no. 18 (February 13, 2018): 24353–63. http://dx.doi.org/10.1007/s11042-018-5742-x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Habib, Md Ahsan, M. A. H. Akhand, and Md Abdus Samad Kamal. "Emotion Recognition from Microblog Managing Emoticon with Text and Classifying using 1D CNN." Journal of Computer Science 18, no. 12 (December 1, 2022): 1170–78. http://dx.doi.org/10.3844/jcssp.2022.1170.1178.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Gitey, Charu, and Dr Kamlesh Namdev. "A Comprehensive Study on Emotional State Analysis of Humans from EEG Signals." SMART MOVES JOURNAL IJOSCIENCE 4, no. 12 (December 13, 2018): 6. http://dx.doi.org/10.24113/ijoscience.v4i12.176.

Full text
Abstract:
Emotion plays an important role in daily human life and is an important feature of human interaction. Because of its adaptive role, it motivates people to respond quickly to stimuli in their environment, improving communication, learning, and decision making. With the increasing role of the brain-computer interface (BCI) in user-computer interaction, automatic emotion recognition has become an area of interest in the last decade. Emotions may be recognized from facial expressions, gestures, speech, and text, and may be recorded in different ways, such as electroencephalography (EEG), positron emission tomography (PET), and magnetic resonance imaging (MRI). In this research work, feature extraction, feature reduction, and classification of emotions are evaluated across different methods to recognize and classify emotional states such as fear, sadness, frustration, happiness, pleasure, and satisfaction from EEG signals.
APA, Harvard, Vancouver, ISO, and other styles
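
A minimal sketch of the extract, reduce, classify chain this study evaluates, with synthetic stand-ins for the EEG features and PCA as one common reduction step; the feature counts, six-class labels, and classifier are illustrative assumptions.

```python
# Toy feature-reduction-plus-classification chain; all data are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.standard_normal((120, 300))   # 120 trials x 300 extracted EEG features
y = rng.integers(0, 6, size=120)      # 6 emotional states, as listed above

clf = make_pipeline(StandardScaler(), PCA(n_components=20), SVC())
print(cross_val_score(clf, X, y, cv=5).mean())
```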
49

Gitey, Charu. "Genetic Algorithm based Emotional State Evaluation from Filtered EEG Data." INTERNATIONAL JOURNAL ONLINE OF SCIENCE 5, no. 3 (March 28, 2019): 8. http://dx.doi.org/10.24113/ijoscience.v5i3.193.

Full text
Abstract:
Emotion plays an important role in daily human life and is an important feature of human interaction. Because of its adaptive role, it motivates people to respond quickly to stimuli in their environment, improving communication, learning, and decision making. With the increasing role of the brain-computer interface (BCI) in user-computer interaction, automatic emotion recognition has become an area of interest in the last decade. Emotions may be recognized from facial expressions, gestures, speech, and text, and may be recorded in different ways, such as electroencephalography (EEG), positron emission tomography (PET), and magnetic resonance imaging (MRI). In this research work, feature extraction, feature reduction, and classification of emotions are evaluated across different methods to recognize and classify emotional states such as fear, sadness, frustration, happiness, pleasure, and satisfaction from EEG signals.
APA, Harvard, Vancouver, ISO, and other styles
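
Since this abstract repeats the companion paper's, the sketch below follows the title instead: a bare-bones genetic algorithm selecting an EEG feature subset by cross-validated accuracy. The population size, rates, operators, and data are all illustrative assumptions, not the paper's design.

```python
# Generic GA feature-selection sketch; not the paper's algorithm.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.standard_normal((100, 40))   # 100 trials x 40 candidate EEG features
y = rng.integers(0, 2, size=100)     # binary emotional state, placeholder

def fitness(mask):
    """Cross-validated accuracy of an SVM on the selected feature subset."""
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(), X[:, mask], y, cv=3).mean()

pop = rng.random((12, 40)) < 0.5     # 12 random boolean feature masks
for _ in range(10):                  # 10 generations, assumed
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-6:]]              # keep the best half
    pa = parents[rng.integers(0, 6, size=6)]
    pb = parents[rng.integers(0, 6, size=6)]
    children = np.where(rng.random(pa.shape) < 0.5, pa, pb)  # uniform crossover
    children ^= rng.random(children.shape) < 0.05            # bit-flip mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.flatnonzero(best))
```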
50

Hung, Lai Po, and Suraya Alias. "Beyond Sentiment Analysis: A Review of Recent Trends in Text Based Sentiment Analysis and Emotion Detection." Journal of Advanced Computational Intelligence and Intelligent Informatics 27, no. 1 (January 20, 2023): 84–95. http://dx.doi.org/10.20965/jaciii.2023.p0084.

Full text
Abstract:
Sentiment analysis is probably one of the best-known areas in text mining. However, in recent years, as big data rose in popularity, more areas of text classification have been explored. Perhaps the next task to catch on is emotion detection, the task of identifying emotions, because emotions are the finer-grained information that can be extracted from opinions. Beyond writer sentiment, writer emotion is also valuable data. Emotion detection can be performed using text, facial expressions, verbal communication, and brain waves; however, the focus of this review is on text-based sentiment analysis and emotion detection. The internet has provided an avenue for the public to express their opinions easily, and these expressions contain not only positive or negative sentiments but emotions as well. These emotions can help in social behaviour analysis and in decision and policy making for companies and countries. Emotion detection can further support other tasks such as opinion mining and early depression detection. This review provides a comprehensive analysis of the shift in recent trends from text sentiment analysis to emotion detection and of the challenges in these tasks. We summarize some of the works of the last five years, examine the methods they used, and look at the models of emotion classes that are generally referenced. The trend in text-based emotion detection has shifted from early keyword-based comparisons to machine learning and deep learning algorithms that give the task more flexibility and better performance.
APA, Harvard, Vancouver, ISO, and other styles
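
The "early keyword-based" baseline the review contrasts with learned models can be stated in a few lines; the lexicon here is a toy assumption, not one of the surveyed resources.

```python
# Keyword-lexicon emotion detection, the early baseline the review describes.
LEXICON = {
    "happy": "joy", "delighted": "joy",
    "furious": "anger", "annoyed": "anger",
    "terrified": "fear", "worried": "fear",
}

def keyword_emotion(text: str) -> str:
    """Vote over lexicon hits; return the most frequent emotion, if any."""
    votes = [LEXICON[w] for w in text.lower().split() if w in LEXICON]
    return max(set(votes), key=votes.count) if votes else "none"

print(keyword_emotion("I am delighted and happy today"))  # -> joy
```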