Academic literature on the topic "Text emotion recognition"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles

Consult the thematic lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Text emotion recognition".

Next to every source in the reference list there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication in PDF format and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Text emotion recognition"

1

Sahoo, Sipra. "Emotion Recognition from Text." International Journal for Research in Applied Science and Engineering Technology 6, no. 3 (March 31, 2018): 237–43. http://dx.doi.org/10.22214/ijraset.2018.3038.

Full text

2

Deng, Jiawen, and Fuji Ren. "Hierarchical Network with Label Embedding for Contextual Emotion Recognition." Research 2021 (January 6, 2021): 1–9. http://dx.doi.org/10.34133/2021/3067943.

Full text
Abstract
Emotion recognition has been used widely in various applications such as mental health monitoring and emotional management. Usually, emotion recognition is regarded as a text classification task. Emotion recognition is a more complex problem, and the relations of emotions expressed in a text are nonnegligible. In this paper, a hierarchical model with label embedding is proposed for contextual emotion recognition. Especially, a hierarchical model is utilized to learn the emotional representation of a given sentence based on its contextual information. To give emotion correlation-based recognition, a label embedding matrix is trained by joint learning, which contributes to the final prediction. Comparison experiments are conducted on Chinese emotional corpus RenCECps, and the experimental results indicate that our approach has a satisfying performance in textual emotion recognition task.
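The abstract sketches the recipe (a hierarchical encoder over sentences plus a jointly trained label-embedding matrix) without implementation detail. As a rough PyTorch illustration of that general idea only, not the authors' code, one can score each contextualized sentence vector against learned emotion-label embeddings; every layer size and name below is an assumption.

import torch
import torch.nn as nn

class HierarchicalLabelEmbeddingModel(nn.Module):
    """Toy hierarchical encoder with a jointly learned label-embedding matrix."""
    def __init__(self, vocab_size, num_emotions, emb_dim=100, hid_dim=128):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        # word-level encoder: one sentence -> one vector
        self.word_rnn = nn.GRU(emb_dim, hid_dim, batch_first=True, bidirectional=True)
        # sentence-level encoder: sequence of sentence vectors -> contextual vectors
        self.sent_rnn = nn.GRU(2 * hid_dim, hid_dim, batch_first=True, bidirectional=True)
        # label embeddings live in the same space as the text representation
        self.label_emb = nn.Parameter(torch.randn(num_emotions, 2 * hid_dim))

    def forward(self, docs):
        # docs: (batch, num_sentences, num_words) of token ids
        b, s, w = docs.shape
        words = self.word_emb(docs.view(b * s, w))            # (b*s, w, emb)
        _, h = self.word_rnn(words)                           # h: (2, b*s, hid)
        sent_vecs = torch.cat([h[0], h[1]], dim=-1).view(b, s, -1)
        ctx, _ = self.sent_rnn(sent_vecs)                     # (b, s, 2*hid)
        # score each contextualized sentence against every emotion label embedding
        return ctx @ self.label_emb.t()                       # (b, s, num_emotions)

model = HierarchicalLabelEmbeddingModel(vocab_size=5000, num_emotions=8)
fake_batch = torch.randint(0, 5000, (2, 4, 12))  # 2 documents, 4 sentences, 12 tokens each
print(model(fake_batch).shape)                   # torch.Size([2, 4, 8])

Training the label embeddings jointly with the encoder is what lets correlated emotions end up close together in the shared space, which is the effect the abstract attributes to the label-embedding matrix.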
3

Fujisawa, Akira, Kazuyuki Matsumoto, Minoru Yoshida, and Kenji Kita. "Emotion Estimation Method Based on Emoticon Image Features and Distributed Representations of Sentences." Applied Sciences 12, no. 3 (January 25, 2022): 1256. http://dx.doi.org/10.3390/app12031256.

Full text
Abstract
This paper proposes an emotion recognition method for tweets containing emoticons using their emoticon image and language features. Some of the existing methods register emoticons and their facial expression categories in a dictionary and use them, while other methods recognize emoticon facial expressions based on the various elements of the emoticons. However, highly accurate emotion recognition cannot be performed unless the recognition is based on a combination of the features of sentences and emoticons. Therefore, we propose a model that recognizes emotions by extracting the shape features of emoticons from their image data and applying the feature vector input that combines the image features with features extracted from the text of the tweets. Based on evaluation experiments, the proposed method is confirmed to achieve high accuracy and shown to be more effective than methods that use text features only.
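The method's core step is feature-level fusion: an image feature vector extracted from the emoticon is concatenated with a sentence representation of the tweet before classification. The PyTorch sketch below only illustrates that fusion pattern under assumed dimensions; it is not the paper's actual architecture.

import torch
import torch.nn as nn

class EmoticonTextFusion(nn.Module):
    """Concatenate emoticon image features with sentence features, then classify."""
    def __init__(self, num_emotions=6, img_feat_dim=64, txt_feat_dim=300, hidden=128):
        super().__init__()
        # tiny CNN standing in for the emoticon-image feature extractor
        self.img_net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(2),
            nn.Flatten(), nn.Linear(16 * 2 * 2, img_feat_dim), nn.ReLU(),
        )
        # fusion head over the concatenated image + text feature vector
        self.classifier = nn.Sequential(
            nn.Linear(img_feat_dim + txt_feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_emotions),
        )

    def forward(self, emoticon_img, sentence_vec):
        img_feat = self.img_net(emoticon_img)                 # (batch, img_feat_dim)
        fused = torch.cat([img_feat, sentence_vec], dim=-1)   # (batch, img + txt dims)
        return self.classifier(fused)

model = EmoticonTextFusion()
imgs = torch.rand(4, 1, 32, 32)   # rendered emoticon images (grayscale placeholders)
sents = torch.rand(4, 300)        # precomputed sentence embeddings of the tweets
print(model(imgs, sents).shape)   # torch.Size([4, 6])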
4

Liu, Changxiu, S. Kirubakaran, and Alfred Daniel J. "Deep Learning Approach for Emotion Recognition Analysis in Text Streams." International Journal of Technology and Human Interaction 18, no. 2 (April 1, 2022): 1–21. http://dx.doi.org/10.4018/ijthi.313927.

Full text
Abstract
Social media sites employ various approaches to track feelings, including diagnosing neurological problems, such as fear, in people or assessing a population's public sentiment. One essential obstacle for automatic emotion recognition principles is variable with fluctuating limitations, language, and interpretation shifts. Therefore, in this paper, a deep learning-based emotion recognition (DL-EM) system has been proposed to describe the various relational effects in emotional groups. A soft classification method is suggested to quantify the tendency and allocate a message to each emotional class. A supervised framework for emotions in text streaming messages is developed and tested. Two of the major activities are offline teaching assignments and interactive emotion classification techniques. The first challenge offers templates in text responses to describe sentiment. The second activity includes implementing a two-stage framework to identify live broadcasts of text messages for dedicated emotion monitoring.
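The "soft classification" step, giving each message a tendency score for every emotional class instead of a single hard label, can be illustrated with any probabilistic text classifier. A minimal scikit-learn sketch, with invented toy data standing in for the offline training stage:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny illustrative training set; a real stream classifier would be trained offline
# on a much larger labelled corpus, as the abstract describes.
texts = ["I am so happy today", "this is terrifying", "what wonderful news",
         "I feel scared and alone", "that made me furious", "stop shouting at me"]
labels = ["joy", "fear", "joy", "fear", "anger", "anger"]

vec = TfidfVectorizer()
clf = LogisticRegression(max_iter=1000).fit(vec.fit_transform(texts), labels)

# Soft classification: every incoming message gets a tendency score per emotion class
msg = ["great news but I am still a bit scared"]
probs = clf.predict_proba(vec.transform(msg))[0]
for emotion, p in sorted(zip(clf.classes_, probs), key=lambda t: -t[1]):
    print(f"{emotion}: {p:.2f}")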
5

Hatem, Ahmed Samit, and Abbas M. Al-Bakry. "The Information Channels of Emotion Recognition: A Review." Webology 19, no. 1 (January 20, 2022): 927–41. http://dx.doi.org/10.14704/web/v19i1/web19064.

Full text
Abstract
Humans are emotional beings. When we express emotions, we frequently use several modalities, whether overtly (e.g., speech, facial expressions) or implicitly (e.g., body language, text). Emotion recognition has lately piqued the interest of many researchers, and various techniques have been studied. A review on emotion recognition is given in this article. The survey covers the single and multiple sources of data, or information channels, that may be utilized to identify emotions and includes a literature analysis of current studies published for each information channel, as well as the techniques employed and the findings obtained. Ultimately, some of the present emotion recognition problems and future work recommendations are mentioned.
6

Bharti, Santosh Kumar, S. Varadhaganapathy, Rajeev Kumar Gupta, Prashant Kumar Shukla, Mohamed Bouye, Simon Karanja Hingaa, and Amena Mahmoud. "Text-Based Emotion Recognition Using Deep Learning Approach." Computational Intelligence and Neuroscience 2022 (August 23, 2022): 1–8. http://dx.doi.org/10.1155/2022/2645381.

Full text
Abstract
Sentiment analysis is a method to identify people’s attitudes, sentiments, and emotions towards a given goal, such as people, activities, organizations, services, subjects, and products. Emotion detection is a subset of sentiment analysis as it predicts the unique emotion rather than just stating positive, negative, or neutral. In recent times, many researchers have already worked on speech and facial expressions for emotion recognition. However, emotion detection in text is a tedious task as cues are missing, unlike in speech, such as tonal stress, facial expression, pitch, etc. To identify emotions from text, several methods have been proposed in the past using natural language processing (NLP) techniques: the keyword approach, the lexicon-based approach, and the machine learning approach. However, there were some limitations with keyword- and lexicon-based approaches as they focus on semantic relations. In this article, we have proposed a hybrid (machine learning + deep learning) model to identify emotions in text. Convolutional neural network (CNN) and Bi-GRU were exploited as deep learning techniques. Support vector machine is used as a machine learning approach. The performance of the proposed approach is evaluated using a combination of three different types of datasets, namely, sentences, tweets, and dialogs, and it attains an accuracy of 80.11%.
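As a hedged sketch of how such a hybrid can be wired together, the snippet below uses a small CNN + Bi-GRU encoder to produce sentence features and fits a separate SVM on top of them. The layer sizes, random inputs, and untrained encoder are placeholders, not the configuration reported in the paper.

import torch
import torch.nn as nn
from sklearn.svm import SVC

class CnnBiGruEncoder(nn.Module):
    """CNN + Bi-GRU feature extractor; an SVM is trained on its output features."""
    def __init__(self, vocab_size=5000, emb_dim=100, hid_dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, 128, kernel_size=3, padding=1)
        self.gru = nn.GRU(128, hid_dim, batch_first=True, bidirectional=True)

    def forward(self, token_ids):
        x = self.emb(token_ids).transpose(1, 2)        # (batch, emb, seq)
        x = torch.relu(self.conv(x)).transpose(1, 2)   # (batch, seq, 128)
        _, h = self.gru(x)                             # (2, batch, hid)
        return torch.cat([h[0], h[1]], dim=-1)         # (batch, 2*hid)

encoder = CnnBiGruEncoder()
tokens = torch.randint(0, 5000, (32, 20))   # 32 dummy tokenized sentences, 20 tokens each
labels = torch.randint(0, 4, (32,))         # 4 dummy emotion classes
with torch.no_grad():
    feats = encoder(tokens).numpy()
svm = SVC(kernel="rbf").fit(feats, labels.numpy())   # SVM as the final classifier
print(svm.predict(feats[:5]))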
7

Quan, Changqin, and Fuji Ren. "Visualizing Emotions from Chinese Blogs by Textual Emotion Analysis and Recognition Techniques." International Journal of Information Technology & Decision Making 15, no. 01 (January 2016): 215–34. http://dx.doi.org/10.1142/s0219622014500710.

Full text
Abstract
The research on blog emotion analysis and recognition has become increasingly important in recent years. In this study, based on the Chinese blog emotion corpus (Ren-CECps), we analyze and compare blog emotion visualization from different text levels: word, sentence, and paragraph. Then, a blog emotion visualization system is designed for practical applications. Machine learning methods are applied for the implementation of blog emotion recognition at different textual levels. Based on the emotion recognition engine, the blog emotion visualization interface is designed to provide a more intuitive display of emotions in blogs, which can detect emotion for bloggers, and capture emotional change rapidly. In addition, we evaluated the performance of sentence emotion recognition by comparing five classification algorithms under different schemas, which demonstrates the effectiveness of the Complement Naive Bayes model for sentence emotion recognition. The system can recognize multi-label emotions in blogs, which provides a richer and more detailed emotion expression.
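Since the evaluation singles out the Complement Naive Bayes model for sentence emotion recognition, a minimal scikit-learn pipeline shows what that classifier looks like in practice; the English toy sentences below merely stand in for the Chinese Ren-CECps data.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import ComplementNB
from sklearn.pipeline import make_pipeline

# Toy sentences standing in for Ren-CECps blog sentences (the real corpus is Chinese
# and multi-labelled; this only illustrates the Complement Naive Bayes classifier).
sentences = ["today was a wonderful day", "I cannot stop crying",
             "this news makes me so angry", "what a pleasant surprise",
             "I miss her so much it hurts", "how dare they do this"]
emotions = ["joy", "sorrow", "anger", "joy", "sorrow", "anger"]

model = make_pipeline(CountVectorizer(), ComplementNB())
model.fit(sentences, emotions)
print(model.predict(["I am furious about the delay"]))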
8

Huang, Yuxin. "Research on Lovelorn Emotion Recognition Based on Ernie Tiny." Frontiers in Computing and Intelligent Systems 2, no. 2 (January 2, 2023): 66–69. http://dx.doi.org/10.54097/fcis.v2i2.4145.

Full text
Abstract
Topics related to sentiment classification and emotion recognition are an important part of the Natural Language Processing research field and can be used to analyze users' sentiment tendencies towards brands, understand the public's attitudes and opinions on public opinion events, and detect users' mental health, among others. Past research has usually been based on positive and negative emotions or multi-categorized emotions such as happiness, anger, and sadness, while there has been little research on the recognition of the specific emotion of being lovelorn. This study aims to identify the lovelorn emotion in text, using the pretrained deep learning model ERNIE Tiny trained on a dataset consisting of 5008 pieces of Chinese lovelorn emotion text crawled from the social media platform Weibo and 4998 pieces of ordinary text extracted from an existing available dataset. Finally, it was shown that ERNIE Tiny performs well in classifying whether a text contains lovelorn emotion or not, with an F1 score of 0.941929, precision score of 0.942300, and recall score of 0.941928 obtained on the test set.
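Fine-tuning a compact pretrained encoder for this binary (lovelorn vs. ordinary text) task follows the standard sequence-classification recipe. The Hugging Face sketch below is an assumption-laden illustration: the checkpoint name is a placeholder for whichever ERNIE Tiny weights are actually available, and the two example sentences are invented.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder checkpoint name: substitute the ERNIE Tiny weights you actually use.
CHECKPOINT = "nghuyong/ernie-3.0-nano-zh"

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSequenceClassification.from_pretrained(CHECKPOINT, num_labels=2)

texts = ["我们分手了，心都碎了", "今天天气不错，去公园散步"]   # invented lovelorn vs. ordinary text
labels = torch.tensor([1, 0])

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
out = model(**batch, labels=labels)   # cross-entropy loss for the binary task
out.loss.backward()                   # one illustrative training step
optimizer.step()
print(float(out.loss))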
9

Zhang, Ziheng. "Review of text emotion detection." Highlights in Science, Engineering and Technology 12 (August 26, 2022): 213–21. http://dx.doi.org/10.54097/hset.v12i.1456.

Full text
Abstract
Emotion is one of the essential characteristics of being human. When writing essays or reports, people will add their own emotions. Text sentiment detection can detect the leading emotional tone of a text. Text emotion detection and recognition is a new research field related to sentiment analysis. Emotion analysis detects and identifies emotion types, such as anger, happiness, or sadness, through textual expression. It is a subdomain of NLP. For some applications, the technology could help large companies' Chinese and Russian data analysts gauge public opinion or conduct nuanced market research and understand product reputation. At present, text emotion is one of the most studied fields in the literature. Still, it is also tricky because it is related to deep neural networks and requires the application of psychological knowledge. In this article, we will discuss the concept of text detection and introduce and analyze the main methods of text emotion detection. In addition, this paper will also discuss the advantages and weaknesses of this technology and some future research directions and problems to be solved.
10

Su, Sheng-Hsiung, Hao-Chiang Koong Lin, Cheng-Hung Wang, and Zu-Ching Huang. "Multi-Modal Affective Computing Technology Design the Interaction between Computers and Human of Intelligent Tutoring Systems." International Journal of Online Pedagogy and Course Design 6, no. 1 (January 2016): 13–28. http://dx.doi.org/10.4018/ijopcd.2016010102.

Full text
Abstract
In this paper, the authors use emotion recognition in two ways: facial expression recognition and emotion recognition from text. This dual-mode operation not only strengthens recognition but also increases the types of emotion recognition available for handling the learning situation smoothly. Facial expressions are identified through trained image processing, while emotion from text is identified by emotional keywords, syntax, semantics, and logical calculus. The system identifies learners' emotions and learning situations through analysis, chooses the appropriate instructional strategies and curriculum content, and uses agents to communicate between user and system, so that learners can learn well. This study uses triangulated evaluation methods: observation, questionnaires, and interviews. The experiment groups subjects by their level of art awareness (art and non-art) and compares three settings (traditional teaching, the affective tutoring system, and a course website without emotional factors) to obtain, analyze, and evaluate the data.
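The text channel described above starts from emotional keywords before adding syntax, semantics, and logical calculus. The simplest version of that first step is a plain lexicon lookup; the keyword lists in this sketch are illustrative and are not the system's dictionary.

# Minimal keyword-lexicon sketch; the lexicon entries are illustrative only.
EMOTION_KEYWORDS = {
    "happy":   {"happy", "glad", "delighted", "great"},
    "sad":     {"sad", "unhappy", "depressed", "cry"},
    "angry":   {"angry", "furious", "annoyed", "hate"},
    "fearful": {"afraid", "scared", "worried", "terrified"},
}

def keyword_emotion(text: str) -> str:
    """Return the emotion whose keywords occur most often in the text."""
    tokens = text.lower().split()
    counts = {emo: sum(t in words for t in tokens)
              for emo, words in EMOTION_KEYWORDS.items()}
    best, hits = max(counts.items(), key=lambda kv: kv[1])
    return best if hits > 0 else "neutral"

print(keyword_emotion("I am so glad and delighted with this lesson"))  # happy
print(keyword_emotion("The exam results made me cry"))                 # sad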
More sources

Theses on the topic "Text emotion recognition"

1

Zhu, Winstead Xingran. "Hotspot Detection for Automatic Podcast Trailer Generation". Thesis, Uppsala universitet, Institutionen för lingvistik och filologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-444887.

Full text
Abstract
With podcasts being a fast growing audio-only form of media, an effective way of promoting different podcast shows becomes more and more vital to all the stakeholders concerned, including the podcast creators, the podcast streaming platforms, and the podcast listeners. This thesis investigates the relatively little studied topic of automatic podcast trailer generation, with the purpose of enhancing the overall visibility and publicity of different podcast contents and generating more user engagement in podcast listening. This thesis takes a hotspot-based approach, by specifically defining the vague concept of “hotspot” and designing different appropriate methods for hotspot detection. Different methods are analyzed and compared, and the best methods are selected. The selected methods are then used to construct an automatic podcast trailer generation system, which consists of four major components and one schema to coordinate the components. The system can take a random podcast episode audio as input and generate an around 1 minute long trailer for it. This thesis also proposes two human-based podcast trailer evaluation approaches, and the evaluation results show that the proposed system outperforms the baseline with a large margin and achieves promising results in terms of both aesthetics and functionality.
2

Fell, Michael. "Traitement automatique des langues pour la recherche d'information musicale : analyse profonde de la structure et du contenu des paroles de chansons." Thesis, Université Côte d'Azur, 2020. http://www.theses.fr/2020COAZ4017.

Full text
Abstract
Applications in Music Information Retrieval and Computational Musicology have traditionally relied on features extracted from the music content in the form of audio, but mostly ignored the song lyrics. More recently, improvements in fields such as music recommendation have been made by taking into account external metadata related to the song. In this thesis, we argue that extracting knowledge from the song lyrics is the next step to improve the user’s experience when interacting with music. To extract knowledge from vast amounts of song lyrics, we show for different textual aspects (their structure, content and perception) how Natural Language Processing methods can be adapted and successfully applied to lyrics. For the structural aspect of lyrics, we derive a structural description of it by introducing a model that efficiently segments the lyrics into its characteristic parts (e.g. intro, verse, chorus). In a second stage, we represent the content of lyrics by means of summarizing the lyrics in a way that respects the characteristic lyrics structure. Finally, on the perception of lyrics we investigate the problem of detecting explicit content in a song text. This task proves to be very hard and we show that the difficulty partially arises from the subjective nature of perceiving lyrics in one way or another depending on the context. Furthermore, we touch on another problem of lyrics perception by presenting our preliminary results on Emotion Recognition. As a result, during the course of this thesis we have created the annotated WASABI Song Corpus, a dataset of two million songs with NLP lyrics annotations on various levels.
3

Howe, J. C. "Emotion recognition problems after brain injury: development of the Brief Emotion Recognition Test (BERT)." Thesis, University of the West of England, Bristol, 2018. http://eprints.uwe.ac.uk/34056/.

Full text
Abstract
Difficulty recognising emotion can have a major impact on psychosocial outcome following acquired brain injury. The need to have an easily administered screening test which enables clinicians to quickly assess this ability has been identified. In this thesis, the development of the Brief Emotion Recognition Test (BERT) is described. It is anticipated that the BERT will provide a reliable and valid screening measure for emotion recognition problems after acquired brain injury. The test consists of 14 short video clips of actors portraying positive, negative and neutral emotions. After watching each video clip viewers are asked to choose which emotion was being portrayed from a list of six emotions (happy, sad, surprise, anger, fear, disgust) and neutral. Half of the clips include facial expressions only (no phrase) and the other half include facial expressions and vocal cues in the form of neutral carrier phrases (with phrase). The performance of 92 neurologically healthy adults was compared with that of 20 adults who had sustained moderate-to-severe brain injury. Validity and reliability of the test were assessed. Test-retest reliability was good. The BERT has good discriminant and concurrent reliability. There was a statistically significant difference in performance (p < 0.05) between the groups. The neurologically healthy group were more accurate regarding five clips in the 'no phrase' condition; five of the seven in the 'with phrase' trial; and in the total overall score. Overall findings for this pilot study suggest the BERT provides a valid, reliable means of rapidly screening for emotion recognition difficulties after brain injury.
4

Červenec, Radek. "Rozpoznávání emocí v česky psaných textech." Master's thesis, Vysoké učení technické v Brně, Fakulta elektrotechniky a komunikačních technologií, 2011. http://www.nusl.cz/ntk/nusl-218962.

Full text
Abstract
With advances in information and communication technologies over the past few years, the amount of information stored in the form of electronic text documents has been rapidly growing. Since the human abilities to effectively process and analyze large amounts of information are limited, there is an increasing demand for tools enabling to automatically analyze these documents and benefit from their emotional content. These kinds of systems have extensive applications. The purpose of this work is to design and implement a system for identifying expression of emotions in Czech texts. The proposed system is based mainly on machine learning methods and therefore design and creation of a training set is described as well. The training set is eventually utilized to create a model of classifier using the SVM. For the purpose of improving classification results, additional components were integrated into the system, such as lexical database, lemmatizer or derived keyword dictionary. The thesis also presents results of text documents classification into defined emotion classes and evaluates various approaches to categorization.
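The pipeline the abstract outlines (text features feeding an SVM model, supported by a lexical database, lemmatizer, and keyword dictionary) maps naturally onto a scikit-learn pipeline. The sketch below is a loose stand-in: character n-gram TF-IDF replaces the thesis's Czech-specific preprocessing, and the training sentences are invented.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import Pipeline

# In the thesis a Czech lemmatizer and a keyword dictionary feed the features;
# here plain character n-gram TF-IDF stands in for that preprocessing.
train_texts = ["mam obrovskou radost", "je mi strasne smutno",
               "to me opravdu rozzlobilo", "dneska je krasny den"]
train_emotions = ["joy", "sadness", "anger", "joy"]

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))),  # robust for morphology-rich Czech
    ("svm", LinearSVC()),
])
pipeline.fit(train_texts, train_emotions)
print(pipeline.predict(["mam velkou radost"]))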
5

Monzo Sánchez, Carlos Manuel. "Modelado de la cualidad de la voz para la síntesis del habla expresiva." Doctoral thesis, Universitat Ramon Llull, 2010. http://hdl.handle.net/10803/9145.

Full text
Abstract
This thesis is conducted on the existing working framework in the Grup de Recerca en Tecnologies Mèdia (GTM) research group of the Enginyeria i Arquitectura La Salle, with the aim of providing the man-machine interaction with more naturalness. To do this, we are based on the limitations of the technology used up to now, detecting the improvement points where we could contribute solutions. Given that the speech naturalness is closely linked with the expressivity communication, these improvement points are focused on the ability of working with emotions or expressive speech styles in general.
The final goal of this thesis is the expressive speech styles generation in the field of Text-to-Speech (TTS) systems aimed at Expressive Speech Synthesis (ESS), with the possibility of communicating an oral message with a certain expressivity that the listener will be able to correctly perceive and interpret. Nevertheless, this goal involves different intermediate aims: to know the existing parameterization options, to understand each of the parameters, to find out the existing relations among them and the expressive speech styles and, finally, to carry out the expressive speech synthesis. All things considered, the synthesis process involves a previous work in emotion recognition, which could be a complete research field, since it shows the feasibility of using the selected parameters during their discrimination and provides with the necessary knowledge for the modelling that can be used during the synthesis process.
The search for increased naturalness has implied a better characterization of emotional or expressive speech, so we have researched parameterizations that could perform this task. These are the Voice Quality (VoQ) parameters, whose main feature is that they are able to characterize speech individually, identifying each factor that makes it unique. The potential benefits that this kind of parameterization can bring to natural interaction are twofold: the recognition and the synthesis of expressive speech styles. The VoQ parameterization proposal is not trying to replace prosody, but to work together with it to improve the results obtained so far.
Once the parameters selection is conducted, the VoQ modelling is raised (i. e. analysis and modification methodology), so each of them can be extracted from the voice signal and later on modified during the synthesis. Also, variations are proposed for the involved and traditionally used parameters, adjusting their definition to the expressive speech context. From here, we work on the existing relations with the expressive speech styles and, eventually we show the transformation methodology for these ones, by means of the modification of VoQ and prosody, for the ESS in a TTS system.
6

Križan, Viliam. "Analýza sociálních sítí využitím metod rozpoznání vzoru." Master's thesis, Vysoké učení technické v Brně, Fakulta elektrotechniky a komunikačních technologií, 2015. http://www.nusl.cz/ntk/nusl-220399.

Full text
Abstract
The thesis deals with emotion recognition from text on social networks. It describes current feature extraction methods and the lexicons, corpora, and classifiers in use. Emotions were recognized with a classifier that was not trained on manually annotated data from the microblogging network Twitter. An advantage of using Twitter was the geographic scoping of the data, which makes it possible to track changes in the emotions of the population in different cities. The first classification approach was a baseline algorithm that used a simple lexicon. To improve classification, a more complex SVM classifier was used in the second step. The SVM classifiers and the feature extraction and selection methods came from the available Python library Scikit. The data for training the classifier were collected from the USA with the help of a purpose-built application. The classifier was trained on data labelled at collection time, without manual annotation. Two different SVM implementations were used. The resulting classified emotions, in different cities and on different days, were displayed as coloured markers on a map.
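The key point, that training labels come from the collected tweets themselves rather than from manual annotation, is a form of distant supervision. A minimal sketch of that labelling-plus-SVM step, with invented hashtag rules and tweets rather than the thesis's actual collection pipeline:

import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Distant supervision: label each tweet from the emotion hashtag it carries, then strip it.
HASHTAG_TO_EMOTION = {"#joy": "joy", "#happy": "joy", "#sad": "sadness",
                      "#angry": "anger", "#fear": "fear"}

raw_tweets = ["finally got the job #happy", "missed my flight again #angry",
              "thunderstorms all night #fear", "she is gone forever #sad"]

texts, labels = [], []
for tweet in raw_tweets:
    for tag, emotion in HASHTAG_TO_EMOTION.items():
        if tag in tweet:
            texts.append(re.sub(re.escape(tag), "", tweet).strip())
            labels.append(emotion)
            break

vec = TfidfVectorizer()
clf = LinearSVC().fit(vec.fit_transform(texts), labels)
print(clf.predict(vec.transform(["stuck in traffic and fuming"])))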
7

Sun, Luning. "Using the Ekman 60 faces test to detect emotion recognition deficit in brain injury patients." Thesis, University of Cambridge, 2015. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.708553.

Full text
8

Guillery, Murielle. "Rôle du contrôle cognitif dans les modulations du langage et des émotions : l'exemple de la schizophrénie et des troubles bipolaires." Thesis, Rennes 2, 2017. http://www.theses.fr/2017REN20070.

Full text
Abstract
The present study explores the modulations of emotional control in the interactions of language and emotions, in 23 subjects with schizophrenia in a stabilized state and 21 subjects with bipolar disorder in the euthymic phase. The interactions were considered, on the one hand, in the direction of emotions via language, with an experimental task of conditioned emotional Stroop, and then, in contrast, in the direction of language via emotions, with an experimental lexical decision task using orthographic neighbors with emotional connotation. The results highlight a positive emotional hyper-reactivity in bipolar disorders and disorders of emotional cognitive control in schizophrenia. These two diseases present overlaps in their cognitive alterations, which do not yet allow cognitive markers to be distinguished. However, the results of this study indicate that the processes involved in the disturbed processing of words with emotional connotation are of different natures between these two pathologies. Consequently, the present study could prove useful for differentiating schizophrenia from bipolar disorders.
9

Graziani, Lisa. "Constrained Affective Computing." Doctoral thesis, 2021. http://hdl.handle.net/2158/1238365.

Full text
Abstract
Emotions have an important role in daily life, influence decision-making, human interaction, perception, attention, self-regulation. They have been studied since ancient times, philosophers have been always interested in analyzing human nature and bodily sensations, psychologists in studying the physical and psychological changes that influence thought and behavior. In the early 1970s, the psychologist Paul Ekman defined six universal emotions, namely anger, disgust, fear, happiness, sadness, and surprise. This categorization has been taken into account for several studies. In the late 1990s, Affective Computing was born, a new discipline spanning between computer science, psychology, and cognitive science. Affective Computing aims at developing intelligent systems able to recognize, interpret, process, and simulate human emotions. It has a wide range of applications, as healthcare, education, games, entertainment, marketing, automated driver assistance, robotics, and many others. Emotions can be detected from different channels, such as facial expressions, body gestures, speech, text, physiological signals. In order to enrich human-machine interaction, the machine should be able to perform tasks similar to humans, such as recognizing facial expressions, detecting emotions from what it is said (text) and from how it said (audio), and it should be able also to express its own emotions. With the great success of deep learning, deep architectures have been employed also for many Affective Computing tasks. In this thesis, thinking about an emotional and intelligent agent, a detailed study of emotions has been carried out using deep learning techniques for various tasks, such as facial expression recognition, text and speech emotion recognition, and facial expression generation. Nevertheless, deep learning methods to properly perform in general require a great computing power and large collections of labeled data. To overcome these limitations we exploit the framework of Learning from Constraints, which needs few supervised data and enables to exploit a great quantity of unsupervised data, which are easier to collect. Furthermore, such approach integrates low-level tasks processing sensorial data and reasoning using higher-level semantic knowledge, so allowing machines to behave in an intelligent way in real complex environments. These conditions are reached requiring the satisfaction of a set of constraints during the learning process. In this way a task is translated into a constrained satisfaction problem. In our case, considering that knowledge could not be always perfect, the constraints are softly injected into the learning problem, so allowing some slight violations for some inputs. In this work different constraints have been employed in order to exploit knowledge that we have on the problem. In facial expression recognition, a predictor that detects emotions from the full face is enforced by three coherence constraints. One exploits the temporal sequence of the expression, another relates different face sub-parts (eyes, nose, mouth, eyebrows, jaw), and the last relates two feature representations. In text emotion recognition First Order Logic (FOL)-based constraints are used to exploit a great quantity of unlabeled data and data labeled with Facebook reactions. 
In facial expression generation cyclic-consistency FOL constraints are employed to translate a neutral face into a specific expression, and other logical rules are used to decide what emotion to generate putting together inputs coming from different channels. Finally, some logical constraints are proposed to develop a system that recognizes emotion from speech, and we built an Italian dataset that might be helpful to implement such model.
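The "softly injected" constraints can be pictured as an extra penalty added to the supervised loss and evaluated on plentiful unlabelled data. The PyTorch sketch below encodes one invented rule ("happy" and "sad" should not both hold for the same input) with a product t-norm; it illustrates the mechanism only, not the thesis's actual constraints or tasks.

import torch
import torch.nn as nn

# Toy "learning from constraints" loop: a supervised loss on few labelled examples
# plus a soft logic penalty evaluated on unlabelled examples. All dimensions,
# data, and the rule itself are illustrative assumptions.
torch.manual_seed(0)
model = nn.Linear(50, 4)                 # 4 emotions: happy, sad, angry, fearful
HAPPY, SAD = 0, 1

x_lab = torch.rand(8, 50)
y_lab = torch.randint(0, 4, (8,))
x_unlab = torch.rand(64, 50)             # plentiful unlabelled data

ce = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for step in range(100):
    opt.zero_grad()
    sup_loss = ce(model(x_lab), y_lab)
    probs = torch.sigmoid(model(x_unlab))             # truth degrees of each emotion
    # product t-norm translation of "not (happy and sad)", large when both are high
    constraint_loss = (probs[:, HAPPY] * probs[:, SAD]).mean()
    loss = sup_loss + 0.5 * constraint_loss           # soft (weighted) constraint injection
    loss.backward()
    opt.step()
print(float(loss))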
10

Mountain, Mary Ann Forbes. "The Victoria emotion recognition test." Thesis, 1992. https://dspace.library.uvic.ca//handle/1828/9620.

Full text
Abstract
Emotional disorders are common in people with brain damage. It is often difficult to determine whether such disorders are a result of a deficit in recognition, expression, or regulation of emotion due to brain damage per se, or if they are reactive to other functional limitations. The Victoria Emotion Recognition Test (VERT) was developed to provide a standardized tool for the assessment of deficits in the recognition of facial and tonal affect. The VERT was constructed on the basis of neurophysiological and behavioural theories of emotion and neuropsychological theories of agnosia. The VERT consists of three subtests in which four emotions (angry, sad, happy and afraid) are presented at three levels of intensity. The visual subtest presents photographs of faces; the auditory subtest, audiotaped voice clips; and the auditory/visual subtest, both photographs and voice clips. Psychometric results of the standardization studies suggest that the VERT measures an aspect of the recognition of facial and tonal emotion that is independent of more basic skills in face recognition and auditory nonverbal memory. The theoretical construct of recognition of emotion was investigated within the framework of an "affective agnosia". The results suggest that a broader concept of agnosia is necessary in order to include failures in recognition of emotion within this framework.

Books on the topic "Text emotion recognition"

1

Emde, Robert N., Joy D. Osofsky, and Perry M. Butterfield, eds. The IFEEL Pictures: A New Instrument for Interpreting Emotions. Madison, Conn.: International Universities Press, 1993.

Search full text

2

Rueschemeyer, Shirley-Ann, and M. Gareth Gaskell, eds. The Oxford Handbook of Psycholinguistics. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780198786825.001.0001.

Full text
Abstract
This handbook reviews the current state of the art in the field of psycholinguistics. Part I deals with language comprehension at the sublexical, lexical, and sentence and discourse levels. It explores concepts of speech representation and the search for universal speech segmentation mechanisms against a background of linguistic diversity and compares first language with second language segmentation. It also discusses visual word recognition, lexico-semantics, the different forms of lexical ambiguity, sentence comprehension, text comprehension, and language in deaf populations. Part II focuses on language production, with chapters covering topics such as word production and related processes based on evidence from aphasia, the major debates surrounding grammatical encoding. Part III considers various aspects of interaction and communication, including the role of gesture in language processing, approaches to the study of perspective-taking, and the interrelationships between language comprehension, emotion, and sociality. Part IV is concerned with language development and evolution, focusing on topics ranging from the development of prosodic phonology, the neurobiology of artificial grammar learning, and developmental dyslexia. The book concludes with Part V, which looks at methodological advances in psycholinguistic research, such as the use of intracranial electrophysiology in the area of language processing.
3

Emde, Robert N., Joy D. Osofsky, and Perry M. Butterfield, eds. The IFEEL Pictures: A New Instrument for Interpreting Emotions (Clinical Infant Reports). International Universities Press, 1993.

Search full text

4

Vanmai, Jean. Jean Vanmai’s Chân Đăng: The Tonkinese of Caledonia in the colonial era. Translated by Tess Do and Kathryn Lay-Chenchabi. University of Technology, Sydney, 2022. http://dx.doi.org/10.5130/aai.

Full text
Abstract
Jean Vanmai’s Chân Đăng The Tonkinese of Caledonia in the colonial era is a rare insider’s account of the life experiences of Chân Đăng, the Vietnamese indentured workers who were brought from Tonkin to work in the New Caledonian nickel mines in the 1930s and 1940s, when both Indochina and New Caledonia were French colonies. Narrated from the unique perspective of a descendant of Chân Đăng, the novel offers a deep understanding of how Vietnamese migration, shaped by French colonialism and the indenture system, led to the implantation of the Vietnamese community in New Caledonia, in spite of the massive repatriation of the workers and their families to Vietnam in the 1960s. Through his writing which blends his own family story with the rich oral testimonies of his compatriots, Jean Vanmai, a passionate advocate for the recognition of the part played by the Chân Đăng in the New Caledonian national history, has succeeded in giving these often faceless and powerless ‘coolies’ a strong collective voice. The translation into English of that voice was long overdue. Only accessible until now to French speakers, this English version opens up the exceptional account of the personal and emotional complexities of the Chân Đăng’s experience to a global readership. The English version not only advances knowledge of the history of indentured labour and colonialism in the Asia-Pacific, thus offering Anglophone historians and interested readers a new understanding of the processes through which histories and memories travel and translate across national, oceanic, and linguistic borders, it also constitutes an invaluable historical resource for Anglophone Vietnamese diasporic communities. One of the significant revisions in this English version is the restitution of the diacritical marks to all the Vietnamese names in the novel. Rather than a simple correction of the printing of Vietnamese diacritics which was unavailable at the time of publication of the origin text, it lends greater authenticity to the story for the Anglophone reader and symbolically restores their full identity to the Chân Đăng protagonists, who had become mere matriculation numbers under the colonial indenture system. The critical introduction by Tess Do and Kathryn Lay-Chenchabi is a richly documented text that contextualizes the novel for the Anglophone reader. The photographs and official documents, carefully selected from a wealth of sources, including the National Archives of both New Caledonia and New Zealand, the private community collections and Jean Vanmai’s family photo albums, all contribute to an illuminating and informative visual overview of the Chân Đăng’s working and living conditions in New Caledonia. This emotive illustration of the past also functions as an important reference for the common future shared by all Caledonians, in that it conveys to the reader the long-lasting imprint left by the Vietnamese community on New Caledonia’s economic and cultural scene since the Chân Đăng first migrated to this country more than two centuries ago.

Book chapters on the topic "Text emotion recognition"

1

Nayak, Biswajit, and Manoj Kumar Pradhan. "Text-Dependent Versus Text-Independent Speech Emotion Recognition." In Advances in Intelligent Systems and Computing, 153–61. New Delhi: Springer India, 2015. http://dx.doi.org/10.1007/978-81-322-2517-1_16.

Full text

2

Almahdawi, Amer, and William John Teahan. "Emotion Recognition in Text Using PPM." In Artificial Intelligence XXXIV, 149–55. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-71078-5_13.

Full text

3

Huang, Xiaoxi, Yun Yang, and Changle Zhou. "Emotional Metaphors for Emotion Recognition in Chinese Text." In Affective Computing and Intelligent Interaction, 319–25. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11573548_41.

Full text

4

Gajšek, Rok, Vitomir Štruc, and France Mihelič. "Multimodal Emotion Recognition Based on the Decoupling of Emotion and Speaker Information." In Text, Speech and Dialogue, 275–82. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15760-8_35.

Full text

5

Popková, Anna, Filip Povolný, Pavel Matějka, Ondřej Glembek, František Grézl, and Jan “Honza” Černocký. "Investigation of Bottle-Neck Features for Emotion Recognition." In Text, Speech, and Dialogue, 426–34. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-45510-5_49.

Full text

6

Kostoulas, Theodoros, Todor Ganchev, Alexandros Lazaridis, and Nikos Fakotakis. "Enhancing Emotion Recognition from Speech through Feature Selection." In Text, Speech and Dialogue, 338–44. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15760-8_43.

Full text

7

Chauhan, Rahul, Jainath Yadav, S. G. Koolagudi, and K. Sreenivasa Rao. "Text Independent Emotion Recognition Using Spectral Features." In Communications in Computer and Information Science, 359–70. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-22606-9_37.

Full text

8

Ho, Vong Anh, Duong Huynh-Cong Nguyen, Danh Hoang Nguyen, Linh Thi-Van Pham, Duc-Vu Nguyen, Kiet Van Nguyen, and Ngan Luu-Thuy Nguyen. "Emotion Recognition for Vietnamese Social Media Text." In Communications in Computer and Information Science, 319–33. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-6168-9_27.

Full text

9

Heracleous, Panikos, Yasser Mohammad, Keiji Yasuda, and Akio Yoneyama. "Speech Emotion Recognition Using Spontaneous Children’s Corpus." In Computational Linguistics and Intelligent Text Processing, 321–33. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-24340-0_24.

Full text

10

Moore, Johanna D., Leimin Tian, and Catherine Lai. "Word-Level Emotion Recognition Using High-Level Features." In Computational Linguistics and Intelligent Text Processing, 17–31. Berlin, Heidelberg: Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-642-54903-8_2.

Full text

Conference papers on the topic "Text emotion recognition"

1

Liu, Taiao, Yajun Du, and Qiaoyu Zhou. "Text Emotion Recognition Using GRU Neural Network with Attention Mechanism and Emoticon Emotions." In RICAI 2020: 2020 2nd International Conference on Robotics, Intelligent Control and Artificial Intelligence. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3438872.3439094.

Full text

2

Park, Seo-Hui, Byung-Chull Bae, and Yun-Gyung Cheong. "Emotion Recognition from Text Stories Using an Emotion Embedding Model." In 2020 IEEE International Conference on Big Data and Smart Computing (BigComp). IEEE, 2020. http://dx.doi.org/10.1109/bigcomp48618.2020.00014.

Full text

3

Islam, Juyana, Sadman Ahmed, M. A. H. Akhand, and N. Siddique. "Improved Emotion Recognition from Microblog Focusing on Both Emoticon and Text." In 2020 IEEE Region 10 Symposium (TENSYMP). IEEE, 2020. http://dx.doi.org/10.1109/tensymp50017.2020.9230725.

Full text

4

D, Chris Jonathan, and Sujitha Juliet. "Text-based Emotion Recognition using Sentiment Analysis." In 2022 International Conference on Applied Artificial Intelligence and Computing (ICAAIC). IEEE, 2022. http://dx.doi.org/10.1109/icaaic53929.2022.9793304.

Full text

5

Schuller, Bjorn, Bogdan Vlasenko, Dejan Arsic, Gerhard Rigoll, and Andreas Wendemuth. "Combining speech recognition and acoustic word emotion models for robust text-independent emotion recognition." In 2008 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2008. http://dx.doi.org/10.1109/icme.2008.4607689.

Full text

6

Su, Ming-Hsiang, Chung-Hsien Wu, Kun-Yi Huang, and Qian-Bei Hong. "LSTM-based Text Emotion Recognition Using Semantic and Emotional Word Vectors." In 2018 First Asian Conference on Affective Computing and Intelligent Interaction (ACII Asia). IEEE, 2018. http://dx.doi.org/10.1109/aciiasia.2018.8470378.

Full text

7

Calefato, Fabio, Filippo Lanubile, and Nicole Novielli. "EmoTxt: A toolkit for emotion recognition from text." In 2017 Seventh International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW). IEEE, 2017. http://dx.doi.org/10.1109/aciiw.2017.8272591.

Full text

8

Yoon, Seunghyun, Seokhyun Byun, and Kyomin Jung. "Multimodal Speech Emotion Recognition Using Audio and Text." In 2018 IEEE Spoken Language Technology Workshop (SLT). IEEE, 2018. http://dx.doi.org/10.1109/slt.2018.8639583.

Full text

9

Wu, Ye, and Fuji Ren. "Improving emotion recognition from text with fractionation training." In 2010 International Conference on Natural Language Processing and Knowledge Engineering (NLP-KE). IEEE, 2010. http://dx.doi.org/10.1109/nlpke.2010.5587800.

Full text

10

Ito, Manabu, and Konstantin Markov. "Sentence embedding based emotion recognition from text data." In RACS '22: International Conference on Research in Adaptive and Convergent Systems. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3538641.3561488.

Full text