Academic literature on the topic 'Text emotion recognition'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Text emotion recognition.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Text emotion recognition"
Sahoo, Sipra. "Emotion Recognition from Text." International Journal for Research in Applied Science and Engineering Technology 6, no. 3 (March 31, 2018): 237–43. http://dx.doi.org/10.22214/ijraset.2018.3038.
Deng, Jiawen, and Fuji Ren. "Hierarchical Network with Label Embedding for Contextual Emotion Recognition." Research 2021 (January 6, 2021): 1–9. http://dx.doi.org/10.34133/2021/3067943.
Fujisawa, Akira, Kazuyuki Matsumoto, Minoru Yoshida, and Kenji Kita. "Emotion Estimation Method Based on Emoticon Image Features and Distributed Representations of Sentences." Applied Sciences 12, no. 3 (January 25, 2022): 1256. http://dx.doi.org/10.3390/app12031256.
Liu, Changxiu, S. Kirubakaran, and Alfred Daniel J. "Deep Learning Approach for Emotion Recognition Analysis in Text Streams." International Journal of Technology and Human Interaction 18, no. 2 (April 1, 2022): 1–21. http://dx.doi.org/10.4018/ijthi.313927.
Hatem, Ahmed Samit, and Abbas M. Al-Bakry. "The Information Channels of Emotion Recognition: A Review." Webology 19, no. 1 (January 20, 2022): 927–41. http://dx.doi.org/10.14704/web/v19i1/web19064.
Bharti, Santosh Kumar, S. Varadhaganapathy, Rajeev Kumar Gupta, Prashant Kumar Shukla, Mohamed Bouye, Simon Karanja Hingaa, and Amena Mahmoud. "Text-Based Emotion Recognition Using Deep Learning Approach." Computational Intelligence and Neuroscience 2022 (August 23, 2022): 1–8. http://dx.doi.org/10.1155/2022/2645381.
Quan, Changqin, and Fuji Ren. "Visualizing Emotions from Chinese Blogs by Textual Emotion Analysis and Recognition Techniques." International Journal of Information Technology & Decision Making 15, no. 01 (January 2016): 215–34. http://dx.doi.org/10.1142/s0219622014500710.
Huang, Yuxin. "Research on Lovelorn Emotion Recognition Based on Ernie Tiny." Frontiers in Computing and Intelligent Systems 2, no. 2 (January 2, 2023): 66–69. http://dx.doi.org/10.54097/fcis.v2i2.4145.
Zhang, Ziheng. "Review of text emotion detection." Highlights in Science, Engineering and Technology 12 (August 26, 2022): 213–21. http://dx.doi.org/10.54097/hset.v12i.1456.
Su, Sheng-Hsiung, Hao-Chiang Koong Lin, Cheng-Hung Wang, and Zu-Ching Huang. "Multi-Modal Affective Computing Technology Design the Interaction between Computers and Human of Intelligent Tutoring Systems." International Journal of Online Pedagogy and Course Design 6, no. 1 (January 2016): 13–28. http://dx.doi.org/10.4018/ijopcd.2016010102.
Dissertations / Theses on the topic "Text emotion recognition"
Zhu, Winstead Xingran. "Hotspot Detection for Automatic Podcast Trailer Generation." Thesis, Uppsala universitet, Institutionen för lingvistik och filologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-444887.
Full textFell, Michael. "Traitement automatique des langues pour la recherche d'information musicale : analyse profonde de la structure et du contenu des paroles de chansons." Thesis, Université Côte d'Azur, 2020. http://www.theses.fr/2020COAZ4017.
Applications in Music Information Retrieval and Computational Musicology have traditionally relied on features extracted from the music content in the form of audio, but mostly ignored the song lyrics. More recently, improvements in fields such as music recommendation have been made by taking into account external metadata related to the song. In this thesis, we argue that extracting knowledge from the song lyrics is the next step to improve the user's experience when interacting with music. To extract knowledge from vast amounts of song lyrics, we show for different textual aspects (their structure, content and perception) how Natural Language Processing methods can be adapted and successfully applied to lyrics. For the structural aspect of lyrics, we derive a structural description of it by introducing a model that efficiently segments the lyrics into its characteristic parts (e.g. intro, verse, chorus). In a second stage, we represent the content of lyrics by means of summarizing the lyrics in a way that respects the characteristic lyrics structure. Finally, on the perception of lyrics we investigate the problem of detecting explicit content in a song text. This task proves to be very hard and we show that the difficulty partially arises from the subjective nature of perceiving lyrics in one way or another depending on the context. Furthermore, we touch on another problem of lyrics perception by presenting our preliminary results on Emotion Recognition. As a result, during the course of this thesis we have created the annotated WASABI Song Corpus, a dataset of two million songs with NLP lyrics annotations on various levels.
Howe, J. C. "Emotion recognition problems after brain injury: development of the Brief Emotion Recognition Test (BERT)." Thesis, University of the West of England, Bristol, 2018. http://eprints.uwe.ac.uk/34056/.
Červenec, Radek. "Rozpoznávání emocí v česky psaných textech." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2011. http://www.nusl.cz/ntk/nusl-218962.
Full textMonzo, Sánchez Carlos Manuel. "Modelado de la cualidad de la voz para la síntesis del habla expresiva." Doctoral thesis, Universitat Ramon Llull, 2010. http://hdl.handle.net/10803/9145.
This thesis is conducted within the existing working framework of the Grup de Recerca en Tecnologies Mèdia (GTM) research group at Enginyeria i Arquitectura La Salle, with the aim of making human-machine interaction more natural. To do this, we build on the limitations of the technology used up to now, detecting points of improvement where solutions can be contributed. Given that the naturalness of speech is closely linked to the expressivity it communicates, these improvement points focus on the ability to work with emotions, or expressive speech styles in general.
The final goal of this thesis is the generation of expressive speech styles in the field of Text-to-Speech (TTS) systems aimed at Expressive Speech Synthesis (ESS), making it possible to communicate an oral message with a certain expressivity that the listener can correctly perceive and interpret. This goal involves several intermediate aims: to know the existing parameterization options, to understand each of the parameters, to discover the relations between them and the expressive speech styles and, finally, to carry out the expressive speech synthesis. The synthesis process itself involves previous work in emotion recognition, which could be a complete research field in its own right, since it shows the feasibility of using the selected parameters to discriminate between styles and provides the knowledge necessary for the models used during the synthesis process.
The search for increased naturalness has implied a better characterization of emotional or expressive speech, so we have researched parameterizations that could perform this task. These are the Voice Quality (VoQ) parameters, whose main feature is that they can characterize speech individually, identifying each of the factors that make it unique. The potential benefits that this kind of parameterization can bring to natural interaction are twofold: the recognition and the synthesis of expressive speech styles. The VoQ parameterization proposal does not attempt to replace prosody, but to work together with it to improve the results obtained so far.
Once the parameter selection is conducted, the modelling of VoQ is addressed (i.e., the analysis and modification methodology), so that each parameter can be extracted from the voice signal and later modified during synthesis. In addition, variations are proposed for the involved and traditionally used parameters, adjusting their definitions to the expressive speech context. From here, we work on the existing relations with the expressive speech styles and finally present the methodology for transforming them, through the joint modification of VoQ and prosody, for ESS in a TTS system.
Križan, Viliam. "Analýza sociálních sítí využitím metod rozpoznání vzoru." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2015. http://www.nusl.cz/ntk/nusl-220399.
Sun, Luning. "Using the Ekman 60 faces test to detect emotion recognition deficit in brain injury patients." Thesis, University of Cambridge, 2015. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.708553.
Guillery, Murielle. "Rôle du contrôle cognitif dans les modulations du langage et des émotions : l'exemple de la schizophrénie et des troubles bipolaires." Thesis, Rennes 2, 2017. http://www.theses.fr/2017REN20070.
The present study explores the modulations of emotional control in the interactions between language and emotions in 23 subjects with schizophrenia in a stabilized state and 21 subjects with bipolar disorder in the euthymic phase. The interactions were examined first in the direction of emotions via language, using an experimental conditioned emotional Stroop task, and then conversely in the direction of language via emotions, using an experimental lexical decision task with emotionally connoted orthographic neighbors. The results highlight a positive emotional hyper-reactivity in bipolar disorders and disorders of emotional cognitive control in schizophrenia. These two diseases present overlaps in cognitive changes that do not yet allow cognitive markers to be distinguished. However, the results of this study indicate that the processes involved in the disturbed processing of emotionally connoted words differ in nature between these two pathologies. Hence, the present study could prove useful in differentiating schizophrenia from bipolar disorders.
Graziani, Lisa. "Constrained Affective Computing." Doctoral thesis, 2021. http://hdl.handle.net/2158/1238365.
Mountain, Mary Ann Forbes. "The Victoria emotion recognition test." Thesis, 1992. https://dspace.library.uvic.ca//handle/1828/9620.
Books on the topic "Text emotion recognition"
Emde, Robert N., Joy D. Osofsky, and Perry M. Butterfield, eds. The IFEEL Pictures: A New Instrument for Interpreting Emotions. Madison, Conn.: International Universities Press, 1993.
Rueschemeyer, Shirley-Ann, and M. Gareth Gaskell, eds. The Oxford Handbook of Psycholinguistics. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780198786825.001.0001.
Full text(Editor), Robert N. Emde, Joy D. Osofsky (Contributor, Editor), and Perry M. Butterfield (Editor), eds. The Ifeel Pictures: A New Instrument for Interpreting Emotions (Clinical Infant Reports). International Universities Press, 1993.
Vanmai, Jean. Jean Vanmai's Chân Đăng: The Tonkinese of Caledonia in the colonial era. Translated by Tess Do and Kathryn Lay-Chenchabi. University of Technology, Sydney, 2022. http://dx.doi.org/10.5130/aai.
Book chapters on the topic "Text emotion recognition"
Nayak, Biswajit, and Manoj Kumar Pradhan. "Text-Dependent Versus Text-Independent Speech Emotion Recognition." In Advances in Intelligent Systems and Computing, 153–61. New Delhi: Springer India, 2015. http://dx.doi.org/10.1007/978-81-322-2517-1_16.
Almahdawi, Amer, and William John Teahan. "Emotion Recognition in Text Using PPM." In Artificial Intelligence XXXIV, 149–55. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-71078-5_13.
Huang, Xiaoxi, Yun Yang, and Changle Zhou. "Emotional Metaphors for Emotion Recognition in Chinese Text." In Affective Computing and Intelligent Interaction, 319–25. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11573548_41.
Gajšek, Rok, Vitomir Štruc, and France Mihelič. "Multimodal Emotion Recognition Based on the Decoupling of Emotion and Speaker Information." In Text, Speech and Dialogue, 275–82. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15760-8_35.
Popková, Anna, Filip Povolný, Pavel Matějka, Ondřej Glembek, František Grézl, and Jan "Honza" Černocký. "Investigation of Bottle-Neck Features for Emotion Recognition." In Text, Speech, and Dialogue, 426–34. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-45510-5_49.
Kostoulas, Theodoros, Todor Ganchev, Alexandros Lazaridis, and Nikos Fakotakis. "Enhancing Emotion Recognition from Speech through Feature Selection." In Text, Speech and Dialogue, 338–44. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15760-8_43.
Chauhan, Rahul, Jainath Yadav, S. G. Koolagudi, and K. Sreenivasa Rao. "Text Independent Emotion Recognition Using Spectral Features." In Communications in Computer and Information Science, 359–70. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-22606-9_37.
Ho, Vong Anh, Duong Huynh-Cong Nguyen, Danh Hoang Nguyen, Linh Thi-Van Pham, Duc-Vu Nguyen, Kiet Van Nguyen, and Ngan Luu-Thuy Nguyen. "Emotion Recognition for Vietnamese Social Media Text." In Communications in Computer and Information Science, 319–33. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-6168-9_27.
Heracleous, Panikos, Yasser Mohammad, Keiji Yasuda, and Akio Yoneyama. "Speech Emotion Recognition Using Spontaneous Children's Corpus." In Computational Linguistics and Intelligent Text Processing, 321–33. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-24340-0_24.
Moore, Johanna D., Leimin Tian, and Catherine Lai. "Word-Level Emotion Recognition Using High-Level Features." In Computational Linguistics and Intelligent Text Processing, 17–31. Berlin, Heidelberg: Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-642-54903-8_2.
Conference papers on the topic "Text emotion recognition"
Liu, Taiao, Yajun Du, and Qiaoyu Zhou. "Text Emotion Recognition Using GRU Neural Network with Attention Mechanism and Emoticon Emotions." In RICAI 2020: 2020 2nd International Conference on Robotics, Intelligent Control and Artificial Intelligence. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3438872.3439094.
Park, Seo-Hui, Byung-Chull Bae, and Yun-Gyung Cheong. "Emotion Recognition from Text Stories Using an Emotion Embedding Model." In 2020 IEEE International Conference on Big Data and Smart Computing (BigComp). IEEE, 2020. http://dx.doi.org/10.1109/bigcomp48618.2020.00014.
Islam, Juyana, Sadman Ahmed, M. A. H. Akhand, and N. Siddique. "Improved Emotion Recognition from Microblog Focusing on Both Emoticon and Text." In 2020 IEEE Region 10 Symposium (TENSYMP). IEEE, 2020. http://dx.doi.org/10.1109/tensymp50017.2020.9230725.
D, Chris Jonathan, and Sujitha Juliet. "Text-based Emotion Recognition using Sentiment Analysis." In 2022 International Conference on Applied Artificial Intelligence and Computing (ICAAIC). IEEE, 2022. http://dx.doi.org/10.1109/icaaic53929.2022.9793304.
Schuller, Bjorn, Bogdan Vlasenko, Dejan Arsic, Gerhard Rigoll, and Andreas Wendemuth. "Combining speech recognition and acoustic word emotion models for robust text-independent emotion recognition." In 2008 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2008. http://dx.doi.org/10.1109/icme.2008.4607689.
Su, Ming-Hsiang, Chung-Hsien Wu, Kun-Yi Huang, and Qian-Bei Hong. "LSTM-based Text Emotion Recognition Using Semantic and Emotional Word Vectors." In 2018 First Asian Conference on Affective Computing and Intelligent Interaction (ACII Asia). IEEE, 2018. http://dx.doi.org/10.1109/aciiasia.2018.8470378.
Calefato, Fabio, Filippo Lanubile, and Nicole Novielli. "EmoTxt: A toolkit for emotion recognition from text." In 2017 Seventh International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW). IEEE, 2017. http://dx.doi.org/10.1109/aciiw.2017.8272591.
Yoon, Seunghyun, Seokhyun Byun, and Kyomin Jung. "Multimodal Speech Emotion Recognition Using Audio and Text." In 2018 IEEE Spoken Language Technology Workshop (SLT). IEEE, 2018. http://dx.doi.org/10.1109/slt.2018.8639583.
Wu, Ye, and Fuji Ren. "Improving emotion recognition from text with fractionation training." In 2010 International Conference on Natural Language Processing and Knowledge Engineering (NLP-KE). IEEE, 2010. http://dx.doi.org/10.1109/nlpke.2010.5587800.
Ito, Manabu, and Konstantin Markov. "Sentence embedding based emotion recognition from text data." In RACS '22: International Conference on Research in Adaptive and Convergent Systems. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3538641.3561488.