Academic literature on the topic 'Open multimodal emotion corpus'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Open multimodal emotion corpus.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
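For readers who would rather script this step than click through each entry, the sketch below shows, purely as an illustration, how one work's metadata might be formatted into APA- and MLA-style reference strings. The field names and helper functions are hypothetical and do not reflect this site's actual data model or API; the sample metadata is taken from the Cu et al. entry listed below.

# Minimal illustrative sketch (hypothetical field names, not this site's API):
# format one work's metadata as APA- and MLA-style reference strings.

work = {
    "authors": ["Cu, Jocelynn", "Solomon, Katrina Ysabel",
                "Suarez, Merlin Teodosia", "Sta. Maria, Madelene"],
    "title": "A multimodal emotion corpus for Filipino and its uses",
    "journal": "Journal on Multimodal User Interfaces",
    "volume": "7", "issue": "1-2", "year": "2012",
    "pages": "135-142", "doi": "10.1007/s12193-012-0114-8",
}

def apa(w):
    # Simplified APA pattern: Authors (Year). Title. Journal, Vol(Issue), pages. DOI
    return (f'{", ".join(w["authors"])} ({w["year"]}). {w["title"]}. '
            f'{w["journal"]}, {w["volume"]}({w["issue"]}), {w["pages"]}. '
            f'https://doi.org/{w["doi"]}')

def mla(w):
    # Simplified MLA pattern: First author, et al. "Title." Journal, vol./no., Year, pp.
    return (f'{w["authors"][0]}, et al. "{w["title"]}." {w["journal"]}, '
            f'vol. {w["volume"]}, no. {w["issue"]}, {w["year"]}, pp. {w["pages"]}.')

print(apa(work))
print(mla(work))

Production bibliography tools handle many more edge cases (single authors, missing issues, style-specific name inversion), so treat this only as a sketch of the idea.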
Journal articles on the topic "Open multimodal emotion corpus"
Dychka, Ivan, Ihor Tereikovskyi, Andrii Samofalov, Lyudmila Tereykovska, and Vitaliy Romankevich. "Multiple Effectiveness Criteria of Forming Databases of Emotional Voice Signals." Cybersecurity: Education, Science, Technique 1, no. 21 (2023): 65–74. http://dx.doi.org/10.28925/2663-4023.2023.21.6574.
Ivanina, Ekaterina O., Anna D. Tokmovtseva, and Elizaveta V. Akelieva. "EmoEye: Eye-Tracking and Biometrics Database for Emotion Recognition." Lurian Journal 4, no. 1 (2023): 8–20. http://dx.doi.org/10.15826/lurian.2023.4.1.1.
Komninos, Nickolas. "Discourse Analysis of the 2022 Australian Tennis Open: A Multimodal Appraisal Perspective." HERMES - Journal of Language and Communication in Business, no. 63 (October 27, 2023): 83–98. http://dx.doi.org/10.7146/hjlcb.vi63.140134.
Obama, Franck Donald, and Olesya Viktorovna Lazareva. "The problem of translating the verbal component of political cartoons in English, Russian and French." Litera, no. 4 (April 2025): 234–48. https://doi.org/10.25136/2409-8698.2025.4.72950.
Majeed, Adil, and Hasan Mujtaba. "UMEDNet: a multimodal approach for emotion detection in the Urdu language." PeerJ Computer Science 11 (May 1, 2025): e2861. https://doi.org/10.7717/peerj-cs.2861.
Cu, Jocelynn, Katrina Ysabel Solomon, Merlin Teodosia Suarez, and Madelene Sta. Maria. "A multimodal emotion corpus for Filipino and its uses." Journal on Multimodal User Interfaces 7, no. 1–2 (2012): 135–42. http://dx.doi.org/10.1007/s12193-012-0114-8.
Qi, Qingfu, Liyuan Lin, and Rui Zhang. "Feature Extraction Network with Attention Mechanism for Data Enhancement and Recombination Fusion for Multimodal Sentiment Analysis." Information 12, no. 9 (2021): 342. http://dx.doi.org/10.3390/info12090342.
Martin, Jean-Claude, Radoslaw Niewiadomski, Laurence Devillers, Stephanie Buisine, and Catherine Pelachaud. "Multimodal Complex Emotions: Gesture Expressivity and Blended Facial Expressions." International Journal of Humanoid Robotics 3, no. 3 (2006): 269–91. http://dx.doi.org/10.1142/s0219843606000825.
Buitelaar, Paul, Ian D. Wood, Sapna Negi, et al. "MixedEmotions: An Open-Source Toolbox for Multimodal Emotion Analysis." IEEE Transactions on Multimedia 20, no. 9 (2018): 2454–65. http://dx.doi.org/10.1109/tmm.2018.2798287.
Jiang, Jiali. "Multimodal Emotion Recognition Based on Deep Learning." International Journal of Computer Science and Information Technology 5, no. 2 (2025): 71–80. https://doi.org/10.62051/ijcsit.v5n2.10.
Full textDissertations / Theses on the topic "Open multimodal emotion corpus"
Delahaye, Pauline. "Étude sémiotique des émotions complexes animales : des signes pour le dire [A semiotic study of complex animal emotions: signs to express them]." Thesis, Paris 4, 2017. http://www.theses.fr/2017PA040086.
Book chapters on the topic "Open multimodal emotion corpus"
Huang, Zhaopei, Jinming Zhao, and Qin Jin. "Two-Stage Adaptation for Cross-Corpus Multimodal Emotion Recognition." In Natural Language Processing and Chinese Computing. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-44696-2_34.
Deng, Jun, Nicholas Cummins, Jing Han, et al. "The University of Passau Open Emotion Recognition System for the Multimodal Emotion Challenge." In Communications in Computer and Information Science. Springer Singapore, 2016. http://dx.doi.org/10.1007/978-981-10-3005-5_54.
Conference papers on the topic "Open multimodal emotion corpus"
Ghosh, Shreya, Zhixi Cai, Parul Gupta, et al. "Emolysis: A Multimodal Open-Source Group Emotion Analysis and Visualization Toolkit." In 2024 12th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW). IEEE, 2024. https://doi.org/10.1109/aciiw63320.2024.00023.
Maham, Shiza, Abdullah Tariq, Bushra Tayyaba, Bisma Saleem, and Muhammad Hamza Farooq. "MMER: Mid-Level Fusion Strategy for Multimodal Emotion Recognition using Speech and Video Data." In 2024 18th International Conference on Open Source Systems and Technologies (ICOSST). IEEE, 2024. https://doi.org/10.1109/icosst64562.2024.10871145.
Zhang, Zheng, Jeffrey M. Girard, Yue Wu, et al. "Multimodal Spontaneous Emotion Corpus for Human Behavior Analysis." In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2016. http://dx.doi.org/10.1109/cvpr.2016.374.
Chou, Huang-Cheng, Wei-Cheng Lin, Lien-Chiang Chang, Chyi-Chang Li, Hsi-Pin Ma, and Chi-Chun Lee. "NNIME: The NTHU-NTUA Chinese interactive multimodal emotion corpus." In 2017 Seventh International Conference on Affective Computing and Intelligent Interaction (ACII). IEEE, 2017. http://dx.doi.org/10.1109/acii.2017.8273615.
Voloshina, Tatiana, and Olesia Makhnytkina. "Multimodal Emotion Recognition and Sentiment Analysis Using Masked Attention and Multimodal Interaction." In 2023 33rd Conference of Open Innovations Association (FRUCT). IEEE, 2023. http://dx.doi.org/10.23919/fruct58615.2023.10143065.
Zhang, Zixing, Zhongren Dong, Zhiqiang Gao, et al. "Open Vocabulary Emotion Prediction Based on Large Multimodal Models." In MM '24: The 32nd ACM International Conference on Multimedia. ACM, 2024. http://dx.doi.org/10.1145/3689092.3689402.
Deschamps-Berger, Theo, Lori Lamel, and Laurence Devillers. "Exploring Attention Mechanisms for Multimodal Emotion Recognition in an Emergency Call Center Corpus." In ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2023. http://dx.doi.org/10.1109/icassp49357.2023.10096112.
Horii, Daisuke, Akinori Ito, and Takashi Nose. "Design and Construction of Japanese Multimodal Utterance Corpus with Improved Emotion Balance and Naturalness." In 2022 Asia Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC). IEEE, 2022. http://dx.doi.org/10.23919/apsipaasc55919.2022.9980272.
Clavel, Celine, and Jean-Claude Martin. "Exploring relations between cognitive style and multimodal expression of emotion in a TV series corpus." In 2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops (ACII 2009). IEEE, 2009. http://dx.doi.org/10.1109/acii.2009.5349540.
Chang, Chun-Min, Bo-Hao Su, Shih-Chen Lin, Jeng-Lin Li, and Chi-Chun Lee. "A bootstrapped multi-view weighted Kernel fusion framework for cross-corpus integration of multimodal emotion recognition." In 2017 Seventh International Conference on Affective Computing and Intelligent Interaction (ACII). IEEE, 2017. http://dx.doi.org/10.1109/acii.2017.8273627.