Journal articles on the topic "Emotional speech database"
Format your source according to APA, MLA, Chicago, Harvard, and other citation styles
Consult the top 50 journal articles for your research on the topic "Emotional speech database".
Tank, Vishal P., and S. K. Hadia. "Creation of speech corpus for emotion analysis in Gujarati language and its evaluation by various speech parameters." International Journal of Electrical and Computer Engineering (IJECE) 10, no. 5 (October 1, 2020): 4752. http://dx.doi.org/10.11591/ijece.v10i5.pp4752-4758.
Byun, Sung-Woo, and Seok-Pil Lee. "A Study on a Speech Emotion Recognition System with Effective Acoustic Features Using Deep Learning Algorithms." Applied Sciences 11, no. 4 (February 21, 2021): 1890. http://dx.doi.org/10.3390/app11041890.
손남호, Hwang Hyosung, and Ho-Young Lee. "Emotional Speech Database and the Acoustic Analysis of Emotional Speech." EONEOHAG, no. 72 (August 2015): 175–99. http://dx.doi.org/10.17290/jlsk.2015..72.175.
Vicsi, Klára, and Dávid Sztahó. "Recognition of Emotions on the Basis of Different Levels of Speech Segments." Journal of Advanced Computational Intelligence and Intelligent Informatics 16, no. 2 (March 20, 2012): 335–40. http://dx.doi.org/10.20965/jaciii.2012.p0335.
Quan, Changqin, Bin Zhang, Xiao Sun, and Fuji Ren. "A combined cepstral distance method for emotional speech recognition." International Journal of Advanced Robotic Systems 14, no. 4 (July 1, 2017): 172988141771983. http://dx.doi.org/10.1177/1729881417719836.
Shahin, Ismail. "Employing Emotion Cues to Verify Speakers in Emotional Talking Environments." Journal of Intelligent Systems 25, no. 1 (January 1, 2016): 3–17. http://dx.doi.org/10.1515/jisys-2014-0118.
Caballero-Morales, Santiago-Omar. "Recognition of Emotions in Mexican Spanish Speech: An Approach Based on Acoustic Modelling of Emotion-Specific Vowels." Scientific World Journal 2013 (2013): 1–13. http://dx.doi.org/10.1155/2013/162093.
Sultana, Sadia, M. Shahidur Rahman, M. Reza Selim, and M. Zafar Iqbal. "SUST Bangla Emotional Speech Corpus (SUBESCO): An audio-only emotional speech corpus for Bangla." PLOS ONE 16, no. 4 (April 30, 2021): e0250173. http://dx.doi.org/10.1371/journal.pone.0250173.
Keshtiari, Niloofar, Michael Kuhlmann, Moharram Eslami, and Gisela Klann-Delius. "Recognizing emotional speech in Persian: A validated database of Persian emotional speech (Persian ESD)." Behavior Research Methods 47, no. 1 (May 23, 2014): 275–94. http://dx.doi.org/10.3758/s13428-014-0467-x.
Werner, S., and G. N. Petrenko. "Speech Emotion Recognition: Humans vs Machines." Discourse 5, no. 5 (December 18, 2019): 136–52. http://dx.doi.org/10.32603/2412-8562-2019-5-5-136-152.
Anvarjon, Tursunov, Mustaqeem, and Soonil Kwon. "Deep-Net: A Lightweight CNN-Based Speech Emotion Recognition System Using Deep Frequency Features." Sensors 20, no. 18 (September 12, 2020): 5212. http://dx.doi.org/10.3390/s20185212.
Zhao, Hui, Yu Tai Wang, and Xing Hai Yang. "Emotion Detection System Based on Speech and Facial Signals." Advanced Materials Research 459 (January 2012): 483–87. http://dx.doi.org/10.4028/www.scientific.net/amr.459.483.
Arimoto, Yoshiko, Sumio Ohno, and Hitoshi Iida. "Assessment of spontaneous emotional speech database toward emotion recognition: Intensity and similarity of perceived emotion from spontaneously expressed emotional speech." Acoustical Science and Technology 32, no. 1 (2011): 26–29. http://dx.doi.org/10.1250/ast.32.26.
Batliner, Anton, Dino Seppi, Stefan Steidl, and Björn Schuller. "Segmenting into Adequate Units for Automatic Recognition of Emotion-Related Episodes: A Speech-Based Approach." Advances in Human-Computer Interaction 2010 (2010): 1–15. http://dx.doi.org/10.1155/2010/782802.
Waghmare, V. B., R. R. Deshmukh, P. P. Shrishrimal, and G. B. Janvale. "Development of Isolated Marathi Words Emotional Speech Database." International Journal of Computer Applications 94, no. 4 (May 16, 2014): 19–22. http://dx.doi.org/10.5120/16331-5611.
Moriarty, Peter M., Michelle Vigeant, Rachel Wolf, Rick Gilmore, and Pamela Cole. "Creation and characterization of an emotional speech database." Journal of the Acoustical Society of America 143, no. 3 (March 2018): 1869. http://dx.doi.org/10.1121/1.5036133.
Huang, Ri Sheng. "Information Technology in an Improved Supervised Locally Linear Embedding for Recognizing Speech Emotion." Advanced Materials Research 1014 (July 2014): 375–78. http://dx.doi.org/10.4028/www.scientific.net/amr.1014.375.
Tursunov, Anvarjon, Soonil Kwon, and Hee-Suk Pang. "Discriminating Emotions in the Valence Dimension from Speech Using Timbre Features." Applied Sciences 9, no. 12 (June 17, 2019): 2470. http://dx.doi.org/10.3390/app9122470.
Trabelsi, Imen, and Med Salim Bouhlel. "Feature Selection for GUMI Kernel-Based SVM in Speech Emotion Recognition." International Journal of Synthetic Emotions 6, no. 2 (July 2015): 57–68. http://dx.doi.org/10.4018/ijse.2015070104.
Keshtiari, Niloofar, Michael Kuhlmann, Moharram Eslami, and Gisela Klann-Delius. "Erratum to: Recognizing emotional speech in Persian: A validated database of Persian emotional speech (Persian ESD)." Behavior Research Methods 47, no. 1 (November 26, 2014): 295. http://dx.doi.org/10.3758/s13428-014-0504-9.
Zvarevashe, Kudakwashe, and Oludayo O. Olugbara. "Recognition of speech emotion using custom 2D-convolution neural network deep learning algorithm." Intelligent Data Analysis 24, no. 5 (September 30, 2020): 1065–86. http://dx.doi.org/10.3233/ida-194747.
Seo, Minji, and Myungho Kim. "Fusing Visual Attention CNN and Bag of Visual Words for Cross-Corpus Speech Emotion Recognition." Sensors 20, no. 19 (September 28, 2020): 5559. http://dx.doi.org/10.3390/s20195559.
Trabelsi, Imen, and Med Salim Bouhlel. "Comparison of Several Acoustic Modeling Techniques for Speech Emotion Recognition." International Journal of Synthetic Emotions 7, no. 1 (January 2016): 58–68. http://dx.doi.org/10.4018/ijse.2016010105.
Sekkate, Sara, Mohammed Khalil, Abdellah Adib, and Sofia Ben Jebara. "An Investigation of a Feature-Level Fusion for Noisy Speech Emotion Recognition." Computers 8, no. 4 (December 13, 2019): 91. http://dx.doi.org/10.3390/computers8040091.
Sun, Ying, Xue-Ying Zhang, Jiang-He Ma, Chun-Xiao Song, and Hui-Fen Lv. "Nonlinear Dynamic Feature Extraction Based on Phase Space Reconstruction for the Classification of Speech and Emotion." Mathematical Problems in Engineering 2020 (April 9, 2020): 1–15. http://dx.doi.org/10.1155/2020/9452976.
Huang, Chengwei, Guoming Chen, Hua Yu, Yongqiang Bao, and Li Zhao. "Speech Emotion Recognition under White Noise." Archives of Acoustics 38, no. 4 (December 1, 2013): 457–63. http://dx.doi.org/10.2478/aoa-2013-0054.
Mustaqeem and Soonil Kwon. "A CNN-Assisted Enhanced Audio Signal Processing for Speech Emotion Recognition." Sensors 20, no. 1 (December 28, 2019): 183. http://dx.doi.org/10.3390/s20010183.
Bang, Jaehun, Taeho Hur, Dohyeong Kim, Thien Huynh-The, Jongwon Lee, Yongkoo Han, Oresti Banos, Jee-In Kim, and Sungyoung Lee. "Adaptive Data Boosting Technique for Robust Personalized Speech Emotion in Emotionally-Imbalanced Small-Sample Environments." Sensors 18, no. 11 (November 2, 2018): 3744. http://dx.doi.org/10.3390/s18113744.
Zaidan, Noor Aina, and Md Sah Hj Salam. "Emotional speech feature selection using end-part segmented energy feature." Indonesian Journal of Electrical Engineering and Computer Science 15, no. 3 (September 1, 2019): 1374. http://dx.doi.org/10.11591/ijeecs.v15.i3.pp1374-1381.
Noh, Kyoung Ju, Chi Yoon Jeong, Jiyoun Lim, Seungeun Chung, Gague Kim, Jeong Mook Lim, and Hyuntae Jeong. "Multi-Path and Group-Loss-Based Network for Speech Emotion Recognition in Multi-Domain Datasets." Sensors 21, no. 5 (February 24, 2021): 1579. http://dx.doi.org/10.3390/s21051579.
Pramod Reddy, A., and Vijayarajan V. "Recognition of human emotion with spectral features using multi layer-perceptron." International Journal of Knowledge-based and Intelligent Engineering Systems 24, no. 3 (September 28, 2020): 227–33. http://dx.doi.org/10.3233/kes-200044.
Jaratrotkamjorn, Apichart. "Bimodal Emotion Recognition Using Deep Belief Network." ECTI Transactions on Computer and Information Technology (ECTI-CIT) 15, no. 1 (January 14, 2021): 73–81. http://dx.doi.org/10.37936/ecti-cit.2021151.226446.
Cai, Linqin, Yaxin Hu, Jiangong Dong, and Sitong Zhou. "Audio-Textual Emotion Recognition Based on Improved Neural Networks." Mathematical Problems in Engineering 2019 (December 31, 2019): 1–9. http://dx.doi.org/10.1155/2019/2593036.
Jiang, Xiaoqing, Kewen Xia, Lingyin Wang, and Yongliang Lin. "Reordering Features with Weights Fusion in Multiclass and Multiple-Kernel Speech Emotion Recognition." Journal of Electrical and Computer Engineering 2017 (2017): 1–7. http://dx.doi.org/10.1155/2017/8709518.
Lee, Sanghyun, David K. Han, and Hanseok Ko. "Fusion-ConvBERT: Parallel Convolution and BERT Fusion for Speech Emotion Recognition." Sensors 20, no. 22 (November 23, 2020): 6688. http://dx.doi.org/10.3390/s20226688.
Yu, Yeonguk, and Yoon-Joong Kim. "Attention-LSTM-Attention Model for Speech Emotion Recognition and Analysis of IEMOCAP Database." Electronics 9, no. 5 (April 26, 2020): 713. http://dx.doi.org/10.3390/electronics9050713.
Lieskovská, Eva, Maroš Jakubec, Roman Jarina, and Michal Chmulík. "A Review on Speech Emotion Recognition Using Deep Learning and Attention Mechanism." Electronics 10, no. 10 (May 13, 2021): 1163. http://dx.doi.org/10.3390/electronics10101163.
Huang, Chengwei, Ruiyu Liang, Qingyun Wang, Ji Xi, Cheng Zha, and Li Zhao. "Practical Speech Emotion Recognition Based on Online Learning: From Acted Data to Elicited Data." Mathematical Problems in Engineering 2013 (2013): 1–9. http://dx.doi.org/10.1155/2013/265819.
Partila, Pavol, Miroslav Voznak, and Jaromir Tovarek. "Pattern Recognition Methods and Features Selection for Speech Emotion Recognition System." Scientific World Journal 2015 (2015): 1–7. http://dx.doi.org/10.1155/2015/573068.
Cairong, Zou, Zhang Xinran, Zha Cheng, and Zhao Li. "A Novel DBN Feature Fusion Model for Cross-Corpus Speech Emotion Recognition." Journal of Electrical and Computer Engineering 2016 (2016): 1–11. http://dx.doi.org/10.1155/2016/7437860.
Helmiyah, Siti, Abdul Fadlil, and Anton Yudhana. "Pengenalan Pola Emosi Manusia Berdasarkan Ucapan Menggunakan Ekstraksi Fitur Mel-Frequency Cepstral Coefficients (MFCC)" [Recognition of Human Emotion Patterns from Speech Using Mel-Frequency Cepstral Coefficients (MFCC) Feature Extraction]. CogITo Smart Journal 4, no. 2 (February 8, 2019): 372. http://dx.doi.org/10.31154/cogito.v4i2.129.372-381.
Agrima, Abdellah, Ilham Mounir, Abdelmajid Farchi, Laila Elmaazouzi, and Badia Mounir. "Emotion recognition from syllabic units using k-nearest-neighbor classification and energy distribution." International Journal of Electrical and Computer Engineering (IJECE) 11, no. 6 (December 1, 2021): 5438. http://dx.doi.org/10.11591/ijece.v11i6.pp5438-5449.
Wang, Jade, Trent Nicol, Erika Skoe, Mikko Sams, and Nina Kraus. "Emotion Modulates Early Auditory Response to Speech." Journal of Cognitive Neuroscience 21, no. 11 (November 2009): 2121–28. http://dx.doi.org/10.1162/jocn.2008.21147.
Mustaqeem and Soonil Kwon. "CLSTM: Deep Feature-Based Speech Emotion Recognition Using the Hierarchical ConvLSTM Network." Mathematics 8, no. 12 (November 30, 2020): 2133. http://dx.doi.org/10.3390/math8122133.
Chang, Xin, and Władysław Skarbek. "Multi-Modal Residual Perceptron Network for Audio–Video Emotion Recognition." Sensors 21, no. 16 (August 12, 2021): 5452. http://dx.doi.org/10.3390/s21165452.
Basalaeva, Elena G., Elena Yu Bulygina, and Tatiana A. Tripolskaya. "Stylistic Qualification of Colloquial Vocabulary in the Database of Pragmatically Marked Vocabulary of the Russian Language." Voprosy leksikografii, no. 20 (2021): 5–22. http://dx.doi.org/10.17223/22274200/20/1.
Metallinou, Angeliki, Zhaojun Yang, Chi-chun Lee, Carlos Busso, Sharon Carnicke, and Shrikanth Narayanan. "The USC CreativeIT database of multimodal dyadic interactions: from speech and full body motion capture to continuous emotional annotations." Language Resources and Evaluation 50, no. 3 (April 17, 2015): 497–521. http://dx.doi.org/10.1007/s10579-015-9300-0.
Avdeev, Vladimir, Viktor Trushin, and Mihail Kungurov. "Unified Speech-Like Interference for Active Protection of Speech Information." Informatics and Automation 19, no. 5 (October 15, 2020): 991–1017. http://dx.doi.org/10.15622/ia.2020.19.5.4.
Sepúlveda, Axel, Francisco Castillo, Carlos Palma, and Maria Rodriguez-Fernandez. "Emotion Recognition from ECG Signals Using Wavelet Scattering and Machine Learning." Applied Sciences 11, no. 11 (May 27, 2021): 4945. http://dx.doi.org/10.3390/app11114945.
Reese, K., D. P. Terry, B. Maxwell, R. Zafonte, P. D. Berkner, and G. L. Iverson. "The Association Between Past Speech Therapy and Preseason Symptom Reporting in Adolescent Student Athletes." Archives of Clinical Neuropsychology 34, no. 5 (July 2019): 752. http://dx.doi.org/10.1093/arclin/acz026.22.