Academic literature on the topic 'Perceptual features for speech recognition'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Perceptual features for speech recognition.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Perceptual features for speech recognition"

1

Li, Guan Yu, Hong Zhi Yu, Yong Hong Li, and Ning Ma. "Features Extraction for Lhasa Tibetan Speech Recognition." Applied Mechanics and Materials 571-572 (June 2014): 205–8. http://dx.doi.org/10.4028/www.scientific.net/amm.571-572.205.

Abstract:
Speech feature extraction is discussed. The Mel frequency cepstral coefficient (MFCC) and perceptual linear prediction coefficient (PLP) methods are analyzed. These two types of features are extracted in a Lhasa large-vocabulary continuous speech recognition system, and the recognition results are compared.
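A minimal sketch of the kind of MFCC front end compared in this entry, using the librosa package. The signal, sampling rate, and frame settings (25 ms window, 10 ms shift) are illustrative assumptions, not values from the paper, and PLP features are omitted because librosa does not provide them.

    # Minimal MFCC front-end sketch; a real system would load a recorded utterance,
    # e.g. y, sr = librosa.load("utterance.wav", sr=16000)  (assumed file name).
    import numpy as np
    import librosa

    sr = 16000
    y = np.random.randn(sr).astype(np.float32)               # 1 s stand-in for a speech signal
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13,
                                n_fft=400, hop_length=160)   # 25 ms window, 10 ms shift
    delta = librosa.feature.delta(mfcc)                      # first-order dynamic features
    feats = np.vstack([mfcc, delta]).T                       # (frames, 26) feature matrix
    print(feats.shape)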
2

Haque, Serajul, Roberto Togneri, and Anthony Zaknich. "Perceptual features for automatic speech recognition in noisy environments." Speech Communication 51, no. 1 (January 2009): 58–75. http://dx.doi.org/10.1016/j.specom.2008.06.002.

3

Trabelsi, Imen, and Med Salim Bouhlel. "Comparison of Several Acoustic Modeling Techniques for Speech Emotion Recognition." International Journal of Synthetic Emotions 7, no. 1 (January 2016): 58–68. http://dx.doi.org/10.4018/ijse.2016010105.

Abstract:
Automatic Speech Emotion Recognition (SER) is a current research topic in the field of Human Computer Interaction (HCI) with a wide range of applications. The purpose of speech emotion recognition system is to automatically classify speaker's utterances into different emotional states such as disgust, boredom, sadness, neutral, and happiness. The speech samples in this paper are from the Berlin emotional database. Mel Frequency cepstrum coefficients (MFCC), Linear prediction coefficients (LPC), linear prediction cepstrum coefficients (LPCC), Perceptual Linear Prediction (PLP) and Relative Spec
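As a rough illustration of one of the feature families listed in the abstract above, the sketch below converts the LPC coefficients of a single frame into LPC cepstral coefficients (LPCC) via the standard recursion. The frame, model order, and cepstrum length are arbitrary assumptions, and the zeroth (energy) coefficient is left at zero.

    import numpy as np
    import librosa

    def lpcc(frame, order=12, n_ceps=13):
        """LPC cepstral coefficients of one frame via the standard LPC-to-cepstrum recursion."""
        A = librosa.lpc(frame, order=order)   # error filter A(z) = 1 + a1 z^-1 + ... + ap z^-p
        a = -A[1:]                            # predictor coefficients of H(z) = 1/A(z)
        c = np.zeros(n_ceps)                  # c[0] (energy term) left at zero here
        for m in range(1, n_ceps):
            c[m] = a[m - 1] if m <= order else 0.0
            for k in range(1, m):
                if m - k <= order:
                    c[m] += (k / m) * c[k] * a[m - k - 1]
        return c

    frame = np.random.randn(400)              # stand-in for a 25 ms speech frame
    print(lpcc(frame))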
4

Dua, Mohit, Rajesh Kumar Aggarwal, and Mantosh Biswas. "Optimizing Integrated Features for Hindi Automatic Speech Recognition System." Journal of Intelligent Systems 29, no. 1 (October 1, 2018): 959–76. http://dx.doi.org/10.1515/jisys-2018-0057.

Abstract:
An automatic speech recognition (ASR) system translates spoken words or utterances (isolated, connected, continuous, and spontaneous) into text format. State-of-the-art ASR systems mainly use Mel frequency (MF) cepstral coefficient (MFCC), perceptual linear prediction (PLP), and Gammatone frequency (GF) cepstral coefficient (GFCC) for extracting features in the training phase of the ASR system. Initially, the paper proposes a sequential combination of all three feature extraction methods, taking two at a time. Six combinations, MF-PLP, PLP-MFCC, MF-GFCC, GF-MFCC, GF-PLP, and PLP-GFCC,
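A sequential (frame-wise) combination of two feature streams, as described in the abstract above, can be as simple as concatenating the per-frame vectors. The sketch below shows one plausible reading of an "MF-GFCC" style combination; the array shapes are assumptions and the paper's exact integration scheme may differ.

    import numpy as np

    def combine(feats_a, feats_b):
        """Frame-wise concatenation of two feature streams shaped (frames, dims)."""
        n = min(len(feats_a), len(feats_b))   # guard against differing frame counts
        return np.hstack([feats_a[:n], feats_b[:n]])

    mfcc = np.random.randn(200, 13)           # stand-in for MFCC frames
    gfcc = np.random.randn(200, 13)           # stand-in for GFCC frames
    mf_gfcc = combine(mfcc, gfcc)             # (200, 26) integrated feature vectors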
5

Al Mahmud, Nahyan, and Shahfida Amjad Munni. "Qualitative Analysis of PLP in LSTM for Bangla Speech Recognition." International journal of Multimedia & Its Applications 12, no. 5 (October 30, 2020): 1–8. http://dx.doi.org/10.5121/ijma.2020.12501.

Abstract:
The performance of various acoustic feature extraction methods has been compared in this work using a Long Short-Term Memory (LSTM) neural network in a Bangla speech recognition system. The acoustic features are a series of vectors that represent the speech signals. They can be classified into either words or sub-word units such as phonemes. In this work, linear predictive coding (LPC) is first used as the acoustic vector extraction technique. LPC has been chosen due to its widespread popularity. Then other vector extraction techniques like Mel frequency cepstral coefficients (MFCC) and perceptual
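To make the classifier side of such a comparison concrete, here is a minimal Keras sketch of an LSTM that maps a sequence of acoustic feature vectors to a class label. The sequence length, feature dimension, number of classes, and training data are placeholder assumptions, not details from the paper.

    import numpy as np
    import tensorflow as tf

    T, D, n_classes = 200, 13, 30                      # assumed frames, feature dim, labels
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(T, D)),
        tf.keras.layers.LSTM(128),                     # sequence -> fixed-length embedding
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    x = np.random.randn(8, T, D).astype("float32")     # stand-in for MFCC/PLP sequences
    y = np.random.randint(0, n_classes, size=8)
    model.fit(x, y, epochs=1, verbose=0)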
6

Kamińska, Dorota. "Emotional Speech Recognition Based on the Committee of Classifiers." Entropy 21, no. 10 (September 21, 2019): 920. http://dx.doi.org/10.3390/e21100920.

Abstract:
This article presents the novel method for emotion recognition from speech based on committee of classifiers. Different classification methods were juxtaposed in order to compare several alternative approaches for final voting. The research is conducted on three different types of Polish emotional speech: acted out with the same content, acted out with different content, and spontaneous. A pool of descriptors, commonly utilized for emotional speech recognition, expanded with sets of various perceptual coefficients, is used as input features. This research shows that presented approach improve
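A committee of classifiers with final voting, in the spirit of the approach above, can be prototyped with scikit-learn's VotingClassifier. The member classifiers, the soft-voting choice, and the synthetic data below are assumptions for illustration only.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier, VotingClassifier
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=300, n_features=26, n_informative=10,
                               n_classes=3, random_state=0)   # stand-in emotion features
    committee = VotingClassifier(
        estimators=[("svm", SVC(probability=True)),
                    ("rf", RandomForestClassifier(random_state=0)),
                    ("knn", KNeighborsClassifier())],
        voting="soft")                        # average predicted probabilities for the final decision
    committee.fit(X, y)
    print(committee.score(X, y))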
7

Dmitrieva, E., V. Gelman, K. Zaitseva, and A. Orlov. "Psychophysiological features of perceptual learning in the process of speech emotional prosody recognition." International Journal of Psychophysiology 85, no. 3 (September 2012): 375. http://dx.doi.org/10.1016/j.ijpsycho.2012.07.034.

8

Seyedin, Sanaz, Seyed Mohammad Ahadi, and Saeed Gazor. "New Features Using Robust MVDR Spectrum of Filtered Autocorrelation Sequence for Robust Speech Recognition." Scientific World Journal 2013 (2013): 1–11. http://dx.doi.org/10.1155/2013/634160.

Abstract:
This paper presents a novel noise-robust feature extraction method for speech recognition using the robust perceptual minimum variance distortionless response (MVDR) spectrum of temporally filtered autocorrelation sequence. The perceptual MVDR spectrum of the filtered short-time autocorrelation sequence can reduce the effects of residue of the nonstationary additive noise which remains after filtering the autocorrelation. To achieve a more robust front-end, we also modify the robust distortionless constraint of the MVDR spectral estimation method via revised weighting of the subband power spec
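The sketch below computes a plain (non-perceptual, unfiltered) MVDR spectral envelope from a frame's autocorrelation sequence, which is the core quantity the robust front end above builds on. The model order, frequency grid, and diagonal loading are assumed values, and the perceptual warping and temporal filtering described in the paper are omitted.

    import numpy as np
    from scipy.linalg import toeplitz

    def mvdr_spectrum(frame, order=12, n_freq=256):
        """Plain MVDR spectral envelope S(w) = 1 / (v(w)^H R^-1 v(w))."""
        r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]  # autocorrelation r[0..]
        R = toeplitz(r[:order + 1])                                   # Toeplitz autocorrelation matrix
        R_inv = np.linalg.inv(R + 1e-6 * np.eye(order + 1))           # small diagonal loading
        omega = np.linspace(0, np.pi, n_freq)
        V = np.exp(-1j * np.outer(np.arange(order + 1), omega))       # steering vectors
        denom = np.einsum("kf,kl,lf->f", V.conj(), R_inv, V).real     # v^H R^-1 v per frequency
        return 1.0 / denom

    frame = np.random.randn(400)              # stand-in for a windowed speech frame
    print(mvdr_spectrum(frame).shape)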
9

Kaur, Gurpreet, Mohit Srivastava, and Amod Kumar. "Genetic Algorithm for Combined Speaker and Speech Recognition using Deep Neural Networks." Journal of Telecommunications and Information Technology 2 (June 29, 2018): 23–31. http://dx.doi.org/10.26636/jtit.2018.119617.

Abstract:
Huge growth is observed in the speech and speaker recognition field due to many artificial intelligence algorithms being applied. Speech is used to convey messages via the language being spoken, emotions, gender and speaker identity. Many real applications in healthcare are based upon speech and speaker recognition, e.g. a voice-controlled wheelchair helps control the chair. In this paper, we use a genetic algorithm (GA) for combined speaker and speech recognition, relying on optimized Mel Frequency Cepstral Coefficient (MFCC) speech features, and classification is performed using a Deep Neural Net
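One common way to pair a genetic algorithm with MFCC-style features is to evolve a binary mask that selects feature dimensions against a classifier's cross-validated accuracy. The toy sketch below follows that generic pattern with synthetic data and scikit-learn's MLPClassifier standing in for a deep neural network; the population size, mutation rate, and data are assumptions, and the authors' actual GA formulation may differ.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X, y = make_classification(n_samples=300, n_features=39, n_informative=12,
                               random_state=0)              # stand-in for 39-dim MFCC vectors

    def fitness(mask):
        if mask.sum() == 0:
            return 0.0
        clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
        return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

    pop = rng.integers(0, 2, size=(10, X.shape[1]))          # random feature-selection masks
    for generation in range(5):
        scores = np.array([fitness(m) for m in pop])
        parents = pop[np.argsort(scores)[-4:]]               # keep the fittest masks
        children = []
        for _ in range(len(pop) - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            child = np.where(rng.random(X.shape[1]) < 0.5, a, b)   # uniform crossover
            flip = rng.random(X.shape[1]) < 0.05                   # bit-flip mutation
            children.append(np.abs(child - flip.astype(int)))
        pop = np.vstack([parents, children])
    best_mask = pop[np.argmax([fitness(m) for m in pop])]
    print(best_mask)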
10

Trabelsi, Imen, and Med Salim Bouhlel. "Feature Selection for GUMI Kernel-Based SVM in Speech Emotion Recognition." International Journal of Synthetic Emotions 6, no. 2 (July 2015): 57–68. http://dx.doi.org/10.4018/ijse.2015070104.

Abstract:
Speech emotion recognition is the indispensable requirement for efficient human machine interaction. Most modern automatic speech emotion recognition systems use Gaussian mixture models (GMM) and Support Vector Machines (SVM). GMM are known for their performance and scalability in the spectral modeling while SVM are known for their discriminatory power. A GMM-supervector characterizes an emotional style by the GMM parameters (mean vectors, covariance matrices, and mixture weights). GMM-supervector SVM benefits from both GMM and SVM frameworks. In this paper, the GMM-UBM mean interval (GUMI) ke
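A heavily simplified GMM-supervector pipeline, in the spirit of the entry above, is sketched here: a small diagonal-covariance GMM is fitted per utterance, its stacked mean vectors become the supervector, and an SVM is trained on those vectors. The real GUMI approach adapts a universal background model and uses a specialised kernel, neither of which is shown; all data and hyperparameters below are assumptions.

    import numpy as np
    from sklearn.mixture import GaussianMixture
    from sklearn.svm import SVC

    def gmm_supervector(frames, n_components=8):
        """Fit a per-utterance GMM and stack its mean vectors into one supervector."""
        gmm = GaussianMixture(n_components=n_components, covariance_type="diag",
                              reg_covar=1e-3, random_state=0).fit(frames)
        return gmm.means_.ravel()              # (n_components * feature_dim,)

    rng = np.random.default_rng(0)
    utterances = [rng.normal(size=(rng.integers(80, 120), 13)) for _ in range(20)]
    labels = rng.integers(0, 2, size=20)       # two toy emotion classes
    X = np.vstack([gmm_supervector(u) for u in utterances])
    svm = SVC(kernel="rbf").fit(X, labels)
    print(svm.score(X, labels))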

Dissertations / Theses on the topic "Perceptual features for speech recognition"

1

Haque, Serajul. "Perceptual features for speech recognition." University of Western Australia. School of Electrical, Electronic and Computer Engineering, 2008. http://theses.library.uwa.edu.au/adt-WU2008.0187.

Abstract:
Automatic speech recognition (ASR) is one of the most important research areas in the field of speech technology and research. It is also known as the recognition of speech by a machine or by some artificial intelligence. However, in spite of focused research in this field for the past several decades, robust speech recognition with high reliability has not been achieved, as it degrades in the presence of speaker variabilities, channel mismatch conditions, and noisy environments. The superb ability of the human auditory system has motivated researchers to include features of human perception
2

Gu, Y. "Perceptually-based features in automatic speech recognition." Thesis, Swansea University, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.637182.

Abstract:
Interspeaker variability of speech features is one of most important problems in automatic speech recognition (ASR), and makes speaker-independent systems much more difficult to achieve than speaker-dependent ones. The work described in the Thesis examines two ideas to overcome this problem. The first attempts to extract more reliable speech features by perceptually-based modelling; the second investigates the speaker variability in this speech feature and reduces its effects by a speaker normalisation scheme. The application of human speech perception in automatic speech recognition is discus
3

Chu, Kam Keung. "Feature extraction based on perceptual non-uniform spectral compression for noisy speech recognition /." access full-text access abstract and table of contents, 2005. http://libweb.cityu.edu.hk/cgi-bin/ezdb/thesis.pl?mphil-ee-b19887516a.pdf.

Abstract:
Thesis (M.Phil.)--City University of Hong Kong, 2005. "Submitted to Department of Electronic Engineering in partial fulfillment of the requirements for the degree of Master of Philosophy." Includes bibliographical references (leaves 143-147).
4

Koniaris, Christos. "Perceptually motivated speech recognition and mispronunciation detection." Doctoral thesis, KTH, Tal-kommunikation, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-102321.

Abstract:
This doctoral thesis is the result of a research effort performed in two fields of speech technology, i.e., speech recognition and mispronunciation detection. Although the two areas are clearly distinguishable, the proposed approaches share a common hypothesis based on psychoacoustic processing of speech signals. The conjecture implies that the human auditory periphery provides a relatively good separation of different sound classes. Hence, it is possible to use recent findings from psychoacoustic perception together with mathematical and computational tools to model the auditory sensitivities
5

Koniaris, Christos. "A study on selecting and optimizing perceptually relevant features for automatic speech recognition." Licentiate thesis, Stockholm : Kungliga Tekniska högskolan, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-11470.

6

Sklar, Alexander Gabriel. "Channel Modeling Applied to Robust Automatic Speech Recognition." Scholarly Repository, 2007. http://scholarlyrepository.miami.edu/oa_theses/87.

Abstract:
In automatic speech recognition systems (ASRs), training is a critical phase to the system's success. Communication media, either analog (such as analog landline phones) or digital (VoIP) distort the speaker's speech signal often in very complex ways: linear distortion occurs in all channels, either in the magnitude or phase spectrum. Non-linear but time-invariant distortion will always appear in all real systems. In digital systems we also have network effects which will produce packet losses and delays and repeated packets. Finally, one cannot really assert what path a signal will take, and
7

Atassi, Hicham. "Rozpoznání emočního stavu z hrané a spontánní řeči." Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2014. http://www.nusl.cz/ntk/nusl-233665.

Abstract:
The doctoral thesis deals with the recognition of the emotional state of speakers from the speech signal. The work is divided into two main parts: the first part describes the proposed methods for recognizing the emotional state from acted databases. Within this part, recognition results obtained using two different databases with different languages are presented. The main contributions of this part are a detailed analysis of a wide range of different features extracted from the speech signal, the design of new classification architectures such as "emotion pairing", and the design of a new method for mapping discrete emotional states into a two-dimensional space.
8

Temko, Andriy. "Acoustic event detection and classification." Doctoral thesis, Universitat Politècnica de Catalunya, 2007. http://hdl.handle.net/10803/6880.

Abstract:
The human activity that takes place in meeting rooms or classrooms is reflected in a rich variety of acoustic events, whether produced by the human body or by objects handled by people. Therefore, determining the identity of the sounds and their position in time can help detect and describe the human activity taking place in the room. Moreover, the detection of sounds other than speech can help improve the robustness of speech technologies such as automatic speech recognition under adverse working conditions. The objective of this thesis is the detection and classification
9

Lileikytė, Rasa. "Quality estimation of speech recognition features." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2012. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2012~D_20120302_090132-92071.

Abstract:
The accuracy of speech recognition system depends on characteristics of employed speech recognition features and classifier. Evaluating the accuracy of speech recognition system in ordinary way, the error of speech recognition system has to be calculated for each type of explored feature system and each type of classifier. The amount of such calculations can be reduced if the quality of explored feature system is estimated. Accordingly, the researches were made for quality estimation of speech recognition features. The proposed method for quality estimation of speech recognition features is ba
10

Matthews, Iain. "Features for audio-visual speech recognition." Thesis, University of East Anglia, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.266736.


Books on the topic "Perceptual features for speech recognition"

1

Rao, K. Sreenivasa, and Shashidhar G. Koolagudi. Emotion Recognition using Speech Features. New York, NY: Springer New York, 2013. http://dx.doi.org/10.1007/978-1-4614-5143-3.

2

Rao, K. Sreenivasa, and Manjunath K E. Speech Recognition Using Articulatory and Excitation Source Features. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-49220-9.

3

Gabsdil, Malte. Automatic classification of speech recognition hypotheses using acoustic and pragmatic features. Saarbrücken: DFKI & Universität des Saarlandes, 2005.

4

Rao, K. Sreenivasa. Robust Emotion Recognition using Spectral and Prosodic Features. New York, NY: Springer New York, 2013.

5

Kulshreshtha, Manisha. Dialect Accent Features for Establishing Speaker Identity: A Case Study. Boston, MA: Springer US, 2012.

6

Rao, K. Sreenivasa, and Manjunath K. E. Speech Recognition Using Articulatory and Excitation Source Features. Springer, 2017.

7

Leibo, Joel Z., and Tomaso Poggio. Perception. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780199674923.003.0025.

Abstract:
This chapter provides an overview of biological perceptual systems and their underlying computational principles focusing on the sensory sheets of the retina and cochlea and exploring how complex feature detection emerges by combining simple feature detectors in a hierarchical fashion. We also explore how the microcircuits of the neocortex implement such schemes pointing out similarities to progress in the field of machine vision driven deep learning algorithms. We see signs that engineered systems are catching up with the brain. For example, vision-based pedestrian detection systems are now a
8

Lee, Lisa. The role of the structure of the lexicon in perceptual word learning. 1993.

9

Rao, K. Sreenivasa, and Shashidhar G. Koolagudi. Robust Emotion Recognition using Spectral and Prosodic Features. Springer, 2013.


Book chapters on the topic "Perceptual features for speech recognition"

1

Revathi, A., R. Nagakrishnan, D. Vishnu Vashista, Kuppa Sai Sri Teja, and N. Sasikaladevi. "Emotion Recognition from Speech Using Perceptual Features and Convolutional Neural Networks." In Lecture Notes in Electrical Engineering, 355–65. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-3992-3_29.

2

Zhang, Linjuan, Longbiao Wang, Jianwu Dang, Lili Guo, and Haotian Guan. "Convolutional Neural Network with Spectrogram and Perceptual Features for Speech Emotion Recognition." In Neural Information Processing, 62–71. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-04212-7_6.

3

Grau, Antoni, Joan Aranda, and Joan Climent. "Stepwise selection of perceptual texture features." In Advances in Pattern Recognition, 837–44. Berlin, Heidelberg: Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/bfb0033309.

4

Kaur, Gurpreet, Mohit Srivastava, and Amod Kumar. "Speech Recognition Fundamentals and Features." In Cognitive Computing Systems, 327–48. First edition. Apple Academic Press, 2021. http://dx.doi.org/10.1201/9781003082033-18.

5

Frasconi, Paolo, Marco Gori, and Giovanni Soda. "Automatic speech recognition with neural networks: Beyond nonparametric models." In Intelligent Perceptual Systems, 104–21. Berlin, Heidelberg: Springer Berlin Heidelberg, 1993. http://dx.doi.org/10.1007/3-540-57379-8_6.

6

Potapova, Rodmonga, and Liliya Komalova. "Auditory-Perceptual Recognition of the Emotional State of Aggression." In Speech and Computer, 89–95. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-23132-7_11.

7

Sendlmeier, Walter F. "Primary Perceptual Units in Word Recognition." In Recent Advances in Speech Understanding and Dialog Systems, 165–69. Berlin, Heidelberg: Springer Berlin Heidelberg, 1988. http://dx.doi.org/10.1007/978-3-642-83476-9_16.

8

So, Stephen, and Kuldip K. Paliwal. "Quantization of Speech Features: Source Coding." In Advances in Pattern Recognition, 131–61. London: Springer London, 2008. http://dx.doi.org/10.1007/978-1-84800-143-5_7.

9

Karlos, Stamatis, Nikos Fazakis, Katerina Karanikola, Sotiris Kotsiantis, and Kyriakos Sgarbas. "Speech Recognition Combining MFCCs and Image Features." In Speech and Computer, 651–58. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-43958-7_79.

10

Bimbot, Frédéric, Gérard Chollet, and Jean-Pierre Tubach. "Phonetic features extraction using Time-Delay Neural Networks." In Speech Recognition and Understanding, 299–304. Berlin, Heidelberg: Springer Berlin Heidelberg, 1992. http://dx.doi.org/10.1007/978-3-642-76626-8_31.


Conference papers on the topic "Perceptual features for speech recognition"

1

Revathi, A., and C. Jeyalakshmi. "Robust speech recognition in noisy environment using perceptual features and adaptive filters." In 2017 2nd International Conference on Communication and Electronics Systems (ICCES). IEEE, 2017. http://dx.doi.org/10.1109/cesys.2017.8321168.

2

Umakanthan, Padmalochini, and Kaliappan Gopalan. "A Perceptual Masking based Feature Set for Speech Recognition." In Modelling and Simulation. Calgary, AB, Canada: ACTAPRESS, 2013. http://dx.doi.org/10.2316/p.2013.804-024.

3

Revathi, A., and Y. Venkataramani. "Perceptual Features Based Isolated Digit and Continuous Speech Recognition Using Iterative Clustering Approach." In 2009 First International Conference on Networks & Communications. IEEE, 2009. http://dx.doi.org/10.1109/netcom.2009.32.

4

Nguyen Quoc Trung and Phung Trung Nghia. "The perceptual wavelet feature for noise robust Vietnamese speech recognition." In 2008 Second International Conference on Communications and Electronics (ICCE). IEEE, 2008. http://dx.doi.org/10.1109/cce.2008.4578968.

5

Alatwi, Aadel, Stephen So, and Kuldip K. Paliwal. "Perceptually motivated linear prediction cepstral features for network speech recognition." In 2016 10th International Conference on Signal Processing and Communication Systems (ICSPCS). IEEE, 2016. http://dx.doi.org/10.1109/icspcs.2016.7843309.

6

Biswas, Astik, P. K. Sahu, Anirban Bhowmick, and Mahesh Chandra. "Acoustic feature extraction using ERB like wavelet sub-band perceptual Wiener filtering for noisy speech recognition." In 2014 Annual IEEE India Conference (INDICON). IEEE, 2014. http://dx.doi.org/10.1109/indicon.2014.7030474.

7

Frolova, Olga, and Elena Lyakso. "PERCEPTUAL FEATURES OF SPEECH AND VOCALIZATIONS OF 5-8 YEARS OLD CHILDREN WITH AUTISM SPECTRUM DISORDERS AND INTELLECTUAL DISABILITIES: RECOGNITION OF THE CHILD'S GENDER, AGE AND STATE." In XVI International interdisciplinary congress "Neuroscience for Medicine and Psychology". LLC MAKS Press, 2020. http://dx.doi.org/10.29003/m1310.sudak.ns2020-16/485-486.

8

Wu, Chung-Hsien, Yu-Hsien Chiu, and Huigan Lim. "Perceptual speech modeling for noisy speech recognition." In Proceedings of ICASSP '02. IEEE, 2002. http://dx.doi.org/10.1109/icassp.2002.5743735.

9

Chung-Hsien Wu, Yu-Hsien Chiu, and Huigan Lim. "Perceptual speech modeling for noisy speech recognition." In IEEE International Conference on Acoustics Speech and Signal Processing ICASSP-02. IEEE, 2002. http://dx.doi.org/10.1109/icassp.2002.1005757.

10

Sezgin, Cenk, Bilge Gunsel, and Canberk Hacioglu. "Audio emotion recognition by perceptual features." In 2012 20th Signal Processing and Communications Applications Conference (SIU). IEEE, 2012. http://dx.doi.org/10.1109/siu.2012.6204799.


Reports on the topic "Perceptual features for speech recognition"

1

Nahamoo, David. Robust Models and Features for Speech Recognition. Fort Belvoir, VA: Defense Technical Information Center, March 1998. http://dx.doi.org/10.21236/ada344834.
