Academic literature on the topic 'Computer recognition of speech'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Computer recognition of speech.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Computer recognition of speech"

1

Moese, Gerarld. "Computer system for speech recognition." Journal of the Acoustical Society of America 99, no. 2 (1996): 646. http://dx.doi.org/10.1121/1.414609.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Kushida, Akihiro, and Tetsuo Kosaka. "Speech recognition system, speech recognition server, speech recognition client, their control method, and computer readable memory." Journal of the Acoustical Society of America 121, no. 3 (2007): 1290. http://dx.doi.org/10.1121/1.2720066.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Bordeaux, Theodore A. "Real time computer speech recognition system." Journal of the Acoustical Society of America 89, no. 3 (1991): 1489. http://dx.doi.org/10.1121/1.400618.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Vidal, E., F. Casacuberta, L. Rodriguez, J. Civera, and C. D. M. Hinarejos. "Computer-assisted translation using speech recognition." IEEE Transactions on Audio, Speech and Language Processing 14, no. 3 (2006): 941–51. http://dx.doi.org/10.1109/tsa.2005.857788.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Schuller, Björn W. "Speech emotion recognition." Communications of the ACM 61, no. 5 (2018): 90–99. http://dx.doi.org/10.1145/3129340.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Rebman, Carl M., Milam W. Aiken, and Casey G. Cegielski. "Speech recognition in the human–computer interface." Information & Management 40, no. 6 (2003): 509–19. http://dx.doi.org/10.1016/s0378-7206(02)00067-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Lung, Vu Duc, Phan Dinh Duy, Nguyen Vo An Phu, Nguyen Hoang Long, and Truong Nguyen Vu. "Speech Recognition in Human-Computer Interactive Control." Journal of Automation and Control Engineering 1, no. 3 (2013): 222–26. http://dx.doi.org/10.12720/joace.1.3.222-226.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Rengger, Ralph E., and David R. Manning. "Input device for computer speech recognition system." Journal of the Acoustical Society of America 83, no. 1 (1988): 405. http://dx.doi.org/10.1121/1.396213.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Swethashree, A. "Speech Emotion Recognition." International Journal for Research in Applied Science and Engineering Technology 9, no. 8 (2021): 2637–40. http://dx.doi.org/10.22214/ijraset.2021.37375.

Full text
Abstract:
Speech Emotion Recognition (SER) is the task of identifying a person's emotional state from speech. This is possible because tone of voice often reflects a speaker's underlying emotion. Emotion recognition has become a fast-growing field of research in recent years. Unlike humans, machines cannot innately comprehend and express emotions, but human communication with computers can be improved by automatic emotion recognition, reducing the need for human intervention. In this project, basic emotions such as calm, happiness, fear, and disgust are analysed from emotionally expressive speech signals. We use machine learning techniques such as the Multilayer Perceptron Classifier (MLP Classifier), which assigns the given data to the target emotion classes. Mel-frequency cepstral coefficients (MFCC), chroma, and mel-spectrogram features are extracted from the speech signals and used to train the MLP classifier. To accomplish this, we use Python libraries such as Librosa, scikit-learn, PyAudio, NumPy, and SoundFile to analyse speech patterns and identify the emotion. Keywords: speech emotion recognition, Mel cepstral coefficients, artificial neural network, multilayer perceptron, MLP classifier, Python.
APA, Harvard, Vancouver, ISO, and other styles
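The pipeline in the abstract above (spectral features fed to a scikit-learn MLPClassifier) can be sketched as follows. This is a minimal illustration, not the paper's code: the random vectors stand in for the MFCC/chroma/mel features that the authors extract with Librosa, and the synthetic class shift exists only to make the toy data learnable.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-utterance feature vectors: 40 MFCC-like
# values per sample, one of four emotion classes. In the paper these
# features would come from librosa applied to real recordings.
n_samples, n_features, n_classes = 400, 40, 4
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, n_classes, size=n_samples)
X += y[:, None] * 0.5  # shift each class's mean so classes are separable

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)
print("accuracy:", (pred == y_test).mean())
```

Swapping the synthetic features for real MFCC vectors (e.g. the mean of `librosa.feature.mfcc` over time) turns this sketch into the kind of classifier the abstract describes.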
10

Sudoh, Katsuhito, Hajime Tsukada, and Hideki Isozaki. "Named Entity Recognition from Speech Using Discriminative Models and Speech Recognition Confidence." Journal of Information Processing 17 (2009): 72–81. http://dx.doi.org/10.2197/ipsjjip.17.72.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Computer recognition of speech"

1

Wang, Peidong. "Robust Automatic Speech Recognition By Integrating Speech Separation." The Ohio State University, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=osu1619099401042668.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Tyler, J. E. M. "Speech recognition by computer : algorithms and architectures." Thesis, University of Greenwich, 1988. http://gala.gre.ac.uk/8707/.

Full text
Abstract:
This work is concerned with the investigation of algorithms and architectures for computer recognition of human speech. Three speech recognition algorithms have been implemented, using (a) Walsh Analysis, (b) Fourier Analysis and (c) Linear Predictive Coding. The Fourier Analysis algorithm made use of the Prime-number Fourier Transform technique. The Linear Predictive Coding algorithm made use of LeRoux and Gueguen's method for calculating the coefficients. The system was organised so that the speech samples could be input to a PC/XT microcomputer in a typical office environment. The PC/XT was linked via Ethernet to a Sun 2/180s computer system which allowed the data to be stored on a Winchester disk so that the data used for testing each algorithm was identical. The recognition algorithms were implemented entirely in Pascal, to allow evaluation to take place on several different machines. The effectiveness of the algorithms was tested with a group of five naive speakers, results being in the form of recognition scores. The results showed the superiority of the Linear Predictive Coding algorithm, which achieved a mean recognition score of 93.3%. The software was implemented on three different computer systems. These were an 8-bit microprocessor, a sixteen-bit microcomputer based on the IBM PC/XT, and a Motorola 68020 based Sun Workstation. The effectiveness of the implementations was measured in terms of speed of execution of the recognition software. By limiting the vocabulary to ten words, it has been shown that it would be possible to achieve recognition of isolated utterances in real time using a single 68020 microprocessor. The definition of real time in this context is understood to mean that the recognition task will on average, be completed within the duration of the utterance, for all the utterances in the recogniser's vocabulary. 
A speech recogniser architecture is proposed which would achieve real time speech recognition without any limitation being placed upon (a) the order of the transform, and (b) the size of the recogniser's vocabulary. This is achieved by utilising a pipeline of four processors, with the pattern matching process performed in parallel on groups of words in the vocabulary.
APA, Harvard, Vancouver, ISO, and other styles
3

Sun, Felix (Felix W.). "Speech Representation Models for Speech Synthesis and Multimodal Speech Recognition." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/106378.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (pages 59-63).
The field of speech recognition has seen steady advances over the last two decades, leading to the accurate, real-time recognition systems available on mobile phones today. In this thesis, I apply speech modeling techniques developed for recognition to two other speech problems: speech synthesis and multimodal speech recognition with images. In both problems, there is a need to learn a relationship between speech sounds and another source of information. For speech synthesis, I show that using a neural network acoustic model results in a synthesizer that is more tolerant of noisy training data than previous work. For multimodal recognition, I show how information from images can be effectively integrated into the recognition search framework, resulting in improved accuracy when image data is available.
by Felix Sun. M. Eng.
APA, Harvard, Vancouver, ISO, and other styles
4

Eriksson, Mattias. "Speech recognition availability." Thesis, Linköping University, Department of Computer and Information Science, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2651.

Full text
Abstract:
This project investigates the importance of availability in the scope of dictation programs. Using speech recognition technology for dictation has not reached the public, and that may very well be a result of poor availability in today's technical solutions.
I have constructed a persona character, Johanna, who personifies the target user. I have also developed a solution that streams audio to a speech recognition server and sends back the interpreted text. Johanna affirmed that the solution was successful in theory.
I then recruited test users who tried out the solution in practice. Half of them do indeed claim that their usage has increased, and will continue to increase, thanks to the new level of availability.
APA, Harvard, Vancouver, ISO, and other styles
5

Melnikoff, Stephen Jonathan. "Speech recognition in programmable logic." Thesis, University of Birmingham, 2003. http://etheses.bham.ac.uk//id/eprint/16/.

Full text
Abstract:
Speech recognition is a computationally demanding task, especially the decoding part, which converts pre-processed speech data into words or sub-word units, and which incorporates Viterbi decoding and Gaussian distribution calculations. In this thesis, this part of the recognition process is implemented in programmable logic, specifically, on a field-programmable gate array (FPGA). Relevant background material about speech recognition is presented, along with a critical review of previous hardware implementations. Designs for a decoder suitable for implementation in hardware are then described. These include details of how multiple speech files can be processed in parallel, and an original implementation of an algorithm for summing Gaussian mixture components in the log domain. These designs are then implemented on an FPGA. An assessment is made as to how appropriate it is to use hardware for speech recognition. It is concluded that while certain parts of the recognition algorithm are not well suited to this medium, much of it is, and so an efficient implementation is possible. Also presented is an original analysis of the requirements of speech recognition for hardware and software, which relates the parameters that dictate the complexity of the system to processing speed and bandwidth. The FPGA implementations are compared to equivalent software, written for that purpose. For a contemporary FPGA and processor, the FPGA outperforms the software by an order of magnitude.
APA, Harvard, Vancouver, ISO, and other styles
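The abstract above mentions summing Gaussian mixture components in the log domain, a standard trick for avoiding floating-point underflow in likelihood computations. Melnikoff's FPGA design uses a hardware-oriented implementation of this idea; the sketch below only shows the underlying log-add identity in plain Python, with made-up component log-likelihoods.

```python
import math

def log_add(log_a, log_b):
    """Return log(exp(log_a) + exp(log_b)) without leaving the log domain."""
    if log_a < log_b:
        log_a, log_b = log_b, log_a
    # Factor out the larger term so the exp() argument is always <= 0,
    # which avoids overflow; log1p keeps precision for small values.
    return log_a + math.log1p(math.exp(log_b - log_a))

# Hypothetical log(weight * likelihood) values for three mixture components.
log_terms = [-3.2, -5.7, -4.1]
total = log_terms[0]
for t in log_terms[1:]:
    total = log_add(total, t)

# Check against the direct (underflow-prone) computation.
direct = math.log(sum(math.exp(t) for t in log_terms))
print(abs(total - direct) < 1e-12)  # → True
```

The hardware version replaces the `log1p(exp(...))` evaluation with a small lookup table, since the correction term depends only on the difference `log_b - log_a`.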
6

Nilsson, Tobias. "Speech Recognition Software and Vidispine." Thesis, Umeå universitet, Institutionen för datavetenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-71428.

Full text
Abstract:
To evaluate libraries for continuous speech recognition, a test based on TED-talk videos was created. The speech recognition libraries PocketSphinx, Dragon NaturallySpeaking, and the Microsoft Speech API were part of the evaluation. From the words that the libraries recognized, the Word Error Rate (WER) was calculated; the results show that Microsoft SAPI performed worst with a WER of 60.8%, PocketSphinx second with 59.9%, and Dragon NaturallySpeaking best with 42.6%. These results were all achieved with a Real Time Factor (RTF) of less than 1.0. PocketSphinx was chosen as the best candidate for the intended system on the basis that it is open-source, free, and a better match to the system. By modifying the language model and dictionary to more closely resemble typical TED-talk content, it was also possible to improve the WER for PocketSphinx to 39.5%, though at the cost of an RTF that passed the 1.0 limit, making it less useful for live video.
APA, Harvard, Vancouver, ISO, and other styles
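The WER metric used throughout the abstract above is the word-level Levenshtein distance (substitutions + insertions + deletions) divided by the number of reference words. A minimal sketch, with a made-up reference/hypothesis pair:

```python
def word_error_rate(reference, hypothesis):
    """Levenshtein distance over words, divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edits to turn the first i ref words into the first j hyp words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

# Two deletions ("on", "the") against a 6-word reference: WER = 2/6.
wer = word_error_rate("the cat sat on the mat", "the cat sat mat")
print(round(wer, 3))  # → 0.333
```

A WER of 60.8% thus means that, after optimal alignment, roughly six in ten reference words required an edit; WER can exceed 100% when the hypothesis contains many insertions.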
7

Price, Michael R. "Energy-scalable speech recognition circuits." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/106090.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016. Cataloged from the PDF version of the thesis. Includes bibliographical references (pages 135-141).
As people become more comfortable with speaking to machines, the applications of speech interfaces will diversify and include a wider range of devices, such as wearables, appliances, and robots. Automatic speech recognition (ASR) is a key component of these interfaces that is computationally intensive. This thesis shows how we designed special-purpose integrated circuits to bring local ASR capabilities to electronic devices with a small size and power footprint. This thesis adopts a holistic, system-driven approach to ASR hardware design. We identify external memory bandwidth as the main driver in system power consumption and select algorithms and architectures to minimize it. We evaluate three acoustic modeling approaches, Gaussian mixture models (GMMs), subspace GMMs (SGMMs), and deep neural networks (DNNs), and identify tradeoffs between memory bandwidth and recognition accuracy. DNNs offer the best tradeoffs for our application; we describe a SIMD DNN architecture using parameter quantization and sparse weight matrices to save bandwidth. We also present a hidden Markov model (HMM) search architecture using a weighted finite-state transducer (WFST) representation. Enhancements to the search architecture, including WFST compression and caching, predictive beam width control, and a word lattice, reduce memory bandwidth to 10 MB/s or less, despite having just 414 kB of on-chip SRAM. The resulting system runs in real-time with accuracy comparable to a software recognizer using the same models. We provide infrastructure for deploying recognizers trained with open-source tools (Kaldi) on the hardware platform.
We investigate voice activity detection (VAD) as a wake-up mechanism and conclude that an accurate and robust algorithm is necessary to minimize system power, even if it results in larger area and power for the VAD itself. We design fixed-point digital implementations of three VAD algorithms and explore their performance on two synthetic tasks with SNRs from -5 to 30 dB. The best algorithm uses modulation frequency features with an NN classifier, requiring just 8.9 kB of parameters. Throughout this work we emphasize energy scalability, or the ability to save energy when high accuracy or complex models are not required. Our architecture exploits scalability from many sources: model hyperparameters, runtime parameters such as beam width, and voltage/frequency scaling. We demonstrate these concepts with results from five ASR tasks, with vocabularies ranging from 11 words to 145,000 words.
by Michael Price. Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
8

Yoder, Benjamin W. (Benjamin Wesley). "Spontaneous speech recognition using HMMs." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/36108.

Full text
Abstract:
Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2003. Includes bibliographical references (leaf 63).
This thesis describes a speech recognition system that was built to support spontaneous speech understanding. The system is composed of (1) a front-end acoustic analyzer which computes Mel-frequency cepstral coefficients, (2) acoustic models of context-dependent phonemes (triphones), (3) a back-off bigram statistical language model, and (4) a beam search decoder based on the Viterbi algorithm. The context-dependent acoustic models resulted in 67.9% phoneme recognition accuracy on the standard TIMIT speech database. Spontaneous speech was collected using a "Wizard of Oz" simulation of a simple spatial manipulation game. Naive subjects were instructed to manipulate blocks on a computer screen in order to solve a series of geometric puzzles using only spoken commands. A hidden human operator performed actions in response to each spoken command. The speech from thirteen subjects formed the corpus for the speech recognition results reported here. Using a task-specific bigram statistical language model and context-dependent acoustic models, the system achieved a word recognition accuracy of 67.6%. The recognizer operated using a vocabulary of 523 words. The recognition had a word perplexity of 36.
by Benjamin W. Yoder. M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
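Component (4) in the abstract above, a Viterbi decoder, finds the most likely state sequence given per-frame acoustic log-likelihoods and transition log-probabilities. The toy two-state example below is only a sketch of the core dynamic program; the triphone acoustic models, bigram language model, and beam pruning of the actual system are omitted, and the observation/transition numbers are invented.

```python
import math

def viterbi(obs_loglik, log_trans, log_init):
    """Most likely state path.
    obs_loglik[t][s]: log P(frame t | state s)
    log_trans[p][s]:  log P(s | p);  log_init[s]: log P(start in s)."""
    n_states = len(log_init)
    score = [log_init[s] + obs_loglik[0][s] for s in range(n_states)]
    back = []
    for t in range(1, len(obs_loglik)):
        new_score, ptr = [], []
        for s in range(n_states):
            best_prev = max(range(n_states),
                            key=lambda p: score[p] + log_trans[p][s])
            ptr.append(best_prev)
            new_score.append(score[best_prev] + log_trans[best_prev][s]
                             + obs_loglik[t][s])
        score, back = new_score, back + [ptr]
    # Trace back from the best final state.
    path = [max(range(n_states), key=lambda s: score[s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

# Two states with uniform transitions; frames strongly prefer 0, 1, 0.
obs = [[math.log(0.9), math.log(0.1)],
       [math.log(0.1), math.log(0.9)],
       [math.log(0.9), math.log(0.1)]]
trans = [[math.log(0.5), math.log(0.5)]] * 2
init = [math.log(0.5), math.log(0.5)]
print(viterbi(obs, trans, init))  # → [0, 1, 0]
```

A beam search decoder like the one described adds pruning on top of this recurrence: at each frame, states whose score falls more than the beam width below the best score are dropped.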
9

Ganapathiraju, Aravind. "Support Vector Machines for Speech Recognition." MSSTATE, 2002. http://sun.library.msstate.edu/ETD-db/theses/available/etd-02202002-111027/.

Full text
Abstract:
Hidden Markov models (HMMs) with Gaussian mixture observation densities are the dominant approach in speech recognition. These systems typically use a representational model for acoustic modeling, which can often be prone to overfitting and does not translate to improved discrimination. We propose a new paradigm centered on principles of structural risk minimization, using a discriminative framework for speech recognition based on support vector machines (SVMs). SVMs have the ability to simultaneously optimize the representational and discriminative ability of the acoustic classifiers. We have developed the first SVM-based large vocabulary speech recognition system that improves performance over traditional HMM-based systems. This hybrid system achieves a state-of-the-art word error rate of 10.6% on a continuous alphadigit task, a 10% improvement relative to an HMM system. On SWITCHBOARD, a large vocabulary task, the system improves performance over a traditional HMM system from a 41.6% word error rate to 40.6%. This dissertation discusses several practical issues that arise when SVMs are incorporated into the hybrid system.
APA, Harvard, Vancouver, ISO, and other styles
10

Lebart, Katia. "Speech dereverberation applied to automatic speech recognition and hearing aids." Thesis, University of Sussex, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.285064.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Computer recognition of speech"

1

Schroeder, Manfred R. Computer Speech: Recognition, Compression, Synthesis. Springer Berlin Heidelberg, 2004.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Schroeder, Manfred R. Computer Speech: Recognition, Compression, Synthesis. Springer Berlin Heidelberg, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Sun, Jianjun. Computer based speech analysis and speech recognition. The Polytechnic, 1989.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Rodman, Robert. Computer speech technology. Artech House, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Fallside, Frank, and William A. Woods, eds. Computer speech processing. Prentice-Hall International, 1985.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Schindler, Esther. The computer speech book. AP Professional, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Schindler, Esther. The computer speech book. AP Professional, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

İnce, A. Nejat. Digital Speech Processing: Speech Coding, Synthesis and Recognition. Springer US, 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Bourlard, Hervé. Connectionist speech recognition: A hybrid approach. Kluwer Academic Publishers, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Bourlard, Hervé A. Connectionist Speech Recognition: A Hybrid Approach. Springer US, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Computer recognition of speech"

1

Weik, Martin H. "speech recognition." In Computer Science and Communications Dictionary. Springer US, 2000. http://dx.doi.org/10.1007/1-4020-0613-6_17919.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Farouk, Mohamed Hesham. "Speech Recognition." In SpringerBriefs in Electrical and Computer Engineering. Springer International Publishing, 2013. http://dx.doi.org/10.1007/978-3-319-02732-6_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Farouk, Mohamed Hesham. "Speech Recognition." In SpringerBriefs in Electrical and Computer Engineering. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-69002-5_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Schroeder, Manfred R. "Speech Recognition and Speaker Identification." In Computer Speech. Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-662-06384-2_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Schroeder, Manfred R. "Speech Recognition and Speaker Identification." In Computer Speech. Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/978-3-662-03861-1_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Stölzle, Anton. "Speech Recognition." In The Kluwer International Series in Engineering and Computer Science. Springer US, 1992. http://dx.doi.org/10.1007/978-1-4615-3570-6_21.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Lu, Xugang, Sheng Li, and Masakiyo Fujimoto. "Automatic Speech Recognition." In SpringerBriefs in Computer Science. Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-15-0595-9_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Weik, Martin H. "automatic speech recognition." In Computer Science and Communications Dictionary. Springer US, 2000. http://dx.doi.org/10.1007/1-4020-0613-6_1147.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Siegert, Ingo, Yamini Sinha, Oliver Jokisch, and Andreas Wendemuth. "Recognition Performance of Selected Speech Recognition APIs – A Longitudinal Study." In Speech and Computer. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-60276-5_50.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Kandagal, Amaresh P., V. Udayashankara, and M. A. Anusuya. "Silent Speech Recognition." In Communications in Computer and Information Science. Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-10-9059-2_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Computer recognition of speech"

1

Zhang, Xi-Wen, Yong-Gang Fu, and Ke-Zhang Chen. "Improving Chinese Handwriting Recognition by Fusing Speech Recognition." In 2009 WRI World Congress on Computer Science and Information Engineering. IEEE, 2009. http://dx.doi.org/10.1109/csie.2009.126.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Arshad, N. W., S. N. Abdul Aziz, R. Hamid, R. Abdul Karim, F. Naim, and N. F. Zakaria. "Speech processing for makhraj recognition." In 2011 International Conference on Electrical, Control and Computer Engineering (INECCE). IEEE, 2011. http://dx.doi.org/10.1109/inecce.2011.5953900.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Anusuya, M. A., and S. K. Katti. "Kannada Speech Recognition using Discrete Wavelet Transform – PCA." In International Conference on Computer Applications — Computer Applications - I. Research Publishing Services, 2010. http://dx.doi.org/10.3850/978-981-08-7618-0_1305.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Satriawan, Cil Hardianto, and Dessi Puji Lestari. "Feature-based noise robust speech recognition on an Indonesian language automatic speech recognition system." In 2014 International Conference on Electrical Engineering and Computer Science (ICEECS). IEEE, 2014. http://dx.doi.org/10.1109/iceecs.2014.7045217.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Addarrazi, Ilham, Hassan Satori, and Khalid Satori. "Amazigh audiovisual speech recognition system design." In 2017 Intelligent Systems and Computer Vision (ISCV). IEEE, 2017. http://dx.doi.org/10.1109/isacv.2017.8054956.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Xu, Bo, Cheng Lu, Yandong Guo, and Jacob Wang. "Discriminative Multi-Modality Speech Recognition." In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2020. http://dx.doi.org/10.1109/cvpr42600.2020.01444.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Aye, Yin Yin. "Speech Recognition Using Zero-Crossing Features." In 2009 International Conference on Electronic Computer Technology, ICECT. IEEE, 2009. http://dx.doi.org/10.1109/icect.2009.142.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Ma, Yongbao, Yi Zhou, Jingang Liu, Jie Xia, and Hongqing Liu. "An improved switch speech enhancement algorithm for automatic speech recognition." In 2015 IEEE International Conference on Computer and Communications (ICCC). IEEE, 2015. http://dx.doi.org/10.1109/compcomm.2015.7387610.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Lalitha, S., Abhishek Madhavan, Bharath Bhushan, and Srinivas Saketh. "Speech emotion recognition." In 2014 International Conference on Advances in Electronics, Computers and Communications (ICAECC). IEEE, 2014. http://dx.doi.org/10.1109/icaecc.2014.7002390.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Lalitha, S., Sahruday Patnaik, T. H. Arvind, Vivek Madhusudhan, and Shikha Tripathi. "Emotion Recognition through Speech Signal for Human-Computer Interaction." In 2014 Fifth International Symposium on Electronic System Design (ISED). IEEE, 2014. http://dx.doi.org/10.1109/ised.2014.54.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Computer recognition of speech"

1

Hoeferlin, David M., Brian M. Ore, Stephen A. Thorn, and David Snyder. Speech Processing and Recognition (SPaRe). Defense Technical Information Center, 2011. http://dx.doi.org/10.21236/ada540142.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Kubala, F., S. Austin, C. Barry, J. Makhoul, P. Placeway, and R. Schwartz. Byblos Speech Recognition Benchmark Results. Defense Technical Information Center, 1991. http://dx.doi.org/10.21236/ada459943.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Schwartz, Richard, and Owen Kimball. Toward Real-Time Continuous Speech Recognition. Defense Technical Information Center, 1989. http://dx.doi.org/10.21236/ada208196.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Liu, Fu-Hua, Pedro J. Moreno, Richard M. Stern, and Alejandro Acero. Signal Processing for Robust Speech Recognition. Defense Technical Information Center, 1994. http://dx.doi.org/10.21236/ada457798.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Standard Object Systems Inc., Shalimar, FL. Auditory Modeling for Noisy Speech Recognition. Defense Technical Information Center, 2000. http://dx.doi.org/10.21236/ada373379.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Schwartz, R., Y.-L. Chow, A. Derr, M.-W. Feng, and O. Kimball. Statistical Modeling for Continuous Speech Recognition. Defense Technical Information Center, 1988. http://dx.doi.org/10.21236/ada192054.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Pfister, M. Software Package for Speaker Independent or Dependent Speech Recognition Using Standard Objects for Phonetic Speech Recognition. Defense Technical Information Center, 1998. http://dx.doi.org/10.21236/ada341198.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Ore, Brian M. Speech Recognition, Articulatory Feature Detection, and Speech Synthesis in Multiple Languages. Defense Technical Information Center, 2009. http://dx.doi.org/10.21236/ada519140.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Draelos, Timothy J., Stephen Heck, Jennifer Galasso, and Ronald Brogan. Seismic Phase Identification with Speech Recognition Algorithms. Office of Scientific and Technical Information (OSTI), 2018. http://dx.doi.org/10.2172/1474260.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Murveit, Hy. High Performance Speech Recognition Using Consistency Modeling. Defense Technical Information Center, 1993. http://dx.doi.org/10.21236/ada298857.

Full text
APA, Harvard, Vancouver, ISO, and other styles