Selection of scientific literature on the topic "Lipreading"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Choose a type of source:

Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Lipreading".

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read an online abstract of the work, provided the relevant parameters are present in the metadata.

Journal articles on the topic "Lipreading"

1

Lynch, Michael P., Rebecca E. Eilers, D. Kimbrough Oller, Richard C. Urbano, and Patricia J. Pero. "Multisensory Narrative Tracking by a Profoundly Deaf Subject Using an Electrocutaneous Vocoder and a Vibrotactile Aid." Journal of Speech, Language, and Hearing Research 32, no. 2 (1989): 331–38. http://dx.doi.org/10.1044/jshr.3202.331.

Annotation:
A congenitally, profoundly deaf adult who had received 41 hours of tactual word recognition training in a previous study was assessed in tracking of connected discourse. This assessment was conducted in three phases. In the first phase, the subject used the Tacticon 1600 electrocutaneous vocoder to track a narrative in three conditions: (a) lipreading and aided hearing (L+H), (b) lipreading and tactual vocoder (L+TV), and (c) lipreading, tactual vocoder, and aided hearing (L+TV+H). Subject performance was significantly better in the L+TV+H condition than in the L+H condition, suggesting that t
2

Tye-Murray, Nancy, Sandra Hale, Brent Spehar, Joel Myerson, and Mitchell S. Sommers. "Lipreading in School-Age Children: The Roles of Age, Hearing Status, and Cognitive Ability." Journal of Speech, Language, and Hearing Research 57, no. 2 (2014): 556–65. http://dx.doi.org/10.1044/2013_jslhr-h-12-0273.

Annotation:
Purpose The study addressed three research questions: Does lipreading improve between the ages of 7 and 14 years? Does hearing loss affect the development of lipreading? How do individual differences in lipreading relate to other abilities? Method Forty children with normal hearing (NH) and 24 with hearing loss (HL) were tested using 4 lipreading instruments plus measures of perceptual, cognitive, and linguistic abilities. Results For both groups, lipreading performance improved with age on all 4 measures of lipreading, with the HL group performing better than the NH group. Scores from the 4 m
3

Paulesu, E., D. Perani, V. Blasi, et al. "A Functional-Anatomical Model for Lipreading." Journal of Neurophysiology 90, no. 3 (2003): 2005–13. http://dx.doi.org/10.1152/jn.00926.2002.

Annotation:
Regional cerebral blood flow (rCBF) PET scans were used to study the physiological bases of lipreading, a natural skill of extracting language from mouth movements, which contributes to speech perception in everyday life. Viewing connected mouth movements that could not be lexically identified and that evoke perception of isolated speech sounds (nonlexical lipreading) was associated with bilateral activation of the auditory association cortex around Wernicke's area, of left dorsal premotor cortex, and left opercular-premotor division of the left inferior frontal gyrus (Broca's area). The suppl
4

Hawes, Nancy A. "Lipreading for Children: A Synthetic Approach to Lipreading." Ear and Hearing 9, no. 6 (1988): 356. http://dx.doi.org/10.1097/00003446-198812000-00018.

5

Sankalp, Kala, and Sridhar Ranganathan. "Deep Learning Based Lipreading for Video Captioning." Engineering and Technology Journal 9, no. 05 (2024): 3935–46. https://doi.org/10.5281/zenodo.11120548.

Annotation:
Visual speech recognition, often referred to as lipreading, has garnered significant attention in recent years due to its potential applications in various fields such as human-computer interaction, accessibility technology, and biometric security systems. This paper explores the challenges and advancements in the field of lipreading, which involves deciphering speech from visual cues, primarily movements of the lips, tongue, and teeth. Despite being an essential aspect of human communication, lipreading presents inherent difficulties, especially in noisy environments or when contextual inform
6

Heikkilä, Jenni, Eila Lonka, Sanna Ahola, Auli Meronen, and Kaisa Tiippana. "Lipreading Ability and Its Cognitive Correlates in Typically Developing Children and Children With Specific Language Impairment." Journal of Speech, Language, and Hearing Research 60, no. 3 (2017): 485–93. http://dx.doi.org/10.1044/2016_jslhr-s-15-0071.

Annotation:
Purpose Lipreading and its cognitive correlates were studied in school-age children with typical language development and delayed language development due to specific language impairment (SLI). Method Forty-two children with typical language development and 20 children with SLI were tested by using a word-level lipreading test and an extensive battery of standardized cognitive and linguistic tests. Results Children with SLI were poorer lipreaders than their typically developing peers. Good phonological skills were associated with skilled lipreading in both typically developing children and in child
7

Ortiz, Isabel de los Reyes Rodríguez. "Lipreading in the Prelingually Deaf: What Makes a Skilled Speechreader?" Spanish Journal of Psychology 11, no. 2 (2008): 488–502. http://dx.doi.org/10.1017/s1138741600004492.

Annotation:
Lipreading proficiency was investigated in a group of hearing-impaired people, all of them knowing Spanish Sign Language (SSL). The aim of this study was to establish the relationships between lipreading and some other variables (gender, intelligence, audiological variables, participants' education, parents' education, communication practices, intelligibility, use of SSL). The 32 participants were between 14 and 47 years of age. They all had sensorineural hearing losses (from severe to profound). The lipreading procedures comprised identification of words in isolation. The words selected for p
8

Plant, Geoff, Johan Gnosspelius, and Harry Levitt. "The Use of Tactile Supplements in Lipreading Swedish and English." Journal of Speech, Language, and Hearing Research 43, no. 1 (2000): 172–83. http://dx.doi.org/10.1044/jslhr.4301.172.

Annotation:
The speech perception skills of GS, a Swedish adult deaf man who has used a "natural" tactile supplement to lipreading for over 45 years, were tested in two languages: Swedish and English. Two different tactile supplements to lipreading were investigated. In the first, "Tactiling," GS detected the vibrations accompanying speech by placing his thumb directly on the speaker’s throat. In the second, a simple tactile aid consisting of a throat microphone, amplifier, and a hand-held bone vibrator was used. Both supplements led to improved lipreading of materials ranging in complexity from consonant
9

Suess, Nina, Anne Hauswald, Verena Zehentner, et al. "Influence of linguistic properties and hearing impairment on visual speech perception skills in the German language." PLOS ONE 17, no. 9 (2022): e0275585. http://dx.doi.org/10.1371/journal.pone.0275585.

Annotation:
Visual input is crucial for understanding speech under noisy conditions, but there are hardly any tools to assess the individual ability to lipread. With this study, we wanted to (1) investigate how linguistic characteristics of language on the one hand and hearing impairment on the other hand have an impact on lipreading abilities and (2) provide a tool to assess lipreading abilities for German speakers. 170 participants (22 prelingually deaf) completed the online assessment, which consisted of a subjective hearing impairment scale and silent videos in which different item categories (numbers
10

Zhang, Tao, Lun He, Xudong Li, and Guoqing Feng. "Efficient End-to-End Sentence-Level Lipreading with Temporal Convolutional Networks." Applied Sciences 11, no. 15 (2021): 6975. http://dx.doi.org/10.3390/app11156975.

Annotation:
Lipreading aims to recognize sentences being spoken by a talking face. In recent years, the lipreading method has achieved a high level of accuracy on large datasets and made breakthrough progress. However, lipreading is still far from being solved, and existing methods tend to have high error rates on the wild data and have the defects of disappearing training gradient and slow convergence. To overcome these problems, we proposed an efficient end-to-end sentence-level lipreading model, using an encoder based on a 3D convolutional network, ResNet50, Temporal Convolutional Network (TCN), and a
More sources

Dissertations on the topic "Lipreading"

1

Lucey, Patrick Joseph. "Lipreading across multiple views." Thesis, Queensland University of Technology, 2007. https://eprints.qut.edu.au/16676/1/Patrick_Joseph_Lucey_Thesis.pdf.

Annotation:
Visual information from a speaker's mouth region is known to improve automatic speech recognition (ASR) robustness, especially in the presence of acoustic noise. Currently, the vast majority of audio-visual ASR (AVASR) studies assume frontal images of the speaker's face, which is a rather restrictive human-computer interaction (HCI) scenario. The lack of research into AVASR across multiple views has been dictated by the lack of large corpora that contain varying pose/viewpoint speech data. Recently, research has concentrated on recognising human behaviours within "meeting" or "lect
2

Lucey, Patrick Joseph. "Lipreading across multiple views." Queensland University of Technology, 2007. http://eprints.qut.edu.au/16676/.

Annotation:
Visual information from a speaker's mouth region is known to improve automatic speech recognition (ASR) robustness, especially in the presence of acoustic noise. Currently, the vast majority of audio-visual ASR (AVASR) studies assume frontal images of the speaker's face, which is a rather restrictive human-computer interaction (HCI) scenario. The lack of research into AVASR across multiple views has been dictated by the lack of large corpora that contain varying pose/viewpoint speech data. Recently, research has concentrated on recognising human behaviours within "meeting" or "lect
3

MacLeod, A. "Effective methods for measuring lipreading skills." Thesis, University of Nottingham, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.233400.

4

MacDermid, Catriona. "Lipreading and language processing by deaf children." Thesis, University of Surrey, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.291020.

5

Yuan, Hanfeng. "Tactual display of consonant voicing to supplement lipreading." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/87906.

Annotation:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2003. Includes bibliographical references (p. 241-251). This research is concerned with the development of tactual displays to supplement the information available through lipreading. Because voicing carries a high informational load in speech and is not well transmitted through lipreading, the efforts are focused on providing tactual displays of voicing to supplement the information available on the lips of the talker. This research includes exploration of 1) signal-processing s
6

Chiou, Greg I. "Active contour models for distinct feature tracking and lipreading /." Thesis, Connect to this title online; UW restricted, 1995. http://hdl.handle.net/1773/6023.

7

Kaucic, Robert August. "Lip tracking for audio-visual speech recognition." Thesis, University of Oxford, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.360392.

8

Matthews, Iain. "Features for audio-visual speech recognition." Thesis, University of East Anglia, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.266736.

9

Thangthai, Kwanchiva. "Computer lipreading via hybrid deep neural network hidden Markov models." Thesis, University of East Anglia, 2018. https://ueaeprints.uea.ac.uk/69215/.

Annotation:
Constructing a viable lipreading system is a challenge because it is claimed that only 30% of information of speech production is visible on the lips. Nevertheless, in small vocabulary tasks, there have been several reports of high accuracies. However, investigation of larger vocabulary tasks is rare. This work examines constructing a large vocabulary lipreading system using an approach based on Deep Neural Network Hidden Markov Models (DNN-HMMs). We present the historical development of computer lipreading technology and the state-of-the-art results in small and large vocabulary tasks. In prel
10

Hiramatsu, Sandra. "Does lipreading help word reading? : an investigation of the relationship between visible speech and early reading achievement /." Thesis, Connect to this title online; UW restricted, 2005. http://hdl.handle.net/1773/7913.

More sources

Books on the topic "Lipreading"

1

Woods, John Chaloner. Lipreading: A guide for beginners. Royal National Institute for the Deaf, 1991.

2

Erickson, Joan Good. Speech reading: An aid to communication. 2nd ed. Interstate Printers & Publishers, 1989.

3

Dupret, Jean-Pierre. Stratégies visuelles dans la lecture labiale. H. Buske, 1986.

4

Martin, Christine. Speech perception: Writing functional material for lipreading classes. [s.n.], 1995.

5

Beeching, David. Take another pick: A selection of lipreading exercises. [ATLA], 1996.

6

Woods, John Chaloner, ed. Watch this face: A practical guide to lipreading. Royal National Institute for Deaf People, 2003.

7

Nitchie, Edward Bartlett. Lip reading made easy. Breakout Productions, 1998.

8

Nitchie, Edward Bartlett. Lip reading made easy. Loompanics, 1985.

9

Lyxell, Björn. Beyond lips: Components of speechreading skill. Universitetet, 1989.

10

Marcus, Irving S. Your eyes hear for you: A self-help course in speechreading. Self Help for Hard of Hearing People, 1985.

More sources

Book chapters on the topic "Lipreading"

1

Hlaváč, Miroslav, Ivan Gruber, Miloš Železný, and Alexey Karpov. "Lipreading with LipsID." In Speech and Computer. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-60276-5_18.

2

Bregler, Christoph, and Stephen M. Omohundro. "Learning Visual Models for Lipreading." In Computational Imaging and Vision. Springer Netherlands, 1997. http://dx.doi.org/10.1007/978-94-015-8935-2_13.

3

Paleček, Karel. "Spatiotemporal Convolutional Features for Lipreading." In Text, Speech, and Dialogue. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-64206-2_49.

4

Séguier, Renaud, and Nicolas Cladel. "Genetic Snakes: Application on Lipreading." In Artificial Neural Nets and Genetic Algorithms. Springer Vienna, 2003. http://dx.doi.org/10.1007/978-3-7091-0646-4_41.

5

Visser, Michiel, Mannes Poel, and Anton Nijholt. "Classifying Visemes for Automatic Lipreading." In Text, Speech and Dialogue. Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/3-540-48239-3_65.

6

Yu, Keren, Xiaoyi Jiang, and Horst Bunke. "Lipreading using Fourier transform over time." In Computer Analysis of Images and Patterns. Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/3-540-63460-6_152.

7

Singh, Preety, Vijay Laxmi, Deepika Gupta, and M. S. Gaur. "Lipreading Using n–Gram Feature Vector." In Advances in Intelligent and Soft Computing. Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-16626-6_9.

8

Owczarek, Agnieszka, and Krzysztof Ślot. "Lipreading Procedure Based on Dynamic Programming." In Artificial Intelligence and Soft Computing. Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-29347-4_65.

9

Goldschen, Alan J., Oscar N. Garcia, and Eric D. Petajan. "Continuous Automatic Speech Recognition by Lipreading." In Computational Imaging and Vision. Springer Netherlands, 1997. http://dx.doi.org/10.1007/978-94-015-8935-2_14.

10

Tsunekawa, Takuya, Kazuhiro Hotta, and Haruhisa Takahashi. "Lipreading Using Recurrent Neural Prediction Model." In Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-30126-4_50.


Conference papers on the topic "Lipreading"

1

Somkuwar, Sameer, Amey Rathi, Somesh Todankar, and R. G. Yelalwar. "Deep Learning Based Lipreading Assistant." In 2024 OPJU International Technology Conference (OTCON) on Smart Computing for Innovation and Advancement in Industry 4.0. IEEE, 2024. http://dx.doi.org/10.1109/otcon60325.2024.10687357.

2

Wang, Haoxu, Cancan Li, Fei Su, Juan Liu, Hongbin Suo, and Ming Li. "The Whu Wake Word Lipreading System for the 2024 Chat-Scenario Chinese Lipreading Challenge." In 2024 IEEE International Conference on Multimedia and Expo Workshops (ICMEW). IEEE, 2024. http://dx.doi.org/10.1109/icmew63481.2024.10645425.

3

Aminah, Luthfia Cucu, Eko Mulyanto Yuniarno, and Reza Fuad Rachmadi. "Vowel Recognition in Lipreading Using Geometric Features." In 2024 International Conference on Information Technology Systems and Innovation (ICITSI). IEEE, 2024. https://doi.org/10.1109/icitsi65188.2024.10929323.

4

Zhang, Chen-Yue, Hang Chen, Jun Du, Sabato Marco Siniscalchi, Ya Jiang, and Chin-Hui Lee. "Summary on the Chat-Scenario Chinese Lipreading (ChatCLR) Challenge." In 2024 IEEE International Conference on Multimedia and Expo Workshops (ICMEW). IEEE, 2024. http://dx.doi.org/10.1109/icmew63481.2024.10645486.

5

Zou, Shuai, Xuefeng Liang, and Yiyang Huang. "LipReading for Low-resource Languages by Language Dynamic LoRA." In ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2025. https://doi.org/10.1109/icassp49660.2025.10889645.

6

Daou, Samar, Achraf Ben-Hamadou, Ahmed Rekik, and Abdelaziz Kallel. "Transfer Learning for Limited-Data Infra-Red Lipreading Model Training." In 2024 IEEE/ACS 21st International Conference on Computer Systems and Applications (AICCSA). IEEE, 2024. https://doi.org/10.1109/aiccsa63423.2024.10912550.

7

Chen, Hang, Chang Wang, Jun Du, Chao-Han Huck Yang, and Jun Qi. "Projection Valued-based Quantum Machine Learning Adapting to Differential Privacy Algorithm for Word-level Lipreading." In ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2025. https://doi.org/10.1109/icassp49660.2025.10890305.

8

Wan, Genshun, and Zhongfu Ye. "Multi-Modal Knowledge Transfer for Target Speaker Lipreading with Improved Audio-Visual Pretraining and Cross-Lingual Fine-Tuning." In 2024 IEEE International Conference on Multimedia and Expo Workshops (ICMEW). IEEE, 2024. http://dx.doi.org/10.1109/icmew63481.2024.10645443.

9

Yavuz, Zafer, and Vasif V. Nabiyev. "Automatic Lipreading." In 2007 IEEE 15th Signal Processing and Communications Applications. IEEE, 2007. http://dx.doi.org/10.1109/siu.2007.4298783.

10

Mase, Kenji, and Alex Pentland. "Lip Reading: Automatic Visual Recognition of Spoken Words." In Image Understanding and Machine Vision. Optica Publishing Group, 1989. http://dx.doi.org/10.1364/iumv.1989.wc1.

Annotation:
Lipreading is a rich source of speech information, and in noisy environments it can even be the primary source of information. In day-to-day situations lipreading is important because it provides a source of information that is largely independent of the auditory signal, so that auditory and lipreading information can be combined to produce more accurate and robust speech recognition. For instance, the nasal sounds ‘n’, ‘m’, and ‘ng’ are quite difficult to distinguish acoustically, but have very different visual appearance.