To see the other types of publications on this topic, follow the link: Lipreading.

Journal articles on the topic "Lipreading"


Consult the top 50 journal articles for your research on the topic "Lipreading".

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and a bibliographic reference to the chosen work will be generated automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online annotation, provided the relevant parameters are available in the metadata.

Browse journal articles across a wide range of disciplines and compile your bibliography correctly.

1

Lynch, Michael P., Rebecca E. Eilers, D. Kimbrough Oller, Richard C. Urbano, and Patricia J. Pero. "Multisensory Narrative Tracking by a Profoundly Deaf Subject Using an Electrocutaneous Vocoder and a Vibrotactile Aid." Journal of Speech, Language, and Hearing Research 32, no. 2 (1989): 331–38. http://dx.doi.org/10.1044/jshr.3202.331.

Annotation:
A congenitally, profoundly deaf adult who had received 41 hours of tactual word recognition training in a previous study was assessed in tracking of connected discourse. This assessment was conducted in three phases. In the first phase, the subject used the Tacticon 1600 electrocutaneous vocoder to track a narrative in three conditions: (a) lipreading and aided hearing (L+H), (b) lipreading and tactual vocoder (L+TV), and (c) lipreading, tactual vocoder, and aided hearing (L+TV+H). Subject performance was significantly better in the L+TV+H condition than in the L+H condition, suggesting that t
2

Tye-Murray, Nancy, Sandra Hale, Brent Spehar, Joel Myerson, and Mitchell S. Sommers. "Lipreading in School-Age Children: The Roles of Age, Hearing Status, and Cognitive Ability." Journal of Speech, Language, and Hearing Research 57, no. 2 (2014): 556–65. http://dx.doi.org/10.1044/2013_jslhr-h-12-0273.

Annotation:
Purpose: The study addressed three research questions: Does lipreading improve between the ages of 7 and 14 years? Does hearing loss affect the development of lipreading? How do individual differences in lipreading relate to other abilities? Method: Forty children with normal hearing (NH) and 24 with hearing loss (HL) were tested using 4 lipreading instruments plus measures of perceptual, cognitive, and linguistic abilities. Results: For both groups, lipreading performance improved with age on all 4 measures of lipreading, with the HL group performing better than the NH group. Scores from the 4 m
3

Paulesu, E., D. Perani, V. Blasi, et al. "A Functional-Anatomical Model for Lipreading." Journal of Neurophysiology 90, no. 3 (2003): 2005–13. http://dx.doi.org/10.1152/jn.00926.2002.

Annotation:
Regional cerebral blood flow (rCBF) PET scans were used to study the physiological bases of lipreading, a natural skill of extracting language from mouth movements, which contributes to speech perception in everyday life. Viewing connected mouth movements that could not be lexically identified and that evoke perception of isolated speech sounds (nonlexical lipreading) was associated with bilateral activation of the auditory association cortex around Wernicke's area, of left dorsal premotor cortex, and left opercular-premotor division of the left inferior frontal gyrus (Broca's area). The suppl
4

Hawes, Nancy A. "Lipreading for Children: A Synthetic Approach to Lipreading." Ear and Hearing 9, no. 6 (1988): 356. http://dx.doi.org/10.1097/00003446-198812000-00018.

5

Sankalp, Kala, and Sridhar Ranganathan. "Deep Learning Based Lipreading for Video Captioning." Engineering and Technology Journal 9, no. 05 (2024): 3935–46. https://doi.org/10.5281/zenodo.11120548.

Annotation:
Visual speech recognition, often referred to as lipreading, has garnered significant attention in recent years due to its potential applications in various fields such as human-computer interaction, accessibility technology, and biometric security systems. This paper explores the challenges and advancements in the field of lipreading, which involves deciphering speech from visual cues, primarily movements of the lips, tongue, and teeth. Despite being an essential aspect of human communication, lipreading presents inherent difficulties, especially in noisy environments or when contextual inform
6

Heikkilä, Jenni, Eila Lonka, Sanna Ahola, Auli Meronen, and Kaisa Tiippana. "Lipreading Ability and Its Cognitive Correlates in Typically Developing Children and Children With Specific Language Impairment." Journal of Speech, Language, and Hearing Research 60, no. 3 (2017): 485–93. http://dx.doi.org/10.1044/2016_jslhr-s-15-0071.

Annotation:
Purpose: Lipreading and its cognitive correlates were studied in school-age children with typical language development and delayed language development due to specific language impairment (SLI). Method: Forty-two children with typical language development and 20 children with SLI were tested by using a word-level lipreading test and an extensive battery of standardized cognitive and linguistic tests. Results: Children with SLI were poorer lipreaders than their typically developing peers. Good phonological skills were associated with skilled lipreading in both typically developing children and in child
7

Ortiz, Isabel de los Reyes Rodríguez. "Lipreading in the Prelingually Deaf: What Makes a Skilled Speechreader?" Spanish Journal of Psychology 11, no. 2 (2008): 488–502. http://dx.doi.org/10.1017/s1138741600004492.

Annotation:
Lipreading proficiency was investigated in a group of hearing-impaired people, all of them knowing Spanish Sign Language (SSL). The aim of this study was to establish the relationships between lipreading and some other variables (gender, intelligence, audiological variables, participants' education, parents' education, communication practices, intelligibility, use of SSL). The 32 participants were between 14 and 47 years of age. They all had sensorineural hearing losses (from severe to profound). The lipreading procedures comprised identification of words in isolation. The words selected for p
8

Plant, Geoff, Johan Gnosspelius, and Harry Levitt. "The Use of Tactile Supplements in Lipreading Swedish and English." Journal of Speech, Language, and Hearing Research 43, no. 1 (2000): 172–83. http://dx.doi.org/10.1044/jslhr.4301.172.

Annotation:
The speech perception skills of GS, a Swedish adult deaf man who has used a "natural" tactile supplement to lipreading for over 45 years, were tested in two languages: Swedish and English. Two different tactile supplements to lipreading were investigated. In the first, "Tactiling," GS detected the vibrations accompanying speech by placing his thumb directly on the speaker’s throat. In the second, a simple tactile aid consisting of a throat microphone, amplifier, and a hand-held bone vibrator was used. Both supplements led to improved lipreading of materials ranging in complexity from consonant
9

Suess, Nina, Anne Hauswald, Verena Zehentner, et al. "Influence of linguistic properties and hearing impairment on visual speech perception skills in the German language." PLOS ONE 17, no. 9 (2022): e0275585. http://dx.doi.org/10.1371/journal.pone.0275585.

Annotation:
Visual input is crucial for understanding speech under noisy conditions, but there are hardly any tools to assess the individual ability to lipread. With this study, we wanted to (1) investigate how linguistic characteristics of language on the one hand and hearing impairment on the other hand have an impact on lipreading abilities and (2) provide a tool to assess lipreading abilities for German speakers. 170 participants (22 prelingually deaf) completed the online assessment, which consisted of a subjective hearing impairment scale and silent videos in which different item categories (numbers
10

Zhang, Tao, Lun He, Xudong Li, and Guoqing Feng. "Efficient End-to-End Sentence-Level Lipreading with Temporal Convolutional Networks." Applied Sciences 11, no. 15 (2021): 6975. http://dx.doi.org/10.3390/app11156975.

Annotation:
Lipreading aims to recognize sentences being spoken by a talking face. In recent years, the lipreading method has achieved a high level of accuracy on large datasets and made breakthrough progress. However, lipreading is still far from being solved, and existing methods tend to have high error rates on wild data and suffer from vanishing training gradients and slow convergence. To overcome these problems, we proposed an efficient end-to-end sentence-level lipreading model, using an encoder based on a 3D convolutional network, ResNet50, Temporal Convolutional Network (TCN), and a
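To make the shape of such a pipeline concrete, here is a minimal PyTorch sketch of a sentence-level encoder in this spirit: a 3D convolutional front end over the video, dilated temporal convolutions in place of a recurrent network, and per-frame logits suitable for CTC decoding. All layer sizes, kernel choices, and the character vocabulary are illustrative assumptions, not the authors' configuration.

```python
# Illustrative sketch only: 3D-conv front end + temporal convolutions.
import torch
import torch.nn as nn

class TemporalConvBlock(nn.Module):
    """Dilated 1-D convolution over the time axis with a residual path."""
    def __init__(self, channels: int, dilation: int):
        super().__init__()
        self.conv = nn.Conv1d(channels, channels, kernel_size=3,
                              padding=dilation, dilation=dilation)
        self.relu = nn.ReLU()

    def forward(self, x):                    # x: (batch, channels, time)
        return self.relu(x + self.conv(x))

class LipreadingEncoder(nn.Module):
    def __init__(self, vocab_size: int = 28):   # assumed character set
        super().__init__()
        # Spatio-temporal front end over grayscale mouth crops.
        self.frontend = nn.Sequential(
            nn.Conv3d(1, 32, kernel_size=(5, 7, 7), stride=(1, 2, 2),
                      padding=(2, 3, 3)),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 1, 1)),  # keep time, pool space
        )
        self.tcn = nn.Sequential(*[TemporalConvBlock(32, d) for d in (1, 2, 4)])
        self.head = nn.Linear(32, vocab_size)    # per-frame logits

    def forward(self, video):                # video: (batch, 1, time, H, W)
        feats = self.frontend(video).squeeze(-1).squeeze(-1)  # (B, 32, T)
        feats = self.tcn(feats)                               # (B, 32, T)
        return self.head(feats.transpose(1, 2))               # (B, T, vocab)

frames = torch.randn(2, 1, 75, 64, 64)   # 75-frame clips, 64x64 mouth crops
logits = LipreadingEncoder()(frames)
print(logits.shape)                       # torch.Size([2, 75, 28])
```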
11

Kumar, Yaman, Rohit Jain, Khwaja Mohd Salik, Rajiv Ratn Shah, Yifang Yin, and Roger Zimmermann. "Lipper: Synthesizing Thy Speech Using Multi-View Lipreading." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 2588–95. http://dx.doi.org/10.1609/aaai.v33i01.33012588.

Annotation:
Lipreading has a lot of potential applications such as in the domain of surveillance and video conferencing. Despite this, most of the work in building lipreading systems has been limited to classifying silent videos into classes representing text phrases. However, there are multiple problems associated with making lipreading a text-based classification task like its dependence on a particular language and vocabulary mapping. Thus, in this paper we propose a multi-view lipreading to audio system, namely Lipper, which models it as a regression task. The model takes silent videos as input and pr
12

Muljono, Muljono, Galuh Wilujeng Saraswati, Nurul Anisa Sri Winarsih, Nur Rokhman, Catur Supriyanto, and Pujiono Pujiono. "Developing BacaBicara: An Indonesian Lipreading System as an Independent Communication Learning for the Deaf and Hard-of-Hearing." International Journal of Emerging Technologies in Learning (iJET) 14, no. 04 (2019): 44. http://dx.doi.org/10.3991/ijet.v14i04.9578.

Annotation:
Deaf and hard-of-hearing people have limitations in communication, especially in aspects of language, intelligence, and social adjustment. To communicate, deaf people use sign language or lipreading. For normal people, it is very difficult to use sign language: they have to memorize many hand signs. Therefore, lipreading is necessary for communication between normal and deaf people. In Indonesia, there are still few educational media for deaf people to learn lipreading. To overcome this challenge, we develop a lipreading educational medium to help deaf and hard-of-hearing people learn Bahasa Indonesia.
13

Metipatil, Prabhuraj, and Pranav SK. "Hybrid Attention Mechanisms in 3D CNN for Noise-Resilient Lip Reading in Complex Environments." Computer Science & Engineering: An International Journal 15, no. 1 (2025): 83–93. https://doi.org/10.5121/cseij.2025.15110.

Annotation:
This paper presents a novel lipreading approach implemented through a web application that automatically generates subtitles for videos where the speaker's mouth movements are visible. The proposed solution leverages a deep learning architecture combining 3D convolutional neural networks (CNN) with bidirectional Long Short-Term Memory (LSTM) units to accurately predict sentences based solely on visual input. A thorough review of existing lipreading techniques over the past decade is provided to contextualize the advancements introduced in this work. The primary goal is to improve the accuracy
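As a rough illustration of the architecture described (a 3D CNN feeding a bidirectional LSTM, predicting sentences from visual input alone), the following PyTorch sketch wires the pieces together with a CTC objective. The layer sizes, vocabulary, and the use of CTC for alignment-free training are assumptions for illustration, not the paper's exact design.

```python
# Hedged sketch: 3D CNN + BiLSTM with a CTC training objective.
import torch
import torch.nn as nn

class Conv3dBiLSTM(nn.Module):
    def __init__(self, vocab_size: int = 28):
        super().__init__()
        self.conv = nn.Conv3d(1, 16, kernel_size=3, padding=1)
        self.pool = nn.AdaptiveAvgPool3d((None, 1, 1))   # pool space, keep time
        self.lstm = nn.LSTM(16, 64, bidirectional=True, batch_first=True)
        self.out = nn.Linear(128, vocab_size + 1)        # +1 for the CTC blank

    def forward(self, video):                 # (batch, 1, time, H, W)
        x = self.pool(torch.relu(self.conv(video)))      # (B, 16, T, 1, 1)
        x = x.squeeze(-1).squeeze(-1).transpose(1, 2)    # (B, T, 16)
        x, _ = self.lstm(x)                              # (B, T, 128)
        return self.out(x).log_softmax(-1)               # CTC log-probs

model = Conv3dBiLSTM()
video = torch.randn(2, 1, 75, 50, 100)
log_probs = model(video).transpose(0, 1)     # CTCLoss wants (T, B, classes)
targets = torch.randint(1, 29, (2, 20))      # dummy character labels
loss = nn.CTCLoss(blank=0)(log_probs, targets,
                           torch.full((2,), 75), torch.full((2,), 20))
print(loss.item())
```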
14

Johnson, Fern M., Leslie H. Hicks, Terry Goldberg, and Michael S. Myslobodsky. "Sex differences in lipreading." Bulletin of the Psychonomic Society 26, no. 2 (1988): 106–8. http://dx.doi.org/10.3758/bf03334875.

15

Updike, Claudia D., Joanne M. Rasmussen, Roberta Arndt, and Cathy German. "Revised Craig Lipreading Inventory." Perceptual and Motor Skills 74, no. 1 (1992): 267–77. http://dx.doi.org/10.2466/pms.1992.74.1.267.

Annotation:
The two purposes of this study were to shorten the Craig Lipreading Inventory without affecting its reliability and validity and to establish normative data on the revised version. The full inventory was administered to 75 children. By item analysis, half of the items were selected to comprise the brief version; both versions were administered to another group of 75 children. Scores on the two versions correlated (.91 and .92, respectively, for Word Forms A and B and .97 and .95, respectively, for Sentence Forms A and B), thereby substantiating the construct validity of the briefer version. Th
16

Ebrahimi, D., and H. Kunov. "Peripheral vision lipreading aid." IEEE Transactions on Biomedical Engineering 38, no. 10 (1991): 944–52. http://dx.doi.org/10.1109/10.88440.

17

Campbell, Ruth, Theodor Landis, and Marianne Regard. "Face Recognition and Lipreading." Brain 109, no. 3 (1986): 509–21. http://dx.doi.org/10.1093/brain/109.3.509.

18

Samuelsson, Stefan, and Jerker Rönnberg. "Script activation in lipreading." Scandinavian Journal of Psychology 32, no. 2 (1991): 124–43. http://dx.doi.org/10.1111/j.1467-9450.1991.tb00863.x.

19

Wiss, Rosemary. "Lipreading: Remembering Saartjie Baartman." Australian Journal of Anthropology 5, no. 3 (1994): 11–40. http://dx.doi.org/10.1111/j.1835-9310.1994.tb00323.x.

20

Chiou, G. I., and Jenq-Neng Hwang. "Lipreading from color video." IEEE Transactions on Image Processing 6, no. 8 (1997): 1192–95. http://dx.doi.org/10.1109/83.605417.

21

Bernstein, Lynne E., Marilyn E. Demorest, Michael P. O'Connell, and David C. Coulter. "Lipreading with vibrotactile vocoders." Journal of the Acoustical Society of America 87, S1 (1990): S124–S125. http://dx.doi.org/10.1121/1.2027907.

22

Yu, Xuhu, Zhong Wan, Zehao Shi, and Lei Wang. "Lipreading Using Liquid State Machine with STDP-Tuning." Applied Sciences 12, no. 20 (2022): 10484. http://dx.doi.org/10.3390/app122010484.

Annotation:
Lipreading refers to the task of decoding the text content of a speaker based on visual information about the movement of the speaker’s lips. With the development of deep learning in recent years, lipreading has attracted extensive research. However, the deep learning method requires a lot of computing resources, which is not conducive to the migration of the system to edge devices. Inspired by the work of Spiking Neural Networks (SNNs) in recognizing human actions and gestures, we propose a lipreading system based on SNNs. Specifically, we construct the front-end feature extractor of the syst
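For readers unfamiliar with STDP, the sketch below shows the standard pair-based rule such tuning builds on: a synapse is strengthened when the presynaptic spike precedes the postsynaptic spike and weakened otherwise, with a magnitude that decays exponentially in the spike-time gap. The constants are illustrative, not taken from the paper.

```python
# Toy sketch of the classic pair-based STDP weight update.
import math

def stdp_dw(dt_ms: float, a_plus=0.1, a_minus=0.12, tau_ms=20.0) -> float:
    """Weight change for spike-time difference dt = t_post - t_pre."""
    if dt_ms >= 0:                               # pre before post: potentiate
        return a_plus * math.exp(-dt_ms / tau_ms)
    return -a_minus * math.exp(dt_ms / tau_ms)   # post before pre: depress

for dt in (-40, -10, 5, 30):
    print(dt, round(stdp_dw(dt), 4))
```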
23

Brown, A. M., R. C. Dowell, and G. M. Clark. "Clinical Results for Postlingually Deaf Patients Implanted with Multichannel Cochlear Prostheses." Annals of Otology, Rhinology & Laryngology 96, no. 1_suppl (1987): 127–28. http://dx.doi.org/10.1177/00034894870960s168.

Annotation:
Clinical results for 24 patients using the Nucleus 22-channel cochlear prosthesis have shown the device to be successful in presenting amplitude, fundamental frequency, and second formant information to patients with acquired hearing loss. For all patients, this has meant a significant improvement in their communication ability when using lipreading and some ability to understand unknown speech without lipreading or contextual cues. Approximately 40% of patients are able to understand running speech in a limited fashion without lipreading, and this ability has been evaluated using the speech-t
24

Strand, Julia, Allison Cooperman, Jonathon Rowe, and Andrea Simenstad. "Individual Differences in Susceptibility to the McGurk Effect: Links With Lipreading and Detecting Audiovisual Incongruity." Journal of Speech, Language, and Hearing Research 57, no. 6 (2014): 2322–31. http://dx.doi.org/10.1044/2014_jslhr-h-14-0059.

Annotation:
Purpose: Prior studies (e.g., Nath & Beauchamp, 2012) report large individual variability in the extent to which participants are susceptible to the McGurk effect, a prominent audiovisual (AV) speech illusion. The current study evaluated whether susceptibility to the McGurk effect (MGS) is related to lipreading skill and whether multiple measures of MGS that have been used previously are correlated. In addition, it evaluated the test–retest reliability of individual differences in MGS. Method: Seventy-three college-age participants completed 2 tasks measuring MGS and 3 measures of lipreading
25

Li, Hao, Nurbiya Yadikar, Yali Zhu, Mutallip Mamut, and Kurban Ubul. "Learning the Relative Dynamic Features for Word-Level Lipreading." Sensors 22, no. 10 (2022): 3732. http://dx.doi.org/10.3390/s22103732.

Annotation:
Lipreading is a technique for analyzing sequences of lip movements and then recognizing the speech content of a speaker. Limited by the structure of our vocal organs, the number of pronunciations we could make is finite, leading to problems with homophones when speaking. On the other hand, different speakers will have various lip movements for the same word. For these problems, we focused on the spatial–temporal feature extraction in word-level lipreading in this paper, and an efficient two-stream model was proposed to learn the relative dynamic information of lip motion. In this model, two di
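A minimal sketch of the two-stream idea follows, assuming one stream over raw frames (appearance) and one over frame-to-frame differences as an explicit relative-dynamics signal, fused by concatenation before a word classifier. The streams, fusion, and sizes are illustrative assumptions; the paper's exact model may differ.

```python
# Hedged sketch: two-stream word-level lipreading with a difference stream.
import torch
import torch.nn as nn

class TwoStreamLipreader(nn.Module):
    def __init__(self, num_words: int = 500):
        super().__init__()
        def stream():
            return nn.Sequential(
                nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1), nn.Flatten())   # -> (B, 8)
        self.appearance = stream()
        self.dynamics = stream()
        self.classifier = nn.Linear(16, num_words)

    def forward(self, video):                       # (B, 1, T, H, W)
        diff = video[:, :, 1:] - video[:, :, :-1]   # temporal differences
        a = self.appearance(video)
        d = self.dynamics(diff)
        return self.classifier(torch.cat([a, d], dim=1))

logits = TwoStreamLipreader()(torch.randn(4, 1, 29, 88, 88))
print(logits.shape)                                 # torch.Size([4, 500])
```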
26

Jishnu, T. S., and Anju Antony. "LipNet: End-to-End Lipreading." Indian Journal of Data Mining 4, no. 1 (2024): 1–4. http://dx.doi.org/10.54105/ijdm.a1632.04010524.

Annotation:
Lipreading is the task of decoding text from the movement of a speaker’s mouth. This research presents the development of an advanced end-to-end lipreading system. Leveraging deep learning architectures and multimodal fusion techniques, the proposed system interprets spoken language solely from visual cues, such as lip movements. Through meticulous data collection, annotation, preprocessing, model development, and evaluation, diverse datasets encompassing various speakers, accents, languages, and environmental conditions are curated to ensure robustness and generalization. Conventional methods
27

Vannuscorps, Gilles, Michael Andres, Sarah Pereira Carneiro, Elise Rombaux, and Alfonso Caramazza. "Typically Efficient Lipreading without Motor Simulation." Journal of Cognitive Neuroscience 33, no. 4 (2021): 611–21. http://dx.doi.org/10.1162/jocn_a_01666.

Annotation:
All it takes is a face-to-face conversation in a noisy environment to realize that viewing a speaker's lip movements contributes to speech comprehension. What are the processes underlying the perception and interpretation of visual speech? Brain areas that control speech production are also recruited during lipreading. This finding raises the possibility that lipreading may be supported, at least to some extent, by a covert unconscious imitation of the observed speech movements in the observer's own speech motor system—a motor simulation. However, whether, and if so to what extent, motor simul
28

Rosen, Stuart, John Walliker, Judith A. Brimacombe, and Bradly J. Edgerton. "Prosodic and Segmental Aspects of Speech Perception with the House/3M Single-Channel Implant." Journal of Speech, Language, and Hearing Research 32, no. 1 (1989): 93–111. http://dx.doi.org/10.1044/jshr.3201.93.

Annotation:
Four adult users of the House/3M single-channel cochlear implant were tested for their ability to label question and statement intonation contours (by auditory means alone) and to identify a set of 12 intervocalic consonants (with and without lipreading). Nineteen of 20 scores obtained on the question/statement task were significantly better than chance. Simplifying the stimulating waveform so as to signal fundamental frequency alone sometimes led to an improvement in performance. In consonant identification, lipreading alone scores were always far inferior to those obtained by lipreading with
29

Jishnu, T. S. "LipNet: End-to-End Lipreading." Indian Journal of Data Mining (IJDM) 4, no. 1 (2024): 1–4. https://doi.org/10.54105/ijdm.A1632.04010524.

Annotation:
Lipreading is the task of decoding text from the movement of a speaker's mouth. This research presents the development of an advanced end-to-end lipreading system. Leveraging deep learning architectures and multimodal fusion techniques, the proposed system interprets spoken language solely from visual cues, such as lip movements. Through meticulous data collection, annotation, preprocessing, model development, and evaluation, diverse datasets encompassing various speakers, accents, languages, and environmental conditions are curated to ensure robustness and gen
30

Ching, Yuk Ching. "Lipreading Cantonese with voice pitch." Journal of the Acoustical Society of America 77, S1 (1985): S39–S40. http://dx.doi.org/10.1121/1.2022317.

31

Myslobodsky, Michael S., Terry Goldberg, Fern Johnson, Leslie Hicks, and Daniel R. Weinberger. "Lipreading in Patients with Schizophrenia." Journal of Nervous and Mental Disease 180, no. 3 (1992): 168–71. http://dx.doi.org/10.1097/00005053-199203000-00004.

32

Yu, Keren, Xiaoyi Jiang, and Horst Bunke. "Lipreading: A classifier combination approach." Pattern Recognition Letters 18, no. 11–13 (1997): 1421–26. http://dx.doi.org/10.1016/s0167-8655(97)00113-x.

33

Zhao, Guoying, M. Barnard, and M. Pietikainen. "Lipreading With Local Spatiotemporal Descriptors." IEEE Transactions on Multimedia 11, no. 7 (2009): 1254–65. http://dx.doi.org/10.1109/tmm.2009.2030637.

34

Farrimond, Thomas. "Effect of Encouragement on Performance of Young and Old Subjects on a Task Involving Lipreading." Psychological Reports 65, no. 3_suppl2 (1989): 1247–50. http://dx.doi.org/10.2466/pr0.1989.65.3f.1247.

Annotation:
Two tests of lipreading ability were constructed, one using numbers and the other sentences including visual cues. The tests were given to two groups of men, one older group aged 40 yr. and over (n = 110) and a younger group of less than 40 yr. (n = 70). Requests to guess produced a higher mean score for the older subjects on the lipreading tests containing the greater amount of information. It is suggested that differences in the effect of encouragement on performance between young and old may be related to both age and cultural factors.
35

Salik, Khwaja Mohd, Swati Aggarwal, Yaman Kumar, Rajiv Ratn Shah, Rohit Jain, and Roger Zimmermann. "Lipper: Speaker Independent Speech Synthesis Using Multi-View Lipreading." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 10023–24. http://dx.doi.org/10.1609/aaai.v33i01.330110023.

Annotation:
Lipreading is the process of understanding and interpreting speech by observing a speaker’s lip movements. In the past, most of the work in lipreading has been limited to classifying silent videos to a fixed number of text classes. However, this limits the applications of the lipreading since human language cannot be bound to a fixed set of words or languages. The aim of this work is to reconstruct intelligible acoustic speech signals from silent videos from various poses of a person which Lipper has never seen before. Lipper, therefore is a vocabulary and language agnostic, speaker independen
36

Daou, Samar, Achraf Ben-Hamadou, Ahmed Rekik, and Abdelaziz Kallel. "Cross-Attention Fusion of Visual and Geometric Features for Large-Vocabulary Arabic Lipreading." Technologies 13, no. 1 (2025): 26. https://doi.org/10.3390/technologies13010026.

Annotation:
Lipreading involves recognizing spoken words by analyzing the movements of the lips and surrounding area using visual data. It is an emerging research topic with many potential applications, such as human–machine interaction and enhancing audio-based speech recognition. Recent deep learning approaches integrate visual features from the mouth region and lip contours. However, simple methods such as concatenation may not effectively optimize the feature vector. In this article, we propose extracting optimal visual features using 3D convolution blocks followed by a ResNet-18, while employing a gr
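A compact sketch of cross-attention fusion as this abstract contrasts it with plain concatenation: visual frame features act as queries over the geometric (lip-landmark) features, so each frame attends to the landmark information that best explains it. The dimensions and the use of PyTorch's nn.MultiheadAttention are illustrative assumptions, not the authors' exact design.

```python
# Hedged sketch: cross-attention fusion of visual and geometric streams.
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, visual, geometric):
        # Visual features query the geometric stream.
        fused, _ = self.attn(query=visual, key=geometric, value=geometric)
        return self.norm(visual + fused)        # residual connection

T = 29                                          # frames in a word clip
visual = torch.randn(2, T, 64)                  # e.g. from a 3D-conv/ResNet-18
geometric = torch.randn(2, T, 64)               # e.g. embedded lip landmarks
print(CrossAttentionFusion()(visual, geometric).shape)  # (2, 29, 64)
```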
37

Yu, Keren, Xiaoyi Jiang, and Horst Bunke. "Sentence Lipreading Using Hidden Markov Model with Integrated Grammar." International Journal of Pattern Recognition and Artificial Intelligence 15, no. 01 (2001): 161–76. http://dx.doi.org/10.1142/s0218001401000770.

Annotation:
In this paper, we describe a systematic approach to the lipreading of whole sentences. A vocabulary of elementary words is considered. Based on the vocabulary, we define a grammar that generates a set of legal sentences. Our lipreading approach is based on a combination of the grammar with hidden Markov models (HMMs). Two different experiments were conducted. In the first experiment a set of e-mail commands is considered, while the set of sentences in the second experiment is given by all English integer numbers up to one million. Both experiments showed promising results, regarding the diffic
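The toy sketch below illustrates the core idea of grammar-integrated decoding: enumerate only the sentences the grammar can generate and pick the one the recognizer scores best. The grammar, words, and log-likelihood numbers are invented for illustration; in the paper the scores come from HMMs over visual features.

```python
# Toy sketch: decode only sentences a finite-state grammar can generate.
grammar = {                       # state -> list of (word, next_state)
    "S":    [("send", "VERB"), ("delete", "VERB")],
    "VERB": [("mail", "END"), ("message", "END")],
}

def legal_sentences(state="S"):
    if state == "END":
        yield []
        return
    for word, nxt in grammar[state]:
        for rest in legal_sentences(nxt):
            yield [word] + rest

# Stand-ins for per-word HMM log-likelihoods on one utterance.
log_like = {"send": -1.2, "delete": -2.5, "mail": -0.8, "message": -1.9}

best = max(legal_sentences(), key=lambda s: sum(log_like[w] for w in s))
print(best)                       # ['send', 'mail']
```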
38

Clark, G. M., B. C. Pyman, R. L. Webb, B. K.-H. G. Franz, T. J. Redhead, and R. K. Shepherd. "Surgery for the Safe Insertion and Reinsertion of the Banded Electrode Array." Annals of Otology, Rhinology & Laryngology 96, no. 1_suppl (1987): 10–12. http://dx.doi.org/10.1177/00034894870960s102.

Annotation:
Adhering to the surgical technique outlined in the protocol for the Nucleus implant has resulted in over 100 patients worldwide obtaining significant benefit from multichannel stimulation. A detailed analysis of the results in 40 patients shows that it improves their awareness of environmental sounds and their abilities in understanding running speech when combined with lipreading. In addition, one third to one half of the patients also understand significant amounts of running speech without lipreading and some can have interactive conversations over the telephone. It is clear that any insert
39

Caron, Cora Jirschik, Coriandre Vilain, Jean-Luc Schwartz, et al. "The Effect of Cued-Speech (CS) Perception on Auditory Processing in Typically Hearing (TH) Individuals Who Are Either Naïve or Experienced CS Producers." Brain Sciences 13, no. 7 (2023): 1036. http://dx.doi.org/10.3390/brainsci13071036.

Annotation:
Cued Speech (CS) is a communication system that uses manual gestures to facilitate lipreading. In this study, we investigated how CS information interacts with natural speech using Event-Related Potential (ERP) analyses in French-speaking, typically hearing adults (TH) who were either naïve or experienced CS producers. The audiovisual (AV) presentation of lipreading information elicited an amplitude attenuation of the entire N1 and P2 complex in both groups, accompanied by N1 latency facilitation in the group of CS producers. Adding CS gestures to lipread information increased the magnitude of
40

Cohen, N. L., S. B. Waltzman, and W. Shapiro. "Multichannel Cochlear Implant: The New York University/Bellevue Experience." Annals of Otology, Rhinology & Laryngology 96, no. 1_suppl (1987): 139–40. http://dx.doi.org/10.1177/00034894870960s177.

Annotation:
A total of nine patients have been implanted at the New York University/Bellevue Medical Center with the Nucleus multichannel cochlear implant. The patients ranged in age from 21 to 62 years, with a mean age of 38.7 years. All were postlingually deafened with bilateral profound sensorineural hearing loss, and were unable to benefit from appropriate amplification. Each patient was implanted with the 22-electrode array inserted into the scala tympani, using the facial recess technique. Seven of the nine patients have functioning 22-channel systems, whereas one patient has a single-channel system
41

Updike, Claudia D., Roberta L. Albertson, Cathy M. German, and Joanne M. Ward. "Evaluation of the Craig Lipreading Inventory." Perceptual and Motor Skills 70, no. 3_suppl (1990): 1271–82. http://dx.doi.org/10.2466/pms.1990.70.3c.1271.

42

Demorest, Marilyn E., Lynne E. Bernstein, and Silvio P. Eberhardt. "Reliability of individual differences in lipreading." Journal of the Acoustical Society of America 82, S1 (1987): S24. http://dx.doi.org/10.1121/1.2024715.

43

Matthews, I., T. F. Cootes, J. A. Bangham, S. Cox, and R. Harvey. "Extraction of visual features for lipreading." IEEE Transactions on Pattern Analysis and Machine Intelligence 24, no. 2 (2002): 198–213. http://dx.doi.org/10.1109/34.982900.

44

Yu, Keren, Xiaoyi Jiang, and Horst Bunke. "Lipreading using signal analysis over time." Signal Processing 77, no. 2 (1999): 195–208. http://dx.doi.org/10.1016/s0165-1684(99)00032-8.

45

Chen, Xuejuan, Jixiang Du, and Hongbo Zhang. "Lipreading with DenseNet and resBi-LSTM." Signal, Image and Video Processing 14, no. 5 (2020): 981–89. http://dx.doi.org/10.1007/s11760-019-01630-1.

46

Mase, Kenji, and Alex Pentland. "Automatic lipreading by optical-flow analysis." Systems and Computers in Japan 22, no. 6 (1991): 67–76. http://dx.doi.org/10.1002/scj.4690220607.

47

Bernstein, Lynne E. "Response Errors in Females’ and Males’ Sentence Lipreading Necessitate Structurally Different Models for Predicting Lipreading Accuracy." Language Learning 68 (February 26, 2018): 127–58. http://dx.doi.org/10.1111/lang.12281.

48

Bear, Helen L., and Richard Harvey. "Alternative Visual Units for an Optimized Phoneme-Based Lipreading System." Applied Sciences 9, no. 18 (2019): 3870. http://dx.doi.org/10.3390/app9183870.

Annotation:
Lipreading is understanding speech from observed lip movements. An observed series of lip motions is an ordered sequence of visual lip gestures. These gestures are commonly known, but as yet are not formally defined, as 'visemes'. In this article, we describe a structured approach which allows us to create speaker-dependent visemes with a fixed number of visemes within each set. We create sets of visemes for sizes two to 45. Each set of visemes is based upon clustering phonemes, thus each set has a unique phoneme-to-viseme mapping. We first present an experiment using these maps and the Resour
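A small sketch of the clustering idea, assuming a greedy agglomerative merge of the most visually confusable phoneme classes until the desired number of visemes remains. The confusion counts here are invented placeholders; in the work above the clustering is driven by recognizer confusion data.

```python
# Toy sketch: build a phoneme-to-viseme map by merging confusable phonemes.
confusion = {                      # symmetric visual confusion counts (invented)
    ("p", "b"): 90, ("p", "m"): 85, ("b", "m"): 80,
    ("f", "v"): 95, ("t", "d"): 70, ("f", "t"): 5,
    ("p", "f"): 4,  ("b", "v"): 6, ("m", "d"): 3,
}

def pair_score(a: frozenset, b: frozenset) -> int:
    """Total confusion between two candidate viseme classes."""
    return sum(confusion.get((x, y), confusion.get((y, x), 0))
               for x in a for y in b)

def cluster_visemes(phonemes, n_visemes):
    clusters = [frozenset([p]) for p in phonemes]
    while len(clusters) > n_visemes:
        # Merge the pair of clusters that is most often confused.
        a, b = max(((a, b) for i, a in enumerate(clusters)
                    for b in clusters[i + 1:]),
                   key=lambda ab: pair_score(*ab))
        clusters = [c for c in clusters if c not in (a, b)] + [a | b]
    return clusters

print(cluster_visemes(["p", "b", "m", "f", "v", "t", "d"], 3))
# e.g. [frozenset({'f', 'v'}), frozenset({'t', 'd'}), frozenset({'p', 'b', 'm'})]
```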
49

Collins, M. Jane, and Richard R. Hurtig. "Categorical Perception of Speech Sounds via the Tactile Mode." Journal of Speech, Language, and Hearing Research 28, no. 4 (1985): 594–98. http://dx.doi.org/10.1044/jshr.2804.594.

Annotation:
The usefulness of tactile devices as aids to lipreading has been established. However, maximum usefulness in reducing the ambiguity of lipreading cues and/or use of tactile devices as a substitute for audition may be dependent on phonemic recognition via tactile signals alone. In the present study, a categorical perception paradigm was used to evaluate tactile perception of speech sounds in comparison to auditory perception. The results show that speech signals delivered by tactile stimulation can be categorically perceived on a voice-onset time (VOT) continuum. The boundary for the voiced-voi
50

Hygge, Staffan, Jerker Rönnberg, Birgitta Larsby, and Stig Arlinger. "Normal-Hearing and Hearing-Impaired Subjects' Ability to Just Follow Conversation in Competing Speech, Reversed Speech, and Noise Backgrounds." Journal of Speech, Language, and Hearing Research 35, no. 1 (1992): 208–15. http://dx.doi.org/10.1044/jshr.3501.208.

Annotation:
The performance on a conversation-following task by 24 hearing-impaired persons was compared with that of 24 matched controls with normal hearing in the presence of three background noises: (a) speech-spectrum random noise, (b) a male voice, and (c) the male voice played in reverse. The subjects' task was to readjust the sound level of a female voice (signal), every time the signal voice was attenuated, to the subjective level at which it was just possible to understand what was being said. To assess the benefit of lipreading, half of the material was presented audiovisually and half auditorily.