To view the other types of publications on this topic, follow the link: Visual speech information.

Journal articles on the topic "Visual speech information"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Consult the top 50 journal articles for your research on the topic "Visual speech information".

An "Add to bibliography" option is available next to every work in the list. Use it, and your bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, provided the relevant parameters are available in its metadata.

Browse journal articles from a wide range of disciplines and compile your bibliography correctly.

1

Miller, Rachel M., Kauyumari Sanchez, and Lawrence D. Rosenblum. "Alignment to visual speech information." Attention, Perception, & Psychophysics 72, no. 6 (August 2010): 1614–25. http://dx.doi.org/10.3758/app.72.6.1614.

2

Rosenblum, Lawrence D., Deborah A. Yakel, Naser Baseer, Anjani Panchal, Brynn C. Nodarse, and Ryan P. Niehus. "Visual speech information for face recognition." Perception & Psychophysics 64, no. 2 (February 2002): 220–29. http://dx.doi.org/10.3758/bf03195788.

3

Yakel, Deborah A., and Lawrence D. Rosenblum. "Face identification using visual speech information." Journal of the Acoustical Society of America 100, no. 4 (October 1996): 2570. http://dx.doi.org/10.1121/1.417401.

4

Weinholtz, Chase, and James W. Dias. "Categorical perception of visual speech information." Journal of the Acoustical Society of America 139, no. 4 (April 2016): 2018. http://dx.doi.org/10.1121/1.4949950.

5

Hisanaga, Satoko, Kaoru Sekiyama, Tomohiko Igasaki, and Nobuki Murayama. "Effects of visual information on audio-visual speech processing." Proceedings of the Annual Convention of the Japanese Psychological Association 75 (September 15, 2011): 2AM061. http://dx.doi.org/10.4992/pacjpa.75.0_2am061.

6

Sell, Andrea J., and Michael P. Kaschak. "Does visual speech information affect word segmentation?" Memory & Cognition 37, no. 6 (September 2009): 889–94. http://dx.doi.org/10.3758/mc.37.6.889.

7

Hall, Michael D., Paula M. T. Smeele, and Patricia K. Kuhl. "Integration of auditory and visual speech information." Journal of the Acoustical Society of America 103, no. 5 (May 1998): 2985. http://dx.doi.org/10.1121/1.421677.

8

McGiverin, Rolland. "Speech, Hearing and Visual." Behavioral & Social Sciences Librarian 8, no. 3-4 (April 16, 1990): 73–78. http://dx.doi.org/10.1300/j103v08n03_12.

9

Hollich, George J., Peter W. Jusczyk, and Rochelle S. Newman. "Infants' use of visual information in speech segmentation." Journal of the Acoustical Society of America 110, no. 5 (November 2001): 2703. http://dx.doi.org/10.1121/1.4777318.

10

Tekin, Ender, James Coughlan, and Helen Simon. "Improving speech enhancement algorithms by incorporating visual information." Journal of the Acoustical Society of America 134, no. 5 (November 2013): 4237. http://dx.doi.org/10.1121/1.4831575.

11

Ujiie, Yuta, and Kohske Takahashi. "Weaker McGurk Effect for Rubin’s Vase-Type Speech in People With High Autistic Traits." Multisensory Research 34, no. 6 (April 16, 2021): 663–79. http://dx.doi.org/10.1163/22134808-bja10047.

Annotation:
Abstract While visual information from facial speech modulates auditory speech perception, it is less influential on audiovisual speech perception among autistic individuals than among typically developed individuals. In this study, we investigated the relationship between autistic traits (Autism-Spectrum Quotient; AQ) and the influence of visual speech on the recognition of Rubin’s vase-type speech stimuli with degraded facial speech information. Participants were 31 university students (13 males and 18 females; mean age: 19.2, SD: 1.13 years) who reported normal (or corrected-to-normal) hear
12

Reed, Rebecca K., and Edward T. Auer. "Influence of visual speech information on the identification of foreign accented speech." Journal of the Acoustical Society of America 125, no. 4 (April 2009): 2660. http://dx.doi.org/10.1121/1.4784199.

13

Kim, Jeesun, and Chris Davis. "How visual timing and form information affect speech and non-speech processing." Brain and Language 137 (October 2014): 86–90. http://dx.doi.org/10.1016/j.bandl.2014.07.012.

14

Sams, M. "Audiovisual Speech Perception." Perception 26, no. 1_suppl (August 1997): 347. http://dx.doi.org/10.1068/v970029.

Annotation:
Persons with hearing loss use visual information from articulation to improve their speech perception. Even persons with normal hearing utilise visual information, especially when the stimulus-to-noise ratio is poor. A dramatic demonstration of the role of vision in speech perception is the audiovisual fusion called the ‘McGurk effect’. When the auditory syllable /pa/ is presented in synchrony with the face articulating the syllable /ka/, the subject usually perceives /ta/ or /ka/. The illusory perception is clearly auditory in nature. We recently studied the audiovisual fusion (acoustical /p/
15

Plass, John, David Brang, Satoru Suzuki, and Marcia Grabowecky. "Vision perceptually restores auditory spectral dynamics in speech." Proceedings of the National Academy of Sciences 117, no. 29 (July 6, 2020): 16920–27. http://dx.doi.org/10.1073/pnas.2002887117.

Annotation:
Visual speech facilitates auditory speech perception, but the visual cues responsible for these benefits and the information they provide remain unclear. Low-level models emphasize basic temporal cues provided by mouth movements, but these impoverished signals may not fully account for the richness of auditory information provided by visual speech. High-level models posit interactions among abstract categorical (i.e., phonemes/visemes) or amodal (e.g., articulatory) speech representations, but require lossy remapping of speech signals onto abstracted representations. Because visible articulato
16

Karpov, Alexey Anatolyevich. "Assistive Information Technologies based on Audio-Visual Speech Interfaces." SPIIRAS Proceedings 4, no. 27 (March 17, 2014): 114. http://dx.doi.org/10.15622/sp.27.10.

17

Whalen, D. H., Julia Irwin, and Carol A. Fowler. "Audiovisual integration of speech based on minimal visual information." Journal of the Acoustical Society of America 100, no. 4 (October 1996): 2569. http://dx.doi.org/10.1121/1.417395.

18

Gurban, M., and J. P. Thiran. "Information Theoretic Feature Extraction for Audio-Visual Speech Recognition." IEEE Transactions on Signal Processing 57, no. 12 (December 2009): 4765–76. http://dx.doi.org/10.1109/tsp.2009.2026513.

19

Mishra, Sushmit, Thomas Lunner, Stefan Stenfelt, Jerker Rönnberg, and Mary Rudner. "Visual Information Can Hinder Working Memory Processing of Speech." Journal of Speech, Language, and Hearing Research 56, no. 4 (August 2013): 1120–32. http://dx.doi.org/10.1044/1092-4388(2012/12-0033).

Annotation:
Purpose The purpose of the present study was to evaluate the new Cognitive Spare Capacity Test (CSCT), which measures aspects of working memory capacity for heard speech in the audiovisual and auditory-only modalities of presentation. Method In Experiment 1, 20 young adults with normal hearing performed the CSCT and an independent battery of cognitive tests. In the CSCT, they listened to and recalled 2-digit numbers according to instructions inducing executive processing at 2 different memory loads. In Experiment 2, 10 participants performed a less executively demanding free recall task using
20

Borrie, Stephanie A. "Visual speech information: A help or hindrance in perceptual processing of dysarthric speech." Journal of the Acoustical Society of America 137, no. 3 (March 2015): 1473–80. http://dx.doi.org/10.1121/1.4913770.

21

Wayne, Rachel V., and Ingrid S. Johnsrude. "The role of visual speech information in supporting perceptual learning of degraded speech." Journal of Experimental Psychology: Applied 18, no. 4 (2012): 419–35. http://dx.doi.org/10.1037/a0031042.

22

Winneke, Axel H., and Natalie A. Phillips. "Brain processes underlying the integration of audio-visual speech and non-speech information." Brain and Cognition 67 (June 2008): 45. http://dx.doi.org/10.1016/j.bandc.2008.02.096.

23

Sánchez-García, Carolina, Sonia Kandel, Christophe Savariaux, Nara Ikumi, and Salvador Soto-Faraco. "Time course of audio–visual phoneme identification: A cross-modal gating study." Seeing and Perceiving 25 (2012): 194. http://dx.doi.org/10.1163/187847612x648233.

Annotation:
When both present, visual and auditory information are combined in order to decode the speech signal. Past research has addressed to what extent visual information contributes to distinguish confusable speech sounds, but usually ignoring the continuous nature of speech perception. Here we tap at the temporal course of the contribution of visual and auditory information during the process of speech perception. To this end, we designed an audio–visual gating task with videos recorded with high speed camera. Participants were asked to identify gradually longer fragments of pseudowords varying in
24

Yordamlı, Arzu, and Doğu Erdener. "Auditory–Visual Speech Integration in Bipolar Disorder: A Preliminary Study." Languages 3, no. 4 (October 17, 2018): 38. http://dx.doi.org/10.3390/languages3040038.

Annotation:
This study aimed to investigate how individuals with bipolar disorder integrate auditory and visual speech information compared to healthy individuals. Furthermore, we wanted to see whether there were any differences between manic and depressive episode bipolar disorder patients with respect to auditory and visual speech integration. It was hypothesized that the bipolar group’s auditory–visual speech integration would be weaker than that of the control group. Further, it was predicted that those in the manic phase of bipolar disorder would integrate visual speech information more robustly than
25

Drijvers, Linda, and Asli Özyürek. "Visual Context Enhanced: The Joint Contribution of Iconic Gestures and Visible Speech to Degraded Speech Comprehension." Journal of Speech, Language, and Hearing Research 60, no. 1 (January 2017): 212–22. http://dx.doi.org/10.1044/2016_jslhr-h-16-0101.

Annotation:
Purpose This study investigated whether and to what extent iconic co-speech gestures contribute to information from visible speech to enhance degraded speech comprehension at different levels of noise-vocoding. Previous studies of the contributions of these 2 visual articulators to speech comprehension have only been performed separately. Method Twenty participants watched videos of an actress uttering an action verb and completed a free-recall task. The videos were presented in 3 speech conditions (2-band noise-vocoding, 6-band noise-vocoding, clear), 3 multimodal conditions (speech + lips bl
26

Rosenblum, Lawrence D. "Speech Perception as a Multimodal Phenomenon." Current Directions in Psychological Science 17, no. 6 (December 2008): 405–9. http://dx.doi.org/10.1111/j.1467-8721.2008.00615.x.

Annotation:
Speech perception is inherently multimodal. Visual speech (lip-reading) information is used by all perceivers and readily integrates with auditory speech. Imaging research suggests that the brain treats auditory and visual speech similarly. These findings have led some researchers to consider that speech perception works by extracting amodal information that takes the same form across modalities. From this perspective, speech integration is a property of the input information itself. Amodal speech information could explain the reported automaticity, immediacy, and completeness of audiovisual s
27

Mishra, Saumya, Anup Kumar Gupta, and Puneet Gupta. "DARE: Deceiving Audio–Visual speech Recognition model." Knowledge-Based Systems 232 (November 2021): 107503. http://dx.doi.org/10.1016/j.knosys.2021.107503.

28

Callan, Daniel E., Jeffery A. Jones, Kevin Munhall, Christian Kroos, Akiko M. Callan, and Eric Vatikiotis-Bateson. "Multisensory Integration Sites Identified by Perception of Spatial Wavelet Filtered Visual Speech Gesture Information." Journal of Cognitive Neuroscience 16, no. 5 (June 2004): 805–16. http://dx.doi.org/10.1162/089892904970771.

Annotation:
Perception of speech is improved when presentation of the audio signal is accompanied by concordant visual speech gesture information. This enhancement is most prevalent when the audio signal is degraded. One potential means by which the brain affords perceptual enhancement is thought to be through the integration of concordant information from multiple sensory channels in a common site of convergence, multisensory integration (MSI) sites. Some studies have identified potential sites in the superior temporal gyrus/sulcus (STG/S) that are responsive to multisensory information from the auditory
29

Hertrich, Ingo, Susanne Dietrich, and Hermann Ackermann. "Cross-modal Interactions during Perception of Audiovisual Speech and Nonspeech Signals: An fMRI Study." Journal of Cognitive Neuroscience 23, no. 1 (January 2011): 221–37. http://dx.doi.org/10.1162/jocn.2010.21421.

Annotation:
During speech communication, visual information may interact with the auditory system at various processing stages. Most noteworthy, recent magnetoencephalography (MEG) data provided first evidence for early and preattentive phonetic/phonological encoding of the visual data stream—prior to its fusion with auditory phonological features [Hertrich, I., Mathiak, K., Lutzenberger, W., & Ackermann, H. Time course of early audiovisual interactions during speech and non-speech central-auditory processing: An MEG study. Journal of Cognitive Neuroscience, 21, 259–274, 2009]. Using functional magnet
30

Everdell, Ian T., Heidi Marsh, Micheal D. Yurick, Kevin G. Munhall, and Martin Paré. "Gaze Behaviour in Audiovisual Speech Perception: Asymmetrical Distribution of Face-Directed Fixations." Perception 36, no. 10 (October 2007): 1535–45. http://dx.doi.org/10.1068/p5852.

Annotation:
Speech perception under natural conditions entails integration of auditory and visual information. Understanding how visual and auditory speech information are integrated requires detailed descriptions of the nature and processing of visual speech information. To understand better the process of gathering visual information, we studied the distribution of face-directed fixations of humans performing an audiovisual speech perception task to characterise the degree of asymmetrical viewing and its relationship to speech intelligibility. Participants showed stronger gaze fixation asymmetries while
31

Jesse, Alexandra, Nick Vrignaud, Michael M. Cohen, and Dominic W. Massaro. "The processing of information from multiple sources in simultaneous interpreting." Interpreting. International Journal of Research and Practice in Interpreting 5, no. 2 (December 31, 2000): 95–115. http://dx.doi.org/10.1075/intp.5.2.04jes.

Annotation:
Language processing is influenced by multiple sources of information. We examined whether the performance in simultaneous interpreting would be improved when providing two sources of information, the auditory speech as well as corresponding lip-movements, in comparison to presenting the auditory speech alone. Although there was an improvement in sentence recognition when presented with visible speech, there was no difference in performance between these two presentation conditions when bilinguals simultaneously interpreted from English to German or from English to Spanish. The reason why visua
32

Jia, Xi Bin, and Mei Xia Zheng. "Video Based Visual Speech Feature Model Construction." Applied Mechanics and Materials 182-183 (June 2012): 1367–71. http://dx.doi.org/10.4028/www.scientific.net/amm.182-183.1367.

Annotation:
This paper aims to give a solutions for the construction of chinese visual speech feature model based on HMM. We propose and discuss three kind representation model of the visual speech which are lip geometrical features, lip motion features and lip texture features. The model combines the advantages of the local LBP and global DCT texture information together, which shows better performance than the single feature. Equally the model combines the advantages of the local LBP and geometrical information together is better than single feature. By computing the recognition rate of the visemes from
33

Shi, Li Juan, Ping Feng, Jian Zhao, Li Rong Wang, and Na Che. "Study on Dual Mode Fusion Method of Video and Audio." Applied Mechanics and Materials 734 (February 2015): 412–15. http://dx.doi.org/10.4028/www.scientific.net/amm.734.412.

Annotation:
In order to solve the hearing-impaired students in class only rely on sign language, amount of classroom information received less, This paper studies video and audio dual mode fusion algorithm combined with lip reading、speech recognition technology and information fusion technology.First ,speech feature extraction, processing of speech signal, the speech synchronization output text. At the same time, extraction of video features, voice and video signal fusion, Make voice information into visual information that the hearing-impaired students can receive. Make the students receive text messages
34

Dias, James W., and Lawrence D. Rosenblum. "Visual Influences on Interactive Speech Alignment." Perception 40, no. 12 (January 1, 2011): 1457–66. http://dx.doi.org/10.1068/p7071.

Annotation:
Speech alignment describes the unconscious tendency to produce speech that shares characteristics with perceived speech (eg Goldinger, 1998 Psychological Review105 251 – 279). In the present study we evaluated whether seeing a talker enhances alignment over just hearing a talker. Pairs of participants performed an interactive search task which required them to repeatedly utter a series of keywords. Half of the pairs performed the task while hearing each other, while the other half could see and hear each other. Alignment was assessed by naive judges rating the similarity of interlocutors' keyw
35

Campbell, Ruth. "The processing of audio-visual speech: empirical and neural bases." Philosophical Transactions of the Royal Society B: Biological Sciences 363, no. 1493 (September 7, 2007): 1001–10. http://dx.doi.org/10.1098/rstb.2007.2155.

Annotation:
In this selective review, I outline a number of ways in which seeing the talker affects auditory perception of speech, including, but not confined to, the McGurk effect. To date, studies suggest that all linguistic levels are susceptible to visual influence, and that two main modes of processing can be described: a complementary mode, whereby vision provides information more efficiently than hearing for some under-specified parts of the speech stream, and a correlated mode, whereby vision partially duplicates information about dynamic articulatory patterning. Cortical correlates of seen speech
36

Metzger, Brian A., John F. Magnotti, Elizabeth Nesbitt, Daniel Yoshor, and Michael S. Beauchamp. "Cross-modal suppression model of speech perception: Visual information drives suppressive interactions between visual and auditory speech in pSTG." Journal of Vision 20, no. 11 (October 20, 2020): 434. http://dx.doi.org/10.1167/jov.20.11.434.

37

Irwin, Julia, Trey Avery, Lawrence Brancazio, Jacqueline Turcios, Kayleigh Ryherd, and Nicole Landi. "Electrophysiological Indices of Audiovisual Speech Perception: Beyond the McGurk Effect and Speech in Noise." Multisensory Research 31, no. 1-2 (2018): 39–56. http://dx.doi.org/10.1163/22134808-00002580.

Annotation:
Visual information on a talker’s face can influence what a listener hears. Commonly used approaches to study this include mismatched audiovisual stimuli (e.g., McGurk type stimuli) or visual speech in auditory noise. In this paper we discuss potential limitations of these approaches and introduce a novel visual phonemic restoration method. This method always presents the same visual stimulus (e.g., /ba/) dubbed with a matched auditory stimulus (/ba/) or one that has weakened consonantal information and sounds more /a/-like). When this reduced auditory stimulus (or /a/) is dubbed with the visua
38

Van Engen, Kristin J., Jasmine E. B. Phelps, Rajka Smiljanic, and Bharath Chandrasekaran. "Enhancing Speech Intelligibility: Interactions Among Context, Modality, Speech Style, and Masker." Journal of Speech, Language, and Hearing Research 57, no. 5 (October 2014): 1908–18. http://dx.doi.org/10.1044/jslhr-h-13-0076.

Annotation:
Purpose The authors sought to investigate interactions among intelligibility-enhancing speech cues (i.e., semantic context, clearly produced speech, and visual information) across a range of masking conditions. Method Sentence recognition in noise was assessed for 29 normal-hearing listeners. Testing included semantically normal and anomalous sentences, conversational and clear speaking styles, auditory-only (AO) and audiovisual (AV) presentation modalities, and 4 different maskers (2-talker babble, 4-talker babble, 8-talker babble, and speech-shaped noise). Results Semantic context, clear spe
39

Records, Nancy L. "A Measure of the Contribution of a Gesture to the Perception of Speech in Listeners With Aphasia." Journal of Speech, Language, and Hearing Research 37, no. 5 (October 1994): 1086–99. http://dx.doi.org/10.1044/jshr.3705.1086.

Annotation:
The contribution of a visual source of contextual information to speech perception was measured in 12 listeners with aphasia. The three experimental conditions were: Visual-Only (referential gesture), Auditory-Only (computer-edited speech), and Audio-Visual. In a two-alternative, forced-choice task, subjects indicated which picture had been requested. The stimuli were first validated with listeners without brain damage. The listeners with aphasia were subgrouped as having high or low language comprehension based on standardized test scores. Results showed a significantly larger contribution of
40

Helfer, Karen S. "Auditory and Auditory-Visual Perception of Clear and Conversational Speech." Journal of Speech, Language, and Hearing Research 40, no. 2 (April 1997): 432–43. http://dx.doi.org/10.1044/jslhr.4002.432.

Annotation:
Research has shown that speaking in a deliberately clear manner can improve the accuracy of auditory speech recognition. Allowing listeners access to visual speech cues also enhances speech understanding. Whether the nature of information provided by speaking clearly and by using visual speech cues is redundant has not been determined. This study examined how speaking mode (clear vs. conversational) and presentation mode (auditory vs. auditory-visual) influenced the perception of words within nonsense sentences. In Experiment 1, 30 young listeners with normal hearing responded to videotaped st
41

Taitelbaum-Swead, Riki, and Leah Fostick. "Auditory and visual information in speech perception: A developmental perspective." Clinical Linguistics & Phonetics 30, no. 7 (March 30, 2016): 531–45. http://dx.doi.org/10.3109/02699206.2016.1151938.

42

Yakel, Deborah A., and Lawrence D. Rosenblum. "Time‐varying information for vowel identification in visual speech perception." Journal of the Acoustical Society of America 108, no. 5 (November 2000): 2482. http://dx.doi.org/10.1121/1.4743160.

43

Johnson, Jennifer A., and Lawrence D. Rosenblum. "Hemispheric differences in perceiving and integrating dynamic visual speech information." Journal of the Acoustical Society of America 100, no. 4 (October 1996): 2570. http://dx.doi.org/10.1121/1.417400.

44

Ogihara, Akio, Akira Shintani, Naoshi Doi, and Kunio Fukunaga. "HMM Speech Recognition Using Fusion of Visual and Auditory Information." IEEJ Transactions on Electronics, Information and Systems 115, no. 11 (1995): 1317–24. http://dx.doi.org/10.1541/ieejeiss1987.115.11_1317.

45

Keintz, Connie K., Kate Bunton, and Jeannette D. Hoit. "Influence of Visual Information on the Intelligibility of Dysarthric Speech." American Journal of Speech-Language Pathology 16, no. 3 (August 2007): 222–34. http://dx.doi.org/10.1044/1058-0360(2007/027).

46

Yuan, Yi, Andrew Lotto, and Yonghee Oh. "Temporal cues from visual information benefit speech perception in noise." Journal of the Acoustical Society of America 146, no. 4 (October 2019): 3056. http://dx.doi.org/10.1121/1.5137604.

47

Blank, Helen, and Katharina von Kriegstein. "Mechanisms of enhancing visual–speech recognition by prior auditory information." NeuroImage 65 (January 2013): 109–18. http://dx.doi.org/10.1016/j.neuroimage.2012.09.047.

48

Moon, Il-Joon, Mini Jo, Ga-Young Kim, Nicolas Kim, Young-Sang Cho, Sung-Hwa Hong, and Hye-Yoon Seol. "How Does a Face Mask Impact Speech Perception?" Healthcare 10, no. 9 (September 7, 2022): 1709. http://dx.doi.org/10.3390/healthcare10091709.

Annotation:
Face masks are mandatory during the COVID-19 pandemic, leading to attenuation of sound energy and loss of visual cues which are important for communication. This study explores how a face mask affects speech performance for individuals with and without hearing loss. Four video recordings (a female speaker with and without a face mask and a male speaker with and without a face mask) were used to examine individuals’ speech performance. The participants completed a listen-and-repeat task while watching four types of video recordings. Acoustic characteristics of speech signals based on mask type
49

Kubicek, Claudia, Anne Hillairet de Boisferon, Eve Dupierrix, Hélène Lœvenbruck, Judit Gervain, and Gudrun Schwarzer. "Face-scanning behavior to silently-talking faces in 12-month-old infants: The impact of pre-exposed auditory speech." International Journal of Behavioral Development 37, no. 2 (February 25, 2013): 106–10. http://dx.doi.org/10.1177/0165025412473016.

Annotation:
The present eye-tracking study aimed to investigate the impact of auditory speech information on 12-month-olds’ gaze behavior to silently-talking faces. We examined German infants’ face-scanning behavior to side-by-side presentation of a bilingual speaker’s face silently speaking German utterances on one side and French on the other side, before and after auditory familiarization with one of the two languages. The results showed that 12-month-old infants showed no general visual preference for either of the visual speeches, neither before nor after auditory input. But, infants who heard native
50

McCotter, Maxine V., and Timothy R. Jordan. "The Role of Facial Colour and Luminance in Visual and Audiovisual Speech Perception." Perception 32, no. 8 (August 2003): 921–36. http://dx.doi.org/10.1068/p3316.

Annotation:
We conducted four experiments to investigate the role of colour and luminance information in visual and audiovisual speech perception. In experiments la (stimuli presented in quiet conditions) and 1b (stimuli presented in auditory noise), face display types comprised naturalistic colour (NC), grey-scale (GS), and luminance inverted (LI) faces. In experiments 2a (quiet) and 2b (noise), face display types comprised NC, colour inverted (CI), LI, and colour and luminance inverted (CLI) faces. Six syllables and twenty-two words were used to produce auditory and visual speech stimuli. Auditory and v