Journal articles on the topic 'Audiovisual speech processing'

Consult the top 50 journal articles for your research on the topic 'Audiovisual speech processing.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Chen, Tsuhan. "Audiovisual speech processing." IEEE Signal Processing Magazine 18, no. 1 (2001): 9–21. http://dx.doi.org/10.1109/79.911195.

2

Vatikiotis-Bateson, Eric, and Takaaki Kuratate. "Overview of audiovisual speech processing." Acoustical Science and Technology 33, no. 3 (2012): 135–41. http://dx.doi.org/10.1250/ast.33.135.

3

Francisco, Ana A., Alexandra Jesse, Margriet A. Groen, and James M. McQueen. "A General Audiovisual Temporal Processing Deficit in Adult Readers With Dyslexia." Journal of Speech, Language, and Hearing Research 60, no. 1 (2017): 144–58. http://dx.doi.org/10.1044/2016_jslhr-h-15-0375.

Abstract:
Purpose: Because reading is an audiovisual process, reading impairment may reflect an audiovisual processing deficit. The aim of the present study was to test the existence and scope of such a deficit in adult readers with dyslexia. Method: We tested 39 typical readers and 51 adult readers with dyslexia on their sensitivity to the simultaneity of audiovisual speech and nonspeech stimuli, their time window of audiovisual integration for speech (using incongruent /aCa/ syllables), and their audiovisual perception of phonetic categories. Results: Adult readers with dyslexia showed less sensitivity t
4

Bernstein, Lynne E., Edward T. Auer, Michael Wagner, and Curtis W. Ponton. "Spatiotemporal dynamics of audiovisual speech processing." NeuroImage 39, no. 1 (2008): 423–35. http://dx.doi.org/10.1016/j.neuroimage.2007.08.035.

5

Dunham-Carr, Kacie, Jacob I. Feldman, David M. Simon, et al. "The Processing of Audiovisual Speech Is Linked with Vocabulary in Autistic and Nonautistic Children: An ERP Study." Brain Sciences 13, no. 7 (2023): 1043. http://dx.doi.org/10.3390/brainsci13071043.

Abstract:
Explaining individual differences in vocabulary in autism is critical, as understanding and using words to communicate are key predictors of long-term outcomes for autistic individuals. Differences in audiovisual speech processing may explain variability in vocabulary in autism. The efficiency of audiovisual speech processing can be indexed via amplitude suppression, wherein the amplitude of the event-related potential (ERP) is reduced at the P2 component in response to audiovisual speech compared to auditory-only speech. This study used electroencephalography (EEG) to measure P2 amplitudes in
6

Sams, M. "Audiovisual Speech Perception." Perception 26, no. 1_suppl (1997): 347. http://dx.doi.org/10.1068/v970029.

Abstract:
Persons with hearing loss use visual information from articulation to improve their speech perception. Even persons with normal hearing utilise visual information, especially when the stimulus-to-noise ratio is poor. A dramatic demonstration of the role of vision in speech perception is the audiovisual fusion called the ‘McGurk effect’. When the auditory syllable /pa/ is presented in synchrony with the face articulating the syllable /ka/, the subject usually perceives /ta/ or /ka/. The illusory perception is clearly auditory in nature. We recently studied the audiovisual fusion (acoustical /p/
7

Ojanen, Ville, Riikka Möttönen, Johanna Pekkola, et al. "Processing of audiovisual speech in Broca's area." NeuroImage 25, no. 2 (2005): 333–38. http://dx.doi.org/10.1016/j.neuroimage.2004.12.001.

8

Stevenson, Ryan A., Nicholas A. Altieri, Sunah Kim, David B. Pisoni, and Thomas W. James. "Neural processing of asynchronous audiovisual speech perception." NeuroImage 49, no. 4 (2010): 3308–18. http://dx.doi.org/10.1016/j.neuroimage.2009.12.001.

9

Hamilton, Roy H., Jeffrey T. Shenton, and H. Branch Coslett. "An acquired deficit of audiovisual speech processing." Brain and Language 98, no. 1 (2006): 66–73. http://dx.doi.org/10.1016/j.bandl.2006.02.001.

10

Tomalski, Przemysław. "Developmental Trajectory of Audiovisual Speech Integration in Early Infancy. A Review of Studies Using the McGurk Paradigm." Psychology of Language and Communication 19, no. 2 (2015): 77–100. http://dx.doi.org/10.1515/plc-2015-0006.

Abstract:
Apart from their remarkable phonological skills, young infants prior to their first birthday show the ability to match the mouth articulation they see with the speech sounds they hear. They are able to detect the audiovisual conflict of speech and to selectively attend to the articulating mouth depending on audiovisual congruency. Early audiovisual speech processing is an important aspect of language development, related not only to phonological knowledge, but also to language production during subsequent years. This article reviews recent experimental work delineating the complex development
11

Ozker, Muge, Inga M. Schepers, John F. Magnotti, Daniel Yoshor, and Michael S. Beauchamp. "A Double Dissociation between Anterior and Posterior Superior Temporal Gyrus for Processing Audiovisual Speech Demonstrated by Electrocorticography." Journal of Cognitive Neuroscience 29, no. 6 (2017): 1044–60. http://dx.doi.org/10.1162/jocn_a_01110.

Abstract:
Human speech can be comprehended using only auditory information from the talker's voice. However, comprehension is improved if the talker's face is visible, especially if the auditory information is degraded as occurs in noisy environments or with hearing loss. We explored the neural substrates of audiovisual speech perception using electrocorticography, direct recording of neural activity using electrodes implanted on the cortical surface. We observed a double dissociation in the responses to audiovisual speech with clear and noisy auditory component within the superior temporal gyrus (STG),
12

Simon, David M., and Mark T. Wallace. "Integration and Temporal Processing of Asynchronous Audiovisual Speech." Journal of Cognitive Neuroscience 30, no. 3 (2018): 319–37. http://dx.doi.org/10.1162/jocn_a_01205.

Abstract:
Multisensory integration of visual mouth movements with auditory speech is known to offer substantial perceptual benefits, particularly under challenging (i.e., noisy) acoustic conditions. Previous work characterizing this process has found that ERPs to auditory speech are of shorter latency and smaller magnitude in the presence of visual speech. We sought to determine the dependency of these effects on the temporal relationship between the auditory and visual speech streams using EEG. We found that reductions in ERP latency and suppression of ERP amplitude are maximal when the visual signal p
13

de la Vaux, Steven K., and Dominic W. Massaro. "Audiovisual speech gating: examining information and information processing." Cognitive Processing 5, no. 2 (2004): 106–12. http://dx.doi.org/10.1007/s10339-004-0014-2.

14

Alsius, Agnès, Martin Paré, and Kevin G. Munhall. "Forty Years After Hearing Lips and Seeing Voices: the McGurk Effect Revisited." Multisensory Research 31, no. 1-2 (2018): 111–44. http://dx.doi.org/10.1163/22134808-00002565.

Abstract:
Since its discovery 40 years ago, the McGurk illusion has been usually cited as a prototypical paradigmatic case of multisensory binding in humans, and has been extensively used in speech perception studies as a proxy measure for audiovisual integration mechanisms. Despite the well-established practice of using the McGurk illusion as a tool for studying the mechanisms underlying audiovisual speech integration, the magnitude of the illusion varies enormously across studies. Furthermore, the processing of McGurk stimuli differs from congruent audiovisual processing at both phenomenological and n
15

Moradi, Shahram, and Jerker Rönnberg. "Perceptual Doping: A Hypothesis on How Early Audiovisual Speech Stimulation Enhances Subsequent Auditory Speech Processing." Brain Sciences 13, no. 4 (2023): 601. http://dx.doi.org/10.3390/brainsci13040601.

Abstract:
Face-to-face communication is one of the most common means of communication in daily life. We benefit from both auditory and visual speech signals that lead to better language understanding. People prefer face-to-face communication when access to auditory speech cues is limited because of background noise in the surrounding environment or in the case of hearing impairment. We demonstrated that an early, short period of exposure to audiovisual speech stimuli facilitates subsequent auditory processing of speech stimuli for correct identification, but early auditory exposure does not. We called t
16

Ujiie, Yuta, and Kohske Takahashi. "Weaker McGurk Effect for Rubin’s Vase-Type Speech in People With High Autistic Traits." Multisensory Research 34, no. 6 (2021): 663–79. http://dx.doi.org/10.1163/22134808-bja10047.

Abstract:
While visual information from facial speech modulates auditory speech perception, it is less influential on audiovisual speech perception among autistic individuals than among typically developed individuals. In this study, we investigated the relationship between autistic traits (Autism-Spectrum Quotient; AQ) and the influence of visual speech on the recognition of Rubin’s vase-type speech stimuli with degraded facial speech information. Participants were 31 university students (13 males and 18 females; mean age: 19.2, SD: 1.13 years) who reported normal (or corrected-to-normal) hear
17

Drebing, Daniel, Jared Medina, H. Branch Coslett, Jeffrey T. Shenton, and Roy H. Hamilton. "An acquired deficit of intermodal temporal processing for audiovisual speech: A case study." Seeing and Perceiving 25 (2012): 186. http://dx.doi.org/10.1163/187847612x648152.

Abstract:
Integrating sensory information across modalities is necessary for a cohesive experience of the world; disrupting the ability to bind the multisensory stimuli arising from an event leads to a disjointed and confusing percept. We previously reported (Hamilton et al., 2006) a patient, AWF, who suffered an acute neural incident after which he displayed a distinct inability to integrate auditory and visual speech information. While our prior experiments involving AWF suggested that he had a deficit of audiovisual speech processing, they did not explore the hypothesis that his deficits in audiovisu
18

Thézé, Raphaël, Anne-Lise Giraud, and Pierre Mégevand. "The phase of cortical oscillations determines the perceptual fate of visual cues in naturalistic audiovisual speech." Science Advances 6, no. 45 (2020): eabc6348. http://dx.doi.org/10.1126/sciadv.abc6348.

Abstract:
When we see our interlocutor, our brain seamlessly extracts visual cues from their face and processes them along with the sound of their voice, making speech an intrinsically multimodal signal. Visual cues are especially important in noisy environments, when the auditory signal is less reliable. Neuronal oscillations might be involved in the cortical processing of audiovisual speech by selecting which sensory channel contributes more to perception. To test this, we designed computer-generated naturalistic audiovisual speech stimuli where one mismatched phoneme-viseme pair in a key word of sent
19

Mishra, Sushmit, Thomas Lunner, Stefan Stenfelt, Jerker Rönnberg, and Mary Rudner. "Visual Information Can Hinder Working Memory Processing of Speech." Journal of Speech, Language, and Hearing Research 56, no. 4 (2013): 1120–32. http://dx.doi.org/10.1044/1092-4388(2012/12-0033).

Abstract:
Purpose: The purpose of the present study was to evaluate the new Cognitive Spare Capacity Test (CSCT), which measures aspects of working memory capacity for heard speech in the audiovisual and auditory-only modalities of presentation. Method: In Experiment 1, 20 young adults with normal hearing performed the CSCT and an independent battery of cognitive tests. In the CSCT, they listened to and recalled 2-digit numbers according to instructions inducing executive processing at 2 different memory loads. In Experiment 2, 10 participants performed a less executively demanding free recall task using
20

Hertrich, Ingo, Hermann Ackermann, Klaus Mathiak, and Werner Lutzenberger. "Early stages of audiovisual speech processing—a magnetoencephalography study." Journal of the Acoustical Society of America 121, no. 5 (2007): 3044. http://dx.doi.org/10.1121/1.4781737.

21

Harwood, Vanessa, Alisa Baron, Daniel Kleinman, Luca Campanelli, Julia Irwin, and Nicole Landi. "Event-Related Potentials in Assessing Visual Speech Cues in the Broader Autism Phenotype: Evidence from a Phonemic Restoration Paradigm." Brain Sciences 13, no. 7 (2023): 1011. http://dx.doi.org/10.3390/brainsci13071011.

Abstract:
Audiovisual speech perception includes the simultaneous processing of auditory and visual speech. Deficits in audiovisual speech perception are reported in autistic individuals; however, less is known regarding audiovisual speech perception within the broader autism phenotype (BAP), which includes individuals with elevated, yet subclinical, levels of autistic traits. We investigate the neural indices of audiovisual speech perception in adults exhibiting a range of autism-like traits using event-related potentials (ERPs) in a phonemic restoration paradigm. In this paradigm, we consider conditio
22

Vroomen, Jean, and Jeroen J. Stekelenburg. "Visual Anticipatory Information Modulates Multisensory Interactions of Artificial Audiovisual Stimuli." Journal of Cognitive Neuroscience 22, no. 7 (2010): 1583–96. http://dx.doi.org/10.1162/jocn.2009.21308.

Abstract:
The neural activity of speech sound processing (the N1 component of the auditory ERP) can be suppressed if a speech sound is accompanied by concordant lip movements. Here we demonstrate that this audiovisual interaction is neither speech specific nor linked to humanlike actions but can be observed with artificial stimuli if their timing is made predictable. In Experiment 1, a pure tone synchronized with a deformation of a rectangle induced a smaller auditory N1 than auditory-only presentations if the temporal occurrence of this audiovisual event was made predictable by two moving disks that to
23

McCotter, Maxine V., and Timothy R. Jordan. "The Role of Facial Colour and Luminance in Visual and Audiovisual Speech Perception." Perception 32, no. 8 (2003): 921–36. http://dx.doi.org/10.1068/p3316.

Abstract:
We conducted four experiments to investigate the role of colour and luminance information in visual and audiovisual speech perception. In experiments 1a (stimuli presented in quiet conditions) and 1b (stimuli presented in auditory noise), face display types comprised naturalistic colour (NC), grey-scale (GS), and luminance inverted (LI) faces. In experiments 2a (quiet) and 2b (noise), face display types comprised NC, colour inverted (CI), LI, and colour and luminance inverted (CLI) faces. Six syllables and twenty-two words were used to produce auditory and visual speech stimuli. Auditory and v
24

Ghaneirad, Erfan, Ellyn Saenger, Gregor R. Szycik, et al. "Deficient Audiovisual Speech Perception in Schizophrenia: An ERP Study." Brain Sciences 13, no. 6 (2023): 970. http://dx.doi.org/10.3390/brainsci13060970.

Abstract:
In everyday verbal communication, auditory speech perception is often disturbed by background noise. Especially in disadvantageous hearing conditions, additional visual articulatory information (e.g., lip movement) can positively contribute to speech comprehension. Patients with schizophrenia (SZs) demonstrate an aberrant ability to integrate visual and auditory sensory input during speech perception. Current findings about underlying neural mechanisms of this deficit are inconsistent. Particularly and despite the importance of early sensory processing in speech perception, very few studies ha
25

Roa Romero, Yadira, Daniel Senkowski, and Julian Keil. "Early and late beta-band power reflect audiovisual perception in the McGurk illusion." Journal of Neurophysiology 113, no. 7 (2015): 2342–50. http://dx.doi.org/10.1152/jn.00783.2014.

Abstract:
The McGurk illusion is a prominent example of audiovisual speech perception and the influence that visual stimuli can have on auditory perception. In this illusion, a visual speech stimulus influences the perception of an incongruent auditory stimulus, resulting in a fused novel percept. In this high-density electroencephalography (EEG) study, we were interested in the neural signatures of the subjective percept of the McGurk illusion as a phenomenon of speech-specific multisensory integration. Therefore, we examined the role of cortical oscillations and event-related responses in the percepti
26

Tye-Murray, Nancy, Brent P. Spehar, Joel Myerson, Sandra Hale, and Mitchell S. Sommers. "The self-advantage in visual speech processing enhances audiovisual speech recognition in noise." Psychonomic Bulletin & Review 22, no. 4 (2014): 1048–53. http://dx.doi.org/10.3758/s13423-014-0774-3.

27

Bernstein, Lynne E., Zhong-Lin Lu, and Jintao Jiang. "Quantified acoustic–optical speech signal incongruity identifies cortical sites of audiovisual speech processing." Brain Research 1242 (November 2008): 172–84. http://dx.doi.org/10.1016/j.brainres.2008.04.018.

28

Dunham, Kacie, Alisa Zoltowski, Jacob I. Feldman, et al. "Neural Correlates of Audiovisual Speech Processing in Autistic and Non-Autistic Youth." Multisensory Research 36, no. 3 (2023): 263–88. http://dx.doi.org/10.1163/22134808-bja10093.

Abstract:
Autistic youth demonstrate differences in processing multisensory information, particularly in temporal processing of multisensory speech. Extensive research has identified several key brain regions for multisensory speech processing in non-autistic adults, including the superior temporal sulcus (STS) and insula, but it is unclear to what extent these regions are involved in temporal processing of multisensory speech in autistic youth. As a first step in exploring the neural substrates of multisensory temporal processing in this clinical population, we employed functional magnetic res
29

Crosse, Michael J., and Edmund C. Lalor. "The cortical representation of the speech envelope is earlier for audiovisual speech than audio speech." Journal of Neurophysiology 111, no. 7 (2014): 1400–1408. http://dx.doi.org/10.1152/jn.00690.2013.

Abstract:
Visual speech can greatly enhance a listener's comprehension of auditory speech when they are presented simultaneously. Efforts to determine the neural underpinnings of this phenomenon have been hampered by the limited temporal resolution of hemodynamic imaging and the fact that EEG and magnetoencephalographic data are usually analyzed in response to simple, discrete stimuli. Recent research has shown that neuronal activity in human auditory cortex tracks the envelope of natural speech. Here, we exploit this finding by estimating a linear forward-mapping between the speech envelope and EEG dat
30

Loh, Marco, Gabriele Schmid, Gustavo Deco, and Wolfram Ziegler. "Audiovisual Matching in Speech and Nonspeech Sounds: A Neurodynamical Model." Journal of Cognitive Neuroscience 22, no. 2 (2010): 240–47. http://dx.doi.org/10.1162/jocn.2009.21202.

Abstract:
Audiovisual speech perception provides an opportunity to investigate the mechanisms underlying multimodal processing. By using nonspeech stimuli, it is possible to investigate the degree to which audiovisual processing is specific to the speech domain. It has been shown in a match-to-sample design that matching across modalities is more difficult in the nonspeech domain as compared to the speech domain. We constructed a biophysically realistic neural network model simulating this experimental evidence. We propose that a stronger connection between modalities in speech underlies the behavioral
31

Tiippana, Kaisa. "Advances in Understanding the Phenomena and Processing in Audiovisual Speech Perception." Brain Sciences 13, no. 9 (2023): 1345. http://dx.doi.org/10.3390/brainsci13091345.

32

Lalonde, Kaylah, and Rachael Frush Holt. "Audiovisual speech integration development at varying levels of perceptual processing." Journal of the Acoustical Society of America 136, no. 4 (2014): 2263. http://dx.doi.org/10.1121/1.4900174.

33

Lalonde, Kaylah, and Rachael Frush Holt. "Audiovisual speech perception development at varying levels of perceptual processing." Journal of the Acoustical Society of America 139, no. 4 (2016): 1713–23. http://dx.doi.org/10.1121/1.4945590.

34

Zhang, Yang, Bing Cheng, Tess Koerner, Christine Cao, Edward Carney, and Yue Wang. "Cortical processing of audiovisual speech perception in infancy and adulthood." Journal of the Acoustical Society of America 134, no. 5 (2013): 4234. http://dx.doi.org/10.1121/1.4831559.

35

Barrós-Loscertales, Alfonso, Noelia Ventura-Campos, Maya Visser, et al. "Neural correlates of audiovisual speech processing in a second language." Brain and Language 126, no. 3 (2013): 253–62. http://dx.doi.org/10.1016/j.bandl.2013.05.009.

36

Hällgren, Mathias, Birgitta Larsby, Björn Lyxell, and Stig Arlinger. "Evaluation of a Cognitive Test Battery in Young and Elderly Normal-Hearing and Hearing-Impaired Persons." Journal of the American Academy of Audiology 12, no. 07 (2001): 357–70. http://dx.doi.org/10.1055/s-0042-1745620.

Abstract:
A cognitive test battery sensitive to processes important for speech understanding was developed and investigated. Test stimuli are presented as text or in an auditory or audiovisual modality. The tests investigate phonologic processing and verbal information processing. Four subject groups, young/elderly with normal hearing and young/elderly with hearing impairment, each including 12 subjects, participated in the study. The only significant effect in the text modality was an age effect in the speed of performance, seen also in the auditory and audiovisual modalities. In the auditory a
37

Vakhshiteh, Fatemeh, and Farshad Almasganj. "Exploration of Properly Combined Audiovisual Representation with the Entropy Measure in Audiovisual Speech Recognition." Circuits, Systems, and Signal Processing 38, no. 6 (2018): 2523–43. http://dx.doi.org/10.1007/s00034-018-0975-5.

38

Costa-Giomi, Eugenia. "Mode of Presentation Affects Infants’ Preferential Attention to Singing and Speech." Music Perception 32, no. 2 (2014): 160–69. http://dx.doi.org/10.1525/mp.2014.32.2.160.

Abstract:
Almost from birth, infants prefer to attend to human vocalizations associated with speech over many other sounds. However, studies that have focused on infants’ differential attention to speech and singing have failed to show a speech listening bias. The purpose of the study was to investigate infants’ preferential attention to singing and speech presented in audiovisual and auditory mode. Using an infant-controlled preference procedure, 11-month-olds were presented with audiovisual stimuli depicting a woman singing or reciting a song (Experiment 1, audiovisual condition). The results showed t
39

Lalonde, Kaylah, and Grace A. Dwyer. "Visual phonemic knowledge and audiovisual speech-in-noise perception in school-age children." Journal of the Acoustical Society of America 153, no. 3_supplement (2023): A337. http://dx.doi.org/10.1121/10.0019067.

Abstract:
Our mental representations of speech sounds include information about the visible articulatory gestures that accompany different speech sounds. We call this visual phonemic knowledge. This study examined development of school-age children’s visual phonemic knowledge and their ability to use visual phonemic knowledge to supplement audiovisual speech processing. Sixty-two children (5–16 years) and 18 adults (19–35 years) completed auditory-only, visual-only, and audiovisual tests of consonant-vowel syllable repetition. Auditory-only and audiovisual conditions were presented in steady-state, spee
40

Pons, Ferran, Llorenç Andreu, Monica Sanz-Torrent, Lucía Buil-Legaz, and David J. Lewkowicz. "Perception of audio-visual speech synchrony in Spanish-speaking children with and without specific language impairment." Journal of Child Language 40, no. 3 (2012): 687–700. http://dx.doi.org/10.1017/s0305000912000189.

Abstract:
Speech perception involves the integration of auditory and visual articulatory information, and thus requires the perception of temporal synchrony between this information. There is evidence that children with specific language impairment (SLI) have difficulty with auditory speech perception but it is not known if this is also true for the integration of auditory and visual speech. Twenty Spanish-speaking children with SLI, twenty typically developing age-matched Spanish-speaking children, and twenty Spanish-speaking children matched for MLU-w participated in an eye-tracking study to i
41

Ahn, EunSeon, Areti Majumdar, Taraz G. Lee, and David Brang. "Evidence for a Causal Dissociation of the McGurk Effect and Congruent Audiovisual Speech Perception via TMS to the Left pSTS." Multisensory Research 37, no. 4-5 (2024): 341–63. http://dx.doi.org/10.1163/22134808-bja10129.

Abstract:
Congruent visual speech improves speech perception accuracy, particularly in noisy environments. Conversely, mismatched visual speech can alter what is heard, leading to an illusory percept that differs from the auditory and visual components, known as the McGurk effect. While prior transcranial magnetic stimulation (TMS) and neuroimaging studies have identified the left posterior superior temporal sulcus (pSTS) as a causal region involved in the generation of the McGurk effect, it remains unclear whether this region is critical only for this illusion or also for the more general bene
42

Vatakis, Argiro, and Charles Spence. "Assessing audiovisual saliency and visual-information content in the articulation of consonants and vowels on audiovisual temporal perception." Seeing and Perceiving 25 (2012): 29. http://dx.doi.org/10.1163/187847612x646514.

Abstract:
Research has revealed different temporal integration windows between and within different speech-tokens. The limited set of speech-tokens tested to date has not allowed for a proper evaluation of whether such differences are task- or stimulus-driven. We conducted a series of experiments to investigate how the physical differences associated with speech articulation affect the temporal aspects of audiovisual speech perception. Videos of consonants and vowels uttered by three speakers were presented. Participants made temporal order judgments (TOJs) regarding which speech-stream had been presented fir
43

Gijbels, Liesbeth, Adrian K. C. Lee, and Kaylah Lalonde. "Integration of audiovisual speech perception: From infancy to older adults." Journal of the Acoustical Society of America 157, no. 3 (2025): 1981–2000. https://doi.org/10.1121/10.0036137.

Abstract:
One of the most prevalent and relevant social experiences for humans — engaging in face-to-face conversations — is inherently multimodal. In the context of audiovisual (AV) speech perception, the visual cues from the speaker's face play a crucial role in language acquisition and in enhancing our comprehension of incoming auditory speech signals. Nonetheless, AV integration reflects substantial individual differences, which cannot be entirely accounted for by the information conveyed through the speech signal or the perceptual abilities of the individual. These differences illustrate changes in
44

Van der Burg, Erik, and Patrick T. Goodbourn. "Rapid, generalized adaptation to asynchronous audiovisual speech." Proceedings of the Royal Society B: Biological Sciences 282, no. 1804 (2015): 20143083. http://dx.doi.org/10.1098/rspb.2014.3083.

Abstract:
The brain is adaptive. The speed of propagation through air, and of low-level sensory processing, differs markedly between auditory and visual stimuli; yet the brain can adapt to compensate for the resulting cross-modal delays. Studies investigating temporal recalibration to audiovisual speech have used prolonged adaptation procedures, suggesting that adaptation is sluggish. Here, we show that adaptation to asynchronous audiovisual speech occurs rapidly. Participants viewed a brief clip of an actor pronouncing a single syllable. The voice was either advanced or delayed relative to the correspo
45

Paris, Tim, Jeesun Kim, and Christopher Davis. "Updating expectancies about audiovisual associations in speech." Seeing and Perceiving 25 (2012): 164. http://dx.doi.org/10.1163/187847612x647946.

Abstract:
The processing of multisensory information depends on the learned association between sensory cues. In the case of speech there is a well-learned association between the movements of the lips and the subsequent sound. That is, particular lip and mouth movements reliably lead to a specific sound. EEG and MEG studies that have investigated the differences between this ‘congruent’ AV association and other ‘incongruent’ associations have commonly reported ERP differences from 350 ms after sound onset. Using a 256 active electrode EEG system, we tested whether this ‘congruency effect’ would be redu
46

Jerger, Susan, Markus F. Damian, Cassandra Karl, and Hervé Abdi. "Developmental Shifts in Detection and Attention for Auditory, Visual, and Audiovisual Speech." Journal of Speech, Language, and Hearing Research 61, no. 12 (2018): 3095–112. http://dx.doi.org/10.1044/2018_jslhr-h-17-0343.

Abstract:
Purpose: Successful speech processing depends on our ability to detect and integrate multisensory cues, yet there is minimal research on multisensory speech detection and integration by children. To address this need, we studied the development of speech detection for auditory (A), visual (V), and audiovisual (AV) input. Method: Participants were 115 typically developing children clustered into age groups between 4 and 14 years. Speech detection (quantified by response times [RTs]) was determined for 1 stimulus, /buh/, presented in A, V, and AV modes (articulating vs. static facial conditions).
47

Hueber, Thomas, Eric Tatulli, Laurent Girin, and Jean-Luc Schwartz. "Evaluating the Potential Gain of Auditory and Audiovisual Speech-Predictive Coding Using Deep Learning." Neural Computation 32, no. 3 (2020): 596–625. http://dx.doi.org/10.1162/neco_a_01264.

Abstract:
Sensory processing is increasingly conceived in a predictive framework in which neurons would constantly process the error signal resulting from the comparison of expected and observed stimuli. Surprisingly, few data exist on the accuracy of predictions that can be computed in real sensory scenes. Here, we focus on the sensory processing of auditory and audiovisual speech. We propose a set of computational models based on artificial neural networks (mixing deep feedforward and convolutional networks), which are trained to predict future audio observations from present and past audio or audiovi
48

Treille, Avril, Coriandre Vilain, Sonia Kandel, and Marc Sato. "Electrophysiological evidence for a self-processing advantage during audiovisual speech integration." Experimental Brain Research 235, no. 9 (2017): 2867–76. http://dx.doi.org/10.1007/s00221-017-5018-0.

49

Hertrich, Ingo, Susanne Dietrich, and Hermann Ackermann. "Cross-modal Interactions during Perception of Audiovisual Speech and Nonspeech Signals: An fMRI Study." Journal of Cognitive Neuroscience 23, no. 1 (2011): 221–37. http://dx.doi.org/10.1162/jocn.2010.21421.

Abstract:
During speech communication, visual information may interact with the auditory system at various processing stages. Most noteworthy, recent magnetoencephalography (MEG) data provided first evidence for early and preattentive phonetic/phonological encoding of the visual data stream—prior to its fusion with auditory phonological features [Hertrich, I., Mathiak, K., Lutzenberger, W., & Ackermann, H. Time course of early audiovisual interactions during speech and non-speech central-auditory processing: An MEG study. Journal of Cognitive Neuroscience, 21, 259–274, 2009]. Using functional magnet
50

Gijbels, Liesbeth, Jason D. Yeatman, Kaylah Lalonde, and Adrian K. C. Lee. "Audiovisual Speech Processing in Relationship to Phonological and Vocabulary Skills in First Graders." Journal of Speech, Language, and Hearing Research 64, no. 12 (2021): 5022–40. http://dx.doi.org/10.1044/2021_jslhr-21-00196.

Abstract:
Purpose: It is generally accepted that adults use visual cues to improve speech intelligibility in noisy environments, but findings regarding visual speech benefit in children are mixed. We explored factors that contribute to audiovisual (AV) gain in young children's speech understanding. We examined whether there is an AV benefit to speech-in-noise recognition in children in first grade and if visual salience of phonemes influences their AV benefit. We explored if individual differences in AV speech enhancement could be explained by vocabulary knowledge, phonological awareness, or general psy