Academic literature on the topic 'Auditory-visual speech perception'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Auditory-visual speech perception.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Auditory-visual speech perception"

1

Erdener, Doğu, and Denis Burnham. "Auditory–visual speech perception in three- and four-year-olds and its relationship to perceptual attunement and receptive vocabulary." Journal of Child Language 45, no. 2 (June 6, 2017): 273–89. http://dx.doi.org/10.1017/s0305000917000174.

Full text
Abstract:
Despite the body of research on auditory–visual speech perception in infants and schoolchildren, development in the early childhood period remains relatively uncharted. In this study, English-speaking children between three and four years of age were investigated for: (i) the development of visual speech perception – lip-reading and visual influence in auditory–visual integration; (ii) the development of auditory speech perception and native language perceptual attunement; and (iii) the relationship between these and a language skill relevant at this age, receptive vocabulary. Visual speech perception skills improved even over this relatively short time period. However, regression analyses revealed that vocabulary was predicted by auditory-only speech perception, and native language attunement, but not by visual speech perception ability. The results suggest that, in contrast to infants and schoolchildren, in three- to four-year-olds the relationship between speech perception and language ability is based on auditory and not visual or auditory–visual speech perception ability. Adding these results to existing findings allows elaboration of a more complete account of the developmental course of auditory–visual speech perception.
APA, Harvard, Vancouver, ISO, and other styles
2

Sams, M. "Audiovisual Speech Perception." Perception 26, no. 1_suppl (August 1997): 347. http://dx.doi.org/10.1068/v970029.

Full text
Abstract:
Persons with hearing loss use visual information from articulation to improve their speech perception. Even persons with normal hearing utilise visual information, especially when the stimulus-to-noise ratio is poor. A dramatic demonstration of the role of vision in speech perception is the audiovisual fusion called the ‘McGurk effect’. When the auditory syllable /pa/ is presented in synchrony with the face articulating the syllable /ka/, the subject usually perceives /ta/ or /ka/. The illusory perception is clearly auditory in nature. We recently studied the audiovisual fusion (acoustical /p/, visual /k/) for Finnish (1) syllables, and (2) words. Only 3% of the subjects perceived the syllables according to the acoustical input, ie in 97% of the subjects the perception was influenced by the visual information. For words the percentage of acoustical identifications was 10%. The results demonstrate a very strong influence of visual information of articulation in face-to-face speech perception. Word meaning and sentence context have a negligible influence on the fusion. We have also recorded neuromagnetic responses of the human cortex when the subjects both heard and saw speech. Some subjects showed a distinct response to a ‘McGurk’ stimulus. The response was rather late, emerging about 200 ms from the onset of the auditory stimulus. We suggest that the perisylvian cortex, close to the source area for the auditory 100 ms response (M100), may be activated by the discordant stimuli. The behavioural and neuromagnetic results suggest a precognitive audiovisual speech integration occurring at a relatively early processing level.
APA, Harvard, Vancouver, ISO, and other styles
3

Cienkowski, Kathleen M., and Arlene Earley Carney. "Auditory-Visual Speech Perception and Aging." Ear and Hearing 23, no. 5 (October 2002): 439–49. http://dx.doi.org/10.1097/00003446-200210000-00006.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Helfer, Karen S. "Auditory and Auditory-Visual Perception of Clear and Conversational Speech." Journal of Speech, Language, and Hearing Research 40, no. 2 (April 1997): 432–43. http://dx.doi.org/10.1044/jslhr.4002.432.

Full text
Abstract:
Research has shown that speaking in a deliberately clear manner can improve the accuracy of auditory speech recognition. Allowing listeners access to visual speech cues also enhances speech understanding. Whether the nature of information provided by speaking clearly and by using visual speech cues is redundant has not been determined. This study examined how speaking mode (clear vs. conversational) and presentation mode (auditory vs. auditory-visual) influenced the perception of words within nonsense sentences. In Experiment 1, 30 young listeners with normal hearing responded to videotaped stimuli presented audiovisually in the presence of background noise at one of three signal-to-noise ratios. In Experiment 2, 9 participants returned for an additional assessment using auditory-only presentation. Results of these experiments showed significant effects of speaking mode (clear speech was easier to understand than was conversational speech) and presentation mode (auditory-visual presentation led to better performance than did auditory-only presentation). The benefit of clear speech was greater for words occurring in the middle of sentences than for words at either the beginning or end of sentences for both auditory-only and auditory-visual presentation, whereas the greatest benefit from supplying visual cues was for words at the end of sentences spoken both clearly and conversationally. The total benefit from speaking clearly and supplying visual cues was equal to the sum of each of these effects. Overall, the results suggest that speaking clearly and providing visual speech information provide complementary (rather than redundant) information.
APA, Harvard, Vancouver, ISO, and other styles
5

Ediwarman, Ediwarman, Syafrizal Syafrizal, and John Pahamzah. "Perception of speech using audio visual and replica for students of Sultan Ageng Tirtayasa University." Journal of Language 3, no. 2 (November 29, 2021): 95–102. http://dx.doi.org/10.30743/jol.v3i2.3695.

Full text
Abstract:
This paper examined the perception of speech using audio-visual stimuli and replicas by students of Sultan Ageng Tirtayasa University. The research was aimed at discussing face-to-face conversation, or speech perceived by the ears and eyes. The prerequisites for audio-visual perception of speech were studied in detail using ambiguous perceptual sine-wave replicas of natural speech as auditory stimuli. When the subjects were unaware that the auditory stimuli were speech, they showed only a negligible integration of auditory and visual stimuli. Once the same subjects learned to perceive the same auditory stimuli as speech, they integrated auditory and visual stimuli in the same way as for natural speech. These results suggest a special mode of perception for multisensory speech.
APA, Harvard, Vancouver, ISO, and other styles
6

Pons, Ferran, Llorenç Andreu, Monica Sanz-Torrent, Lucía Buil-Legaz, and David J. Lewkowicz. "Perception of audio-visual speech synchrony in Spanish-speaking children with and without specific language impairment." Journal of Child Language 40, no. 3 (July 9, 2012): 687–700. http://dx.doi.org/10.1017/s0305000912000189.

Full text
Abstract:
Speech perception involves the integration of auditory and visual articulatory information, and thus requires the perception of temporal synchrony between this information. There is evidence that children with specific language impairment (SLI) have difficulty with auditory speech perception but it is not known if this is also true for the integration of auditory and visual speech. Twenty Spanish-speaking children with SLI, twenty typically developing age-matched Spanish-speaking children, and twenty Spanish-speaking children matched for MLU-w participated in an eye-tracking study to investigate the perception of audiovisual speech synchrony. Results revealed that children with typical language development perceived an audiovisual asynchrony of 666 ms regardless of whether the auditory or visual speech attribute led the other one. Children with SLI only detected the 666 ms asynchrony when the auditory component followed the visual component. None of the groups perceived an audiovisual asynchrony of 366 ms. These results suggest that the difficulty of speech processing by children with SLI would also involve difficulties in integrating auditory and visual aspects of speech perception.
APA, Harvard, Vancouver, ISO, and other styles
7

Van Engen, Kristin J., Avanti Dey, Mitchell S. Sommers, and Jonathan E. Peelle. "Audiovisual speech perception: Moving beyond McGurk." Journal of the Acoustical Society of America 152, no. 6 (December 2022): 3216–25. http://dx.doi.org/10.1121/10.0015262.

Full text
Abstract:
Although it is clear that sighted listeners use both auditory and visual cues during speech perception, the manner in which multisensory information is combined is a matter of debate. One approach to measuring multisensory integration is to use variants of the McGurk illusion, in which discrepant auditory and visual cues produce auditory percepts that differ from those based on unimodal input. Not all listeners show the same degree of susceptibility to the McGurk illusion, and these individual differences are frequently used as a measure of audiovisual integration ability. However, despite their popularity, we join the voices of others in the field to argue that McGurk tasks are ill-suited for studying real-life multisensory speech perception: McGurk stimuli are often based on isolated syllables (which are rare in conversations) and necessarily rely on audiovisual incongruence that does not occur naturally. Furthermore, recent data show that susceptibility to McGurk tasks does not correlate with performance during natural audiovisual speech perception. Although the McGurk effect is a fascinating illusion, truly understanding the combined use of auditory and visual information during speech perception requires tasks that more closely resemble everyday communication: namely, words, sentences, and narratives with congruent auditory and visual speech cues.
APA, Harvard, Vancouver, ISO, and other styles
8

Clement, Bart R., Sarah K. Erickson, Su‐Hyun Jin, and Arlene E. Carney. "Confidence ratings in auditory–visual speech perception." Journal of the Acoustical Society of America 107, no. 5 (May 2000): 2887–88. http://dx.doi.org/10.1121/1.428732.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Burnham, Denis, Kaoru Sekiyama, and Dogu Erdener. "Cross‐language auditory‐visual speech perception development." Journal of the Acoustical Society of America 123, no. 5 (May 2008): 3879. http://dx.doi.org/10.1121/1.2935787.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

McCotter, Maxine V., and Timothy R. Jordan. "The Role of Facial Colour and Luminance in Visual and Audiovisual Speech Perception." Perception 32, no. 8 (August 2003): 921–36. http://dx.doi.org/10.1068/p3316.

Full text
Abstract:
We conducted four experiments to investigate the role of colour and luminance information in visual and audiovisual speech perception. In experiments 1a (stimuli presented in quiet conditions) and 1b (stimuli presented in auditory noise), face display types comprised naturalistic colour (NC), grey-scale (GS), and luminance inverted (LI) faces. In experiments 2a (quiet) and 2b (noise), face display types comprised NC, colour inverted (CI), LI, and colour and luminance inverted (CLI) faces. Six syllables and twenty-two words were used to produce auditory and visual speech stimuli. Auditory and visual signals were combined to produce congruent and incongruent audiovisual speech stimuli. Experiments 1a and 1b showed that perception of visual speech, and its influence on identifying the auditory components of congruent and incongruent audiovisual speech, was less for LI than for either NC or GS faces, which produced identical results. Experiments 2a and 2b showed that perception of visual speech, and influences on perception of incongruent auditory speech, was less for LI and CLI faces than for NC and CI faces (which produced identical patterns of performance). Our findings for NC and CI faces suggest that colour is not critical for perception of visual and audiovisual speech. The effect of luminance inversion on performance accuracy was relatively small (5%), which suggests that the luminance information preserved in LI faces is important for the processing of visual and audiovisual speech.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Auditory-visual speech perception"

1

Howard, John Graham. "Temporal aspects of auditory-visual speech and non-speech perception." Thesis, University of Reading, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.553127.

Full text
Abstract:
This thesis concentrates on the temporal aspects of the auditory-visual integratory perceptual experience described above. It is organized in two parts: a literature review, followed by an experimentation section. After a brief introduction (Chapter One), Chapter Two begins by considering the evolution of the earliest biological structures to exploit information in the acoustic and optic environments. The second part of the chapter proposes that the auditory-visual integratory experience might be a by-product of the earliest emergence of spoken language. Chapter Three focuses on human auditory and visual neural structures. It traces the auditory and visual systems of the modern human brain through the complex neuroanatomical forms that construct their pathways, through to where they finally integrate into the high-level multi-sensory association areas. Chapter Four identifies two distinct investigative schools that have each reported on the auditory-visual integratory experience. We consider their different experimental methodologies and a number of architectural and information processing models that have sought to emulate human sensory, cognitive and perceptual processing, and ask how far they can accommodate bi-sensory integratory processing. Chapter Five draws upon empirical data to support the importance of the temporal dimension of sensory forms in information processing, especially bimodal processing. It considers the implications of different modalities processing differently discontinuous afferent information within different time-frames. It concludes with a discussion of a number of models of biological clocks that have been proposed as essential temporal regulators of human sensory experience. In Part Two, the experiments are presented. Chapter Six provides the general methodology, and in the following chapters a series of four experiments is reported upon. The experiments follow a logical sequence, each being built upon information either revealed or confirmed in results previously reported. Experiments One, Three, and Four required a radical reinterpretation of the 'fast-detection' paradigm developed for use in signal detection theory. This enables the work of two discrete investigative schools in auditory-visual processing to be brought together. The use of this modified paradigm within an appropriately designed methodology produces experimental results that speak directly both to the 'speech versus non-speech' debate and to gender studies.
APA, Harvard, Vancouver, ISO, and other styles
2

Ver Hulst, Pamela. "Visual and auditory factors facilitating multimodal speech perception." Thesis (Honors), Ohio State University, 2006. http://hdl.handle.net/1811/6629.

Full text
Abstract:
Thesis (Honors)--Ohio State University, 2006.
Title from first page of PDF file. Document formatted into pages: contains 35 p.; also includes graphics. Includes bibliographical references (p. 24-26). Available online via Ohio State University's Knowledge Bank.
APA, Harvard, Vancouver, ISO, and other styles
3

Anderson, Corinne D. "Auditory and visual characteristics of individual talkers in multimodal speech perception." Thesis (Honors), Ohio State University, 2007. http://hdl.handle.net/1811/28373.

Full text
Abstract:
Thesis (Honors)--Ohio State University, 2007.
Title from first page of PDF file. Document formatted into pages: contains 43 p.; also includes graphics. Includes bibliographical references (p. 29-30). Available online via Ohio State University's Knowledge Bank.
APA, Harvard, Vancouver, ISO, and other styles
4

Leech, Stuart Matthew. "The effect on audiovisual speech perception of auditory and visual source separation." Thesis, University of Sussex, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.271770.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Wroblewski, Marcin. "Developmental predictors of auditory-visual integration of speech in reverberation and noise." Diss., University of Iowa, 2017. https://ir.uiowa.edu/etd/6017.

Full text
Abstract:
Objectives: Elementary school classrooms that meet the acoustic requirements for near-optimum speech recognition are extremely scarce. Poor classroom acoustics may become a barrier to speech understanding as children enter school. The purpose of this study was threefold: 1) to quantify the extent to which reverberation, lexical difficulty, and presentation mode affect speech recognition in noise, 2) to examine to what extent auditory-visual (AV) integration assists with the recognition of speech in noisy and reverberant environments typical of elementary school classrooms, 3) to understand the relationship between developing mechanisms of multisensory integration and the concurrently developing linguistic and cognitive abilities. Design: Twenty-seven typically developing children and 9 young adults participated. Participants repeated short sentences reproduced by 10 speakers on a 30” HDTV and/or over loudspeakers located around the listener in a simulated classroom environment. Signal-to-noise ratio (SNR) for 70 (SNR70) and 30 (SNR30) percent correct performance were measured using an adaptive tracking procedure. Auditory-visual integration was assessed via the SNR difference between AV and auditory-only (AO) conditions, labeled speech-reading benefit (SRB). Linguistic and cognitive aptitude was assessed using the NIH-Toolbox: Cognition Battery (NIH-TB: CB). Results: Children required more favorable SNRs for equivalent performance when compared to adults. Participants benefited from the reduction in lexical difficulty, and in most cases the reduction in reverberation time. Reverberation affected children’s speech recognition in AO condition and adults in AV condition. At SNR30, SRB was greater than that at SNR70. Adults showed marginally significant increase in AV integration relative to children. Adults also showed increase in SRB for lexically hard versus easy words, at high level of reverberation. Development of linguistic and cognitive aptitude accounts for approximately 35% of the variance in AV integration, with crystalized and fluid cognition composite scores identified as strongest predictors. Conclusions: The results of this study add to the body of evidence in support of children requiring more favorable SNRs to perform the same speech recognition tasks as adults in simulated listening environments akin to school classrooms. Our findings shed light on the development of AV integration for speech recognition in noise and reverberation during the school years, and provide insight into the balance of cognitive and linguistic underpinnings necessary for AV integration of degraded speech.
APA, Harvard, Vancouver, ISO, and other styles
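The Wroblewski abstract above defines two quantities that can be made concrete: the SNR needed for a target percent correct, estimated with an adaptive tracking procedure, and the speech-reading benefit (SRB), the SNR difference between auditory-only (AO) and auditory-visual (AV) conditions. The sketch below is an illustration only; the weighted up-down rule, the psychometric-function parameters, the assumed 4 dB visual benefit, and the function names are assumptions, not details taken from the dissertation.

    import numpy as np

    rng = np.random.default_rng(1)

    def p_correct(snr_db, midpoint, slope=0.5):
        # Hypothetical logistic psychometric function: proportion correct vs. SNR in dB.
        return 1.0 / (1.0 + np.exp(-slope * (snr_db - midpoint)))

    def snr_for_target(midpoint, target=0.70, start_snr=10.0, step_down=1.0, n_trials=400):
        # Weighted up-down staircase: converges on the SNR giving `target` proportion
        # correct, because at equilibrium target * step_down = (1 - target) * step_up.
        step_up = step_down * target / (1.0 - target)
        snr, track = start_snr, []
        for _ in range(n_trials):
            correct = rng.random() < p_correct(snr, midpoint)
            snr += -step_down if correct else step_up
            track.append(snr)
        return float(np.mean(track[n_trials // 2:]))  # average late trials as the estimate

    # Simulated listener: visual speech cues shift the function by an assumed 4 dB.
    snr70_ao = snr_for_target(midpoint=-8.0)    # auditory-only
    snr70_av = snr_for_target(midpoint=-12.0)   # auditory-visual
    srb = snr70_ao - snr70_av                   # speech-reading benefit in dB
    print(f"SNR70 AO = {snr70_ao:.1f} dB, SNR70 AV = {snr70_av:.1f} dB, SRB = {srb:.1f} dB")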
6

Watson, D. R. "Cognitive effects of impaired auditory abilities and use of visual speech to supplement perception." Thesis, Queen's University Belfast, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.396891.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Lees, Nicole C. "Vocalisations with a better view: hyperarticulation augments the auditory-visual advantage for the detection of speech in noise." Thesis, University of Western Sydney, 2007. http://handle.uws.edu.au:8081/1959.7/19576.

Full text
Abstract:
Recent studies have shown that there is a visual influence early in speech processing - visual speech enhances the ability to detect auditory speech in noise. However, identifying exactly how visual speech interacts with auditory processing at such an early stage has been challenging, because this so-called AV speech detection advantage is both highly related to a specific lower-order, signal-based, optic-acoustic relationship between the second formant amplitude and the area of the mouth (F2/Mouth-area), and mediated by higher-order, information-based factors. Previous investigations either have maximised or minimised information-based factors, or have minimised signal-based factors, in order to try to tease out the relative importance of these sources of the advantage, but they have not yet been successful in this endeavour. Maximising signal-based factors has not previously been explored. This avenue was explored in this thesis by manipulating speaking style: hyperarticulated speech was used to maximise signal-based factors, and hypoarticulated speech to minimise signal-based factors, to examine whether the AV speech detection advantage is modified by these means, and to provide a clearer idea of the primary source of visual influence in the AV detection advantage. Two sets of six studies were conducted. In the first set, three recorded speech styles, hyperarticulated, normal, and hypoarticulated, were extensively analysed in physical (optic and acoustic) and perceptual (visual and auditory) dimensions ahead of stimulus selection for the second set of studies. The analyses indicated that the three styles comprise distinctive categories on the Hyper-Hypo continuum of articulatory effort (Lindblom, 1990). Most relevantly, both optically and visually, hyperarticulated speech was more informative, and hypoarticulated speech less informative, than normal speech with regard to signal-based movement factors. However, the F2/Mouth-area correlation was similarly strong for all speaking styles, thus allowing examination of signal-based, visual informativeness on AV speech detection with optic-acoustic association controlled. In the second set of studies, six Detection Experiments incorporating the three speaking styles were designed to examine whether, and if so why, more visually-informative (hyperarticulated) speech augmented, and less visually informative (hypoarticulated) speech attenuated, the AV detection advantage relative to normal speech, and to examine visual influence when auditory speech was absent. Detection Experiment 1 used a two-interval, two-alternative (first or second interval, 2I2AFC) detection task, and indicated that hyperarticulation provided an AV detection advantage greater than for normal and hypoarticulated speech, with less of an advantage for hypoarticulated than for normal speech. Detection Experiment 2 used a single-interval, yes-no detection task to assess responses in signal-absent conditions independently of signal-present conditions, as a means of addressing participants' reports that speech was heard when it was not presented in the 2I2AFC task. Hyperarticulation resulted in an AV detection advantage, and for all speaking styles there was a consistent response bias to indicate speech was present in signal-absent conditions.
To examine whether the AV detection advantage for hyperarticulation was due to visual, auditory or auditory-visual factors, Detection Experiments 3 and 4 used mismatching AV speaking style combinations (AnormVhyper, AnormVhypo, AhyperVnorm, AhypoVnorm) that were onset-matched or time-aligned, respectively. The results indicated that higher rates of mouth movement can be sufficient for the detection advantage with weak optic-acoustic associations, but, in circumstances where these associations are low, even high rates of movement have little impact on augmenting detection in noise. Furthermore, in Detection Experiment 5, in which visual stimuli consisted only of the mouth movements extracted from the three styles, there was no AV detection advantage, and it seems that this is so because extra-oral information is required, perhaps to provide a frame of reference that improves the availability of mouth movement to the perceiver. Detection Experiment 6 used a new 2I-4AFC task and the measures of false detections and response bias to identify whether visual influence in signal-absent conditions is due to response bias or an illusion of hearing speech in noise (termed here the Speech in Noise, SiN, Illusion). In the event, the SiN illusion occurred for both the hyperarticulated and the normal styles – styles with reasonable amounts of movement change. For normal speech, the responses in signal-absent conditions were due only to the illusion of hearing speech in noise, whereas for hypoarticulated speech such responses were due only to response bias. For hyperarticulated speech there is evidence for the presence of both types of visual influence in signal-absent conditions. It seems to be the case that there is more doubt with regard to the presence of auditory speech for non-normal speech styles. An explanation of past and present results is offered within a new framework, the Dynamic Bimodal Accumulation Theory (DBAT). This is developed in this thesis to address the limitations of, and conflicts between, previous theoretical positions. DBAT suggests a bottom-up influence of visual speech on the processing of auditory speech; specifically, it is proposed that the rate of change of visual movements guides auditory attention rhythms 'on-line' at corresponding rates, which allows selected samples of the auditory stream to be given prominence. Any patterns contained within these samples then emerge from the course of auditory integration processes. By this account, there are three important elements of visual speech necessary for enhanced detection of speech in noise. First and foremost, when speech is present, visual movement information must be available (as opposed to hypoarticulated and synthetic speech). Then the rate of change and optic-acoustic relatedness also have an impact (as in Detection Experiments 3 and 4). When speech is absent, visual information has an influence; and the SiN illusion (Detection Experiment 6) can be explained as a perceptual modulation of a noise stimulus by visually-driven rhythmic attention. In sum, hyperarticulation augments the AV speech detection advantage, and, whenever speech is perceived in noisy conditions, there is either response bias to perceive speech or a SiN illusion, or both. DBAT provides a detailed description of these results, with wider-ranging explanatory power than previous theoretical accounts. Predictions are put forward for examination of the predictive power of DBAT in future studies.
APA, Harvard, Vancouver, ISO, and other styles
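The Lees abstract above repeatedly refers to a signal-based optic-acoustic relationship quantified as the correlation between second-formant (F2) amplitude and mouth area. The sketch below only illustrates how such a correlation could be computed from frame-aligned measurement tracks; the helper name, the assumed frame rate, and the toy signals are invented for illustration and are not taken from the thesis.

    import numpy as np

    def f2_mouth_correlation(f2_amplitude, mouth_area):
        # Pearson correlation between frame-aligned F2-amplitude and mouth-area tracks.
        f2 = np.asarray(f2_amplitude, dtype=float)
        area = np.asarray(mouth_area, dtype=float)
        if f2.shape != area.shape:
            raise ValueError("tracks must be frame-aligned and of equal length")
        return float(np.corrcoef(f2, area)[0, 1])

    # Toy tracks: a shared articulatory modulation plus independent measurement noise.
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 2.0, 200)                       # 2 s at an assumed 100 frames/s
    mouth = 1.0 + 0.5 * np.sin(2 * np.pi * 3 * t) + 0.05 * rng.normal(size=t.size)
    f2_amp = 0.8 * mouth + 0.1 * rng.normal(size=t.size)
    print(f"r(F2 amplitude, mouth area) = {f2_mouth_correlation(f2_amp, mouth):.2f}")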
8

Lees, Nicole C. "Vocalisations with a better view: hyperarticulation augments the auditory-visual advantage for the detection of speech in noise." Thesis, University of Western Sydney, 2007. http://handle.uws.edu.au:8081/1959.7/19576.

Full text
Abstract:
Thesis (Ph.D.)--University of Western Sydney, 2007.
A thesis submitted to the University of Western Sydney, College of Arts, in fulfilment of the requirements for the degree of Doctor of Philosophy. Includes bibliography.
APA, Harvard, Vancouver, ISO, and other styles
9

Schnobrich, Kathleen Marie. "The Relationship between Literacy Readiness and Auditory and Visual Perception in Kindergarteners." Miami University / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=miami1241010453.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Erdener, Vahit Doğu. "The effect of auditory, visual and orthographic information on second language acquisition." Thesis, University of Western Sydney, 2002. http://library.uws.edu.au/adt-NUWS/public/adt-NUWS20030408.114825/index.html.

Full text
Abstract:
Thesis (MA (Hons)) -- University of Western Sydney, 2002.
"A thesis submitted in partial fulfillment of the requirements for the degree of Masters of Arts (Honours), MARCS Auditory Laboratories & School of Psychology, University of Western Sydney, May 2002" Bibliography : leaves 83-93.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Auditory-visual speech perception"

1

Slater, Alan, ed. Perceptual development: Visual, auditory, and speech perception in infancy. Hove, UK: Psychology Press, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Slater, Alan, ed. Perceptual development: Visual, auditory, and speech perception in infancy. East Sussex, UK: Psychology Press, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Campbell, Ruth, Barbara Dodd, and D. K. Burnham, eds. Hearing by eye II: Advances in the psychology of speechreading and auditory-visual speech. Hove, East Sussex, UK: Psychology Press, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Massaro, Dominic W. Speech perception by ear and eye: A paradigm for psychological inquiry. Hillsdale, N.J.: Erlbaum Associates, 1987.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Slater, Alan. Perceptual Development: Visual, Auditory and Speech Perception in Infancy. Taylor & Francis Group, 2020.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Slater, Alan. Perceptual Development: Visual, Auditory and Speech Perception in Infancy. Taylor & Francis Group, 2020.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Slater, Alan. Perceptual Development: Visual, Auditory and Speech Perception in Infancy. Taylor & Francis Group, 2020.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Slater, Alan. Perceptual Development: Visual, Auditory and Speech Perception in Infancy. Taylor & Francis Group, 2020.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Burnham, Douglas, and Ruth Campbell. Hearing Eye II: The Psychology of Speechreading and Auditory-Visual Speech. Taylor & Francis Group, 2015.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Dodd, B. J., Douglas Burnham, and Ruth Campbell. Hearing Eye II: The Psychology of Speechreading and Auditory-Visual Speech. Taylor & Francis Group, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Auditory-visual speech perception"

1

Robert-Ribes, Jordi, Jean-Luc Schwartz, and Pierre Escudier. "A Comparison of Models for Fusion of the Auditory and Visual Sensors in Speech Perception." In Integration of Natural Language and Vision Processing, 81–104. Dordrecht: Springer Netherlands, 1995. http://dx.doi.org/10.1007/978-94-009-1639-5_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Burnham, Denis, and Barbara Dodd. "Auditory-Visual Speech Perception as a Direct Process: The McGurk Effect in Infants and Across Languages." In Speechreading by Humans and Machines, 103–14. Berlin, Heidelberg: Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/978-3-662-13015-5_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Huggins, A. W. F. "Speech Perception and Auditory Processing." In Auditory and Visual Pattern Recognition, 79–91. Routledge, 2017. http://dx.doi.org/10.4324/9781315532615-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

"Touch and auditory-visual speech perception." In Hearing Eye II, 268–82. Routledge, 2013. http://dx.doi.org/10.4324/9780203098752-25.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Erdener, Doğu. "Second Language Instruction." In Advances in Educational Technologies and Instructional Design, 105–23. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-7998-2588-3.ch005.

Full text
Abstract:
Speech perception has long been taken for granted as an auditory-only process. However, it is now firmly established that speech perception is an auditory-visual process in which visual speech information, in the form of lip and mouth movements, is taken into account in the speech perception process. Traditionally, foreign language (L2) instructional methods and materials are auditory-based. This chapter presents a general framework of evidence that visual speech information will facilitate L2 instruction. The author claims that this knowledge will bridge the gap between psycholinguistics and L2 instruction as an applied field. The chapter also describes how orthography can be used in L2 instruction. While learners from a transparent L1 orthographic background can decipher the phonology of orthographically transparent L2s – overriding the visual speech information – that is not the case for those from orthographically opaque L1s.
APA, Harvard, Vancouver, ISO, and other styles
6

"Language specificity in the development of auditory-visual speech perception." In Hearing Eye II, 38–71. Routledge, 2013. http://dx.doi.org/10.4324/9780203098752-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Gelder, Beatrice de, and Jean Vroomen. "Auditory and Visual Speech Perception in Alphabetic and Non-alphabetic Chinese-Dutch Bilinguals." In Cognitive Processing in Bilinguals, 413–26. Elsevier, 1992. http://dx.doi.org/10.1016/s0166-4115(08)61508-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Maempel, Hans-Joachim, and Michael Horn. "The Influences of Hearing and Vision on Egocentric Distance and Room Size Perception under Rich-Cue Conditions." In Advances in Fundamental and Applied Research on Spatial Audio [Working Title]. IntechOpen, 2022. http://dx.doi.org/10.5772/intechopen.102810.

Full text
Abstract:
Artistic renditions are mediated by the performance rooms in which they are staged. The perceived egocentric distance to the artists and the perceived room size are relevant features in this regard. The influences of both the presence and the properties of acoustic and visual environments on these features were investigated. Recordings of music and a speech performance were integrated into direct renderings of six rooms by applying dynamic binaural synthesis and chroma-key compositing. By the use of a linearized extraaural headset and a semi-panoramic stereoscopic projection, the auralized, visualized, and auralized-visualized spatial scenes were presented to test participants who were asked to estimate the egocentric distance and the room size. The mean estimates differed between the acoustic and the visual as well as between the acoustic-visual and the combined single-domain conditions. Geometric estimations in performance rooms relied upon nine-tenths on the visual, and one-tenth on the acoustic properties of the virtualized spatial scenes, but negligibly on their interaction. Structural and material properties of rooms may also influence auditory-visual distance perception.
APA, Harvard, Vancouver, ISO, and other styles
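The Maempel and Horn abstract above reports that geometric judgements relied roughly nine-tenths on the visual and one-tenth on the acoustic properties of the scene, with a negligible interaction. One way to read that finding (an illustrative formalization, not necessarily the authors' statistical model) is as a weighted linear cue combination:

    \hat{g} = w_V \, g_V + w_A \, g_A, \qquad w_V \approx 0.9, \quad w_A \approx 0.1,

where \hat{g} is the judged quantity (egocentric distance or room size), g_V and g_A are the values suggested by the visual and the acoustic environment alone, and any interaction term w_{VA} \, g_V g_A is treated as negligible.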
9

"The use of auditory and visual information during phonetic processing: implications for theories of speech perception." In Hearing Eye II, 15–37. Routledge, 2013. http://dx.doi.org/10.4324/9780203098752-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Chang, Chien-Yen, and Ting-Wei Chang. "The Development of Parameters and Warning Algorithms for an Intersection Bus-Pedestrian Collision Warning System." In Implementation and Integration of Information Systems in the Service Sector, 163–82. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-2649-2.ch011.

Full text
Abstract:
This study presents the conceptual design of an intersection bus-pedestrian collision warning system for bus drivers approaching an intersection. The basic parameters of the proposed design concept include the bus drivers’ perception-reaction time, the emergency deceleration rate of the bus, and pedestrian walking speed. A bus driving simulation was designed and conducted to analyze bus drivers’ responses to unexpected pedestrians crossing unsignalized intersections or signalized intersections during a green light interval for parameter analysis. The timings of auditory warnings and visual warnings, the locations for vehicle detectors and pedestrian detectors, and the locations for visual warning devices were also developed after analyzing the experimental results. The experimental results also highlight some important characteristics of bus driving behavior at intersections. Moreover, bus drivers really pay attention to the warning messages. Finally, this study develops and discusses some warning algorithms.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Auditory-visual speech perception"

1

Burnham, Denis, Valter Ciocca, and Stephanie Stokes. "Auditory-visual perception of lexical tone." In 7th European Conference on Speech Communication and Technology (Eurospeech 2001). ISCA: ISCA, 2001. http://dx.doi.org/10.21437/eurospeech.2001-63.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Tiippana, Kaisa, Ilmari Kurki, and Tarja Peromaa. "Applying the summation model in audiovisual speech perception." In The 14th International Conference on Auditory-Visual Speech Processing. ISCA: ISCA, 2017. http://dx.doi.org/10.21437/avsp.2017-28.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Mixdorff, Hansjörg, Angelika Hönemann, Albert Rilliard, Tan Lee, and Matthew Ma. "Cross-Language Perception of Audio-visual Attitudinal Expressions." In The 14th International Conference on Auditory-Visual Speech Processing. ISCA: ISCA, 2017. http://dx.doi.org/10.21437/avsp.2017-23.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Vinay, Sandhya, and Dawn Behne. "The Influence of Familial Sinistrality on Audiovisual Speech Perception." In The 14th International Conference on Auditory-Visual Speech Processing. ISCA: ISCA, 2017. http://dx.doi.org/10.21437/avsp.2017-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Aubanel, Vincent, Cassandra Masters, Jeesun Kim, and Chris Davis. "Contribution of visual rhythmic information to speech perception in noise." In The 14th International Conference on Auditory-Visual Speech Processing. ISCA: ISCA, 2017. http://dx.doi.org/10.21437/avsp.2017-18.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Kawahara, Misako, Disa Sauter, and Akihiro Tanaka. "Impact of Culture on the Development of Multisensory Emotion Perception." In The 14th International Conference on Auditory-Visual Speech Processing. ISCA: ISCA, 2017. http://dx.doi.org/10.21437/avsp.2017-21.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Kawase, Marina, Ikuma Adachi, and Akihiro Tanaka. "Multisensory Perception of Emotion for Human and Chimpanzee Expressions by Humans." In The 14th International Conference on Auditory-Visual Speech Processing. ISCA: ISCA, 2017. http://dx.doi.org/10.21437/avsp.2017-22.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Erdener, Doğu, Şefik Evren Erdener, and Arzu Yordaml. "Auditory-visual speech perception in bipolar disorder: behavioural data and physiological predictions." In The 15th International Conference on Auditory-Visual Speech Processing. ISCA: ISCA, 2019. http://dx.doi.org/10.21437/avsp.2019-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Yamamoto, Hisako W., Misako Kawahara, and Akihiro Tanaka. "The developmental path of multisensory perception of emotion and phoneme in Japanese speakers." In The 14th International Conference on Auditory-Visual Speech Processing. ISCA: ISCA, 2017. http://dx.doi.org/10.21437/avsp.2017-20.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Yamamoto, Hisako W., Misako Kawahara, and Akihiro Tanaka. "The Development of Eye Gaze Patterns during Audiovisual Perception of Affective and Phonetic Information." In The 15th International Conference on Auditory-Visual Speech Processing. ISCA: ISCA, 2019. http://dx.doi.org/10.21437/avsp.2019-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles