Academic literature on the topic 'Perception/McGurk effect'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Perception/McGurk effect.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "Perception/McGurk effect"
Möttönen, Riikka, Kaisa Tiippana, Mikko Sams, and Hanna Puharinen. "Sound Location Can Influence Audiovisual Speech Perception When Spatial Attention Is Manipulated." Seeing and Perceiving 24, no. 1 (2011): 67–90. http://dx.doi.org/10.1163/187847511x557308.
Magnotti, John F., Debshila Basu Mallick, and Michael S. Beauchamp. "Reducing Playback Rate of Audiovisual Speech Leads to a Surprising Decrease in the McGurk Effect." Multisensory Research 31, no. 1-2 (2018): 19–38. http://dx.doi.org/10.1163/22134808-00002586.
Omata, Kei, and Ken Mogi. "Fusion and combination in audio-visual integration." Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 464, no. 2090 (November 27, 2007): 319–40. http://dx.doi.org/10.1098/rspa.2007.1910.
Alsius, Agnès, Martin Paré, and Kevin G. Munhall. "Forty Years After Hearing Lips and Seeing Voices: the McGurk Effect Revisited." Multisensory Research 31, no. 1-2 (2018): 111–44. http://dx.doi.org/10.1163/22134808-00002565.
MacDonald, John. "Hearing Lips and Seeing Voices: the Origins and Development of the ‘McGurk Effect’ and Reflections on Audio–Visual Speech Perception Over the Last 40 Years." Multisensory Research 31, no. 1-2 (2018): 7–18. http://dx.doi.org/10.1163/22134808-00002548.
Lüttke, Claudia S., Alexis Pérez-Bellido, and Floris P. de Lange. "Rapid recalibration of speech perception after experiencing the McGurk illusion." Royal Society Open Science 5, no. 3 (March 2018): 170909. http://dx.doi.org/10.1098/rsos.170909.
Lindborg, Alma, and Tobias S. Andersen. "Bayesian binding and fusion models explain illusion and enhancement effects in audiovisual speech perception." PLOS ONE 16, no. 2 (February 19, 2021): e0246986. http://dx.doi.org/10.1371/journal.pone.0246986.
Lu, Hong, and Chaochao Pan. "The McGurk effect in self-recognition of people with schizophrenia." Social Behavior and Personality: an international journal 48, no. 6 (June 2, 2020): 1–8. http://dx.doi.org/10.2224/sbp.9219.
Walker, Grant M., Patrick Sarahan Rollo, Nitin Tandon, and Gregory Hickok. "Effect of Bilateral Opercular Syndrome on Speech Perception." Neurobiology of Language 2, no. 3 (2021): 335–53. http://dx.doi.org/10.1162/nol_a_00037.
Sams, M. "Audiovisual Speech Perception." Perception 26, suppl. 1 (August 1997): 347. http://dx.doi.org/10.1068/v970029.
Full textDissertations / Theses on the topic "Perception/McGyrk effect"
Huyse, Aurélie. "Intégration audio-visuelle de la parole: le poids de la vision varie-t-il en fonction de l'âge et du développement langagier?" Doctoral thesis, Université Libre de Bruxelles, 2012. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209690.
During face-to-face conversation, perception of auditory speech is influenced by the visual speech cues contained in lip movements. Indeed, previous research has highlighted the ability of lip-reading to enhance and even modify speech perception. This phenomenon is known as audio-visual integration. The aim of this doctoral thesis is to study the possibility of modifying this audio-visual integration according to several variables. This work lies within the scope of an important debate between invariant and subject-dependent accounts of audio-visual integration in speech processing. Each study of this dissertation investigates the impact of a specific variable on bimodal integration: the quality of the visual input, the age of participants, the use of a cochlear implant, the age at cochlear implantation, and the presence of specific language impairments.
The paradigm always consisted of a syllable identification task in which syllables were presented in three modalities: auditory only, visual only, and audio-visual (congruent and incongruent). There was also a condition in which the quality of the visual input was reduced, in order to prevent good-quality lip-reading. The aim of each of the five studies was not only to examine whether performance changed according to the variable under study but also to ascertain that differences indeed arose from the integration process itself. To this end, our results were analyzed in the framework of a model predictive of audio-visual speech performance, the weighted fuzzy-logical model of perception, in order to disentangle unisensory effects from audio-visual integration effects.
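For readers unfamiliar with the model just mentioned, a minimal sketch of the fuzzy-logical model of perception (FLMP) may help. Given auditory support a_r and visual support v_r for response category r, the FLMP fuses the two sources multiplicatively; the weighted variant additionally weights the visual channel, sketched below with a subject-specific exponent w (this exponent parameterization is a common one but is an assumption here, not a detail taken from the thesis):

% FLMP: multiplicative fusion of auditory support a_r and visual support v_r
\[
  P(r \mid A, V) \;=\; \frac{a_r \, v_r}{\sum_{k} a_k \, v_k}
  \qquad\text{weighted variant:}\qquad
  P_w(r \mid A, V) \;=\; \frac{a_r \, v_r^{\,w}}{\sum_{k} a_k \, v_k^{\,w}}
\]
% w > 1 up-weights and w < 1 down-weights the visual channel for a given subject.

Comparing observed audio-visual scores against predictions computed from the unimodal scores is what allows unisensory differences to be disentangled from genuine integration differences.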
Taken together, our data suggest that speech integration is not automatic but rather depends on the context. We propose a new architecture of bimodal fusion that takes these considerations into account. Finally, there are also practical implications, suggesting the need to incorporate not only auditory but also visual exercises in the rehabilitation programs of older adults and of children with cochlear implants or specific language impairments.
Doctorate in Psychological and Educational Sciences
Colin, Cécile. "Étude comportementale et électrophysiologique des processus impliqués dans l'effet McGurk et dans l'effet de ventriloquie." Doctoral thesis, Université Libre de Bruxelles, 2001. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/211513.
Graham, Robert Edward. "Music to Our Eyes: Assessing the Role of Experience for Multisensory Integration in Music Perception." OpenSIUC, 2017. https://opensiuc.lib.siu.edu/dissertations/1491.
Nordstrom, Lauren Donelle. "Brain Mapping of the Latency Epochs in a McGurk Effect Paradigm in Music Performance and Visual Arts Majors." BYU ScholarsArchive, 2015. https://scholarsarchive.byu.edu/etd/4447.
Stevanovic, Bettina. "The effect of learning on pitch and speech perception: influencing perception of Shepard tones and McGurk syllables using classical and operant conditioning principles." View thesis, 2007. http://handle.uws.edu.au:8081/1959.7/33694.
Full textA thesis submitted to the University of Western Sydney, College of Arts, School of Psychology in fulfilment of the requirements for the degree of Doctor of Philosophy. Includes bibliography.
Attigodu, Chandrashekara Ganesh. "Characterization of audiovisual binding and fusion in the framework of audiovisual speech scene analysis." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAS006/document.
The present doctoral work is focused on a tentative fusion between two separate concepts: Auditory Scene Analysis (ASA) and Audiovisual (AV) fusion in speech perception. We introduce "Audio Visual Speech Scene Analysis" (AVSSA) as an extension of the two-stage ASA model towards AV scenes, and we propose that a coherence index between the auditory and the visual input is computed prior to AV fusion, enabling the system to determine whether the sensory inputs should be bound together. This is the "two-stage model of AV fusion". Previous experiments on the modulation of the McGurk effect by AV coherent vs. incoherent contexts presented before the McGurk target have provided experimental evidence supporting the two-stage model. In this doctoral work, we further evaluate the AVSSA process within the two-stage architecture along various dimensions: introducing noise, considering multiple sources, assessing neurophysiological correlates, and testing different populations. A first set of experiments in younger adults focused on behavioral characterization of the AV binding process by introducing noise; the results showed that participants were able to evaluate both the level of acoustic noise and the AV coherence, and to modulate AV fusion accordingly. In a second set of behavioral experiments involving competing AV sources, we showed that the AVSSA process makes it possible to evaluate the coherence between auditory and visual features within a complex scene, in order to properly associate the adequate components of a given AV speech source and to provide the fusion process with an assessment of the AV coherence of the extracted source. It also appears that the modulation of fusion depends on the attentional focus on one source or the other. An EEG experiment then aimed to identify a neurophysiological marker of the binding and unbinding process and showed that an incoherent AV context can modulate the effect of the visual input on the N1/P2 components. The last set of experiments focused on the measurement of AV binding and its dynamics in an older population, and provided results similar to those in younger adults, though with a greater amount of unbinding. The whole set of results enabled a better characterization of the AVSSA process and led to the proposal of an improved neurocognitive architecture for AV fusion in speech perception.
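Because several entries in this list refer to this two-stage "binding and fusion" architecture, a schematic sketch may help fix ideas. The following Python fragment is a toy illustration only: the correlation-based coherence index, the binding threshold, and the FLMP-style product fusion are all assumptions of this sketch, not the implementation used in the thesis.

import numpy as np

def coherence_index(audio_env: np.ndarray, lip_open: np.ndarray) -> float:
    """Stage 1: a crude AV coherence index, here the correlation between
    the acoustic amplitude envelope and lip opening over time."""
    a = (audio_env - audio_env.mean()) / (audio_env.std() + 1e-12)
    v = (lip_open - lip_open.mean()) / (lip_open.std() + 1e-12)
    return float(np.mean(a * v))

def fuse(audio_evidence: np.ndarray, visual_evidence: np.ndarray,
         coherence: float, threshold: float = 0.3) -> np.ndarray:
    """Stage 2: fuse per-category evidence, down-weighting the visual
    stream when the coherence index falls below threshold (unbinding)."""
    w = 1.0 if coherence >= threshold else max(coherence, 0.0) / threshold
    fused = audio_evidence * visual_evidence ** w  # FLMP-style product
    return fused / fused.sum()                     # normalize to probabilities

# Toy McGurk-like case: audio favors /ba/, vision favors /ga/.
rng = np.random.default_rng(0)
env, lips = rng.random(100), rng.random(100)   # an incoherent context
p = fuse(np.array([0.7, 0.1, 0.2]),            # audio evidence for /ba/, /da/, /ga/
         np.array([0.1, 0.3, 0.6]),            # visual evidence for /ba/, /da/, /ga/
         coherence_index(env, lips))
print(p)  # low coherence -> weak binding -> percept stays close to the audio

The only point of the sketch is the ordering: coherence is evaluated first and fusion is gated by it, which is what distinguishes this two-stage account from classical one-stage fusion models.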
Nahorna, Olha. "Analyse de scènes de parole multisensorielle : mise en évidence et caractérisation d'un processus de liage audiovisuel préalable à la fusion." Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENS039/document.
In audiovisual speech, the coherent auditory and visual streams are generally fused into a single percept. This results in enhanced intelligibility in noise, or in visual modification of the auditory percept in the famous "McGurk effect" (the dubbing of the sound "ba" onto the image of a speaker uttering "ga" is often perceived as "da"). It is classically considered that processing is done independently in the auditory and visual systems before interaction occurs at a certain representational stage, resulting in an integrated percept. However, some behavioral and neurophysiological data suggest the existence of a two-stage process. A first stage would bind together the appropriate pieces of audio and video information, before fusion in a second stage. To demonstrate the existence of this first stage, we designed an original paradigm aiming to possibly "unbind" the audio and visual streams. Our paradigm consists in presenting, before a McGurk stimulus (used as an indicator of audiovisual fusion), an audiovisual context that is either coherent or incoherent. In the case of an incoherent context we observe a significant decrease of the McGurk effect, implying a reduction of the amount of audiovisual fusion. Various kinds of incoherence (acoustic syllables dubbed onto video sentences; phonetic or temporal modifications of the acoustic content of a regular sequence of audiovisual syllables) can significantly reduce the McGurk effect. The unbinding process is fast, since one incoherent syllable is enough to produce maximal unbinding. The inverse process of "rebinding" by a coherent context following unbinding is progressive, since at least three coherent syllables are needed to recover completely from unbinding. The subject can also be "frozen" in an unbound state by adding a pause between the incoherent context and the McGurk target. In total, seven experiments were performed to demonstrate and describe the binding process in audiovisual speech perception. The data are interpreted in the framework of a two-stage "binding and fusion" model.
Klitsch, Julia Ulrike. "Open your eyes and listen carefully: auditory and audiovisual speech perception and the McGurk effect in Dutch speakers with and without aphasia." Groningen: University Library Groningen, 2008. http://irs.ub.rug.nl/ppn/.
Full textBedard-Giraud, Kimberly. "Troubles du traitement de la parole chez le dyslexique adulte." Toulouse 3, 2007. http://www.theses.fr/2007TOU30334.
Speech perception deficits may play a causal role in certain cases of developmental dyslexia. This research focuses on the perception of stop consonants in adult dyslexics. In the first study [temporal course of Auditory Evoked Potentials (AEPs)], the cortical processing of the temporal cues (Voice Onset Time) differentiating voiced and voiceless stops is analysed in dyslexics with persistent deficits. Two atypical electrophysiological patterns are observed: (i) AEP Pattern I is characterised by differential coding of stimuli on the basis of some temporal cues, but with more AEP components and a delay in termination time; (ii) AEP Pattern II is characterised by an absence of differential coding based on temporal cues. The second study [source modelling and asymmetry of temporal processing] shows an atypical functional asymmetry of this temporal cue processing in adult dyslexics, even in compensated cases with relatively normal AEP time courses. The third study [Categorical Perception and MMN] suggests how atypical temporal cue processing may affect stop consonant discrimination: AEP Pattern I may be associated with the coding of superfluous, non-phonetically pertinent cues, while AEP Pattern II may be associated with a severe voiced/voiceless discrimination deficit. In the fourth study [McGurk Effect], the integration of acoustic and visual cues in face-to-face speech perception is analysed in adult dyslexics. Compared to controls, dyslexics demonstrated less audiovisual integration, relying preferentially on acoustic cues. Together, these results are consistent with a speech perception deficit that affects multiple levels of processing in developmental dyslexia.
Deonarine, Justin. "Noise reduction limits the McGurk Effect." Thesis, 2011. http://hdl.handle.net/10012/6046.
Full textBooks on the topic "Perception/McGyrk effect"
Dias, James W., Theresa C. Cook, and Lawrence D. Rosenblum. The McGurk Effect and the Primacy of Multisensory Perception. Oxford University Press, 2017. http://dx.doi.org/10.1093/acprof:oso/9780199794607.003.0115.
Full textBook chapters on the topic "Perception/McGyrk effect"
Keil, Julian, Niklas Ihssen, and Nathan Weisz. "Prestimulus Oscillatory Brain Activity Influences the Perception of the McGurk Effect." In IFMBE Proceedings, 219–22. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-12197-5_49.
Burnham, Denis, and Barbara Dodd. "Auditory-Visual Speech Perception as a Direct Process: The McGurk Effect in Infants and Across Languages." In Speechreading by Humans and Machines, 103–14. Berlin, Heidelberg: Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/978-3-662-13015-5_7.
Cavedon-Taylor, Dan. "High-Level Perception and Multimodal Perception." In Purpose and Procedure in Philosophy of Perception, 147–73. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780198853534.003.0008.
O'Callaghan, Casey. "Processes." In A Multisensory Philosophy of Perception, 19–52. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780198833703.003.0002.
Nicholls, Michael E. R., Dara A. Searle, and John L. Bradshaw. "Read My Lips: Asymmetries in the Visual Expression and Perception of Speech Revealed through the McGurk Effect." In Language in Use, 350–58. Routledge, 2020. http://dx.doi.org/10.4324/9781003060994-33.
Full textConference papers on the topic "Perception/McGyrk effect"
AlAnsari, Noora Essa, Ali Idrissi, and Michael Grosvald. "The McGurk Effect in Qatari Arabic: Influences of Lexicality and Consonant Position." In Qatar University Annual Research Forum & Exhibition. Qatar University Press, 2020. http://dx.doi.org/10.29117/quarfe.2020.0279.
Massaro, Dominic. "The McGurk Effect: Auditory Visual Speech Perception’s Piltdown Man." In The 14th International Conference on Auditory-Visual Speech Processing. ISCA, 2017. http://dx.doi.org/10.21437/avsp.2017-25.
Rahmawati, Sabrina, and Michitaka Ohgishi. "Cross cultural studies on audiovisual speech processing: The McGurk effects observed in consonant and vowel perception." In 2011 6th International Conference on Telecommunication Systems, Services, and Applications (TSSA). IEEE, 2011. http://dx.doi.org/10.1109/tssa.2011.6095406.