Journal articles on the topic 'Audio-visual integration'

Consult the top 50 journal articles for your research on the topic 'Audio-visual integration.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of each publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1. Pérez-Bellido, Alexis, Marc O. Ernst, Salvador Soto-Faraco, and Joan López-Moliner. "Visual limitations shape audio-visual integration." Journal of Vision 15, no. 14 (October 13, 2015): 5. http://dx.doi.org/10.1167/15.14.5.
2. de Gelder, Beatrice, Jean Vroomen, Leonie Annen, Erik Masthof, and Paul Hodiamont. "Audio-visual integration in schizophrenia." Schizophrenia Research 59, no. 2-3 (February 2003): 211–18. http://dx.doi.org/10.1016/s0920-9964(01)00344-9.
3. Maddox, Ross K. "What studies of audio-visual integration do not teach us about audio-visual integration." Journal of the Acoustical Society of America 145, no. 3 (March 2019): 1759. http://dx.doi.org/10.1121/1.5101440.
4. Kaposvári, Péter, Gergő Csete, Anna Bognár, Péter Csibri, Eszter Tóth, Nikoletta Szabó, László Vécsei, Gyula Sáry, and Zsigmond Tamás Kincses. "Audio–visual integration through the parallel visual pathways." Brain Research 1624 (October 2015): 71–77. http://dx.doi.org/10.1016/j.brainres.2015.06.036.
5. Chen, Tsuhan, and R. R. Rao. "Audio-visual integration in multimodal communication." Proceedings of the IEEE 86, no. 5 (May 1998): 837–52. http://dx.doi.org/10.1109/5.664274.
6. Makovac, Elena, Antimo Buonocore, and Robert D. McIntosh. "Audio-visual integration and saccadic inhibition." Quarterly Journal of Experimental Psychology 68, no. 7 (July 2015): 1295–305. http://dx.doi.org/10.1080/17470218.2014.979210.
7. Wada, Yuji, Norimichi Kitagawa, and Kaoru Noguchi. "Audio–visual integration in temporal perception." International Journal of Psychophysiology 50, no. 1-2 (October 2003): 117–24. http://dx.doi.org/10.1016/s0167-8760(03)00128-4.
8. Bergman, Penny, Daniel Västfjäll, and Ana Tajadura-Jiménez. "Audio-Visual Integration of Emotional Information." i-Perception 2, no. 8 (October 2011): 781. http://dx.doi.org/10.1068/ic781.
9. Collignon, Olivier, Simon Girard, Frederic Gosselin, Sylvain Roy, Dave Saint-Amour, Maryse Lassonde, and Franco Lepore. "Audio-visual integration of emotion expression." Brain Research 1242 (November 2008): 126–35. http://dx.doi.org/10.1016/j.brainres.2008.04.023.
10. Sürig, Ralf, Davide Bottari, and Brigitte Röder. "Transfer of Audio-Visual Temporal Training to Temporal and Spatial Audio-Visual Tasks." Multisensory Research 31, no. 6 (2018): 556–78. http://dx.doi.org/10.1163/22134808-00002611.
Abstract:
Temporal and spatial characteristics of sensory inputs are fundamental to multisensory integration because they provide probabilistic information as to whether or not multiple sensory inputs belong to the same event. The multisensory temporal binding window defines the time range within which two stimuli of different sensory modalities are merged into one percept and has been shown to depend on training. The aim of the present study was to evaluate the role of the training procedure for improving multisensory temporal discrimination and to test for a possible transfer of training to other multisensory tasks. Participants were trained over five sessions in a two-alternative forced-choice simultaneity judgment task. The task difficulty of each trial was either at each participant’s threshold (adaptive group) or randomly chosen (control group). A possible transfer of improved multisensory temporal discrimination on multisensory binding was tested with a redundant signal paradigm in which the temporal alignment of auditory and visual stimuli was systematically varied. Moreover, the size of the spatial audio-visual ventriloquist effect was assessed. Adaptive training resulted in faster improvements compared to the control condition. Transfer effects were found for both tasks: The processing speed of auditory inputs and the size of the ventriloquist effect increased in the adaptive group following the training. We suggest that the relative precision of the temporal and spatial features of a cross-modal stimulus is weighted during multisensory integration. Thus, changes in the precision of temporal processing are expected to enhance the likelihood of multisensory integration for temporally aligned cross-modal stimuli.
11. Meienbrock, A., M. J. Naumer, O. Doehrmann, W. Singer, and L. Muckli. "Retinotopic effects during spatial audio-visual integration." Neuropsychologia 45, no. 3 (January 2007): 531–39. http://dx.doi.org/10.1016/j.neuropsychologia.2006.05.018.
12. Omata, Kei, and Ken Mogi. "Fusion and combination in audio-visual integration." Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 464, no. 2090 (November 27, 2007): 319–40. http://dx.doi.org/10.1098/rspa.2007.1910.
Abstract:
Language is essentially multi-modal in its sensory origin, the daily conversation depending heavily on the audio-visual (AV) information. Although the perception of spoken language is primarily dominated by audition, the perception of facial expression, particularly that of the mouth, helps us comprehend speech. The McGurk effect is a striking phenomenon where the perceived phoneme is affected by the simultaneous observation of lip movement, and probably reflects the underlying AV integration process. The elucidation of the principles involved in this unique perceptual anomaly poses an interesting problem. Here we study the nature of the McGurk effect by means of neural networks (self-organizing maps, SOM) designed to extract patterns inherent in audio and visual stimuli. It is shown that a McGurk effect-like classification of incoming information occurs without any additional constraint or procedure added to the network, suggesting that the anomaly is a consequence of the AV integration process. Within this framework, an explanation is given for the asymmetric effect of AV pairs in causing the McGurk effect (fusion or combination) based on the ‘distance’ relationship between audio or visual information within the SOM. Our result reveals some generic features of the cognitive process of phoneme perception, and AV sensory integration in general.
13. Xiao, Mei, May Wong, Michelle Umali, and Marc Pomplun. "Using Eye-Tracking to Study Audio–Visual Perceptual Integration." Perception 36, no. 9 (September 2007): 1391–95. http://dx.doi.org/10.1068/p5731.
Abstract:
Perceptual integration of audio—visual stimuli is fundamental to our everyday conscious experience. Eye-movement analysis may be a suitable tool for studying such integration, since eye movements respond to auditory as well as visual input. Previous studies have shown that additional auditory cues in visual-search tasks can guide eye movements more efficiently and reduce their latency. However, these auditory cues were task-relevant since they indicated the target position and onset time. Therefore, the observed effects may have been due to subjects using the cues as additional information to maximize their performance, without perceptually integrating them with the visual displays. Here, we combine a visual-tracking task with a continuous, task-irrelevant sound from a stationary source to demonstrate that audio—visual perceptual integration affects low-level oculomotor mechanisms. Auditory stimuli of constant, increasing, or decreasing pitch were presented. All sound categories induced more smooth-pursuit eye movement than silence, with the greatest effect occurring with stimuli of increasing pitch. A possible explanation is that integration of the visual scene with continuous sound creates the perception of continuous visual motion. Increasing pitch may amplify this effect through its common association with accelerating motion.
14. Yoshida, Takami, Kazuhiro Nakadai, and Hiroshi G. Okuno. "Audio-Visual Speech Recognition System for Robots Based on Two-Layered Audio-Visual Integration Framework." Journal of the Robotics Society of Japan 28, no. 8 (2010): 970–77. http://dx.doi.org/10.7210/jrsj.28.970.
15. Fleming, Justin T., Abigail L. Noyce, and Barbara G. Shinn-Cunningham. "Audio-visual spatial alignment improves integration in the presence of a competing audio-visual stimulus." Neuropsychologia 146 (September 2020): 107530. http://dx.doi.org/10.1016/j.neuropsychologia.2020.107530.
16. Jaekl, Philip, Alexis Pérez-Bellido, and Salvador Soto-Faraco. "On the ‘visual’ in ‘audio-visual integration’: a hypothesis concerning visual pathways." Experimental Brain Research 232, no. 6 (April 4, 2014): 1631–38. http://dx.doi.org/10.1007/s00221-014-3927-8.
17. Calvert, Gemma, Peter Hansen, Susan Iversen, and Mick Brammer. "Crossmodal integration of non-speech audio-visual stimuli." NeuroImage 11, no. 5 (May 2000): S727. http://dx.doi.org/10.1016/s1053-8119(00)91657-4.
18. Barrós-Loscertales, A., N. Ventura-Campos, J. C. Bustamante, A. Alsius, M. Calabresi, S. Soto-Faraco, and C. Avila. "Neural correlates of audio-visual integration in bilinguals." NeuroImage 47 (July 2009): S142. http://dx.doi.org/10.1016/s1053-8119(09)71428-4.
19. Thompson, William Forde, Frank A. Russo, and Lena Quinto. "Audio-visual integration of emotional cues in song." Cognition & Emotion 22, no. 8 (December 2008): 1457–70. http://dx.doi.org/10.1080/02699930701813974.
20. Luu, Sheena, and Willy Wong. "A correlation feedback model of audio-visual integration." Journal of the Acoustical Society of America 121, no. 5 (May 2007): 3185. http://dx.doi.org/10.1121/1.4782370.
21. Fingelkurts, Andrew A., Alexander A. Fingelkurts, Christina M. Krause, Riikka Möttönen, and Mikko Sams. "Cortical operational synchrony during audio–visual speech integration." Brain and Language 85, no. 2 (May 2003): 297–312. http://dx.doi.org/10.1016/s0093-934x(03)00059-2.
22. Huyse, Aurélie, Jacqueline Leybaert, and Frédéric Berthommier. "Effects of aging on audio-visual speech integration." Journal of the Acoustical Society of America 136, no. 4 (October 2014): 1918–31. http://dx.doi.org/10.1121/1.4894685.
23. Su, Shen-Yuan, and Su-Ling Yeh. "Audio-Visual Integration Modifies Emotional Judgment in Music." i-Perception 2, no. 8 (October 2011): 844. http://dx.doi.org/10.1068/ic844.
24. Nakamura, S. "Statistical multimodal integration for audio-visual speech processing." IEEE Transactions on Neural Networks 13, no. 4 (July 2002): 854–66. http://dx.doi.org/10.1109/tnn.2002.1021886.
25. Wan, Shu Ai, Kai Fang Yang, and Hai Yong Zhou. "Multimedia Quality Integration Using Piecewise Function." Advanced Engineering Forum 1 (September 2011): 375–80. http://dx.doi.org/10.4028/www.scientific.net/aef.1.375.
Abstract:
This paper addresses the important issue of multimedia quality evaluation, given the unimodal quality of audio and video. Firstly, the quality integration model recommended in G.1070 is evaluated using experimental results. Theoretical analyses and empirical observations suggest that the constant coefficients used in the G.1070 model should actually be piecewise adjusted for different levels of audio and visual quality. A piecewise function is then proposed to perform multimedia quality integration under different levels of audio and visual quality. The performance gain observed in the experimental results substantiates the effectiveness of the proposed model.
26. Handley, Rowena, Simone Reinders, Tiago Marques, Carmine Pariante, Philip McGuire, and Paola Dazzan. "Neural correlates of audio-visual integration: An fMRI study." Schizophrenia Research 102, no. 1-3 (June 2008): 105. http://dx.doi.org/10.1016/s0920-9964(08)70316-5.
27. Kanaya, S., and K. Yokosawa. "Integration and the perceptual unity of audio-visual utterances." Journal of Vision 10, no. 7 (August 12, 2010): 891. http://dx.doi.org/10.1167/10.7.891.
28. Ninomiya, Yoshiki, Yoshihide Ban, Toshiki Maeno, Daisuke Negi, Chiyomi Miyajima, Kensaku Mori, Takayuki Kitasaka, and Yasuhito Suenaga. "Voice Activity Detection for Driver Using Audio-Visual Integration." Journal of The Institute of Image Information and Television Engineers 62, no. 3 (2008): 435–41. http://dx.doi.org/10.3169/itej.62.435.
29. Lee, Jong-Seok, and Cheol Hoon Park. "Robust Audio-Visual Speech Recognition Based on Late Integration." IEEE Transactions on Multimedia 10, no. 5 (August 2008): 767–79. http://dx.doi.org/10.1109/tmm.2008.922789.
30. Schutz, M., and J. Vaisberg. "The role of biological motion in audio-visual integration." Journal of Vision 13, no. 9 (July 25, 2013): 881. http://dx.doi.org/10.1167/13.9.881.
31. Leo, Fabrizio, and Uta Noppeney. "Conditioning Influences Audio-Visual Integration by Increasing Sound Saliency." i-Perception 2, no. 8 (October 2011): 762. http://dx.doi.org/10.1068/ic762.
32. Adams, Wendy J. "The Development of Audio-Visual Integration for Temporal Judgements." PLOS Computational Biology 12, no. 4 (April 14, 2016): e1004865. http://dx.doi.org/10.1371/journal.pcbi.1004865.
33. Teder-Sälejärvi, W. A., F. Di Russo, J. J. McDonald, and S. A. Hillyard. "Effects of Spatial Congruity on Audio-Visual Multimodal Integration." Journal of Cognitive Neuroscience 17, no. 9 (September 2005): 1396–409. http://dx.doi.org/10.1162/0898929054985383.
Abstract:
Spatial constraints on multisensory integration of auditory (A) and visual (V) stimuli were investigated in humans using behavioral and electrophysiological measures. The aim was to find out whether cross-modal interactions between A and V stimuli depend on their spatial congruity, as has been found for multisensory neurons in animal studies (Stein & Meredith, 1993). Randomized sequences of unimodal (A or V) and simultaneous bimodal (AV) stimuli were presented to right- or left-field locations while subjects made speeded responses to infrequent targets of greater intensity that occurred in either or both modalities. Behavioral responses to the bimodal stimuli were faster and more accurate than to the unimodal stimuli for both same-location and different-location AV pairings. The neural basis of this cross-modal facilitation was studied by comparing event-related potentials (ERPs) to the bimodal AV stimuli with the summed ERPs to the unimodal A and V stimuli. These comparisons revealed neural interactions localized to the ventral occipito-temporal cortex (at 190 msec) and to the superior temporal cortical areas (at 260 msec) for both same- and different-location AV pairings. In contrast, ERP interactions that differed according to spatial congruity included a phase and amplitude modulation of visual-evoked activity localized to the ventral occipito-temporal cortex at 100-400 msec and an amplitude modulation of activity localized to the superior temporal region at 260-280 msec. These results demonstrate overlapping but distinctive patterns of multisensory integration for spatially congruent and incongruent AV stimuli.
34. Armstrong, Alan, and Johann Issartel. "Sensorimotor synchronization with audio-visual stimuli: limited multisensory integration." Experimental Brain Research 232, no. 11 (July 16, 2014): 3453–63. http://dx.doi.org/10.1007/s00221-014-4031-9.
35. Steinweg, Benjamin, and Fred W. Mast. "Semantic incongruity influences response caution in audio-visual integration." Experimental Brain Research 235, no. 1 (October 12, 2016): 349–63. http://dx.doi.org/10.1007/s00221-016-4796-0.
36. Blackburn, Catherine L., Pádraig T. Kitterick, Gary Jones, Christian J. Sumner, and Paula C. Stacey. "Visual Speech Benefit in Clear and Degraded Speech Depends on the Auditory Intelligibility of the Talker and the Number of Background Talkers." Trends in Hearing 23 (January 2019): 233121651983786. http://dx.doi.org/10.1177/2331216519837866.
Abstract:
Perceiving speech in background noise presents a significant challenge to listeners. Intelligibility can be improved by seeing the face of a talker. This is of particular value to hearing impaired people and users of cochlear implants. It is well known that auditory-only speech understanding depends on factors beyond audibility. How these factors impact on the audio-visual integration of speech is poorly understood. We investigated audio-visual integration when either the interfering background speech (Experiment 1) or intelligibility of the target talkers (Experiment 2) was manipulated. Clear speech was also contrasted with sine-wave vocoded speech to mimic the loss of temporal fine structure with a cochlear implant. Experiment 1 showed that for clear speech, the visual speech benefit was unaffected by the number of background talkers. For vocoded speech, a larger benefit was found when there was only one background talker. Experiment 2 showed that visual speech benefit depended upon the audio intelligibility of the talker and increased as intelligibility decreased. Degrading the speech by vocoding resulted in even greater benefit from visual speech information. A single “independent noise” signal detection theory model predicted the overall visual speech benefit in some conditions but could not predict the different levels of benefit across variations in the background or target talkers. This suggests that, similar to audio-only speech intelligibility, the integration of audio-visual speech cues may be functionally dependent on factors other than audibility and task difficulty, and that clinicians and researchers should carefully consider the characteristics of their stimuli when assessing audio-visual integration.
37. Ren, Yanna, Zhihan Xu, Sa Lu, Tao Wang, and Weiping Yang. "Stimulus Specific to Age-Related Audio-Visual Integration in Discrimination Tasks." i-Perception 11, no. 6 (November 2020): 204166952097841. http://dx.doi.org/10.1177/2041669520978419.
Abstract:
Age-related audio-visual integration (AVI) has been investigated extensively; however, AVI ability is either enhanced or reduced with ageing, and this matter is still controversial because of the lack of systematic investigations. To remove possible variates, 26 older adults and 26 younger adults were recruited to conduct meaningless and semantic audio-visual discrimination tasks to assess the ageing effect of AVI systematically. The results for the mean response times showed a significantly faster response to the audio-visual (AV) target than that to the auditory (A) or visual (V) target and a significantly faster response to all targets by the younger adults than that by the older adults (A, V, and AV) in all conditions. In addition, a further comparison of the differences between the probability of audio-visual cumulative distributive functions (CDFs) and race model CDFs showed delayed AVI effects and a longer time window for AVI in older adults than that in younger adults in all conditions. The AVI effect was lower in older adults than that in younger adults during simple meaningless image discrimination (63.0 ms vs. 108.8 ms), but the findings were inverse during semantic image discrimination (310.3 ms vs. 127.2 ms). In addition, there was no significant difference between older and younger adults during semantic character discrimination (98.1 ms vs. 117.2 ms). These results suggested that AVI ability was impaired in older adults, but a compensatory mechanism was established for processing sematic audio-visual stimuli.
38. Sánchez-García, Carolina, Sonia Kandel, Christophe Savariaux, and Salvador Soto-Faraco. "The Time Course of Audio-Visual Phoneme Identification: a High Temporal Resolution Study." Multisensory Research 31, no. 1-2 (2018): 57–78. http://dx.doi.org/10.1163/22134808-00002560.
Abstract:
Speech unfolds in time and, as a consequence, its perception requires temporal integration. Yet, studies addressing audio-visual speech processing have often overlooked this temporal aspect. Here, we address the temporal course of audio-visual speech processing in a phoneme identification task using a Gating paradigm. We created disyllabic Spanish word-like utterances (e.g., /pafa/, /paθa/, …) from high-speed camera recordings. The stimuli differed only in the middle consonant (/f/, /θ/, /s/, /r/, /g/), which varied in visual and auditory saliency. As in classical Gating tasks, the utterances were presented in fragments of increasing length (gates), here in 10 ms steps, for identification and confidence ratings. We measured correct identification as a function of time (at each gate) for each critical consonant in audio, visual and audio-visual conditions, and computed the Identification Point and Recognition Point scores. The results revealed that audio-visual identification is a time-varying process that depends on the relative strength of each modality (i.e., saliency). In some cases, audio-visual identification followed the pattern of one dominant modality (either A or V), when that modality was very salient. In other cases, both modalities contributed to identification, hence resulting in audio-visual advantage or interference with respect to unimodal conditions. Both unimodal dominance and audio-visual interaction patterns may arise within the course of identification of the same utterance, at different times. The outcome of this study suggests that audio-visual speech integration models should take into account the time-varying nature of visual and auditory saliency.
39. Nakadai, Kazuhiro, and Tomoaki Koiwa. "Psychologically-Inspired Audio-Visual Speech Recognition Using Coarse Speech Recognition and Missing Feature Theory." Journal of Robotics and Mechatronics 29, no. 1 (February 20, 2017): 105–13. http://dx.doi.org/10.20965/jrm.2017.p0105.
Abstract:
[Figure: System architecture of AVSR based on missing feature theory and P-V grouping.] Audio-visual speech recognition (AVSR) is a promising approach to improving the noise robustness of speech recognition in the real world. For AVSR, the auditory and visual units are the phoneme and viseme, respectively. However, these are often misclassified in the real world because of noisy input. To solve this problem, we propose two psychologically-inspired approaches. One is audio-visual integration based on missing feature theory (MFT) to cope with missing or unreliable audio and visual features for recognition. The other is phoneme and viseme grouping based on coarse-to-fine recognition. Preliminary experiments show that these two approaches are effective for audio-visual speech recognition. Integration based on MFT with an appropriate weight improves the recognition performance by −5 dB. This is the case even in a noisy environment, in which most speech recognition systems do not work properly. Phoneme and viseme grouping further improved the AVSR performance, particularly at a low signal-to-noise ratio. **This work is an extension of our publication “Tomoaki Koiwa et al.: Coarse speech recognition by audio-visual integration based on missing feature theory, IROS 2007, pp.1751-1756, 2007.”
40. Naibaho, Lamhot. "The Integration of Group Discussion Method Using Audio Visual Learning Media toward Students’ Learning Achievement on Listening." International Journal of Research -GRANTHAALAYAH 7, no. 8 (August 31, 2019): 438–45. http://dx.doi.org/10.29121/granthaalayah.v7.i8.2019.697.
Abstract:
This research focuses on the integration of the group discussion method using audio-visual learning media. The aim of conducting this research is to find out how the integration of the group discussion method using audio-visual learning media affects students' learning achievement on listening. The method of the study was qualitative with a quasi-experimental design. It was done at Teruna Muda Junior High School. The instruments used were a set of tests and observation sheets. The results of the research are: a) the average result using the QA method with audio-visual learning media is 74; and b) student learning outcomes using the QA method with audio media are no better than the student learning outcomes provided with conventional learning models using audio media. It is concluded that teachers may implement the QA method or conventional learning when teaching listening to students in the classroom.
41. Rajavel, R., and P. S. Sathidevi. "Optimum integration weight for decision fusion audio-visual speech recognition." International Journal of Computational Science and Engineering 10, no. 1/2 (2015): 145. http://dx.doi.org/10.1504/ijcse.2015.067044.
42. Kawakami, Sayaka, Shota Uono, Sadao Otsuka, Sayaka Yoshimura, Shuo Zhao, and Motomi Toichi. "Audio-visual integration and temporal resolution in Autism Spectrum Disorder." Proceedings of the Annual Convention of the Japanese Psychological Association 83 (September 11, 2019): 3D—038–3D—038. http://dx.doi.org/10.4992/pacjpa.83.0_3d-038.
43. Mitchell, Helen F., and Raymond A. R. MacDonald. "Listeners as spectators? Audio-visual integration improves music performer identification." Psychology of Music 42, no. 1 (November 14, 2012): 112–27. http://dx.doi.org/10.1177/0305735612463771.
44. Derrick, Donald, Doreen Hansmann, and Catherine Theys. "Tri-modal speech: Audio-visual-tactile integration in speech perception." Journal of the Acoustical Society of America 146, no. 5 (November 2019): 3495–504. http://dx.doi.org/10.1121/1.5134064.
45. Wyk, Brent C. Vander, Gordon J. Ramsay, Caitlin M. Hudac, Warren Jones, David Lin, Ami Klin, Su Mei Lee, and Kevin A. Pelphrey. "Cortical integration of audio–visual speech and non-speech stimuli." Brain and Cognition 74, no. 2 (November 2010): 97–106. http://dx.doi.org/10.1016/j.bandc.2010.07.002.
46. Yu, Jianwei, Shi-Xiong Zhang, Bo Wu, Shansong Liu, Shoukang Hu, Mengzhe Geng, Xunying Liu, Helen Meng, and Dong Yu. "Audio-Visual Multi-Channel Integration and Recognition of Overlapped Speech." IEEE/ACM Transactions on Audio, Speech, and Language Processing 29 (2021): 2067–82. http://dx.doi.org/10.1109/taslp.2021.3078883.
47. Gori, Monica, Claudio Campus, and Giulia Cappagli. "Late development of audio-visual integration in the vertical plane." Current Research in Behavioral Sciences 2 (November 2021): 100043. http://dx.doi.org/10.1016/j.crbeha.2021.100043.
48. Martolini, Chiara, Giulia Cappagli, Sabrina Signorini, and Monica Gori. "Effects of Increasing Stimulated Area in Spatiotemporally Congruent Unisensory and Multisensory Conditions." Brain Sciences 11, no. 3 (March 9, 2021): 343. http://dx.doi.org/10.3390/brainsci11030343.
Abstract:
Research has shown that the ability to integrate complementary sensory inputs into a unique and coherent percept based on spatiotemporal coincidence can improve perceptual precision, namely multisensory integration. Despite the extensive research on multisensory integration, very little is known about the principal mechanisms responsible for the spatial interaction of multiple sensory stimuli. Furthermore, it is not clear whether the size of spatialized stimulation can affect unisensory and multisensory perception. The present study aims to unravel whether the stimulated area’s increase has a detrimental or beneficial effect on sensory threshold. Sixteen typical adults were asked to discriminate unimodal (visual, auditory, tactile), bimodal (audio-visual, audio-tactile, visuo-tactile) and trimodal (audio-visual-tactile) stimulation produced by one, two, three or four devices positioned on the forearm. Results related to unisensory conditions indicate that the increase of the stimulated area has a detrimental effect on auditory and tactile accuracy and visual reaction times, suggesting that the size of stimulated areas affects these perceptual stimulations. Concerning multisensory stimulation, our findings indicate that integrating auditory and tactile information improves sensory precision only when the stimulation area is augmented to four devices, suggesting that multisensory interaction is occurring for expanded spatial areas.
49. Moorefield, Virgil, and Jeffrey Weeter. "The Lucid Dream Ensemble: a laboratory of discovery in the age of convergence." Organised Sound 9, no. 3 (December 2004): 271–81. http://dx.doi.org/10.1017/s1355771804000469.
Abstract:
This paper describes the formation, creations and performances of a digital arts performing ensemble. It considers issues of collaboration across media, especially in the context of composed audio-visual improvisation (comprovisation). By contextualising our creative work within the wider discourse of both the practice and aesthetics of contemporary intermedia, we seek to enhance the potential relevancy of the article's main focus to readers. The story of the Lucid Dream Ensemble is one of contemporary creative activity in the realm of digital arts. It exists in a university setting, and its purpose is educational as well as artistic. Founded at Northwestern University in Evanston, Illinois in 2002, the ensemble seeks to foster collaboration across auditory and visual boundaries. The group is made up of audio-visual performers who control laptop computers in real-time by various interactive means. Its canvas is a surround-sound set-up, as well as three projectors. The ensemble seeks a true integration of video and audio, as expressed in the creation of audio-visual artefacts aimed at providing an immersive experience. The group has progressed from presenting unrelated audio and video, to integrating the process of creation of sound and image, to collecting audio and video as a group and processing them collaboratively.
50. Molholm, Sophie, Pejman Sehatpour, Ashesh D. Mehta, Marina Shpaner, Manuel Gomez-Ramirez, Stephanie Ortigue, Jonathan P. Dyke, Theodore H. Schwartz, and John J. Foxe. "Audio-Visual Multisensory Integration in Superior Parietal Lobule Revealed by Human Intracranial Recordings." Journal of Neurophysiology 96, no. 2 (August 2006): 721–29. http://dx.doi.org/10.1152/jn.00285.2006.
Abstract:
Intracranial recordings from three human subjects provide the first direct electrophysiological evidence for audio-visual multisensory processing in the human superior parietal lobule (SPL). Auditory and visual sensory inputs project to the same highly localized region of the parietal cortex with auditory inputs arriving considerably earlier (30 ms) than visual inputs (75 ms). Multisensory integration processes in this region were assessed by comparing the response to simultaneous audio-visual stimulation with the algebraic sum of responses to the constituent auditory and visual unisensory stimulus conditions. Significant integration effects were seen with almost identical morphology across the three subjects, beginning between 120 and 160 ms. These results are discussed in the context of the role of SPL in supramodal spatial attention and sensory-motor transformations.