Academic literature on the topic 'Audio-visual integration'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Audio-visual integration.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "Audio-visual integration"
Pérez-Bellido, Alexis, Marc O. Ernst, Salvador Soto-Faraco, and Joan López-Moliner. "Visual limitations shape audio-visual integration." Journal of Vision 15, no. 14 (October 13, 2015): 5. http://dx.doi.org/10.1167/15.14.5.
de Gelder, Beatrice, Jean Vroomen, Leonie Annen, Erik Masthof, and Paul Hodiamont. "Audio-visual integration in schizophrenia." Schizophrenia Research 59, no. 2-3 (February 2003): 211–18. http://dx.doi.org/10.1016/s0920-9964(01)00344-9.
Maddox, Ross K. "What studies of audio-visual integration do not teach us about audio-visual integration." Journal of the Acoustical Society of America 145, no. 3 (March 2019): 1759. http://dx.doi.org/10.1121/1.5101440.
Kaposvári, Péter, Gergő Csete, Anna Bognár, Péter Csibri, Eszter Tóth, Nikoletta Szabó, László Vécsei, Gyula Sáry, and Zsigmond Tamás Kincses. "Audio–visual integration through the parallel visual pathways." Brain Research 1624 (October 2015): 71–77. http://dx.doi.org/10.1016/j.brainres.2015.06.036.
Chen, Tsuhan, and R. R. Rao. "Audio-visual integration in multimodal communication." Proceedings of the IEEE 86, no. 5 (May 1998): 837–52. http://dx.doi.org/10.1109/5.664274.
Makovac, Elena, Antimo Buonocore, and Robert D. McIntosh. "Audio-visual integration and saccadic inhibition." Quarterly Journal of Experimental Psychology 68, no. 7 (July 2015): 1295–305. http://dx.doi.org/10.1080/17470218.2014.979210.
Wada, Yuji, Norimichi Kitagawa, and Kaoru Noguchi. "Audio–visual integration in temporal perception." International Journal of Psychophysiology 50, no. 1-2 (October 2003): 117–24. http://dx.doi.org/10.1016/s0167-8760(03)00128-4.
Bergman, Penny, Daniel Västfjäll, and Ana Tajadura-Jiménez. "Audio-Visual Integration of Emotional Information." i-Perception 2, no. 8 (October 2011): 781. http://dx.doi.org/10.1068/ic781.
Collignon, Olivier, Simon Girard, Frederic Gosselin, Sylvain Roy, Dave Saint-Amour, Maryse Lassonde, and Franco Lepore. "Audio-visual integration of emotion expression." Brain Research 1242 (November 2008): 126–35. http://dx.doi.org/10.1016/j.brainres.2008.04.023.
Sürig, Ralf, Davide Bottari, and Brigitte Röder. "Transfer of Audio-Visual Temporal Training to Temporal and Spatial Audio-Visual Tasks." Multisensory Research 31, no. 6 (2018): 556–78. http://dx.doi.org/10.1163/22134808-00002611.
Full textDissertations / Theses on the topic "Audio-visual integration"
Dietrich, Kelly. "Analysis of talker characteristics in audio-visual speech integration." Ohio State University, 2008. http://hdl.handle.net/1811/32149.
Mihalik, Agoston. "The neural basis of audio-visual integration and adaptation." Thesis, University of Birmingham, 2017. http://etheses.bham.ac.uk//id/eprint/7692/.
Makovac, Elena. "Audio-visual interactions in manual and saccadic responses." Thesis, University of Edinburgh, 2013. http://hdl.handle.net/1842/8040.
Exner, Megan. "Training effects in audio-visual integration of sine wave speech." Ohio State University, 2008. http://hdl.handle.net/1811/32154.
Slabu, Lavinia Mihaela. "Auditory processing in the brainstem and audio visual integration in humans studied with fMRI." Doctoral thesis, University of Groningen, 2007. http://irs.ub.rug.nl/ppn/305609564.
Megnin, O. "Electrophysiological correlates of audio-visual integration of spoken words in typical development and autism spectrum disorder." Thesis, University College London (University of London), 2010. http://discovery.ucl.ac.uk/19735/.
Zachau, S. (Swantje). "Signs in the brain: Hearing signers’ cross-linguistic semantic integration strategies." Doctoral thesis, University of Oulu, 2016. http://urn.fi/urn:isbn:9789526213293.
Full textTiivistelmä Kuuloaistiin ja ääntöelimistön motoriikkaan perustuva puhe ja kuurojen yhteisön käyttämä, näköaistiin ja käsien liikkeisiin perustuva viittomakieli ovat kaksi varsin erilaista ihmisen kielellisen viestintäjärjestelmän toteutumismuotoa. Viittomakieltä käyttävät kuulovammaisten ohella myös monet kuulevat ihmisryhmät. Tähänastinen tutkimustiedon määrä viittomakielistä ja puhutuista kielistä eroaa huomattavasti. Erityisen vähän on tiedetty näiden kahden järjestelmän yhdistämisestä, vaikka valtaosa kuuroista ja kuulevista viittomakielen käyttäjistä hallitsee myös puheen jossain muodossa. Tämän neurolingvistisen tutkimuksen tarkoituksena oli hankkia perustietoja puheen ja viittomakielen välisistä semanttisista yhdistämismekanismeista kuulevilla, viittomakieltä äidinkielenään tai muuna kielenä käyttävillä henkilöillä. Viittomien prosessoinnin perusperiaatteita, jotka ilmenevät aivojen sähköisen toiminnan muutoksina ja valintapäätöksinä, tutkittiin kolmessa koehenkilöryhmässä: kuulevilla viittomakieltä äidinkielenään käyttävillä henkilöillä (kuurojen aikuisten kuulevilla ns. CODA-lapsilla, engl. children of deaf adults), kuulevilla viittomakielen myöhemmin oppineilla henkilöillä (viittomakielen ammattitulkeilla) sekä kuulevilla viittomakieltä osaamattomilla verrokkihenkilöillä. Tapahtumasidonnaiset herätepotentiaalit (ERP:t) ja käyttäytymisvasteen frekvenssit rekisteröitiin koehenkilöiden tehdessä semanttisia valintoja viritetyistä (engl. primed) lekseemipareista. Lekseemiparit esitettiin joko puheena (puhuttu viritesana – puhuttu kohdesana) tai puheen ja viittomakielen välillä (puhuttu viritesana – viitottu kohdesana). Kohdesidonnaisille ERP-vasteille tehtiin temporaaliset pääkomponenttianalyysit (tPCA). Semanttisten yhdistämisprosessien neurokognitiivista perustaa arvioitiin analysoimalla erilaisia ERP-komponentteja (N170, N400, myöhäinen positiivinen kompleksi) vastineina antonyymisiin ja toisiinsa liittymättömiin kohteisiin. Käyttäytymispäätöksen herkkyyttä kohdelekseemeille tarkastellaan suhteessa mitattuun aivojen aktiviteettiin. Käyttäytymisen osalta kaikki kolme koehenkilöryhmää suoriutuivat satunnaistasoa paremmin tehdessään semanttisia valintoja viritetyistä kohdelekseemeistä. Erilaiset tulosmallit viittaavat kuitenkin kolmeen erilaiseen prosessointistrategiaan. Kun kohdelukittua elektrofysiologista dataa analysoitiin pääkomponenttianalyysin avulla ensimmäistä kertaa viittomakielen prosessoinnin yhteydessä, voitiin tutkia tarkkaavaisuuden objektiivisesti allokoituja ERP-komponentteja. Oli jossain määrin yllättävää, että viittomakielellisesti natiivin verrokkiryhmän tulokset osoittivat sen jäsenten toimivan odotettua sisältölähtöisemmin. Tämä viittaa siihen, että viittomakieleen perehtymättömilläkin henkilöillä on perustaidot lingvistisesti ristiin viritettyjen viittomien prosessointiin. Yhdessä käyttäytymisperäiset ja elektrofysiologiset tutkimustulokset toivat esiin laadullisia eroja prosessoinnissa viittomakieltä äidinkielenään puhuvien henkilöiden ja kielen myöhemmin oppineiden henkilöiden välillä. Tämä puolestaan johtaa kysymykseen, voiko yksi viittomien prosessointimalli soveltua erilaisille viittomakielen käyttäjäryhmille?
Moulin, Samuel. "Quel son spatialisé pour la vidéo 3D ? : influence d'un rendu Wave Field Synthesis sur l'expérience audio-visuelle 3D." Thesis, Sorbonne Paris Cité, 2015. http://www.theses.fr/2015PA05H102/document.
The digital entertainment industry is undergoing a major evolution due to the recent spread of stereoscopic-3D videos. It is now possible to experience 3D by watching movies, playing video games, and so on. In this context, video catches most of the attention, but what about the accompanying audio rendering? Today, the most commonly used sound reproduction technologies are based on lateralization effects (stereophony, 5.1 surround systems). Nevertheless, it is natural to wonder about the need to introduce a new audio technology adapted to this new visual dimension: depth. Many alternative technologies seem able to render 3D sound environments (binaural technologies, ambisonics, Wave Field Synthesis). Using these technologies could potentially improve users' quality of experience: it could enhance the feeling of realism by adding audio-visual spatial congruence, but also the sensation of immersion. To validate this hypothesis, a 3D audio-visual rendering system was set up, coupling stereoscopic-3D images with Wave Field Synthesis sound rendering. Three research axes were then studied: 1/ Depth perception using unimodal or bimodal presentations: how well can the audio-visual system render the depth of visual, sound, and audio-visual objects? The experiments conducted show that Wave Field Synthesis can render virtual sound sources perceived at different distances; moreover, visual and audio-visual objects can be localized with higher accuracy than sound-only objects. 2/ Crossmodal integration in the depth dimension: how can the perception of congruence be guaranteed when audio-visual stimuli are spatially misaligned? The extent of the integration window was studied at different visual object distances; in other words, according to the visual stimulus position, we studied where sound objects should be placed to produce the perception of a single, unified audio-visual stimulus. 3/ 3D audio-visual quality of experience: what does sound depth rendering contribute to the 3D audio-visual quality of experience? We first assessed today's quality of experience using sound systems dedicated to the playback of 5.1 soundtracks (a 5.1 surround system, headphones, a soundbar) in combination with 3D videos, and then studied the impact of sound depth rendering using the audio-visual system described above (3D videos and Wave Field Synthesis).
Zannoli, Marina. "Organisation de l'espace audiovisuel tridimensionnel." PhD thesis, Université René Descartes - Paris V, 2012. http://tel.archives-ouvertes.fr/tel-00789816.
Wu, Kuan-Chen (吳冠辰). "The Integration of Digital Audio Workstation and Audio-Visualization: The Case of VISUAL PHOTIS." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/84v27a.
Fu Jen Catholic University, Department of Music, academic year 106 (2017–2018).
The ways music is created and presented have changed with the constant innovation of digital music production software and hardware. This study discusses the digital music production methods studied by the author, together with a technical discussion of audio-visual integration. The author's audio-visual integration project VISUAL PHOTIS is used as an example to analyze the user experience and the technological integration of music production software and hardware. The author first integrated the experience of using four digital music production software packages (iMaschine 2, Maschine 2, Cubase 8, and Bitwig 2) as the starting point of this research. After establishing a basic understanding of these packages' functions and production concepts, including their function areas, tool areas, work areas, and timelines, the study explores in depth the three packages the author used (iMaschine 2, Maschine 2, and Cubase 8) during the three stages of music composition: creative thinking, integration, and post-mixing. The author's own practice is also analyzed to illustrate how a customized music production workflow was built. After exploring the basic concepts of the digital music software and creative tools employed, the author's audio-visual integration project VISUAL PHOTIS is analyzed. The project combines the results of the author's integration of digital music and video production software, and the author analyzes the actual experience of performing the work in order to give a complete account of its materials and integration methods. Finally, the author interviewed other users and students from the author's digital music class to collect their questions on the practical operation of the interactive audio-visual production process, and integrated their responses regarding digital music composition. The purpose is to improve the production techniques explored by the author and to apply them in digital music education, providing audio-visual integration researchers with substantial assistance and references for the future.
Books on the topic "Audio-visual integration"
Wolf, Christian. Audio-Visual Integration in Smooth Pursuit Eye Movements. Wiesbaden: Springer Fachmedien Wiesbaden, 2015. http://dx.doi.org/10.1007/978-3-658-08311-3.
International Council for Educational Media, ed. The integration of media into the curriculum: An international report commissioned by the International Council for Educational Media. London: Kogan Page, 1986.
ESL and digital video integration: Case studies. Alexandria, Virginia: TESOL International Association, 2012.
Yūnus, Ṣādir. Bināʼ al-majāl al-ʻArabī: Muʼassasāt al-ʻilm wa-al-ʻamal. Bayrūt: Maʻhad al-Inmāʼ al-ʻArabī, 1991.
Alberta Learning. Kindergarten to grade 3 (primary programs), early literacy, early numeracy, integration, diagnostic assessment: Alberta authorized resource list. [Edmonton], AB: Alberta Learning, 2004.
Altman, Rick. The video connection: Integrating video into language teaching. Boston: Houghton Mifflin, 1989.
Guerrieri, Paolo, P. Lelio Iapadre, and Georg Koopmann, eds. Cultural Diversity and International Economic Integration: The Global Governance of the Audio-Visual Sector. Edward Elgar Publishing, 2005.
Tucker, Richard N. The Integration of Media into the Curriculum: An International Report Commissioned by the International Council for Educational Media. Kogan Page Ltd, 1987.
Find full textBook chapters on the topic "Audio-visual integration"
Wolf, Christian. "Experiment 1: Audio-visual coherence." In Audio-Visual Integration in Smooth Pursuit Eye Movements, 29–37. Wiesbaden: Springer Fachmedien Wiesbaden, 2014. http://dx.doi.org/10.1007/978-3-658-08311-3_2.
Wolf, Christian. "Experiment 2: Audio-visual velocity coherence." In Audio-Visual Integration in Smooth Pursuit Eye Movements, 39–48. Wiesbaden: Springer Fachmedien Wiesbaden, 2014. http://dx.doi.org/10.1007/978-3-658-08311-3_3.
Nakamura, Satoshi, Tatsuo Yotsukura, and Shigeo Morishima. "Human-Machine Communication by Audio-Visual Integration." In Intelligent Multimedia Processing with Soft Computing, 349–68. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/3-540-32367-8_16.
Marian, Viorica. "3. Audio-visual Integration During Bilingual Language Processing." In The Bilingual Mental Lexicon, edited by Aneta Pavlenko, 52–78. Bristol, Blue Ridge Summit: Multilingual Matters, 2009. http://dx.doi.org/10.21832/9781847691262-005.
Ganesh, Attigodu Chandrashekara, Frédéric Berthommier, and Jean-Luc Schwartz. "Audio Visual Integration with Competing Sources in the Framework of Audio Visual Speech Scene Analysis." In Advances in Experimental Medicine and Biology, 399–408. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-25474-6_42.
Seman, Noraini, Rosniza Roslan, Nursuriati Jamil, and Norizah Ardi. "Bimodality Streams Integration for Audio-Visual Speech Recognition Systems." In Hybrid Intelligent Systems, 127–39. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-27221-4_11.
Wolf, Christian. "General discussion." In Audio-Visual Integration in Smooth Pursuit Eye Movements, 49–52. Wiesbaden: Springer Fachmedien Wiesbaden, 2014. http://dx.doi.org/10.1007/978-3-658-08311-3_4.
Wolf, Christian. "Introduction." In Audio-Visual Integration in Smooth Pursuit Eye Movements, 1–27. Wiesbaden: Springer Fachmedien Wiesbaden, 2014. http://dx.doi.org/10.1007/978-3-658-08311-3_1.
Cao, Jiangtao, Naoyuki Kubota, Ping Li, and Honghai Liu. "Visual-Audio Integration for User Authentication System of Partner Robots." In Intelligent Robotics and Applications, 486–97. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-16587-0_45.
Movellan, Javier R., and George Chadderdon. "Channel Separability in the Audio-Visual Integration of Speech: A Bayesian Approach." In Speechreading by Humans and Machines, 473–87. Berlin, Heidelberg: Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/978-3-662-13015-5_36.
Full textConference papers on the topic "Audio-visual integration"
Deas, Lesley, Laurie M. Wilcox, Ali Kazimi, and Robert S. Allison. "Audio-visual integration in stereoscopic 3D." In SAP '13: ACM Symposium on Applied Perception 2013. New York, NY, USA: ACM, 2013. http://dx.doi.org/10.1145/2492494.2492506.
Konno, Takashi, Kenji Nishida, Katsutoshi Itoyama, and Kazuhiro Nakadai. "Audio-Visual 3D Reconstruction Framework for Dynamic Scenes." In 2020 IEEE/SICE International Symposium on System Integration (SII). IEEE, 2020. http://dx.doi.org/10.1109/sii46433.2020.9025812.
Yu, Wentao, Steffen Zeiler, and Dorothea Kolossa. "Multimodal Integration for Large-Vocabulary Audio-Visual Speech Recognition." In 2020 28th European Signal Processing Conference (EUSIPCO). IEEE, 2021. http://dx.doi.org/10.23919/eusipco47968.2020.9287841.
Bayram, Baris, and Gokhan Ince. "Audio-visual multi-person tracking for active robot perception." In 2015 IEEE/SICE International Symposium on System Integration (SII). IEEE, 2015. http://dx.doi.org/10.1109/sii.2015.7405043.
He, Weipeng, Haojun Guan, and Jianwei Zhang. "Multimodal object recognition from visual and audio sequences." In 2015 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI). IEEE, 2015. http://dx.doi.org/10.1109/mfi.2015.7295798.
Ni, Liya, Marek Krzeminski, and Kevin Tuer. "Application of haptic, visual and audio integration in astronomy education." In 2006 IEEE International Workshop on Haptic Audio Visual Environments and Their Applications. IEEE, 2006. http://dx.doi.org/10.1109/have.2006.283790.
Wakabayashi, Yukoh, Koji Inoue, Hiromasa Yoshimoto, and Tatsuya Kawahara. "Speaker diarization based on audio-visual integration for smart posterboard." In 2014 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA). IEEE, 2014. http://dx.doi.org/10.1109/apsipa.2014.7041584.
Choi, Jong-suk, Munsang Kim, and Hyun-don Kim. "Probabilistic Speaker Localization in Noisy Environments by Audio-Visual Integration." In 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2006. http://dx.doi.org/10.1109/iros.2006.282260.
Bernardin, Keni, Rainer Stiefelhagen, and Alex Waibel. "Probabilistic integration of sparse audio-visual cues for identity tracking." In Proceedings of the 16th ACM International Conference on Multimedia. New York, NY, USA: ACM Press, 2008. http://dx.doi.org/10.1145/1459359.1459380.
Ninomiya, Hiroshi, Norihide Kitaoka, Satoshi Tamura, Yurie Iribe, and Kazuya Takeda. "Integration of deep bottleneck features for audio-visual speech recognition." In Interspeech 2015. ISCA, 2015. http://dx.doi.org/10.21437/interspeech.2015-204.