Journal articles on the topic "Visual speech information"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the top 50 journal articles for research on the topic "Visual speech information."
Next to every work in the list of references there is an "Add to bibliography" option. Use it, and a bibliographic reference for the chosen work will be generated automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scientific publication as a PDF and read its online annotation, provided the relevant parameters are available in the metadata.
Browse journal articles from a wide range of disciplines and compile your bibliography correctly.
Miller, Rachel M., Kauyumari Sanchez, and Lawrence D. Rosenblum. "Alignment to visual speech information." Attention, Perception, & Psychophysics 72, no. 6 (August 2010): 1614–25. http://dx.doi.org/10.3758/app.72.6.1614.
Rosenblum, Lawrence D., Deborah A. Yakel, Naser Baseer, Anjani Panchal, Brynn C. Nodarse, and Ryan P. Niehus. "Visual speech information for face recognition." Perception & Psychophysics 64, no. 2 (February 2002): 220–29. http://dx.doi.org/10.3758/bf03195788.
Yakel, Deborah A., and Lawrence D. Rosenblum. "Face identification using visual speech information." Journal of the Acoustical Society of America 100, no. 4 (October 1996): 2570. http://dx.doi.org/10.1121/1.417401.
Weinholtz, Chase, and James W. Dias. "Categorical perception of visual speech information." Journal of the Acoustical Society of America 139, no. 4 (April 2016): 2018. http://dx.doi.org/10.1121/1.4949950.
Hisanaga, Satoko, Kaoru Sekiyama, Tomohiko Igasaki, and Nobuki Murayama. "Effects of visual information on audio-visual speech processing." Proceedings of the Annual Convention of the Japanese Psychological Association 75 (September 15, 2011): 2AM061. http://dx.doi.org/10.4992/pacjpa.75.0_2am061.
Sell, Andrea J., and Michael P. Kaschak. "Does visual speech information affect word segmentation?" Memory & Cognition 37, no. 6 (September 2009): 889–94. http://dx.doi.org/10.3758/mc.37.6.889.
Hall, Michael D., Paula M. T. Smeele, and Patricia K. Kuhl. "Integration of auditory and visual speech information." Journal of the Acoustical Society of America 103, no. 5 (May 1998): 2985. http://dx.doi.org/10.1121/1.421677.
McGiverin, Rolland. "Speech, Hearing and Visual." Behavioral & Social Sciences Librarian 8, no. 3-4 (April 16, 1990): 73–78. http://dx.doi.org/10.1300/j103v08n03_12.
Hollich, George J., Peter W. Jusczyk, and Rochelle S. Newman. "Infants' use of visual information in speech segmentation." Journal of the Acoustical Society of America 110, no. 5 (November 2001): 2703. http://dx.doi.org/10.1121/1.4777318.
Tekin, Ender, James Coughlan, and Helen Simon. "Improving speech enhancement algorithms by incorporating visual information." Journal of the Acoustical Society of America 134, no. 5 (November 2013): 4237. http://dx.doi.org/10.1121/1.4831575.
Ujiie, Yuta, and Kohske Takahashi. "Weaker McGurk Effect for Rubin’s Vase-Type Speech in People With High Autistic Traits." Multisensory Research 34, no. 6 (April 16, 2021): 663–79. http://dx.doi.org/10.1163/22134808-bja10047.
Reed, Rebecca K., and Edward T. Auer. "Influence of visual speech information on the identification of foreign accented speech." Journal of the Acoustical Society of America 125, no. 4 (April 2009): 2660. http://dx.doi.org/10.1121/1.4784199.
Kim, Jeesun, and Chris Davis. "How visual timing and form information affect speech and non-speech processing." Brain and Language 137 (October 2014): 86–90. http://dx.doi.org/10.1016/j.bandl.2014.07.012.
Sams, M. "Audiovisual Speech Perception." Perception 26, no. 1_suppl (August 1997): 347. http://dx.doi.org/10.1068/v970029.
Plass, John, David Brang, Satoru Suzuki, and Marcia Grabowecky. "Vision perceptually restores auditory spectral dynamics in speech." Proceedings of the National Academy of Sciences 117, no. 29 (July 6, 2020): 16920–27. http://dx.doi.org/10.1073/pnas.2002887117.
Karpov, Alexey Anatolyevich. "Assistive Information Technologies based on Audio-Visual Speech Interfaces." SPIIRAS Proceedings 4, no. 27 (March 17, 2014): 114. http://dx.doi.org/10.15622/sp.27.10.
Whalen, D. H., Julia Irwin, and Carol A. Fowler. "Audiovisual integration of speech based on minimal visual information." Journal of the Acoustical Society of America 100, no. 4 (October 1996): 2569. http://dx.doi.org/10.1121/1.417395.
Gurban, M., and J. P. Thiran. "Information Theoretic Feature Extraction for Audio-Visual Speech Recognition." IEEE Transactions on Signal Processing 57, no. 12 (December 2009): 4765–76. http://dx.doi.org/10.1109/tsp.2009.2026513.
Mishra, Sushmit, Thomas Lunner, Stefan Stenfelt, Jerker Rönnberg, and Mary Rudner. "Visual Information Can Hinder Working Memory Processing of Speech." Journal of Speech, Language, and Hearing Research 56, no. 4 (August 2013): 1120–32. http://dx.doi.org/10.1044/1092-4388(2012/12-0033).
Borrie, Stephanie A. "Visual speech information: A help or hindrance in perceptual processing of dysarthric speech." Journal of the Acoustical Society of America 137, no. 3 (March 2015): 1473–80. http://dx.doi.org/10.1121/1.4913770.
Wayne, Rachel V., and Ingrid S. Johnsrude. "The role of visual speech information in supporting perceptual learning of degraded speech." Journal of Experimental Psychology: Applied 18, no. 4 (2012): 419–35. http://dx.doi.org/10.1037/a0031042.
Winneke, Axel H., and Natalie A. Phillips. "Brain processes underlying the integration of audio-visual speech and non-speech information." Brain and Cognition 67 (June 2008): 45. http://dx.doi.org/10.1016/j.bandc.2008.02.096.
Sánchez-García, Carolina, Sonia Kandel, Christophe Savariaux, Nara Ikumi, and Salvador Soto-Faraco. "Time course of audio–visual phoneme identification: A cross-modal gating study." Seeing and Perceiving 25 (2012): 194. http://dx.doi.org/10.1163/187847612x648233.
Yordamlı, Arzu, and Doğu Erdener. "Auditory–Visual Speech Integration in Bipolar Disorder: A Preliminary Study." Languages 3, no. 4 (October 17, 2018): 38. http://dx.doi.org/10.3390/languages3040038.
Drijvers, Linda, and Asli Özyürek. "Visual Context Enhanced: The Joint Contribution of Iconic Gestures and Visible Speech to Degraded Speech Comprehension." Journal of Speech, Language, and Hearing Research 60, no. 1 (January 2017): 212–22. http://dx.doi.org/10.1044/2016_jslhr-h-16-0101.
Rosenblum, Lawrence D. "Speech Perception as a Multimodal Phenomenon." Current Directions in Psychological Science 17, no. 6 (December 2008): 405–9. http://dx.doi.org/10.1111/j.1467-8721.2008.00615.x.
Mishra, Saumya, Anup Kumar Gupta, and Puneet Gupta. "DARE: Deceiving Audio–Visual speech Recognition model." Knowledge-Based Systems 232 (November 2021): 107503. http://dx.doi.org/10.1016/j.knosys.2021.107503.
Callan, Daniel E., Jeffery A. Jones, Kevin Munhall, Christian Kroos, Akiko M. Callan, and Eric Vatikiotis-Bateson. "Multisensory Integration Sites Identified by Perception of Spatial Wavelet Filtered Visual Speech Gesture Information." Journal of Cognitive Neuroscience 16, no. 5 (June 2004): 805–16. http://dx.doi.org/10.1162/089892904970771.
Hertrich, Ingo, Susanne Dietrich, and Hermann Ackermann. "Cross-modal Interactions during Perception of Audiovisual Speech and Nonspeech Signals: An fMRI Study." Journal of Cognitive Neuroscience 23, no. 1 (January 2011): 221–37. http://dx.doi.org/10.1162/jocn.2010.21421.
Everdell, Ian T., Heidi Marsh, Micheal D. Yurick, Kevin G. Munhall, and Martin Paré. "Gaze Behaviour in Audiovisual Speech Perception: Asymmetrical Distribution of Face-Directed Fixations." Perception 36, no. 10 (October 2007): 1535–45. http://dx.doi.org/10.1068/p5852.
Jesse, Alexandra, Nick Vrignaud, Michael M. Cohen, and Dominic W. Massaro. "The processing of information from multiple sources in simultaneous interpreting." Interpreting. International Journal of Research and Practice in Interpreting 5, no. 2 (December 31, 2000): 95–115. http://dx.doi.org/10.1075/intp.5.2.04jes.
Jia, Xi Bin, and Mei Xia Zheng. "Video Based Visual Speech Feature Model Construction." Applied Mechanics and Materials 182-183 (June 2012): 1367–71. http://dx.doi.org/10.4028/www.scientific.net/amm.182-183.1367.
Shi, Li Juan, Ping Feng, Jian Zhao, Li Rong Wang, and Na Che. "Study on Dual Mode Fusion Method of Video and Audio." Applied Mechanics and Materials 734 (February 2015): 412–15. http://dx.doi.org/10.4028/www.scientific.net/amm.734.412.
Dias, James W., and Lawrence D. Rosenblum. "Visual Influences on Interactive Speech Alignment." Perception 40, no. 12 (January 1, 2011): 1457–66. http://dx.doi.org/10.1068/p7071.
Campbell, Ruth. "The processing of audio-visual speech: empirical and neural bases." Philosophical Transactions of the Royal Society B: Biological Sciences 363, no. 1493 (September 7, 2007): 1001–10. http://dx.doi.org/10.1098/rstb.2007.2155.
Metzger, Brian A., John F. Magnotti, Elizabeth Nesbitt, Daniel Yoshor, and Michael S. Beauchamp. "Cross-modal suppression model of speech perception: Visual information drives suppressive interactions between visual and auditory speech in pSTG." Journal of Vision 20, no. 11 (October 20, 2020): 434. http://dx.doi.org/10.1167/jov.20.11.434.
Irwin, Julia, Trey Avery, Lawrence Brancazio, Jacqueline Turcios, Kayleigh Ryherd, and Nicole Landi. "Electrophysiological Indices of Audiovisual Speech Perception: Beyond the McGurk Effect and Speech in Noise." Multisensory Research 31, no. 1-2 (2018): 39–56. http://dx.doi.org/10.1163/22134808-00002580.
Van Engen, Kristin J., Jasmine E. B. Phelps, Rajka Smiljanic, and Bharath Chandrasekaran. "Enhancing Speech Intelligibility: Interactions Among Context, Modality, Speech Style, and Masker." Journal of Speech, Language, and Hearing Research 57, no. 5 (October 2014): 1908–18. http://dx.doi.org/10.1044/jslhr-h-13-0076.
Records, Nancy L. "A Measure of the Contribution of a Gesture to the Perception of Speech in Listeners With Aphasia." Journal of Speech, Language, and Hearing Research 37, no. 5 (October 1994): 1086–99. http://dx.doi.org/10.1044/jshr.3705.1086.
Helfer, Karen S. "Auditory and Auditory-Visual Perception of Clear and Conversational Speech." Journal of Speech, Language, and Hearing Research 40, no. 2 (April 1997): 432–43. http://dx.doi.org/10.1044/jslhr.4002.432.
Taitelbaum-Swead, Riki, and Leah Fostick. "Auditory and visual information in speech perception: A developmental perspective." Clinical Linguistics & Phonetics 30, no. 7 (March 30, 2016): 531–45. http://dx.doi.org/10.3109/02699206.2016.1151938.
Yakel, Deborah A., and Lawrence D. Rosenblum. "Time‐varying information for vowel identification in visual speech perception." Journal of the Acoustical Society of America 108, no. 5 (November 2000): 2482. http://dx.doi.org/10.1121/1.4743160.
Johnson, Jennifer A., and Lawrence D. Rosenblum. "Hemispheric differences in perceiving and integrating dynamic visual speech information." Journal of the Acoustical Society of America 100, no. 4 (October 1996): 2570. http://dx.doi.org/10.1121/1.417400.
Ogihara, Akio, Akira Shintani, Naoshi Doi, and Kunio Fukunaga. "HMM Speech Recognition Using Fusion of Visual and Auditory Information." IEEJ Transactions on Electronics, Information and Systems 115, no. 11 (1995): 1317–24. http://dx.doi.org/10.1541/ieejeiss1987.115.11_1317.
Keintz, Connie K., Kate Bunton, and Jeannette D. Hoit. "Influence of Visual Information on the Intelligibility of Dysarthric Speech." American Journal of Speech-Language Pathology 16, no. 3 (August 2007): 222–34. http://dx.doi.org/10.1044/1058-0360(2007/027).
Yuan, Yi, Andrew Lotto, and Yonghee Oh. "Temporal cues from visual information benefit speech perception in noise." Journal of the Acoustical Society of America 146, no. 4 (October 2019): 3056. http://dx.doi.org/10.1121/1.5137604.
Blank, Helen, and Katharina von Kriegstein. "Mechanisms of enhancing visual–speech recognition by prior auditory information." NeuroImage 65 (January 2013): 109–18. http://dx.doi.org/10.1016/j.neuroimage.2012.09.047.
Moon, Il-Joon, Mini Jo, Ga-Young Kim, Nicolas Kim, Young-Sang Cho, Sung-Hwa Hong, and Hye-Yoon Seol. "How Does a Face Mask Impact Speech Perception?" Healthcare 10, no. 9 (September 7, 2022): 1709. http://dx.doi.org/10.3390/healthcare10091709.
Kubicek, Claudia, Anne Hillairet de Boisferon, Eve Dupierrix, Hélène Lœvenbruck, Judit Gervain, and Gudrun Schwarzer. "Face-scanning behavior to silently-talking faces in 12-month-old infants: The impact of pre-exposed auditory speech." International Journal of Behavioral Development 37, no. 2 (February 25, 2013): 106–10. http://dx.doi.org/10.1177/0165025412473016.
McCotter, Maxine V., and Timothy R. Jordan. "The Role of Facial Colour and Luminance in Visual and Audiovisual Speech Perception." Perception 32, no. 8 (August 2003): 921–36. http://dx.doi.org/10.1068/p3316.