Academic literature on the topic 'Multi-modal interface'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Multi-modal interface.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "Multi-modal interface"
Kim, Laehyun, Yoha Hwang, Se Hyung Park, and Sungdo Ha. "Dental Training System using Multi-modal Interface." Computer-Aided Design and Applications 2, no. 5 (January 2005): 591–98. http://dx.doi.org/10.1080/16864360.2005.10738323.
Oka, Ryuichi, Takuichi Nishimura, and Takashi Endo. "Media Information Processing for Robotics. Multi-modal Interface." Journal of the Robotics Society of Japan 16, no. 6 (1998): 749–53. http://dx.doi.org/10.7210/jrsj.16.749.
Abdullin, A., Elena Maklakova, Anna Ilunina, I. Zemtsov, et al. "Voice Search Algorithm in Intelligent Multi-Modal Interface." Modeling of systems and processes 12, no. 1 (August 26, 2019): 4–9. http://dx.doi.org/10.12737/article_5d639c80b4a438.38023981.
Park, Sankyu, Key-Sun Choi, and K. H. (Kane) Kim. "A Framework for Multi-Agent Systems with Multi-Modal User Interfaces in Distributed Computing Environments." International Journal of Software Engineering and Knowledge Engineering 07, no. 03 (September 1997): 351–69. http://dx.doi.org/10.1142/s0218194097000217.
Indhumathi, C., Wenyu Chen, and Yiyu Cai. "Multi-Modal VR for Medical Simulation." International Journal of Virtual Reality 8, no. 1 (January 1, 2009): 1–7. http://dx.doi.org/10.20870/ijvr.2009.8.1.2707.
Mac Namara, Damien, Paul Gibson, and Ken Oakley. "The Ideal Voting Interface: Classifying Usability." JeDEM - eJournal of eDemocracy and Open Government 6, no. 2 (December 2, 2014): 182–96. http://dx.doi.org/10.29379/jedem.v6i2.306.
Tomori, Zoltán, Peter Keša, Matej Nikorovič, Jan Kaňka, Petr Jákl, Mojmír Šerý, Silvie Bernatová, Eva Valušová, Marián Antalík, and Pavel Zemánek. "Holographic Raman tweezers controlled by multi-modal natural user interface." Journal of Optics 18, no. 1 (November 18, 2015): 015602. http://dx.doi.org/10.1088/2040-8978/18/1/015602.
Folgheraiter, Michele, Giuseppina Gini, and Dario Vercesi. "A Multi-Modal Haptic Interface for Virtual Reality and Robotics." Journal of Intelligent and Robotic Systems 52, no. 3-4 (May 30, 2008): 465–88. http://dx.doi.org/10.1007/s10846-008-9226-5.
Di Nuovo, Alessandro, Frank Broz, Ning Wang, Tony Belpaeme, Angelo Cangelosi, Ray Jones, Raffaele Esposito, Filippo Cavallo, and Paolo Dario. "The multi-modal interface of Robot-Era multi-robot services tailored for the elderly." Intelligent Service Robotics 11, no. 1 (September 2, 2017): 109–26. http://dx.doi.org/10.1007/s11370-017-0237-6.
Jung, Jang-Young, Young-Bin Kim, Sang-Hyeok Lee, and Shin-Jin Kang. "Expression Analysis System of Game Player based on Multi-modal Interface." Journal of Korea Game Society 16, no. 2 (April 30, 2016): 7–16. http://dx.doi.org/10.7583/jkgs.2016.16.2.7.
Full textDissertations / Theses on the topic "Multi-modal interface"
Kost, Stefan. "Dynamically generated multi-modal application interfaces." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2006. http://nbn-resolving.de/urn:nbn:de:swb:14-1150806179876-45678.
Newcomb, Matthew Charles. "A multi-modal interface for road planning tasks using vision, haptics and sound." [Ames, Iowa : Iowa State University], 2010. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:1476331.
Chen, Yenan. "Advanced Multi-modal User Interfaces in 3D Computer Graphics and Virtual Reality." Thesis, Linköpings universitet, Institutionen för teknik och naturvetenskap, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-75889.
Husseini Orabi, Ahmed. "Multi-Modal Technology for User Interface Analysis including Mental State Detection and Eye Tracking Analysis." Thesis, Université d'Ottawa / University of Ottawa, 2017. http://hdl.handle.net/10393/36451.
Doshi, Siddharth. "Designing a multi-modal traveler information platform for urban transportation." Thesis, Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/37167.
Schneider, Thomas W. "A Voice-based Multimodal User Interface for VTQuest." Thesis, Virginia Tech, 2005. http://hdl.handle.net/10919/33267.
Full textMaster of Science
Hashem, Yassir. "A Multi-Modal Insider Threat Detection and Prevention based on Users' Behaviors." Thesis, University of North Texas, 2018. https://digital.library.unt.edu/ark:/67531/metadc1248460/.
Alacam, Özge. "Verbally Assisted Haptic-Graph Comprehension: Multi-Modal Empirical Research Towards a Human Computer Interface." Supervisor: Christopher Habel. Hamburg: Staats- und Universitätsbibliothek Hamburg, 2016. http://d-nb.info/1095766449/34.
Books on the topic "Multi-modal interface"
Bouma, Gosse, and SpringerLink (Online service), eds. Interactive Multi-modal Question-Answering. Berlin, Heidelberg: Springer-Verlag Berlin Heidelberg, 2011.
Djeraba, Chabane. Multi-modal user interactions in controlled environments. New York: Springer, 2010.
Workshop on the Future of VR and AR Interfaces (2001, Yokohama, Japan). The future of VR and AR interfaces: Multi modal, humanoid, adaptive, and intelligent : proceedings of the workshop at IEEE Virtual Reality 2001, Yokohama, Japan, March 14, 2001. Sankt Augustin: GMD-Forschungszentrum Informationstechnik, 2001.
Bosch, Antal, and Gosse Bouma. Interactive Multi-modal Question-Answering. Springer, 2011.
Find full textBook chapters on the topic "Multi-modal interface"
Dasgupta, Ritwik. "The Power of Multi-Modal Interactions." In Voice User Interface Design, 67–103. Berkeley, CA: Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-4125-7_4.
Kitamura, Yoshifumi, Satoshi Sakurai, Tokuo Yamaguchi, Ryo Fukazawa, Yuichi Itoh, and Fumio Kishino. "Multi-modal Interface in Multi-Display Environment for Multi-users." In Human-Computer Interaction. Novel Interaction Methods and Techniques, 66–74. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-02577-8_8.
Park, Wanjoo, Laehyun Kim, Hyunchul Cho, and Sehyung Park. "Dial-Based Game Interface with Multi-modal Feedback." In Lecture Notes in Computer Science, 389–96. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15399-0_42.
Du, Yueqiao. "Interactive Design Principles of Educational APP Interface." In Application of Intelligent Systems in Multi-modal Information Analytics, 828–32. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-74814-2_119.
Teófilo, Luís Filipe, Pedro Alves Nogueira, and Pedro Brandão Silva. "GEMINI: A Generic Multi-Modal Natural Interface Framework for Videogames." In Advances in Intelligent Systems and Computing, 873–84. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-36981-0_81.
Lancel, Karen, Hermen Maat, and Frances Brazier. "EEG KISS: Shared Multi-modal, Multi Brain Computer Interface Experience, in Public Space." In Brain Art, 207–28. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-14323-7_7.
Tschöpe, Constanze, Frank Duckhorn, Markus Huber, Werner Meyer, and Matthias Wolff. "A Cognitive User Interface for a Multi-modal Human-Machine Interaction." In Speech and Computer, 707–17. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-99579-3_72.
Ushida, Hirohide, Tomohiko Sato, Toru Yamaguchi, and Tomohiro Takagi. "Fuzzy associative memory system and its application to multi-modal interface." In Advances in Fuzzy Logic, Neural Networks and Genetic Algorithms, 1–18. Berlin, Heidelberg: Springer Berlin Heidelberg, 1995. http://dx.doi.org/10.1007/3-540-60607-6_1.
Komatsu, Rikako, Dalai Tang, Takenori Obo, and Naoyuki Kubota. "Multi-modal Communication Interface for Elderly People in Informationally Structured Space." In Intelligent Robotics and Applications, 220–28. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-25489-5_22.
Mayer, Peter, and Paul Panek. "Towards a Multi-modal User Interface for an Affordable Assistive Robot." In Universal Access in Human-Computer Interaction. Aging and Assistive Environments, 680–91. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-07446-7_65.
Conference papers on the topic "Multi-modal interface"
Kadavasal, Muthukkumar S., and James H. Oliver. "Virtual Reality Interface Design for Multi-Modal Teleoperation." In ASME-AFM 2009 World Conference on Innovative Virtual Reality. ASMEDC, 2009. http://dx.doi.org/10.1115/winvr2009-732.
Gromov, Boris, Luca M. Gambardella, and Gianni A. Di Caro. "Wearable multi-modal interface for human multi-robot interaction." In 2016 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR). IEEE, 2016. http://dx.doi.org/10.1109/ssrr.2016.7784305.
Machidori, Yushi, Ko Takayama, and Kaoru Sugita. "Implementation of multi-modal interface for VR application." In 2019 IEEE 10th International Conference on Awareness Science and Technology (iCAST). IEEE, 2019. http://dx.doi.org/10.1109/icawst.2019.8923551.
Kim, Sung-Phil, Jae-Hwan Kang, Young Chang Jo, and Ian Oakley. "Development of a multi-modal personal authentication interface." In 2017 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC). IEEE, 2017. http://dx.doi.org/10.1109/apsipa.2017.8282125.
Nad, Dula, Nikola Miskovic, and Edin Omerdic. "Multi-Modal Supervision Interface Concept for Marine Systems." In OCEANS 2019 - Marseille. IEEE, 2019. http://dx.doi.org/10.1109/oceanse.2019.8867226.
Tei, Yoshiyuki, Tsutomu Terada, and Masahiko Tsukamoto. "A multi-modal interface for performers in stuffed suits." In AH '14: 5th Augmented Human International Conference. New York, NY, USA: ACM, 2014. http://dx.doi.org/10.1145/2582051.2582109.
Kurniawati, Evelyn, Luca Celetto, Nicola Capovilla, and Sapna George. "Personalized voice command systems in multi modal user interface." In 2012 IEEE International Conference on Emerging Signal Processing Applications (ESPA 2012). IEEE, 2012. http://dx.doi.org/10.1109/espa.2012.6152442.
Kasakevich, M., P. Boulanger, W. F. Bischof, and M. Garcia. "Multi-Modal Interface for a Real-Time CFD Solver." In 2006 IEEE International Workshop on Haptic Audio Visual Environments and their Applications (HAVE 2006). IEEE, 2006. http://dx.doi.org/10.1109/have.2006.283800.
Kamel, Ahmed. "A Multi-modal User Interface for Agent Assistant Systems." In 2009 Second International Conferences on Advances in Computer-Human Interactions (ACHI). IEEE, 2009. http://dx.doi.org/10.1109/achi.2009.56.
Zhang, Dan, Yijun Wang, Alexander Maye, Andreas K. Engel, Xiaorong Gao, Bo Hong, and Shangkai Gao. "A Brain-Computer Interface Based on Multi-Modal Attention." In 2007 3rd International IEEE/EMBS Conference on Neural Engineering. IEEE, 2007. http://dx.doi.org/10.1109/cne.2007.369697.
Reports on the topic "Multi-modal interface"
Perzanowski, Dennis, William Adams, Alan C. Schultz, and Elaine Marsh. Towards Seamless Integration in a Multi-modal Interface. Fort Belvoir, VA: Defense Technical Information Center, January 2000. http://dx.doi.org/10.21236/ada434973.
Greene, Kristen K., Kayee Kwong, Ross J. Michaels, and Gregory P. Fiumara. Design and Testing of a Mobile Touchscreen Interface for Multi-Modal Biometric Capture. National Institute of Standards and Technology, May 2014. http://dx.doi.org/10.6028/nist.ir.8003.