Journal articles on the topic 'Multi-modal interface'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 journal articles for your research on the topic 'Multi-modal interface.'
Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.
Kim, Laehyun, Yoha Hwang, Se Hyung Park, and Sungdo Ha. "Dental Training System using Multi-modal Interface." Computer-Aided Design and Applications 2, no. 5 (January 2005): 591–98. http://dx.doi.org/10.1080/16864360.2005.10738323.
Oka, Ryuichi, Takuichi Nishimura, and Takashi Endo. "Media Information Processing for Robotics. Multi-modal Interface." Journal of the Robotics Society of Japan 16, no. 6 (1998): 749–53. http://dx.doi.org/10.7210/jrsj.16.749.
Abdullin, A., Elena Maklakova, Anna Ilunina, I. Zemtsov, et al. "Voice Search Algorithm in Intelligent Multi-Modal Interface." Modeling of systems and processes 12, no. 1 (August 26, 2019): 4–9. http://dx.doi.org/10.12737/article_5d639c80b4a438.38023981.
Park, Sankyu, Key-Sun Choi, and K. H. (Kane) Kim. "A Framework for Multi-Agent Systems with Multi-Modal User Interfaces in Distributed Computing Environments." International Journal of Software Engineering and Knowledge Engineering 07, no. 03 (September 1997): 351–69. http://dx.doi.org/10.1142/s0218194097000217.
Indhumathi, C., Wenyu Chen, and Yiyu Cai. "Multi-Modal VR for Medical Simulation." International Journal of Virtual Reality 8, no. 1 (January 1, 2009): 1–7. http://dx.doi.org/10.20870/ijvr.2009.8.1.2707.
Mac Namara, Damien, Paul Gibson, and Ken Oakley. "The Ideal Voting Interface: Classifying Usability." JeDEM - eJournal of eDemocracy and Open Government 6, no. 2 (December 2, 2014): 182–96. http://dx.doi.org/10.29379/jedem.v6i2.306.
Tomori, Zoltán, Peter Keša, Matej Nikorovič, Jan Kaňka, Petr Jákl, Mojmír Šerý, Silvie Bernatová, Eva Valušová, Marián Antalík, and Pavel Zemánek. "Holographic Raman tweezers controlled by multi-modal natural user interface." Journal of Optics 18, no. 1 (November 18, 2015): 015602. http://dx.doi.org/10.1088/2040-8978/18/1/015602.
Folgheraiter, Michele, Giuseppina Gini, and Dario Vercesi. "A Multi-Modal Haptic Interface for Virtual Reality and Robotics." Journal of Intelligent and Robotic Systems 52, no. 3-4 (May 30, 2008): 465–88. http://dx.doi.org/10.1007/s10846-008-9226-5.
Di Nuovo, Alessandro, Frank Broz, Ning Wang, Tony Belpaeme, Angelo Cangelosi, Ray Jones, Raffaele Esposito, Filippo Cavallo, and Paolo Dario. "The multi-modal interface of Robot-Era multi-robot services tailored for the elderly." Intelligent Service Robotics 11, no. 1 (September 2, 2017): 109–26. http://dx.doi.org/10.1007/s11370-017-0237-6.
Jung, Jang-Young, Young-Bin Kim, Sang-Hyeok Lee, and Shin-Jin Kang. "Expression Analysis System of Game Player based on Multi-modal Interface." Journal of Korea Game Society 16, no. 2 (April 30, 2016): 7–16. http://dx.doi.org/10.7583/jkgs.2016.16.2.7.
Wang, X., S. K. Ong, and A. Y. C. Nee. "Multi-modal augmented-reality assembly guidance based on bare-hand interface." Advanced Engineering Informatics 30, no. 3 (August 2016): 406–21. http://dx.doi.org/10.1016/j.aei.2016.05.004.
Kim, Hansol, Kun Ha Suh, and Eui Chul Lee. "Multi-modal user interface combining eye tracking and hand gesture recognition." Journal on Multimodal User Interfaces 11, no. 3 (March 6, 2017): 241–50. http://dx.doi.org/10.1007/s12193-017-0242-2.
Fussell, Susan R., Delia Grenville, Sara Kiesler, Jodi Forlizzi, and Anna M. Wichansky. "Accessing Multi-Modal Information on Cell Phones While Sitting and Driving." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 46, no. 22 (September 2002): 1809–13. http://dx.doi.org/10.1177/154193120204602207.
Zou, Chun-Ping, Duan-Shi Chen, and Hong-Xing Hua. "Torsional Vibration Analysis of Complicated Multi-Branched Shafting Systems by Modal Synthesis Method." Journal of Vibration and Acoustics 125, no. 3 (June 18, 2003): 317–23. http://dx.doi.org/10.1115/1.1569949.
Leite, Luis, Rui Torres, and Luis Aly. "Common Spaces: Multi-Modal-Media Ecosystem for Live Performances." Matlit Revista do Programa de Doutoramento em Materialidades da Literatura 6, no. 1 (August 10, 2018): 187–98. http://dx.doi.org/10.14195/2182-8830_6-1_13.
De Boeck, J., C. Raymaekers, and K. Coninx. "Aspects of Haptic Feedback in a Multi-modal Interface for Object Modelling." Virtual Reality 6, no. 4 (August 2003): 257–70. http://dx.doi.org/10.1007/s10055-003-0108-7.
Coury, Bruce G., John Sadowsky, Paul R. Schuster, Michael Kurnow, Marcus J. Huber, and Edmund H. Durfee. "Reducing the Interaction Burden of Complex Systems." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 41, no. 1 (October 1997): 335–39. http://dx.doi.org/10.1177/107118139704100175.
Horváth, Imre. "Investigation of Hand Motion Language in Shape Conceptualization." Journal of Computing and Information Science in Engineering 4, no. 1 (March 1, 2004): 37–42. http://dx.doi.org/10.1115/1.1645864.
Wang, Ning, Alessandro Di Nuovo, Angelo Cangelosi, and Ray Jones. "Temporal patterns in multi-modal social interaction between elderly users and service robot." Interaction Studies 20, no. 1 (July 15, 2019): 4–24. http://dx.doi.org/10.1075/is.18042.wan.
Michelson, Nicholas J., Alberto L. Vazquez, James R. Eles, Joseph W. Salatino, Erin K. Purcell, Jordan J. Williams, X. Tracy Cui, and Takashi D. Y. Kozai. "Multi-scale, multi-modal analysis uncovers complex relationship at the brain tissue-implant neural interface: new emphasis on the biological interface." Journal of Neural Engineering 15, no. 3 (April 6, 2018): 033001. http://dx.doi.org/10.1088/1741-2552/aa9dae.
Lee, Yong-Gu, Hyungjun Park, Woontaek Woo, Jeha Ryu, Hong Kook Kim, Sung Wook Baik, Kwang Hee Ko, et al. "Immersive modeling system (IMMS) for personal electronic products using a multi-modal interface." Computer-Aided Design 42, no. 5 (May 2010): 387–401. http://dx.doi.org/10.1016/j.cad.2009.11.003.
Huang, Xingrong, Louis Jézéquel, Sébastien Besset, and Lin Li. "Optimization of the dynamic behavior of vehicle structures by means of passive interface controls." Journal of Vibration and Control 24, no. 3 (August 8, 2016): 466–91. http://dx.doi.org/10.1177/1077546316650131.
Luo, Xisheng, Yu Liang, Ting Si, and Zhigang Zhai. "Effects of non-periodic portions of interface on Richtmyer–Meshkov instability." Journal of Fluid Mechanics 861 (December 20, 2018): 309–27. http://dx.doi.org/10.1017/jfm.2018.923.
Kato, Tsuneaki, and Mitsunori Matsushita. "Multi-Modal Interface for Information Access through Extraction and Visualization of Time-Series Information." Transactions of the Japanese Society for Artificial Intelligence 22 (2007): 553–62. http://dx.doi.org/10.1527/tjsai.22.553.
Fischer, Christian, and Günther Schmidt. "Multi-modal human-robot interface for interaction with a remotely operating mobile service robot." Advanced Robotics 12, no. 4 (January 1997): 397–409. http://dx.doi.org/10.1163/156855398x00262.
Kaber, David B., Melanie C. Wright, and Mohamed A. Sheik-Nainar. "Investigation of multi-modal interface features for adaptive automation of a human–robot system." International Journal of Human-Computer Studies 64, no. 6 (June 2006): 527–40. http://dx.doi.org/10.1016/j.ijhcs.2005.11.003.
Park, Juyeon, and Myeong-Heum Yeoun. "Exploring the Design of a Multi-Modal Interface Suited to Smart Home Users' Situations." Journal of Communication Design 73 (October 31, 2020): 429–41. http://dx.doi.org/10.25111/jcd.2020.73.31.
De Bérigny Wall, Caitilin, and Xiangyu Wang. "InterANTARCTICA: Tangible User Interface for Museum Based Interaction." International Journal of Virtual Reality 8, no. 3 (January 1, 2009): 19–24. http://dx.doi.org/10.20870/ijvr.2009.8.3.2737.
Biocca, Frank, Jin Kim, and Yung Choi. "Visual Touch in Virtual Environments: An Exploratory Study of Presence, Multimodal Interfaces, and Cross-Modal Sensory Illusions." Presence: Teleoperators and Virtual Environments 10, no. 3 (June 2001): 247–65. http://dx.doi.org/10.1162/105474601300343595.
Kaufman, Leah S., Jim Stewart, Bruce Thomas, and Gerhard Deffner. "Computers and Telecommunications in the Year 2000-Multi-Modal Interfaces, Miniaturisation, and Portability." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 40, no. 6 (October 1996): 353–56. http://dx.doi.org/10.1177/154193129604000607.
Sun, Bo Wen, Bin Shen, Wen Wu Chen, Yan Peng Zhang, and Jia Qi Lin. "VEET: 3D Virtual Electrical Experimental Tool Supporting Multi-Modal User Interfaces and Platforms." Advanced Materials Research 981 (July 2014): 196–99. http://dx.doi.org/10.4028/www.scientific.net/amr.981.196.
Khan, Sumbul, and Bige Tunçer. "Speech analysis for conceptual CAD modeling using multi-modal interfaces: An investigation into Architects’ and Engineers’ speech preferences." Artificial Intelligence for Engineering Design, Analysis and Manufacturing 33, no. 03 (March 14, 2019): 275–88. http://dx.doi.org/10.1017/s0890060419000015.
Doi, Masataka, Kenji Suzuki, and Shuji Hashimoto. "An Integrated Communicative Robot — Bugnoid." International Journal of Humanoid Robotics 01, no. 01 (March 2004): 127–42. http://dx.doi.org/10.1142/s0219843604000034.
Fukui, Yukio, Makoto Shimojo, and Juli Yamashita. "Recognition by Inconsistent Information from Visual and Haptic Interface." Journal of Robotics and Mechatronics 9, no. 3 (June 20, 1997): 208–12. http://dx.doi.org/10.20965/jrm.1997.p0208.
Vardhan, Jai, and Girijesh Prasad. "Enhancing an Eye-Tracker based Human-Computer Interface with Multi-modal Accessibility Applied for Text Entry." International Journal of Computer Applications 130, no. 16 (November 17, 2015): 16–22. http://dx.doi.org/10.5120/ijca2015907194.
Chauhan, P., J. Boger, T. Hussein, S. Moon, F. Rudzicz, and J. Polgar. "Creating the CARE-RATE interface through multi-modal participatory design with caregivers of people with dementia." Gerontechnology 17, s (April 24, 2018): 23. http://dx.doi.org/10.4017/gt.2018.17.s.023.00.
Wilson, M. D. "First MMI(2) demonstrator: A multi-modal interface for man machine interaction with knowledge based systems." Expert Systems with Applications 4, no. 4 (January 1992): 423. http://dx.doi.org/10.1016/0957-4174(92)90135-f.
Chen, Weiya, Tetsuo Sawaragi, and Toshihiro Hiraoka. "Adaptive multi-modal interface model concerning mental workload in take-over request during semi-autonomous driving." SICE Journal of Control, Measurement, and System Integration 14, no. 2 (March 11, 2021): 10–21. http://dx.doi.org/10.1080/18824889.2021.1894023.
Szynkiewicz, Wojciech, Włodzimierz Kasprzak, Cezary Zieliński, Wojciech Dudek, Maciej Stefańczyk, Artur Wilkowski, and Maksym Figat. "Utilisation of Embodied Agents in the Design of Smart Human–Computer Interfaces—A Case Study in Cyberspace Event Visualisation Control." Electronics 9, no. 6 (June 11, 2020): 976. http://dx.doi.org/10.3390/electronics9060976.
Walther, Jürgen, Pablo D. Dans, Alexandra Balaceanu, Adam Hospital, Genís Bayarri, and Modesto Orozco. "A multi-modal coarse grained model of DNA flexibility mappable to the atomistic level." Nucleic Acids Research 48, no. 5 (January 20, 2020): e29-e29. http://dx.doi.org/10.1093/nar/gkaa015.
Adolphs, Svenja, Dawn Knight, and Ronald Carter. "Capturing context for heterogeneous corpus analysis." International Journal of Corpus Linguistics 16, no. 3 (October 24, 2011): 305–24. http://dx.doi.org/10.1075/ijcl.16.3.02ado.
Weissgerber, Doanna, Bruce Bridgeman, and Alex Pang. "Feel the Information with VisPad: A Large Area Vibrotactile Device." Information Visualization 3, no. 1 (March 2004): 36–48. http://dx.doi.org/10.1057/palgrave.ivs.9500060.
Tytgat, L., J. I. R. Owen, and P. Campagne. "Development of a Civil Military Interface in Europe for Galileo." Journal of Navigation 53, no. 2 (May 2000): 273–78. http://dx.doi.org/10.1017/s0373463300008845.
Weiland, William J., James M. Stokes, and Joan M. Ryder. "Scaneval: A Toolkit for Eye-Tracking Research and Attention-Driven Applications." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 42, no. 16 (October 1998): 1148. http://dx.doi.org/10.1177/154193129804201608.
Regodić, Milovan, Zoltán Bárdosi, Georgi Diakov, Malik Galijašević, Christian F. Freyschlag, and Wolfgang Freysinger. "Visual display for surgical targeting: concepts and usability study." International Journal of Computer Assisted Radiology and Surgery 16, no. 9 (April 8, 2021): 1565–76. http://dx.doi.org/10.1007/s11548-021-02355-8.
McWilliams, Thomas, Bruce Mehler, Bobbie Seppelt, and Bryan Reimer. "Driving Simulator Validation for In-Vehicle Human Machine Interface Assessment." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 63, no. 1 (November 2019): 2104–8. http://dx.doi.org/10.1177/1071181319631438.
Canare, Dominic, Barbara Chaparro, and Alex Chaparro. "Using Gesture, Gaze, and Combination Input Schemes as Alternatives to the Computer Mouse." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 62, no. 1 (September 2018): 297–301. http://dx.doi.org/10.1177/1541931218621068.
James, Jose, Bhavani Rao R., and Gabriel Neamtu. "Design of a bi-manual haptic interface for skill acquisition in surface mount device soldering." Soldering & Surface Mount Technology 31, no. 2 (April 1, 2019): 133–42. http://dx.doi.org/10.1108/ssmt-01-2018-0001.
Pandolfi, Ronald J., Daniel B. Allan, Elke Arenholz, Luis Barroso-Luque, Stuart I. Campbell, Thomas A. Caswell, Austin Blair, et al. "Xi-cam: a versatile interface for data visualization and analysis." Journal of Synchrotron Radiation 25, no. 4 (May 31, 2018): 1261–70. http://dx.doi.org/10.1107/s1600577518005787.
Danihelka, Jiri, Roman Hak, Lukas Kencl, and Jiri Zara. "3D Talking-Head Interface to Voice-Interactive Services on Mobile Phones." International Journal of Mobile Human Computer Interaction 3, no. 2 (April 2011): 50–64. http://dx.doi.org/10.4018/jmhci.2011040104.