Academic literature on the topic 'Multi-modal interface'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Multi-modal interface.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Multi-modal interface"

1

Kim, Laehyun, Yoha Hwang, Se Hyung Park, and Sungdo Ha. "Dental Training System using Multi-modal Interface." Computer-Aided Design and Applications 2, no. 5 (January 2005): 591–98. http://dx.doi.org/10.1080/16864360.2005.10738323.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Oka, Ryuichi, Takuichi Nishimura, and Takashi Endo. "Media Information Processing for Robotics. Multi-modal Interface." Journal of the Robotics Society of Japan 16, no. 6 (1998): 749–53. http://dx.doi.org/10.7210/jrsj.16.749.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Abdullin, A., Elena Maklakova, Anna Ilunina, I. Zemtsov, et al. "VOICE SEARCH ALGORITHM IN INTELLIGENT MULTI-MODAL INTERFACE." Modeling of systems and processes 12, no. 1 (August 26, 2019): 4–9. http://dx.doi.org/10.12737/article_5d639c80b4a438.38023981.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Park, Sankyu, Key-Sun Choi, and K. H. (Kane) Kim. "A Framework for Multi-Agent Systems with Multi-Modal User Interfaces in Distributed Computing Environments." International Journal of Software Engineering and Knowledge Engineering 07, no. 03 (September 1997): 351–69. http://dx.doi.org/10.1142/s0218194097000217.

Full text
Abstract:
In current multi-agent systems, the user typically interacts with a single agent at a time through relatively inflexible and modestly intelligent interfaces. As a consequence, these systems force users to submit only simplistic requests and suffer from problems such as the low-level nature of the system services offered to users, the weak reusability of agents, and the weak extensibility of the systems. In this paper, a framework for multi-agent systems called the open agent architecture (OAA), which reduces such problems, is discussed. The OAA is designed to handle complex requests that involve multiple agents. In some cases of complex requests from users, the components of the requests do not directly correspond to the capabilities of various application agents, and therefore the system is required to translate the user's model of the task into the system's model before apportioning subtasks to the agents. To maximize users' efficiency in generating this type of complex request, the OAA offers an intelligent multi-modal user interface agent which supports a natural language interface with a mix of spoken language, handwriting, and gesture. The effectiveness of the OAA environment, including the intelligent distributed multi-modal interface, has been observed in our development of several practical multi-agent systems.
APA, Harvard, Vancouver, ISO, and other styles
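The OAA abstract above describes a facilitator-style architecture in which a multi-modal interface agent turns a complex user request into subtasks that are delegated to whichever application agents declare the matching capabilities. The following is a minimal, hypothetical sketch of that delegation idea in Python; the class and method names are invented for illustration and do not reflect the actual OAA implementation.

```python
# Illustrative sketch only: a toy facilitator routes the pieces of a complex
# user request to agents by declared capability, loosely in the spirit of the
# OAA description above. All names and structures are hypothetical.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Agent:
    name: str
    capabilities: Dict[str, Callable[[str], str]] = field(default_factory=dict)


class Facilitator:
    def __init__(self) -> None:
        self.registry: Dict[str, Agent] = {}   # capability -> agent

    def register(self, agent: Agent) -> None:
        for capability in agent.capabilities:
            self.registry[capability] = agent

    def handle(self, request: List[tuple]) -> List[str]:
        # A "complex request" is modeled here as (capability, argument) subtasks
        # produced by an earlier interpretation of speech/handwriting/gesture.
        results = []
        for capability, argument in request:
            agent = self.registry.get(capability)
            if agent is None:
                results.append(f"no agent offers '{capability}'")
            else:
                results.append(agent.capabilities[capability](argument))
        return results


if __name__ == "__main__":
    mail = Agent("mail", {"send_mail": lambda arg: f"mail sent to {arg}"})
    cal = Agent("calendar", {"schedule": lambda arg: f"meeting booked: {arg}"})
    facilitator = Facilitator()
    facilitator.register(mail)
    facilitator.register(cal)
    print(facilitator.handle([("schedule", "Tuesday 10:00"), ("send_mail", "alice")]))
```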
5

Indhumathi, C., Wenyu Chen, and Yiyu Cai. "Multi-Modal VR for Medical Simulation." International Journal of Virtual Reality 8, no. 1 (January 1, 2009): 1–7. http://dx.doi.org/10.20870/ijvr.2009.8.1.2707.

Full text
Abstract:
Over the past three decades computer graphics and virtual reality (VR) have played a significant role in adding value to medicine for diagnosis and treatment applications. Medical simulation is increasingly used in medical training and surgical planning. This paper investigates the multi-modal VR interface for medical simulation focusing on motion tracking, stereographic visualization, voice navigation, and interactions. Applications in virtual anatomy learning, surgical training and pre-treatment planning will also be discussed.
APA, Harvard, Vancouver, ISO, and other styles
6

Mac Namara, Damien, Paul Gibson, and Ken Oakley. "The Ideal Voting Interface: Classifying Usability." JeDEM - eJournal of eDemocracy and Open Government 6, no. 2 (December 2, 2014): 182–96. http://dx.doi.org/10.29379/jedem.v6i2.306.

Full text
Abstract:
This work presents a feature-oriented taxonomy for commercial electronic voting machines, which focuses on usability aspects. Based on this analysis, we propose a ‘Just-Like-Paper’ (JLP) classification method which identifies five broad categories of eVoting interface. We extend the classification to investigate its application as an indicator of voting efficiency and identify a universal ten-step process encompassing all possible voting steps spanning the twenty-six machines studied. Our analysis concludes that multi-functional and progressive interfaces are likely to be more efficient than multi-modal voter-activated machines.
APA, Harvard, Vancouver, ISO, and other styles
7

Tomori, Zoltán, Peter Keša, Matej Nikorovič, Jan Kaňka, Petr Jákl, Mojmír Šerý, Silvie Bernatová, Eva Valušová, Marián Antalík, and Pavel Zemánek. "Holographic Raman tweezers controlled by multi-modal natural user interface." Journal of Optics 18, no. 1 (November 18, 2015): 015602. http://dx.doi.org/10.1088/2040-8978/18/1/015602.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Folgheraiter, Michele, Giuseppina Gini, and Dario Vercesi. "A Multi-Modal Haptic Interface for Virtual Reality and Robotics." Journal of Intelligent and Robotic Systems 52, no. 3-4 (May 30, 2008): 465–88. http://dx.doi.org/10.1007/s10846-008-9226-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Di Nuovo, Alessandro, Frank Broz, Ning Wang, Tony Belpaeme, Angelo Cangelosi, Ray Jones, Raffaele Esposito, Filippo Cavallo, and Paolo Dario. "The multi-modal interface of Robot-Era multi-robot services tailored for the elderly." Intelligent Service Robotics 11, no. 1 (September 2, 2017): 109–26. http://dx.doi.org/10.1007/s11370-017-0237-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Jung, Jang-Young, Young-Bin Kim, Sang-Hyeok Lee, and Shin-Jin Kang. "Expression Analysis System of Game Player based on Multi-modal Interface." Journal of Korea Game Society 16, no. 2 (April 30, 2016): 7–16. http://dx.doi.org/10.7583/jkgs.2016.16.2.7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Multi-modal interface"

1

Kost, Stefan. "Dynamically generated multi-modal application interfaces." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2006. http://nbn-resolving.de/urn:nbn:de:swb:14-1150806179876-45678.

Full text
Abstract:
This work introduces a new UIMS (User Interface Management System), which aims to solve numerous problems in the field of user-interface development arising from hard-coded use of user interface toolkits. The presented solution is a concrete system architecture based on the abstract ARCH model, consisting of an interface abstraction layer, a dialog definition language called GIML (Generalized Interface Markup Language) and pluggable interface rendering modules. These components form an interface toolkit called GITK (Generalized Interface ToolKit). With the aid of GITK one can build an application without explicitly creating a concrete end-user interface. At runtime GITK can create these interfaces as needed from the abstract specification and run them. GITK thereby equips one application with many interfaces, even kinds of interfaces that did not exist when the application was written. It should be noted that this work concentrates on providing the base infrastructure for adaptive/adaptable systems and does not aim to deliver a complete solution. This work shows that the proposed solution is a fundamental concept needed to create interfaces for everyone, which can be used everywhere and at any time. The text further discusses the impact of such technology on users and on various aspects of software systems and their development. The main target audience of this work is software developers or people with a strong interest in software development.
APA, Harvard, Vancouver, ISO, and other styles
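The Kost thesis above centres on describing an interface abstractly once and letting pluggable renderers produce concrete front ends at runtime. The sketch below illustrates that separation with an invented, dictionary-based specification and two toy renderers; it is not GIML syntax and makes no claim about the real GITK API.

```python
# Minimal sketch of the idea behind an abstract interface description with
# pluggable renderers. The spec format below is invented for illustration.
SPEC = {
    "title": "Audio converter",
    "widgets": [
        {"id": "infile", "kind": "file", "label": "Input file"},
        {"id": "rate", "kind": "range", "label": "Sample rate", "min": 8000, "max": 48000},
        {"id": "run", "kind": "action", "label": "Convert"},
    ],
}


def render_text_ui(spec: dict) -> str:
    """One possible renderer: a plain text form."""
    lines = [f"== {spec['title']} =="]
    for w in spec["widgets"]:
        lines.append(f"[{w['kind']:>6}] {w['label']}")
    return "\n".join(lines)


def render_voice_prompts(spec: dict) -> list:
    """Another renderer: prompts for a speech dialog over the same spec."""
    return [f"Please provide {w['label'].lower()}." for w in spec["widgets"] if w["kind"] != "action"]


print(render_text_ui(SPEC))
print(render_voice_prompts(SPEC))
```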
2

Newcomb, Matthew Charles. "A multi-modal interface for road planning tasks using vision, haptics and sound." [Ames, Iowa : Iowa State University], 2010. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:1476331.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Chen, Yenan. "Advanced Multi-modal User Interfaces in 3D Computer Graphics and Virtual Reality." Thesis, Linköpings universitet, Institutionen för teknik och naturvetenskap, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-75889.

Full text
Abstract:
Computers are continuously developed to satisfy human demands and are used everywhere, from daily life to all kinds of research. Virtual Reality (VR), a simulated environment that can present physical presence in the real world and in imaginary worlds, has been widely applied to simulate virtual environments. When only computers are used for simulation, perception is limited to the visual channel, since computers mainly display visualizations of data, while human senses include sight, smell, hearing, taste, touch and so on. Other devices, such as haptics devices for the sense of touch, can be applied to enhance human perception in a virtual environment. A good way to deploy VR applications is to place them in a virtual display system, a system that combines multiple tools to display a virtual environment engaging different human senses and so enhance the feeling of being immersed in it. Such virtual display systems include the VR dome, the CAVE, the VR workbench, the VR workstation and so on. Menus, with their many advantages for manipulating applications, are common in conventional systems such as operating systems; normally a system is not usable without them. Although VR applications are more natural and intuitive, they are much less usable, or not usable at all, without menus, yet very few studies have focused on user interfaces in VR. This situation motivates us to work further in this area. We create two models for different purposes: one inspired by menus in conventional systems and the sense of touch, the other designed around the spatial presence of VR. The first model is a two-dimensional pop-up pie menu with spring force feedback. It has a pie shape with eight options on the root menu, and a pop-up hierarchical submenu belongs to each root option. When the haptics device is near an option on the root menu, the spring force pulls the device towards the center of that option, the option is selected, and a submenu with nine options pops up. The pie shape together with the spring force effect is expected to both increase the speed of selection and decrease the selection error rate. The other model is a semiautomatic three-dimensional cube menu, designed with the aim of providing a simple, elegant, efficient and accurate user interface. It uses four faces of the cube (front, back, left and right); each face represents a category and holds nine widgets, so users can make selections in different categories. An efficient way to change between categories is to rotate the cube automatically, so a navigable rotation animation system rotates the cube horizontally by ninety degrees each time, keeping one face towards the user. Both models are built with H3DAPI, an open-source haptics software development platform, and its UI toolkit. After the implementation, we conducted a pilot (formative) study to evaluate the feasibility of both menus. The study included a list of tasks for each menu, a questionnaire on menu performance for each subject and a discussion with each subject. Six students participated as test subjects.
In the pie menu, most subjects felt that the spring force guided them to the target option and that they could control the haptics device comfortably under this force. In the cube menu, the navigation rotation system worked well and the cube rotated accurately and efficiently. The results of the pilot study show that the models work as we initially expected. The recorded task completion times show that, with the same number of tasks and similar difficulty, subjects spent more time on the cube menu than on the pie menu. This may indicate that the pie menu is a faster approach compared to the cube menu; we further consider that both the pie shape and the force feedback may help reduce selection time. The option selection error rate on the cube menu may indicate that option selection without any force feedback can also achieve a considerably good effect. The questionnaire answers show that both menus are comfortable to use and easy to control.
APA, Harvard, Vancouver, ISO, and other styles
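The pie-menu design above relies on a simple haptic behaviour: once the cursor comes close to a root option, a spring force pulls the device towards that option's centre. As a rough illustration, a minimal sketch of such a force law is given below; the stiffness, radii and layout values are assumptions for the example, not parameters from the thesis.

```python
# Illustrative sketch of the spring-force idea described in the abstract above:
# when the haptic cursor comes near a pie-menu option, a spring force pulls it
# towards the option's centre. Constants and geometry are assumed values.
import math

K = 80.0               # spring stiffness (assumed)
CAPTURE_RADIUS = 0.03  # metres within which an option "captures" the cursor (assumed)

# Centres of eight root options arranged on a circle of radius 0.08 m.
OPTION_CENTRES = [
    (0.08 * math.cos(i * math.pi / 4), 0.08 * math.sin(i * math.pi / 4))
    for i in range(8)
]


def spring_force(cursor: tuple) -> tuple:
    """Return the 2D force to render on the haptic device for this cursor position."""
    cx, cy = cursor
    # Find the nearest option centre.
    nearest = min(OPTION_CENTRES, key=lambda c: math.hypot(cx - c[0], cy - c[1]))
    dist = math.hypot(cx - nearest[0], cy - nearest[1])
    if dist > CAPTURE_RADIUS:
        return (0.0, 0.0)            # outside the capture zone: no force
    # Hooke's law towards the option centre: F = -k * (p - c)
    return (-K * (cx - nearest[0]), -K * (cy - nearest[1]))


print(spring_force((0.075, 0.01)))   # near option 0: pulled towards its centre
print(spring_force((0.0, 0.0)))      # in the middle: no capture, no force
```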
4

Husseini, Orabi Ahmed. "Multi-Modal Technology for User Interface Analysis including Mental State Detection and Eye Tracking Analysis." Thesis, Université d'Ottawa / University of Ottawa, 2017. http://hdl.handle.net/10393/36451.

Full text
Abstract:
We present a set of easy-to-use methods and tools to analyze human attention, behaviour, and physiological responses. A potential application of our work is evaluating user interfaces being used in a natural manner. Our approach is designed to be scalable and to work remotely on regular personal computers using inexpensive and noninvasive equipment. The data sources our tool processes are nonintrusive and captured from video, i.e. eye tracking and facial expressions. For video data retrieval, we use a basic webcam. We investigate combinations of observation modalities to detect and extract affective and mental states. Our tool provides a pipeline-based approach that 1) collects observational data, 2) incorporates and synchronizes the signal modalities mentioned above, 3) detects users' affective and mental state, 4) records user interaction with applications and pinpoints the parts of the screen users are looking at, and 5) analyzes and visualizes results. We describe the design, implementation, and validation of a novel multimodal signal fusion engine, the Deep Temporal Credence Network (DTCN). The engine uses Deep Neural Networks 1) to provide a generative and probabilistic inference model, and 2) to handle multimodal data such that its performance does not degrade due to the absence of some modalities. We report on the recognition accuracy of basic emotions for each modality. Then, we evaluate our engine in terms of its effectiveness in recognizing the six basic emotions and six mental states, which are agreeing, concentrating, disagreeing, interested, thinking, and unsure. Our principal contributions include 1) the implementation of a multimodal signal fusion engine, 2) real-time recognition of affective and primary mental states from nonintrusive and inexpensive modalities, and 3) novel mental state-based visualization techniques: 3D heatmaps, 3D scanpaths, and widget heatmaps that find parts of the user interface where users are perhaps unsure, annoyed, frustrated, or satisfied.
APA, Harvard, Vancouver, ISO, and other styles
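A central claim in the abstract above is that the fusion engine keeps working when some modalities are absent. The toy sketch below shows one common way to get that property with simple late fusion over whatever per-modality predictions are available; it is only an illustration of the idea and is unrelated to the actual DTCN architecture.

```python
# Much-simplified sketch of graceful multimodal fusion: per-modality classifiers
# emit probability distributions over the same label set, and the fusion step
# averages whatever modalities are present, so a missing modality degrades the
# input rather than breaking the pipeline. Not the DTCN itself.
from typing import Dict, List, Optional

LABELS = ["agreeing", "concentrating", "disagreeing", "interested", "thinking", "unsure"]


def fuse(modality_outputs: Dict[str, Optional[List[float]]]) -> List[float]:
    """Average the available per-modality distributions over LABELS."""
    present = [p for p in modality_outputs.values() if p is not None]
    if not present:
        # Nothing observed this frame: fall back to a uniform distribution.
        return [1.0 / len(LABELS)] * len(LABELS)
    return [sum(ps[i] for ps in present) / len(present) for i in range(len(LABELS))]


outputs = {
    "eye_tracking": [0.05, 0.40, 0.05, 0.30, 0.15, 0.05],
    "facial_expression": None,                     # modality missing this frame
}
scores = fuse(outputs)
print(LABELS[scores.index(max(scores))], scores)
```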
5

Doshi, Siddharth. "Designing a multi-modal traveler information platform for urban transportation." Thesis, Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/37167.

Full text
Abstract:
Urban transportation networks are inefficient due to sub-optimal use by travelers. One approach to counter the increase in urban transportation demand is to provide better information to travelers, which would allow them to make better use of the network. Existing traveler information systems do this to a certain extent, but are limited by the data available and the scope of their implementation. These systems are vertically integrated and closed, so that using any external elements for analysis, user interfacing, etc. is difficult. The effects of such traveler information systems are reviewed via a comparative analysis of case studies available in the literature. It is found that information availability has a definite positive effect, but the social and environmental benefits are difficult to quantify. It is also seen that combining data by integrating systems can lead to additional uses for the same data and result in better quality of service and information. In this thesis, a regional platform for multi-modal traveler information is proposed that would support the development of traveler information systems. The architecture incorporates a central processing and storage module, which acts as an information clearinghouse and supports receiving, managing and sending data to and from multiple sources and interfaces. This setup allows sharing of data for analysis or application development, but with access control. The components are loosely coupled to minimize inter-dependencies. Due to this, the source, analysis, user interface and storage components can be developed independently of each other. To better develop the requirements and understand the challenges of the proposed concept, a limited implementation of the system is designed for the midtown Atlanta region, incorporating multiple data sources and user interfaces. The individual elements of the system are described in detail, as are the testing and evaluation of the system.
APA, Harvard, Vancouver, ISO, and other styles
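The clearinghouse architecture described above (loosely coupled sources, central processing and storage, access-controlled sharing with multiple interfaces) is easy to picture as a small publish/subscribe hub. The sketch below is a deliberately simplified, hypothetical illustration of that pattern; the class, feed and field names are invented and do not come from the thesis.

```python
# Toy sketch of a loosely coupled "clearinghouse": data sources publish updates
# to named feeds, interfaces subscribe to the feeds they are authorised for,
# and neither side depends on the other's implementation. Names are invented.
from collections import defaultdict
from typing import Callable, Dict, List


class Clearinghouse:
    def __init__(self) -> None:
        self.subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)
        self.permissions: Dict[str, set] = defaultdict(set)  # feed -> allowed consumer ids

    def grant(self, feed: str, consumer_id: str) -> None:
        self.permissions[feed].add(consumer_id)

    def subscribe(self, feed: str, consumer_id: str, callback: Callable[[dict], None]) -> None:
        if consumer_id not in self.permissions[feed]:
            raise PermissionError(f"{consumer_id} may not read feed '{feed}'")
        self.subscribers[feed].append(callback)

    def publish(self, feed: str, record: dict) -> None:
        for callback in self.subscribers[feed]:
            callback(record)


hub = Clearinghouse()
hub.grant("bus_positions", "mobile_app")
hub.subscribe("bus_positions", "mobile_app", lambda r: print("app got", r))
hub.publish("bus_positions", {"route": "16", "lat": 33.781, "lon": -84.388})
```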
6

Schneider, Thomas W. "A Voice-based Multimodal User Interface for VTQuest." Thesis, Virginia Tech, 2005. http://hdl.handle.net/10919/33267.

Full text
Abstract:
The original VTQuest web-based software system requires users to interact using a mouse or a keyboard, forcing the users' hands and eyes to be constantly in use while communicating with the system. This prevents the user from being able to perform other tasks which require the user's hands or eyes at the same time. This restriction on the user's ability to multitask while using VTQuest is unnecessary and has been eliminated with the creation of the VTQuest Voice web-based software system. VTQuest Voice extends the original VTQuest functionality by providing the user with a voice interface to interact with the system using the Speech Application Language Tags (SALT) technology. The voice interface provides the user with the ability to navigate through the site, submit queries, browse query results, and receive helpful hints to better utilize the voice system. Individuals with a handicap that prevents them from using their arms or hands, users who are not familiar with the mouse and keyboard style of communication, and those who have their hands preoccupied need alternative communication interfaces which do not require the use of their hands. All of these users require and benefit from a voice interface being added onto VTQuest. Through the use of the voice interface, all of the system's features can be accessed exclusively with voice and without the use of a user's hands. Using a voice interface also frees the user's eyes from being used during the process of selecting an option or link on a page, which allows the user to look at the system less frequently. VTQuest Voice is implemented and tested for operation on computers running Microsoft Windows using Microsoft Internet Explorer with the correct SALT and Adobe Scalable Vector Graphics (SVG) Viewer plug-ins installed. VTQuest Voice offers a variety of features including an extensive grammar and out-of-turn interaction, which are flexible for future growth. The grammar offers ways in which users may begin or end a query to better accommodate the variety of ways users may phrase their queries. To accommodate abbreviations of building names and alternate pronunciations of building names, the grammar also includes nicknames for the buildings. The out-of-turn interaction combines multiple steps into one spoken sentence, thereby shortening the interaction and also making the process more natural for the user. The addition of a voice interface is recommended for web applications in which a user may need to use his or her eyes and hands to multitask. Additional functionality which can be added later to VTQuest Voice is touch screen support and accessibility from cell phones, Personal Digital Assistants (PDAs), and other mobile devices.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
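One concrete detail in the abstract above is a grammar that accepts nicknames and abbreviations for building names so spoken queries can be phrased loosely. As a purely hypothetical illustration of that normalization step (not the VTQuest Voice grammar or SALT markup), a tiny sketch might look like this:

```python
# Hedged illustration of mapping spoken building nicknames onto canonical names
# before running a campus route query. Names and phrasing are examples only.
import re

NICKNAMES = {
    "torg": "Torgersen Hall",
    "torgersen": "Torgersen Hall",
    "burruss": "Burruss Hall",
    "the library": "Newman Library",
}


def interpret(utterance: str) -> dict:
    """Turn 'how do I get from X to Y' style phrases into a route query."""
    m = re.search(r"from (.+?) to (.+)", utterance.lower().rstrip("?. "))
    if not m:
        return {"error": "could not parse request"}
    origin, destination = (NICKNAMES.get(p.strip(), p.strip().title()) for p in m.groups())
    return {"origin": origin, "destination": destination}


print(interpret("How do I get from Torg to the library?"))
```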
7

Hashem, Yassir. "A Multi-Modal Insider Threat Detection and Prevention based on Users' Behaviors." Thesis, University of North Texas, 2018. https://digital.library.unt.edu/ark:/67531/metadc1248460/.

Full text
Abstract:
Insider threat is one of the greatest concerns for information security that could cause more significant financial losses and damages than any other attack. However, implementing an efficient detection system is a very challenging task. It has long been recognized that solutions to insider threats are mainly user-centric and several psychological and psychosocial models have been proposed. A user's psychophysiological behavior measures can provide an excellent source of information for detecting user's malicious behaviors and mitigating insider threats. In this dissertation, we propose a multi-modal framework based on the user's psychophysiological measures and computer-based behaviors to distinguish between a user's behaviors during regular activities versus malicious activities. We utilize several psychophysiological measures such as electroencephalogram (EEG), electrocardiogram (ECG), and eye movement and pupil behaviors along with the computer-based behaviors such as the mouse movement dynamics, and keystrokes dynamics to build our framework for detecting malicious insiders. We conduct human subject experiments to capture the psychophysiological measures and the computer-based behaviors for a group of participants while performing several computer-based activities in different scenarios. We analyze the behavioral measures, extract useful features, and evaluate their capability in detecting insider threats. We investigate each measure separately, then we use data fusion techniques to build two modules and a comprehensive multi-modal framework. The first module combines the synchronized EEG and ECG psychophysiological measures, and the second module combines the eye movement and pupil behaviors with the computer-based behaviors to detect the malicious insiders. The multi-modal framework utilizes all the measures and behaviors in one model to achieve better detection accuracy. Our findings demonstrate that psychophysiological measures can reveal valuable knowledge about a user's malicious intent and can be used as an effective indicator in designing insider threat monitoring and detection frameworks. Our work lays out the necessary foundation to establish a new generation of insider threat detection and mitigation mechanisms that are based on a user's involuntary behaviors, such as psychophysiological measures, and learn from the real-time data to determine whether a user is malicious.
APA, Harvard, Vancouver, ISO, and other styles
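The Hashem dissertation above combines a psychophysiological module (EEG and ECG) with a behavioural module (eye, mouse and keystroke dynamics) before a final multi-modal decision. The fragment below is a toy sketch of that kind of feature-level combination; every feature, weight and threshold is invented for illustration and none of it reflects the dissertation's actual models.

```python
# Simplified sketch: module A summarises psychophysiological signals, module B
# summarises behavioural signals, and a combined linear score stands in for a
# trained multi-modal classifier. All values below are invented.
from typing import List


def module_a_features(eeg_power: List[float], heart_rate_var: float) -> List[float]:
    """Toy psychophysiological feature vector: mean EEG band power + HRV."""
    return [sum(eeg_power) / len(eeg_power), heart_rate_var]


def module_b_features(fixation_ms: float, mouse_speed: float, keystroke_latency_ms: float) -> List[float]:
    """Toy behavioural feature vector."""
    return [fixation_ms / 1000.0, mouse_speed, keystroke_latency_ms / 1000.0]


def malicious_score(features: List[float], weights: List[float]) -> float:
    """Linear score standing in for a trained classifier."""
    return sum(f * w for f, w in zip(features, weights))


features = module_a_features([12.0, 9.5, 14.2], 0.62) + module_b_features(420.0, 1.8, 310.0)
WEIGHTS = [0.4, -0.9, 0.3, 0.2, 0.5]   # would come from training in a real system
score = malicious_score(features, WEIGHTS)
print("flag for review" if score > 5.0 else "looks normal", round(score, 2))
```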
8

Alacam, Özge [Verfasser], and Christopher [Akademischer Betreuer] Habel. "Verbally Assisted Haptic-Graph Comprehension : Multi-Modal Empirical Research Towards a Human Computer Interface / Özge Alacam. Betreuer: Christopher Habel." Hamburg : Staats- und Universitätsbibliothek Hamburg, 2016. http://d-nb.info/1095766449/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Multi-modal interface"

1

Bouma, Gosse, and SpringerLink (Online service), eds. Interactive Multi-modal Question-Answering. Berlin, Heidelberg: Springer-Verlag Berlin Heidelberg, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Djeraba, Chabane. Multi-modal user interactions in controlled environments. New York: Springer, 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Workshop on the Future of VR and AR Interfaces (2001: Yokohama, Japan). The future of VR and AR interfaces: Multi modal, humanoid, adaptive, and intelligent: proceedings of the workshop at IEEE Virtual Reality 2001, Yokohama, Japan, March 14, 2001. Sankt Augustin: GMD-Forschungszentrum Informationstechnik, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Bosch, Antal, and Gosse Bouma. Interactive Multi-modal Question-Answering. Springer, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Multi-modal interface"

1

Dasgupta, Ritwik. "The Power of Multi-Modal Interactions." In Voice User Interface Design, 67–103. Berkeley, CA: Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-4125-7_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Kitamura, Yoshifumi, Satoshi Sakurai, Tokuo Yamaguchi, Ryo Fukazawa, Yuichi Itoh, and Fumio Kishino. "Multi-modal Interface in Multi-Display Environment for Multi-users." In Human-Computer Interaction. Novel Interaction Methods and Techniques, 66–74. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-02577-8_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Park, Wanjoo, Laehyun Kim, Hyunchul Cho, and Sehyung Park. "Dial-Based Game Interface with Multi-modal Feedback." In Lecture Notes in Computer Science, 389–96. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15399-0_42.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Du, Yueqiao. "Interactive Design Principles of Educational APP Interface." In Application of Intelligent Systems in Multi-modal Information Analytics, 828–32. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-74814-2_119.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Teófilo, Luís Filipe, Pedro Alves Nogueira, and Pedro Brandão Silva. "GEMINI: A Generic Multi-Modal Natural Interface Framework for Videogames." In Advances in Intelligent Systems and Computing, 873–84. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-36981-0_81.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Lancel, Karen, Hermen Maat, and Frances Brazier. "EEG KISS: Shared Multi-modal, Multi Brain Computer Interface Experience, in Public Space." In Brain Art, 207–28. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-14323-7_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Tschöpe, Constanze, Frank Duckhorn, Markus Huber, Werner Meyer, and Matthias Wolff. "A Cognitive User Interface for a Multi-modal Human-Machine Interaction." In Speech and Computer, 707–17. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-99579-3_72.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Ushida, Hirohide, Tomohiko Sato, Toru Yamaguchi, and Tomohiro Takagi. "Fuzzy associative memory system and its application to multi-modal interface." In Advances in Fuzzy Logic, Neural Networks and Genetic Algorithms, 1–18. Berlin, Heidelberg: Springer Berlin Heidelberg, 1995. http://dx.doi.org/10.1007/3-540-60607-6_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Komatsu, Rikako, Dalai Tang, Takenori Obo, and Naoyuki Kubota. "Multi-modal Communication Interface for Elderly People in Informationally Structured Space." In Intelligent Robotics and Applications, 220–28. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-25489-5_22.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Mayer, Peter, and Paul Panek. "Towards a Multi-modal User Interface for an Affordable Assistive Robot." In Universal Access in Human-Computer Interaction. Aging and Assistive Environments, 680–91. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-07446-7_65.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Multi-modal interface"

1

Kadavasal, Muthukkumar S., and James H. Oliver. "Virtual Reality Interface Design for Multi-Modal Teleoperation." In ASME-AFM 2009 World Conference on Innovative Virtual Reality. ASMEDC, 2009. http://dx.doi.org/10.1115/winvr2009-732.

Full text
Abstract:
A multi modal teleoperation interface is introduced featuring an integrated virtual reality (VR) based simulation augmented by sensors and image processing capabilities on-board the remotely operated vehicle. The proposed virtual reality interface fuses an existing VR model with live video feed and prediction states, thereby creating a multi modal control interface. Virtual reality addresses the typical limitations of video-based teleoperation caused by signal lag and limited field of view. The 3D environment in VR along with visual cues generated from real time sensor data allows the operator to navigate in a continuous fashion. The vehicle incorporates an on-board computer and a stereo vision system to facilitate obstacle detection. A vehicle adaptation system with a priori risk maps and real state tracking system enables temporary autonomous operation of the vehicle for local navigation around obstacles and automatic re-establishment of the vehicle’s teleoperated state. Finally, the system provides real time update of the virtual environment based on anomalies encountered by the vehicle. The VR interface architecture is discussed and implementation results are presented. The VR based multi modal teleoperation interface is expected to be more adaptable and intuitive when compared to other interfaces.
APA, Harvard, Vancouver, ISO, and other styles
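The teleoperation interface above keeps the operator's VR view continuous despite signal lag by showing predicted vehicle states until fresh sensor data arrives. The short sketch below illustrates the general idea with a constant-velocity dead-reckoning step; it is an assumption-laden stand-in, not the paper's prediction model.

```python
# Small sketch of one idea from the abstract above: when the video/telemetry
# link lags, the VR view can keep moving by dead-reckoning the vehicle pose
# from its last reported state. Constant-velocity model for illustration only.
import math
from dataclasses import dataclass


@dataclass
class VehicleState:
    x: float        # metres
    y: float
    heading: float  # radians
    speed: float    # metres / second


def predict(state: VehicleState, lag_seconds: float) -> VehicleState:
    """Constant-velocity, constant-heading prediction over the link delay."""
    return VehicleState(
        x=state.x + state.speed * lag_seconds * math.cos(state.heading),
        y=state.y + state.speed * lag_seconds * math.sin(state.heading),
        heading=state.heading,
        speed=state.speed,
    )


last_report = VehicleState(x=10.0, y=4.0, heading=math.pi / 6, speed=2.0)
print(predict(last_report, lag_seconds=0.8))   # pose to render until fresh data arrives
```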
2

Gromov, Boris, Luca M. Gambardella, and Gianni A. Di Caro. "Wearable multi-modal interface for human multi-robot interaction." In 2016 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR). IEEE, 2016. http://dx.doi.org/10.1109/ssrr.2016.7784305.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Machidori, Yushi, Ko Takayama, and Kaoru Sugita. "Implementation of multi-modal interface for VR application." In 2019 IEEE 10th International Conference on Awareness Science and Technology (iCAST). IEEE, 2019. http://dx.doi.org/10.1109/icawst.2019.8923551.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Kim, Sung-Phil, Jae-Hwan Kang, Young Chang Jo, and Ian Oakley. "Development of a multi-modal personal authentication interface." In 2017 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC). IEEE, 2017. http://dx.doi.org/10.1109/apsipa.2017.8282125.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Nad, Dula, Nikola Miskovic, and Edin Omerdic. "Multi-Modal Supervision Interface Concept for Marine Systems." In OCEANS 2019 - Marseille. IEEE, 2019. http://dx.doi.org/10.1109/oceanse.2019.8867226.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Tei, Yoshiyuki, Tsutomu Terada, and Masahiko Tsukamoto. "A multi-modal interface for performers in stuffed suits." In AH '14: 5th Augmented Human International Conference. New York, NY, USA: ACM, 2014. http://dx.doi.org/10.1145/2582051.2582109.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Kurniawati, Evelyn, Luca Celetto, Nicola Capovilla, and Sapna George. "Personalized voice command systems in multi modal user interface." In 2012 IEEE International Conference on Emerging Signal Processing Applications (ESPA 2012). IEEE, 2012. http://dx.doi.org/10.1109/espa.2012.6152442.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Kasakevich, M., P. Boulanger, W. F. Bischof, and M. Garcia. "Multi-Modal Interface for a Real-Time CFD Solver." In 2006 IEEE International Workshop on Haptic Audio Visual Environments and their Applications (HAVE 2006). IEEE, 2006. http://dx.doi.org/10.1109/have.2006.283800.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Kamel, Ahmed. "A Multi-modal User Interface for Agent Assistant Systems." In 2009 Second International Conferences on Advances in Computer-Human Interactions (ACHI). IEEE, 2009. http://dx.doi.org/10.1109/achi.2009.56.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Zhang, Dan, Yijun Wang, Alexander Maye, Andreas K. Engel, Xiaorong Gao, Bo Hong, and Shangkai Gao. "A Brain-Computer Interface Based on Multi-Modal Attention." In 2007 3rd International IEEE/EMBS Conference on Neural Engineering. IEEE, 2007. http://dx.doi.org/10.1109/cne.2007.369697.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Multi-modal interface"

1

Perzanowski, Dennis, William Adams, Alan C. Schultz, and Elaine Marsh. Towards Seamless Integration in a Multi-modal Interface. Fort Belvoir, VA: Defense Technical Information Center, January 2000. http://dx.doi.org/10.21236/ada434973.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Greene, Kristen K., Kayee Kwong, Ross J. Michaels, and Gregory P. Fiumara. Design and Testing of a Mobile Touchscreen Interface for Multi-Modal Biometric Capture. National Institute of Standards and Technology, May 2014. http://dx.doi.org/10.6028/nist.ir.8003.

Full text
APA, Harvard, Vancouver, ISO, and other styles