Journal articles on the topic 'Multi-modal interface'

Consult the top 50 journal articles for your research on the topic 'Multi-modal interface.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1. Kim, Laehyun, Yoha Hwang, Se Hyung Park, and Sungdo Ha. "Dental Training System using Multi-modal Interface." Computer-Aided Design and Applications 2, no. 5 (January 2005): 591–98. http://dx.doi.org/10.1080/16864360.2005.10738323.

2. Oka, Ryuichi, Takuichi Nishimura, and Takashi Endo. "Media Information Processing for Robotics. Multi-modal Interface." Journal of the Robotics Society of Japan 16, no. 6 (1998): 749–53. http://dx.doi.org/10.7210/jrsj.16.749.

3. Abdullin, A., Elena Maklakova, Anna Ilunina, I. Zemtsov, et al. "Voice Search Algorithm in Intelligent Multi-Modal Interface." Modeling of Systems and Processes 12, no. 1 (August 26, 2019): 4–9. http://dx.doi.org/10.12737/article_5d639c80b4a438.38023981.

4. Park, Sankyu, Key-Sun Choi, and K. H. (Kane) Kim. "A Framework for Multi-Agent Systems with Multi-Modal User Interfaces in Distributed Computing Environments." International Journal of Software Engineering and Knowledge Engineering 7, no. 3 (September 1997): 351–69. http://dx.doi.org/10.1142/s0218194097000217.

Abstract:
In current multi-agent systems, the user typically interacts with a single agent at a time through relatively inflexible and modestly intelligent interfaces. As a consequence, these systems force users to submit only simplistic requests and suffer from problems such as the low-level nature of the system services offered to users, the weak reusability of agents, and the weak extensibility of the systems. In this paper, a framework for multi-agent systems called the open agent architecture (OAA), which reduces such problems, is discussed. The OAA is designed to handle complex requests that involve multiple agents. In some cases of complex requests from users, the components of the requests do not directly correspond to the capabilities of the various application agents, and therefore the system is required to translate the user's model of the task into the system's model before apportioning subtasks to the agents. To maximize users' efficiency in generating this type of complex request, the OAA offers an intelligent multi-modal user interface agent which supports a natural language interface with a mix of spoken language, handwriting, and gesture. The effectiveness of the OAA environment, including the intelligent distributed multi-modal interface, has been observed in our development of several practical multi-agent systems.

5. Indhumathi, C., Wenyu Chen, and Yiyu Cai. "Multi-Modal VR for Medical Simulation." International Journal of Virtual Reality 8, no. 1 (January 1, 2009): 1–7. http://dx.doi.org/10.20870/ijvr.2009.8.1.2707.

Abstract:
Over the past three decades computer graphics and virtual reality (VR) have played a significant role in adding value to medicine for diagnosis and treatment applications. Medical simulation is increasingly used in medical training and surgical planning. This paper investigates the multi-modal VR interface for medical simulation, focusing on motion tracking, stereographic visualization, voice navigation, and interactions. Applications in virtual anatomy learning, surgical training and pre-treatment planning will also be discussed.

6. Mac Namara, Damien, Paul Gibson, and Ken Oakley. "The Ideal Voting Interface: Classifying Usability." JeDEM - eJournal of eDemocracy and Open Government 6, no. 2 (December 2, 2014): 182–96. http://dx.doi.org/10.29379/jedem.v6i2.306.

Abstract:
This work presents a feature-oriented taxonomy for commercial electronic voting machines, which focuses on usability aspects. Based on this analysis, we propose a ‘Just-Like-Paper’ (JLP) classification method which identifies five broad categories of eVoting interface. We extend the classification to investigate its application as an indicator of voting efficiency and identify a universal ten-step process encompassing all possible voting steps spanning the twenty-six machines studied. Our analysis concludes that multi-functional and progressive interfaces are likely to be more efficient than multi-modal voter-activated machines.

7. Tomori, Zoltán, Peter Keša, Matej Nikorovič, Jan Kaňka, Petr Jákl, Mojmír Šerý, Silvie Bernatová, Eva Valušová, Marián Antalík, and Pavel Zemánek. "Holographic Raman tweezers controlled by multi-modal natural user interface." Journal of Optics 18, no. 1 (November 18, 2015): 015602. http://dx.doi.org/10.1088/2040-8978/18/1/015602.

8. Folgheraiter, Michele, Giuseppina Gini, and Dario Vercesi. "A Multi-Modal Haptic Interface for Virtual Reality and Robotics." Journal of Intelligent and Robotic Systems 52, no. 3-4 (May 30, 2008): 465–88. http://dx.doi.org/10.1007/s10846-008-9226-5.

9. Di Nuovo, Alessandro, Frank Broz, Ning Wang, Tony Belpaeme, Angelo Cangelosi, Ray Jones, Raffaele Esposito, Filippo Cavallo, and Paolo Dario. "The multi-modal interface of Robot-Era multi-robot services tailored for the elderly." Intelligent Service Robotics 11, no. 1 (September 2, 2017): 109–26. http://dx.doi.org/10.1007/s11370-017-0237-6.

10. Jung, Jang-Young, Young-Bin Kim, Sang-Hyeok Lee, and Shin-Jin Kang. "Expression Analysis System of Game Player based on Multi-modal Interface." Journal of Korea Game Society 16, no. 2 (April 30, 2016): 7–16. http://dx.doi.org/10.7583/jkgs.2016.16.2.7.

11. Wang, X., S. K. Ong, and A. Y. C. Nee. "Multi-modal augmented-reality assembly guidance based on bare-hand interface." Advanced Engineering Informatics 30, no. 3 (August 2016): 406–21. http://dx.doi.org/10.1016/j.aei.2016.05.004.

12. Kim, Hansol, Kun Ha Suh, and Eui Chul Lee. "Multi-modal user interface combining eye tracking and hand gesture recognition." Journal on Multimodal User Interfaces 11, no. 3 (March 6, 2017): 241–50. http://dx.doi.org/10.1007/s12193-017-0242-2.

13. Fussell, Susan R., Delia Grenville, Sara Kiesler, Jodi Forlizzi, and Anna M. Wichansky. "Accessing Multi-Modal Information on Cell Phones While Sitting and Driving." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 46, no. 22 (September 2002): 1809–13. http://dx.doi.org/10.1177/154193120204602207.

Abstract:
Multimodal interfaces have been identified as a possible solution for reducing the visual and motor demands of small devices such as cell phones. In a within-subjects factorial experiment, we explored where audio is useful in a cell phone interface that supports database applications. Participants sat at a desk and drove in a car simulator while choosing a hotel from a long descriptive list. We compared participants' performance with and without the option to listen to the information while it was presented in text. Participants rarely preferred or used the audio option while seated. A substantial number preferred and used the audio option while driving, especially when the hotel choice task was more difficult. Those who chose the audio option looked less at the phone, but increased their task time and did not improve their driving performance. We discuss implications of reading and listening for safety and design.

14. Zou, Chun-Ping, Duan-Shi Chen, and Hong-Xing Hua. "Torsional Vibration Analysis of Complicated Multi-Branched Shafting Systems by Modal Synthesis Method." Journal of Vibration and Acoustics 125, no. 3 (June 18, 2003): 317–23. http://dx.doi.org/10.1115/1.1569949.

Abstract:
Torsional vibration calculations for complicated multi-branched systems with rigid connections and flexible connections made up of elastic-coupling parts are very difficult to perform using conventional methods. In this paper, a modal synthesis method of torsional vibration analysis for such systems is proposed. This approach is an improved method of Hurty’s fixed-interface and Hou’s free-interface modal synthesis methods. Because of the introduction of a flexible substructure, the improved modal synthesis method can effectively treat a complicated system in which there exist both a rigid connection and a flexible connection formed by an elastic-coupling part. When the calculation is performed, the complicated multi-branched system is divided into several substructures that are analyzed by FEM (finite element method), except the special elastic-coupling part, which is defined as a flexible substructure and treated individually. The efficiency of modal synthesis is improved by choosing a suitable number of lower-frequency modes in the modal synthesis. As an example of an application of this method, the analysis of torsional vibration of a cam-type engine shafting system is carried out both numerically and experimentally. The results show that the above kind of multi-branched shafting system can be analyzed effectively by the proposed method.

15. Leite, Luis, Rui Torres, and Luis Aly. "Common Spaces: Multi-Modal-Media Ecosystem for Live Performances." Matlit Revista do Programa de Doutoramento em Materialidades da Literatura 6, no. 1 (August 10, 2018): 187–98. http://dx.doi.org/10.14195/2182-8830_6-1_13.

Abstract:
Common Spaces is an interface for real-time media convergence and live performance combining media, applications and devices. A multimodal media ecosystem was designed to respond to the requirement of a specific performance — how to mix multiple applications into a single environment. This collaborative environment provides a flexible interface for performers to negotiate, share, and mix media, applications, and devices. Common Spaces is a framework based on interoperability and data flow, a network of virtual wires connecting applications that “talk” to each other sharing resources through technologies such as OSC or Syphon. With this approach, media designers have the freedom to choose a set of applications and devices that best suit their needs and are not restricted to a unique environment. We have implemented and performed with this ecosystem in live events, demonstrating its feasibility. In our paper we describe the project's concept and methodology. In the proposed performance we will use the Digital Archive of Portuguese Experimental Literature (www.po-ex.net) as a framework, appropriating its database assets, remixing its contents, as well as the techniques and methods they imply, stimulating the understanding of the archive as variable and adaptable. These digital re-readings and re-codings of experimental poems further highlight the importance of the materialities of experimental writing, integrating self-awareness in the modes of exchanges between literature, music, animation, performance, and technology.

16. De Boeck, J., C. Raymaekers, and K. Coninx. "Aspects of Haptic Feedback in a Multi-modal Interface for Object Modelling." Virtual Reality 6, no. 4 (August 2003): 257–70. http://dx.doi.org/10.1007/s10055-003-0108-7.

17. Coury, Bruce G., John Sadowsky, Paul R. Schuster, Michael Kurnow, Marcus J. Huber, and Edmund H. Durfee. "Reducing the Interaction Burden of Complex Systems." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 41, no. 1 (October 1997): 335–39. http://dx.doi.org/10.1177/107118139704100175.

Abstract:
Reducing the burden of interacting with complex systems has been a long-standing goal of user interface design. In our approach to this problem, we have been developing user interfaces that allow users to interact with complex systems in a natural way and in high-level, task-related terms. These capabilities help users concentrate on making important decisions without the distractions of manipulating systems and user interfaces. To attain such a goal, our approach uses a unique combination of multi-modal interaction and interaction planning. In this paper, we motivate the basis for our approach, describe the user interface technologies we have developed, and briefly discuss the relevant research and development issues.

18. Horváth, Imre. "Investigation of Hand Motion Language in Shape Conceptualization." Journal of Computing and Information Science in Engineering 4, no. 1 (March 1, 2004): 37–42. http://dx.doi.org/10.1115/1.1645864.

Abstract:
This paper summarizes the results of an empirical study concerning the use of hand motions as one of the input mechanisms of a multi-modal interface of computer aided conceptual design systems for initial shape design of consumer durables. A hand motion language (HML) has been developed and used in designed experiments to describe simple, compound and hybrid shapes. The subjects were asked to reconstruct the presented shapes by sketching on paper. Comprehension of the hand motion language has been evaluated in terms of fidelity and efficiency. The results clearly indicate the potentials of a HML in shape conceptualization. In addition, the experiments revealed several new issues related to the application of the hand motion language in a multi-modal interface.

19. Wang, Ning, Alessandro Di Nuovo, Angelo Cangelosi, and Ray Jones. "Temporal patterns in multi-modal social interaction between elderly users and service robot." Interaction Studies 20, no. 1 (July 15, 2019): 4–24. http://dx.doi.org/10.1075/is.18042.wan.

Abstract:
Social interaction, especially for older people living alone, is a challenge currently facing human-robot interaction (HRI). There has been little research on user preference towards HRI interfaces. In this paper, we took both objective observations and participants’ opinions into account in studying older users with a robot partner. The developed dual-modal robot interface offered older users options of speech or touch screen to perform tasks. Fifteen people aged from 70 to 89 years old participated. We analyzed the spontaneous actions of the participants, including their attentional activities and conversational activities, the temporal characteristics of these social behaviours, as well as questionnaires. It has been revealed that the social engagement with the robot demonstrated by older people was no different from what might be expected towards a human partner. This study is an early attempt to reveal the social connections between human beings and a personal robot in real life.

20. Michelson, Nicholas J., Alberto L. Vazquez, James R. Eles, Joseph W. Salatino, Erin K. Purcell, Jordan J. Williams, X. Tracy Cui, and Takashi D. Y. Kozai. "Multi-scale, multi-modal analysis uncovers complex relationship at the brain tissue-implant neural interface: new emphasis on the biological interface." Journal of Neural Engineering 15, no. 3 (April 6, 2018): 033001. http://dx.doi.org/10.1088/1741-2552/aa9dae.

21. Lee, Yong-Gu, Hyungjun Park, Woontaek Woo, Jeha Ryu, Hong Kook Kim, Sung Wook Baik, Kwang Hee Ko, et al. "Immersive modeling system (IMMS) for personal electronic products using a multi-modal interface." Computer-Aided Design 42, no. 5 (May 2010): 387–401. http://dx.doi.org/10.1016/j.cad.2009.11.003.

22. Huang, Xingrong, Louis Jézéquel, Sébastien Besset, and Lin Li. "Optimization of the dynamic behavior of vehicle structures by means of passive interface controls." Journal of Vibration and Control 24, no. 3 (August 8, 2016): 466–91. http://dx.doi.org/10.1177/1077546316650131.

Abstract:
As a form of passive control, padding rubber layers onto the most heavily deformed zones of a system can improve the dynamic behavior and the acoustic comfort of a vehicle system. This paper proposes an extensive hybrid modal synthesis method for studying coupled fluid-structure systems while retaining only a few degrees of freedom. Modal criteria, corresponding to noise transmission paths between substructures in the system, have been derived to characterize the dynamic phenomenon from a modal view. These criteria were then substituted by Kriging interpolation models to avoid prohibitive simulation steps during optimization of the complex system. Once the mathematical models of the investigated modal criteria were established and the multi-objective functions for rubber characteristics defined, an approximate optimal solution leading to superior dynamic performance could be obtained based on a genetic algorithm. The analytical results and numerical experiments conducted have also justified the efficiency of our proposed strategy.

23. Luo, Xisheng, Yu Liang, Ting Si, and Zhigang Zhai. "Effects of non-periodic portions of interface on Richtmyer–Meshkov instability." Journal of Fluid Mechanics 861 (December 20, 2018): 309–27. http://dx.doi.org/10.1017/jfm.2018.923.

Abstract:
The development of a non-periodic air/SF6 gaseous interface subjected to a planar shock wave is investigated experimentally and theoretically to evaluate the effects of the non-periodic portions of the interface on the Richtmyer–Meshkov instability. Experimentally, five kinds of discontinuous chevron-shaped interfaces with or without non-periodic portions are created by the extended soap film technique. The post-shock flows and the interface morphologies are captured by schlieren photography combined with a high-speed video camera. A periodic chevron-shaped interface, which is multi-modal (81% fundamental mode and 19% high-order modes), is first considered to evaluate the impulsive linear model and several typical nonlinear models. Then, the non-periodic chevron-shaped interfaces are investigated and the results show that the existence of non-periodic portions significantly changes the balanced position of the initial interface, and subsequently disables the nonlinear model which is applicable to the periodic chevron-shaped interface. A modified nonlinear model is proposed to consider the effects of the non-periodic portions. It turns out that the new model can predict the growth of the shocked non-periodic interface well. Finally, a method is established using spectrum analysis on the initial shape of the interface to separate its bubble structure and spike structure such that the new model can apply to any random perturbed interface. These findings can facilitate the understanding of the evolution of non-periodic interfaces which are more common in reality.

24. Kato, Tsuneaki, and Mitsunori Matsushita. "Multi-Modal Interface for Information Access through Extraction and Visualization of Time-Series Information." Transactions of the Japanese Society for Artificial Intelligence 22 (2007): 553–62. http://dx.doi.org/10.1527/tjsai.22.553.

25. Fischer, Christian, and Günther Schmidt. "Multi-modal human-robot interface for interaction with a remotely operating mobile service robot." Advanced Robotics 12, no. 4 (January 1997): 397–409. http://dx.doi.org/10.1163/156855398x00262.

26. Kaber, David B., Melanie C. Wright, and Mohamed A. Sheik-Nainar. "Investigation of multi-modal interface features for adaptive automation of a human–robot system." International Journal of Human-Computer Studies 64, no. 6 (June 2006): 527–40. http://dx.doi.org/10.1016/j.ijhcs.2005.11.003.

27. Park, Juyeon, and Myeong-Heum Yeoun. "Exploring the Design of a Multi-Modal Interface Suitable for Smart Home Users' Situations." Journal of Communication Design 73 (October 31, 2020): 429–41. http://dx.doi.org/10.25111/jcd.2020.73.31.

28. De Bérigny Wall, Caitilin, and Xiangyu Wang. "InterANTARCTICA: Tangible User Interface for Museum Based Interaction." International Journal of Virtual Reality 8, no. 3 (January 1, 2009): 19–24. http://dx.doi.org/10.20870/ijvr.2009.8.3.2737.

Abstract:
This paper presents the design and concept for an interactive museum installation, InterANTARCTICA. The museum installation is based on a gesture-driven spatially surrounded tangible user interface (TUI) platform. The TUI allows a technological exploration of environmental climate change research by developing the status of interaction in museum installation art. The aim of the museum installation is to produce a cross-media platform suited to TUI and gestural interactions. We argue that our museum installation InterANTARCTICA pursues climate change in an interactive context, thus reinventing museum installation art in an experiential multi-modal context (sight, sound, touch).

29. Biocca, Frank, Jin Kim, and Yung Choi. "Visual Touch in Virtual Environments: An Exploratory Study of Presence, Multimodal Interfaces, and Cross-Modal Sensory Illusions." Presence: Teleoperators and Virtual Environments 10, no. 3 (June 2001): 247–65. http://dx.doi.org/10.1162/105474601300343595.

Abstract:
How do users generate an illusion of presence in a rich and consistent virtual environment from an impoverished, incomplete, and often inconsistent set of sensory cues? We conducted an experiment to explore how multimodal perceptual cues are integrated into a coherent experience of virtual objects and spaces. Specifically, we explored whether inter-modal integration contributes to generating the illusion of presence in virtual environments. To discover whether intermodal integration might play a role in presence, we looked for evidence of intermodal integration in the form of cross-modal interactions—perceptual illusions in which users use sensory cues in one modality to “fill in” the “missing” components of perceptual experience. One form of cross-modal interaction, a cross-modal transfer, is defined as a form of synesthesia, that is, a perceptual illusion in which stimulation to a sensory modality connected to the interface (such as the visual modality) is accompanied by perceived stimulation to an unconnected sensory modality that receives no apparent stimulation from the virtual environment (such as the haptic modality). Users of our experimental virtual environment who manipulated the visual analog of a physical force, a virtual spring, reported haptic sensations of “physical resistance”, even though the interface included no haptic displays. A path model of the data suggested that this cross-modal illusion was correlated with and dependent upon the sensation of spatial and sensory presence. We conclude that this is evidence that presence may derive from the process of multi-modal integration and, therefore, may be associated with other illusions, such as cross-modal transfers, that result from the process of creating a coherent mental model of the space. Finally, we suggest that this perceptual phenomenon might be used to improve user experiences with multimodal interfaces, specifically by supporting limited sensory displays (such as haptic displays) with appropriate synesthetic stimulation to other sensory modalities (such as visual and auditory analogs of haptic forces).

30. Kaufman, Leah S., Jim Stewart, Bruce Thomas, and Gerhard Deffner. "Computers and Telecommunications in the Year 2000: Multi-Modal Interfaces, Miniaturisation, and Portability." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 40, no. 6 (October 1996): 353–56. http://dx.doi.org/10.1177/154193129604000607.

Abstract:
In this, the second of three sets of position papers for the CTG-CSTG co-sponsored symposium on Computers and Telecommunications in the Year 2000, we begin with a paper by Leah Kaufman and Jim Stewart on the human factors challenges involved in creating an effective multimodal communications environment. Bruce Thomas continues with a position paper outlining the advantages and disadvantages of technology miniaturisation, and how these advantages and disadvantages impact our approaches to user interface design. In the final paper in this set, Gerhard Deffner describes the portability-functionality dilemma, in which designers are confronted with two distinct user goals that are difficult to meet simultaneously.

31. Sun, Bo Wen, Bin Shen, Wen Wu Chen, Yan Peng Zhang, and Jia Qi Lin. "VEET: 3D Virtual Electrical Experimental Tool Supporting Multi-Modal User Interfaces and Platforms." Advanced Materials Research 981 (July 2014): 196–99. http://dx.doi.org/10.4028/www.scientific.net/amr.981.196.

Abstract:
This paper introduces a practical and cross-platform virtual electrical experimental tool (VEET) based on an off-the-shelf game engine called Unity3D, which is powerful and flexible for developing Virtual and Augmented Reality (VR/AR) applications. Taking the electrical experiments of a technological university as examples, the virtual experimental system features lifelike three-dimensional (3D) experimental environments, an AR interactive interface on mobile devices, an intelligent detecting mechanism, and cross-platform support. We describe VEET's flexible design and demonstrate its use in teaching, where 120 students from three classes conducted electrical experiments with it. The experiments in VEET were presented on desktop, mobile and web browser using low-cost common devices (personal computer, Android handheld device, Chrome browser). Evaluation of the main performance parameters confirmed its practicality.

32. Khan, Sumbul, and Bige Tunçer. "Speech analysis for conceptual CAD modeling using multi-modal interfaces: An investigation into Architects’ and Engineers’ speech preferences." Artificial Intelligence for Engineering Design, Analysis and Manufacturing 33, no. 3 (March 14, 2019): 275–88. http://dx.doi.org/10.1017/s0890060419000015.

Abstract:
Speech- and gesture-based interfaces for computer-aided design (CAD) modeling must employ vocabulary suitable for target professional groups. We conducted an experiment with 40 participants from architecture and engineering backgrounds to elicit their speech preferences for four CAD manipulation tasks: Scale, Rotate, Copy, and Move. We compiled speech command terms used by participants and analyzed verbalizations based on three analytic themes: the exactness of descriptions, the granularity of descriptions, and the use of CAD legacy terms. We found that participants from both groups used precise and vague expressions in their verbalizations and used a median of three parameters in their verbalizations. Architects used CAD legacy terms more than Engineers in the tasks Scale and Rotate. Based on these findings, we give recommendations for the design of speech- and gesture-based interfaces for conceptual CAD modeling.

33. Doi, Masataka, Kenji Suzuki, and Shuji Hashimoto. "An Integrated Communicative Robot — BUGNOID." International Journal of Humanoid Robotics 1, no. 1 (March 2004): 127–42. http://dx.doi.org/10.1142/s0219843604000034.

Abstract:
A communicative robot — BUGNOID — which integrates various sensory data and behavior modules is introduced with some experimental results. To achieve flexible communication with humans, the robot has a multi-modal interface with diverse channels of communication. Moreover, the robot can create an environmental map and recognize its environment taking human behavior into account with the aim of co-existing with humans.

34. Fukui, Yukio, Makoto Shimojo, and Juli Yamashita. "Recognition by Inconsistent Information from Visual and Haptic Interface." Journal of Robotics and Mechatronics 9, no. 3 (June 20, 1997): 208–12. http://dx.doi.org/10.20965/jrm.1997.p0208.

Abstract:
Haptic interaction is an important paradigm to be investigated further in virtual reality technology. Though the human sense of sight is generally more sensitive than that of touch, it is subject to optical illusions. We conducted experiments to investigate the characteristics of shape recognition based on the senses of sight and of touch, or haptics, in an optical-illusion environment. The result is that the evaluated value of recognition is greatly affected by optical illusion. Furthermore, the differential threshold becomes larger when haptic information is added. Therefore, the design of multi-modal interfaces requires much consideration so that the visual environmental setting does not cause optical illusion. Also, two methods for haptic display are considered.

35. Vardhan, Jai, and Girijesh Prasad. "Enhancing an Eye-Tracker based Human-Computer Interface with Multi-modal Accessibility Applied for Text Entry." International Journal of Computer Applications 130, no. 16 (November 17, 2015): 16–22. http://dx.doi.org/10.5120/ijca2015907194.

36. Chauhan, P., J. Boger, T. Hussein, S. Moon, F. Rudzicz, and J. Polgar. "Creating the CARE-RATE interface through multi-modal participatory design with caregivers of people with dementia." Gerontechnology 17, s (April 24, 2018): 23. http://dx.doi.org/10.4017/gt.2018.17.s.023.00.

37. Wilson, M. D. "First MMI(2) demonstrator: A multi-modal interface for man machine interaction with knowledge based systems." Expert Systems with Applications 4, no. 4 (January 1992): 423. http://dx.doi.org/10.1016/0957-4174(92)90135-f.

38. Chen, Weiya, Tetsuo Sawaragi, and Toshihiro Hiraoka. "Adaptive multi-modal interface model concerning mental workload in take-over request during semi-autonomous driving." SICE Journal of Control, Measurement, and System Integration 14, no. 2 (March 11, 2021): 10–21. http://dx.doi.org/10.1080/18824889.2021.1894023.

39. Szynkiewicz, Wojciech, Włodzimierz Kasprzak, Cezary Zieliński, Wojciech Dudek, Maciej Stefańczyk, Artur Wilkowski, and Maksym Figat. "Utilisation of Embodied Agents in the Design of Smart Human–Computer Interfaces—A Case Study in Cyberspace Event Visualisation Control." Electronics 9, no. 6 (June 11, 2020): 976. http://dx.doi.org/10.3390/electronics9060976.

Abstract:
The goal of the research reported here was to investigate whether the design methodology utilising embodied agents can be applied to produce a multi-modal human–computer interface for cyberspace events visualisation control. This methodology requires that the designed system structure be defined in terms of cooperating agents having well-defined internal components exhibiting specified behaviours. System activities are defined in terms of finite state machines and behaviours parameterised by transition functions. In the investigated case the multi-modal interface is a component of the Operational Centre, which is a part of the National Cybersecurity Platform. Embodied agents have been successfully used in the design of robotic systems. However, robots operate in physical environments, while cyberspace events visualisation involves cyberspace, thus the applied design methodology required a different definition of the environment. It had to encompass the physical environment in which the operator acts and the computer screen where the results of those actions are presented. Smart human–computer interaction (HCI) is a time-aware, dynamic process in which two parties communicate via different modalities, e.g., voice, gesture, eye movement. The use of computer vision and machine intelligence techniques is essential when the human is carrying out an exhausting and concentration-demanding activity. The main role of this interface is to support security analysts and operators controlling visualisation of cyberspace events like incidents or cyber attacks, especially when manipulating graphical information. Visualisation control modalities include visual gesture- and voice-based commands.

40. Walther, Jürgen, Pablo D. Dans, Alexandra Balaceanu, Adam Hospital, Genís Bayarri, and Modesto Orozco. "A multi-modal coarse grained model of DNA flexibility mappable to the atomistic level." Nucleic Acids Research 48, no. 5 (January 20, 2020): e29-e29. http://dx.doi.org/10.1093/nar/gkaa015.

Abstract:
We present a new coarse grained method for the simulation of duplex DNA. The algorithm uses a generalized multi-harmonic model that can represent any multi-normal distribution of helical parameters, thus avoiding caveats of current mesoscopic models for DNA simulation and representing a breakthrough in the field. The method has been parameterized from accurate parmbsc1 atomistic molecular dynamics simulations of all unique tetranucleotide sequences of DNA embedded in long duplexes and takes advantage of the correlation between helical states and backbone configurations to derive atomistic representations of DNA. The algorithm, which is implemented in a simple web interface and in a standalone package, reproduces with high computational efficiency the structural landscape of long segments of DNA untreatable by atomistic molecular dynamics simulations.

41. Adolphs, Svenja, Dawn Knight, and Ronald Carter. "Capturing context for heterogeneous corpus analysis." International Journal of Corpus Linguistics 16, no. 3 (October 24, 2011): 305–24. http://dx.doi.org/10.1075/ijcl.16.3.02ado.

Abstract:
Heterogeneous corpora are emergent multi-modal datasets which comprise a variety of different records of everyday communication, from SMS/MMS messages to interactions in virtual environments, and from GPS data to phone and video calls. By tracking a person’s specific (inter)actions over time and place, the analysis of such “ubiquitous” corpora enables more detailed investigations of the interface between different communicative modes. This paper outlines some of the ways in which multi-modal, heterogeneous corpora can be utilised in corpus-based analyses of language-in-use and how we can construct richer descriptions of language use in relation to context. The paper further illustrates how the compilation of such corpora may enable us to extrapolate further information about communication across different speakers, media and environments, helping to generate useful insights into the extent to which everyday language and communicative choices are determined by different spatial, temporal and social contexts.

42. Weissgerber, Doanna, Bruce Bridgeman, and Alex Pang. "Feel the Information with VisPad: A Large Area Vibrotactile Device." Information Visualization 3, no. 1 (March 2004): 36–48. http://dx.doi.org/10.1057/palgrave.ivs.9500060.

Abstract:
A new haptics design for visualizing data is constructed out of commodity massage pads and custom controllers and interfaces to a computer. It is an output device for information that can be transmitted to a user who sits on the pad. Two unique properties of the design are: (a) its large feedback area and (b) its passive nature, where unlike most current haptics devices, the user's hands are free to work on other things. To test how useful such a device is for visualizing data, we added the VisPad interface to our protein structure-alignment program (ProtAlign) and performed usability studies. The studies demonstrated that information could be perceived significantly faster utilizing our multi-modal presentation compared to vision-based graphical visualization alone.

43. Tytgat, L., J. I. R. Owen, and P. Campagne. "Development of a Civil Military Interface in Europe for Galileo." Journal of Navigation 53, no. 2 (May 2000): 273–78. http://dx.doi.org/10.1017/s0373463300008845.

Abstract:
Satellite navigation has demonstrated its ability to enhance the safety and efficiency of multi-modal transport systems and act as a stimulant to economic growth and commercial development. The decision of the Transport Ministers to proceed with the definition stage of Galileo currently being funded by the European Commission and the ESA GalileoSat programme will include the complex question of security and defence considerations. Initial studies were completed over the past year in a Civil Military Interface study and by the GNSS Forum for Security and Defence Considerations. This paper presents the findings of the Civil Military Interface study undertaken for the European Commission, DGVII, that identified the security and military implications of a civil operated and controlled satellite navigation service for Europe.

44. Weiland, William J., James M. Stokes, and Joan M. Ryder. "Scaneval: A Toolkit for Eye-Tracking Research and Attention-Driven Applications." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 42, no. 16 (October 1998): 1148. http://dx.doi.org/10.1177/154193129804201608.

Abstract:
ScanEval is an eye tracking software toolkit that records and processes eye movement data in terms of screen regions of interest. Based on these data, the system provides real-time attention assessment measures that can be used dynamically to provide feedback to an application, as well as a variety of summary measures and data capture that can be used for post analysis. This toolkit can be used for a wide variety of purposes including: reactive multi-media or multi-modal systems, training systems incorporating attention assessment, interface design and evaluation, or human factors experimentation. ScanEval is designed as a self-contained program with an application programming interface, for ease of integration with existing applications. It represents a major new enabling technology for commercial systems, in that it will make eye-tracking and visual attention measurement readily available to system developers.

45. Regodić, Milovan, Zoltán Bárdosi, Georgi Diakov, Malik Galijašević, Christian F. Freyschlag, and Wolfgang Freysinger. "Visual display for surgical targeting: concepts and usability study." International Journal of Computer Assisted Radiology and Surgery 16, no. 9 (April 8, 2021): 1565–76. http://dx.doi.org/10.1007/s11548-021-02355-8.

Abstract:
Purpose: Interactive image-guided surgery technologies enable accurate target localization while preserving critical nearby structures in many surgical interventions. Current state-of-the-art interfaces largely employ traditional anatomical cross-sectional views or augmented reality environments to present the actual spatial location of the surgical instrument in preoperatively acquired images. This work proposes an alternative, simple, minimalistic visual interface intended to assist during real-time surgical target localization.
Methods: The estimated 3D pose of the interventional instruments and their positional uncertainty are intuitively presented in a visual interface with respect to the target point. A usability study with multidisciplinary participants evaluates the proposed interface projected in surgical microscope oculars against cross-sectional views. The latter was presented on a screen both stand-alone and combined with the proposed interface. The instruments were electromagnetically navigated in phantoms.
Results: The usability study demonstrated that the participants were able to detect invisible targets marked in phantom imagery with significant enhancements for localization accuracy and duration time. Clinically experienced users reached the targets with shorter trajectories. The stand-alone and multi-modal versions of the proposed interface outperformed cross-sectional views-only navigation in both quantitative and qualitative evaluations.
Conclusion: The results and participants’ feedback indicate potential to accurately navigate users toward the target with less distraction and workload. An ongoing study evaluates the proposed system in a preclinical setting for auditory brainstem implantation.

46. McWilliams, Thomas, Bruce Mehler, Bobbie Seppelt, and Bryan Reimer. "Driving Simulator Validation for In-Vehicle Human Machine Interface Assessment." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 63, no. 1 (November 2019): 2104–8. http://dx.doi.org/10.1177/1071181319631438.

Abstract:
Driving simulator validation is an important and ongoing process. Advances in in-vehicle human machine interfaces (HMI) mean there is a continuing need to reevaluate the validity of use cases of driving simulators relative to real-world driving. Along with this, our tools for evaluating driver demand are evolving, and these approaches and measures must also be considered in evaluating the validity of a driving simulator for particular purposes. We compare driver glance behavior during HMI interactions with a production-level multi-modal infotainment system on-road and in a driving simulator. In glance behavior analysis using traditional glance metrics, as well as a contemporary modified AttenD measure, we see evidence for strong relative validity and instances of absolute validity of the simulator compared to on-road driving.

47. Canare, Dominic, Barbara Chaparro, and Alex Chaparro. "Using Gesture, Gaze, and Combination Input Schemes as Alternatives to the Computer Mouse." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 62, no. 1 (September 2018): 297–301. http://dx.doi.org/10.1177/1541931218621068.

Abstract:
Novel input devices can increase the bandwidth between users and their devices. Traditional desktop computing uses windows, icons, menus, and pointers – an interface built for the computer mouse and very effective for pointing-and-clicking. Alternative devices provide a variety of interactions including touch-free, gesture-based input and gaze-tracking to determine the user’s on-screen gaze location, but these input channels are not well-suited to a point-and-click interface. This study evaluates five new schemes, some multi-modal. These experimental schemes perform worse than mouse-based input for a picture sorting task, and motion-based gesture control creates more errors. Some gaze-based input has similar performance to the mouse while not creating additional workload.

48. James, Jose, Bhavani Rao R., and Gabriel Neamtu. "Design of a bi-manual haptic interface for skill acquisition in surface mount device soldering." Soldering & Surface Mount Technology 31, no. 2 (April 1, 2019): 133–42. http://dx.doi.org/10.1108/ssmt-01-2018-0001.

Abstract:
Purpose: Offering unskilled people training in engineering and vocational skills helps to decrease the unemployment rate. The purpose of this paper is to augment actual hands-on conventional vocational training methods with virtual haptic simulations as part of computer-based vocational education and training.
Design/methodology/approach: This paper discusses the design of a bi-manual virtual multi-modal training interface for learning basic skills in surface mount device hand soldering. This research aims to analyze the human hand dexterity of novices and experts at micro-level skill knowledge capture by simulating and tracking the users’ actions in the manual soldering process through a multi-modal user interface.
Findings: Haptic feedback can enhance the experience of a virtual training environment for the end user and can provide a supplementary modality for imparting tangible principles to increase effectiveness. This will improve the teaching and learning of engineering and vocational skills with touch-based haptics technology, targeted toward teachers and students of various disciplines in engineering. Compared with traditional training methods for learning soldering skills, the proposed method shows more efficiency in faster skill acquisition and skill learning.
Originality/value: In this study, the authors propose a novel bi-manual virtual training simulator model for teaching soldering skills for surface mount technology and inspection. This research investigates the acquisition of soldering skills through a virtual environment, with and without haptic feedback. It acts as a basic-level training simulator that provides introductory training in soldering skills and can help initially unskilled people find educational opportunities and job offers in the electronics industry.

49. Pandolfi, Ronald J., Daniel B. Allan, Elke Arenholz, Luis Barroso-Luque, Stuart I. Campbell, Thomas A. Caswell, Austin Blair, et al. "Xi-cam: a versatile interface for data visualization and analysis." Journal of Synchrotron Radiation 25, no. 4 (May 31, 2018): 1261–70. http://dx.doi.org/10.1107/s1600577518005787.

Abstract:
Xi-cam is an extensible platform for data management, analysis and visualization. Xi-cam aims to provide a flexible and extensible approach to synchrotron data treatment as a solution to rising demands for high-volume/high-throughput processing pipelines. The core of Xi-cam is an extensible plugin-based graphical user interface platform which provides users with an interactive interface to processing algorithms. Plugins are available for SAXS/WAXS/GISAXS/GIWAXS, tomography and NEXAFS data. With Xi-cam's 'advanced' mode, data processing steps are designed as a graph-based workflow, which can be executed live, locally or remotely. Remote execution utilizes high-performance computing or de-localized resources, allowing for the effective reduction of high-throughput data. Xi-cam's plugin-based architecture targets cross-facility and cross-technique collaborative development, in support of multi-modal analysis. Xi-cam is open-source and cross-platform, and available for download on GitHub.

50. Danihelka, Jiri, Roman Hak, Lukas Kencl, and Jiri Zara. "3D Talking-Head Interface to Voice-Interactive Services on Mobile Phones." International Journal of Mobile Human Computer Interaction 3, no. 2 (April 2011): 50–64. http://dx.doi.org/10.4018/jmhci.2011040104.

Abstract:
This paper presents a novel framework for easy creation of interactive, platform-independent voice services with an animated 3D talking-head interface on mobile phones. The framework supports automated multi-modal interaction using speech and 3D graphics. The difficulty of synchronizing the audio stream to the animation is examined, and alternatives for distributed network control of the animation and application logic are discussed. The ability of modern mobile devices to handle such applications is documented, and it is shown that the power consumption trade-off of rendering on the mobile phone versus streaming from the server favors the phone. The presented tools will empower developers and researchers in future research and usability studies in the area of mobile talking-head applications. These may be used, for example, in entertainment, commerce, health care or education.
