
Journal articles on the topic 'Interfaces multimodales'


Below are the top 50 journal articles on the topic 'Interfaces multimodales.'


1. Laforest, Frédérique, Stéphane Frénot, and Nada Al Masri. "Dossier médical semi-structuré pour des interfaces de saisie multimodales." Document numérique 6, no. 1-2 (2002): 29–46. http://dx.doi.org/10.3166/dn.6.1-2.29-46.
2. Loayza, Andrea, Rodrigo Proaño, and Diego Ordóñez Camacho. "Aplicaciones sensibles al contexto. Tendencias actuales." Enfoque UTE 4, no. 2 (2013): 95–110. http://dx.doi.org/10.29019/enfoqueute.v4n2.31.

Abstract: (Received: 2013/10/07; accepted: 2013/12/10) Context-aware applications automatically adapt their behavior and configuration depending on environmental conditions and user preferences. This state-of-the-art review presents trends in the techniques and tools for developing such applications, as well as the areas of current interest to the scientific community in this field, highlighting research on multimodal interfaces, localization, activity detection, interruption management, predictive applications and …
3. Reyes Flores, Itzel Alessandra, Carmen Mezura-Godoy, and Gabriela Sánchez Morales. "Hacia un modelo de interfaces multimodales adaptables a los canales de aprendizaje en aplicaciones colaborativas como apoyo a la educación." Research in Computing Science 111, no. 1 (2016): 57–67. http://dx.doi.org/10.13053/rcs-111-1-5.

4. Abdelmessih, Marie Thérèse. "Strategies of Engagement in Using Life: A Multimodal Novel." Interfaces, no. 38 (January 1, 2017): 105–26. http://dx.doi.org/10.4000/interfaces.314.

5. Waibel, Alex, Minh Tue Vo, Paul Duchnowski, and Stefan Manke. "Multimodal interfaces." Artificial Intelligence Review 10, no. 3-4 (1996): 299–319. http://dx.doi.org/10.1007/bf00127684.
6. Crangle, Colleen. "Conversational interfaces to robots." Robotica 15, no. 1 (1997): 117–27. http://dx.doi.org/10.1017/s0263574797000143.

Abstract: There is growing interest in robots that are designed specifically to interact with people and which respond to voice commands. Very little attention has been paid, however, to the kind of verbal interaction that is possible or desirable with robots. This paper presents recent work in multimodal interfaces that addresses this question. It proposes a new form of robot-user interface, namely a collaborative conversational interface. This article explains what collaborative conversational interfaces are, argues for their application in robots, and presents strategies for designing good conversati…
7. Santangelo, A., A. Gentile, G. Vella, N. Ingraffia, and M. Liotta. "XPL the Extensible Presentation Language." Mobile Information Systems 5, no. 2 (2009): 125–39. http://dx.doi.org/10.1155/2009/317534.

Abstract: The last decade has witnessed a growing interest in the development of web interfaces enabling both multiple ways to access contents and, at the same time, fruition by multiple modalities of interaction (point-and-click, contents reading, voice commands, gestures, etc.). In this paper we describe a framework aimed at streamlining the design process of multi-channel, multimodal interfaces enabling full reuse of software components. This framework is called the eXtensible Presentation architecture and Language (XPL), a presentation language based on design pattern paradigm that keeps separated t…
8. Johnston, Michael, and Srinivas Bangalore. "Finite-state multimodal integration and understanding." Natural Language Engineering 11, no. 2 (2005): 159–87. http://dx.doi.org/10.1017/s1351324904003572.

Abstract: Multimodal interfaces are systems that allow input and/or output to be conveyed over multiple channels such as speech, graphics, and gesture. In addition to parsing and understanding separate utterances from different modes such as speech or gesture, multimodal interfaces also need to parse and understand composite multimodal utterances that are distributed over multiple input modes. We present an approach in which multimodal parsing and understanding are achieved using a weighted finite-state device which takes speech and gesture streams as inputs and outputs their joint interpretation. In co…
9. Ryumin, Dmitry, Ildar Kagirov, Alexandr Axyonov, et al. "A Multimodal User Interface for an Assistive Robotic Shopping Cart." Electronics 9, no. 12 (2020): 2093. http://dx.doi.org/10.3390/electronics9122093.

Abstract: This paper presents the research and development of the prototype of the assistive mobile information robot (AMIR). The main features of the presented prototype are voice and gesture-based interfaces with Russian speech and sign language recognition and synthesis techniques and a high degree of robot autonomy. AMIR prototype’s aim is to be used as a robotic cart for shopping in grocery stores and/or supermarkets. Among the main topics covered in this paper are the presentation of the interface (three modalities), the single-handed gesture recognition system (based on a collected database of Ru…
10. O’Halloran, Kay, Sabine Tan, Bradley Smith, and Alexey Podlasov. "Challenges in designing digital interfaces for the study of multimodal phenomena." Information Design Journal 18, no. 1 (2010): 2–21. http://dx.doi.org/10.1075/idj.18.1.02hal.

Abstract: The paper discusses the challenges faced by researchers in developing effective digital interfaces for analyzing the meaning-making processes of multimodal phenomena. The authors propose a social semiotic approach as the underlying theoretical foundation, because interactive digital technology is the embodiment of multimodal social semiotic communication. The paper outlines the complex issues with which researchers are confronted in designing digital interface frameworks for modeling, analyzing, and retrieving meaning from multimodal data, giving due consideration to the multiplicity of theore…
11. Yamauchi, Takashi, Jinsil Seo, and Annie Sungkajun. "Interactive Plants: Multisensory Visual-Tactile Interaction Enhances Emotional Experience." Mathematics 6, no. 11 (2018): 225. http://dx.doi.org/10.3390/math6110225.

Abstract: Using a multisensory interface system, we examined how people’s emotional experiences change as their tactile sense (touching a plant) was augmented with visual sense (“seeing” their touch). Our system (the Interactive Plant system) senses the electrical capacitance of the human body and visualizes users’ tactile information on a flat screen (when the touch is gentle, the program draws small and thin roots around the pot; when the touch is more harsh or abrupt, big and thick roots are displayed). We contrasted this multimodal combination (touch + vision) with a unimodal interface (touch only o…
12. Chandarana, Meghan, Erica L. Meszaros, Anna Trujillo, and B. Danette Allen. "Natural Language Based Multimodal Interface for UAV Mission Planning." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 61, no. 1 (2017): 68–72. http://dx.doi.org/10.1177/1541931213601483.

Abstract: As the number of viable applications for unmanned aerial vehicle (UAV) systems increases at an exponential rate, interfaces that reduce the reliance on highly skilled engineers and pilots must be developed. Recent work aims to make use of common human communication modalities such as speech and gesture. This paper explores a multimodal natural language interface that uses a combination of speech and gesture input modalities to build complex UAV flight paths by defining trajectory segment primitives. Gesture inputs are used to define the general shape of a segment while speech inputs provide ad…
13. Sreetharan, Sharmila, and Michael Schutz. "Improving Human–Computer Interface Design through Application of Basic Research on Audiovisual Integration and Amplitude Envelope." Multimodal Technologies and Interaction 3, no. 1 (2019): 4. http://dx.doi.org/10.3390/mti3010004.

Abstract: Quality care for patients requires effective communication amongst medical teams. Increasingly, communication is required not only between team members themselves, but between members and the medical devices monitoring and managing patient well-being. Most human–computer interfaces use either auditory or visual displays, and despite significant experimentation, they still elicit well-documented concerns. Curiously, few interfaces explore the benefits of multimodal communication, despite extensive documentation of the brain’s sensitivity to multimodal signals. New approaches built on insights f…
14. Gaouar, Lamia, Abdelkrim Benamar, Olivier Le Goaer, and Frédérique Biennier. "HCIDL: Human-computer interface description language for multi-target, multimodal, plastic user interfaces." Future Computing and Informatics Journal 3, no. 1 (2018): 110–30. http://dx.doi.org/10.1016/j.fcij.2018.02.001.

15. Maye, Alexander, Dan Zhang, Yijun Wang, Shangkai Gao, and Andreas K. Engel. "Multimodal brain-computer interfaces." Tsinghua Science and Technology 16, no. 2 (2011): 133–39. http://dx.doi.org/10.1016/s1007-0214(11)70020-7.

16. Dutoit, Thierry, Laurence Nigay, and Michael Schnaider. "Multimodal human–computer interfaces." Signal Processing 86, no. 12 (2006): 3515–17. http://dx.doi.org/10.1016/j.sigpro.2006.03.031.

17. Flanagan, J. L. "Speech-centric multimodal interfaces." IEEE Signal Processing Magazine 21, no. 6 (2004): 76–81. http://dx.doi.org/10.1109/msp.2004.1359145.

18. Fähnrich, K. P., and K. H. Hanne. "Multimodal and multimedia interfaces." ACM SIGCHI Bulletin 26, no. 3 (1994): 17–18. http://dx.doi.org/10.1145/181518.181520.
19. Bangalore, Srinivas, and Michael Johnston. "Robust Understanding in Multimodal Interfaces." Computational Linguistics 35, no. 3 (2009): 345–97. http://dx.doi.org/10.1162/coli.08-022-r2-06-26.

Abstract: Multimodal grammars provide an effective mechanism for quickly creating integration and understanding capabilities for interactive systems supporting simultaneous use of multiple input modalities. However, like other approaches based on hand-crafted grammars, multimodal grammars can be brittle with respect to unexpected, erroneous, or disfluent input. In this article, we show how the finite-state approach to multimodal language processing can be extended to support multimodal applications combining speech with complex freehand pen input, and evaluate the approach in the context of a multimodal…
20. Rigas, Dimitrios, and Badr Almutairi. "An Empirical Investigation into the Role of Avatars in Multimodal E-government Interfaces." International Journal of Sociotechnology and Knowledge Development 5, no. 1 (2013): 14–22. http://dx.doi.org/10.4018/jskd.2013010102.

Abstract: Interfaces for e-government applications are becoming essential for modern life. E-government uses web-based interfaces to deliver effective, efficient and convenient services to citizens, business and government. However, one of the main obstacles (or barriers) to using such applications is the lack of user trust and usability. These issues are often neglected in the interfaces of e-government applications. This paper describes an empirical comparative study that investigated the use of multimodal metaphors to enhance usability and increase user trust. Specific designs of multi…
21. Gomes, Kylie, and Sara L. Riggs. "Crossmodal matching." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 60, no. 1 (2016): 1595–99. http://dx.doi.org/10.1177/1541931213601368.

Abstract: Multimodal interfaces which distribute information across vision, audition, and touch have been demonstrated to improve performance in various complex domains. However, many multimodal studies to date fail to conduct crossmodal matching, a critical step to ensure cues across different sensory channels are perceived to be of equal intensity. The present study compared two different methods of crossmodal matching based on previous work conducted by Stevens: the methods of bracketing and adjustment. Each participant completed the crossmodal matching task using two different interfaces using the…
22. Novais, Ana Elisa Costa. "Convenções de interfaces digitais e leitura ou: para ler interfaces nos textos." Texto Digital 15, no. 2 (2020): 130–61. http://dx.doi.org/10.5007/1807-9288.2019v15n2p130.

Abstract: The phenomenon in focus in this work is a movement that carries conventions of digital interfaces (buttons, windows, system messages, icons, among others) into printed texts. First, I discuss the cultural importance of digital interfaces along three dimensions: as dialogue, as remediated media, and as a specific semiotic system constituted by conventions of its own. I then analyze six texts that use interface conventions in their multimodal composition, seeking, in dialogue with studies and categories from Interaction Design, elements to clarify which…
23. Novais, Ana Elisa. "Convenções de interfaces digitais e leitura ou: para ler interfaces nos textos." Texto Digital 16, no. 1 (2020): 233–65. http://dx.doi.org/10.5007/1807-9288.2020v16n1p233.

Abstract: The phenomenon in focus in this work is a movement that carries conventions of digital interfaces (buttons, windows, system messages, icons, among others) into printed texts. First, I discuss the cultural importance of digital interfaces along three dimensions: as dialogue, as remediated media, and as a specific semiotic system constituted by conventions of its own. I then analyze six texts that use interface conventions in their multimodal composition, seeking, in dialogue with studies and categories from Interaction Design, elements to clarify which…
24. Jones, Matt. "Classic and Alternative Mobile Search." International Journal of Mobile Human Computer Interaction 3, no. 1 (2011): 22–36. http://dx.doi.org/10.4018/jmhci.2011010102.

Abstract: As mobile search turns into a mainstream activity, the author reflects on research that provides insights into the impact of current interfaces and pointers to yet unmet needs. Classic text dominated interface and interaction techniques are reviewed, showing how they can enhance the user experience. While today’s interfaces emphasise direct, query-result approaches, serving up discrete chunks of content, the author suggests an alternative set of features for future mobile search. With reference to example systems, the paper argues for indirect, continuous and multimodal approaches. Further, wh…
25. Karpov, Ronzhin, Lee, and Shalin. "Speech technologies in multimodal interfaces." SPIIRAS Proceedings 1, no. 2 (2014): 183. http://dx.doi.org/10.15622/sp.2.13.

26. Sebe, Nicu. "Multimodal interfaces: Challenges and perspectives." Journal of Ambient Intelligence and Smart Environments 1, no. 1 (2009): 23–30. http://dx.doi.org/10.3233/ais-2009-0003.

27. Paterno, Fabio, Carmen Santoro, Jani Mantyjarvi, Giulio Mori, and Sandro Sansone. "Authoring pervasive multimodal user interfaces." International Journal of Web Engineering and Technology 4, no. 2 (2008): 235. http://dx.doi.org/10.1504/ijwet.2008.018099.

28. Trejo, L. J., K. R. Wheeler, C. C. Jorgensen, et al. "Multimodal neuroelectric interface development." IEEE Transactions on Neural Systems and Rehabilitation Engineering 11, no. 2 (2003): 199–204. http://dx.doi.org/10.1109/tnsre.2003.814426.

29. Kost, Stefan. "GITK – Eine generische Architektur für multimodale Interfaces." i-com 3, no. 1/2004 (2004): 42–43. http://dx.doi.org/10.1524/icom.3.1.42.32959.

30. Basov, Oleg, Irina Kipyatkova, and Anton Saveliev. "Multimodal Subscriber Interfaces for Infocommunication Systems." Computing and Informatics 36, no. 4 (2017): 908–24. http://dx.doi.org/10.4149/cai_2017_4_908.

31. Prammanee, Srihathai, Klaus Moessner, and Rahim Tafazolli. "Discovering modalities for adaptive multimodal interfaces." Interactions 13, no. 3 (2006): 66–70. http://dx.doi.org/10.1145/1125864.1125906.

32. Karpov, A. A., and R. M. Yusupov. "Multimodal Interfaces of Human–Computer Interaction." Herald of the Russian Academy of Sciences 88, no. 1 (2018): 67–74. http://dx.doi.org/10.1134/s1019331618010094.

33. Encarnacao, L. M., and L. J. Hettinger. "Guest editors' introduction - Perceptual multimodal interfaces." IEEE Computer Graphics and Applications 23, no. 5 (2003): 24–25. http://dx.doi.org/10.1109/mcg.2003.1231174.

34. Van Hees, Kris, and Jan Engelen. "Equivalent representations of multimodal user interfaces." Universal Access in the Information Society 12, no. 4 (2012): 339–68. http://dx.doi.org/10.1007/s10209-012-0282-z.

35. König, Werner A., Roman Rädle, and Harald Reiterer. "Interactive design of multimodal user interfaces." Journal on Multimodal User Interfaces 3, no. 3 (2010): 197–213. http://dx.doi.org/10.1007/s12193-010-0044-2.

36. Oviatt, Sharon, and Philip Cohen. "Perceptual user interfaces: multimodal interfaces that process what comes naturally." Communications of the ACM 43, no. 3 (2000): 45–53. http://dx.doi.org/10.1145/330534.330538.

37. Sharma, R., V. I. Pavlovic, and T. S. Huang. "Toward multimodal human-computer interface." Proceedings of the IEEE 86, no. 5 (1998): 853–69. http://dx.doi.org/10.1109/5.664275.
38. Raptis, George E., Giannis Kavvetsos, and Christina Katsini. "MuMIA: Multimodal Interactions to Better Understand Art Contexts." Applied Sciences 11, no. 6 (2021): 2695. http://dx.doi.org/10.3390/app11062695.

Abstract: Cultural heritage is a challenging domain of application for novel interactive technologies, where varying aspects in the way that cultural assets are delivered play a major role in enhancing the visitor experience, either onsite or online. Technology-supported natural human–computer interaction that is based on multimodalities is a key factor in enabling wider and enriched access to cultural heritage assets. In this paper, we present the design and evaluation of an interactive system that aims to support visitors towards a better understanding of art contexts through the use of a multimodal i…
39. He, Zhipeng, Zina Li, Fuzhou Yang, et al. "Advances in Multimodal Emotion Recognition Based on Brain–Computer Interfaces." Brain Sciences 10, no. 10 (2020): 687. http://dx.doi.org/10.3390/brainsci10100687.

Abstract: With the continuous development of portable noninvasive human sensor technologies such as brain–computer interfaces (BCI), multimodal emotion recognition has attracted increasing attention in the area of affective computing. This paper primarily discusses the progress of research into multimodal emotion recognition based on BCI and reviews three types of multimodal affective BCI (aBCI): aBCI based on a combination of behavior and brain signals, aBCI based on various hybrid neurophysiology modalities and aBCI based on heterogeneous sensory stimuli. For each type of aBCI, we further review sever…
40. Kettebekov, Sanshzar, and Rajeev Sharma. "Understanding Gestures in Multimodal Human Computer Interaction." International Journal on Artificial Intelligence Tools 9, no. 2 (2000): 205–23. http://dx.doi.org/10.1142/s021821300000015x.

Abstract: In recent years because of the advances in computer vision research, free hand gestures have been explored as a means of human-computer interaction (HCI). Gestures in combination with speech can be an important step toward natural, multimodal HCI. However, interpretation of gestures in a multimodal setting can be a particularly challenging problem. In this paper, we propose an approach for studying multimodal HCI in the context of a computerized map. An implemented testbed allows us to conduct user studies and address issues toward understanding of hand gestures in a multimodal computer interf…
41. Chai, J. Y., Z. Prasov, and S. Qu. "Cognitive Principles in Robust Multimodal Interpretation." Journal of Artificial Intelligence Research 27 (September 26, 2006): 55–83. http://dx.doi.org/10.1613/jair.1936.

Abstract: Multimodal conversational interfaces provide a natural means for users to communicate with computer systems through multiple modalities such as speech and gesture. To build effective multimodal interfaces, automated interpretation of user multimodal inputs is important. Inspired by the previous investigation on cognitive status in multimodal human machine interaction, we have developed a greedy algorithm for interpreting user referring expressions (i.e., multimodal reference resolution). This algorithm incorporates the cognitive principles of Conversational Implicature and Givenness Hierarchy…
42. Faria, Brígida Mónica, Luís Paulo Reis, and Nuno Lau. "Knowledge Discovery and Multimodal Inputs for Driving an Intelligent Wheelchair." International Journal of Knowledge Discovery in Bioinformatics 2, no. 4 (2011): 18–34. http://dx.doi.org/10.4018/jkdb.2011100102.

Abstract: Cerebral Palsy is defined as a group of permanent disorders in the development of movement and posture. The motor disorders in cerebral palsy are associated with deficits of perception, cognition, communication, and behaviour, which can affect autonomy and independence. The interface between the user and an intelligent wheelchair can be done with several input devices such as joysticks, microphones, and brain computer interfaces (BCI). BCI enables interaction between users and hardware systems through the recognition of brainwave activity. The current BCI systems have very low accuracy on the…
43. Kvale, Knut, and Narada Dilp Warakagoda. "Speech centric multimodal interfaces for disabled users." Technology and Disability 20, no. 2 (2008): 87–95. http://dx.doi.org/10.3233/tad-2008-20204.

44. Ronzhin and Karpov. "Multimodal Interfaces: Main Principles and Cognitive Aspects." SPIIRAS Proceedings 1, no. 3 (2014): 300. http://dx.doi.org/10.15622/sp.3.18.

45. Dahl, Deborah A. "The W3C multimodal architecture and interfaces standard." Journal on Multimodal User Interfaces 7, no. 3 (2013): 171–82. http://dx.doi.org/10.1007/s12193-013-0120-5.

46. Suhm, Bernhard, Brad Myers, and Alex Waibel. "Multimodal error correction for speech user interfaces." ACM Transactions on Computer-Human Interaction 8, no. 1 (2001): 60–98. http://dx.doi.org/10.1145/371127.371166.

47. Grasso, Michael A., David S. Ebert, and Timothy W. Finin. "The integrality of speech in multimodal interfaces." ACM Transactions on Computer-Human Interaction 5, no. 4 (1998): 303–25. http://dx.doi.org/10.1145/300520.300521.

48. Cohen, Philip R., and David R. McGee. "Tangible multimodal interfaces for safety-critical applications." Communications of the ACM 47, no. 1 (2004): 41. http://dx.doi.org/10.1145/962081.962103.

49. Kong, J., W. Y. Zhang, N. Yu, and X. J. Xia. "Design of human-centric adaptive multimodal interfaces." International Journal of Human-Computer Studies 69, no. 12 (2011): 854–69. http://dx.doi.org/10.1016/j.ijhcs.2011.07.006.

50. Song, Kisub, and Kyong-Ho Lee. "Generating multimodal user interfaces for Web services." Interacting with Computers 20, no. 4-5 (2008): 480–90. http://dx.doi.org/10.1016/j.intcom.2008.07.001.