
Journal articles on the topic 'Natural user interfaces'


Consult the top 50 journal articles for your research on the topic 'Natural user interfaces.'


1

Alvarez-Lopez, Fernando, Marcelo Fabián Maina, and Francesc Saigí-Rubió. "Natural User Interfaces." Surgical Innovation 23, no. 4 (July 9, 2016): 429–30. http://dx.doi.org/10.1177/1553350616639145.

2

Hearst, Marti A. "'Natural' search user interfaces." Communications of the ACM 54, no. 11 (November 2011): 60–67. http://dx.doi.org/10.1145/2018396.2018414.

3

Norman, Donald A. "Natural user interfaces are not natural." Interactions 17, no. 3 (May 2010): 6–10. http://dx.doi.org/10.1145/1744161.1744163.

4

Malaka, Rainer, Tanja Döring, Thomas Fröhlich, Thomas Muender, Georg Volkmar, Dirk Wenig, and Nima Zargham. "Using Natural User Interfaces for Previsualization." EAI Endorsed Transactions on Creative Technologies 8, no. 26 (March 16, 2021): 169030. http://dx.doi.org/10.4108/eai.16-3-2021.169030.

5

Malizia, Alessio, and Andrea Bellucci. "The artificiality of natural user interfaces." Communications of the ACM 55, no. 3 (March 2012): 36–38. http://dx.doi.org/10.1145/2093548.2093563.

6

Ahmed, Naveed, Hind Kharoub, Selma Manel Medjden, and Areej Alsaafin. "A Natural User Interface for 3D Animation Using Kinect." International Journal of Technology and Human Interaction 16, no. 4 (October 2020): 35–54. http://dx.doi.org/10.4018/ijthi.2020100103.

Abstract:
This article presents a new natural user interface to control and manipulate a 3D animation using the Kinect. The researchers design a number of gestures that allow the user to play, pause, forward, rewind, scale, and rotate the 3D animation. They also implement a traditional cursor-based interface and compare it with the natural user interface. Both interfaces are extensively evaluated via a user study in terms of both usability and user experience. Through quantitative and qualitative evaluation, they show that a gesture-based natural user interface is the preferred method for controlling a 3D animation compared to a cursor-based interface. The natural user interface not only proved to be more efficient but also resulted in a more engaging and enjoyable user experience.
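
Concretely, the control layer such a system needs can be as small as a lookup from recognizer output to animation commands. The sketch below is hypothetical; the gesture names and handlers are invented, not the paper's set:

def play(anim):     anim["playing"] = True
def pause(anim):    anim["playing"] = False
def scale_up(anim): anim["scale"] *= 1.1

GESTURE_COMMANDS = {"swipe_right": play, "palm_open": pause, "pinch_out": scale_up}

anim_state = {"playing": False, "scale": 1.0}
GESTURE_COMMANDS["swipe_right"](anim_state)  # dispatch on each recognized gesture
print(anim_state)                            # {'playing': True, 'scale': 1.0}
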
7

Wojciechowski, A. "Hand’s poses recognition as a mean of communication within natural user interfaces." Bulletin of the Polish Academy of Sciences: Technical Sciences 60, no. 2 (October 1, 2012): 331–36. http://dx.doi.org/10.2478/v10175-012-0044-3.

Abstract:
The natural user interface (NUI) is a successor to the command line interfaces (CLI) and graphical user interfaces (GUI) so well known to computer users. The natural approach is based on extensive tracking of human behavior, where hand tracking and gesture recognition play the main roles in communication. The paper reviews common approaches to hand-feature tracking and proposes an effective contour-based hand-pose recognition method that can be used directly in a hand-based natural user interface. Its possible uses range from interaction with medical systems, through games, to communication support for impaired people.
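
As a rough illustration of the contour-based family of methods the paper discusses (not the authors' own algorithm), the classic OpenCV recipe segments the hand by color, takes the largest contour, and counts fingers from convexity defects:

import cv2

def count_fingers(frame_bgr):
    # Segment skin by color (the HSV range is a rough assumption).
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 30, 60), (20, 150, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)      # assume the hand is the largest blob
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    # Deep convexity defects correspond to the valleys between raised fingers.
    deep = sum(1 for i in range(defects.shape[0]) if defects[i, 0, 3] / 256.0 > 20)
    return min(deep + 1, 5)
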
8

Hetsevich, S. A., Dz A. Dzenisyk, Yu S. Hetsevich, L. I. Kaigorodova, and K. A. Nikalaenka. "Design of Belarusian and Russian natural language interfaces for online help systems." Informatics 18, no. 4 (December 31, 2021): 40–52. http://dx.doi.org/10.37661/1816-0301-2021-18-4-40-52.

Abstract:
Objectives. The main goal of the work is a study of natural language user interfaces and the development of a prototype of such an interface. The prototype is a bilingual Russian and Belarusian question-and-answer dialogue system. The study of natural language interfaces was conducted in terms of the use of natural language for interaction between a user and a computer system. The main problems here are the ambiguity of natural language and the difficulty of designing natural language interfaces that meet user expectations. Methods. The main principles of modelling natural language user interfaces are considered. As an intelligent system, such an interface consists of a database, a knowledge machine, and a user interface. Speech recognition and speech synthesis components make natural language interfaces more convenient from the point of view of usability. Results. A description of the prototype of a natural language interface for a question-and-answer intelligent system is presented. The model of the prototype includes Belarusian and Russian speech-to-text and text-to-speech subsystems and the generation of responses in both natural language and formal text. An additional component is natural Belarusian and Russian voice input. Some of the data required for human voice recognition are stored as knowledge in the knowledge base or created on the basis of existing knowledge. Another important component is Belarusian and Russian voice output, the main requirement for making the natural language interface more user-friendly. Conclusion. The article presents a study of natural language user interfaces, the result of which is the development and description of a prototype natural language interface for an intelligent question-and-answer system.
9

Bailey, Shannon K. T., Daphne E. Whitmer, Bradford L. Schroeder, and Valerie K. Sims. "Development of Gesture-based Commands for Natural User Interfaces." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 61, no. 1 (September 2017): 1466–67. http://dx.doi.org/10.1177/1541931213601851.

Abstract:
Human-computer interfaces are changing to meet the evolving needs of users and overcome limitations of previous generations of computer systems. The current state of computers consists largely of graphical user interfaces (GUI) that incorporate windows, icons, menus, and pointers (WIMPs) as visual representations of computer interactions controlled via user input on a mouse and keyboard. Although this model of interface has dominated human-computer interaction for decades, WIMPs require an extra step between the user's intent and the computer action, imposing limitations on the interaction and introducing cognitive demands (van Dam, 1997). Alternatively, natural user interfaces (NUI) employ input methods such as speech, touch, and gesture commands. With NUIs, users can interact directly with the computer without using an intermediary device (e.g., mouse, keyboard). Using the body as an input device may be more "natural" because it allows the user to apply existing knowledge of how to interact with the world (Roupé, Bosch-Sijtsema, & Johansson, 2014). To utilize the potential of natural interfaces, research must first determine what interactions can be considered natural. For the purpose of this paper, we focus on the naturalness of gesture-based interfaces. The purpose of this study was to determine how people perform natural gesture-based computer actions. To answer this question, we first narrowed down potential gestures that would be considered natural for an action. In a previous study, participants (n = 17) were asked how they would gesture to interact with a computer to complete a series of actions. After narrowing down the potential natural gestures by calculating the most frequently performed gestures for each action, we asked participants (n = 188) to rate the naturalness of the gestures in the current study. Participants each watched 26 videos of gestures (3-5 seconds each) and were asked how natural or arbitrary they interpreted each gesture for the series of computer commands (e.g., move object left, shrink object, select object, etc.). The gestures in these videos included the 17 gestures that were most often performed in the previous study in which participants were asked what gesture they would naturally use to complete the computer actions. Nine gestures were also included that were created arbitrarily to act as a comparison to the natural gestures. By analyzing the ratings on a continuum from "Completely Arbitrary" to "Completely Natural," we found that the natural gestures people produced in the first study were also interpreted as the intended action by this separate sample of participants. All the gestures that were rated as either "Mostly Natural" or "Completely Natural" by participants corresponded to how the object manipulation would be performed physically. For example, the gesture video that depicts a fist closing was rated as "natural" by participants for the action of "selecting an object." All of the gestures that were created arbitrarily were interpreted as "arbitrary" when they did not correspond to the physical action. Determining how people naturally gesture computer commands and how people interpret those gestures is useful because it can inform the development of NUIs and contributes to the literature on what makes gestures seem "natural."
10

Oliveira, Felipe Francisco Ramos de, Marlon Marques Ferreira, and Alexandre Furst. "ESTUDO DA USABILIDADE NAS INTERFACES HOMEM-MÁQUINA." e-xacta 6, no. 2 (November 30, 2013): 93. http://dx.doi.org/10.18674/exacta.v6i2.1079.

Abstract:
This article documents and analyzes the evolution of the main human-machine interfaces, focusing on usability and on the technological differences between them. The research carried out for this paper also compares the performance of CLI (Command Line Interface), GUI (Graphical User Interface), and NUI (Natural User Interface) interfaces through a usability experiment that gives the three interfaces a single goal and allows the collection of data for evaluation.
11

Manaris, Bill Z., and Wayne D. Dominick. "NALIGE: a user interface management system for the development of natural language interfaces." International Journal of Man-Machine Studies 38, no. 6 (June 1993): 891–921. http://dx.doi.org/10.1006/imms.1993.1042.

12

LaViola, Joseph J., and Odest Chadwicke Jenkins. "Natural User Interfaces for Adjustable Autonomy in Robot Control." IEEE Computer Graphics and Applications 35, no. 3 (May 2015): 20–21. http://dx.doi.org/10.1109/mcg.2015.61.

13

Rusák, Zoltán, Ismail Cimen, Imre Horváth, and Aadjan Van der Helm. "Affordances for designing natural user interfaces for 3D modelling." International Journal of Computer Aided Engineering and Technology 8, no. 1/2 (2016): 8. http://dx.doi.org/10.1504/ijcaet.2016.073267.

14

Echeverría, Martha Alicia Magaña, Pedro C. Santana-Mancilla, Hector Fabian Quintero Carrillo, and Enrique Alejandro Fernández Enciso. "Natural User Interfaces to Teach Math on Higher Education." Procedia - Social and Behavioral Sciences 106 (December 2013): 1883–89. http://dx.doi.org/10.1016/j.sbspro.2013.12.214.

15

Martin-SanJose, Juan-Fernando, M. Carmen Juan, Ramón Mollá, and Roberto Vivó. "Advanced displays and natural user interfaces to support learning." Interactive Learning Environments 25, no. 1 (October 26, 2015): 17–34. http://dx.doi.org/10.1080/10494820.2015.1090455.

16

Bhowmik, Achintya K. "Natural and Intuitive User Interfaces with Perceptual Computing Technologies." Information Display 29, no. 4 (July 2013): 6–10. http://dx.doi.org/10.1002/j.2637-496x.2013.tb00626.x.

17

Planas, Elena, Gwendal Daniel, Marco Brambilla, and Jordi Cabot. "Towards a model-driven approach for multiexperience AI-based user interfaces." Software and Systems Modeling 20, no. 4 (August 2021): 997–1009. http://dx.doi.org/10.1007/s10270-021-00904-y.

Abstract:
Software systems start to include other types of interfaces beyond the "traditional" Graphical User Interfaces (GUIs). In particular, Conversational User Interfaces (CUIs) such as chat and voice are becoming more and more popular. These new types of interfaces embed smart natural language processing components to understand user requests and respond to them. To provide an integrated user experience, all the user interfaces in the system should be aware of each other and be able to collaborate; this is what is known as a multiexperience User Interface. Despite their many benefits, multiexperience UIs are challenging to build. So far, CUIs are created as standalone components using a platform-dependent set of libraries and technologies, which raises significant integration, evolution, and maintenance issues. This paper explores the application of model-driven techniques to the development of software applications embedding a multiexperience User Interface. We discuss how raising the abstraction level at which these interfaces are defined enables faster development and better deployment and integration of each interface with the rest of the software system and with the other interfaces with which it may need to collaborate. In particular, we propose a new Domain Specific Language (DSL) for specifying several types of CUIs and show how this DSL can be part of an integrated modeling environment able to describe the interactions between the modeled CUIs and the other models of the system (including the models of the GUI). We use the standard Interaction Flow Modeling Language (IFML) as an example "host" language.
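
To make the model-driven idea concrete, a platform-independent chatbot model can be declared as plain data and only later translated to a specific platform. The sketch below is illustrative; the class and field names are invented, not the paper's DSL:

from dataclasses import dataclass, field
from typing import List

@dataclass
class Intent:
    name: str
    training_phrases: List[str]
    parameters: List[str] = field(default_factory=list)

@dataclass
class ChatbotModel:
    intents: List[Intent]

    def to_platform(self, target: str) -> str:
        # A real generator would emit Dialogflow/Rasa/etc. artifacts here.
        return f"// generated stub for {target}: {[i.name for i in self.intents]}"

bot = ChatbotModel(intents=[Intent("greet", ["hi", "hello"])])
print(bot.to_platform("rasa"))
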
18

Marsh, William E., Jonathan W. Kelly, Veronica J. Dark, and James H. Oliver. "Cognitive Demands of Semi-Natural Virtual Locomotion." Presence: Teleoperators and Virtual Environments 22, no. 3 (August 1, 2013): 216–34. http://dx.doi.org/10.1162/pres_a_00152.

Abstract:
There is currently no fully natural, general-purpose locomotion interface. Instead, interfaces such as gamepads or treadmills are required to explore large virtual environments (VEs). Furthermore, sensory feedback that would normally be used in real-world movement is often restricted in VR due to constraints such as reduced field of view (FOV). Accommodating these limitations with locomotion interfaces afforded by most virtual reality (VR) systems may induce cognitive demands on the user that are unrelated to the primary task to be performed in the VE. Users of VR systems often have many competing task demands, and additional cognitive demands during locomotion must compete for finite resources. Two studies were previously reported investigating the working memory demands imposed by semi-natural locomotion interfaces (Study 1) and reduced sensory feedback (Study 2). This paper expands on the previously reported results and adds discussion linking the two studies. The results indicated that locomotion with a less natural interface increases spatial working memory demands, and that locomotion with a lower FOV increases general attentional demands. These findings are discussed in terms of their practical implications for selection of locomotion interfaces when designing VEs.
19

Guerino, Guilherme Corredato, and Natasha Malveira Costa Valentim. "Usability and user experience evaluation of natural user interfaces: a systematic mapping study." IET Software 14, no. 5 (October 1, 2020): 451–67. http://dx.doi.org/10.1049/iet-sen.2020.0051.

20

Majdanik, Dawid, Adrian Madoń, and Tomasz Szymczyk. "Natural interfaces in VR - comparative analysis." Journal of Computer Sciences Institute 18 (March 30, 2021): 1–6. http://dx.doi.org/10.35784/jcsi.2385.

Abstract:
The article presents the results of a comparative analysis of contemporary virtual reality devices. The analysis covers both the technical parameters of the goggles and a comparison of their natural interfaces. The following devices were tested: HTC Vive, Oculus Rift, PlayStation VR, and Samsung Gear VR. The most ergonomic and user-friendly interface turned out to be the Oculus Rift, while the Samsung Gear VR performed worst among the tested devices.
21

He, Zecheng, Srinivas Sunkara, Xiaoxue Zang, Ying Xu, Lijuan Liu, Nevan Wichers, Gabriel Schubiner, Ruby Lee, and Jindong Chen. "ActionBert: Leveraging User Actions for Semantic Understanding of User Interfaces." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 7 (May 18, 2021): 5931–38. http://dx.doi.org/10.1609/aaai.v35i7.16741.

Abstract:
As mobile devices are becoming ubiquitous, regularly interacting with a variety of user interfaces (UIs) is a common aspect of daily life for many people. To improve the accessibility of these devices and to enable their usage in a variety of settings, building models that can assist users and accomplish tasks through the UI is vitally important. However, there are several challenges to achieve this. First, UI components of similar appearance can have different functionalities, making understanding their function more important than just analyzing their appearance. Second, domain-specific features like Document Object Model (DOM) in web pages and View Hierarchy (VH) in mobile applications provide important signals about the semantics of UI elements, but these features are not in a natural language format. Third, owing to a large diversity in UIs and absence of standard DOM or VH representations, building a UI understanding model with high coverage requires large amounts of training data. Inspired by the success of pre-training based approaches in NLP for tackling a variety of problems in a data-efficient way, we introduce a new pre-trained UI representation model called ActionBert. Our methodology is designed to leverage visual, linguistic and domain-specific features in user interaction traces to pre-train generic feature representations of UIs and their components. Our key intuition is that user actions, e.g., a sequence of clicks on different UI components, reveals important information about their functionality. We evaluate the proposed model on a wide variety of downstream tasks, ranging from icon classification to UI component retrieval based on its natural language description. Experiments show that the proposed ActionBert model outperforms multi-modal baselines across all downstream tasks by up to 15.5%.
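
The key intuition, that a UI element's meaning comes from its text, its layout, and how users act on it, can be sketched as a toy model. This is an assumption-laden illustration, not the authors' architecture: it fuses label tokens and bounding-box features, then scores on-screen elements against the one actually tapped.

import torch
import torch.nn as nn

class UiElementEncoder(nn.Module):
    def __init__(self, vocab=5000, dim=64):
        super().__init__()
        self.text = nn.EmbeddingBag(vocab, dim)  # bag of token ids from the element's label
        self.pos = nn.Linear(4, dim)             # normalized bounding box (x, y, w, h)
        self.mix = nn.Linear(2 * dim, dim)

    def forward(self, token_ids, offsets, boxes):
        h = torch.cat([self.text(token_ids, offsets), self.pos(boxes)], dim=-1)
        return torch.relu(self.mix(h))

# Toy objective: score the two on-screen elements and train against the
# index of the element the user actually tapped (here, element 1).
enc = UiElementEncoder()
emb = enc(torch.tensor([1, 2, 3]), torch.tensor([0, 1]), torch.rand(2, 4))
scores = emb.sum(dim=-1)
loss = nn.functional.cross_entropy(scores.unsqueeze(0), torch.tensor([1]))
loss.backward()
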
22

Macaranas, A., A. N. Antle, and B. E. Riecke. "What is Intuitive Interaction? Balancing Users' Performance and Satisfaction with Natural User Interfaces." Interacting with Computers 27, no. 3 (February 12, 2015): 357–70. http://dx.doi.org/10.1093/iwc/iwv003.

23

Marsh, William E., Jonathan W. Kelly, Julie Dickerson, and James H. Oliver. "Fuzzy Navigation Engine: Mitigating the Cognitive Demands of Semi-Natural Locomotion." Presence: Teleoperators and Virtual Environments 23, no. 3 (October 1, 2014): 300–319. http://dx.doi.org/10.1162/pres_a_00195.

Abstract:
Many interfaces exist for locomotion in virtual reality, although they are rarely considered fully natural. Past research has found that using such interfaces places cognitive demands on the user, with unnatural actions and concurrent tasks competing for finite cognitive resources. Notably, using semi-natural interfaces leads to poor performance on concurrent tasks requiring spatial working memory. This paper presents an adaptive system designed to track a user's concurrent cognitive task load and adjust interface parameters accordingly, varying the extent to which movement is fully natural. A fuzzy inference system is described and the results of an initial validation study are presented. Users of this adaptive interface demonstrated better performance than users of a baseline interface on several movement metrics, indicating that the adaptive interface helped users manage the demands of concurrent spatial tasks in a virtual environment. However, participants experienced some unexpected difficulties when faced with a concurrent verbal task.
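
A minimal sketch of such a fuzzy controller, with invented membership functions and rule weights, might map estimated task load to a "naturalness" gain:

def tri(x, a, b, c):
    """Triangular membership function."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def naturalness_gain(task_load):  # task_load in [0, 1]
    low = tri(task_load, -0.5, 0.0, 0.5)
    med = tri(task_load, 0.0, 0.5, 1.0)
    high = tri(task_load, 0.5, 1.0, 1.5)
    # Rule outputs: low load -> keep movement fully natural (gain 1.0),
    # high load -> strongly assisted, less natural movement (gain 0.4).
    num = low * 1.0 + med * 0.7 + high * 0.4
    den = low + med + high
    return num / den if den else 1.0

print(naturalness_gain(0.8))  # 0.52: mostly assisted under high load
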
24

Shavitt, Carmel, Anastasia Kuzminykh, Itay Ridel, and Jessica R. Cauchard. "Naturally Together: A Systematic Approach for Multi-User Interaction With Natural Interfaces." Proceedings of the ACM on Human-Computer Interaction 5, CSCW2 (October 13, 2021): 1–31. http://dx.doi.org/10.1145/3476090.

Abstract:
New technology is moving towards intuitive and natural interaction techniques that are increasingly embedded in human space (e.g., home and office environments) and aims to support multiple users, yet current interfaces do not fully deliver on this. Imagine that you have a multi-user device: should it act differently in different situations, with different people, and in different group settings? Current multi-user interfaces address each user as an individual who works independently from others, and there is a lack of understanding of the mechanisms that shape shared usage of these products. We therefore link environmental (external) and user-centered (internal) factors to the way users interact with multi-user devices. We analyzed 124 papers that involve multi-user interfaces and created a classification model of 8 factors. Both the model and the factors were validated by a large-scale online study. Our model defines the factors affecting multi-user usage of a single device and supports deciding which are most important in different situations. This paper is the first to identify these factors and to create a set of practical guidelines for designing multi-user interfaces.
25

Meleiro, Pedro, Rui Rodrigues, João Jacob, and Tiago Marques. "Natural User Interfaces in the Motor Development of Disabled Children." Procedia Technology 13 (2014): 66–75. http://dx.doi.org/10.1016/j.protcy.2014.02.010.

26

Goyzueta, Denilson V., Joseph Guevara M., Andrés Montoya A., Erasmo Sulla E., Yuri Lester S., Pari L., and Elvis Supo C. "Analysis of a User Interface Based on Multimodal Interaction to Control a Robotic Arm for EOD Applications." Electronics 11, no. 11 (May 25, 2022): 1690. http://dx.doi.org/10.3390/electronics11111690.

Abstract:
A global human–robot interface that meets the needs of Technical Explosive Ordnance Disposal Specialists (TEDAX) for the manipulation of a robotic arm is of utmost importance to make the task of handling explosives safer and more intuitive, while also providing high usability and efficiency. This paper evaluates the performance of a multimodal system for a robotic arm based on a Natural User Interface (NUI) and a Graphical User Interface (GUI). The interfaces are compared to determine the best configuration for controlling the robotic arm in Explosive Ordnance Disposal (EOD) applications and to improve the user experience of TEDAX agents. Tests were conducted with the support of police agents of the Explosive Ordnance Disposal Unit-Arequipa (UDEX-AQP), who evaluated the developed interfaces to find the most intuitive system generating the least stress for the operator; our proposed multimodal interface showed better results than traditional interfaces. The evaluation of the laboratory sessions was based on measuring the workload and usability of each interface.
27

Pollard, Kimberly A., Stephanie M. Lukin, Matthew Marge, Ashley Foots, and Susan G. Hill. "How We Talk with Robots: Eliciting Minimally-Constrained Speech to Build Natural Language Interfaces and Capabilities." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 62, no. 1 (September 2018): 160–64. http://dx.doi.org/10.1177/1541931218621037.

Abstract:
Industry, military, and academia are showing increasing interest in collaborative human-robot teaming in a variety of task contexts. Designing effective user interfaces for human-robot interaction is an ongoing challenge, and a variety of single and multiple-modality interfaces have been explored. Our work is to develop a bi-directional natural language interface for remote human-robot collaboration in physically situated tasks. When combined with a visual interface and audio cueing, we intend for the natural language interface to provide a naturalistic user experience that requires little training. Building the language portion of this interface requires first understanding how potential users would speak to the robot. In this paper, we describe our elicitation of minimally-constrained robot-directed language, observations about the users’ language behavior, and future directions for constructing an automated robotic system that can accommodate these language needs.
28

MANARIS, BILL Z. "AN ENGINEERING ENVIRONMENT FOR NATURAL LANGUAGE INTERFACES TO INTERACTIVE COMPUTER SYSTEMS." International Journal on Artificial Intelligence Tools 03, no. 04 (December 1994): 557–79. http://dx.doi.org/10.1142/s0218213094000303.

Abstract:
This paper discusses the development of natural language interfaces to interactive computer systems using the NALIGE user interface management system. The task of engineering such interfaces is reduced to producing a set of well-formed specifications which describe lexical, syntactic, semantic, and pragmatic aspects of the selected application domain. These specifications are converted by NALIGE to an autonomous natural language interface that exhibits the prescribed linguistic and functional behavior. Development of several applications is presented to demonstrate how NALIGE and the associated development methodology may facilitate the design and implementation of practical natural language interfaces. This includes a natural language interface to Unix and its subsequent porting to MS-DOS, VAX/VMS, and VM/CMS; a natural language interface for Internet navigation and resource location; a natural language interface for text pattern matching; a natural language interface for text editing; and a natural language interface for electronic mail management. Additionally, design issues and considerations are identified/addressed, such as reuse and portability, content coupling, morphological processing, scalability, and habitability.
29

Berdasco, López, Diaz, Quesada, and Guerrero. "User Experience Comparison of Intelligent Personal Assistants: Alexa, Google Assistant, Siri and Cortana." Proceedings 31, no. 1 (November 20, 2019): 51. http://dx.doi.org/10.3390/proceedings2019031051.

Abstract:
Natural user interfaces are becoming popular. One of the most common natural user interfaces nowadays are voice activated interfaces, particularly smart personal assistants such as Google Assistant, Alexa, Cortana, and Siri. This paper presents the results of an evaluation of these four smart personal assistants in two dimensions: the correctness of their answers and how natural the responses feel to users. Ninety-two participants conducted the evaluation. Results show that Alexa and Google Assistant are significantly better than Siri and Cortana. However, there is no statistically significant difference between Alexa and Google Assistant.
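
The kind of analysis behind "significantly better" can be illustrated with a nonparametric test over per-question correctness. The data below are toy values, and the paper's exact test may differ:

from scipy.stats import mannwhitneyu

alexa   = [1, 1, 0, 1, 1, 1, 0, 1]   # toy per-question correctness, not the study's data
cortana = [0, 1, 0, 0, 1, 0, 0, 1]
stat, p = mannwhitneyu(alexa, cortana, alternative="two-sided")
print(f"U={stat}, p={p:.3f}")        # small p would suggest a real difference
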
30

Huang, Jinmiao, Prakhar Jaiswal, and Rahul Rai. "Gesture-based system for next generation natural and intuitive interfaces." Artificial Intelligence for Engineering Design, Analysis and Manufacturing 33, no. 1 (May 30, 2018): 54–68. http://dx.doi.org/10.1017/s0890060418000045.

Abstract:
We present a novel and trainable gesture-based system for next-generation intelligent interfaces. The system requires a non-contact depth-sensing device such as an RGB-D (color and depth) camera for user input. The camera records the user's static hand pose and the dynamic motion trajectory of the palm center. Both the static pose and the dynamic trajectory are used independently to provide commands to the interface. The sketches/symbols formed by the palm-center trajectory are recognized by a Support Vector Machine classifier, based on a set of geometrical and statistical features. A static hand-pose recognizer is incorporated to expand the functionalities of the system and is used in conjunction with the sketch classification algorithm to develop a robust and effective system for natural and intuitive interaction. To evaluate the performance of the system, user studies were performed with multiple participants. The efficacy of the presented system is demonstrated using multiple interfaces developed for different tasks, including computer-aided design modeling.
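
An illustrative reconstruction of this kind of pipeline (not the authors' code) extracts a few geometric features from a palm trajectory and trains an SVM on them:

import numpy as np
from sklearn.svm import SVC

def features(traj):
    # traj: (N, 2) palm-center points -> a few geometric/statistical features
    steps = np.diff(traj, axis=0)
    seg = np.linalg.norm(steps, axis=1)
    return [seg.sum(),                            # total path length
            np.linalg.norm(traj[-1] - traj[0]),   # start-to-end displacement
            np.ptp(traj[:, 0]),                   # bounding-box width
            np.ptp(traj[:, 1])]                   # bounding-box height

rng = np.random.default_rng(0)
X = [features(rng.random((50, 2))) for _ in range(20)]  # stand-in trajectories
y = [0, 1] * 10                                         # two symbol classes
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([features(rng.random((50, 2)))]))
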
31

García, Alberto, J. Ernesto Solanes, Adolfo Muñoz, Luis Gracia, and Josep Tornero. "Augmented Reality-Based Interface for Bimanual Robot Teleoperation." Applied Sciences 12, no. 9 (April 26, 2022): 4379. http://dx.doi.org/10.3390/app12094379.

Abstract:
Teleoperation of bimanual robots is being used to carry out complex tasks such as surgeries in medicine. Despite the technological advances, current interfaces are not natural to the users, who spend long periods of time in learning how to use these interfaces. In order to mitigate this issue, this work proposes a novel augmented reality-based interface for teleoperating bimanual robots. The proposed interface is more natural to the user and reduces the interface learning process. A full description of the proposed interface is detailed in the paper, whereas its effectiveness is shown experimentally using two industrial robot manipulators. Moreover, the drawbacks and limitations of the classic teleoperation interface using joysticks are analyzed in order to highlight the benefits of the proposed augmented reality-based interface approach.
32

Lieberman, Henry. "User Interface Goals, AI Opportunities." AI Magazine 30, no. 4 (September 18, 2009): 16. http://dx.doi.org/10.1609/aimag.v30i4.2266.

Abstract:
This is an opinion piece about the relationship between the fields of human-computer interaction (HCI) and artificial intelligence (AI). The ultimate goal of both fields is to make user interfaces more effective and easier for people to use. But historically, they have disagreed about whether "intelligence" or "direct manipulation" is the better route to achieving this. There is an unjustified perception in HCI that AI is unreliable, and an unjustified perception in AI that interfaces are merely cosmetic. This disagreement is counterproductive. This article argues that AI's goal of intelligent interfaces would benefit enormously from the user-centered design and testing principles of HCI, and that HCI's stated goals of meeting the needs of users and interacting in natural ways would be best served by the application of AI. Peace.
33

Colli Alfaro, Jose Guillermo, and Ana Luisa Trejos. "User-Independent Hand Gesture Recognition Classification Models Using Sensor Fusion." Sensors 22, no. 4 (February 9, 2022): 1321. http://dx.doi.org/10.3390/s22041321.

Abstract:
Recently, it has been proven that targeting motor impairments as early as possible while using wearable mechatronic devices for assisted therapy can improve rehabilitation outcomes. However, despite the advanced progress on control methods for wearable mechatronic devices, the need for a more natural interface that allows for better control remains. To address this issue, electromyography (EMG)-based gesture recognition systems have been studied as a potential solution for human–machine interface applications. Recent studies have focused on developing user-independent gesture recognition interfaces to reduce calibration times for new users. Unfortunately, given the stochastic nature of EMG signals, the performance of these interfaces is negatively impacted. To address this issue, this work presents a user-independent gesture classification method based on a sensor fusion technique that combines EMG data and inertial measurement unit (IMU) data. The Myo Armband was used to measure muscle activity and motion data from healthy subjects. Participants were asked to perform seven types of gestures in four different arm positions while using the Myo on their dominant limb. Data obtained from 22 participants were used to classify the gestures using three different classification methods. Overall, average classification accuracies in the range of 67.5–84.6% were obtained, with the Adaptive Least-Squares Support Vector Machine model obtaining accuracies as high as 92.9%. These results suggest that by using the proposed sensor fusion approach, it is possible to achieve a more natural interface that allows better control of wearable mechatronic devices during robot assisted therapies.
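
The fusion step itself is simple to sketch: concatenate per-window EMG and IMU features and train a single classifier on the fused vectors. The feature choices and classifier below are stand-ins, not the paper's exact setup:

import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
emg_feats = rng.normal(size=(200, 8))   # e.g., mean absolute value per EMG channel
imu_feats = rng.normal(size=(200, 6))   # e.g., mean accel/gyro per axis
X = np.hstack([emg_feats, imu_feats])   # the feature-level fusion step
y = rng.integers(0, 7, size=200)        # seven gesture classes

clf = make_pipeline(StandardScaler(), SVC()).fit(X, y)
print(clf.score(X, y))
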
34

Ryumin, Dmitry, Ildar Kagirov, Alexandr Axyonov, Nikita Pavlyuk, Anton Saveliev, Irina Kipyatkova, Milos Zelezny, Iosif Mporas, and Alexey Karpov. "A Multimodal User Interface for an Assistive Robotic Shopping Cart." Electronics 9, no. 12 (December 8, 2020): 2093. http://dx.doi.org/10.3390/electronics9122093.

Abstract:
This paper presents the research and development of a prototype of the assistive mobile information robot (AMIR). The main features of the presented prototype are voice- and gesture-based interfaces, with Russian speech and sign language recognition and synthesis techniques, and a high degree of robot autonomy. The AMIR prototype is intended to be used as a robotic cart for shopping in grocery stores and supermarkets. Among the main topics covered in this paper are the presentation of the interface (three modalities), the single-handed gesture recognition system (based on a collected database of Russian sign language elements), and the technical description of the robotic platform (architecture, navigation algorithm). The use of multimodal interfaces, namely the speech and gesture modalities, makes human-robot interaction natural and intuitive, while sign language recognition allows hearing-impaired people to use this robotic cart. The AMIR prototype has promising perspectives for real usage in supermarkets, both due to its assistive capabilities and its multimodal user interface.
35

Coury, Bruce G., John Sadowsky, Paul R. Schuster, Michael Kurnow, Marcus J. Huber, and Edmund H. Durfee. "Reducing the Interaction Burden of Complex Systems." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 41, no. 1 (October 1997): 335–39. http://dx.doi.org/10.1177/107118139704100175.

Abstract:
Reducing the burden of interacting with complex systems has been a long-standing goal of user interface design. In our approach to this problem, we have been developing user interfaces that allow users to interact with complex systems in a natural way and in high-level, task-related terms. These capabilities help users concentrate on making important decisions without the distractions of manipulating systems and user interfaces. To attain this goal, our approach uses a unique combination of multi-modal interaction and interaction planning. In this paper, we motivate the basis for our approach, describe the user interface technologies we have developed, and briefly discuss the relevant research and development issues.
36

Pietroni, Eva, Alfonsina Pagano, Luigi Biocca, and Giacomo Frassineti. "Accessibility, Natural User Interfaces and Interactions in Museums: The IntARSI Project." Heritage 4, no. 2 (April 4, 2021): 567–84. http://dx.doi.org/10.3390/heritage4020034.

Abstract:
In a museum context, people have specific needs in terms of physical, cognitive, and social accessibility that cannot be ignored. Therefore, we need to find a way to make art and culture accessible to them through the aid of Universal Design principles, advanced technologies, and suitable interfaces and contents. Integration of such factors is a priority of the Museums General Direction of the Italian Ministry of Cultural Heritage, within the wider strategy of museum exploitation. In accordance with this issue, the IntARSI project, publicly funded, consists of a pre-evaluation and a report of technical specifications for a new concept of museology applied to the new Museum of Civilization in Rome (MuCIV). It relates to planning of multimedia, virtual, and mixed reality applications based on the concept of “augmented” and multisensory experience, innovative tangible user interfaces, and storytelling techniques. An inclusive approach is applied, taking into account the needs and attitudes of a wide audience with different ages, cultural interests, skills, and expectations, as well as cognitive and physical abilities.
37

MARUYAMA, Tsubasa, and Toyoaki TOMURA. "2P1-B19 An application of Augmented Reality to Natural User Interfaces." Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) 2010 (2010): _2P1—B19_1—_2P1—B19_4. http://dx.doi.org/10.1299/jsmermd.2010._2p1-b19_1.

38

Bhowmik, Achintya K. "39.2: Invited Paper: Natural and Intuitive User Interfaces: Technologies and Applications." SID Symposium Digest of Technical Papers 44, no. 1 (June 2013): 544–46. http://dx.doi.org/10.1002/j.2168-0159.2013.tb06266.x.

39

Steinicke, Frank. "Natural Locomotion Interfaces – With a Little Bit of Magic!" Journal on Interactive Systems 2, no. 2 (November 16, 2011): 1. http://dx.doi.org/10.5753/jis.2011.588.

Abstract:
The mission of the Immersive Media Group (IMG) is to develop virtual locomotion user interfaces which allow humans to experience arbitrary 3D environments by means of the natural walking metaphor. Traveling through immersive virtual environments (IVEs) by means of real walking is an important activity to increase the naturalness of virtual reality (VR)-based interaction. However, the size of the virtual world often differs from the size of the tracked lab space, so that a straightforward implementation of omni-directional and unlimited walking is not possible. Redirected walking is one concept to address this issue by inconspicuously guiding the user on a physical path that may differ from the path the user perceives in the virtual world. For example, intentionally rotating the virtual camera to one side causes the user to unknowingly compensate by walking on a circular arc in the opposite direction. In the scope of the LOCUI project, which is funded by the German Research Foundation, we analyze how gains of locomotor speed, turns and curvatures can gradually alter the physical trajectory with respect to the path perceived in the virtual world without the user noticing any discrepancy. Thus, users can be guided in order to avoid collisions with physical obstacles (e.g., lab walls), or they can be guided to arbitrary locations in the physical space. For example, if the user approaches a virtual object, she can be guided to a real proxy prop that is registered to and aligned with its virtual counterpart. Hence, the user can interact with a virtual object by touching the corresponding real-world proxy prop, which provides haptic feedback. Building on the results of psychophysical experiments, we plan to extend such user interfaces so that it becomes possible to intuitively interact with any virtual object by touching a registered real-world prop.
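
The geometry behind curvature-based redirection is a one-liner: injecting a small camera rotation per meter walked bends the user's physical path onto a circle of radius 1/gain. The gain below is an assumed, roughly imperceptible value:

import math

gain_deg_per_m = 2.5                          # injected rotation per meter walked (assumption)
radius = 1.0 / math.radians(gain_deg_per_m)   # physical curvature radius in meters
print(f"walking 'straight' maps to a physical circle of radius {radius:.1f} m")
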
40

Alencar, Andreza Leite de, and Ana Carolina Salgado. "Improving User Interaction on Ontology-based Peer Data Management Systems." iSys - Brazilian Journal of Information Systems 7, no. 2 (November 15, 2014): 67–85. http://dx.doi.org/10.5753/isys.2014.252.

Abstract:
The issue of user interaction for query formulation and execution has been investigated for distributed and dynamic environments such as Peer Data Management Systems (PDMS). Many of these PDMS are semantics-based and composed of data peers that export schemas represented by ontologies. In the literature we can find some proposed PDMS interfaces, but none of them addresses, in a general way, the needs of a PDMS for user interaction. In this work we propose a visual user query interface for ontology-based PDMS. It provides simple and straightforward interaction with this type of system, aiming not only to provide a natural visual query interface but also to support precise and direct manipulation of the data schemas for query generation.
41

Hammitzsch, Martin. "Framework for Graphical User Interfaces of Geospatial Early Warning Systems." International Journal of Open Source Software and Processes 3, no. 4 (October 2011): 49–63. http://dx.doi.org/10.4018/jossp.2011100103.

Abstract:
An important component of Early Warning Systems (EWS) for man-made and natural hazards is the command and control unit’s Graphical User Interface (GUI). All relevant information of an EWS is concentrated in this GUI and offered to human operators. However, when designing the GUI, not only the user experience and the GUI’s screens are relevant, but also the frameworks and technologies that the GUI is built on and the implementation of the GUI itself are of great importance. Implementations differ based on their applications in different domains but the design and approaches to implement the GUIs of different EWS often show analogies. The design and development of such GUIs are performed repeatedly on some parts of the system for each EWS. Thus, the generic GUI framework of a geospatial EWS for tsunamis is introduced to enable possible synergistic effects on the development of other new related technology. The results presented here could be adopted and reused in other EWS for man-made and natural hazards.
42

Chen, Hung-Jen. "P-4 Investigation of the Optimal Interactive Methods for Natural User Interfaces." Japanese journal of ergonomics 53, Supplement2 (2017): S704—S705. http://dx.doi.org/10.5100/jje.53.s704.

43

Gruber, David R. "From Typing to Touching; A Review of Writing with Natural User Interfaces." Writing & Pedagogy 6, no. 1 (June 10, 2014): 127. http://dx.doi.org/10.1558/wap.v6i1.127.

44

Liu, Ying-Hsang, Paul Thomas, Marijana Bacic, Tom Gedeon, and Xindi Li. "Natural Search User Interfaces for Complex Biomedical Search: An Eye Tracking Study." Journal of the Australian Library and Information Association 66, no. 4 (August 10, 2017): 364–81. http://dx.doi.org/10.1080/24750158.2017.1357915.

45

Erra, Ugo, Delfina Malandrino, and Luca Pepe. "A methodological evaluation of natural user interfaces for immersive 3D Graph explorations." Journal of Visual Languages & Computing 44 (February 2018): 13–27. http://dx.doi.org/10.1016/j.jvlc.2017.11.002.

46

Bowman, Nicholas David, Daniel Pietschmann, and Benny Liebold. "The golden (hands) rule: Exploring user experiences with gamepad and natural-user interfaces in popular video games." Journal of Gaming & Virtual Worlds 9, no. 1 (March 1, 2017): 71–85. http://dx.doi.org/10.1386/jgvw.9.1.71_1.

47

Chandarana, Meghan, Erica L. Meszaros, Anna Trujillo, and B. Danette Allen. "Natural Language Based Multimodal Interface for UAV Mission Planning." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 61, no. 1 (September 2017): 68–72. http://dx.doi.org/10.1177/1541931213601483.

Abstract:
As the number of viable applications for unmanned aerial vehicle (UAV) systems increases at an exponential rate, interfaces that reduce the reliance on highly skilled engineers and pilots must be developed. Recent work aims to make use of common human communication modalities such as speech and gesture. This paper explores a multimodal natural language interface that uses a combination of speech and gesture input modalities to build complex UAV flight paths by defining trajectory segment primitives. Gesture inputs are used to define the general shape of a segment while speech inputs provide additional geometric information needed to fully characterize a trajectory segment. A user study is conducted in order to evaluate the efficacy of the multimodal interface.
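
The division of labor described above can be sketched as data: each segment gets its shape from a gesture and its geometry from speech. The structure below is hypothetical, not the paper's representation:

segments = [
    ("line",   {"length_m": 30}),  # gesture gave the shape; speech gave "thirty meters"
    ("circle", {"radius_m": 10}),  # gesture: circle; speech: "ten meter radius"
]
for shape, params in segments:
    print(shape, params)           # a planner would convert each segment into waypoints
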
48

Truck, Isis, and Mohammed-Amine Abchir. "Natural Language Processing and Fuzzy Tools for Business Processes in a Geolocation Context." Advances in Artificial Intelligence 2017 (May 24, 2017): 1–11. http://dx.doi.org/10.1155/2017/9462457.

Abstract:
In the geolocation field where high-level programs and low-level devices coexist, it is often difficult to find a friendly user interface to configure all the parameters. The challenge addressed in this paper is to propose intuitive and simple, thus natural language interfaces to interact with low-level devices. Such interfaces contain natural language processing (NLP) and fuzzy representations of words that facilitate the elicitation of business-level objectives in our context. A complete methodology is proposed, from the lexicon construction to a dialogue software agent including a fuzzy linguistic representation, based on synonymy.
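
One ingredient, the fuzzy semantics of spatial words, is easy to illustrate: "near" and "far" become membership functions over distance, with invented breakpoints:

def trapezoid(x, a, b, c, d):
    if b <= x <= c:
        return 1.0
    if x <= a or x >= d:
        return 0.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def ramp_up(x, a, b):  # 0 below a, 1 above b, linear in between
    return min(1.0, max(0.0, (x - a) / (b - a)))

near = lambda m: trapezoid(m, -1, 0, 50, 200)  # fully "near" up to 50 m, fading by 200 m
far = lambda m: ramp_up(m, 150, 400)           # "far" ramps in from 150 m

print(near(120), far(120))  # 120 m is partly "near" and not yet "far"
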
49

Abdullin, A., V. Lavlinskiy, I. Zemcov, Ol'ga Ivanova, and Sari Abbas. "MODELS OF INTELLIGENT INTERFACES OF INFORMATION SYSTEMS SEARCH." Modeling of systems and processes 12, no. 2 (October 24, 2019): 4–9. http://dx.doi.org/10.12737/article_5db1e3e5e16f07.14402511.

Abstract:
Currently, there are many models of intelligent interfaces for information retrieval systems that support search using modalities natural to the user, such as speech, face, and gestures, together with their recognition. In this article, using the example of the intelligent interface of an unmanned aerial vehicle, two models of user-modality recognition are studied: hidden Markov models and a Bayesian network. These models must recognize the hand gestures of the operator of the unmanned aerial vehicle.
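
The HMM half of that comparison can be sketched with the forward algorithm: keep one model per gesture and pick the one assigning the observation sequence the highest likelihood. All probabilities below are toy values:

import numpy as np

A = np.array([[0.7, 0.3],   # state-transition probabilities
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],   # P(observation | state), two discrete symbols
              [0.2, 0.8]])
pi = np.array([0.5, 0.5])   # initial state distribution

def forward_likelihood(obs):
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

# A recognizer would keep one HMM per gesture and take the argmax likelihood.
print(forward_likelihood([0, 0, 1, 1]))
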
50

Basori, Ahmad Hoirul, and Hani Moaiteq Abdullah AlJahdali. "TOU-AR:Touchable Interface for Interactive Interaction in Augmented Reality Environment." Computer Engineering and Applications Journal 6, no. 2 (July 17, 2017): 45–50. http://dx.doi.org/10.18495/comengapp.v6i1.194.

Abstract:
A touchable interface is one of the future interfaces that can be implemented on any medium, such as water, a table, or even sand. The term multi-touch refers to the ability to distinguish between two or more fingers touching a touch-sensing surface, such as a touch screen or a touch pad. The interface is produced by tracking the interaction area with a depth camera and projecting the interface onto the medium. Such interfaces are widely used in augmented reality environments. The user projects the interface onto a real-world medium, and the user's hand is tracked as it touches the projected area. Users can interact more freely and as naturally as they do in daily life.
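
The core touch test implied by this setup is a per-pixel depth comparison against a calibrated surface; the thresholds below are assumptions:

import numpy as np

surface = np.full((480, 640), 1500.0)   # calibrated surface depth map (mm)
frame = surface.copy()
frame[200:210, 300:310] = 1496.0        # fingertip hovering 4 mm above the surface

touch_mask = (surface - frame) > 0      # closer to the camera than the surface...
touch_mask &= (surface - frame) < 8     # ...but within 8 mm: treat as a touch
print(touch_mask.sum(), "touching pixels")
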