Journal articles on the topic 'Gesture-Based User Interface'

Consult the top 50 journal articles for your research on the topic 'Gesture-Based User Interface.'

1

Elmagrouni, Issam, Abdelaziz Ettaoufik, Siham Aouad, and Abderrahim Maizate. "Approach for Improving User Interface Based on Gesture Recognition." E3S Web of Conferences 297 (2021): 01030. http://dx.doi.org/10.1051/e3sconf/202129701030.

Abstract:
Vision-based gesture recognition technology acquires gesture information in a non-contact manner. There are two types of gesture recognition: isolated and continuous. The former aims to classify videos or other types of gesture sequences (e.g., RGB-D or skeleton data) that contain only one isolated gesture instance per sequence. In this study, we review existing methods of visual gesture recognition, grouped into the following families: static, dynamic, based on specific devices (Kinect, Leap Motion, etc.), work focusing on the application of gesture recognition to robots, and work dealing with gesture recognition at the browser level. Following that, we take a look at the most common JavaScript-based deep learning frameworks. We then present the idea of defining a process for improving user interface control based on gesture recognition, in order to streamline the implementation of this mechanism.
2

Kirmizibayrak, Can, Nadezhda Radeva, Mike Wakid, John Philbeck, John Sibert, and James Hahn. "Evaluation of Gesture Based Interfaces for Medical Volume Visualization Tasks." International Journal of Virtual Reality 11, no. 2 (2012): 1–13. http://dx.doi.org/10.20870/ijvr.2012.11.2.2839.

Abstract:
Interactive systems are increasingly used in medical applications with the widespread availability of various imaging modalities. Gesture-based interfaces can be beneficial for interacting with these kinds of systems in a variety of settings, as they can be easier to learn and can eliminate several shortcomings of traditional tactile systems, especially for surgical applications. We conducted two user studies that explore different gesture-based interfaces for interaction with volume visualizations. The first experiment focused on rotation tasks, where the performance of the gesture-based interface (using Microsoft Kinect) was compared to using the mouse. The second experiment studied localization of internal structures, comparing slice-based visualizations via gestures and the mouse, in addition to a 3D Magic Lens visualization. The results of the user studies showed that the gesture-based interface outperformed the traditional mouse in both time and accuracy in the orientation matching task. The traditional mouse was the superior interface for the second experiment in terms of accuracy. However, the gesture-based Magic Lens interface was found to have the fastest target localization time. We discuss these findings and their further implications for the use of gesture-based interfaces in medical volume visualization, and discuss the possible underlying psychological mechanisms that explain why these methods can outperform traditional interaction methods.
3

Ryumin, Dmitry, Ildar Kagirov, Alexandr Axyonov, et al. "A Multimodal User Interface for an Assistive Robotic Shopping Cart." Electronics 9, no. 12 (2020): 2093. http://dx.doi.org/10.3390/electronics9122093.

Abstract:
This paper presents the research and development of the prototype of the assistive mobile information robot (AMIR). The main features of the presented prototype are voice and gesture-based interfaces with Russian speech and sign language recognition and synthesis techniques and a high degree of robot autonomy. The AMIR prototype is intended to be used as a robotic cart for shopping in grocery stores and/or supermarkets. Among the main topics covered in this paper are the presentation of the interface (three modalities), the single-handed gesture recognition system (based on a collected database of Russian sign language elements), as well as the technical description of the robotic platform (architecture, navigation algorithm). The use of multimodal interfaces, namely the speech and gesture modalities, makes human-robot interaction natural and intuitive, while sign language recognition allows hearing-impaired people to use this robotic cart. The AMIR prototype has promising prospects for real use in supermarkets, both due to its assistive capabilities and its multimodal user interface.
4

Yoon, Hoon, Hojeong Im, Seonha Chung, and Taeha Yi. "Exploring Preferential Ring-Based Gesture Interaction Across 2D Screen and Spatial Interface Environments." Applied Sciences 15, no. 12 (2025): 6879. https://doi.org/10.3390/app15126879.

Abstract:
As gesture-based interactions expand across traditional 2D screens and immersive XR platforms, designing intuitive input modalities tailored to specific contexts becomes increasingly essential. This study explores how users cognitively and experientially engage with gesture-based interactions in two distinct environments: a lean-back 2D television interface and an immersive XR spatial environment. A within-subject experimental design was employed, utilizing a gesture-recognizable smart ring to perform tasks using three gesture modalities: (a) Surface-Touch gesture, (b) mid-air gesture, and (c) micro finger-touch gesture. The results revealed clear, context-dependent user preferences; Surface-Touch gestures were preferred in the 2D context due to their controlled and pragmatic nature, whereas mid-air gestures were favored in the XR context for their immersive, intuitive qualities. Interestingly, longer gesture execution times did not consistently reduce user satisfaction, indicating that compatibility between the gesture modality and the interaction environment matters more than efficiency alone. This study concludes that successful gesture-based interface design must carefully consider the contextual alignment, highlighting the nuanced interplay among user expectations, environmental context, and gesture modality. Consequently, these findings provide practical considerations for designing Natural User Interfaces (NUIs) for various interaction contexts.
5

Lim, C. J., Nam-Hee Lee, Yun-Guen Jeong, and Seung-Il Heo. "Gesture based Natural User Interface for e-Training." Journal of the Ergonomics Society of Korea 31, no. 4 (2012): 577–83. http://dx.doi.org/10.5143/jesk.2012.31.4.577.

6

Laine, Teemu H., and Hae Jung Suk. "Investigating User Experience of an Immersive Virtual Reality Simulation Based on a Gesture-Based User Interface." Applied Sciences 14, no. 11 (2024): 4935. http://dx.doi.org/10.3390/app14114935.

Abstract:
The affordability of equipment and availability of development tools have made immersive virtual reality (VR) popular across research fields. Gesture-based user interface has emerged as an alternative method to handheld controllers to interact with the virtual world using hand gestures. Moreover, a common goal for many VR applications is to elicit a sense of presence in users. Previous research has identified many factors that facilitate the evocation of presence in users of immersive VR applications. We investigated the user experience of Four Seasons, an immersive virtual reality simulation where the user interacts with a natural environment and animals with their hands using a gesture-based user interface (UI). We conducted a mixed-method user experience evaluation with 21 Korean adults (14 males, 7 females) who played Four Seasons. The participants filled in a questionnaire and answered interview questions regarding presence and experience with the gesture-based UI. The questionnaire results indicated high ratings for presence and gesture-based UI, with some issues related to the realism of interaction and lack of sensory feedback. By analyzing the interview responses, we identified 23 potential presence factors and proposed a classification for organizing presence factors based on the internal–external and dynamic–static dimensions. Finally, we derived a set of design principles based on the potential presence factors and demonstrated their usefulness for the heuristic evaluation of existing gesture-based immersive VR experiences. The results of this study can be used for designing and evaluating presence-evoking gesture-based VR experiences.
7

Ahmed, Naveed, Hind Kharoub, Selma Manel Medjden, and Areej Alsaafin. "A Natural User Interface for 3D Animation Using Kinect." International Journal of Technology and Human Interaction 16, no. 4 (2020): 35–54. http://dx.doi.org/10.4018/ijthi.2020100103.

Abstract:
This article presents a new natural user interface to control and manipulate a 3D animation using the Kinect. The researchers design a number of gestures that allow the user to play, pause, forward, rewind, scale, and rotate the 3D animation. They also implement a cursor-based traditional interface and compare it with the natural user interface. Both interfaces are extensively evaluated via a user study in terms of both the usability and user experience. Through both quantitative and the qualitative evaluation, they show that a gesture-based natural user interface is a preferred method to control a 3D animation compared to a cursor-based interface. The natural user interface not only proved to be more efficient but resulted in a more engaging and enjoyable user experience.
8

Wojciechowski, A. "Hand’s poses recognition as a mean of communication within natural user interfaces." Bulletin of the Polish Academy of Sciences: Technical Sciences 60, no. 2 (2012): 331–36. http://dx.doi.org/10.2478/v10175-012-0044-3.

Abstract:
Natural user interface (NUI) is a successor of the command line interfaces (CLI) and graphical user interfaces (GUI) so well known to computer users. The new natural approach is based on extensive tracking of human behavior, where hand tracking and gesture recognition seem to play the main roles in communication. The presented paper reviews common approaches to hand feature tracking and proposes a very effective contour-based hand pose recognition method which can be used straightforwardly in a hand-based natural user interface. Its possible usage ranges from interaction with medical systems, through games, to communication support for impaired people.
9

Colli Alfaro, Jose Guillermo, and Ana Luisa Trejos. "User-Independent Hand Gesture Recognition Classification Models Using Sensor Fusion." Sensors 22, no. 4 (2022): 1321. http://dx.doi.org/10.3390/s22041321.

Abstract:
Recently, it has been proven that targeting motor impairments as early as possible while using wearable mechatronic devices for assisted therapy can improve rehabilitation outcomes. However, despite the advanced progress on control methods for wearable mechatronic devices, the need for a more natural interface that allows for better control remains. To address this issue, electromyography (EMG)-based gesture recognition systems have been studied as a potential solution for human–machine interface applications. Recent studies have focused on developing user-independent gesture recognition interfaces to reduce calibration times for new users. Unfortunately, given the stochastic nature of EMG signals, the performance of these interfaces is negatively impacted. To address this issue, this work presents a user-independent gesture classification method based on a sensor fusion technique that combines EMG data and inertial measurement unit (IMU) data. The Myo Armband was used to measure muscle activity and motion data from healthy subjects. Participants were asked to perform seven types of gestures in four different arm positions while using the Myo on their dominant limb. Data obtained from 22 participants were used to classify the gestures using three different classification methods. Overall, average classification accuracies in the range of 67.5–84.6% were obtained, with the Adaptive Least-Squares Support Vector Machine model obtaining accuracies as high as 92.9%. These results suggest that by using the proposed sensor fusion approach, it is possible to achieve a more natural interface that allows better control of wearable mechatronic devices during robot assisted therapies.
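The core idea, combining EMG and IMU features before classification, can be conveyed with a short sketch. This is not the authors' code: the window shapes, the feature set, and the plain scikit-learn SVC standing in for their adaptive least-squares SVM are all assumptions.

```python
# Sketch of feature-level EMG + IMU fusion for gesture classification.
# Window lengths, feature choices, and the plain SVC stand-in for the
# paper's adaptive LS-SVM are illustrative assumptions, not the authors' code.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def emg_features(window):                 # window: (samples, 8 EMG channels)
    mav = np.mean(np.abs(window), axis=0)                         # mean absolute value
    zc = np.sum(np.diff(np.sign(window), axis=0) != 0, axis=0)    # zero crossings
    return np.concatenate([mav, zc])

def imu_features(window):                 # window: (samples, 6) accel + gyro
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

def fuse(emg_win, imu_win):
    # Concatenate per-modality features into one fused feature vector.
    return np.concatenate([emg_features(emg_win), imu_features(imu_win)])

def train(X_emg, X_imu, y):
    # X_emg: (n_windows, samples, 8), X_imu: (n_windows, samples, 6), y: gesture labels
    X = np.stack([fuse(e, i) for e, i in zip(X_emg, X_imu)])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    return clf.fit(X, y)
```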
10

Bailey, Shannon K. T., and Cheryl I. Johnson. "Performance on a Natural User Interface Task is Correlated with Higher Gesture Production." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 63, no. 1 (2019): 1384–88. http://dx.doi.org/10.1177/1071181319631181.

Abstract:
The study examined whether the individual differences of gesture production and attitudes toward gesturing were related to performance on a gesture-based natural user interface. Participants completed a lesson using gesture interactions and were measured on how long it took to complete the lesson, their reported mental effort, and how much they learned during the lesson. The Brief Assessment of Gestures survey was used to determine different dimensions of a participant’s predisposition to gesture, with four subscales: Perception, Production, Social Production, and Social Perception. Only an individual’s propensity to produce gestures was related to higher learning outcomes from the computer-based lesson.
11

Kwon, Min-Cheol, Geonuk Park, and Sunwoong Choi. "Smartwatch User Interface Implementation Using CNN-Based Gesture Pattern Recognition." Sensors 18, no. 9 (2018): 2997. http://dx.doi.org/10.3390/s18092997.

Abstract:
In recent years, with an increase in the use of smartwatches among wearable devices, various applications for the device have been developed. However, the realization of a user interface is limited by the size and volume of the smartwatch. This study aims to propose a method to classify the user’s gestures without the need of an additional input device to improve the user interface. The smartwatch is equipped with an accelerometer, which collects the data and learns and classifies the gesture pattern using a machine learning algorithm. By incorporating the convolution neural network (CNN) model, the proposed pattern recognition system has become more accurate than the existing model. The performance analysis results show that the proposed pattern recognition system can classify 10 gesture patterns at an accuracy rate of 97.3%.
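As a rough illustration of the approach described above (a CNN classifying fixed-length accelerometer windows into ten gesture classes), a minimal Keras sketch might look as follows; the window length, layer sizes, and training settings are assumptions, not the paper's configuration.

```python
# Minimal 1D-CNN sketch for classifying fixed-length 3-axis accelerometer
# windows into 10 gesture classes. Layer sizes and window length are assumed.
import tensorflow as tf

def build_model(window_len=128, n_channels=3, n_classes=10):
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(window_len, n_channels)),
        tf.keras.layers.Conv1D(32, 5, activation="relu"),
        tf.keras.layers.MaxPooling1D(2),
        tf.keras.layers.Conv1D(64, 5, activation="relu"),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

model = build_model()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(X_train, y_train, epochs=20)  # X_train: (n, 128, 3), y_train: ints 0..9
```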
12

Huang, Zhong Zhu, Zhi Quan Feng, Na Na He, and Xue Wen Yang. "Research on Gesture Speed Estimation Model in 3D Interactive Interface." Applied Mechanics and Materials 713-715 (January 2015): 1847–50. http://dx.doi.org/10.4028/www.scientific.net/amm.713-715.1847.

Abstract:
A gesture's speed varies over the course of its movement. To reflect the user's varying speed, this paper presents a gesture speed estimation method. Firstly, we use a data glove and a camera to establish the relation between the variation of the gesture contour and that of the gesture speed. Secondly, we build the gesture speed estimation model in stages. Finally, we obtain the real-time speed of the hand motion through this model and complete the interactive task. The main innovation of this paper is that we reveal the relation between gesture contour and speed, laying the foundation for further capturing the user's interaction intention. Experimental results indicate that the time cost of our method decreased by 31% compared with freehand tracking based on behavioral models, and that the 3D interactive system based on our model offers a good user experience.
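The stated relation between contour variation and hand speed could be prototyped roughly as below; the contour-change measure and the linear calibration gain are placeholders for the paper's staged estimation model, not its actual formulation.

```python
# Back-of-the-envelope version of the idea: relate frame-to-frame change in the
# hand contour to hand speed. The linear gain is an assumed calibration that
# would, in practice, be fitted against data-glove ground truth.
import numpy as np

def contour_change(prev_mask, curr_mask):
    """Binary hand masks of equal shape; returns normalized mask change."""
    diff = np.logical_xor(prev_mask, curr_mask).sum()
    return diff / max(int(curr_mask.sum()), 1)

def estimate_speed(prev_mask, curr_mask, fps=30.0, gain=0.8):
    # gain converts normalized contour change per frame into a speed value
    return gain * contour_change(prev_mask, curr_mask) * fps
```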
13

Bailey, Shannon K. T., Daphne E. Whitmer, Bradford L. Schroeder, and Valerie K. Sims. "Development of Gesture-based Commands for Natural User Interfaces." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 61, no. 1 (2017): 1466–67. http://dx.doi.org/10.1177/1541931213601851.

Abstract:
Human-computer interfaces are changing to meet the evolving needs of users and overcome limitations of previous generations of computer systems. The current state of computers consists largely of graphical user interfaces (GUI) that incorporate windows, icons, menus, and pointers (WIMPs) as visual representations of computer interactions controlled via user input on a mouse and keyboard. Although this model of interface has dominated human-computer interaction for decades, WIMPs require an extra step between the user’s intent and the computer action, imposing both limitations on the interaction and introducing cognitive demands (van Dam, 1997). Alternatively, natural user interfaces (NUI) employ input methods such as speech, touch, and gesture commands. With NUIs, users can interact directly with the computer without using an intermediary device (e.g., mouse, keyboard). Using the body as an input device may be more “natural” because it allows the user to apply existing knowledge of how to interact with the world (Roupé, Bosch-Sijtsema, & Johansson, 2014). To utilize the potential of natural interfaces, research must first determine what interactions can be considered natural. For the purpose of this paper, we focus on the naturalness of gesture-based interfaces. The purpose of this study was to determine how people perform natural gesture-based computer actions. To answer this question, we first narrowed down potential gestures that would be considered natural for an action. In a previous study, participants ( n=17) were asked how they would gesture to interact with a computer to complete a series of actions. After narrowing down the potential natural gestures by calculating the most frequently performed gestures for each action, we asked participants ( n=188) to rate the naturalness of the gestures in the current study. Participants each watched 26 videos of gestures (3-5 seconds each) and were asked how natural or arbitrary they interpreted each gesture for the series of computer commands (e.g., move object left, shrink object, select object, etc.). The gestures in these videos included the 17 gestures that were most often performed in the previous study in which participants were asked what gesture they would naturally use to complete the computer actions. Nine gestures were also included that were created arbitrarily to act as a comparison to the natural gestures. By analyzing the ratings on a continuum from “Completely Arbitrary” to “Completely Natural,” we found that the natural gestures people produced in the first study were also interpreted as the intended action by this separate sample of participants. All the gestures that were rated as either “Mostly Natural” or “Completely Natural” by participants corresponded to how the object manipulation would be performed physically. For example, the gesture video that depicts a fist closing was rated as “natural” by participants for the action of “selecting an object.” All of the gestures that were created arbitrarily were interpreted as “arbitrary” when they did not correspond to the physical action. Determining how people naturally gesture computer commands and how people interpret those gestures is useful because it can inform the development of NUIs and contributes to the literature on what makes gestures seem “natural.”
14

Pomboza-Junez, Gonzalo, Juan A. Holgado-Terriza, and Nuria Medina-Medina. "Toward the gestural interface: comparative analysis between touch user interfaces versus gesture-based user interfaces on mobile devices." Universal Access in the Information Society 18, no. 1 (2017): 107–26. http://dx.doi.org/10.1007/s10209-017-0580-6.

15

Kim, Hansol, Yoonkyung Kim, and Eui Chul Lee. "Method for User Interface of Large Displays Using Arm Pointing and Finger Counting Gesture Recognition." Scientific World Journal 2014 (2014): 1–9. http://dx.doi.org/10.1155/2014/683045.

Abstract:
Although many three-dimensional pointing gesture recognition methods have been proposed, the problem of self-occlusion has not been considered. Furthermore, because almost all pointing gesture recognition methods use a wide-angle camera, additional sensors or cameras are required to concurrently perform finger gesture recognition. In this paper, we propose a method for performing both pointing gesture and finger gesture recognition for large display environments, using a single Kinect device and a skeleton tracking model. By considering self-occlusion, a compensation technique can be performed on the user’s detected shoulder position when a hand occludes the shoulder. In addition, we propose a technique to facilitate finger counting gesture recognition, based on the depth image of the hand position. In this technique, the depth image is extracted from the end of the pointing vector. By using exception handling for self-occlusions, experimental results indicate that the pointing accuracy of a specific reference position was significantly improved. The average root mean square error was approximately 13 pixels for a 1920 × 1080 pixels screen resolution. Moreover, the finger counting gesture recognition accuracy was 98.3%.
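The pointing part of such a system reduces to a ray-plane intersection between the shoulder-to-hand vector and the display plane. The following sketch shows that geometry only; joint names, the plane definition, and the occlusion compensation described in the paper are outside its scope.

```python
# Geometric sketch: derive a pointing ray from tracked shoulder and hand joints
# and intersect it with the display plane. Coordinates are illustrative.
import numpy as np

def pointing_target(shoulder, hand, plane_point, plane_normal):
    """All arguments are 3-D points/vectors in the sensor coordinate frame."""
    direction = hand - shoulder
    denom = np.dot(plane_normal, direction)
    if abs(denom) < 1e-6:                     # pointing parallel to the screen
        return None
    t = np.dot(plane_normal, plane_point - shoulder) / denom
    if t < 0:                                 # pointing away from the screen
        return None
    return shoulder + t * direction           # 3-D hit point on the display plane

# Example: display plane at z = 2.0 m in front of the sensor.
hit = pointing_target(np.array([0.2, 0.4, 0.0]), np.array([0.3, 0.5, 0.6]),
                      plane_point=np.array([0.0, 0.0, 2.0]),
                      plane_normal=np.array([0.0, 0.0, 1.0]))
```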
16

Muhammad, P., and S. Anjana Devi. "Hand Gesture User Interface for Smart Devices Based on Mems Sensors." Procedia Computer Science 93 (2016): 940–46. http://dx.doi.org/10.1016/j.procs.2016.07.279.

17

Song, Rakbin, Yuna Hong, and Noyoon Kwak. "User Interface Using Hand Gesture Recognition Based on MediaPipe Hands Model." Journal of Korea Multimedia Society 26, no. 2 (2023): 103–15. http://dx.doi.org/10.9717/kmms.2023.26.2.103.

18

De Bérigny Wall, Caitilin, and Xiangyu Wang. "InterANTARCTICA: Tangible User Interface for Museum Based Interaction." International Journal of Virtual Reality 8, no. 3 (2009): 19–24. http://dx.doi.org/10.20870/ijvr.2009.8.3.2737.

Abstract:
This paper presents the design and concept for an interactive museum installation, InterANTARCTICA. The museum installation is based on a gesture-driven spatially surrounded tangible user interface (TUI) platform. The TUI allows a technological exploration of environmental climate change research by developing the status of interaction in museum installation art. The aim of the museum installation is to produce a cross-media platform suited to TUI and gestural interactions. We argue that our museum installation InterANTARCTICA pursues climate change in an interactive context, thus reinventing museum installation art in an experiential multi-modal context (sight, sound, touch).
19

Holder, Sherrie, and Leia Stirling. "Effect of Gesture Interface Mapping on Controlling a Multi-degree-of-freedom Robotic Arm in a Complex Environment." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 64, no. 1 (2020): 183–87. http://dx.doi.org/10.1177/1071181320641045.

Abstract:
There are many robotic scenarios that require real-time function in large or unconstrained environments, for example, the robotic arm on the International Space Station (ISS). Fully wearable gesture control systems are well-suited to human-robot interaction scenarios where users are mobile and must have their hands free. A human study examined operation of a simulated ISS robotic arm using three different gesture input mappings compared to the traditional joystick interface. Two gesture mappings permitted multiple simultaneous inputs (multi-input), while the third was a single-input method. Experimental results support performance advantages of multi-input gesture methods over single input. Differences between the two multi-input methods in task completion and workload show an effect of user-directed attention on interface success. Mappings based on natural human arm movement are promising for gesture interfaces in mobile robotic applications. This study also highlights challenges in gesture mapping, including how users align gestures with their body and environment.
20

Małecki, Krzysztof, Adam Nowosielski, and Mateusz Kowalicki. "Gesture-Based User Interface for Vehicle On-Board System: A Questionnaire and Research Approach." Applied Sciences 10, no. 18 (2020): 6620. http://dx.doi.org/10.3390/app10186620.

Abstract:
Touchless interaction with electronic devices using gestures is gaining popularity and, along with speech-based communication, offers users natural and intuitive control methods. These interaction modes now go beyond the entertainment industry and are successfully applied in real-life scenarios such as the car interior. In this paper, we analyse the potential of hand gesture interaction in the vehicle environment for physically challenged drivers. A survey conducted with potential users shows that knowledge of gesture-based interaction and its practical use by people with disabilities is low. Based on these results, we proposed a gesture-based interface for a vehicle on-board system. It was built on available state-of-the-art solutions and investigated in terms of usability with a group of people with different physical limitations who drive a car on a daily basis, mostly using steering aid tools. The obtained results are compared with the performance of users without any disabilities.
21

Kim, Jinhyuk, Jaekwang Cha, and Shiho Kim. "Hands-Free User Interface for VR Headsets Based on In Situ Facial Gesture Sensing." Sensors 20, no. 24 (2020): 7206. http://dx.doi.org/10.3390/s20247206.

Abstract:
The typical configuration of virtual reality (VR) devices consists of a head-mounted display (HMD) and handheld controllers. As such, these units have limited utility in tasks that require hand-free operation, such as in surgical operations or assembly works in cyberspace. We propose a user interface for a VR headset based on a wearer’s facial gestures for hands-free interaction, similar to a touch interface. By sensing and recognizing the expressions associated with the in situ intentional movements of a user’s facial muscles, we define a set of commands that combine predefined facial gestures with head movements. This is achieved by utilizing six pairs of infrared (IR) photocouplers positioned at the foam interface of an HMD. We demonstrate the usability and report on the user experience as well as the performance of the proposed command set using an experimental VR game without any additional controllers. We obtained more than 99% of recognition accuracy for each facial gesture throughout the three steps of experimental tests. The proposed input interface is a cost-effective and efficient solution that facilitates hands-free user operation of a VR headset using built-in infrared photocouplers positioned in the foam interface. The proposed system recognizes facial gestures and incorporates a hands-free user interface to HMD, which is similar to the touch-screen experience of a smartphone.
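A toy version of the sensing idea, mapping a six-channel infrared reading at the HMD foam interface to a small facial-gesture command set, is sketched below. The sensor layout, templates, and rejection threshold are invented for illustration and do not reflect the authors' trained recognizer.

```python
# Toy nearest-template matcher over six IR photocoupler channels.
# Channel ordering, template values, and the reject distance are invented.
import numpy as np

TEMPLATES = {                       # assumed normalized 6-channel signatures
    "smile":    np.array([0.1, 0.1, 0.8, 0.8, 0.1, 0.1]),
    "jaw_open": np.array([0.7, 0.7, 0.2, 0.2, 0.7, 0.7]),
    "neutral":  np.array([0.2, 0.2, 0.2, 0.2, 0.2, 0.2]),
}

def classify(reading, reject_dist=0.5):
    reading = np.asarray(reading, dtype=float)
    best, dist = min(((name, np.linalg.norm(reading - tpl))
                      for name, tpl in TEMPLATES.items()),
                     key=lambda item: item[1])
    return best if dist < reject_dist else None   # None = no confident gesture

print(classify([0.15, 0.1, 0.75, 0.85, 0.1, 0.12]))   # -> "smile"
```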
22

Wang, Xian, Paula Tarrío, Ana María Bernardos, Eduardo Metola, and José Ramón Casar. "User-independent accelerometer-based gesture recognition for mobile devices." ADCAIJ: Advances in Distributed Computing and Artificial Intelligence Journal 1, no. 3 (2013): 11–25. http://dx.doi.org/10.14201/adcaij20121311125.

Abstract:
Many mobile devices nowadays embed inertial sensors. This enables new forms of human-computer interaction through the use of gestures (movements performed with the mobile device) as a way of communication. This paper presents an accelerometer-based gesture recognition system for mobile devices which is able to recognize a collection of 10 different hand gestures. The system was conceived to be light and to operate in a user-independent manner in real time. The recognition system was implemented on a smartphone and evaluated through a collection of user tests, which showed a recognition accuracy similar to other state-of-the-art techniques and a lower computational complexity. The system was also used to build a human-robot interface that enables controlling a wheeled robot with the gestures made with the mobile phone.
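The abstract does not disclose the recognition algorithm, so the sketch below uses generic dynamic time warping (DTW) template matching, one common lightweight option for accelerometer gesture traces, purely to illustrate the kind of computation involved.

```python
# Generic DTW template matcher for 3-axis accelerometer traces; shown as one
# lightweight, user-independent option, not the paper's published method.
import numpy as np

def dtw_distance(a, b):
    """a, b: (len, 3) accelerometer sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def recognize(trace, templates):
    """templates: dict gesture_name -> list of (len, 3) example traces."""
    scores = {g: min(dtw_distance(trace, t) for t in ts)
              for g, ts in templates.items()}
    return min(scores, key=scores.get)
```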
23

Lee, Seongjo, Sohyun Sim, Kyhyun Um, Young-Sik Jeong, Seung-won Jung, and Kyungeun Cho. "Development of a Hand Gestures SDK for NUI-Based Applications." Mathematical Problems in Engineering 2015 (2015): 1–10. http://dx.doi.org/10.1155/2015/212639.

Abstract:
Concomitant with the advent of the ubiquitous era, research into better human computer interaction (HCI) for human-focused interfaces has intensified. Natural user interface (NUI), in particular, is being actively investigated with the objective of more intuitive and simpler interaction between humans and computers. However, developing NUI-based applications without special NUI-related knowledge is difficult. This paper proposes a NUI-specific SDK, called “Gesture SDK,” for development of NUI-based applications. Gesture SDK provides a gesture generator with which developers can directly define gestures. Further, a “Gesture Recognition Component” is provided that enables defined gestures to be recognized by applications. We generated gestures using the proposed SDK and developed a “Smart Interior,” NUI-based application using the Gesture Recognition Component. The results of experiments conducted indicate that the recognition rate of the generated gestures was 96% on average.
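The developer-facing side of such an SDK usually boils down to registering gesture definitions and callbacks and dispatching recognition events. The minimal sketch below illustrates that pattern; class and method names are hypothetical and not the Gesture SDK's actual API.

```python
# Hypothetical SDK-style surface: developers register handlers per gesture name,
# and the recognizer dispatches events to them when a gesture is detected.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class GestureRecognizer:
    _handlers: Dict[str, List[Callable[[dict], None]]] = field(default_factory=dict)

    def register(self, gesture_name: str, handler: Callable[[dict], None]) -> None:
        self._handlers.setdefault(gesture_name, []).append(handler)

    def dispatch(self, gesture_name: str, event: dict) -> None:
        for handler in self._handlers.get(gesture_name, []):
            handler(event)

recognizer = GestureRecognizer()
recognizer.register("swipe_left", lambda e: print("previous page", e))
recognizer.dispatch("swipe_left", {"hand": "right", "confidence": 0.93})
```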
24

Huang, Jinmiao, Prakhar Jaiswal, and Rahul Rai. "Gesture-based system for next generation natural and intuitive interfaces." Artificial Intelligence for Engineering Design, Analysis and Manufacturing 33, no. 1 (2018): 54–68. http://dx.doi.org/10.1017/s0890060418000045.

Abstract:
We present a novel and trainable gesture-based system for next-generation intelligent interfaces. The system requires a non-contact depth sensing device such as an RGB-D (color and depth) camera for user input. The camera records the user's static hand pose and the dynamic motion trajectory of the palm center. Both the static pose and the dynamic trajectory are used independently to provide commands to the interface. The sketches/symbols formed by the palm center trajectory are recognized by a Support Vector Machine classifier. The sketch/symbol recognition process is based on a set of geometrical and statistical features. A static hand pose recognizer is incorporated to expand the functionalities of our system and is used in conjunction with the sketch classification algorithm to develop a robust and effective system for natural and intuitive interaction. To evaluate the performance of the system, user studies were performed with multiple participants. The efficacy of the presented system is demonstrated using multiple interfaces developed for different tasks, including computer-aided design modeling.
25

Nyyssönen, Taneli, Seppo Helle, Teijo Lehtonen, and Jouni Smed. "A Comparison of One- and Two-Handed Gesture User Interfaces in Virtual Reality—A Task-Based Approach." Multimodal Technologies and Interaction 8, no. 2 (2024): 10. http://dx.doi.org/10.3390/mti8020010.

Abstract:
This paper presents two gesture-based user interfaces which were designed for a 3D design review in virtual reality (VR) with inspiration drawn from the shipbuilding industry’s need to streamline and make their processes more sustainable. The user interfaces, one focusing on single-hand (unimanual) gestures and the other focusing on dual-handed (bimanual) usage, are tested as a case study using 13 tasks. The unimanual approach attempts to provide a higher degree of flexibility, while the bimanual approach seeks to provide more control over the interaction. The interfaces were developed for the Meta Quest 2 VR headset using the Unity game engine. Hand-tracking (HT) is utilized due to potential usability benefits in comparison to standard controller-based user interfaces, which lack intuitiveness regarding the controls and can cause more strain. The user interfaces were tested with 25 test users, and the results indicate a preference toward the one-handed user interface with little variation in test user categories. Additionally, the testing order, which was counterbalanced, had a statistically significant impact on the preference and performance, indicating that learning novel interaction mechanisms requires an adjustment period for reliable results. VR sickness was also strongly experienced by a few users, and there were no signs that gesture controls would significantly alleviate it.
26

Wolf, Catherine G., and James R. Rhyne. "A Taxonomic Approach to Understanding Direct Manipulation." Proceedings of the Human Factors Society Annual Meeting 31, no. 5 (1987): 576–80. http://dx.doi.org/10.1177/154193128703100522.

Abstract:
This paper presents a taxonomy for user interface techniques which is useful in understanding direct manipulation interfaces. The taxonomy is based on the way actions and objects are specified in the interface. We suggest that direct manipulation is a characteristic shared by a number of different interface techniques, rather than a single interface style. A relatively new interface method, gesture, is also described in terms of the taxonomy and some observations are made on its potential.
27

Beer, Wolfgang. "GeoPointer – approaching tangible augmentation of the real world." International Journal of Pervasive Computing and Communications 7, no. 1 (2011): 60–74. http://dx.doi.org/10.1108/17427371111123694.

Abstract:
Purpose: The aim of this paper is to present an architecture and prototypical implementation of a context-sensitive software system which combines the tangible user interface approach with a mobile augmented reality (AR) application.
Design/methodology/approach: The work described within this paper is based on a creational approach, which means that a prototypical implementation is used to gather further research results. The prototypical approach allows performing ongoing tests concerning the accuracy and different context-sensitive threshold functions.
Findings: Within this paper, the implementation and practical use of tangible user interfaces for outdoor selection of geographical objects is reported and discussed in detail.
Research limitations/implications: Further research is necessary within the area of context-sensitive, dynamically changing threshold functions, which would allow improving the accuracy of the selected tangible user interface approach.
Practical implications: The practical implication of using tangible user interfaces within outdoor applications should improve the usability of AR applications.
Originality/value: Despite the fact that there exist a multitude of research results within the area of gesture recognition and AR applications, this research work focuses on the pointing gesture to select outdoor geographical objects.
28

Kathe, Vedant. "GestureFusion: A Gesture-based Interaction System." International Journal for Research in Applied Science and Engineering Technology 12, no. 4 (2024): 1410–15. http://dx.doi.org/10.22214/ijraset.2024.60046.

Abstract:
In an era of increasingly sophisticated human-computer interaction, the ability to comprehend and interpret hand gestures has become a pivotal element for bridging the gap between humans and machines. This project, titled "GestureFusion: A Gesture-based Interaction System," seeks to contribute to this rapidly evolving field by developing a comprehensive system for recognizing and categorizing hand gestures through the application of computer vision techniques. The primary objective of this project is to create a versatile and extensible hand gesture recognition system that can be employed in various domains such as virtual reality, robotics, sign language interpretation, and human-computer interface design. To achieve this, the project employs state-of-the-art computer vision algorithms and deep learning techniques. Key components of this project include data collection and annotation, model training and optimization, and the development of a user-friendly application programming interface (API) for integration into diverse applications.
29

Kim, Myung-Gyun, and Hee-Dong Park. "Design and Implementation of a Windows User Interface Based on User-Defined Hand Gesture Recognition." Journal of Digital Contents Society 25, no. 2 (2024): 529–34. http://dx.doi.org/10.9728/dcs.2024.25.2.529.

30

Ikeda, Takahiro, Naoki Noda, Satoshi Ueki, and Hironao Yamada. "Gesture Interface and Transfer Method for AMR by Using Recognition of Pointing Direction and Object Recognition." Journal of Robotics and Mechatronics 35, no. 2 (2023): 288–97. http://dx.doi.org/10.20965/jrm.2023.p0288.

Abstract:
This paper describes a gesture interface for a factory transfer robot. Our proposed interface used gesture recognition to recognize the pointing direction, instead of estimating the point as in conventional pointing gesture estimation. When the autonomous mobile robot (AMR) recognized the pointing direction, it performed position control based on the object recognition. The AMR traveled along our unique path to ensure that its camera detected the object to be referenced for position control. The experimental results confirmed that the position and angular errors of the AMR controlled with our interface were 0.058 m and 4.7° averaged over five subjects and two conditions, which were sufficiently accurate for transportation. A questionnaire showed that our interface was user-friendly compared with manual operation with a commercially available controller.
31

Sluÿters, Arthur, Mehdi Ousmer, Paolo Roselli, and Jean Vanderdonckt. "QuantumLeap, a Framework for Engineering Gestural User Interfaces based on the Leap Motion Controller." Proceedings of the ACM on Human-Computer Interaction 6, EICS (2022): 1–47. http://dx.doi.org/10.1145/3532211.

Abstract:
Despite the tremendous progress made for recognizing gestures acquired by various devices, such as the Leap Motion Controller, developing a gestural user interface based on such devices still induces a significant programming and software engineering effort before obtaining a running interactive application. To facilitate this development, we present QuantumLeap, a framework for engineering gestural user interfaces based on the Leap Motion Controller. Its pipeline software architecture can be parameterized to define a workflow among modules for acquiring gestures from the Leap Motion Controller, for segmenting them, recognizing them, and managing their mapping to functions of the application. To demonstrate its practical usage, we implement two gesture-based applications: an image viewer that allows healthcare workers to browse DICOM medical images of their patients without any hygiene issues commonly associated with touch user interfaces and a large-scale application for managing multimedia contents on wall screens. To evaluate the usability of QuantumLeap, seven participants took part in an experiment in which they used QuantumLeap to add a gestural interface to an existing application.
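The pipeline architecture described above (acquire, segment, recognize, map to application functions) can be conveyed with a small sketch; the stage signatures and stand-in stages are assumptions, not QuantumLeap's actual modules.

```python
# Minimal pipeline illustration: acquisition -> segmentation -> recognition ->
# mapping to application functions, wired as interchangeable stages.
from typing import Callable, Dict, Iterable, List

Frame = dict          # e.g., one frame of hand landmarks from the sensor

def run_pipeline(frames: Iterable[Frame],
                 segment: Callable[[Iterable[Frame]], List[List[Frame]]],
                 recognize: Callable[[List[Frame]], str],
                 actions: Dict[str, Callable[[], None]]) -> None:
    for gesture_frames in segment(frames):
        label = recognize(gesture_frames)
        handler = actions.get(label)
        if handler is not None:
            handler()          # map the recognized gesture to an app function

# Example wiring with trivial stand-in stages.
run_pipeline(frames=[{"t": 0}, {"t": 1}],
             segment=lambda fs: [list(fs)],       # one segment with everything
             recognize=lambda seg: "swipe_right",
             actions={"swipe_right": lambda: print("next image")})
```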
32

Zeng, Xin, Xiaoyu Wang, Tengxiang Zhang, Chun Yu, Shengdong Zhao, and Yiqiang Chen. "GestureGPT: Toward Zero-Shot Free-Form Hand Gesture Understanding with Large Language Model Agents." Proceedings of the ACM on Human-Computer Interaction 8, ISS (2024): 462–99. http://dx.doi.org/10.1145/3698145.

Abstract:
Existing gesture interfaces only work with a fixed set of gestures defined either by interface designers or by users themselves, which introduces learning or demonstration efforts that diminish their naturalness. Humans, on the other hand, understand free-form gestures by synthesizing the gesture, context, experience, and common sense. In this way, the user does not need to learn, demonstrate, or associate gestures. We introduce GestureGPT, a free-form hand gesture understanding framework that mimics human gesture understanding procedures to enable a natural free-form gestural interface. Our framework leverages multiple Large Language Model agents to manage and synthesize gesture and context information, then infers the interaction intent by associating the gesture with an interface function. More specifically, our triple-agent framework includes a Gesture Description Agent that automatically segments and formulates natural language descriptions of hand poses and movements based on hand landmark coordinates. The description is deciphered by a Gesture Inference Agent through self-reasoning and querying about the interaction context (e.g., interaction history, gaze data), which is managed by a Context Management Agent. Following iterative exchanges, the Gesture Inference Agent discerns the user’s intent by grounding it to an interactive function. We validated our framework offline under two real-world scenarios: smart home control and online video streaming. The average zero-shot Top-1/Top-5 grounding accuracies are 44.79%/83.59% for smart home tasks and 37.50%/73.44% for video streaming tasks. We also provide an extensive discussion that includes rationale for model selection, generalizability, and future research directions for a practical system etc.
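A schematic of the triple-agent flow might look like the following; `llm` is a placeholder for any chat-completion call, and the prompts and grounding step are heavily simplified assumptions rather than GestureGPT's implementation.

```python
# Schematic of the description -> inference <-> context agent flow.
# `llm` is a user-supplied callable (prompt -> text); everything else is assumed.
from typing import Callable, List

def describe_gesture(llm: Callable[[str], str], hand_landmarks: List[list]) -> str:
    # Description agent: turn landmark coordinates into a natural-language description.
    return llm(f"Describe the hand pose and motion given these landmark "
               f"coordinates over time: {hand_landmarks}")

def manage_context(llm: Callable[[str], str], question: str, context: dict) -> str:
    # Context agent: answer questions from interaction history, gaze, etc.
    return llm(f"Answer from interaction context {context}: {question}")

def infer_intent(llm: Callable[[str], str], description: str,
                 context: dict, functions: List[str]) -> str:
    # Inference agent: query the context agent, then ground to an interface function.
    gaze = manage_context(llm, "What is the user looking at?", context)
    return llm(f"Gesture: {description}\nGaze: {gaze}\n"
               f"Choose the most likely target function from: {functions}")

# intent = infer_intent(my_llm, describe_gesture(my_llm, landmarks),
#                       context={"history": [], "gaze": "ceiling lamp"},
#                       functions=["toggle_light", "set_volume", "pause_video"])
```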
33

Payal, Mohit. "A Human-Machine Interface for Electronic Assistive Technologies." Mathematical Statistician and Engineering Applications 71, no. 1 (2022): 351–67. http://dx.doi.org/10.17762/msea.v71i1.2127.

Abstract:
Human-machine interaction (HMI) refers to the two-way exchange of information and actions between a human and a machine via the latter's user interface. Gestures and other forms of natural user interfaces are becoming increasingly popular because they allow humans to interact with technology in ways that feel more natural to them. Gesture-based HMI uses a sensor like the Microsoft Kinect to detect human motion and posture, which is then translated into machine input. Using Kinect's data—which includes RGB (red, green, and blue), depth, and skeleton information—to recognize meaningful human motions is the core function of gesture-based HMI. This article provides an introduction of electronic assistive technologies (EATs) and discusses the importance of human-machine interfaces (HMIs) in their development. HMIs for EATs must consider accessibility, personalization, safety, and user-centered design elements to meet the needs and preferences of users with disabilities or limited mobility. There are benefits and drawbacks to using each type of human-machine interface currently in use, such as brain-computer interfaces, touchscreens, switches, and sensors, and voice recognition software. Good design has the potential to increase the usability and performance of these technologies, as evidenced by studies of successful HMIs in EATs. Constant research and improvement of HMIs for EATs is necessary to increase accessibility and quality of life for people with impairments or restricted mobility.
34

Parth, Chandak. "Advances in Human-Robot Interaction: A Systematic Review of Intuitive Interfaces and Communication Modalities." International Journal of Innovative Research in Engineering & Multidisciplinary Physical Sciences 9, no. 5 (2021): 1–9. https://doi.org/10.5281/zenodo.14508447.

Abstract:
Recent developments in Human-Robot Interaction (HRI) are examined in this targeted systematic study, with an emphasis on the creation and application of user-friendly interfaces and communication modalities in collaborative and industrial contexts. This review looks at how gesture-based controls, natural language programming, and user-centered design methods have made robotic systems easier to use and more useful by looking at some impactful studies done from 2002 to 2018. Significant advancements have been made in three key areas, according to the review: (1) contactless gesture control systems that facilitate natural and ergonomic interaction; (2) natural language interfaces that utilize common language to program and control robots; and (3) the incorporation of user-centered design principles that enhance system usability and operator trust. Notwithstanding these developments, there are still issues with accuracy in gesture detection, ambiguity in natural language processing, and interface customization for small and medium-sized businesses. In order to construct more user-friendly and effective human-robot collaboration systems, this review’s findings indicate that future advancements in HRI should concentrate on integrating various communication modes, utilizing artificial intelligence for adaptable interfaces, and broadening user-centered design techniques.
35

Schweitzer, Frédéric, and Alexandre Campeau-Lecours. "IMU-Based Hand Gesture Interface Implementing a Sequence-Matching Algorithm for the Control of Assistive Technologies." Signals 2, no. 4 (2021): 729–53. http://dx.doi.org/10.3390/signals2040043.

Abstract:
Assistive technologies (ATs) often have a high-dimensionality of possible movements (e.g., assistive robot with several degrees of freedom or a computer), but the users have to control them with low-dimensionality sensors and interfaces (e.g., switches). This paper presents the development of an open-source interface based on a sequence-matching algorithm for the control of ATs. Sequence matching allows the user to input several different commands with low-dimensionality sensors by not only recognizing their output, but also their sequential pattern through time, similarly to Morse code. In this paper, the algorithm is applied to the recognition of hand gestures, inputted using an inertial measurement unit worn by the user. An SVM-based algorithm, that is aimed to be robust, with small training sets (e.g., five examples per class) is developed to recognize gestures in real-time. Finally, the interface is applied to control a computer’s mouse and keyboard. The interface was compared against (and combined with) the head movement-based AssystMouse software. The hand gesture interface showed encouraging results for this application but could also be used with other body parts (e.g., head and feet) and could control various ATs (e.g., assistive robotic arm and prosthesis).
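The sequence-matching idea, recognizing a command from the temporal pattern of a few low-dimensional gesture tokens rather than from a single rich input, can be sketched as below; the token names, timeout rule, and command table are illustrative assumptions.

```python
# Morse-like sequence matching: discrete gesture tokens entered over time are
# matched against command patterns. Tokens, timing, and commands are invented.
import time

COMMANDS = {
    ("flick", "flick"):         "mouse_left_click",
    ("flick", "hold"):          "mouse_right_click",
    ("roll", "flick", "flick"): "open_keyboard",
}

class SequenceMatcher:
    def __init__(self, timeout=1.0):
        self.timeout = timeout        # seconds of inactivity that end a sequence
        self.buffer = []
        self.last_time = None

    def feed(self, token):
        now = time.monotonic()
        if self.last_time is not None and now - self.last_time > self.timeout:
            self.buffer.clear()       # too slow: start a new sequence
        self.buffer.append(token)
        self.last_time = now
        cmd = COMMANDS.get(tuple(self.buffer))
        if cmd is not None:
            self.buffer.clear()       # sequence consumed
        return cmd                    # None until a full pattern matches

matcher = SequenceMatcher()
matcher.feed("flick")
print(matcher.feed("hold"))           # -> "mouse_right_click"
```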
36

Chen, Kuen Meau, and Ming Jen Wang. "Using the Interactive Design of Gesture Recognition in Augmented Reality." Applied Mechanics and Materials 311 (February 2013): 185–90. http://dx.doi.org/10.4028/www.scientific.net/amm.311.185.

Abstract:
Due to the rapid development of computer hardware, the mobile computer systems such as PDAs, high-end mobile phones are capable of running augmented reality (AR, hereafter) system nowadays. The mouse and keyboard based user interfaces of the traditional AR system may not be suitable for the mobile AR system because of different hardware interface and use environment. The goal of this research is to propose a novel computer-vision based human-computer interaction model, which is expected to greatly improve usability of the mobile augmented reality. In this research, we will conduct an experiment on testing the usability of a new gesture-based interface and propose a product evaluation model for e-commerce applications based on the gesture interface. In the end, we expected the new interaction model could encourage more commercial applications and other research projects. In this paper, we propose a new interface interaction model called PinchAR. The focus of PinchAR is on adapting the interface design to the changing hardware design. This paper summarizes the PinchAR project, that is, the design of an intuitive interaction model in an AR environment. Also included in this paper are the results of the PinchAR experiments.
37

Kharoub, Hind, Mohammed Lataifeh, and Naveed Ahmed. "3D User Interface Design and Usability for Immersive VR." Applied Sciences 9, no. 22 (2019): 4861. http://dx.doi.org/10.3390/app9224861.

Abstract:
This work presents a novel design of a new 3D user interface for an immersive virtual reality desktop and a new empirical analysis of the proposed interface using three interaction modes. The proposed novel dual-layer 3D user interface allows for user interactions with multiple screens portrayed within a curved 360-degree effective field of view available for the user. Downward gaze allows the user to raise the interaction layer that facilitates several traditional desktop tasks. The 3D user interface is analyzed using three different interaction modes, point-and-click, controller-based direct manipulation, and a gesture-based user interface. A comprehensive user study is performed within a mixed-methods approach for the usability and user experience analysis of all three user interaction modes. Each user interaction is quantitatively and qualitatively analyzed for simple and compound tasks in both standing and seated positions. The crafted mixed approach for this study allows to collect, evaluate, and validate the viability of the new 3D user interface. The results are used to draw conclusions about the suitability of the interaction modes for a variety of tasks in an immersive Virtual Reality 3D desktop environment.
38

Vázquez, J. Emmanuel, Manuel Martin-Ortiz, Ivan Olmos-Pineda, and Arturo Olvera-Lopez. "Wheelchair Control Based on Facial Gesture Recognition." International Journal of Information Technologies and Systems Approach 12, no. 2 (2019): 104–22. http://dx.doi.org/10.4018/ijitsa.2019070106.

Abstract:
In this article, an approach for controlling a wheelchair using gestures from the user's face is presented; in particular, commands for the basic control operations required for driving a wheelchair are recognized. In order to recognize the face gestures, an Artificial Neural Network is trained, since it is one of the most successful classifiers in Pattern Recognition. The proposed method is particularly useful for controlling a wheelchair when the user has restricted (or zero) mobility in some parts of the body, such as the legs, arms, or hands. According to the experimental results, the proposed approach provides a successful tool for controlling a wheelchair through a Natural User Interface based on machine learning.
39

K Nalband, Mariyam Nabila, and Dr Shabana Sultana. "GESTURE CONTROLLED ROBOT USING LABVIEW." International Research Journal of Computer Science 8, no. 8 (2021): 193–99. http://dx.doi.org/10.26562/irjcs.2021.v0808.007.

Abstract:
Humans can now interact with robots and employ them in their work, which makes the design of the user interface of fundamental importance: the interface must be natural and user friendly. In existing systems, the human hand gesture is sensed directly by the robot, which moves accordingly; the user carries the sensors and accelerometer, and as the person moves or changes position, the readings, together with the associated constraints, are passed to the robot, which imitates the motion. In this work, by contrast, the hand gesture is captured as an image, processed, and sent to the robot as signals via Wi-Fi. The images are generated in Vision Assistant and processed via Vision Acquisition.
40

M S H, Salam, Jou T S, and Ahmad A F. "3D Object Manipulation Using Speech And Hand Gesture." Journal of Advanced Research in Computing and Applications 31, no. 1 (2024): 1–12. http://dx.doi.org/10.37934/arca.31.1.112.

Abstract:
A natural user interface (NUI) is an interface that enables users to interact with the digital world in the same way they interact with the physical world, through sensory input such as touch, speech, and gesture. The combination of multiple modalities for NUI has recently become the trend in user interfaces. There has been significant progress in speech and hand recognition technology, which makes both effective input modalities in HCI. However, limitations remain that degrade performance, including the complexity of the vocabulary and unnatural hand gestures for instructing the machine. Therefore, this project aims to develop an application with natural gesture and speech input for 3D object manipulation. Three phases were carried out: first, data collection and analysis; second, application structure design; and third, the implementation of speech and gesture in 3D object manipulation. The application is developed using the Leap Motion Controller for hand gesture tracking, and Microsoft Azure Speech Cognitive Service and Microsoft Azure Language Understanding Intelligence Service for natural language speech recognition. The evaluation was performed based on command recognition accuracy, usability, and user acceptance. The results show that the approaches developed in this project achieve good recognition of the speech commands and gesture interaction, while user experience testing shows a high level of satisfaction with the application.
41

Deo, Aditi, Aishwarya Wankhede, Rutuja Asawa, and Supriya Lohar. "MEMS Accelerometer Based Hand Gesture-Controlled Robot." International Journal for Research in Applied Science and Engineering Technology 10, no. 8 (2022): 265–67. http://dx.doi.org/10.22214/ijraset.2022.46158.

Abstract:
Gesture detection has received a lot of attention from many different research communities, including human-computer interaction and image processing. User interface technology has become increasingly significant as the number of human-machine interactions in our daily lives has increased. Gestures, as intuitive expressions, substantially simplify the interaction process and allow humans to command computers and machines more intuitively. Robots can currently be controlled by a remote control, a mobile phone, or a direct wired connection. When considering cost and required hardware, all of this adds to the complexity, particularly for low-level applications. The MEMS accelerometer-based gesture-controlled robot project attempts to create a gesture-controlled robot that can be commanded wirelessly. The user is able to control the motions of the robot by wearing the controller glove and performing predefined gestures.
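The gesture-to-command layer of such a robot can be as simple as thresholding the glove's tilt; the sketch below shows that mapping with assumed axes, thresholds, and command names, and omits the wireless transmission side entirely.

```python
# Toy mapping from glove accelerometer tilt to drive commands.
# Axes, threshold, and command names are assumptions for illustration only.
def command_from_tilt(ax, ay, threshold=0.4):
    """ax, ay: normalized accelerations (gravity components) along X and Y."""
    if ay > threshold:
        return "FORWARD"
    if ay < -threshold:
        return "BACKWARD"
    if ax > threshold:
        return "RIGHT"
    if ax < -threshold:
        return "LEFT"
    return "STOP"                        # near-level hand: stop the robot

print(command_from_tilt(0.05, 0.72))     # -> "FORWARD"
```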
42

Yeh, Shih-Ching, Eric Hsiao-Kuang Wu, Ying-Ru Lee, R. Vaitheeshwari, and Chen-Wei Chang. "User Experience of Virtual-Reality Interactive Interfaces: A Comparison between Hand Gesture Recognition and Joystick Control for XRSPACE MANOVA." Applied Sciences 12, no. 23 (2022): 12230. http://dx.doi.org/10.3390/app122312230.

Full text
Abstract:
This research intends to understand whether users would adopt the interactive interface of hand gesture recognition for XRSPACE MANOVA in the virtual-reality environment. Unlike traditional joystick control and external sensors, XRSPACE MANOVA’s hand gesture recognition relies on cameras built into the head-mounted display to detect users’ hand gestures and interact with the system, providing a more life-like immersive experience. To better understand whether users would accept this hand gesture recognition, the current experiment compares users’ experiences with hand gesture recognition and joystick control for XRSPACE MANOVA while controlling for the effects of gender, college major, and completion time. The results suggest that users of hand gesture recognition have better perceptions of enjoyment, satisfaction, and confirmation, which means that they have a relatively fun and satisfying experience and that their expectations of the system/technology confirm their actual usage. Based on parametric statistical analyses, user assessments of perceived usefulness, perceived ease of use, attitude, and perception of internal control suggest that, in terms of operating performance, users are more accepting of the traditional joystick control. When considering the length of usage time, this study finds that, when hand gesture recognition is used for a relatively longer time, users’ subjective evaluations of internal control and behavioral intention to use are reduced. This study has, therefore, identified potential issues with hand gesture recognition for XRSPACE MANOVA and discussed how to improve this interactive interface. It is hoped that users of hand gesture recognition will obtain the same level of operating experience as if they were using the traditional joystick control.
APA, Harvard, Vancouver, ISO, and other styles
43

Bharatwaj, V., Guruvenkatesh S., and Srinivasan K. "Gesture based Interface between User and the Digital Device using Sixth Sense Technology." International Journal of Computer Applications 118, no. 7 (2015): 17–22. http://dx.doi.org/10.5120/20757-3160.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Arpitha D., Apthi, Chaitra T.P., Kumar P. Harish, Jayabharathi B.S., and Suresh K. "Design and Implementation of Voice, Touch and Gesture Based "Natural User Interface" (NUI)." i-manager’s Journal on Pattern Recognition 4, no. 1 (2017): 23. http://dx.doi.org/10.26634/jpr.4.1.13642.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Rautaray, Siddharth S., and Anupam Agrawal. "Vision-Based Application-Adaptive Hand Gesture Recognition System." International Journal of Information Acquisition 09, no. 01 (2013): 1350007. http://dx.doi.org/10.1142/s0219878913500071.

Full text
Abstract:
With the increasing role of computing devices, facilitating natural human-computer interaction (HCI) will have a positive impact on their usage and acceptance as a whole. For a long time, research on HCI was restricted to techniques based on the use of the keyboard, mouse, etc. Recently, this paradigm has changed: techniques such as vision, sound, and speech recognition allow for a much richer form of interaction between the user and the machine. The emphasis is on providing a natural form of interface for interaction. Gestures are one of the natural forms of interaction between humans. As gesture commands are natural for humans, the development of gesture control systems for controlling devices has become a popular research topic in recent years. Researchers have proposed different gesture recognition systems that act as interfaces for controlling applications. One of the drawbacks of present gesture recognition systems is application dependence, which makes it difficult to transfer one gesture control interface to different applications. This paper focuses on designing a vision-based hand gesture recognition system that adapts to different applications, thus making the gesture recognition system application-adaptive. The designed system comprises different processing steps such as detection, segmentation, tracking, and recognition. To make the system application-adaptive, different quantitative and qualitative parameters have been taken into consideration. The quantitative parameters include the gesture recognition rate, the features extracted, and the root mean square error of the system, while the qualitative parameters include intuitiveness, accuracy, stress/comfort, computational efficiency, user's tolerance, and the real-time performance of the proposed system. These parameters have a vital impact on the performance of the proposed application-adaptive hand gesture recognition system.
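As an illustration of what "application-adaptive" can mean in practice — fixed vision stages with per-application gesture-to-action bindings — here is a minimal Python sketch. The class and method names are hypothetical placeholders, not the authors' implementation.

from typing import Callable, Dict

class GesturePipeline:
    """Application-adaptive skeleton: the vision stages stay the same, while
    each application registers its own mapping from recognised gestures to
    actions."""

    def __init__(self):
        self.bindings: Dict[str, Callable[[], None]] = {}

    def bind(self, gesture: str, action: Callable[[], None]) -> None:
        self.bindings[gesture] = action            # per-application adaptation point

    def process(self, frame) -> None:
        hand = self.detect(frame)                  # 1. detection
        mask = self.segment(hand)                  # 2. segmentation
        track = self.track(mask)                   # 3. tracking
        gesture = self.recognise(track)            # 4. recognition
        if gesture in self.bindings:
            self.bindings[gesture]()

    # The four stages below are placeholders for the paper's actual algorithms.
    def detect(self, frame): return frame
    def segment(self, hand): return hand
    def track(self, mask): return mask
    def recognise(self, track): return "swipe_left"

pipeline = GesturePipeline()
pipeline.bind("swipe_left", lambda: print("previous slide"))   # e.g. a slideshow application
pipeline.process(frame=None)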
APA, Harvard, Vancouver, ISO, and other styles
46

Hesenius, Marc, Markus Kleffmann, and Volker Gruhn. "AugIR Meets GestureCards: A Digital Sketching Environment for Gesture-Based Applications." Interacting with Computers 33, no. 2 (2021): 134–54. http://dx.doi.org/10.1093/iwcomp/iwab017.

Full text
Abstract:
To gain a common understanding of an application’s layouts, dialogs and interaction flows, development teams often sketch user interfaces (UIs). Nowadays, they must also define multi-touch gestures, but tools for sketching UIs often lack support for custom gestures and typically just integrate a basic predefined gesture set, which might not suffice to specifically tailor the interaction to the desired use cases. Furthermore, sketching can be enhanced with digital means, but it remains unclear whether digital sketching is actually beneficial when designing gesture-based applications. We extended the AugIR, a digital sketching environment, with GestureCards, a hybrid gesture notation, to allow software engineers to define custom gestures when sketching UIs. We evaluated our approach in a user study contrasting digital and analog sketching of gesture-based UIs.
APA, Harvard, Vancouver, ISO, and other styles
47

Dong, Ao Shuang, Xiao Liu, and Hui Yan Jiang. "Gesture Recognition Research Based on Kinect." Advanced Materials Research 971-973 (June 2014): 1928–31. http://dx.doi.org/10.4028/www.scientific.net/amr.971-973.1928.

Full text
Abstract:
With the constant progress of science and technology and the growing popularity of computers, human-computer interaction has diversified, from the traditional keyboard, the graphical user interface (GUI) and handwriting tablets for Chinese characters to more recent speech recognition and gesture-based somatosensory peripherals. Undoubtedly, human-computer interaction is trending toward greater naturalness and convenience. Gestures, with their intuitiveness and naturalness, have become an important means of human-computer interaction. They free users from the bondage of the traditional keyboard, mouse and so on, are more in line with human habits, and therefore have very broad application prospects. In this paper, gesture recognition is chosen as the research subject.
APA, Harvard, Vancouver, ISO, and other styles
48

Kim, Jeonghyeon, Jung-Hoon Ahn, and Youngwon Kim. "Immersive Interaction for Inclusive Virtual Reality Navigation: Enhancing Accessibility for Socially Underprivileged Users." Electronics 14, no. 5 (2025): 1046. https://doi.org/10.3390/electronics14051046.

Full text
Abstract:
Existing virtual reality (VR) street view and 360-degree road view applications often rely on complex controllers or touch interfaces, which can hinder user immersion and accessibility. These challenges are particularly pronounced for under-represented populations, such as older adults and individuals with limited familiarity with digital devices. Such groups frequently face physical or environmental constraints that restrict their ability to engage in outdoor activities, highlighting the need for alternative methods of experiencing the world through virtual environments. To address this issue, we propose a VR street view application featuring an intuitive, gesture-based interface designed to simplify user interaction and enhance accessibility for socially disadvantaged individuals. Our approach seeks to optimize digital accessibility by reducing barriers to entry, increasing user immersion, and facilitating a more inclusive virtual exploration experience. Through usability testing and iterative design, this study evaluates the effectiveness of gesture-based interactions in improving accessibility and engagement. The findings emphasize the importance of user-centered design in fostering an inclusive VR environment that accommodates diverse needs and abilities.
APA, Harvard, Vancouver, ISO, and other styles
49

Weng Chou, Ka, Albert Quek, and Hui Hong Yew. "Onmyouji: Gesture-based virtual reality game." International Journal of Engineering & Technology 7, no. 2.14 (2018): 110. http://dx.doi.org/10.14419/ijet.v7i2.14.11465.

Full text
Abstract:
With the emergence of Virtual Reality (VR), the user’s senses are taken into consideration in achieving natural and intuitive human-computer interaction (HCI), as well as more effective data interaction. The conventional keyboard-and-mouse interface and control methods for VR no longer sufficiently handle the richness of the information or facilitate intuitive user interaction in the virtual environment. There are alternative controllers, such as the motion controllers used in games, but these burden users by requiring them to hold a controller during gameplay. Hand gestures are a powerful tool for communication among humans, and hand gesture recognition has the potential to achieve the naturalness and intuitiveness needed for HCI. Here, we propose Onmyouji, a VR game that implements a set of simple gestures with the Leap Motion controller. Onmyouji emphasizes the VR experience, utilizing hand gestures to move and cast attacks as the player controls. A set of gestures is designed to fulfill the naturalness and intuitiveness of the controls in the game. In a qualitative user evaluation with a target interest group, most participants evaluated the intuitive gestures and gameplay positively, and all participants rated the overall game experience 4 or 5 out of 5.
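A minimal sketch of the kind of gesture-to-control layer such a game might use, routing recognised hand gestures to movement and attack actions; the gesture labels and the Player API below are hypothetical, not the game's actual code.

class Player:
    def move(self, direction: str) -> None:
        print(f"move {direction}")

    def cast_attack(self, kind: str) -> None:
        print(f"cast {kind}")

# One handler per recognised gesture label from the hand-tracking layer.
GESTURE_ACTIONS = {
    "palm_forward": lambda p: p.move("forward"),
    "palm_left":    lambda p: p.move("left"),
    "fist_open":    lambda p: p.cast_attack("fire_talisman"),
}

def on_gesture(label: str, player: Player) -> None:
    """Called once per recognised gesture; unknown labels are ignored."""
    action = GESTURE_ACTIONS.get(label)
    if action:
        action(player)

on_gesture("fist_open", Player())   # prints "cast fire_talisman"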
APA, Harvard, Vancouver, ISO, and other styles
50

Kettebekov, Sanshzar, and Rajeev Sharma. "Understanding Gestures in Multimodal Human Computer Interaction." International Journal on Artificial Intelligence Tools 09, no. 02 (2000): 205–23. http://dx.doi.org/10.1142/s021821300000015x.

Full text
Abstract:
In recent years, because of advances in computer vision research, free-hand gestures have been explored as a means of human-computer interaction (HCI). Gestures in combination with speech can be an important step toward natural, multimodal HCI. However, the interpretation of gestures in a multimodal setting can be a particularly challenging problem. In this paper, we propose an approach for studying multimodal HCI in the context of a computerized map. An implemented testbed allows us to conduct user studies and address issues in understanding hand gestures in a multimodal computer interface. The absence of an adequate gesture classification in HCI makes gesture interpretation difficult. We formalize a method for bootstrapping the interpretation process through a semantic classification of gesture primitives in the HCI context. We distinguish two main categories of gesture classes based on their spatio-temporal deixis. The results of user studies revealed that gesture primitives, originally extracted from weather map narration, form patterns of co-occurrence with speech parts in association with their meaning in a visual display control system. The results of these studies indicated two levels of gesture meaning: individual stroke and motion complex. These findings define a direction for approaching interpretation in natural gesture-speech interfaces.
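As an illustration of the gesture-speech co-occurrence idea the paper studies — pairing a deictic stroke with the spoken words that overlap it in time — here is a minimal Python sketch. The data shapes and the slack window are assumptions, not the authors' method.

from dataclasses import dataclass
from typing import List

@dataclass
class Stroke:                 # one gesture primitive
    kind: str                 # e.g. "point" or "contour"
    t_start: float
    t_end: float

@dataclass
class Word:
    text: str
    t_start: float
    t_end: float

def co_occurring_words(stroke: Stroke, words: List[Word], slack: float = 0.3) -> List[str]:
    """Words whose timing overlaps the stroke, allowing a small slack (seconds)."""
    return [w.text for w in words
            if w.t_end >= stroke.t_start - slack and w.t_start <= stroke.t_end + slack]

words = [Word("show", 0.0, 0.3), Word("this", 0.35, 0.6), Word("region", 0.65, 1.0)]
print(co_occurring_words(Stroke("point", 0.7, 0.9), words))   # ['this', 'region']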
APA, Harvard, Vancouver, ISO, and other styles