Journal articles on the topic 'Touchless user interface'

Consult the top 26 journal articles for your research on the topic 'Touchless user interface.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Jung, Il-Lyong, Nikolay Akatyev, Won-Dong Jang, and Chang-Su Kim. "Touchless User Interface for Real-Time Mobile Devices." Transactions of The Korean Institute of Electrical Engineers 60, no. 2 (February 1, 2011): 435–40. http://dx.doi.org/10.5370/kiee.2011.60.2.435.

2

Iannessi, Antoine, Pierre Yves Marcy, Olivier Clatz, Nicholas Ayache, and Pierre Fillard. "Touchless User Interface for Intraoperative Image Control: Almost There." RadioGraphics 34, no. 4 (July 2014): 1142–44. http://dx.doi.org/10.1148/rg.344135158.

3

Ma, Meng, Pascal Fallavollita, Séverine Habert, Simon Weidert, and Nassir Navab. "Device- and system-independent personal touchless user interface for operating rooms." International Journal of Computer Assisted Radiology and Surgery 11, no. 6 (March 16, 2016): 853–61. http://dx.doi.org/10.1007/s11548-016-1375-6.

4

Ruppert, Guilherme Cesar Soares, Leonardo Oliveira Reis, Paulo Henrique Junqueira Amorim, Thiago Franco de Moraes, and Jorge Vicente Lopes da Silva. "Touchless gesture user interface for interactive image visualization in urological surgery." World Journal of Urology 30, no. 5 (May 12, 2012): 687–91. http://dx.doi.org/10.1007/s00345-012-0879-0.

5

Ahmad, Bashar I., Chrisminder Hare, Harpreet Singh, Arber Shabani, Briana Lindsay, Lee Skrypchuk, Patrick Langdon, and Simon Godsill. "Touchless Selection Schemes for Intelligent Automotive User Interfaces With Predictive Mid-Air Touch." International Journal of Mobile Human Computer Interaction 11, no. 3 (July 2019): 18–39. http://dx.doi.org/10.4018/ijmhci.2019070102.

Abstract:
Predictive touch technology aims to improve the usability and performance of in-vehicle displays under the influence of perturbations due to the road and driving conditions. It fundamentally relies on predicting, early in the freehand pointing movement, the interface item the user intends to select, using a novel Bayesian inference framework. This article focusses on evaluating facilitation schemes for selecting the predicted interface component whilst driving, and without physically touching the display, thus touchless. Initially, several viable schemes were identified in a brainstorming session followed by an expert workshop with 12 participants. A simulator study with 24 participants using a prototype predictive touch system was then conducted. A number of collected quantitative and qualitative measures show that immediate mid-air selection, where the system autonomously auto-selects the predicted interface component, may be the most promising strategy for predictive touch.
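
The selection scheme described above can be illustrated with a short sketch: given per-frame posterior probabilities over on-screen items from an intent predictor, immediate mid-air selection auto-selects the top item once its probability crosses a confidence threshold. This is an illustrative sketch only; the function names, item labels and the 0.9 threshold are assumptions, not details from the paper.

```python
# Illustrative sketch only: the paper does not publish code. The predictor
# interface, item names, and the 0.9 confidence threshold are assumptions.
from typing import Dict, Optional

def immediate_mid_air_select(posteriors: Dict[str, float],
                             threshold: float = 0.9) -> Optional[str]:
    """Auto-select the predicted interface item as soon as the intent
    predictor's posterior for one item crosses a threshold, without the
    user having to touch the display."""
    item, p = max(posteriors.items(), key=lambda kv: kv[1])
    return item if p >= threshold else None

# Example: posteriors over on-screen items, e.g. produced each frame by a
# Bayesian intent predictor from the partial pointing trajectory.
frame_posteriors = {"volume_up": 0.93, "volume_down": 0.04, "nav_home": 0.03}
selected = immediate_mid_air_select(frame_posteriors)
if selected is not None:
    print(f"Auto-selected '{selected}' mid-air")  # trigger the item's action
```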
6

Ketabdar, Hamed, Amin Haji-Abolhassani, and Mehran Roshandel. "MagiThings." International Journal of Mobile Human Computer Interaction 5, no. 3 (July 2013): 23–41. http://dx.doi.org/10.4018/jmhci.2013070102.

Abstract:
The theory of around device interaction (ADI) has recently gained a lot of attention in the field of human computer interaction (HCI). As an alternative to classic data entry methods, such as keypads and touch screen interaction, ADI proposes a touchless user interface that extends beyond the peripheral area of a device. In this paper, the authors propose a new approach for around mobile device interaction based on the magnetic field. Our new approach, which we call “MagiThings”, takes advantage of the digital compass (a magnetometer) embedded in the new generation of mobile devices such as Apple’s iPhone 3GS/4G and Google’s Nexus. The user’s movements of a properly shaped magnet around the device deform the original magnetic field. The magnet is held or worn around the fingers. The changes made in the magnetic field pattern around the device constitute a new way of interacting with the device. Thus, the magnetic field encompassing the device plays the role of a communication channel and encodes the hand/finger movement patterns into temporal changes sensed by the compass sensor. The mobile device samples the momentary status of the field. The field changes, caused by hand (finger) gestures, are used as a basis for sending interaction commands to the device. The pattern of change is matched against pre-recorded templates or trained models to recognize a gesture. The proposed methodology has been successfully tested for a variety of applications such as interaction with the user interface of a mobile device, character (digit) entry, user authentication, gaming, and touchless mobile music synthesis. The experimental results show high accuracy in recognizing simple or complex gestures in a wide range of applications. The proposed method provides a practical and simple framework for touchless interaction with mobile devices relying only on an internally embedded sensor and a magnet.
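
The abstract states that the pattern of magnetic-field change is matched against pre-recorded templates or trained models, but does not give the matcher. The sketch below shows one plausible implementation of the template-matching step, using dynamic time warping (DTW) over 3-axis magnetometer traces; the DTW choice, the preprocessing and the function names are assumptions.

```python
# Hypothetical sketch of template matching for 3-axis magnetometer traces.
# DTW is one plausible matcher; the paper may use different models/features.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Dynamic time warping distance between two (T, 3) magnetometer traces."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return float(cost[n, m])

def recognize(trace: np.ndarray, templates: dict) -> str:
    """Return the label of the pre-recorded template closest to the trace."""
    # Remove the static ambient field so only the magnet's movement remains.
    trace = trace - trace.mean(axis=0)
    return min(templates, key=lambda label: dtw_distance(trace, templates[label]))
```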
7

Małecki, Krzysztof, Adam Nowosielski, and Mateusz Kowalicki. "Gesture-Based User Interface for Vehicle On-Board System: A Questionnaire and Research Approach." Applied Sciences 10, no. 18 (September 22, 2020): 6620. http://dx.doi.org/10.3390/app10186620.

Abstract:
Touchless interaction with electronic devices using gestures is gaining popularity and, along with speech-based communication, offers users natural and intuitive control methods. These interaction modes now go beyond the entertainment industry and are successfully applied in real-life scenarios such as the car interior. In the paper, we analyse the potential of hand gesture interaction in the vehicle environment for physically challenged drivers. A survey conducted with potential users shows that knowledge of gesture-based interaction and its practical use by people with disabilities is low. Based on these results we propose a gesture-based interface for a vehicle on-board system. It has been developed on top of available state-of-the-art solutions and investigated in terms of usability with a group of people with different physical limitations who drive a car on a daily basis, mostly using steering aid tools. The obtained results are compared with the performance of users without any disabilities.
8

Tan, Justin H., Cherng Chao, Mazen Zawaideh, Anne C. Roberts, and Thomas B. Kinney. "Informatics in Radiology: Developing a Touchless User Interface for Intraoperative Image Control during Interventional Radiology Procedures." RadioGraphics 33, no. 2 (March 2013): E61–E70. http://dx.doi.org/10.1148/rg.332125101.

9

Rosa, Guillermo M., and María L. Elizondo. "Use of a gesture user interface as a touchless image navigation system in dental surgery: Case series report." Imaging Science in Dentistry 44, no. 2 (2014): 155. http://dx.doi.org/10.5624/isd.2014.44.2.155.

10

Frid, Emma. "Accessible Digital Musical Instruments—A Review of Musical Interfaces in Inclusive Music Practice." Multimodal Technologies and Interaction 3, no. 3 (July 26, 2019): 57. http://dx.doi.org/10.3390/mti3030057.

Abstract:
Current advancements in music technology enable the creation of customized Digital Musical Instruments (DMIs). This paper presents a systematic review of Accessible Digital Musical Instruments (ADMIs) in inclusive music practice. The history of research concerned with facilitating inclusion in music-making is outlined, and the current state of developments and trends in the field is discussed. Although the use of music technology in music therapy contexts has attracted more attention in recent years, the topic has been relatively unexplored in the Computer Music literature. This review investigates a total of 113 publications focusing on ADMIs. Based on the 83 instruments in this dataset, ten control interface types were identified: tangible controllers, touchless controllers, Brain–Computer Music Interfaces (BCMIs), adapted instruments, wearable controllers or prosthetic devices, mouth-operated controllers, audio controllers, gaze controllers, touchscreen controllers and mouse-controlled interfaces. The majority of the ADMIs were tangible or physical controllers. Although the haptic modality could potentially play an important role in musical interaction for many user groups, relatively few of the ADMIs (14.5%) incorporated vibrotactile feedback. Aspects judged to be important for successful ADMI design were instrument adaptability and customization, user participation, iterative prototyping, and interdisciplinary development teams.
11

Jurewicz, Katherina A., David M. Neyens, Ken Catchpole, and Scott T. Reeves. "Developing a 3D Gestural Interface for Anesthesia-Related Human-Computer Interaction Tasks Using Both Experts and Novices." Human Factors: The Journal of the Human Factors and Ergonomics Society 60, no. 7 (June 15, 2018): 992–1007. http://dx.doi.org/10.1177/0018720818780544.

Abstract:
Objective: The purpose of this research was to compare gesture-function mappings for experts and novices using a 3D, vision-based, gestural input system when exposed to the same context of anesthesia tasks in the operating room (OR). Background: 3D, vision-based, gestural input systems can serve as a natural way to interact with computers and are potentially useful in sterile environments (e.g., ORs) to limit the spread of bacteria. Anesthesia providers’ hands have been linked to bacterial transfer in the OR, but a gestural input system for anesthetic tasks has not been investigated. Methods: A repeated-measures study was conducted with two cohorts: anesthesia providers (i.e., experts) (N = 16) and students (i.e., novices) (N = 30). Participants chose gestures for 10 anesthetic functions across three blocks to determine intuitive gesture-function mappings. Reaction time was collected as a complementary measure for understanding the mappings. Results: The two gesture-function mapping sets showed some similarities and differences. The gesture mappings of the anesthesia providers showed a relationship to physical components in the anesthesia environment that were not seen in the students’ gestures. The students also exhibited evidence related to longer reaction times compared to the anesthesia providers. Conclusion: Domain expertise is influential when creating gesture-function mappings. However, both experts and novices should be able to use a gesture system intuitively, so development methods need to be refined for considering the needs of different user groups. Application: The development of a touchless interface for perioperative anesthesia may reduce bacterial contamination and eventually offer a reduced risk of infection to patients.
12

Kurz, Marc, Robert Gstoettner, and Erik Sonnleitner. "Smart Rings vs. Smartwatches: Utilizing Motion Sensors for Gesture Recognition." Applied Sciences 11, no. 5 (February 25, 2021): 2015. http://dx.doi.org/10.3390/app11052015.

Abstract:
Since electronic components are constantly getting smaller and smaller, sensors and logic boards can be fitted into smaller enclosures. This miniaturization has led to the development of smart rings containing motion sensors. These sensors of smart rings can be used to recognize hand/finger gestures, enabling natural interaction. Unlike vision-based systems, wearable systems do not require a special infrastructure to operate in. Smart rings are highly mobile and are able to communicate wirelessly with various devices. They could potentially be used as a touchless user interface for countless applications, possibly leading to new developments in many areas of computer science and human–computer interaction. Specifically, the accelerometer and gyroscope sensors of a custom-built smart ring and of a smartwatch are used to train multiple machine learning models. The accuracy of the models is compared to evaluate whether smart rings or smartwatches are better suited for gesture recognition tasks. All the real-time data processing to predict 12 different gesture classes is done on a smartphone, which communicates wirelessly with the smart ring and the smartwatch. The system achieves accuracy scores of up to 98.8%, utilizing different machine learning models. Each machine learning model is trained with multiple different feature vectors in order to find optimal features for the gesture recognition task. A minimum accuracy threshold of 92% was derived from related research, to prove that the proposed system is able to compete with state-of-the-art solutions.
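
A minimal sketch of the kind of pipeline the abstract describes (windowed inertial data, hand-crafted feature vectors, a trained classifier, cross-validated accuracy) is shown below. The specific features, the random-forest classifier and the data shapes are assumptions, not the authors' configuration.

```python
# Illustrative pipeline, not the authors' configuration: feature set,
# classifier choice, and data shapes are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def extract_features(window: np.ndarray) -> np.ndarray:
    """window: (T, 6) samples of accel x/y/z and gyro x/y/z for one gesture.
    Returns simple per-axis statistics as one feature vector."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0),
                           window.min(axis=0), window.max(axis=0)])

def evaluate(gesture_windows: list, labels: list) -> float:
    """Cross-validated accuracy over the 12 gesture classes for one device
    (run once with smart-ring data and once with smartwatch data to compare)."""
    X = np.stack([extract_features(w) for w in gesture_windows])
    y = np.asarray(labels)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    return cross_val_score(clf, X, y, cv=5).mean()
```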
13

Kurschl, Werner, Mirjam Augstein, Thomas Burger, and Claudia Pointner. "User modeling for people with special needs." International Journal of Pervasive Computing and Communications 10, no. 3 (August 26, 2014): 313–36. http://dx.doi.org/10.1108/ijpcc-07-2014-0040.

Abstract:
Purpose – The purpose of this paper is to present an approach where a novel user modeling wizard for people with motor impairments is used to gain a deeper understanding of very specific (touch-based and touchless) interaction patterns. The findings are used to set up and fill a user model which allows an application- and user-specific configuration for natural user interfaces to be derived automatically. Design/methodology/approach – Based on expert knowledge in the domain of software/user interfaces for people with special needs, a test-case-based user modeling tool was developed. Task-based user tests were conducted with seven users for the touch-based interaction scenario and with five users for the touchless interaction scenario. The participants are all people with different motor and/or cognitive impairments. Findings – The paper describes the results of different test cases that were designed to model users’ touch-based and touchless interaction capabilities. To evaluate the tool’s findings, experts additionally judged the participants’ performance (their opinions were compared to the tool’s findings). The results suggest that the user modeling tool could capture users’ capabilities quite well. Social implications – The paper presents a tool that can be used to model users’ interaction capabilities. The approach aims at taking over some of the (very time-consuming) configuration tasks consultants have to do to configure software according to the needs of people with disabilities. This can lead to wider accessibility of software, especially in the area of gesture-based user interaction. Originality/value – Part of the approach has been published in the proceedings of the International Conference on Advances in Mobile Computing and Multimedia 2014. Significant additions have been made since (e.g. all of the touchless interaction part of the approach and the related user study).
14

Yoshida, Soichiro, Masaya Ito, Manabu Tatokoro, Minato Yokoyama, Junichiro Ishioka, Yoh Matsuoka, Noboru Numao, Kazutaka Saito, Yasuhisa Fujii, and Kazunori Kihara. "Multitask Imaging Monitor for Surgical Navigation: Combination of Touchless Interface and Head-Mounted Display." Urologia Internationalis 98, no. 4 (April 8, 2015): 486–88. http://dx.doi.org/10.1159/000381104.

Abstract:
As a result of the dramatic improvements in the resolution, wearability, and weight of head-mounted displays (HMDs), they have become increasingly applied in the medical field as personal imaging monitors. The combined use of a multiplexer with an HMD allows the wearer to simultaneously and seamlessly monitor multiple streams of imaging information through the HMD. We developed a multitask imaging monitor for surgical navigation by combining a touchless surgical imaging control system with an HMD. This system is composed of a standard color digital video camera mounted on the HMD and computer software that identifies the number of pictured fingertips in the video camera image. The HMD wearer uses this information as a touchless interface for operating the multiplexer, which can control the arrays and types of imaging information displayed on the HMD. We used this system in an experimental demonstration during a single-port gasless partial nephrectomy. The use of this multitask imaging monitor using a touchless interface would refine the surgical workflow, especially during surgical navigation.
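
The abstract says the software identifies the number of pictured fingertips but does not describe the vision algorithm. The sketch below shows one conventional way to count extended fingers with OpenCV (skin segmentation followed by convexity-defect analysis); the thresholds and the mapping from finger count to multiplexer layout are assumptions, not the authors' implementation.

```python
# Illustrative only: the paper does not specify its vision algorithm.
# Skin-colour thresholds, the defect-depth cut-off and the count-to-layout
# mapping are assumptions for this sketch.
import cv2
import numpy as np

def count_fingertips(frame_bgr: np.ndarray) -> int:
    """Rough count of raised fingers in a single camera frame."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 30, 60), (20, 150, 255))          # crude skin mask
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)                      # largest blob = hand
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    valleys = 0
    for s, e, f, depth in defects[:, 0]:
        start, end, far = hand[s][0], hand[e][0], hand[f][0]
        a = np.linalg.norm(end - start)
        b = np.linalg.norm(far - start)
        c = np.linalg.norm(end - far)
        cos_angle = (b ** 2 + c ** 2 - a ** 2) / (2 * b * c + 1e-6)
        angle = np.arccos(np.clip(cos_angle, -1.0, 1.0))
        if angle < np.pi / 2 and depth > 256 * 30:                 # deep, narrow valley between fingers
            valleys += 1
    return valleys + 1 if valleys else 0

# The count could then drive the multiplexer layout, e.g. one finger -> a single
# full-screen stream, two fingers -> two streams side by side (mapping assumed).
```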
15

Cronin, Seán, and Gavin Doherty. "Touchless computer interfaces in hospitals: A review." Health Informatics Journal 25, no. 4 (February 10, 2018): 1325–42. http://dx.doi.org/10.1177/1460458217748342.

Abstract:
The widespread use of technology in hospitals and the difficulty of sterilising computer controls has increased opportunities for the spread of pathogens. This leads to an interest in touchless user interfaces for computer systems. We present a review of touchless interaction with computer equipment in the hospital environment, based on a systematic search of the literature. Sterility provides an implied theme and motivation for the field as a whole, but other advantages, such as hands-busy settings, are also proposed. Overcoming hardware restrictions has been a major theme, but in recent research, technical difficulties have receded. Image navigation is the most frequently considered task and the operating room the most frequently considered environment. Gestures have been implemented for input, system and content control. Most of the studies found have small sample sizes and focus on feasibility, acceptability or gesture-recognition accuracy. We conclude this article with an agenda for future work.
16

Janczyk, Markus, Aiping Xiong, and Robert W. Proctor. "Stimulus-Response and Response-Effect Compatibility With Touchless Gestures and Moving Action Effects." Human Factors: The Journal of the Human Factors and Ergonomics Society 61, no. 8 (March 7, 2019): 1297–314. http://dx.doi.org/10.1177/0018720819831814.

Abstract:
Objective: To determine whether response-effect (R-E) compatibility or stimulus-response (S-R) compatibility is more critical for touchless gesture responses. Background: Content on displays can be moved in the same direction (S-R incompatible but R-E compatible) or opposite direction (S-R compatible but R-E incompatible) as the touchless gesture that produces the movement. Previous studies suggested that it is easier to produce a button-press response when it is R-E compatible (and S-R incompatible). However, whether this R-E compatibility effect also occurs for touchless gesture responses is unknown. Method: Experiments 1 and 2 employed an R-E compatibility manipulation in which participants made responses with an upward or downward touchless gesture that resulted in the display content moving in the same (compatible) or opposite (incompatible) direction. Experiment 3 employed an S-R compatibility manipulation in which the stimulus occurred at the upper or lower location on the screen. Results: Overall, only negligible influences of R-E compatibility on performing the touchless gestures were observed (in contrast to button-press responses), whereas S-R compatibility heavily affected the gestural responses. Conclusion: The R-E compatibility obtained in many previous studies with various types of responses appears not to hold for touchless gestures as responses. Application: The results suggest that in the design of touchless interfaces, unique factors may contribute to determining which mappings of gesture and display movements are preferred by users.
17

Ferri, Llopis, Moreno, Ibañez Civera, and Garcia-Breijo. "A Wearable Textile 3D Gesture Recognition Sensor Based on Screen-Printing Technology." Sensors 19, no. 23 (November 20, 2019): 5068. http://dx.doi.org/10.3390/s19235068.

Abstract:
Research has produced various solutions that allow computers to recognize hand gestures in the context of the human-machine interface (HMI). The design of a successful hand gesture recognition system must address functionality and usability. The gesture recognition market has evolved from touchpads to touchless sensors, which do not need direct contact. Their application in textiles ranges from medical environments to smart home applications and the automotive industry. In this paper, a textile capacitive touchless sensor has been developed using screen-printing technology. Two different designs were developed to obtain the best configuration, with good results in both cases. Finally, as a real application, a complete solution comprising the sensor with wireless communications is presented, to be used as an interface for a mobile phone.
18

Betancur, J. Alejandro, Nicolás Gómez, Mario Castro, Frederic Merienne, and Daniel Suárez. "User experience comparison among touchless, haptic and voice Head-Up Displays interfaces in automobiles." International Journal on Interactive Design and Manufacturing (IJIDeM) 12, no. 4 (June 25, 2018): 1469–79. http://dx.doi.org/10.1007/s12008-018-0498-0.

19

Chanci, Daniela, Naveen Madapana, Glebys Gonzalez, and Juan Wachs. "Correlation Between Gestures’ Qualitative Properties and Usability Metrics." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 64, no. 1 (December 2020): 726–30. http://dx.doi.org/10.1177/1071181320641168.

Abstract:
The choice of the best gestures and commands for touchless interfaces is a critical step that determines user satisfaction and the overall efficiency of surgeon-computer interaction. In this regard, usability metrics such as task completion time, error rate, and memorability have long been regarded as potential criteria for determining the best gesture vocabulary. In addition, some previous works concerned with this problem have utilized qualitative measures to identify the best gesture. In this work, we hypothesize that there is a correlation between the qualitative properties of gestures (v) and their usability metrics (u). Therefore, we conducted an experiment with linguists to quantify the properties of the gestures. Next, a user study was conducted with surgeons, and the usability metrics were measured. Lastly, linear and non-linear regression techniques were used to find the correlations between u and v. Results show that usability metrics are correlated with the gestures’ qualitative properties (R2 = 0.4).
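
A minimal sketch of the reported analysis style, fitting linear and non-linear regressions from gesture properties v to a usability metric u and reporting cross-validated R^2, is given below. The placeholder data, model choices and variable names are assumptions, not the study's exact setup.

```python
# Illustrative analysis sketch; the data here are placeholders and the
# non-linear model choice is an assumption, not the study's setup.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# v: qualitative gesture properties quantified by linguists (one row per gesture)
# u: a usability metric measured in the surgeon study (placeholder values here)
v = rng.random((40, 5))
u = v @ np.array([1.0, -0.5, 0.2, 0.0, 0.3]) + 0.1 * rng.standard_normal(40)

for model in (LinearRegression(), RandomForestRegressor(random_state=0)):
    r2 = cross_val_score(model, v, u, cv=5, scoring="r2").mean()
    print(type(model).__name__, "cross-validated R^2:", round(float(r2), 2))
```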
20

Alvarez-Lopez, Fernando, Marcelo Fabián Maina, and Francesc Saigí-Rubió. "Use of Commercial Off-The-Shelf Devices for the Detection of Manual Gestures in Surgery: Systematic Literature Review." Journal of Medical Internet Research 21, no. 5 (May 3, 2019): e11925. http://dx.doi.org/10.2196/11925.

Abstract:
Background The increasingly pervasive presence of technology in the operating room raises the need to study the interaction between the surgeon and computer system. A new generation of tools known as commercial off-the-shelf (COTS) devices enabling touchless gesture–based human-computer interaction is currently being explored as a solution in surgical environments. Objective The aim of this systematic literature review was to provide an account of the state of the art of COTS devices in the detection of manual gestures in surgery and to identify their use as a simulation tool for motor skills teaching in minimally invasive surgery (MIS). Methods For this systematic literature review, a search was conducted in PubMed, Excerpta Medica dataBASE, ScienceDirect, Espacenet, OpenGrey, and the Institute of Electrical and Electronics Engineers databases. Articles published between January 2000 and December 2017 on the use of COTS devices for gesture detection in surgical environments and in simulation for surgical skills learning in MIS were evaluated and selected. Results A total of 3180 studies were identified, 86 of which met the search selection criteria. Microsoft Kinect (Microsoft Corp) and the Leap Motion Controller (Leap Motion Inc) were the most widely used COTS devices. The most common intervention was image manipulation in surgical and interventional radiology environments, followed by interaction with virtual reality environments for educational or interventional purposes. The possibility of using this technology to develop portable low-cost simulators for skills learning in MIS was also examined. As most of the articles identified in this systematic review were proof-of-concept or prototype user testing and feasibility testing studies, we concluded that the field was still in the exploratory phase in areas requiring touchless manipulation within environments and settings that must adhere to asepsis and antisepsis protocols, such as angiography suites and operating rooms. Conclusions COTS devices applied to hand and instrument gesture–based interfaces in the field of simulation for skills learning and training in MIS could open up a promising field to achieve ubiquitous training and presurgical warm up.
21

Jurewicz, Katherina, and David M. Neyens. "Mapping 3D Gestural Inputs to Traditional Touchscreen Interface Designs within the Context of Anesthesiology." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 61, no. 1 (September 2017): 696–700. http://dx.doi.org/10.1177/1541931213601660.

Abstract:
Gestures are a natural means of everyday human-human communication, and with the advances in gestural input technology, there is an opportunity to investigate gestures as a means of communicating with computers and other devices. The primary benefit of gestural input technology is that it facilitates a touchless interaction, so the ideal market demand for this technology is an environment where touch needs to be minimized. Perfect examples of environments that discourage touch are sterile or clean environments, such as operating rooms (ORs). Healthcare-associated infections are a great burden to the healthcare system, and gestural input technology can decrease the number of surfaces, computers, and other devices that a healthcare provider comes in contact with, thus reducing the likelihood of bacterial contamination. The objective of this research was to map 3D gestural inputs to traditional touchscreen interface designs within the context of anesthesiology. An experimental study was conducted to elicit intuitive gestures from users and assess the cognitive complexity of ten typical functions of anesthesia providers. Intuitive gestures were observed in six out of the ten functions without any cognitive complexity concerns. Two of the remaining four functions demonstrated a higher-level gesture mapping with no cognitive complexity concerns. Overall, gestural input technology demonstrated promise for the ten functions of anesthesia providers in the operating room, and future research will continue investigating the application of gestural input technology for anesthesiology in the OR.
22

Benitez-Garcia, Gibran, Muhammad Haris, Yoshiyuki Tsuda, and Norimichi Ukita. "Finger Gesture Spotting from Long Sequences Based on Multi-Stream Recurrent Neural Networks." Sensors 20, no. 2 (January 18, 2020): 528. http://dx.doi.org/10.3390/s20020528.

Abstract:
Gesture spotting is an essential task for recognizing finger gestures used to control in-car touchless interfaces. Automated methods for this task must detect the video segments where gestures are observed, discard natural behaviors of users’ hands that may look like target gestures, and be able to work online. In this paper, we address these challenges with a recurrent neural architecture for online finger gesture spotting. We propose a multi-stream network merging hand and hand-location features, which help to discriminate target gestures from natural movements of the hand, since these may not happen in the same 3D spatial location. Our multi-stream recurrent neural network (RNN) recurrently learns semantic information, allowing gestures to be spotted online in long untrimmed video sequences. In order to validate our method, we collected a finger gesture dataset in an in-vehicle scenario of an autonomous car. 226 videos with more than 2100 continuous instances were captured with a depth sensor. On this dataset, our gesture spotting approach outperforms state-of-the-art methods with an improvement of about 10% and 15% in recall and precision, respectively. Furthermore, we demonstrate that by combining it with an existing gesture classifier (a 3D Convolutional Neural Network), our proposal achieves better performance than previous hand gesture recognition methods.
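
A compact sketch of a two-stream recurrent spotter in the spirit of the abstract (one stream for hand appearance features, one for 3D hand location, fused and classified frame by frame) is shown below. The layer sizes, feature dimensions and fusion by concatenation are assumptions rather than the paper's exact architecture.

```python
# Illustrative two-stream recurrent spotter; dimensions and fusion strategy
# are assumptions, not the paper's exact architecture.
import torch
import torch.nn as nn

class TwoStreamSpotter(nn.Module):
    def __init__(self, hand_dim=512, loc_dim=3, hidden=128, n_classes=2):
        super().__init__()
        self.hand_rnn = nn.GRU(hand_dim, hidden, batch_first=True)  # hand appearance features
        self.loc_rnn = nn.GRU(loc_dim, hidden, batch_first=True)    # 3D hand location
        self.head = nn.Linear(2 * hidden, n_classes)                 # gesture vs. no-gesture per frame

    def forward(self, hand_feats, hand_locs):
        # hand_feats: (B, T, hand_dim), hand_locs: (B, T, loc_dim)
        h1, _ = self.hand_rnn(hand_feats)
        h2, _ = self.loc_rnn(hand_locs)
        fused = torch.cat([h1, h2], dim=-1)
        return self.head(fused)          # (B, T, n_classes) frame-wise scores

# Online use: feed frames as they arrive, flag segments whose gesture-class
# score stays high, then pass those segments to a gesture classifier.
spotter = TwoStreamSpotter()
scores = spotter(torch.randn(1, 30, 512), torch.randn(1, 30, 3))
```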
23

Ruppert, Guilherme, Leonardo Reis, Paulo Amorim, Tiago Moraes, and Jorge Silva. "1515 TOUCHLESS GESTURE USER INTERFACE FOR INTERACTIVE IMAGE VISUALIZATION IN UROLOGICAL SURGERY." Journal of Urology 187, no. 4S (April 2012). http://dx.doi.org/10.1016/j.juro.2012.02.1282.

24

Gan, Runze, Jiaming Liang, Bashar I. Ahmad, and Simon Godsill. "Modeling intent and destination prediction within a Bayesian framework: Predictive touch as a usecase." Data-Centric Engineering 1 (2020). http://dx.doi.org/10.1017/dce.2020.11.

Abstract:
In various scenarios, the motion of a tracked object, for example, a pointing apparatus, pedestrian, animal, vehicle, and others, is driven by achieving a premeditated goal such as reaching a destination, despite the various possible trajectories to this endpoint. This paper presents a generic Bayesian framework that utilizes stochastic models that can capture the influence of intent (viz., destination) on the object behavior. It leads to simple algorithms to infer, as early as possible, the intended endpoint from noisy sensory observations, with relatively low computational and training data requirements. This framework is introduced in the context of the novel predictive touch technology for intelligent user interfaces and touchless interactions. It can determine, early in the interaction task or pointing gesture, the interface item the user intends to select on the display (e.g., touchscreen) and accordingly simplify as well as expedite the selection task. This is shown to significantly improve the usability of displays in vehicles, especially under the influence of perturbations due to road and driving conditions, and enable intuitive contact-free interactions. Data collected in instrumented vehicles are shown to demonstrate the effectiveness of the proposed intent prediction approach.
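
In generic terms, the kind of inference the abstract describes can be written as a Bayesian posterior over candidate destinations given the partial pointing trajectory, updated sequentially as observations arrive. The formulation below is a generic sketch of that idea only; the specific stochastic motion models used in the paper are not reproduced here.

```latex
% Generic Bayesian destination inference (sketch of the idea only; the paper's
% specific stochastic motion models p(x_k | x_{k-1}, d) are not reproduced).
\[
  p\left(d \mid x_{1:k}\right) \propto p\left(x_{1:k} \mid d\right)\, p(d),
  \qquad d \in \{d_1, \dots, d_N\},
\]
\[
  p\left(x_{1:k} \mid d\right) = p\left(x_k \mid x_{1:k-1}, d\right)\, p\left(x_{1:k-1} \mid d\right),
\]
% so the posterior over the N candidate endpoints can be updated recursively as
% each noisy pointing observation x_k arrives, and the predicted interface item
% is \hat{d} = \arg\max_d p(d \mid x_{1:k}).
```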
25

Kasprzak, J. D., M. Kierepka, J. Z. Peruga, D. Dudek, B. Machura, M. Stanuch, A. Zlahoda-Huzior, et al. "P4357 Implementation of interactive mixed reality display of three-dimensional echocardiography during percutaneous structural interventions." European Heart Journal 40, Supplement_1 (October 1, 2019). http://dx.doi.org/10.1093/eurheartj/ehz745.0764.

Abstract:
Background Three-dimensional (3D) echocardiographic data acquired from the transesophageal (TEE) window are commonly used in planning and during percutaneous structural cardiac interventions (PSCI). Purpose We hypothesized that an innovative, interactive mixed reality display can be integrated with the procedural PSCI workflow to improve perception and interpretation of 3D data representing cardiac anatomy. Methods 3D TEE datasets were acquired before, during and after the completion of PSCI in 8 patients (occluders: 2 atrial appendage, 2 patent foramen ovale and 3 atrial septal implantations and percutaneous mitral commissurotomy). 30 Cartesian DICOM files were used to test the feasibility of mixed reality with a commercially available head-mounted device (overlaying a hologram of 3D TEE data onto the real-world view) as a display for the interventional or imaging operator. Dedicated software was used for file conversion and 3D rendering of data to the display device (in 1 case real-time Wi-Fi streaming from the echocardiograph) and spatial manipulation of the hologram during PSCI. A custom viewer was used to perform volume rendering and adjustment (cropping, transparency and shading control). Results Pre- and intraprocedural 3D TEE was performed in all 8 patients (5 women, age 40–83). Thirty selected 3D TEE datasets were successfully transferred and displayed in the mixed reality head-mounted device as a holographic image overlying the real-world view. The analysis was performed both before and during the procedure and compared with the flatscreen 2-D display of the echocardiograph. In one case, real-time data transfer was successfully implemented during mitral balloon commissurotomy. The quality of visualization was judged as good without diagnostic content loss in all (100%) datasets. Both target structures and additional anatomical details were clearly presented, including fenestrations of the atrial septal defect, a prominent Eustachian valve and earlier cardiac implants. Volume rendered views were touchlessly manipulated and displayed with a selection of intensity windows, transfer functions, and filters. Detail display was judged comparable to current 2-D volume rendering on commercial workstations, and the touchless user interface was comfortable for optimization of views during PSCI. Conclusions A mixed reality display using a commercially available head-mounted device can be successfully integrated with the preparation and execution of PSCI. The benefits of this solution include touchless image control and unobstructed real-world viewing facilitating intraprocedural use, thus showing superiority over virtual or enhanced reality solutions. Expected progress includes integration of color flow data and optimization of the real-time streaming option.
26

Benkar, Archana, Abhishek Duduskar, Shivani Gandhamwar, and Prof P. A. More. "Gesture-Based Smart Switch." International Journal of Advanced Research in Science, Communication and Technology, June 11, 2021, 402–10. http://dx.doi.org/10.48175/ijarsct-1415.

Abstract:
The project aims at making old/dumb electric appliances smart so that they can be controlled remotely, enabling easy and touchless/contactless operation via software. In this era of COVID-19, a button is the most common interface for interacting with the digital world; it could be as simple as a light/fan switch. Our Smart Switch box can be used to replace existing switches in the home, which can produce sparks and, in a few situations, result in fire accidents. Considering the advantages of Wi-Fi, an advanced automation system was developed to control the appliances in the house.
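
The abstract does not specify the control protocol; the sketch below shows one common pattern for this kind of system, in which a recognized gesture triggers an HTTP request to a Wi-Fi relay module. The device address, endpoint and payload format are hypothetical, not taken from the paper.

```python
# Hypothetical pattern only: the device address, endpoint and payload format
# are invented for illustration; the paper does not specify a protocol.
import requests

RELAY_URL = "http://192.168.1.50/relay"   # assumed address of the Wi-Fi switch box

def on_gesture(gesture: str) -> None:
    """Map a recognized gesture to a relay command and send it over Wi-Fi."""
    commands = {"swipe_up": {"channel": 1, "state": "on"},
                "swipe_down": {"channel": 1, "state": "off"}}
    if gesture in commands:
        requests.post(RELAY_URL, json=commands[gesture], timeout=2)

on_gesture("swipe_up")   # e.g. turn the light on without touching the switch
```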