Academic literature on the topic 'Gesture-Based User Interface'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Gesture-Based User Interface.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Gesture-Based User Interface"

1

Elmagrouni, Issam, Abdelaziz Ettaoufik, Siham Aouad, and Abderrahim Maizate. "Approach for Improving User Interface Based on Gesture Recognition." E3S Web of Conferences 297 (2021): 01030. http://dx.doi.org/10.1051/e3sconf/202129701030.

Abstract:
Gesture recognition based on visual detection acquires gesture information in a non-contact manner. There are two types of gesture recognition: isolated and continuous. The former aims to classify videos or other gesture sequences (e.g., RGB-D or skeleton data) that contain only one isolated gesture instance per sequence. In this study, we review existing methods of visual gesture recognition, grouped into the following families: static, dynamic, device-based (Kinect, Leap Motion, etc.), work focusing on the application of gesture recognition to robots, and work dealing with gesture recognition at the browser level. Following that, we survey the most common JavaScript-based deep learning frameworks. We then present the idea of defining a process for improving user interface control based on gesture recognition, to streamline the implementation of this mechanism.
2

Kirmizibayrak, Can, Nadezhda Radeva, Mike Wakid, John Philbeck, John Sibert, and James Hahn. "Evaluation of Gesture Based Interfaces for Medical Volume Visualization Tasks." International Journal of Virtual Reality 11, no. 2 (2012): 1–13. http://dx.doi.org/10.20870/ijvr.2012.11.2.2839.

Abstract:
Interactive systems are increasingly used in medical applications with the widespread availability of various imaging modalities. Gesture-based interfaces can be beneficial for interacting with these kinds of systems in a variety of settings, as they can be easier to learn and can eliminate several shortcomings of traditional tactile systems, especially for surgical applications. We conducted two user studies that explore different gesture-based interfaces for interaction with volume visualizations. The first experiment focused on rotation tasks, where the performance of the gesture-based interface (using Microsoft Kinect) was compared to using the mouse. The second experiment studied localization of internal structures, comparing slice-based visualizations via gestures and the mouse, in addition to a 3D Magic Lens visualization. The results showed that the gesture-based interface outperformed the traditional mouse in both time and accuracy in the orientation matching task. The traditional mouse was the superior interface for the second experiment in terms of accuracy; however, the gesture-based Magic Lens interface was found to have the fastest target localization time. We discuss these findings and their implications for the use of gesture-based interfaces in medical volume visualization, and consider the psychological mechanisms that may explain why these methods can outperform traditional interaction methods.
3

Ryumin, Dmitry, Ildar Kagirov, Alexandr Axyonov, et al. "A Multimodal User Interface for an Assistive Robotic Shopping Cart." Electronics 9, no. 12 (2020): 2093. http://dx.doi.org/10.3390/electronics9122093.

Abstract:
This paper presents the research and development of a prototype of the assistive mobile information robot (AMIR). The main features of the presented prototype are voice and gesture-based interfaces with Russian speech and sign language recognition and synthesis techniques, and a high degree of robot autonomy. The AMIR prototype is intended to be used as a robotic cart for shopping in grocery stores and supermarkets. Among the main topics covered in this paper are the presentation of the interface (three modalities), the single-handed gesture recognition system (based on a collected database of Russian sign language elements), and the technical description of the robotic platform (architecture, navigation algorithm). The use of multimodal interfaces, namely the speech and gesture modalities, makes human-robot interaction natural and intuitive, and sign language recognition allows hearing-impaired people to use this robotic cart. The AMIR prototype has promising prospects for real-world use in supermarkets, both due to its assistive capabilities and its multimodal user interface.
4

Yoon, Hoon, Hojeong Im, Seonha Chung, and Taeha Yi. "Exploring Preferential Ring-Based Gesture Interaction Across 2D Screen and Spatial Interface Environments." Applied Sciences 15, no. 12 (2025): 6879. https://doi.org/10.3390/app15126879.

Abstract:
As gesture-based interactions expand across traditional 2D screens and immersive XR platforms, designing intuitive input modalities tailored to specific contexts becomes increasingly essential. This study explores how users cognitively and experientially engage with gesture-based interactions in two distinct environments: a lean-back 2D television interface and an immersive XR spatial environment. A within-subject experimental design was employed, utilizing a gesture-recognizable smart ring to perform tasks using three gesture modalities: (a) Surface-Touch gesture, (b) mid-air gesture, and (c) micro finger-touch gesture. The results revealed clear, context-dependent user preferences; Surface-Touch gestures were preferred in the 2D context due to their controlled and pragmatic nature, whereas mid-air gestures were favored in the XR context for their immersive, intuitive qualities. Interestingly, longer gesture execution times did not consistently reduce user satisfaction, indicating that compatibility between the gesture modality and the interaction environment matters more than efficiency alone. This study concludes that successful gesture-based interface design must carefully consider the contextual alignment, highlighting the nuanced interplay among user expectations, environmental context, and gesture modality. Consequently, these findings provide practical considerations for designing Natural User Interfaces (NUIs) for various interaction contexts.
5

Lim, C. J., Nam-Hee Lee, Yun-Guen Jeong, and Seung-Il Heo. "Gesture based Natural User Interface for e-Training." Journal of the Ergonomics Society of Korea 31, no. 4 (2012): 577–83. http://dx.doi.org/10.5143/jesk.2012.31.4.577.

6

Laine, Teemu H., and Hae Jung Suk. "Investigating User Experience of an Immersive Virtual Reality Simulation Based on a Gesture-Based User Interface." Applied Sciences 14, no. 11 (2024): 4935. http://dx.doi.org/10.3390/app14114935.

Abstract:
The affordability of equipment and availability of development tools have made immersive virtual reality (VR) popular across research fields. Gesture-based user interfaces have emerged as an alternative to handheld controllers for interacting with the virtual world using hand gestures. Moreover, a common goal for many VR applications is to elicit a sense of presence in users. Previous research has identified many factors that facilitate the evocation of presence in users of immersive VR applications. We investigated the user experience of Four Seasons, an immersive virtual reality simulation in which the user interacts with a natural environment and animals with their hands using a gesture-based user interface (UI). We conducted a mixed-method user experience evaluation with 21 Korean adults (14 males, 7 females) who played Four Seasons. The participants filled in a questionnaire and answered interview questions regarding presence and experience with the gesture-based UI. The questionnaire results indicated high ratings for presence and the gesture-based UI, with some issues related to the realism of interaction and lack of sensory feedback. By analyzing the interview responses, we identified 23 potential presence factors and proposed a classification for organizing presence factors based on the internal–external and dynamic–static dimensions. Finally, we derived a set of design principles based on the potential presence factors and demonstrated their usefulness for the heuristic evaluation of existing gesture-based immersive VR experiences. The results of this study can be used for designing and evaluating presence-evoking gesture-based VR experiences.
7

Ahmed, Naveed, Hind Kharoub, Selma Manel Medjden, and Areej Alsaafin. "A Natural User Interface for 3D Animation Using Kinect." International Journal of Technology and Human Interaction 16, no. 4 (2020): 35–54. http://dx.doi.org/10.4018/ijthi.2020100103.

Abstract:
This article presents a new natural user interface for controlling and manipulating a 3D animation using the Kinect. The researchers design a number of gestures that allow the user to play, pause, forward, rewind, scale, and rotate the 3D animation. They also implement a traditional cursor-based interface and compare it with the natural user interface. Both interfaces are extensively evaluated via a user study in terms of both usability and user experience. Through both quantitative and qualitative evaluation, they show that a gesture-based natural user interface is the preferred method for controlling a 3D animation compared to a cursor-based interface. The natural user interface not only proved to be more efficient but also resulted in a more engaging and enjoyable user experience.
8

Wojciechowski, A. "Hand’s poses recognition as a mean of communication within natural user interfaces." Bulletin of the Polish Academy of Sciences: Technical Sciences 60, no. 2 (2012): 331–36. http://dx.doi.org/10.2478/v10175-012-0044-3.

Abstract:
Natural user interfaces (NUI) are the successor of the command line interfaces (CLI) and graphical user interfaces (GUI) so well known to computer users. The new natural approach is based on extensive tracking of human behavior, where hand tracking and gesture recognition seem to play the main roles in communication. This paper reviews common approaches to hand feature tracking and proposes a very effective contour-based method for recognizing hand poses, which can be used straightforwardly in a hand-based natural user interface. Its possible uses range from interaction with medical systems, through games, to communication support for impaired people.
9

Colli Alfaro, Jose Guillermo, and Ana Luisa Trejos. "User-Independent Hand Gesture Recognition Classification Models Using Sensor Fusion." Sensors 22, no. 4 (2022): 1321. http://dx.doi.org/10.3390/s22041321.

Abstract:
Recently, it has been proven that targeting motor impairments as early as possible while using wearable mechatronic devices for assisted therapy can improve rehabilitation outcomes. However, despite the advanced progress on control methods for wearable mechatronic devices, the need for a more natural interface that allows for better control remains. To address this issue, electromyography (EMG)-based gesture recognition systems have been studied as a potential solution for human–machine interface applications. Recent studies have focused on developing user-independent gesture recognition interfaces to reduce calibration times for new users. Unfortunately, given the stochastic nature of EMG signals, the performance of these interfaces is negatively impacted. To address this issue, this work presents a user-independent gesture classification method based on a sensor fusion technique that combines EMG data and inertial measurement unit (IMU) data. The Myo Armband was used to measure muscle activity and motion data from healthy subjects. Participants were asked to perform seven types of gestures in four different arm positions while using the Myo on their dominant limb. Data obtained from 22 participants were used to classify the gestures using three different classification methods. Overall, average classification accuracies in the range of 67.5–84.6% were obtained, with the Adaptive Least-Squares Support Vector Machine model obtaining accuracies as high as 92.9%. These results suggest that by using the proposed sensor fusion approach, it is possible to achieve a more natural interface that allows better control of wearable mechatronic devices during robot assisted therapies.
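The feature-level fusion the authors describe is straightforward to prototype. Below is a minimal sketch, not the paper's implementation: it assumes windowed EMG and IMU recordings are already available as NumPy arrays, uses classic hand-crafted features, and substitutes scikit-learn's standard SVC for the paper's Adaptive Least-Squares SVM.

```python
# Minimal sketch of feature-level EMG + IMU fusion for gesture
# classification. Data shapes and features are illustrative assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def emg_features(window: np.ndarray) -> np.ndarray:
    """Classic time-domain EMG features per channel: mean absolute
    value, waveform length, and zero-crossing count."""
    mav = np.mean(np.abs(window), axis=0)
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)
    zc = np.sum(np.diff(np.sign(window), axis=0) != 0, axis=0)
    return np.concatenate([mav, wl, zc])

def imu_features(window: np.ndarray) -> np.ndarray:
    """Mean and standard deviation of each IMU channel."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

def fuse(emg_win: np.ndarray, imu_win: np.ndarray) -> np.ndarray:
    # Feature-level fusion: concatenate the two feature vectors.
    return np.concatenate([emg_features(emg_win), imu_features(imu_win)])

# Synthetic stand-in data: 200 windows, 8 EMG and 10 IMU channels,
# 50 samples per window, seven gesture classes.
rng = np.random.default_rng(0)
emg = rng.standard_normal((200, 50, 8))
imu = rng.standard_normal((200, 50, 10))
labels = rng.integers(0, 7, size=200)

X = np.array([fuse(e, i) for e, i in zip(emg, imu)])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```

The key step is the concatenation: each gesture window contributes one fused feature vector combining both modalities before classification.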
10

Bailey, Shannon K. T., and Cheryl I. Johnson. "Performance on a Natural User Interface Task is Correlated with Higher Gesture Production." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 63, no. 1 (2019): 1384–88. http://dx.doi.org/10.1177/1071181319631181.

Abstract:
The study examined whether the individual differences of gesture production and attitudes toward gesturing were related to performance on a gesture-based natural user interface. Participants completed a lesson using gesture interactions and were measured on how long it took to complete the lesson, their reported mental effort, and how much they learned during the lesson. The Brief Assessment of Gestures survey was used to determine different dimensions of a participant’s predisposition to gesture, with four subscales: Perception, Production, Social Production, and Social Perception. Only an individual’s propensity to produce gestures was related to higher learning outcomes from the computer-based lesson.

Dissertations / Theses on the topic "Gesture-Based User Interface"

1

Kolagani, Vijay Kumar. "Gesture Based Human-Computer Interaction with Natural User Interface." University of Akron / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=akron1542601474940954.

2

Bernard, Arnaud Jean Marc. "Human computer interface based on hand gesture recognition." Thesis, Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/42748.

Abstract:
With the improvement of multimedia technologies such as broadband-enabled HDTV, video on demand, and internet TV, the computer and the TV are merging into a single device. Moreover, the previously cited technologies, as well as DVD and Blu-ray, can provide menu navigation and interactive content. The growing interest in video conferencing has led to the integration of the webcam into different devices such as laptops, cell phones, and even the TV set. Our approach is to use an embedded webcam directly to remotely control a TV set using hand gestures. Using specific gestures, a user is able to control the TV: a dedicated interface can then be used to select a TV channel, adjust volume, or browse videos from an online streaming server. This approach raises several challenges. The first is the use of a single webcam, which leads to a vision-based system: from the webcam alone, we need to recognize the hand and identify its gesture or trajectory. A TV set is usually installed in a living room, which implies constraints such as a potentially moving background and luminance changes. These issues are discussed, along with the methods developed to resolve them. Video browsing is one example of the use of gesture recognition; to illustrate another application, we developed a simple game controlled by hand gestures. The emergence of 3D TVs is allowing the development of 3D video conferencing, so we also consider the use of a stereo camera to recognize hand gestures.
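As a rough illustration of the vision-only setting this thesis describes, the sketch below segments a hand by skin color with OpenCV and tracks the centroid of the largest contour, the kind of front end a trajectory-based gesture recognizer would consume. The HSV thresholds are illustrative assumptions and would need tuning for living-room lighting.

```python
# Hedged sketch: skin-color hand segmentation from a single webcam.
# The HSV skin range below is an illustrative assumption.
import cv2
import numpy as np

SKIN_LOW = np.array([0, 40, 60], dtype=np.uint8)
SKIN_HIGH = np.array([25, 180, 255], dtype=np.uint8)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, SKIN_LOW, SKIN_HIGH)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        hand = max(contours, key=cv2.contourArea)
        m = cv2.moments(hand)
        if m["m00"] > 0:
            cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
            # A recognizer would accumulate (cx, cy) into a trajectory.
            cv2.circle(frame, (cx, cy), 8, (0, 255, 0), -1)
    cv2.imshow("hand", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```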
3

Parra, González Luis Otto. "gestUI: a model-driven method for including gesture-based interaction in user interfaces." Doctoral thesis, Universitat Politècnica de València, 2017. http://hdl.handle.net/10251/89090.

Abstract:
The research reported and discussed in this thesis represents a novel approach to defining custom gestures and including gesture-based interaction in the user interfaces of software systems, with the aim of helping to solve the problems found in the related literature on the development of gesture-based user interfaces. The research is conducted according to the Design Science methodology, which is based on the design and investigation of artefacts in a context. In this thesis, the new artefact is the model-driven method for including gesture-based interaction in user interfaces. This methodology considers two cycles: the main cycle is an engineering cycle, in which we design a model-driven method for including gesture-based interaction. The second is the research cycle, of which we define two: the first corresponds to the validation of the proposed method with an empirical evaluation, and the second to a technical action research study validating the method in an industrial context. Additionally, Design Science provides clues on how to conduct the research, be rigorous, and put scientific rules into practice; it has been key to organising our research and has helped us report our findings clearly. The thesis presents a theoretical framework introducing concepts related to the research performed, followed by a state of the art covering related work in three areas: Human-Computer Interaction, the model-driven paradigm in Human-Computer Interaction, and Empirical Software Engineering. The design and implementation of gestUI are presented following the model-driven paradigm and the Model-View-Controller design pattern. We then performed two evaluations of gestUI: (i) an empirical evaluation based on ISO 25062-2006 to evaluate usability in terms of effectiveness, efficiency, and satisfaction, with satisfaction measured via perceived ease of use, perceived usefulness, and intention to use; and (ii) a technical action research study to evaluate user experience and usability, using the Model Evaluation Method, the User Experience Questionnaire, and Microsoft Reaction Cards as guides. The contributions of the thesis, the limitations of the approach and its tool support, and further work are discussed.
4

Milivojevic, Mladen. "Gesture-based interaction for Centralized Traffic Control." Thesis, KTH, Medieteknik och interaktionsdesign, MID, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-196815.

Abstract:
Ever wondered how trains arrive and depart on time? Traffic control systems help control the traffic flow, with many operators monitoring and taking action when necessary. Operators of these systems currently use a keyboard and a mouse to interact. The current user interface and work setup suffer from many usability issues that could be addressed to increase operators' efficiency and productivity in controlling the traffic. Interviews with users of the system and research on related topics led to a newly proposed interaction design and user interface, as well as some suggestions for increasing productivity. Gesture-based interaction is introduced and simulated for traffic control systems with the aim of improving operation. Various gestures, such as panning, zooming, and hovering over the map, are designed using the Leap Motion controller, which enables intuitive interaction. These gestures aim to solve usability issues identified during the interviews with users. The project aims to answer the following question: can gesture-based interaction solve usability issues and establish intuitive use of the CTC system? The exploratory research on this topic involved designing, implementing, and testing hand gestures with users. From an ergonomic perspective, the operator's body posture and hand position were examined, and sit-to-stand workstations are suggested to reduce pain and discomfort while working. Gesture-based interaction eliminates searching for the mouse cursor on large screens, enables fast requests for detailed information, and provides a better overview of the map surroundings. Laboratory tests confirm that gesture-based interaction brings more natural and intuitive use of traffic control systems. Gesture-based interaction has great potential to increase usability and enable efficient control for operators, reducing train delays and maintaining a safe traffic flow.
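To make the interaction concrete, here is a hedged sketch of how tracked hand state might be mapped to the pan, zoom, and hover commands described above. It deliberately avoids the Leap Motion SDK's own bindings; the HandSample fields, gains, and thresholds are illustrative assumptions, not the thesis's implementation.

```python
# Hedged sketch of mapping tracked hand state to map commands
# (pan / zoom / hover), independent of any particular SDK.
from dataclasses import dataclass

@dataclass
class HandSample:
    palm_x: float      # palm position, millimetres (assumed units)
    palm_y: float
    pinch: float       # 0.0 = open hand, 1.0 = full pinch

PAN_GAIN = 2.0         # screen pixels per mm of palm travel (assumed)
ZOOM_PINCH_ON = 0.8    # pinch strength that arms zooming (assumed)

def interpret(prev: HandSample, cur: HandSample) -> tuple[str, float, float]:
    """Turn two consecutive hand samples into a (command, dx, dy) tuple."""
    dx = (cur.palm_x - prev.palm_x) * PAN_GAIN
    dy = (cur.palm_y - prev.palm_y) * PAN_GAIN
    if cur.pinch >= ZOOM_PINCH_ON:
        # Vertical palm travel while pinching zooms the map.
        return ("zoom", 0.0, dy)
    if abs(dx) + abs(dy) > 1.0:
        return ("pan", dx, dy)
    return ("hover", 0.0, 0.0)   # stationary open hand: request details

print(interpret(HandSample(0, 0, 0.0), HandSample(10, -4, 0.0)))
# -> ('pan', 20.0, -8.0)
```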
5

Kuhlman, Lane M. "Gesture Mapping for Interaction Design: An Investigative Process for Developing Interactive Gesture Libraries." The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1244003264.

6

Profita, Halley P. "Social acceptability of wearable technology use in public: an exploration of the societal perceptions of a gesture-based mobile textile interface." Thesis, Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/44785.

Abstract:
Textile forms of wearable technology offer the potential for users to interact with electronic devices in a whole new manner. However, the operation of a wearable system can result in non-traditional on-body interactions (including gestural commands) that users may not be comfortable with performing in a public setting. Understanding the societal perceptions of gesture-based interactions will ultimately impact how readily a new form of mobile technology will be adopted within society. The goal of this research is to assess the social acceptability of a user's interaction with an electronic textile wearable interface. Two means of interaction were studied: the first was to assess the most acceptable input method for the interface (tapping, sliding, circular rotation); and the second assessment was to measure the social acceptability of a user interacting with the detachable textile interface at different locations on the body. The study recruited participants who strictly identified themselves as being of American nationality so as to gain insight into the culture-specific perceptions of interacting with a wearable form of technology.
7

Loachamín, Valencia Mauricio Renán. "Natural user interfaces and smart devices for the assessment of spatial memory using auditory stimuli." Doctoral thesis, Universitat Politècnica de València, 2019. http://hdl.handle.net/10251/107955.

Abstract:
In this thesis, the main objective was to design and develop a new task that combines Natural User Interfaces (NUI) and smart devices for assessing spatial memory using auditory stimuli, and to validate it in both children and adults. The new task tests the ability of participants to detect and localize auditory stimuli emitted at different positions in the task area. The task recognizes the movements of the user's arms using Kinect. Smart devices (Karotz rabbits) are used to emit auditory stimuli and also serve as visual cues. The task therefore combines auditory stimuli with real visual cues for the assessment of spatial memory. It includes a total of 45 acoustic stimuli, emitted randomly at different locations, and is composed of five levels of three trials each; the levels differ in the number of sounds used in each trial. To our knowledge, our task is the first work that combines NUI and smart devices for the assessment of spatial memory, and the first that uses auditory stimuli to assess spatial memory. For the validation, three studies were carried out to determine the efficacy and utility of the task with regard to performance outcomes, usability, fun, perception, and overall satisfaction, comparing its performance with traditional methods. The first study involved children with and without symptoms of inattention: a total of 34 children participated (17 with inattention). The children with inattention showed statistically worse performance in the task, and also in the traditional method for testing the learning of verbal sounds; there were no statistically significant differences in the time each group took to complete the task. The results suggest that the task is a good tool for distinguishing spatial memory difficulties in children with inattention. The second study compared performance between older children and adults, involving 70 participants: 32 healthy children aged 9 to 10 and 38 healthy adults aged 18 to 28. Performance outcomes were significantly lower for the older children. Correlations were found between our task and traditional methods, indicating that the task is a valid tool for assessing spatial memory using auditory stimuli in both older children and adults; the older children were significantly more satisfied with the task than the adults. The third study involved 148 participants (100 children and 48 adults), distributed in three groups (younger children, older children, and adults). The results are in line with the second study: task performance was significantly, incrementally, and directly related to age group (younger children < older children < adults), consistent with the idea that adults can store more elements in short-term memory than children. The following general conclusions were drawn from the development and the three studies: natural user interfaces and smart devices are appropriate for developing tasks for the assessment of spatial memory; as a computer-based game, the task facilitates control of stimulus presentation and the recording of responses; the task and similar tasks could be used for assessing and training spatial memory in children and adults; the task could be an alternative tool for assessing spatial memory in children with symptoms of inattention; the task promotes engagement and allows assessment in an ecological way; and the task could help identify alterations in spatial memory in both children and adults.
8

Antoniac, P. (Peter). "Augmented reality based user interface for mobile applications and services." Doctoral thesis, University of Oulu, 2005. http://urn.fi/urn:isbn:9514276965.

Abstract:
Traditional design of user interfaces for mobile phones is limited to a small interaction that provides only the necessary means to place phone calls or to write short messages. Such narrow activities supported via current terminals suppress users from moving towards mobile and ubiquitous computing environments of the future. Unfortunately, the next generation of user interfaces for mobile terminals seems to apply the same design patterns as commonly used for desktop computers. Whereas the desktop environment has enough resources to implement such design, capabilities of the mobile terminals fall under constraints dictated by mobility, like the size and weight. Additionally, to make mobile terminals available for everyone, users should be able to operate them with minimal or no preparation, while users of desktop computers will require certain degree of training. This research looks into how to improve the user interface of future mobile devices by using a more human-centred design. One possible solution is to combine the Augmented Reality technique with image recognition in such a way that it will allow the user to access a "virtualized interface". Such an interface is feasible since the user of an Augmented Reality system is able to see synthetic objects overlaying the real world. Overlaying the user's sight and using the image recognition process, the user interacts with the system using a combination of virtual buttons and hand gestures. The major contribution of this work is the definition of the user's gestures that makes it possible for human-computer interaction with such Augmented Reality based User Interfaces. Another important contribution is the evaluation on how mobile applications and services work with this kind of user interface and whether the technology is available to support it.
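One way to picture the "virtualized interface" described here is as hit-testing a tracked fingertip against virtual buttons overlaid on the camera view. The sketch below is a hypothetical illustration, not the thesis's method: the button layout and the dwell-to-press threshold are assumptions standing in for a proper gesture classifier.

```python
# Hedged sketch: hit-testing a tracked fingertip against virtual
# buttons overlaid on the camera view. Dwell time stands in for a
# click gesture; all values here are illustrative assumptions.
import time
from dataclasses import dataclass

@dataclass
class VirtualButton:
    label: str
    x: int
    y: int
    w: int
    h: int
    dwell_start: float | None = None

    def contains(self, px: int, py: int) -> bool:
        return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h

DWELL_SECONDS = 0.6   # how long a fingertip must rest to "press"

def update(buttons: list[VirtualButton], fingertip: tuple[int, int]) -> str | None:
    """Feed one (x, y) fingertip sample; return the label of a pressed
    button, or None if no press has completed yet."""
    px, py = fingertip
    for b in buttons:
        if b.contains(px, py):
            if b.dwell_start is None:
                b.dwell_start = time.monotonic()
            elif time.monotonic() - b.dwell_start >= DWELL_SECONDS:
                b.dwell_start = None
                return b.label
        else:
            b.dwell_start = None
    return None
```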
9

Abi-Rached, Habib. "Stereo-based hand gesture tracking and recognition in immersive stereoscopic displays." Thesis, University of Washington, 2006. http://hdl.handle.net/1773/6012.

10

Wang, Chong (王翀). "Joint color-depth restoration with kinect depth camera and its applications to image-based rendering and hand gesture recognition." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2014. http://hdl.handle.net/10722/206343.


Book chapters on the topic "Gesture-Based User Interface"

1

Kwak, Jeonghoon, and Yunsick Sung. "Gesture-Based User Interface Design for UAV Controls." In Advances in Computer Science and Ubiquitous Computing. Springer Singapore, 2017. http://dx.doi.org/10.1007/978-981-10-7605-3_157.

2

Park, HyeSun, EunYi Kim, SangSu Jang, and HangJoon Kim. "An HMM Based Gesture Recognition for Perceptual User Interface." In Advances in Multimedia Information Processing - PCM 2004. Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-30542-2_126.

3

Dehankar, A. V., Sanjeev Jain, and V. M. Thakare. "Hand Gesture-Based User Interface for Controlling Mobile Applications." In Lecture Notes in Electrical Engineering. Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-1906-8_59.

4

Iqbal, M. Mohamed, Chintam Sravan Kumar, Mukkoti Maruthi Venkata Chalapathi, A. Vijaya Krishna, and P. Purushotham. "Gesture Based User Interface Access Using Convolution Neural Network." In Cognitive Science and Technology. Springer Nature Singapore, 2025. https://doi.org/10.1007/978-981-97-8533-9_23.

5

Pirker, Johanna, Mathias Pojer, Andreas Holzinger, and Christian Gütl. "Gesture-Based Interactions in Video Games with the Leap Motion Controller." In Human-Computer Interaction. User Interface Design, Development and Multimodality. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-58071-5_47.

6

Kagirov, Ildar, Dmitry Ryumin, and Miloš Železný. "Gesture-Based Intelligent User Interface for Control of an Assistive Mobile Information Robot." In Lecture Notes in Computer Science. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-60337-3_13.

7

Lim, C. J., and Y. G. Jung. "A Study on the Usability Testing of Gesture Tracking-Based Natural User Interface." In Communications in Computer and Information Science. Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-39473-7_29.

8

Htet Kyaw, Alexander, Lawson Spencer, Sasa Zivkovic, and Leslie Lok. "Gesture Recognition for Feedback Based Mixed Reality and Robotic Fabrication: A Case Study of the UnLog Tower." In Computational Design and Robotic Fabrication. Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-99-8405-3_28.

Abstract:
Mixed Reality (MR) platforms enable users to interact with three-dimensional holographic instructions during the assembly and fabrication of highly custom and parametric architectural constructions without the necessity of two-dimensional drawings. Previous MR fabrication projects have primarily relied on digital menus and custom buttons as the interface for user interaction with the MR environment. Despite this approach being widely adopted, it is limited in its ability to allow for direct human interaction with physical objects to modify fabrication instructions within the MR environment. This research integrates user interactions with physical objects through real-time gesture recognition as input to modify, update, or generate new digital information, enabling reciprocal stimuli between the physical and the virtual environment. Consequently, the digital environment is generative of the user's provided interaction with physical objects to allow seamless feedback in the fabrication process. This research investigates gesture recognition for feedback-based MR workflows for robotic fabrication, human assembly, and quality control in the construction of the UnLog Tower.
9

Gölz, Jacqueline, and Christian Hatzfeld. "Sensor Design." In Springer Series on Touch and Haptic Systems. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04536-3_10.

Abstract:
Multiple sensors are applied in haptic device designs. Even if they are not closed-loop controlled in a narrow sense of force or torque generation, they are used to detect movement ranges and limits, or to detect the presence of a user and the type of interaction with an object or human-machine interface (HMI). Almost any type of technical sensor has been applied in the context of haptic devices. The emerging market for gesture-based user interaction and the integration of haptics for ergonomic reasons extend the range of sensors potentially relevant for haptic devices. However, what exactly is a sensor? Which is the right one for your purpose, and is there a systematic way to choose it? To support you in answering these fundamental questions, a classification of sensors is helpful. This chapter starts with a definition and classifications according to measurand and sensing principle. Constraints you will have to focus on are discussed, and selection criteria are deduced. An introduction to technologies and design principles for mechanical sensors serves as an overview for your selection process. Common types of force/torque, position, velocity, and acceleration sensors are presented. Furthermore, imaging and temperature sensors are addressed briefly.
10

Kratky, Andreas. "Gesture-Based User Interfaces for Public Spaces." In Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-21663-3_61.


Conference papers on the topic "Gesture-Based User Interface"

1

Alvarado García, Gabriel Alfredo, and Fávell Eduardo Núñez Rodríguez. "Gesture-Based Control of an OMRON Viper 650 Robot." In I Conferencia Internacional de Ciencia, Tecnología e Innovación. Trans Tech Publications Ltd, 2024. http://dx.doi.org/10.4028/p-ag7cow.

Abstract:
This project focuses on developing and implementing a robotic control system based on detecting signs and gestures using computer vision. The main goal was to create an intuitive and efficient interface for interacting with an OMRON Viper 650 industrial robot. To achieve this, computer vision technologies like Mediapipe and OpenCV were used to detect and recognize the user’s hands and fingers in real-time. The collected data was processed with a Python script and stored in a text file. Additionally, a program was developed in C# using OMRON’s ACE programming interface to extract data from the text file and send commands to the Viper 650 robot, enabling it to interpret the user’s gestures and perform actions accordingly. This project has successfully created an innovative solution that combines computer vision, programming, and industrial robotics to provide an intuitive and efficient control experience, opening up new possibilities in industrial and human-robot interaction applications.
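The abstract outlines a concrete handoff: Python detects the hand with Mediapipe and OpenCV, writes the result to a text file, and a separate C# ACE program reads that file to drive the robot. A minimal sketch of the Python side might look like the following; the finger-counting gesture rule and the file name are illustrative assumptions, not the authors' code.

```python
# Hedged sketch of the Python side of the pipeline: detect a hand with
# MediaPipe, derive a crude gesture label, and append it to a text file
# for another process (here, the C# ACE program) to consume. The
# finger-counting rule and GESTURE_FILE path are assumptions.
import cv2
import mediapipe as mp

GESTURE_FILE = "gestures.txt"   # hypothetical handoff file
FINGER_TIPS = [8, 12, 16, 20]   # index..pinky fingertip landmark ids

hands = mp.solutions.hands.Hands(max_num_hands=1,
                                 min_detection_confidence=0.7)
cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        lm = result.multi_hand_landmarks[0].landmark
        # A fingertip above its middle joint counts as extended
        # (image y grows downward, hence the '<').
        extended = sum(lm[tip].y < lm[tip - 2].y for tip in FINGER_TIPS)
        with open(GESTURE_FILE, "a") as f:
            f.write(f"fingers:{extended}\n")
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
hands.close()
```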
2

Dannenberg, R. B., and D. Amon. "A gesture based user interface prototyping system." In Proceedings of the 2nd Annual ACM SIGGRAPH Symposium on User Interface Software and Technology. ACM Press, 1989. http://dx.doi.org/10.1145/73660.73676.

3

Krastev, Georgi, and Ivan Ralev. "Gesture Based System for User Interface Control." In 2021 5th International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT). IEEE, 2021. http://dx.doi.org/10.1109/ismsit52890.2021.9604693.

4

Sunar, Noorhazirah, Soh Kai Jie, Nur Haliza Abd Wahab, M. F. Rahmat, Noraishah Muhammad, and Muhammad Yazid Zakwan Zuber. "Vision-based smart hand gesture user interface." In International Conference on Advances in Pure & Applied Mathematics (ICAPAM). AIP Publishing, 2025. https://doi.org/10.1063/5.0209567.

5

Sripradha K, Nikhil Kamath K, Nilesh Nayak H, and Ram P. Rustagi. "Dynamic resource management using gesture-based user interface." In 2014 20th Annual International Conference on Advanced Computing and Communications (ADCOM). IEEE, 2014. http://dx.doi.org/10.1109/adcom.2014.7103239.

6

Ohn-Bar, Eshed, Cuong Tran, and Mohan Trivedi. "Hand gesture-based visual user interface for infotainment." In Proceedings of the 4th International Conference on Automotive User Interfaces and Interactive Vehicular Applications. ACM Press, 2012. http://dx.doi.org/10.1145/2390256.2390274.

7

Lee, Jae-Ho, Hee-Kwon Kim, Seung-Woo Nam, and Hyoung-Shin Kim. "Application adaptation methods of gesture recognition based user interface." In 2013 19th Korea-Japan Joint Workshop on Frontiers of Computer Vision (FCV2013). IEEE, 2013. http://dx.doi.org/10.1109/fcv.2013.6485500.

8

Kusabuka, Takahiro, and Takuya Indo. "IBUKI: Gesture Input Method Based on Breathing." In UIST '20: The 33rd Annual ACM Symposium on User Interface Software and Technology. ACM, 2020. http://dx.doi.org/10.1145/3379350.3416134.

9

Shahi, Soroush, Vimal Mollyn, Cori Tymoszek Park, et al. "Vision-Based Hand Gesture Customization from a Single Demonstration." In UIST '24: The 37th Annual ACM Symposium on User Interface Software and Technology. ACM, 2024. http://dx.doi.org/10.1145/3654777.3676378.

10

An, Jun-ho, and Kwang-Seok Hong. "Finger gesture-based mobile user interface using a rear-facing camera." In 2011 IEEE International Conference on Consumer Electronics (ICCE). IEEE, 2011. http://dx.doi.org/10.1109/icce.2011.5722596.
