Academic literature on the topic 'Gesture Recognition'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Gesture Recognition.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Gesture Recognition"

1. Patil, Anuradha, and Chandrashekhar M. Tavade. "Methods on Real Time Gesture Recognition System." International Journal of Engineering & Technology 7, no. 3.12 (July 20, 2018): 982. http://dx.doi.org/10.14419/ijet.v7i3.12.17617.

Abstract:
Gesture recognition encompasses various methods, techniques, and the algorithms that implement them. It draws on simple, basic sign cues such as hand movements, lip position, and the positions of the eyeballs and eyelids. This paper compares methods for image capture, gesture recognition, gesture tracking, gesture segmentation, and smoothing, and weighs the advantages of different gesture recognition approaches and their applications. Gesture recognition is now widely used in the gaming industry, biomedical applications, and medical diagnostics for deaf and mute people. Owing to their wide applicability, high efficiency, high accuracy, and low cost, gestures are used in many applications, including robotics. To develop a gesture-based human-computer interaction (HCI) method, it is necessary to identify the proper, meaningful gesture among different gesture images. Gesture recognition also avoids costly hardware for capturing user activity; for example, the need for I/O devices such as a keyboard and mouse can be reduced.

2. Badagan, Sana, Deeksha R, K. Tarun Sai Teja, and Chetan J. "HAND GESTURE RECOGNITION." International Journal of Engineering Applied Sciences and Technology 8, no. 6 (October 1, 2023): 56–59. http://dx.doi.org/10.33564/ijeast.2023.v08i06.007.

Abstract:
Continuous and dynamic gesture recognition is a vital research area that aims to develop systems capable of interpreting and understanding hand gestures involving continuous motion and temporal dynamics. This project focuses on addressing the challenges associated with recognizing and analyzing gestures that go beyond static poses. By leveraging techniques such as temporal modeling, motion analysis, and deep learning, the goal is to develop algorithms and models that can robustly track and interpret the fluidity and expressiveness of human hand movements. The project aims to enhance the understanding of gesture sequencing, timing, and smooth transitions between different poses and gestures. The research outcomes will contribute to the advancement of intuitive human-computer interaction, enabling users to express themselves more naturally and seamlessly in applications such as virtual reality, gaming, and human-robot interaction. Through the use of relevant datasets and advanced algorithms, this project seeks to explore novel approaches in continuous and dynamic gesture recognition and pave the way for future advancements in this field.

3. Ma, Xianmin, and Xiaofeng Li. "Dynamic Gesture Contour Feature Extraction Method Using Residual Network Transfer Learning." Wireless Communications and Mobile Computing 2021 (October 13, 2021): 1–11. http://dx.doi.org/10.1155/2021/1503325.

Abstract:
Current dynamic gesture contour feature extraction methods suffer from a low recognition rate for contour features, low accuracy in recognizing gesture types, long recognition times, and poor overall performance. We therefore propose a dynamic gesture contour feature extraction method using residual network transfer learning. Sensors are used to integrate dynamic gesture information. The distance between the dynamic gesture and the acquisition device is detected via transfer learning, the dynamic gesture image is segmented, and the characteristic contour image is initialized. The residual network is used to accurately identify the contour and texture features of dynamic gestures. Fusion-processing weights trace the contour features frame by frame, and the contour area is converted to grayscale and binarized to extract the contour features. The results show that the proposed method achieves a contour feature recognition rate of 91%, a recognition time of 11.6 s, and a gesture-type recognition accuracy of 92%. The method thus effectively improves the recognition rate and type-recognition accuracy of dynamic gesture contour features and shortens recognition time, with an F value of 0.92 indicating good overall performance.
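
As a rough illustration of the grayscale-and-binarization contour step this abstract describes (a sketch only, not the authors' code; the Otsu threshold and the largest-area heuristic are assumptions):

```python
import cv2

def gesture_contour(gray):
    """Binarize a grayscale gesture frame and return its largest contour."""
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Assume the hand is the largest connected region in the frame.
    return max(contours, key=cv2.contourArea) if contours else None
```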

4. Jasim, Mahmood, Tao Zhang, and Md Hasanuzzaman. "A Real-Time Computer Vision-Based Static and Dynamic Hand Gesture Recognition System." International Journal of Image and Graphics 14, no. 01n02 (January 2014): 1450006. http://dx.doi.org/10.1142/s0219467814500065.

Abstract:
This paper presents a novel method for computer vision-based static and dynamic hand gesture recognition. Haar-like feature-based cascaded classifier is used for hand area segmentation. Static hand gestures are recognized using linear discriminant analysis (LDA) and local binary pattern (LBP)-based feature extraction methods. Static hand gestures are classified using nearest neighbor (NN) algorithm. Dynamic hand gestures are recognized using the novel text-based principal directional features (PDFs), which are generated from the segmented image sequences. Longest common subsequence (LCS) algorithm is used to classify the dynamic gestures. For testing, the Chinese numeral gesture dataset containing static hand poses and directional gesture dataset containing complex dynamic gestures are prepared. The mean accuracy of LDA-based static hand gesture recognition on the Chinese numeral gesture dataset is 92.42%. The mean accuracy of LBP-based static hand gesture recognition on the Chinese numeral gesture dataset is 87.23%. The mean accuracy of the novel dynamic hand gesture recognition method using PDF on directional gesture dataset is 94%.
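
The longest common subsequence (LCS) matcher used above for dynamic gestures is a classic dynamic-programming routine. A generic sketch (the direction-string encoding and the length normalization are illustrative assumptions, not the paper's exact setup):

```python
def lcs_length(s, t):
    """Length of the longest common subsequence of two symbol sequences."""
    dp = [[0] * (len(t) + 1) for _ in range(len(s) + 1)]
    for i, a in enumerate(s, 1):
        for j, b in enumerate(t, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if a == b else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(s)][len(t)]

def classify(query, templates):
    """templates: (label, direction_string) pairs; score by normalized LCS length."""
    return max(templates, key=lambda t: lcs_length(query, t[1]) / len(t[1]))[0]
```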

5. K, Srinivas, and Manoj Kumar Rajagopal. "Study of Hand Gesture Recognition and Classification." Asian Journal of Pharmaceutical and Clinical Research 10, no. 13 (April 1, 2017): 25. http://dx.doi.org/10.22159/ajpcr.2017.v10s1.19540.

Abstract:
This study aims to recognize different hand gestures and achieve efficient classification of the static and dynamic hand movements used for communication. Static and dynamic hand movements are first captured using gesture recognition devices including the Kinect, hand movement sensors, connecting electrodes, and accelerometers. These gestures are processed using hand gesture recognition algorithms such as multivariate fuzzy decision trees, hidden Markov models (HMM), the dynamic time warping framework, latent regression forests, support vector machines, and surface electromyography. Movements made with one or both hands are captured under proper illumination conditions. The captured gestures are processed to handle occlusions and close finger interactions, so that the intended gesture can be identified and classified while intermittent gestures are ignored. Real-time hand gesture recognition needs robust algorithms such as HMM to detect only the intended gesture. Classified gestures are then evaluated for effectiveness against standard training and test datasets such as sign language alphabets and the KTH dataset. Hand gesture recognition plays a very important role in applications such as sign language recognition, robotics, television control, rehabilitation, and music orchestration.
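
Of the algorithms surveyed here, dynamic time warping is the most compact to illustrate. A plain NumPy sketch of DTW-based template matching (illustrative only; the per-frame feature vectors and nearest-template labeling are assumptions):

```python
import numpy as np

def dtw_distance(a, b):
    """Cumulative DTW alignment cost between sequences a (T1, d) and b (T2, d)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])  # local frame distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(query, templates):
    """templates: (label, sequence) pairs; return the nearest template's label."""
    return min(templates, key=lambda t: dtw_distance(query, t[1]))[0]
```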

6. Kotavenuka, Swetha, Harshitha Kodakandla, Nimmakayala Sai Krishna, and S. P. V. Subba Rao. "Hand Gesture Recognition." International Journal for Research in Applied Science and Engineering Technology 11, no. 1 (January 31, 2023): 331–35. http://dx.doi.org/10.22214/ijraset.2023.48557.

Abstract:
This work presents a computer-vision-based application for recognizing hand gestures. A live video feed is captured by a camera, and a still image is extracted from that feed with the aid of an interface. The system is trained at least once for each count hand gesture (one, two, three, four, and five). After that, the system is given a test gesture to see if it can identify it. Several algorithms capable of distinguishing a hand gesture were studied, and the highest accuracy rate was achieved with the convolutional neural network AlexNet. Traditionally, systems have used data gloves or markers as a means of input; this system imposes no such constraints, so the user can make natural hand gestures in front of the camera. The implemented system serves as an extendable basis for future work toward a fully robust hand gesture recognition system, which is still the subject of intensive research and development.

7. Chavan, Yogita. "Emotion and Gesture Recognition." International Journal of Scientific Research in Engineering and Management 08, no. 04 (April 30, 2024): 1–5. http://dx.doi.org/10.55041/ijsrem31827.

Abstract:
Human gestures and emotions play an important role in interpersonal relationships, and their automatic recognition has long been an active research topic. Emotions are reflected in facial expressions, speech, and body gestures, so understanding emotion and gesture is highly important both in human-human interaction and in human-machine communication. In this system, captured images are compared with the trained dataset available in the database, and the emotional state and gesture are then displayed. The FER13 (Facial Expression Recognition 2013) and VGG16 (Visual Geometry Group) datasets are used for comparison with captured images. Emotion recognition and gesture recognition run simultaneously. EDA (exploratory data analysis) is used to analyse and validate the training data for both the emotion and the gesture datasets.

8. Park, Jisun, Yong Jin, Seoungjae Cho, Yunsick Sung, and Kyungeun Cho. "Advanced Machine Learning for Gesture Learning and Recognition Based on Intelligent Big Data of Heterogeneous Sensors." Symmetry 11, no. 7 (July 16, 2019): 929. http://dx.doi.org/10.3390/sym11070929.

Abstract:
With intelligent big data, a variety of gesture-based recognition systems have been developed to enable intuitive interaction by utilizing machine learning algorithms. Realizing a high gesture recognition accuracy is crucial, and current systems learn extensive gestures in advance to augment their recognition accuracies. However, the process of accurately recognizing gestures relies on identifying and editing numerous gestures collected from the actual end users of the system. This final end-user learning component remains troublesome for most existing gesture recognition systems. This paper proposes a method that facilitates end-user gesture learning and recognition by improving the editing process applied on intelligent big data, which is collected through end-user gestures. The proposed method realizes the recognition of more complex and precise gestures by merging gestures collected from multiple sensors and processing them as a single gesture. To evaluate the proposed method, it was used in a shadow puppet performance that could interact with on-screen animations. An average gesture recognition rate of 90% was achieved in the experimental evaluation, demonstrating the efficacy and intuitiveness of the proposed method for editing visualized learning gestures.

9. Nyirarugira, Clementine, Hyo-rim Choi, and TaeYong Kim. "Hand Gesture Recognition Using Particle Swarm Movement." Mathematical Problems in Engineering 2016 (2016): 1–8. http://dx.doi.org/10.1155/2016/1919824.

Abstract:
We present a gesture recognition method derived from particle swarm movement for free-air hand gesture recognition. Online gesture recognition remains a difficult problem due to uncertainty in vision-based gesture boundary detection methods. We suggest an automated process of segmenting meaningful gesture trajectories based on particle swarm movement. A subgesture detection and reasoning method is incorporated in the proposed recognizer to avoid premature gesture spotting. Evaluation of the proposed method shows promising recognition results: 97.6% on preisolated gestures, 94.9% on stream gestures with assistive boundary indicators, and 94.2% for blind gesture spotting on digit gesture vocabulary. The proposed recognizer requires fewer computation resources; thus it is a good candidate for real-time applications.

10. Fan, Jinlong, Yang Yue, Yu Wang, Bei Wan, Xudong Li, and Gengpai Hua. "A Continuous Gesture Segmentation and Recognition Method for Human-Robot Interaction." Journal of Physics: Conference Series 2213, no. 1 (March 1, 2022): 012039. http://dx.doi.org/10.1088/1742-6596/2213/1/012039.

Abstract:
Human-computer cooperation through gesture recognition can free people from the limitations of traditional input devices such as the mouse and keyboard, and allows artificial intelligence devices to be controlled more efficiently and naturally. As a new mode of human-robot interaction (HRI), gesture recognition has made notable progress. It can be realized in many ways, combining visual recognition, motion information acquisition, and EMG signals. Research on isolated gesture recognition is quite mature, but an isolated gesture carries only a single semantic unit; to improve interaction efficiency, continuous gesture recognition is essential. This paper studies its application to continuous sign language sentence recognition based on inertial sensors and a rule-matching recognition algorithm. The recognition rate for nine single HRI gestures is 92.7%, and HRI with combined gestures is realized.

Dissertations / Theses on the topic "Gesture Recognition"

1. Davis, James W. "Gesture Recognition." Honors in the Major Thesis, University of Central Florida, 1994. http://digital.library.ucf.edu/cdm/ref/collection/ETH/id/126.

Abstract:
This item is only available in print in the UCF Libraries. If this is your Honors Thesis, you can help us make it available online for use by researchers around the world by following the instructions on the distribution consent form at http://library.ucf.edu/Systems/DigitalInitiatives/DigitalCollections/InternetDistributionConsentAgreementForm.pdf You may also contact the project coordinator, Kerri Bottorff, at kerri.bottorff@ucf.edu for more information.
Bachelors
Arts and Sciences
Computer Science

2. Cheng, You-Chi. "Robust Gesture Recognition." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/53492.

Abstract:
It is a challenging problem to make a general hand gesture recognition system work in a practical operating environment. This study focuses mainly on recognizing English letters and digits performed near the steering wheel of a car and captured by a video camera. Like most human-computer interaction (HCI) scenarios, in-car gesture recognition suffers from various robustness issues, including multiple human factors and highly varying lighting conditions, which raises several research questions. First, multiple gesturing alternatives may share the same meaning, which is not typical in most previous systems. Next, gestures may not be performed as expected because users cannot see what exactly has been written, which increases gesture diversity significantly. In addition, varying illumination conditions make hand detection non-trivial and thus result in noisy hand gestures. Most severely, users tend to perform letters at a fast pace, which may leave too few frames to describe gestures well. Since users are allowed to perform gestures freely, multiple alternatives and variations must be considered when modeling gestures. The main contribution of this work is to analyze and address these challenging issues step by step so that the robustness of the whole system can be effectively improved. By choosing a suitable color-space representation and applying compensation techniques for varying recording conditions, hand detection performance under multiple illumination conditions is first enhanced. The issues of low frame rate and differing gesturing tempo are then resolved via cubic B-spline interpolation and the i-vector method for feature extraction, respectively. Finally, remaining issues are handled by other modeling techniques such as sub-letter stroke modeling. Experimental results based on these strategies show that the proposed framework clearly improved system robustness, encouraging future research on more discriminative features and modeling techniques.
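
The low-frame-rate fix mentioned above, cubic spline interpolation of the gesture trajectory, can be sketched with SciPy (an ordinary cubic spline rather than the thesis's exact B-spline formulation; the fixed output length is an assumption):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def resample_trajectory(points, n_out=64):
    """Upsample a sparse 2-D gesture trajectory to a fixed length.

    points: (T, 2) array of hand positions captured at a low frame rate.
    Returns an (n_out, 2) array sampled along a cubic spline fit.
    """
    t = np.linspace(0.0, 1.0, len(points))
    spline = CubicSpline(t, points, axis=0)
    return spline(np.linspace(0.0, 1.0, n_out))
```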

3. Kaâniche, Mohamed Bécha. "Human Gesture Recognition." Nice, 2009. http://www.theses.fr/2009NICE4032.

Abstract:
In this thesis, we aim to recognize gestures (e.g., hand raising) and, more generally, short actions (e.g., falling, bending) performed by an individual. Many techniques have already been proposed for gesture recognition in specific environments (e.g., a laboratory) using the cooperation of several sensors (e.g., a camera network, or an individual equipped with markers). Despite these strong hypotheses, gesture recognition is still brittle and often depends on the position of the individual relative to the cameras. We propose to relax these hypotheses in order to devise a general algorithm able to recognize the gestures of an individual moving in an unconstrained environment and observed through a limited number of cameras. The goal is to estimate the likelihood of gesture recognition as a function of the observation conditions. Our method consists of classifying a set of gestures by learning motion descriptors. These motion descriptors are local signatures of the motion of corner points, associated with local textural descriptions of their neighborhoods. We demonstrate the effectiveness of our motion descriptors by recognizing the actions of the public KTH database, on which encouraging results were obtained.

4. Semprini, Mattia. "Gesture Recognition: una panoramica." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/15672/.

Abstract:
For decades, humans have interacted with computers and other devices almost exclusively by pressing keys and clicking the mouse. Today, a major shift is underway, driven by a wave of new technologies that respond to more natural actions, such as movements of the hands or of the whole body. The technology market was first shaken by the replacement of standard interaction techniques with touch and motion sensing approaches; the next step is the introduction of techniques and technologies that let the user access and manipulate information by interacting with a computing system using only gestures and body actions. Gesture recognition arose in this context: a substantial branch of computer science and language technology whose goal is to interpret and process human gestures through computer algorithms. In the first two chapters of this thesis, I cover the history of wearable technologies, from the first watches that went beyond merely telling the time up to the systems used today for gesture recognition. The third chapter presents the most widely used gesture classification algorithms. In the fourth, I examine one of the first frameworks designed to let the developer concentrate on the application while leaving aside the gesture encoding and classification stage. The last part examines one of the most performant and effective devices in this field, the Myo Armband, together with two studies that demonstrate its validity.

5. Gingir, Emrah. "Hand Gesture Recognition System." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612532/index.pdf.

Abstract:
This thesis presents a hand gesture recognition system that replaces input devices like the keyboard and mouse with static and dynamic hand gestures for interactive computer applications. Despite growing attention to such systems, certain limitations remain in the literature: most applications impose constraints such as specific lighting conditions, use of a particular camera, making the user wear a multi-colored glove, or a need for large amounts of training data. The system in this study removes these restrictions and provides an adaptive, effort-free environment for the user. The study starts with an analysis of how different color spaces perform for skin color extraction; this analysis is independent of the working system and is performed solely to obtain useful information about the color spaces. The working system is based on two steps, hand detection and hand gesture recognition. In the hand detection process, a normalized RGB color-space skin locus is used to threshold the coarse skin pixels in the image. An adaptive skin locus, whose varying boundaries are estimated from the coarse skin region pixels, then segments the distinct skin color in the image under the current conditions. Since the face has a distinctive shape, it is detected among the connected groups of skin pixels using shape analysis, and the non-face connected groups of skin pixels are taken to be hands. The gesture of the hand is recognized by an improved centroidal profile method applied around the detected hand. A 3D flight war game, a boxing game, and a media player, all controlled remotely using only static and dynamic hand gestures, were developed as human-machine interface applications on the theoretical background of this study. In experiments with recorded videos, a correct recognition rate of ~90% was achieved with near-real-time computation.
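
The normalized-RGB skin locus thresholding that the thesis starts from can be sketched as follows (the r/g bounds are illustrative placeholders; the thesis adapts them per frame from the coarse skin region):

```python
import numpy as np

def skin_mask(img_bgr, r_lo=0.36, r_hi=0.46, g_lo=0.28, g_hi=0.36):
    """Coarse skin segmentation in normalized RGB (chromaticity) space."""
    img = img_bgr.astype(np.float32) + 1e-6  # avoid division by zero
    s = img.sum(axis=2)
    r = img[:, :, 2] / s                     # OpenCV channel order is B, G, R
    g = img[:, :, 1] / s
    return (r > r_lo) & (r < r_hi) & (g > g_lo) & (g < g_hi)
```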

6. Dang, Darren Phi Bang. "Template Based Gesture Recognition." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/41404.

Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1996.
Includes bibliographical references (p. 65-66).
by Darren Phi Bang Dang.
M.S.

7. Wang, Lei. "Personalized Dynamic Hand Gesture Recognition." Thesis, KTH, Medieteknik och interaktionsdesign, MID, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-231345.

Abstract:
Human gestures, with their spatial-temporal variability, are difficult to recognize with a generic model or classifier applicable to everyone. To address this problem, this thesis proposes personalized dynamic gesture recognition approaches. Specifically, based on dynamic time warping (DTW), the novel concept of a Subject Relation Network is introduced to describe the similarity of subjects in performing dynamic gestures, which offers a brand-new view of gesture recognition. By clustering or arranging training subjects based on the network, two personalization algorithms are proposed, for generative models and discriminative models respectively. Moreover, three basic recognition methods, DTW-based template matching, hidden Markov models (HMM), and Fisher Vector combined with classification, are compared and integrated into the proposed personalized gesture recognition. The proposed approaches are evaluated on a challenging dynamic hand gesture recognition dataset, DHG14/28, which contains the depth images and skeleton coordinates returned by the Intel RealSense depth camera. Experimental results show that the proposed personalized algorithms can significantly improve the performance of basic generative and discriminative models and achieve state-of-the-art accuracy of 86.2%.

8. Espinoza, Victor. "Gesture Recognition in Tennis Biomechanics." Master's thesis, Temple University Libraries, 2018. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/530096.

Abstract:
Electrical and Computer Engineering
M.S.E.E.
The purpose of this study is to create a gesture recognition system that interprets motion capture data of a tennis player to determine which biomechanical aspects of a tennis swing best correlate with swing efficacy. For the learning set, this work aimed to record 50 tennis athletes of similar competency performing standard tennis swings toward different targets, captured with the Microsoft Kinect. From the acquired data we extracted biomechanical features hypothesized to correlate with ball trajectory under proper technique and used them as sequential inputs to the designed classifiers. This work implements deep learning algorithms as variable-length sequence classifiers, recurrent neural networks (RNN), to predict tennis ball trajectory. In an attempt to learn temporal dependencies within a tennis swing, we implemented gate-augmented RNNs, comparing the baseline RNN to two gated models: gated recurrent units (GRU) and long short-term memory (LSTM) units. We observed similar classification performance across models, while the gated methods reached convergence twice as fast as the baseline RNN. The results showed 1.2 entropy loss and 50% classification accuracy, indicating that the hypothesized biomechanical features were only loosely correlated with swing efficacy or were not accurately captured by the sensor.
Temple University--Theses

9. Nygård, Espen Solberg. "Multi-touch Interaction with Gesture Recognition." Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2010. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9126.

Abstract:
This master's thesis explores the world of multi-touch interaction with gesture recognition. The focus is on camera based multi-touch techniques, as these provide a new dimension to multi-touch with its ability to recognize objects. During the project, a multi-touch table based on the technology Diffused Surface Illumination has been built. In addition to building a table, a complete gesture recognition system has been implemented, and different gesture recognition algorithms have been successfully tested in a multi-touch environment. The goal with this table, and the accompanying gesture recognition system, is to create an open and affordable multi-touch solution, with the purpose of bringing multi-touch out to the masses. By doing this, more people will be able to enjoy the benefits of a more natural interaction with computers. In a larger perspective, multi-touch is just the beginning, and by adding additional modalities to our applications, such as speech recognition and full body tracking, a whole new level of computer interaction will be possible.

10. Khan, Muhammad. "Hand Gesture Detection & Recognition System." Thesis, Högskolan Dalarna, Datateknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:du-6496.

Abstract:
The project introduces a computer-vision application for hand gesture recognition. A camera records a live video stream, from which a snapshot is taken with the help of an interface. The system is trained at least once for each type of count hand gesture (one, two, three, four, and five); after that, a test gesture is given to it and the system tries to recognize it. Research was carried out on a number of algorithms that could best differentiate hand gestures, and the diagonal sum algorithm was found to give the highest accuracy rate. In the preprocessing phase, a self-developed algorithm removes the background of each training gesture; the image is then converted into a binary image, and the sums of all diagonal elements of the picture are taken. These sums are used to differentiate and classify the different hand gestures. Previous systems have used data gloves or markers for input; this system has no such constraints, and the user can make hand gestures in view of the camera naturally. A completely robust hand gesture recognition system is still under heavy research and development; the implemented system serves as an extendable foundation for future work.
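
The diagonal sum feature the thesis settles on reduces a binary hand image to a short vector. A minimal NumPy sketch (the 32x32 resize and the use of every diagonal, not just the main one, are assumptions):

```python
import cv2
import numpy as np

def diagonal_sums(binary, size=32):
    """Feature vector of diagonal sums of a resized binary hand image.

    binary: 2-D array of 0/1 pixels after background removal.
    A size x size image has 2 * size - 1 diagonals in total.
    """
    img = cv2.resize(binary.astype(np.uint8), (size, size))
    return np.array([img.diagonal(k).sum() for k in range(-size + 1, size)])
```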

Books on the topic "Gesture Recognition"

1. Escalera, Sergio, Isabelle Guyon, and Vassilis Athitsos, eds. Gesture Recognition. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-57021-1.

2. Konar, Amit, and Sriparna Saha. Gesture Recognition. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-62212-5.

3. Dempsey, R. Dataglove Gesture Recognition Using a Neural Network. Manchester: UMIST, 1993.

4. Chaudhary, Ankit. Robust Hand Gesture Recognition for Robotic Hand Control. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-10-4798-5.

5. Yang, Ming-Hsuan, and Narendra Ahuja. Face Detection and Gesture Recognition for Human-Computer Interaction. Boston, MA: Springer US, 2001. http://dx.doi.org/10.1007/978-1-4615-1423-7.

6. Ahuja, Narendra, ed. Face Detection and Gesture Recognition for Human-Computer Interaction. Boston: Kluwer Academic, 2001.

7. Yang, Ming-Hsuan. Face Detection and Gesture Recognition for Human-Computer Interaction. Boston, MA: Springer US, 2001.

8. Human Activity Recognition and Gesture Spotting with Body-Worn Sensors. Konstanz: Hartung-Gorre Verlag, 2005.

9. Sowa, Timo. Understanding Coverbal Iconic Gestures in Shape Descriptions. Berlin: Akademische Verlagsgesellschaft Aka, 2006.

10. Mäntylä, Vesa-Matti. Discrete Hidden Markov Models with Application to Isolated User-Dependent Hand Gesture Recognition. Espoo, Finland: Technical Research Centre of Finland, 2001.


Book chapters on the topic "Gesture Recognition"

1. Escalera, Sergio, Vassilis Athitsos, and Isabelle Guyon. "Challenges in Multi-modal Gesture Recognition." In Gesture Recognition, 1–60. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-57021-1_1.

2. Fanello, Sean Ryan, Ilaria Gori, Giorgio Metta, and Francesca Odone. "Keep It Simple and Sparse: Real-Time Action Recognition." In Gesture Recognition, 303–28. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-57021-1_10.

3. Wan, Jun, Qiuqi Ruan, Wei Li, and Shuang Deng. "One-Shot Learning Gesture Recognition from RGB-D Data Using Bag of Features." In Gesture Recognition, 329–64. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-57021-1_11.

4. Konečný, Jakub, and Michal Hagara. "One-Shot-Learning Gesture Recognition Using HOG-HOF Features." In Gesture Recognition, 365–85. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-57021-1_12.

5. Jiang, Feng, Shengping Zhang, Shen Wu, Yang Gao, and Debin Zhao. "Multi-layered Gesture Recognition with Kinect." In Gesture Recognition, 387–416. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-57021-1_13.

6. Wu, Jiaxiang, and Jian Cheng. "Bayesian Co-Boosting for Multi-modal Gesture Recognition." In Gesture Recognition, 417–41. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-57021-1_14.

7. Goussies, Norberto A., Sebastián Ubalde, and Marta Mejail. "Transfer Learning Decision Forests for Gesture Recognition." In Gesture Recognition, 443–66. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-57021-1_15.

8. Pitsikalis, Vassilis, Athanasios Katsamanis, Stavros Theodorakis, and Petros Maragos. "Multimodal Gesture Recognition via Multiple Hypotheses Rescoring." In Gesture Recognition, 467–96. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-57021-1_16.

9. Gillian, Nicholas, and Joseph A. Paradiso. "The Gesture Recognition Toolkit." In Gesture Recognition, 497–502. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-57021-1_17.

10. Nguyen-Dinh, Long-Van, Alberto Calatroni, and Gerhard Tröster. "Robust Online Gesture Recognition with Crowdsourced Annotations." In Gesture Recognition, 503–37. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-57021-1_18.


Conference papers on the topic "Gesture Recognition"

1. Nyaga, Casam, and Ruth Wario. "Towards Kenyan Sign Language Hand Gesture Recognition Dataset." In 14th International Conference on Applied Human Factors and Ergonomics (AHFE 2023). AHFE International, 2023. http://dx.doi.org/10.54941/ahfe1003281.

Abstract:
Datasets for hand gesture recognition are now an important aspect of machine learning, and many have been created for machine learning purposes. Notable examples include the Modified National Institute of Standards and Technology (MNIST) dataset, the Common Objects in Context (COCO) dataset, the Canadian Institute For Advanced Research (CIFAR-10) dataset, LeNet-5, AlexNet, GoogLeNet, the American Sign Language Lexicon Video Dataset, and the 2D Static Hand Gesture Colour Image Dataset for ASL Gestures. However, there is no dataset for Kenyan Sign Language (KSL). This paper proposes the creation of a KSL hand gesture recognition dataset, intended to have two parts: one for static hand gestures and one for dynamic hand gestures. For dynamic hand gestures, short videos of the KSL alphabet (a to z) and numbers (0 to 10) will be considered; likewise, for static gestures, the KSL alphabet (a to z) will be considered. It is anticipated that this dataset will be vital in the creation of sign-language hand gesture recognition systems, not only for Kenyan Sign Language but for other sign languages as well, thanks to transfer learning when implementing sign language systems with neural network models.

2. Patel, Shubh, and R. Deepa. "Hand Gesture Recognition Used for Functioning System Using OpenCV." In International Research Conference on IOT, Cloud and Data Science. Switzerland: Trans Tech Publications Ltd, 2023. http://dx.doi.org/10.4028/p-4589o3.

Abstract:
Recently, much attention has been paid to the design of intelligent and natural user-computer interfaces. Hand gesture recognition systems have been developed continuously because of their ability to let people interact with machines, and the rise of the metaverse ecosystem has further increased the number of systems using gesture recognition. Gestures are used to communicate with PCs in a virtual environment. In this project, hand gestures are used to communicate information non-verbally to perform particular tasks. The gestures are recognized through hand-skeleton detection using the MediaPipe library in Python. The PC camera records live video, and the system recognizes hand gestures, each of which triggers a particular function. The project presents a virtual keyboard, a calculator, and system volume control, all driven by hand gesture recognition and coded in Python using the OpenCV library.
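
A sketch of the hand-skeleton step described above, using MediaPipe's public Hands API (the single-hand limit, the confidence threshold, and the toy fingertip rule are assumptions, not the paper's gesture set):

```python
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.7)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB input; OpenCV captures BGR frames.
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        lm = results.multi_hand_landmarks[0].landmark
        # Toy rule: index fingertip (landmark 8) above its PIP joint (landmark 6).
        if lm[8].y < lm[6].y:  # image y grows downward
            print("index finger raised")
cap.release()
```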

3. Zhang, Hong, and Jeong-Hoi Koo. "Development of a Wearable Gesture Recognition System." In ASME 2005 International Mechanical Engineering Congress and Exposition. ASMEDC, 2005. http://dx.doi.org/10.1115/imece2005-80061.

Abstract:
This paper presents the development of a wearable gesture recognition system. The objective of this work is to design and build a mechatronic device that can recognize human gestures, which can be used to aid communication between humans, or between humans and machines (such as unmanned vehicles). The device is composed of two main components: a data acquisition system and a gesture recognition system. The data acquisition system obtains sensory information from human motions and encodes the information for transmission to the gesture recognition system. Upon receiving the signals, the gesture recognition system decodes them so that machines can respond to the corresponding human motions. The project was conducted by a group of Mechanical Engineering students in the format of a junior/senior engineering clinic; through it, they obtained working knowledge of a mechatronic device within a cohesive and dynamic team.

4. Chen, Haodong, Wenjin Tao, Ming C. Leu, and Zhaozheng Yin. "Dynamic Gesture Design and Recognition for Human-Robot Collaboration With Convolutional Neural Networks." In 2020 International Symposium on Flexible Automation. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/isfa2020-9609.

Abstract:
Human-robot collaboration (HRC) is a challenging task in modern industry, and gesture communication in HRC has attracted much interest. This paper proposes and demonstrates a dynamic gesture recognition system based on Motion History Images (MHI) and convolutional neural networks (CNN). First, ten dynamic gestures are designed for a human worker to communicate with an industrial robot. Second, the MHI method is adopted to extract gesture features from video clips and generate static images of dynamic gestures as inputs to the CNN. Finally, a CNN model is constructed for gesture recognition. The experimental results show very promising classification accuracy using this method.
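
The Motion History Image at the core of this pipeline can be built incrementally from frame differences. A minimal sketch of the standard MHI recurrence (the decay constant and difference threshold are illustrative; the paper may instead use OpenCV's motempl implementation):

```python
import numpy as np

def update_mhi(mhi, prev_gray, gray, tau=30, delta=25):
    """One Motion History Image step.

    Pixels that changed by more than `delta` are set to `tau`; older motion
    decays by 1 per frame, so brighter regions mark more recent movement.
    """
    motion = np.abs(gray.astype(np.int16) - prev_gray.astype(np.int16)) > delta
    mhi = np.maximum(mhi - 1, 0)
    mhi[motion] = tau
    return mhi
```

The final MHI of a clip is a single grayscale image, which is what lets an ordinary image CNN classify the dynamic gesture.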

5. Yi, Zhigang, Mingyu Zhou, Dan Xue, and Shusheng Peng. "Static Gesture Recognition in the Cabin Based on 3D-TOF and Low Computing Power." In SAE 2023 Intelligent and Connected Vehicles Symposium. Warrendale, PA: SAE International, 2023. http://dx.doi.org/10.4271/2023-01-7068.

Abstract:
Traditional static gesture recognition algorithms are easily affected by the complex environment inside the cabin, resulting in low recognition rates. Compared with RGB photos captured by traditional cameras, the depth images captured by 3D-TOF cameras can not only reduce the influence of the complex cabin environment but also protect crew privacy. Therefore, this paper proposes a low-computing static gesture recognition method based on 3D-TOF in the cabin. A low-parameter lightweight convolutional neural network (CNN) is trained on five gestures, and the trained gesture model is deployed on a low-computing embedded platform to detect passenger gestures in real time while maintaining recognition speed. The contributions of this paper are mainly: (1) using the TOF camera to collect 1000 depth images of five gestures inside the car cabin, preprocessing these gesture depth maps, and training a lightweight convolutional neural network to obtain the gesture classification model; (2) in the gesture preprocessing stage, designing a depth-information-based method that quickly locates the depth range of the hand area in real time; (3) proposing a low-parameter lightweight convolutional neural network model that has fewer training parameters and can be deployed on a low-computing embedded platform. The experimental results show that, compared with traditional static gesture recognition algorithms inside the cabin, this method has higher accuracy and stronger robustness and can recognize passenger gestures in real time on a low-computing embedded platform.
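
The depth-based hand localization step can be sketched as a simple depth-band mask (the nearest-object assumption and the millimeter thresholds are illustrative, not values from the paper):

```python
import numpy as np

def locate_hand(depth_mm, min_valid=300, band=150):
    """Isolate the hand in a TOF depth frame via a depth band.

    Assumes the hand is the closest valid object to the camera and keeps
    pixels within `band` mm of the nearest valid depth reading.
    """
    valid = depth_mm > min_valid  # drop dead pixels and sensor noise
    near = depth_mm[valid].min()
    return valid & (depth_mm < near + band)
```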

6. Radkowski, Rafael, and Christian Stritzke. "Comparison Between 2D and 3D Hand Gesture Interaction for Augmented Reality Applications." In ASME 2011 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2011. http://dx.doi.org/10.1115/detc2011-48155.

Abstract:
This paper presents a comparison between 2D and 3D interaction techniques for Augmented Reality (AR) applications. The interaction techniques are based on hand gestures and a computer-vision-based hand gesture recognition system; we compared 2D gestures and 3D gestures for interaction in an AR application. The 3D recognition system is based on a video camera that provides a depth image in addition to each 2D color image, so spatial interactions become possible. Our major question during this work was: do depth images and 3D interaction techniques improve interaction with AR applications and with virtual 3D objects? We therefore tested and compared the hand gesture recognition systems. The results show two things. First, depth images facilitate more robust hand recognition and gesture identification. Second, the results are a strong indication that 3D hand gesture interaction techniques are more intuitive than 2D ones. In summary, the results emphasize that depth images improve hand gesture interaction for AR applications.

7. Miral Kazmi, Syeda. "Hand Gesture Recognition for Sign Language." In Human Interaction and Emerging Technologies (IHIET-AI 2022): Artificial Intelligence and Future Applications. AHFE International, 2022. http://dx.doi.org/10.54941/ahfe100925.

Abstract:
We address a genuine issue in sign language recognition: two-way communication between a hearing person and a deaf or mute person. Current sign language recognition applications lack basic characteristics that are necessary for interaction with the environment. Our project focuses on providing a portable and customizable solution for understanding sign language through an Android app. The report summarizes the basic concepts and methods used in creating this Android application, which uses gesture recognition to understand American Sign Language words. The project uses different image processing tools to separate the hand from the rest of the scene and then applies pattern recognition techniques for gesture recognition. A complete summary of the results obtained from the various tests performed is also provided to demonstrate the validity of the application.

8. Teng, Zhiqiang, Haodong Chen, Qitao Hou, Wanbing Song, Chenchen Gu, and Ping Zhao. "Design of a Cognitive Rehabilitation System Based on Gesture Recognition." In ASME 2020 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/imece2020-23579.

Abstract:
Computer-assisted cognitive training is an effective intervention for patients with mild cognitive impairment (MCI); it avoids the disadvantages of traditional cognitive training, which consumes substantial medical resources and is difficult to standardize. However, many computer-assisted cognitive training systems offer unfriendly human-computer interaction because they do not consider that most MCI patients have difficulty using computers. In this paper, we design a cognitive training system that allows patients to interact through gestures. First, a gesture recognition algorithm is proposed, in which gesture segmentation is implemented based on the YCbCr color space and the Otsu algorithm, Fourier descriptors of the gesture contour are extracted as feature vectors, and the SVM algorithm is used to train a classifier to recognize gestures. Then, the graphical user interface (GUI) of the system is designed to realize the task requirements of cognitive training for MCI patients. Finally, test results show the accuracy of the algorithm and the feasibility of the GUI. With this computer-assisted cognitive training system, patients can interact with the computer through gestures alone, without needing a keyboard or mouse, greatly reducing their burden during training.
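
The Fourier descriptor features used here come from treating the gesture contour as a complex signal; the resulting vector would then be fed to an SVM such as sklearn.svm.SVC. A sketch with common invariance conventions (the coefficient count and normalization are assumptions):

```python
import numpy as np

def fourier_descriptors(contour, n_coeffs=16):
    """Translation- and scale-invariant Fourier descriptors of a closed contour.

    contour: (N, 2) array of boundary points, e.g. from cv2.findContours.
    """
    z = contour[:, 0] + 1j * contour[:, 1]  # boundary as a complex signal
    coeffs = np.fft.fft(z)
    coeffs[0] = 0.0                         # drop DC term: translation invariance
    mags = np.abs(coeffs)                   # magnitudes: start-point/rotation invariance
    mags = mags / (mags[1] + 1e-9)          # normalize: scale invariance
    return mags[1:n_coeffs + 1]
```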

9. Zhukovskaya, V. A., and A. V. Pyataeva. "Recurrent Neural Network for Recognition of Gestures of the Russian Language, Taking into Account the Language Dialect of the Siberian Region." In 32nd International Conference on Computer Graphics and Vision. Keldysh Institute of Applied Mathematics, 2022. http://dx.doi.org/10.20948/graphicon-2022-538-547.

Abstract:
Sign recognition is an important task, in particular for communication between the deaf and hard-of-hearing population and people who do not know sign language. Russian Sign Language is poorly studied, and the Russian Sign Language of the Siberian region differs significantly from other variants within the Russian sign language group; there is no generally accepted dataset for Russian Sign Language. The paper presents a gesture recognition algorithm based on video data. The algorithm rests on identifying key features of a person's hands and posture, and gestures are classified using an LSTM recurrent neural network. To train and test the recognizer, we developed our own dataset consisting of 10 sign words. The words were selected from among the most popular words of the Russian language, while also maximizing the difference in how gestures are pronounced in the Siberian regional dialect. The gesture recognition algorithm was implemented using Keras neural network design and deep learning technologies, the OpenCV computer vision library, the MediaPipe machine learning framework, and other auxiliary libraries. Experimental studies conducted on 300 video sequences confirm the effectiveness of the proposed algorithm.
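
A Keras LSTM classifier over per-frame keypoints, in the spirit described above, could be set up as follows (the layer sizes, the 30-frame clip length, and the MediaPipe-style landmark counts of 21 per hand plus 33 for pose are assumptions):

```python
from tensorflow import keras

# 30-frame clips; two hands x 21 landmarks + 33 pose landmarks, (x, y, z) each.
n_frames, n_features, n_classes = 30, (2 * 21 + 33) * 3, 10

model = keras.Sequential([
    keras.layers.Input(shape=(n_frames, n_features)),
    keras.layers.LSTM(64, return_sequences=True),  # keep per-frame outputs
    keras.layers.LSTM(64),                         # summarize the whole sequence
    keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```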

10. Wei, Jian, Jiaqi Guo, Xiaoyuan Guo, Yong Jia, Qi Wang, and Shigang Wang. "Synchronous Gesture Interaction for Flat-Panel+Integral Imaging." In Digital Holography and Three-Dimensional Imaging. Washington, D.C.: Optica Publishing Group, 2022. http://dx.doi.org/10.1364/dh.2022.w5a.8.

Abstract:
A method of gesture interaction for simultaneous flat-panel and integral imaging is proposed. It supports synchronously adjusting the viewpoint or scale of the 2D and naked-eye 3D images via viewer’s gestures, by exploiting the high efficiency of NVIDIA Optix in ray casting and Leap Motion in gesture recognition.

Reports on the topic "Gesture Recognition"

1. Yang, Jie, and Yangsheng Xu. Hidden Markov Model for Gesture Recognition. Fort Belvoir, VA: Defense Technical Information Center, May 1994. http://dx.doi.org/10.21236/ada282845.

2. Morton, Paul R., Edward L. Fix, and Gloria L. Calhoun. Hand Gesture Recognition Using Neural Networks. Fort Belvoir, VA: Defense Technical Information Center, May 1996. http://dx.doi.org/10.21236/ada314933.

3. Vira, Naren. Gesture Recognition Development for the Interactive Datawall. Fort Belvoir, VA: Defense Technical Information Center, January 2008. http://dx.doi.org/10.21236/ada476755.

4. Lampton, Donald R., Bruce W. Knerr, Bryan R. Clark, Glenn A. Martin, and Donald A. Washburn. Gesture Recognition System for Hand and Arm Signals. Fort Belvoir, VA: Defense Technical Information Center, November 2002. http://dx.doi.org/10.21236/ada408459.

5. Venetsky, Larry, Mark Husni, and Mark Yager. Gesture Recognition for UCAV-N Flight Deck Operations. Fort Belvoir, VA: Defense Technical Information Center, January 2003. http://dx.doi.org/10.21236/ada422629.
