
Journal articles on the topic 'Vision Based Gesture Recognition'


Consult the top 50 journal articles for your research on the topic 'Vision Based Gesture Recognition.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Lou, Xinyue. "Vision-based Hand Gesture Recognition Technology." Applied and Computational Engineering 141, no. 1 (2025): 54–59. https://doi.org/10.54254/2755-2721/2025.21696.

Abstract:
Human-computer interaction has wide application prospects in many fields such as medicine, entertainment, industry, and education. Gesture recognition is one of the most important technologies for gesture interaction between humans and robots, and visual gesture recognition offers greater user comfort and freedom than data-glove recognition. Based on the literature, this paper summarizes the general process of visual gesture recognition, comprising three steps: pre-processing, feature extraction, and gesture classification. It also defines static and dynamic gestures and compares their differences and recognition emphases. Building on this distinction, the paper surveys commonly used visual gesture recognition methods: for static gestures, the template-matching and AdaBoost-based methods; for dynamic gestures, the hidden Markov model and dynamic time warping methods. Finally, some applications of visual gesture recognition are introduced, for example, a non-contact system for operating rooms and smart home control.
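Since the survey above names dynamic time warping (DTW) as a core method for dynamic gestures, a minimal sketch of DTW-based template matching may be useful; the feature sequences and gesture names below are hypothetical placeholders, not material from the paper.

```python
import numpy as np

def dtw_distance(seq_a: np.ndarray, seq_b: np.ndarray) -> float:
    """Classic dynamic time warping between two feature sequences
    (rows = time steps, columns = feature dimensions)."""
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])  # local distance
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return float(cost[n, m])

# Hypothetical usage: classify a gesture trajectory by its nearest template.
templates = {"swipe_left": np.random.rand(30, 2), "circle": np.random.rand(40, 2)}
query = np.random.rand(35, 2)
label = min(templates, key=lambda k: dtw_distance(query, templates[k]))
print(label)
```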
2

P, Hrishikesh, Akshay V, Anugraha K, T. R. Hari Subramaniam, and Jyothisha J. Nair. "Vision Based Gesture Recognition." Procedia Computer Science 235 (2024): 303–15. http://dx.doi.org/10.1016/j.procs.2024.04.031.

3

Jasim, Mahmood, Tao Zhang, and Md Hasanuzzaman. "A Real-Time Computer Vision-Based Static and Dynamic Hand Gesture Recognition System." International Journal of Image and Graphics 14, no. 01n02 (2014): 1450006. http://dx.doi.org/10.1142/s0219467814500065.

Abstract:
This paper presents a novel method for computer vision-based static and dynamic hand gesture recognition. A Haar-like feature-based cascaded classifier is used for hand area segmentation. Static hand gestures are recognized using linear discriminant analysis (LDA) and local binary pattern (LBP)-based feature extraction methods and classified using the nearest-neighbor (NN) algorithm. Dynamic hand gestures are recognized using novel text-based principal directional features (PDFs), which are generated from the segmented image sequences, and the longest common subsequence (LCS) algorithm is used to classify them. For testing, a Chinese numeral gesture dataset containing static hand poses and a directional gesture dataset containing complex dynamic gestures are prepared. The mean accuracy of LDA-based static hand gesture recognition on the Chinese numeral gesture dataset is 92.42%, and the mean accuracy of LBP-based static hand gesture recognition on the same dataset is 87.23%. The mean accuracy of the novel dynamic hand gesture recognition method using PDFs on the directional gesture dataset is 94%.
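The paper classifies dynamic gestures with the longest common subsequence (LCS) algorithm over direction-coded trajectories; a minimal sketch of that idea follows, with the direction codes and gesture templates as hypothetical placeholders rather than the authors' data.

```python
def lcs_length(a: str, b: str) -> int:
    """Length of the longest common subsequence of two strings."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a, 1):
        for j, cb in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if ca == cb else max(dp[i-1][j], dp[i][j-1])
    return dp[len(a)][len(b)]

# Hypothetical direction-coded trajectories: one of U/D/L/R per frame.
templates = {"wave": "LRLRLR", "push": "UUUU"}
observed = "LRLLRLR"
best = max(templates, key=lambda k: lcs_length(observed, templates[k]) / len(templates[k]))
print(best)
```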
4

Gupta, Himanshu, Aniruddh Ramjiwal, and Jasmin T. Jose. "Vision Based Approach to Sign Language Recognition." International Journal of Advances in Applied Sciences 7, no. 2 (2018): 156. http://dx.doi.org/10.11591/ijaas.v7.i2.pp156-161.

Abstract:
We propose an algorithm for automatically recognizing a fixed set of gestures from hand movements to help deaf and hard-of-hearing people. Hand gesture recognition is a challenging problem in its own right. We consider a fixed set of manual commands and a specific environment, and develop an effective procedure for gesture recognition. Our approach contains steps for segmenting the hand region, locating the fingers, and finally classifying the gesture, which in general terms means detection, tracking, and recognition. The algorithm is invariant to rotations, translations, and scale of the hand. We demonstrate the effectiveness of the technique on real imagery.
5

Himanshu, Gupta, Ramjiwal Aniruddh, and T. Jose Jasmin. "Vision Based Approach to Sign Language Recognition." International Journal of Advances in Applied Sciences (IJAAS) 7, no. 2 (2018): 156–61. https://doi.org/10.11591/ijaas.v7.i2.pp156-161.

Abstract:
We propose an algorithm for automatically recognizing a fixed set of gestures from hand movements to help deaf and hard-of-hearing people. Hand gesture recognition is a challenging problem in its own right. We consider a fixed set of manual commands and a specific environment, and develop an effective procedure for gesture recognition. Our approach contains steps for segmenting the hand region, locating the fingers, and finally classifying the gesture, which in general terms means detection, tracking, and recognition. The algorithm is invariant to rotations, translations, and scale of the hand. We demonstrate the effectiveness of the technique on real imagery.
6

RAUTARAY, SIDDHARTH S., and ANUPAM AGRAWAL. "VISION-BASED APPLICATION-ADAPTIVE HAND GESTURE RECOGNITION SYSTEM." International Journal of Information Acquisition 09, no. 01 (2013): 1350007. http://dx.doi.org/10.1142/s0219878913500071.

Abstract:
With the increasing role of computing devices, facilitating natural human-computer interaction (HCI) will have a positive impact on their usage and acceptance as a whole. For a long time, research on HCI was restricted to techniques based on the keyboard, mouse, etc. Recently, this paradigm has changed: techniques such as vision, sound, and speech recognition allow for a much richer form of interaction between the user and machine, with the emphasis on providing a natural interface for interaction. Gestures are one of the natural forms of interaction between humans, and as gesture commands are natural for humans, the development of gesture control systems for controlling devices has become a popular research topic in recent years. Researchers have proposed different gesture recognition systems that act as interfaces for controlling applications. One drawback of present gesture recognition systems is application dependence, which makes it difficult to transfer one gesture control interface to different applications. This paper focuses on designing a vision-based hand gesture recognition system that is adaptive to different applications, making the gesture recognition system application-adaptive. The designed system comprises processing steps such as detection, segmentation, tracking, and recognition. To make the system application-adaptive, different quantitative and qualitative parameters have been taken into consideration. The quantitative parameters include the gesture recognition rate, the features extracted, and the root mean square error of the system, while the qualitative parameters include intuitiveness, accuracy, stress/comfort, computational efficiency, user's tolerance, and real-time performance. These parameters have a vital impact on the performance of the proposed application-adaptive hand gesture recognition system.
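The detection and segmentation steps mentioned above are often realized with skin-color thresholding in practice; a minimal OpenCV sketch of one such approach follows. The HSV bounds are illustrative assumptions, not values from the paper.

```python
import cv2
import numpy as np

# Hypothetical HSV skin-color bounds; real systems calibrate per user and lighting.
SKIN_LO = np.array([0, 40, 60], dtype=np.uint8)
SKIN_HI = np.array([25, 180, 255], dtype=np.uint8)

def segment_hand(frame_bgr):
    """Return a binary skin mask and the largest skin-colored contour (the hand)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, SKIN_LO, SKIN_HI)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    hand = max(contours, key=cv2.contourArea) if contours else None
    return mask, hand
```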
7

Yang, Lewei. "Real-time gesture-based control of UAVs using multimodal fusion of FMCW radar and vision." Journal of Physics: Conference Series 2664, no. 1 (2023): 012002. http://dx.doi.org/10.1088/1742-6596/2664/1/012002.

Abstract:
Gesture-based control has gained prominence as an intuitive and natural means of interaction with unmanned aerial vehicles (UAVs). This paper presents a real-time gesture-based control system for UAVs that leverages the multimodal fusion of Frequency Modulated Continuous Wave (FMCW) radar and vision sensors, aiming to enhance user experience through precise and responsive UAV control via hand gestures. The research focuses on developing an effective fusion framework that combines the complementary advantages of FMCW radar and vision sensors. FMCW radar provides robust range and velocity measurements, while vision sensors capture fine-grained visual information. By integrating data from these modalities, the system achieves a comprehensive understanding of hand gestures, resulting in improved gesture recognition accuracy and robustness. The proposed system comprises three main stages: data acquisition, gesture recognition, and multimodal fusion. In the data acquisition stage, synchronized data streams from FMCW radar and vision sensors are captured. Then, machine learning algorithms are employed in the gesture recognition stage to classify and interpret hand gestures. Finally, the multimodal fusion stage aligns and fuses the data, creating a unified representation that captures the spatial and temporal aspects of hand gestures, enabling real-time control commands for the UAV. Experimental results demonstrate the system's effectiveness in accurately recognizing and responding to hand gestures. The multimodal fusion of FMCW radar and vision sensors enables a robust and versatile gesture-based control interface.
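As a rough illustration of the fusion stage described above, here is a minimal late-fusion sketch that assumes each modality already outputs per-class probabilities; the class vocabulary and weighting are hypothetical, and the paper's actual fusion scheme may differ.

```python
import numpy as np

def fuse_predictions(p_radar: np.ndarray, p_vision: np.ndarray,
                     w_radar: float = 0.4) -> int:
    """Weighted late fusion of per-class probability vectors from two modalities."""
    p = w_radar * p_radar + (1.0 - w_radar) * p_vision
    return int(np.argmax(p))

# Hypothetical outputs for a 3-gesture vocabulary (e.g., takeoff, land, hover).
p_radar = np.array([0.2, 0.5, 0.3])
p_vision = np.array([0.6, 0.3, 0.1])
print(fuse_predictions(p_radar, p_vision))  # index of the fused class
```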
8

Yu, Hengcheng, and Zhengyu Chen. "Research on contactless control of elevator based on machine vision." Highlights in Science, Engineering and Technology 7 (August 3, 2022): 89–94. http://dx.doi.org/10.54097/hset.v7i.1022.

Abstract:
Aiming at the problem of cross-infection caused by public elevator buttons during the COVID-19 epidemic, a non-contact elevator button control gesture recognition system based on machine vision is designed. To improve the detection speed of gesture recognition, an improved YOLOv5_shff algorithm is proposed that combines spatial pyramid pooling (SPP) and replaces the YOLOv5 backbone with the lightweight ShuffleNetV2 model. In gesture recognition tests, the detection speed of the YOLOv5_shff algorithm is 14% higher than the original model's, and its detection accuracy is 0.1% higher. With the improved YOLOv5_shff algorithm as its core, a gesture recognition system applicable to elevator button control is designed. The experimental data show that the gesture recognition accuracy for controlling elevator buttons reaches 99.3%, which meets the requirements of non-contact control of public elevators.
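The system builds on YOLOv5; a minimal inference sketch using the public ultralytics/yolov5 torch.hub entry point is shown below. The weights file and image name are hypothetical, and the paper's ShuffleNetV2 backbone modification is not reproduced here.

```python
import torch

# Load a custom-trained YOLOv5 model (hypothetical weights file with gesture classes).
model = torch.hub.load("ultralytics/yolov5", "custom", path="gesture_yolov5.pt")
model.conf = 0.5  # confidence threshold

results = model("elevator_frame.jpg")   # path, URL, or numpy image
detections = results.pandas().xyxy[0]   # boxes, confidences, class names
for _, det in detections.iterrows():
    print(det["name"], float(det["confidence"]))
```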
9

Komang Somawirata, I., and Fitri Utaminingrum. "Smart wheelchair controlled by head gesture based on vision." Journal of Physics: Conference Series 2497, no. 1 (2023): 012011. http://dx.doi.org/10.1088/1742-6596/2497/1/012011.

Abstract:
Head gesture recognition has been developed using a variety of devices, most containing a sensor such as a gyroscope or an accelerometer for determining the direction and magnitude of movement. This paper explains how to control a smart wheelchair using head gesture recognition based on computer vision. Using the Haar cascade algorithm to determine the position of the face and nose makes it straightforward to determine the head gesture. We classify head gestures into four classes: look down, look up/center, turn right, and turn left. These four gestures are used to control the smart wheelchair: brake, accelerate, turn right, and turn left. The experimental results show that our system successfully controls the smart wheelchair using head gestures.
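A minimal sketch of Haar-cascade-based head gesture classification in OpenCV follows; the nose cascade file name and the position thresholds are assumptions for illustration, not the authors' exact configuration.

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
# Nose cascades are distributed separately; this file name is an assumption.
nose_cascade = cv2.CascadeClassifier("haarcascade_mcs_nose.xml")

def head_gesture(gray):
    """Classify a head gesture from the nose position relative to the face box."""
    faces = face_cascade.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return None
    fx, fy, fw, fh = faces[0]
    noses = nose_cascade.detectMultiScale(gray[fy:fy + fh, fx:fx + fw], 1.1, 5)
    if len(noses) == 0:
        return None
    nx, ny, nw, nh = noses[0]
    cx, cy = nx + nw / 2, ny + nh / 2      # nose center in face coordinates
    if cx < 0.35 * fw:
        return "turn_left"
    if cx > 0.65 * fw:
        return "turn_right"
    if cy > 0.65 * fh:
        return "look_down"
    return "look_up_center"
```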
10

Yong Xu. "Research on Dynamic Gesture Recognition and Control System based on Machine Vision." Journal of Electrical Systems 20, no. 2 (2024): 616–28. http://dx.doi.org/10.52783/jes.1215.

Abstract:
Hand gesture recognition and control is a new type of human-computer interaction that can provide a more convenient and efficient operation mode by utilizing non-contact gesture recognition technology. This paper presents a lightweight dynamic gesture recognition method for intelligent office presentation control. First, we introduce the concept of hand gesture recognition and review key gesture recognition technologies such as classification. The structure, process, and evaluation indices of the gesture recognition algorithm are described in detail using a convolutional neural network model. In the algorithm verification phase of the experiment, we test and analyze the algorithm using Python, its build environment, and the dataset. In the control experiment, we evaluated the system's ability to control the office application's start, play, next, previous, and exit functions. We achieve 96.3% accuracy on the test set. Experimental results show that the system can recognize a wide range of hand gestures and accurately control the presentation.
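As a rough illustration of a lightweight CNN gesture classifier like the one described, here is a minimal PyTorch sketch; the architecture, input size, and class count are illustrative assumptions rather than the paper's model.

```python
import torch
import torch.nn as nn

class GestureCNN(nn.Module):
    """A small CNN for classifying gesture images (architecture is illustrative)."""
    def __init__(self, n_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)  # assumes 64x64 input

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = GestureCNN()
logits = model(torch.randn(1, 3, 64, 64))  # dummy frame batch
print(logits.argmax(dim=1))
```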
11

Narayanpethkar, Sangamesh. "Computer Vision based Media Control using Hand Gestures." International Journal for Research in Applied Science and Engineering Technology 11, no. 5 (2023): 6642–46. http://dx.doi.org/10.22214/ijraset.2023.52881.

Abstract:
Hand gestures are a form of nonverbal communication that can be used in several fields such as communication between deaf-mute people, robot control, human-computer interaction (HCI), home automation, and medical applications. At this time and age, working with a computer in some capacity is a common task. In most situations, the keyboard and mouse are the primary input devices. However, several problems are associated with excessive use of the same interaction medium, such as health problems brought on by continuous use of input devices. Humans fundamentally communicate using gestures, and it is indeed one of the best ways to communicate. Real-time gesture recognition systems have received great attention in recent years because of their ability to support efficient human-computer interaction. This project implements computer vision and gesture recognition techniques and develops vision-based, low-cost input software for controlling a media player through gestures.
12

Bhumkar, Prathamesh. "HAND GESTURE CONTROLLER." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 05 (2024): 1–5. http://dx.doi.org/10.55041/ijsrem35055.

Abstract:
In human-computer interaction, gesture recognition plays a critical role. Technology has now developed to a level where communicating with computers through a gesture recognition system is possible. With data acquisition means such as cameras and hand-movement sensing now mature, these are of less concern. The desire for human-machine interaction is growing rapidly due to advancements in computer vision technology, and gesture recognition is used extensively in many different fields. Research into vision-based hand gesture recognition is an expanding field, with many studies and papers appearing regularly in research publications and conference proceedings. Our study further assesses the accuracy with which vision-based hand gesture recognition systems work. The three primary phases are hand shape recognition, hand tracking, and transformation of the data to the required command.
Keywords: Deep Learning, Convolutional Neural Networks (CNN), Hand Gesture Controller, Human-Computer Interaction
13

Nyirarugira, Clementine, Hyo-rim Choi, and TaeYong Kim. "Hand Gesture Recognition Using Particle Swarm Movement." Mathematical Problems in Engineering 2016 (2016): 1–8. http://dx.doi.org/10.1155/2016/1919824.

Abstract:
We present a gesture recognition method derived from particle swarm movement for free-air hand gesture recognition. Online gesture recognition remains a difficult problem due to uncertainty in vision-based gesture boundary detection methods. We suggest an automated process of segmenting meaningful gesture trajectories based on particle swarm movement. A subgesture detection and reasoning method is incorporated in the proposed recognizer to avoid premature gesture spotting. Evaluation of the proposed method shows promising recognition results: 97.6% on preisolated gestures, 94.9% on stream gestures with assistive boundary indicators, and 94.2% for blind gesture spotting on a digit gesture vocabulary. The proposed recognizer requires fewer computational resources; thus it is a good candidate for real-time applications.
14

Wang, Xianghan, Jie Jiang, Yingmei Wei, Lai Kang, and Yingying Gao. "Research on Gesture Recognition Method Based on Computer Vision." MATEC Web of Conferences 232 (2018): 03042. http://dx.doi.org/10.1051/matecconf/201823203042.

Abstract:
Gesture recognition is an important mode of human-computer interaction. Over time, people are no longer satisfied with gesture recognition based on wearable devices, but hope to perform gesture recognition in a more natural way. Computer vision-based gesture recognition can convey human feelings and instructions to computers conveniently and efficiently, and significantly improve the efficiency of human-computer interaction. Gesture recognition based on computer vision mainly relies on hidden Markov models, the dynamic time warping algorithm, and neural network algorithms. The process is roughly divided into three steps: image collection, hand segmentation, and gesture recognition and classification. This paper reviews computer vision-based gesture recognition methods of the past 20 years, analyzes the state of research in China and abroad, summarizes current developments and the advantages and disadvantages of different gesture recognition methods, and looks forward to the development trend of gesture recognition technology in the next stage.
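Since the review names hidden Markov models as a mainstream approach, a minimal sketch using the hmmlearn library follows: one HMM is fitted per gesture class, and a query sequence is assigned to the class with the highest log-likelihood. The state count and the random training data are placeholders.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_gesture_hmm(sequences):
    """Fit one HMM on stacked feature sequences (rows = frames) with lengths."""
    X = np.vstack(sequences)
    lengths = [len(s) for s in sequences]
    model = GaussianHMM(n_components=4, covariance_type="diag", n_iter=50)
    model.fit(X, lengths)
    return model

# Hypothetical per-class training sequences of 2D trajectory features.
models = {name: train_gesture_hmm(seqs) for name, seqs in {
    "swipe": [np.random.rand(30, 2) for _ in range(5)],
    "circle": [np.random.rand(40, 2) for _ in range(5)],
}.items()}

query = np.random.rand(35, 2)
label = max(models, key=lambda k: models[k].score(query))  # max log-likelihood
print(label)
```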
15

N. Balaji, G., S. V. Suryanarayana, and C. Veeramani. "Invariant Hand Gesture Recognition System." International Journal of Engineering & Technology 7, no. 4.6 (2018): 299. http://dx.doi.org/10.14419/ijet.v7i4.6.20717.

Abstract:
Hand gesture recognition plays a vital role in numerous applications, which range from mobile phones to 3D analysis of anatomy and from gaming to medical science. In a large portion of research and current commercial applications, hand gesture recognition has been implemented using either vision-based strategies or sensor-based gloves, where colored markers or gloves are used to capture the gestures. Another essential issue associated with vision-based procedures is illumination conditions: the threshold used for segmentation must change with light variations. This paper proposes a system that extracts the gesture part from the hand image by preprocessing, followed by extraction of an orientation-histogram-based feature. To recognize the gestures, the extracted HOG feature vectors are fed to a support vector machine (SVM). The proposed system is tested with 84 images and achieves an accuracy of 94.04%.
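A minimal sketch of the HOG-plus-SVM pipeline described above, using scikit-image and scikit-learn; the image size, HOG parameters, and random training data are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def hog_features(gray_64x64: np.ndarray) -> np.ndarray:
    """Orientation-histogram features for one preprocessed hand image."""
    return hog(gray_64x64, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))

# Hypothetical dataset: preprocessed 64x64 grayscale hand crops and labels.
X = np.stack([hog_features(np.random.rand(64, 64)) for _ in range(84)])
y = np.random.randint(0, 6, size=84)

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:1]))
```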
16

N. Balaji, G., S. V. Suryanarayana, and C. Veeramani. "Invariant Hand Gesture Recognition System." International Journal of Engineering & Technology 7, no. 4.6 (2018): 299. http://dx.doi.org/10.14419/ijet.v7i4.6.21196.

Abstract:
Hand gesture recognition plays a vital role in numerous applications, which range from mobile phones to 3D analysis of anatomy and from gaming to medical science. In a large portion of research and current commercial applications, hand gesture recognition has been implemented using either vision-based strategies or sensor-based gloves, where colored markers or gloves are used to capture the gestures. Another essential issue associated with vision-based procedures is illumination conditions: the threshold used for segmentation must change with light variations. This paper proposes a system that extracts the gesture part from the hand image by preprocessing, followed by extraction of an orientation-histogram-based feature. To recognize the gestures, the extracted HOG feature vectors are fed to a support vector machine (SVM). The proposed system is tested with 84 images and achieves an accuracy of 94.04%.
17

Jiang, Du, Zujia Zheng, Gongfa Li, et al. "Gesture recognition based on binocular vision." Cluster Computing 22, S6 (2018): 13261–71. http://dx.doi.org/10.1007/s10586-018-1844-5.

18

Venkateswarlu, Dr S. China. "Convolutional Neural Network for Hand Gesture Recognition Using 8 Different Gestures." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 09, no. 05 (2025): 1–9. https://doi.org/10.55041/ijsrem48820.

Abstract:
Hand gestures are the main method of communication for people who are hearing-impaired, which poses a difficulty for millions of individuals worldwide when engaging with those who do not have hearing impairments. The significance of technology in enhancing accessibility, and thereby increasing the quality of life for individuals with hearing impairments, is universally recognized. Therefore, this study conducts a systematic review of the existing literature on hand gesture recognition, with a particular focus on methods that apply vision-, sensor-, and hybrid-based approaches. The review covers the period from 2018 to 2023, making use of prominent databases including IEEE Xplore, Science Direct, Scopus, and Web of Science. The chosen articles were carefully examined according to predetermined inclusion and exclusion criteria. Our main focus was on evaluating the hand gesture representation, data acquisition, and accuracy of vision-, sensor-, and hybrid-based methods for recognizing hand gestures. Among the studies analysed, recognition accuracy in signer-dependent scenarios varies from 64% to 98%, with an average of 87.9%; in signer-independent scenarios, it ranges from 52% to 98%, with an average of 79%. The problems observed in continuous gesture identification highlight the need for further research to improve the practical feasibility of vision-based gesture recognition systems. The findings also indicate that dataset size continues to be a significant obstacle to hand gesture detection. Hence, this study seeks to provide a guide for future research by examining the academic motivations, challenges, and recommendations in the developing field of sign language recognition.
Keywords: sign language recognition, dynamic hand gesture recognition, vision-based hand gesture, sensor-based hand gesture, hybrid-based hand gesture, classification, feature extraction, processing
19

Zheng, Zepei. "Human Gesture Recognition in Computer Vision Research." SHS Web of Conferences 144 (2022): 03011. http://dx.doi.org/10.1051/shsconf/202214403011.

Abstract:
Human gesture recognition is a popular topic in computer vision research, since it provides the technological expertise required to advance the interaction between people and computers, virtual environments, smart surveillance, motion tracking, and other domains. Extraction of the human skeleton is a rather typical gesture recognition approach using existing technologies based on two-dimensional human gesture detection. Likewise, it cannot be overlooked that objects in the surrounding environment give some information about human gestures. To semantically recognize the posture of the human body, the logic system presented in this research integrates the components recognized in the visual environment with the human skeletal position. In principle, it can improve the precision of posture recognition and semantically represent people's actions. As such, the paper suggests an approach for recognizing human gestures and for increasing the amount of information obtained from image analysis, so as to enhance interaction between humans and computers.
20

Heickal, Hasnain, Tao Zhang, and Md Hasanuzzaman. "Computer Vision-Based Real-Time 3D Gesture Recognition Using Depth Image." International Journal of Image and Graphics 15, no. 01 (2015): 1550004. http://dx.doi.org/10.1142/s0219467815500047.

Abstract:
Gesture is one of the fundamental modes of natural human-machine interaction. To understand gestures, the system should be able to interpret 3D human movements. This paper presents a computer vision-based real-time 3D gesture recognition system using depth images, which tracks the 3D joint positions of the head, neck, shoulders, arms, hands, and legs. Tracking is done by a Kinect motion sensor with the OpenNI API, and 3D motion gestures are recognized using the movement trajectories of those joints. The user-to-Kinect distance is adapted using the proposed center of gravity (COG) correction method, and 3D joint positions are normalized using the proposed joint position normalization method. For gesture learning and recognition, data mining classification algorithms such as Naive Bayes and neural networks are used. The system is trained to recognize 12 gestures used by umpires in a cricket match, using about 2000 training instances covering 12 gestures from 15 persons. Tested using 5-fold cross-validation, the system achieved 98.11% accuracy with the neural network and 88.84% accuracy with the Naive Bayes classification method.
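The COG correction and joint position normalization are described only at a high level; below is a minimal sketch of one plausible reading, centering the 3D joints on their center of gravity and scaling by a body-size reference. The details are assumptions, not the paper's exact method.

```python
import numpy as np

def normalize_joints(joints_xyz: np.ndarray) -> np.ndarray:
    """Center 3D joint positions on their center of gravity and scale them.

    joints_xyz: (n_joints, 3) array from a depth sensor. The scaling
    reference (mean joint distance from the COG) is an assumption.
    """
    cog = joints_xyz.mean(axis=0)            # center-of-gravity correction
    centered = joints_xyz - cog
    scale = np.linalg.norm(centered, axis=1).mean()
    return centered / scale                  # distance-invariant coordinates

skeleton = np.random.rand(15, 3)  # e.g., head, neck, shoulders, arms, legs
print(normalize_joints(skeleton).shape)
```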
21

Oudah, Munir, Ali Al-Naji, and Javaan Chahl. "Hand Gesture Recognition Based on Computer Vision: A Review of Techniques." Journal of Imaging 6, no. 8 (2020): 73. http://dx.doi.org/10.3390/jimaging6080073.

Abstract:
Hand gestures are a form of nonverbal communication that can be used in several fields such as communication between deaf-mute people, robot control, human–computer interaction (HCI), home automation and medical applications. Research papers based on hand gestures have adopted many different techniques, including those based on instrumented sensor technology and computer vision. In other words, the hand sign can be classified under many headings, such as posture and gesture, as well as dynamic and static, or a hybrid of the two. This paper focuses on a review of the literature on hand gesture techniques and introduces their merits and limitations under different circumstances. In addition, it tabulates the performance of these methods, focusing on computer vision techniques that deal with the similarity and difference points, technique of hand segmentation used, classification algorithms and drawbacks, number and types of gestures, dataset used, detection range (distance) and type of camera used. This paper is a thorough general overview of hand gesture methods with a brief discussion of some possible applications.
22

Newby, Gregory B. "Gesture Recognition Based upon Statistical Similarity." Presence: Teleoperators and Virtual Environments 3, no. 3 (1994): 236–43. http://dx.doi.org/10.1162/pres.1994.3.3.236.

Abstract:
One of the improvements virtual reality offers traditional human-computer interfaces is that it enables the user to interact with virtual objects using gestures. The use of natural hand gestures for computer input provides opportunities for direct manipulation in computing environments, but not without some challenges. The mapping of a human gesture onto a particular system function is not nearly so easy as mapping with a keyboard or mouse. Reasons for this difficulty include individual variations in the exact gesture movement, the problem of knowing when a gesture starts and ends, and variation in the relative positions of other body parts that might help to identify a gesture but are not measured. A further difficulty stems from limitations on the number of gestures that a person can reliably remember and reproduce. This paper describes work on the statistical recognition of gestures based on the sum of squares. A DataGlove™ was employed to measure finger position and “train” software to recognize the letters and numbers of the American Sign Language (ASL) manual alphabet. This technique for gesture recognition is more effective than methods commonly employed in VR applications in that it can distinguish dozens of gestures and is not bound by the input sequences of a particular user. The work described here is limited in that it examines only gestures that do not occur across time. Applications for speakers of ASL and for VR are discussed, and future directions for gesture recognition research are introduced. These include adding a motion tracker and potential for recognizing gestures that do occur across time.
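The sum-of-squares recognition described above amounts to nearest-template matching on glove readings; a minimal sketch follows, with the template set and glove vector as hypothetical placeholders.

```python
import numpy as np

# Hypothetical stored templates: mean finger-joint readings per ASL letter.
templates = {"a": np.random.rand(10), "b": np.random.rand(10)}

def recognize(glove_reading: np.ndarray) -> str:
    """Pick the template with the smallest sum of squared differences."""
    return min(templates,
               key=lambda k: float(np.sum((templates[k] - glove_reading) ** 2)))

print(recognize(np.random.rand(10)))
```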
23

Faki, Aariz. "Gesture Control Drone: Using Gloves." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 09, no. 02 (2025): 1–9. https://doi.org/10.55041/ijsrem41378.

Abstract:
Gesture-controlled drones represent a significant advancement in human-computer interaction, allowing users to operate drones using simple hand movements without the need for traditional controllers. This technology utilizes computer vision, machine learning, and sensor-based systems to interpret gestures and translate them into drone commands such as takeoff, landing, movement, and hovering. A typical gesture-controlled drone employs a combination of cameras, accelerometers, gyroscopes, and deep learning models to recognize predefined gestures in real-time. Image processing techniques, such as OpenCV and deep neural networks, enhance the accuracy of gesture recognition, ensuring seamless communication between the user and the drone.
Keywords: Gesture Recognition, Drone Control, Computer Vision
24

Lin, Weikun. "A Systematic Review of Computer Vision-Based Virtual Conference Assistants and Gesture Recognition." Journal of Computer Technology and Applied Mathematics 1, no. 4 (2024): 28–35. https://doi.org/10.5281/zenodo.13889718.

Abstract:
In the process of introducing gesture recognition, it is essential to explore its technical background and implementation methods. Gesture recognition algorithms based on deep learning perform exceptionally well when processing real-time video streams. These algorithms can extract gesture features and classify them to identify user intentions. For instance, analyzing gesture images using Convolutional Neural Networks (CNN) can effectively enhance recognition accuracy and real-time performance. Additionally, combining optical flow methods with object detection techniques allows for real-time tracking of user hand movements, leading to more precise recognition results. Factors such as changes in ambient lighting, cluttered backgrounds, and the diversity of user gestures can all impact recognition accuracy. Therefore, researchers need to continuously optimize algorithms to improve the robustness and adaptability of the system. At the same time, when designing virtual conference assistants, the user interface's friendliness and usability should also be considered, enabling users of varying technical skill levels to use the system with ease.
25

Wang, Huihui, Bo Ru, Xin Miao, et al. "MEMS Devices-Based Hand Gesture Recognition via Wearable Computing." Micromachines 14, no. 5 (2023): 947. http://dx.doi.org/10.3390/mi14050947.

Abstract:
Gesture recognition has found widespread applications in various fields, such as virtual reality, medical diagnosis, and robot interaction. The existing mainstream gesture-recognition methods are primarily divided into two categories: inertial-sensor-based and camera-vision-based methods. However, optical detection still has limitations such as reflection and occlusion. In this paper, we investigate static and dynamic gesture-recognition methods based on miniature inertial sensors. Hand-gesture data are obtained through a data glove and preprocessed using Butterworth low-pass filtering and normalization algorithms. Magnetometer correction is performed using ellipsoidal fitting methods. An auxiliary segmentation algorithm is employed to segment the gesture data, and a gesture dataset is constructed. For static gesture recognition, we focus on four machine learning algorithms, namely support vector machine (SVM), backpropagation neural network (BP), decision tree (DT), and random forest (RF). We evaluate the model prediction performance through cross-validation comparison. For dynamic gesture recognition, we investigate the recognition of 10 dynamic gestures using Hidden Markov Models (HMM) and Attention-Biased Mechanisms for Bidirectional Long- and Short-Term Memory Neural Network Models (Attention-BiLSTM). We analyze the differences in accuracy for complex dynamic gesture recognition with different feature datasets and compare them with the prediction results of the traditional long- and short-term memory neural network model (LSTM). Experimental results demonstrate that the random forest algorithm achieves the highest recognition accuracy and shortest recognition time for static gestures. Moreover, the addition of the attention mechanism significantly improves the recognition accuracy of the LSTM model for dynamic gestures, with a prediction accuracy of 98.3%, based on the original six-axis dataset.
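The preprocessing described above combines Butterworth low-pass filtering with normalization; a minimal SciPy sketch of that step follows. The sampling rate, cutoff, and filter order are illustrative assumptions, not the paper's parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def lowpass(signal: np.ndarray, fs: float, cutoff: float = 5.0, order: int = 4):
    """Zero-phase Butterworth low-pass filtering of one IMU channel.

    fs is the sampling rate in Hz; the 5 Hz cutoff and 4th order are
    illustrative choices, not the paper's settings.
    """
    b, a = butter(order, cutoff / (0.5 * fs), btype="low")
    return filtfilt(b, a, signal)

raw = np.random.randn(500)            # hypothetical accelerometer axis at 100 Hz
smooth = lowpass(raw, fs=100.0)
norm = (smooth - smooth.min()) / (smooth.max() - smooth.min())  # min-max normalization
```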
26

Yaseen, Oh-Jin Kwon, Jaeho Kim, Jinhee Lee, and Faiz Ullah. "Vision-Based Gesture-Driven Drone Control in a Metaverse-Inspired 3D Simulation Environment." Drones 9, no. 2 (2025): 92. https://doi.org/10.3390/drones9020092.

Abstract:
Unlike traditional remote control systems for controlling unmanned aerial vehicles (UAVs) and drones, active research is being carried out in the domain of vision-based hand gesture recognition systems for drone control. However, in contrast to static and sensor-based hand gesture recognition, recognizing dynamic hand gestures is challenging due to the complex nature of the multi-dimensional hand gesture data present in 2D images. In a real-time application scenario, performance and safety are crucial. We therefore propose a hybrid lightweight dynamic hand gesture recognition system and a 3D-simulator-based drone control environment for live simulation. We used transfer-learning-based computer vision techniques to detect dynamic hand gestures in real time. Based on the recognized gestures, predetermined commands are selected and sent to a drone simulation environment that runs on a different computer via a socket connection. Without conventional input devices, hand gesture detection integrated with the virtual environment offers a user-friendly and immersive way to control drone motions, improving user interaction. The efficacy of this technique is illustrated through a variety of test situations, highlighting its potential uses in remote-control systems, gaming, and training. The system is tested and evaluated in real time, outperforming state-of-the-art methods. The code utilized in this study is publicly accessible; further details can be found in the "Data Availability Statement".
27

Yoo, Minjeong, Yuseung Na, Hamin Song, et al. "Motion Estimation and Hand Gesture Recognition-Based Human–UAV Interaction Approach in Real Time." Sensors 22, no. 7 (2022): 2513. http://dx.doi.org/10.3390/s22072513.

Abstract:
As an alternative to traditional remote controller, research on vision-based hand gesture recognition is being actively conducted in the field of interaction between human and unmanned aerial vehicle (UAV). However, vision-based gesture system has a challenging problem in recognizing the motion of dynamic gesture because it is difficult to estimate the pose of multi-dimensional hand gestures in 2D images. This leads to complex algorithms, including tracking in addition to detection, to recognize dynamic gestures, but they are not suitable for human–UAV interaction (HUI) systems that require safe design with high real-time performance. Therefore, in this paper, we propose a hybrid hand gesture system that combines an inertial measurement unit (IMU)-based motion capture system and a vision-based gesture system to increase real-time performance. First, IMU-based commands and vision-based commands are divided according to whether drone operation commands are continuously input. Second, IMU-based control commands are intuitively mapped to allow the UAV to move in the same direction by utilizing estimated orientation sensed by a thumb-mounted micro-IMU, and vision-based control commands are mapped with hand’s appearance through real-time object detection. The proposed system is verified in a simulation environment through efficiency evaluation with dynamic gestures of the existing vision-based system in addition to usability comparison with traditional joystick controller conducted for applicants with no experience in manipulation. As a result, it proves that it is a safer and more intuitive HUI design with a 0.089 ms processing speed and average lap time that takes about 19 s less than the joystick controller. In other words, it shows that it is viable as an alternative to existing HUI.
28

Liu, Zhe, Cao Pan, and Hongyuan Wang. "Continuous Gesture Sequences Recognition Based on Few-Shot Learning." International Journal of Aerospace Engineering 2022 (October 11, 2022): 1–12. http://dx.doi.org/10.1155/2022/7868142.

Abstract:
A large number of demands for on-orbit space services, which ensure that on-orbit systems complete their specified tasks, are foreseeable, and efficiency and security are the most significant factors when carrying out an on-orbit mission. Proper gesture recognition solutions can improve human-computer interaction efficiency in such operations. In actual situations, operations are complex and changeable, so the gestures used in interaction are also difficult to predict in advance due to the compounding of multiple consecutive gestures. Recognizing such gestures with computer vision (CV) requires complex models trained on large datasets; in real tasks it is often impossible to obtain enough gesture samples to train a complex model, and labeling the collected samples is quite expensive. Aiming at these problems, we propose a few-shot continuous gesture recognition scheme based on RGB video. The scheme uses Mediapipe to detect the key points of each frame in the video stream, decomposes the basic components of gesture features based on the structure of the human palm, and then extracts and combines these basic gesture features with a lightweight autoencoder network. Our scheme achieves 89.73% recognition accuracy on a 5-way 1-shot gesture recognition task with 142 randomly selected gesture instances of 5 categories from the RWTH German fingerspelling dataset.
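A minimal sketch of the Mediapipe keypoint-detection step mentioned above, returning the 21 hand landmarks per frame; the downstream feature decomposition and autoencoder are not reproduced here.

```python
import cv2
import mediapipe as mp
import numpy as np

hands = mp.solutions.hands.Hands(static_image_mode=False, max_num_hands=1)

def hand_keypoints(frame_bgr):
    """Return the 21 normalized (x, y, z) hand landmarks, or None if no hand."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    result = hands.process(rgb)
    if not result.multi_hand_landmarks:
        return None
    lm = result.multi_hand_landmarks[0].landmark
    return np.array([[p.x, p.y, p.z] for p in lm])  # shape (21, 3)
```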
29

Hu, Hong, Jian Gang Chao, and Zai Qian Zhao. "Study of Vision-Based Hand Gesture Recognition System for Astronaut Virtual Training." Advanced Materials Research 998-999 (July 2014): 1062–65. http://dx.doi.org/10.4028/www.scientific.net/amr.998-999.1062.

Abstract:
With the fast development of vision-based hand gesture recognition, it is possible to apply the technology to astronaut virtual training. To solve problems of hand gesture recognition in future virtual training and to provide unrestricted, natural training for astronauts, this paper proposes a vision-based hand gesture recognition method and implements a hierarchical gesture recognition system that provides a gesture-driven interactive interface for an astronaut virtual training system. The experimental results showed that this recognition system can be used to support astronaut training.
30

H T, Panduranga, and Mani C. "Non – Vision Based Sensors for Dynamic Hand Gesture Recognition Systems: A Comparative Study." International Journal of Engineering & Technology 7, no. 3.12 (2018): 1175. http://dx.doi.org/10.14419/ijet.v7i3.12.17782.

Abstract:
Gestures are a type of configuration associated with motion of the concerned body part, signifying meaningful information, expressing motion, or intending to command and control. A wide range of sensors based on different technologies is available on the market. The gesture recognition process involves steps such as data acquisition from the sensor, segmentation, an algorithm taking gesture data as input, an algorithm to extract parameters, and an algorithm to classify hand gestures. Three-dimensional hand gestures have been widely accepted for advanced applications such as the creation of virtual worlds, in which users can naturally interact with or play a musical instrument without any physical device present. Techniques for dynamic finger gesture recognition can be classified as vision-based and wearable-sensor-based. The purpose of this paper is to compare various non-vision-based sensors with different tracking technologies, summarizing their advantages and drawbacks to help investigators and researchers working in this area.
31

Yu, Cun-jiang, Guo-bao Zhou, Cheng-wei Yan, xiao-ying Ding, and Cheng-shuo Li. "An Improved Gesture Recognition Model Based on Mini-Xception." Journal of Physics: Conference Series 2400, no. 1 (2022): 012020. http://dx.doi.org/10.1088/1742-6596/2400/1/012020.

Abstract:
With the rapid development of artificial intelligence technology, gestures have become mainstream in the field of human-computer interaction because of their simplicity, ease of understanding, and contactless nature. Compared with early data gloves, the vision-based contactless gesture recognition interaction method has obvious advantages. However, the variability of gestures themselves, the complexity of the background, and the influence of different lighting conditions affect the accuracy of gesture recognition. With the rapid development of deep learning technology, gesture recognition has achieved impressive accuracy. To improve gesture recognition accuracy against complex backgrounds, this paper optimizes the lightweight mini-Xception convolutional neural network model, introduces transfer learning, and integrates YOLOv4-tiny into the mini-Xception model to generate a new network model, YT_mini-Xception. Experimental verification shows that with the YT_mini-Xception network model, the average accuracy of 0-9 gesture recognition on a complex-background dataset is 96.64% and the average recognition time is 39.8 milliseconds, achieving the expected goal.
32

Kotavenuka, Swetha, Harshitha Kodakandla, Nimmakayala Sai Krishna, and Dr S. P. V. Subba Rao. "Hand Gesture Recognition." International Journal for Research in Applied Science and Engineering Technology 11, no. 1 (2023): 331–35. http://dx.doi.org/10.22214/ijraset.2023.48557.

Abstract:
This work presents a computer-vision-based application for recognizing hand gestures. A live video feed is captured by a camera, and a still image is extracted from that feed with the aid of an interface. The system is trained at least once on each counting hand gesture (one, two, three, four, and five). After that, the system is given a test gesture to see whether it can identify it. Several algorithms capable of distinguishing a hand gesture were studied; the highest accuracy was achieved using the convolutional neural network known as AlexNet. Traditionally, systems have used data gloves or markers as a means of input, whereas here the user can make natural hand gestures in front of the camera. The implemented system serves as an extendable basis for future work toward a fully robust hand gesture recognition system, which is still the subject of intensive research and development.
33

Wu, Bi-Xiao, Chen-Guang Yang, and Jun-Pei Zhong. "Research on Transfer Learning of Vision-based Gesture Recognition." International Journal of Automation and Computing 18, no. 3 (2021): 422–31. http://dx.doi.org/10.1007/s11633-020-1273-9.

Abstract:
Gesture recognition has been widely used for human-robot interaction. At present, a problem in gesture recognition is that researchers have not used knowledge learned in existing domains to discover and recognize gestures in new domains. For each new domain, it is necessary to collect and annotate a large amount of data, and the training of the algorithm does not benefit from prior knowledge, leading to redundant computation and excessive time investment. To address this problem, this paper proposes a method that can transfer gesture data between different domains. We use a red-green-blue (RGB) camera to collect images of the gestures, and use Leap Motion to collect the coordinates of 21 joint points of the human hand. Then, we extract a set of novel feature descriptors from the two different data distributions for the study of transfer learning. This paper compares the effects of three classification algorithms, i.e., support vector machine (SVM), broad learning system (BLS), and deep learning (DL). We also compare learning performance with and without the joint distribution adaptation (JDA) algorithm. The experimental results show that the proposed method can effectively solve the transfer problem between an RGB camera and Leap Motion. In addition, we found that when using DL to classify the data, excessive training on the source domain may reduce recognition accuracy in the target domain.
34

Kolhe, Ashwini, R. R. Itkarkar, and Anilkumar V. Nandani. "Robust Part-Based Hand Gesture Recognition Using Finger-Earth Mover’s Distance." International Journal of Advanced Research in Computer Science and Software Engineering 7, no. 7 (2017): 131. http://dx.doi.org/10.23956/ijarcsse/v7i7/0196.

Abstract:
Hand gesture recognition is of great importance for human-computer interaction (HCI) because of its extensive applications in virtual reality, sign language recognition, and computer games. Despite much previous work, traditional vision-based hand gesture recognition methods are still far from satisfactory for real-life applications. Because of the nature of optical sensing, the quality of the captured images is sensitive to lighting conditions and cluttered backgrounds, so optical-sensor-based methods are usually unable to detect and track hands robustly, which largely affects the performance of hand gesture recognition. Compared to the entire human body, the hand is a smaller object with more complex articulations that is more easily affected by segmentation errors; recognizing hand gestures is thus a very challenging problem. This work focuses on building a robust part-based hand gesture recognition system. To handle the noisy hand shapes obtained from a digital camera, we propose a novel distance metric, Finger-Earth Mover's Distance (FEMD), to measure the dissimilarity between hand shapes. As it matches only the finger parts rather than the whole hand, it can better distinguish hand gestures with slight differences. The experiments demonstrate that the proposed system's mean accuracy, measured on a 6-gesture database, is 80.4%.
35

Jhaung, Yu-Chiao, Yu-Ming Lin, Chiao Zha, Jenq-Shiou Leu, and Mario Köppen. "Implementing a Hand Gesture Recognition System Based on Range-Doppler Map." Sensors 22, no. 11 (2022): 4260. http://dx.doi.org/10.3390/s22114260.

Abstract:
There have been several studies of hand gesture recognition for human-machine interfaces. In early work, most solutions were vision-based and usually had privacy problems that made them unusable in some scenarios. To address the privacy issues, more and more research on non-vision-based hand gesture recognition techniques has been proposed. This paper proposes a dynamic hand gesture system based on 60 GHz FMCW radar that can be used for contactless device control. We receive the radar signals of hand gestures and transform them into human-understandable domains such as range, velocity, and angle. With these signatures, we can customize our system to different scenarios. We propose an end-to-end trained deep learning model (a neural network with long short-term memory) that extracts features from the transformed radar signals and classifies them into hand gesture labels. In our training data collection effort, a camera is used only to support labeling the hand gesture data. The accuracy of our model reaches 98%.
36

Tasfia, Rifa, Zeratul Izzah Mohd Yusoh, Adria Binte Habib, and Tousif Mohaimen. "An overview of hand gesture recognition based on computer vision." International Journal of Electrical and Computer Engineering (IJECE) 14, no. 4 (2024): 4636. http://dx.doi.org/10.11591/ijece.v14i4.pp4636-4645.

Abstract:
Hand gesture recognition has emerged as one of the foremost areas of development within pattern recognition. Numerous studies have explored methodologies grounded in computer vision within this domain. Despite extensive research, there is still a need for a more thorough evaluation of the efficiency of various methods in different environments, along with the challenges encountered in applying them. The focal point of this paper is the comparison of different research in the domain of vision-based hand gesture recognition. The objective is to identify the most prominent methods by reviewing their efficiency. Concurrently, the paper presents potential solutions to challenges faced in different research. A comparative analysis is centered on traditional methods and neural-network-based approaches such as random forest, long short-term memory (LSTM), heatmap-based methods, and you only look once (YOLO), considering their efficacy. Convolutional neural network-based algorithms performed best at recognizing gestures and offered effective solutions to the challenges researchers faced. In essence, the findings of this review aim to contribute to future implementations and the discovery of more efficient approaches in the gesture recognition sector.
37

Zhuang, Guang Li, Jia Lin Tang, Shu Fen Chen, Xi Ying Li, and Bin Hua Su. "Study on the Process of 3D Gesture Recognition Technology Based on Computer Vision." Applied Mechanics and Materials 643 (September 2014): 201–7. http://dx.doi.org/10.4028/www.scientific.net/amm.643.201.

Abstract:
This paper presents a 3D gesture recognition technology centered on machine vision. Based on a large number of experiments, the paper summarizes and introduces existing gesture recognition technology, the key research topics of gesture recognition, and the history of the development of gesture recognition technology. It then investigates the main technologies of gesture recognition. The experimental results show that the method can realize real-time, stable 3D gesture recognition in video sequences and achieve good recognition results.
38

M, Chandraman, Santhiyakumari N, Ganesh Venkateshwaran S, Damodharan M, Santhiya C, and Subalakshimi V. "EchoGesture Communication: Gesture-based Systems for Individuals with Disabilities." Journal of Electronics and Informatics 6, no. 3 (2024): 253–61. http://dx.doi.org/10.36548/jei.2024.3.004.

Abstract:
EchoGesture Communication revolutionizes the interaction of differently-abled individuals using hand gestures. People with disabilities often face difficulties in using conventional electronic gadgets. The proposed study utilizes sensors, a microcontroller, computer vision, and machine learning to enable real-time recognition of hand gestures, facilitating effective communication. Additionally, a Convolutional Neural Network (CNN) is used to achieve accurate gesture recognition. The proposed system allows individuals with disabilities to communicate effectively using hand gestures.
39

Wang, Zhaocheng, Guangxuan Hu, Shuo Zhao, Ruonan Wang, Hailong Kang, and Feng Luo. "Local Pyramid Vision Transformer: Millimeter-Wave Radar Gesture Recognition Based on Transformer with Integrated Local and Global Awareness." Remote Sensing 16, no. 23 (2024): 4602. https://doi.org/10.3390/rs16234602.

Abstract:
A millimeter-wave radar is widely accepted by the public due to its low susceptibility to interference, such as changes in light, and its protection of personal privacy. With the development of deep learning theory, deep learning methods have become dominant in the millimeter-wave radar field, usually using convolutional neural networks for feature extraction. In recent years, transformer networks have also been highly valued by researchers due to their parallel processing and long-distance dependency modeling capabilities. However, traditional convolutional neural networks (CNNs) and vision transformers each have their limitations: CNNs usually overlook the global features of images, and vision transformers may neglect local image continuity; both can impede gesture recognition performance. In addition, whether CNN or transformer, implementation is hindered by the scarcity of public radar gesture datasets. To address these limitations, this paper proposes a new recognition method using a local pyramid vision transformer (LPVT) based on millimeter-wave radar. LPVT can capture global and local features in dynamic gesture spectrograms, ultimately improving gesture recognition ability. We mainly carried out two tasks: building the corresponding dataset and performing gesture recognition. First, we constructed a gesture dataset for training, using a 77 GHz radar to collect the echo signals of gestures and preprocessing them. Second, we propose the LPVT network specifically designed for gesture recognition tasks. By integrating local sensing into the globally focused transformer, we improve its capacity to capture both global and local features in dynamic gesture spectrograms. Experimental results on the constructed dataset show that the proposed LPVT network achieved a gesture recognition accuracy of 92.2%, exceeding the performance of other networks.
APA, Harvard, Vancouver, ISO, and other styles
40

Chen, Shang-Liang, and Li-Wu Huang. "Using Deep Learning Technology to Realize the Automatic Control Program of Robot Arm Based on Hand Gesture Recognition." International Journal of Engineering and Technology Innovation 11, no. 4 (2021): 241–50. http://dx.doi.org/10.46604/ijeti.2021.7342.

Full text
Abstract:
In this study, robot arm control, computer vision, and deep learning technologies are combined to realize an automatic control program. The program comprises three functional modules: a hand gesture recognition module, a robot arm control module, and a communication module. The hand gesture recognition module captures images of the user's hand gestures and recognizes their features using the YOLOv4 algorithm. The recognition results are transmitted to the robot arm control module by the communication module. Finally, the received hand gesture commands are analyzed and executed by the robot arm control module. With the proposed program, engineers can interact with the robot arm through hand gestures, teach the robot arm to record trajectories with simple hand movements, and call different scripts to satisfy robot motion requirements in an actual production environment.
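As a rough sketch of the communication step described above, the snippet below maps a recognized gesture label to a robot-arm command and sends it over TCP. The label set, command strings, host, and port are all hypothetical, and detect_gesture stands in for the YOLOv4 recognition module, whose details the abstract does not give.

```python
# Hypothetical gesture-to-command relay (the paper's exact protocol is
# not specified); recognition is abstracted behind detect_gesture().
import json
import socket

GESTURE_COMMANDS = {          # assumed mapping, not from the paper
    "palm_open": "STOP",
    "point_left": "MOVE_LEFT",
    "point_right": "MOVE_RIGHT",
    "fist": "GRIP",
}

def detect_gesture(frame):
    """Placeholder for the YOLOv4 recognition module."""
    raise NotImplementedError

def send_command(label, host="192.168.0.10", port=5000):
    # Look up the command for the recognized label and ship it as JSON.
    command = GESTURE_COMMANDS.get(label)
    if command is None:
        return                     # ignore unknown gestures
    with socket.create_connection((host, port), timeout=2.0) as sock:
        sock.sendall(json.dumps({"cmd": command}).encode("utf-8"))
```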
APA, Harvard, Vancouver, ISO, and other styles
41

Jiang, Hairong, Juan P. Wachs, and Bradley S. Duerstock. "Integrated vision-based system for efficient, semi-automated control of a robotic manipulator." International Journal of Intelligent Computing and Cybernetics 7, no. 3 (2014): 253–66. http://dx.doi.org/10.1108/ijicc-09-2013-0042.

Full text
Abstract:
Purpose – The purpose of this paper is to develop an integrated, computer vision-based system to operate a commercial wheelchair-mounted robotic manipulator (WMRM). In addition, a gesture recognition interface was developed specifically for individuals with upper-level spinal cord injuries, including object tracking and face recognition, to function as an efficient, hands-free WMRM controller. Design/methodology/approach – Two Kinect® cameras were used synergistically to perform a variety of simple object retrieval tasks. One camera interpreted hand gestures and located the operator's face for object positioning, sending these as commands to control the WMRM. The other sensor automatically recognized the different daily living objects selected by the subjects. An object recognition module employing the Speeded Up Robust Features (SURF) algorithm was implemented, and its results were sent as commands for "coarse positioning" of the robotic arm near the selected object. Automatic face detection provided a shortcut for positioning objects close to the subject's face. Findings – The gesture recognition interface incorporated hand detection, tracking, and recognition algorithms, and yielded a recognition accuracy of 97.5 percent for an eight-gesture lexicon. Task completion times were measured to compare manual (gestures only) and semi-manual (gestures, automatic face detection, and object recognition) WMRM control modes. The use of automatic face and object detection significantly reduced the completion times for retrieving a variety of daily living objects. Originality/value – Three computer vision modules were integrated to construct an effective, hands-free interface for individuals with upper-limb mobility impairments to control a WMRM.
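The paper's object recognition module uses SURF; because SURF requires a non-free opencv-contrib build, the sketch below substitutes ORB to show the same match-and-count idea for deciding whether a stored object template appears in the camera view. The ratio and match-count thresholds are assumptions.

```python
# Feature-matching object recognition sketch; ORB is substituted for the
# paper's SURF, which needs the non-free opencv-contrib build.
import cv2

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)

def matches_template(template_gray, scene_gray, ratio=0.75, min_good=20):
    # Detect keypoints and binary descriptors in both images.
    _, des1 = orb.detectAndCompute(template_gray, None)
    _, des2 = orb.detectAndCompute(scene_gray, None)
    if des1 is None or des2 is None:
        return False
    # Lowe's ratio test over 2-nearest-neighbour matches.
    pairs = matcher.knnMatch(des1, des2, k=2)
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good) >= min_good   # threshold is an assumption
```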
APA, Harvard, Vancouver, ISO, and other styles
42

Shah, Pranit, Krishna Pandya, Harsh Shah, and Jay Gandhi. "Survey on Vision based Hand Gesture Recognition." International Journal of Computer Sciences and Engineering 7, no. 5 (2019): 281–88. http://dx.doi.org/10.26438/ijcse/v7i5.281288.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Raheem, S. Abdul, A. Shiva Sai, B. Keerthana, and Asst Prof Mrs A. Amulya. "Vision and Voice Based Hand Gesture Recognition." International Journal for Research in Applied Science and Engineering Technology 11, no. 6 (2023): 690–94. http://dx.doi.org/10.22214/ijraset.2023.53731.

Full text
Abstract:
The development of sign language over time has been remarkable, but it has an unfortunate limitation: not everyone can understand sign language when communicating with a deaf or mute person. For deaf and mute people, hand gestures and sign language are important forms of communication. Communication is difficult without an interpreter, so sign language must be translated to be understandable to the general public. The goal is to increase the participation of deaf and mute people in communication. To incorporate colour, depth, and trajectory information and thereby increase performance, the CNN is fed multiple channels of video streams that include body joint locations in addition to depth cues and colour information. We demonstrate that the proposed model outperforms conventional approaches based on hand-crafted features by testing it on a real dataset gathered using Microsoft Kinect.
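As a sketch of the multi-channel input idea, the snippet below stacks colour, depth, and a joint-location map into one tensor for a CNN; the shapes and channel layout are assumptions rather than the paper's exact encoding.

```python
# Fuse colour, depth, and a body-joint heat map into one CNN input;
# shapes and the encoding are illustrative assumptions.
import numpy as np

H, W = 112, 112
rgb = np.zeros((H, W, 3), dtype=np.float32)      # colour frame, scaled to [0, 1]
depth = np.zeros((H, W, 1), dtype=np.float32)    # Kinect depth, normalized
joints = np.zeros((H, W, 1), dtype=np.float32)   # joint locations rendered
                                                 # as a heat map

sample = np.concatenate([rgb, depth, joints], axis=-1)  # (112, 112, 5)
# A video clip of T frames then becomes a (T, 112, 112, 5) tensor that a
# 3D CNN (or a per-frame 2D CNN) can consume.
clip = np.stack([sample] * 16)                   # dummy 16-frame clip
print(clip.shape)                                # (16, 112, 112, 5)
```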
APA, Harvard, Vancouver, ISO, and other styles
44

Satybaldina, Dina, and Gulzia Kalymova. "Deep learning based static hand gesture recognition." Indonesian Journal of Electrical Engineering and Computer Science 21, no. 1 (2021): 398. http://dx.doi.org/10.11591/ijeecs.v21.i1.pp398-405.

Full text
Abstract:
Hand gesture recognition has become a popular topic in deep learning, offers many application fields for bridging the human–computer barrier, and has a positive impact on our daily life. The primary idea of our project is to acquire static gestures from a depth camera and to process the input images to train a deep convolutional neural network pre-trained on the ImageNet dataset. The proposed system consists of a gesture capture device (Intel® RealSense™ depth camera D435), pre-processing and image segmentation algorithms, a feature extraction algorithm, and object classification. The pre-processing and image segmentation algorithms use computer vision methods from the OpenCV and Intel RealSense libraries. The subsystem for feature extraction and gesture classification is based on a modified VGG-16 built with the TensorFlow and Keras deep learning frameworks. Performance of the static gesture recognition system is evaluated using machine learning metrics. Experimental results show that the proposed model, trained on a database of 2000 images, provides high recognition accuracy at both the training and testing stages.
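A minimal sketch of the transfer-learning setup the abstract describes, using the Keras VGG16 application with the classification head replaced; the input size, head layers, and number of gesture classes are assumptions, not the authors' exact modification.

```python
# VGG16 pre-trained on ImageNet with a new gesture-classification head;
# the frozen base acts as a fixed feature extractor.
import tensorflow as tf

NUM_GESTURES = 10  # assumed size of the gesture vocabulary

base = tf.keras.applications.VGG16(weights="imagenet",
                                   include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False  # keep the ImageNet features fixed

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(NUM_GESTURES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```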
APA, Harvard, Vancouver, ISO, and other styles
45

Satybaldina, Dina, and Gulzia Kalymova. "Deep learning based static hand gesture recognition." Indonesian Journal of Electrical Engineering and Computer Science 21, no. 1 (2021): 398–405. https://doi.org/10.11591/ijeecs.v21.i1.pp398-405.

Full text
Abstract:
Hand gesture recognition has become a popular topic in deep learning, offers many application fields for bridging the human-computer barrier, and has a positive impact on our daily life. The primary idea of our project is to acquire static gestures from a depth camera and to process the input images to train a deep convolutional neural network pre-trained on the ImageNet dataset. The proposed system consists of a gesture capture device (Intel® RealSense™ depth camera D435), pre-processing and image segmentation algorithms, a feature extraction algorithm, and object classification. The pre-processing and image segmentation algorithms use computer vision methods from the OpenCV and Intel RealSense libraries. The subsystem for feature extraction and gesture classification is based on a modified VGG16 built with the TensorFlow and Keras deep learning frameworks. Performance of the static gesture recognition system is evaluated using machine learning metrics. Experimental results show that the proposed model, trained on a database of 2000 images, provides high recognition accuracy at both the training and testing stages.
APA, Harvard, Vancouver, ISO, and other styles
46

Shukla, Akhilesh, Dr Devesh Katiyar, and Mr Gaurav Goel. "Gesture Recognition-based AI Virtual Mouse." International Journal for Research in Applied Science and Engineering Technology 10, no. 3 (2022): 1583–88. http://dx.doi.org/10.22214/ijraset.2022.40937.

Full text
Abstract:
The mouse is an amazing invention in computer technology. Even a Bluetooth or wireless mouse still has limitations, as it requires a battery for power and a dongle to connect to a PC. The proposed gesture-based AI virtual mouse addresses this issue by capturing hand motions and detecting fingertips with a webcam or integrated camera, since gestures are a powerful means of communication between people. Based on hand gestures, the computer can be controlled almost entirely: right-clicking, left-clicking, scrolling, and cursor movement can all be performed without a physical mouse. The proposed system can therefore also help limit the spread of COVID-19 by removing the need for physical contact with shared devices. Keywords: AI, Virtual Mouse, OpenCV, Computer Vision, Mediapipe
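A minimal sketch of the webcam-to-cursor loop such a system needs, using MediaPipe Hands and PyAutoGUI; landmark 8 is the index fingertip, and the direct mapping from normalized camera coordinates to screen coordinates is a deliberate simplification (no clicks, no smoothing).

```python
# Move the OS cursor with the index fingertip; minimal sketch only.
import cv2
import mediapipe as mp
import pyautogui

screen_w, screen_h = pyautogui.size()
hands = mp.solutions.hands.Hands(max_num_hands=1)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB input; OpenCV delivers BGR.
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        tip = result.multi_hand_landmarks[0].landmark[8]  # index fingertip
        # Landmark coordinates are normalized to [0, 1]; scale to the screen.
        pyautogui.moveTo(tip.x * screen_w, tip.y * screen_h)
    cv2.imshow("preview", frame)
    if cv2.waitKey(1) & 0xFF == 27:   # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```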
APA, Harvard, Vancouver, ISO, and other styles
47

Tasfia, Rifa, Mohd Yusoh Zeratul Izzah, Habib Adria Binte, and Tousif Mohaimen. "An overview of hand gesture recognition based on computer vision." International Journal of Electrical and Computer Engineering (IJECE) 14, no. 4 (2024): 4636–45. https://doi.org/10.11591/ijece.v14i4.pp4636-4645.

Full text
Abstract:
Hand gesture recognition is one of the foremost areas within pattern recognition and has gone through several stages of development. Numerous studies have explored methodologies grounded in computer vision within this domain. Despite these extensive research endeavors, there is still a need for a more thorough evaluation of the efficiency of various methods in different environments, along with the challenges encountered during their application. The focal point of this paper is a comparison of different studies in the domain of vision-based hand gesture recognition, with the objective of identifying the most prominent methods by reviewing their efficiency. Concurrently, the paper presents potential solutions to the challenges faced in different studies. A comparative analysis centers on traditional methods such as random forest and on neural approaches such as convolutional neural networks, long short-term memory (LSTM), heatmap-based methods, and you only look once (YOLO), considering their efficacy. Convolutional neural network-based algorithms performed best at recognizing gestures and offered effective solutions to the challenges researchers faced. In essence, the findings of this review aim to contribute to future implementations and the discovery of more efficient approaches in the gesture recognition sector.
APA, Harvard, Vancouver, ISO, and other styles
48

Naveen, Y., and Ch Navya Sree. "GestureFlow: Advanced Hand Gesture Control System." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 09, no. 04 (2025): 1–9. https://doi.org/10.55041/ijsrem44540.

Full text
Abstract:
Our project "GestureFlow: Advanced Hand Gesture Control System" leverages real-time computer vision and deep learning techniques to create a robust, touchless control interface using hand gestures. The system utilizes MediaPipe Hands for efficient hand landmark detection and processes dynamic hand movements and finger configurations to identify a wide range of intuitive gestures such as swipes, pinches, and specific finger patterns. These gestures are mapped to actions like mouse control, clicks, volume adjustment, media playback, screenshot capture, window management, and many more. The system includes smoothed cursor tracking, velocity-based gesture recognition, and responsive command execution to ensure real-time performance. It also offers dynamic visual feedback and adaptive handling of gesture timing to improve precision and usability. Overall, the project presents an accessible, multi-functional human-computer interaction framework aimed at enhancing hands-free control and reducing reliance on traditional input devices in everyday computing environments. Keywords: Hand Gesture Recognition, Human-Computer Interaction, Computer Vision, Deep Learning, MediaPipe, Real-Time Control, Touchless Interface, Gesture Classification, Cursor Navigation, Accessibility, Adaptive Gestures, Visual Feedback.
APA, Harvard, Vancouver, ISO, and other styles
49

Jahnavi, Mudili. "Controlling Computer Using Hand Gestures." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 09, no. 05 (2025): 1–9. https://doi.org/10.55041/ijsrem49260.

Full text
Abstract:
In the realm of Human-Computer Interaction (HCI), the integration of webcams and various sensors has made gesture recognition increasingly accessible and impactful. Hand gestures provide a natural and intuitive mode of communication, enabling seamless interaction between humans and computers. This paper highlights the potential of hand gestures as an effective medium for non-verbal communication and control, with applications spanning multiple domains. The proposed system leverages image processing techniques, sensor technologies, and computer vision to enable gesture-based computer control. Emphasis is placed on the interdisciplinary nature of the research, including its applications in fields such as machine learning, healthcare, and mobile technology. Keywords: Hand Gesture Recognition, Human-Computer Interaction, Sensor Technology, Image Processing, Machine Learning, Android Application, Diabetes Monitoring, Computer Vision
APA, Harvard, Vancouver, ISO, and other styles
50

Ramadhani, Arief, Achmad Rizal, and Erwin Susanto. "Development of Hand Gesture Based Electronic Key Using Microsoft Kinect." MATEC Web of Conferences 218 (2018): 02014. http://dx.doi.org/10.1051/matecconf/201821802014.

Full text
Abstract:
Computer vision is a field of research that can be applied to various subjects. One application of computer vision is the hand gesture recognition system, as hand gestures are one way to interact with computers or machines. In this study, hand gesture recognition was used as a password for an electronic key system. The recognition utilized the depth sensor of the Microsoft Kinect for Xbox 360: the depth sensor captured the hand image, which was segmented using a threshold. By scanning each pixel, we detected the thumb and the number of other fingers that were open. The recognition result was used as a password to unlock the electronic key. The system could recognize nine types of hand gestures, representing the numbers 1 through 9. The average accuracy of the hand gesture recognition system was 97.78% for a single hand sign and 86.5% for a password of three hand signs.
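A rough OpenCV sketch of the segmentation-and-counting idea: threshold the depth image to isolate the hand, then count fingers from convexity defects of the hand contour. The depth band and defect-depth cutoff are assumptions, and this substitutes a contour-based count for the paper's own pixel-scanning method.

```python
# Count raised fingers from an 8-bit depth frame: threshold, take the
# largest contour, and count deep convexity defects (finger valleys).
import cv2
import numpy as np

def count_fingers(depth_8u, near=10, far=120, min_defect_depth=8000):
    # Keep pixels within an assumed hand-distance band.
    mask = cv2.inRange(depth_8u, near, far)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(hand, returnPoints=False)
    if hull is None or len(hull) < 3:
        return 0
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    # Each sufficiently deep defect is a valley between two fingers,
    # so n valleys correspond to roughly n + 1 extended fingers.
    valleys = sum(1 for i in range(defects.shape[0])
                  if defects[i, 0, 3] > min_defect_depth)
    return valleys + 1 if valleys else 0
```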
APA, Harvard, Vancouver, ISO, and other styles