
Journal articles on the topic 'Gesture classification and feature extraction'



Consult the top 50 journal articles for your research on the topic 'Gesture classification and feature extraction.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Wu, Yutong, Xinhui Hu, Ziwei Wang, Jian Wen, Jiangming Kan, and Wenbin Li. "Exploration of Feature Extraction Methods and Dimension for sEMG Signal Classification." Applied Sciences 9, no. 24 (2019): 5343. http://dx.doi.org/10.3390/app9245343.

Full text
Abstract:
Gesture control of an automatic pruning machine requires two components: gesture recognition and wireless remote control. To realize the former, this paper investigates gesture recognition technology based on surface electromyography (sEMG) signals and discusses the influence of different numbers and combinations of gestures on the optimal feature dimension. We calculated 630-dimensional feature vectors from a benchmark scientific database of sEMG signals and extracted features using principal component analysis (PCA); discriminant analysis (DA) was used to compare the processing effects of each feature extraction method. The experimental results show that the recognition rate can reach 100.0% for four gestures and 98.29% for six gestures, and that the optimal size is 516–523 dimensions. This study lays a foundation for follow-up work on gesture control of the pruning machine and provides a compelling new way to advance human-computer interaction in forestry machinery.
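The pipeline this abstract describes, high-dimensional sEMG feature vectors reduced by PCA and assessed with discriminant analysis, can be sketched with scikit-learn. Only the 630-dimensional input and the six-gesture setting come from the abstract; the synthetic data, sample counts, and variance threshold are illustrative assumptions.

```python
# Sketch of a PCA + discriminant-analysis pipeline (synthetic data).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(240, 630))      # 240 trials x 630 sEMG features
y = rng.integers(0, 6, size=240)     # labels for six gestures

# Reduce the 630-D vectors (keep 99% of variance), then classify with DA.
clf = make_pipeline(PCA(n_components=0.99), LinearDiscriminantAnalysis())
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.3f}")
```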
2

Trivedi, Kaustubh, Priyanka Gaikwad, Mahalaxmi Soma, Komal Bhore, and Prof Richa Agarwal. "Improve the Recognition Accuracy of Sign Language Gesture." International Journal for Research in Applied Science and Engineering Technology 10, no. 5 (2022): 4343–47. http://dx.doi.org/10.22214/ijraset.2022.43220.

Full text
Abstract:
Image classification is one of the classical issues of concern in image processing, and there are various techniques for solving it. Sign languages are natural languages used to communicate with deaf and mute people. There are many different sign languages in the world, but the main focus of the system is on Sign Language (SL), which is on the way to standardization; the system concentrates on hand gestures only. Hand gestures are a very important means of exchanging ideas, messages, and thoughts among deaf and mute people. The proposed system will recognize the numbers 0 to 9 and the alphabet of American Sign Language. It is divided into three parts, i.e., preprocessing, feature extraction, and classification. It initially identifies the gestures from American Sign Language; the system then processes each gesture to recognize it with the help of classification using a CNN. Additionally, we play back speech for the identified alphabet. Keywords: Hybrid Approach, American Sign Language, Gesture Recognition, Feature Extraction.
3

Gaikwad, Priyanka, Kaustubh Trivedi, Mahalaxmi Soma, Komal Bhore, and Prof Richa Agarwal. "A Survey on Sign Language Recognition with Efficient Hand Gesture Representation." International Journal for Research in Applied Science and Engineering Technology 10, no. 5 (2022): 21–25. http://dx.doi.org/10.22214/ijraset.2022.41963.

Full text
Abstract:
Image classification is one of the classical issues of concern in image processing, and there are various techniques for solving it. Sign languages are natural languages used to communicate with deaf and mute people. There are many different sign languages in the world, but the main focus of the system is on Sign Language (SL), which is on the way to standardization; the system concentrates on hand gestures only. Hand gestures are a very important means of exchanging ideas, messages, and thoughts among deaf and mute people. The proposed system will recognize the numbers 0 to 9 and the alphabet of American Sign Language. It is divided into three parts, i.e., pre-processing, feature extraction, and classification. It initially identifies the gestures from American Sign Language; the system then processes each gesture to recognize it with the assistance of classification using a CNN. Additionally, we play back speech for the identified alphabet. Keywords: Hybrid Approach, American Sign Language, Number Gesture Recognition, Feature Extraction.
4

Li, Wei, Yang Gao, Jun Chen, Si-Yi Niu, Jia-Hao Jiang, and Qi Li. "Human Gesture Recognition Based on Millimeter-Wave Radar Using Improved C3D Convolutional Neural Network." 電腦學刊 (Journal of Computers) 34, no. 3 (2023): 001–18. http://dx.doi.org/10.53106/199115992023063403001.

Full text
Abstract:
In this paper, we propose a time-sequential IC3D convolutional neural network approach for hand gesture recognition based on frequency-modulated continuous-wave (FMCW) radar. Firstly, the FMCW radar is used to collect the echoes of human hand gestures. A two-dimensional fast Fourier transform calculates the range and velocity information of hand gestures in each frame signal to construct the range-Doppler heat map dataset of hand gestures. Then, we design an IC3D network for feature extraction and classification of the dynamic gesture heat maps. Finally, the experimental results show that the gesture recognition system designed in this paper effectively solves the problems of difficult human gesture feature extraction and low utilization of time-series information, and the average recognition accuracy can reach more than 99.8%.
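The range-Doppler heat maps used as the network input come from a two-dimensional FFT over one radar frame: an FFT along fast time gives range, a second FFT along slow time gives Doppler. A minimal NumPy sketch with arbitrary frame dimensions:

```python
# Minimal range-Doppler map from one FMCW radar frame (synthetic data).
# Rows = chirps (slow time), columns = ADC samples per chirp (fast time).
import numpy as np

n_chirps, n_samples = 64, 128
frame = np.random.randn(n_chirps, n_samples) + 1j * np.random.randn(n_chirps, n_samples)

# Range FFT along fast time, then Doppler FFT along slow time.
range_fft = np.fft.fft(frame, axis=1)
doppler_fft = np.fft.fftshift(np.fft.fft(range_fft, axis=0), axes=0)

rdm = 20 * np.log10(np.abs(doppler_fft) + 1e-12)  # dB magnitude heat map
print(rdm.shape)  # (64, 128): Doppler bins x range bins
```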
5

Chang, Ying, Lan Wang, Lingjie Lin, and Ming Liu. "Deep Neural Network for Electromyography Signal Classification via Wearable Sensors." International Journal of Distributed Systems and Technologies 13, no. 3 (2022): 1–11. http://dx.doi.org/10.4018/ijdst.307988.

Full text
Abstract:
Human-computer interaction has been widely used in many fields, such as intelligent prosthetic control, sports medicine, rehabilitation medicine, and clinical medicine, and has gradually become a research focus of social scientists. In the field of intelligent prostheses, the sEMG signal has become the most widely used control signal source because it is easy to obtain. An offline sEMG-controlled intelligent prosthesis needs to recognize gestures in order to execute the associated actions. To solve this issue, this paper adopts a CNN plus BiLSTM to automatically extract sEMG features and recognize gestures. The CNN plus BiLSTM can overcome the drawbacks of manual feature extraction methods. The experimental results show that the proposed gesture recognition framework can extract overall gesture features, which improves the recognition rate.
6

Ansar, Hira, Ahmad Jalal, Munkhjargal Gochoo, and Kibum Kim. "Hand Gesture Recognition Based on Auto-Landmark Localization and Reweighted Genetic Algorithm for Healthcare Muscle Activities." Sustainability 13, no. 5 (2021): 2961. http://dx.doi.org/10.3390/su13052961.

Full text
Abstract:
Due to the constantly increasing demand for the automatic localization of landmarks in hand gesture recognition, there is a need for a more sustainable, intelligent, and reliable hand gesture recognition system. The main purpose of this study was to develop an accurate hand gesture recognition system capable of error-free auto-landmark localization of any gesture detectable in an RGB image. In this paper, we propose a system based on landmark extraction from RGB images regardless of the environment. The extraction of gestures is performed via two methods, namely, the fused and directional image methods; the fused method produced greater gesture recognition accuracy. In the proposed system, hand gesture recognition (HGR) is done via several different methods, namely, (1) HGR via point-based features, which consist of (i) distance features, (ii) angular features, and (iii) geometric features; and (2) HGR via full hand features, which are composed of (i) SONG mesh geometry and (ii) an active model. To optimize these features, we applied gray wolf optimization; after optimization, a reweighted genetic algorithm was used for classification and gesture recognition. Experimentation was performed on five challenging datasets: Sign Word, Dexter1, Dexter + Object, STB, and NYU. The experimental results proved that auto-landmark localization with the proposed feature extraction technique is an efficient approach towards developing a robust HGR system. The classification results of the reweighted genetic algorithm were compared with an Artificial Neural Network (ANN) and a decision tree. The developed system can play a significant role in healthcare muscle exercises.
7

Satybaldina, Dina, and Gulzia Kalymova. "Deep learning based static hand gesture recognition." Indonesian Journal of Electrical Engineering and Computer Science 21, no. 1 (2021): 398–405. http://dx.doi.org/10.11591/ijeecs.v21.i1.pp398-405.

Full text
Abstract:
Hand gesture recognition has become a popular topic in deep learning; it provides many application fields for bridging the human-computer barrier and has a positive impact on our daily life. The primary idea of our project is to acquire static gestures from a depth camera and to process the input images to train a deep convolutional neural network pre-trained on the ImageNet dataset. The proposed system consists of a gesture capture device (Intel® RealSense™ depth camera D435), pre-processing and image segmentation algorithms, a feature extraction algorithm, and object classification. For the pre-processing and image segmentation algorithms, computer vision methods from the OpenCV and Intel RealSense libraries are used. The subsystem for feature extraction and gesture classification is based on a modified VGG-16, implemented with the TensorFlow & Keras deep learning framework. Performance of the static gesture recognition system is evaluated using machine learning metrics. Experimental results show that the proposed model, trained on a database of 2000 images, provides high recognition accuracy at both the training and testing stages.
8

Bai, Duanyuan, Dong Zhang, Yongheng Zhang, Yingjie Shi, and Tingyi Wu. "Gesture Recognition of sEMG Signals Based on CNN-GRU Network." Journal of Physics: Conference Series 2637, no. 1 (2023): 012054. http://dx.doi.org/10.1088/1742-6596/2637/1/012054.

Full text
Abstract:
To improve the accuracy of surface electromyogram (sEMG) gesture recognition algorithms and avoid the manual extraction of many features, this paper proposes a deep neural network-based gesture recognition method. A neural network integrating a CNN and a GRU was designed. The 8-channel sEMG data collected by the MYO armband are input to the CNN for feature extraction, the resulting feature sequence is input to the GRU network for gesture classification, and finally the recognized gesture category is output. The experimental findings show that the proposed technique reaches 76.41% recognition accuracy on the MyoUP dataset, which demonstrates the practicality of the suggested approach.
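A compact sketch of the CNN-then-GRU arrangement described above, in Keras: 1D convolutions extract features from windows of 8-channel armband data, a GRU models the resulting sequence, and a softmax layer outputs the gesture class. The window length and layer sizes are assumptions, not the paper's configuration.

```python
# Illustrative CNN + GRU classifier for windows of 8-channel sEMG.
# Layer sizes and window length are assumptions, not the paper's values.
import tensorflow as tf
from tensorflow.keras import layers, models

n_steps, n_channels, n_classes = 200, 8, 10

model = models.Sequential([
    layers.Input(shape=(n_steps, n_channels)),
    layers.Conv1D(32, 5, activation="relu", padding="same"),
    layers.MaxPooling1D(2),
    layers.Conv1D(64, 5, activation="relu", padding="same"),
    layers.MaxPooling1D(2),
    layers.GRU(64),                      # temporal modeling of the CNN features
    layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```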
9

Wang, Zhiyuan, Chongyuan Bi, Songhui You, and Junjie Yao. "Hidden Markov Model-Based Video Recognition for Sports." Advances in Mathematical Physics 2021 (December 20, 2021): 1–12. http://dx.doi.org/10.1155/2021/5183088.

Full text
Abstract:
In this paper, we conduct an in-depth study and analysis of sports video recognition using an improved hidden Markov model. The feature module is a complex gesture recognition module based on hidden Markov model gesture features: it applies hidden Markov model features to gesture recognition and recognizes complex gestures composed of combinations of simple gestures. The combination of the two modules forms the overall technology of this paper, which can be applied to many scenarios, including special high-security scenarios that require real-time feedback and public indoor scenarios, achieving different kinds of prevention and services for different age groups. The experimental effect improves as the depth of the feature extraction network increases; however, a two-dimensional convolutional neural network loses temporal information when extracting features, so this paper uses a three-dimensional convolutional network to extract features from the video in both time and space. Multiple binary classifications of the extracted features are performed to achieve multilabel classification. A multistream residual neural network extracts features from video data of three modalities; the extracted feature vectors are fed into an attention mechanism network, which selects the information most critical for video recognition from a large amount of spatiotemporal information and learns the temporal dependencies between consecutive video frames; finally, the multistream network outputs are fused to obtain the final prediction category. By training and optimizing the model end to end, recognition accuracies of 92.7% and 64.4% are achieved on the two datasets, respectively.
10

Tang, Gaopeng, Tongning Wu, and Congsheng Li. "Dynamic Gesture Recognition Based on FMCW Millimeter Wave Radar: Review of Methodologies and Results." Sensors 23, no. 17 (2023): 7478. http://dx.doi.org/10.3390/s23177478.

Full text
Abstract:
As a convenient and natural mode of human-computer interaction, gesture recognition technology has broad research and application prospects in many fields, such as intelligent perception and virtual reality. This paper summarizes the literature on gesture recognition using frequency-modulated continuous-wave (FMCW) millimeter-wave radar published from January 2015 to June 2023. The widely used methods involved in data acquisition, data processing, and classification for gesture recognition are systematically investigated, and the paper compiles information on FMCW millimeter-wave radars, gestures, datasets, and the methods and results of feature extraction and classification. Based on these statistics, we provide analysis and recommendations for other researchers. Key issues in current gesture recognition studies, including feature fusion, classification algorithms, and generalization, are summarized and discussed. Finally, the paper discusses the shortcomings of current gesture recognition technologies in complex practical scenes, as well as their real-time performance, with a view to future development.
11

Nandyal, Suvarna, and Suvarna Laxmikant Kattimani. "Umpire Gesture Detection and Recognition using HOG and Non-Linear Support Vector Machine (NL-SVM) Classification of Deep Features in Cricket Videos." Journal of Physics: Conference Series 2070, no. 1 (2021): 012148. http://dx.doi.org/10.1088/1742-6596/2070/1/012148.

Full text
Abstract:
Gesture recognition pertains to recognizing meaningful expressions of motion by a human, involving the hands, arms, face, head, and/or body. It is of utmost importance in designing an intelligent and efficient human-computer interface. The applications of gesture recognition are manifold, ranging from sign language through medical rehabilitation, monitoring patients or elderly people, surveillance systems, sports gesture analysis, and human behaviour analysis to virtual reality. In recent years, there has been increased interest in video summarization and automatic sports highlight generation in the game of cricket. In cricket, the umpire has the authority to make important decisions about events on the field, and signals important events using unique hand signals and gestures. The primary intention of our work is to design and develop a new robust method for umpire action and non-action gesture identification and recognition, based on umpire segmentation and the proposed Histogram of Oriented Gradients (HOG) feature extraction-oriented non-linear support vector machine (NL-SVM) classification of deep features. First, for 80% of the umpire action and non-action images in a cricket match, about 193,000 frames, the HOG deep features are calculated and the system is trained on six gestures of umpire pose. The proposed HOG feature extraction-oriented NL-SVM classification method achieves a maximal accuracy of 97.95%, maximal sensitivity of 98.87%, maximal specificity of 98.89%, and maximal precision of 97.02%, which indicates its superiority.
12

Zhang, Yajun, Bo Yuan, Zhixiong Yang, Zijian Li, and Xu Liu. "Wi-NN: Human Gesture Recognition System Based on Weighted KNN." Applied Sciences 13, no. 6 (2023): 3743. http://dx.doi.org/10.3390/app13063743.

Full text
Abstract:
Gesture recognition, a basis of human-computer interaction (HCI), is a significant component in the development of smart homes, VR, and senior care management. Most gesture recognition methods still depend on sensors worn by the user or on video, which can support fine-grained gesture recognition. This paper implements a gesture recognition method that is independent of the environment and of the gesture drawing direction, and it achieves gesture classification from small sample data. Wi-NN, proposed in this study, does not require the user to wear an additional device; instead, channel state information (CSI) extracted from the Wi-Fi signal is used to capture the motion of the human body. After pre-processing to reduce the interference of environmental noise as much as possible, clear action information is extracted with a time-domain feature extraction method to obtain gesture feature data. The gathered data are fed to a weighted k-nearest neighbor (KNN) recognizer for the classification task. The experiments show that the accuracy for the same gesture across different users and for different gestures from the same user in the same environment was 93.1% and 89.6%, respectively. Experiments in different environments also achieved good recognition results, and comparison with other methods shows that the results in this paper are better. Evidently, good classification results were generated after the original data were processed and fed into the weighted KNN.
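The weighted KNN step is available directly in scikit-learn, where weights='distance' gives closer neighbours larger votes. A sketch on placeholder CSI-derived feature vectors (all sizes invented):

```python
# Weighted KNN over time-domain features extracted from CSI (synthetic here).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 40))       # 300 gesture samples x 40 features
y = rng.integers(0, 6, size=300)     # six gesture classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
knn = KNeighborsClassifier(n_neighbors=5, weights="distance")
knn.fit(X_tr, y_tr)
print(f"test accuracy: {knn.score(X_te, y_te):.3f}")
```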
13

Chopparapu, SaiTeja, and Joseph Beatrice Seventline. "An Efficient Multi-modal Facial Gesture-based Ensemble Classification and Reaction to Sound Framework for Large Video Sequences." Engineering, Technology & Applied Science Research 13, no. 4 (2023): 11263–70. http://dx.doi.org/10.48084/etasr.6087.

Full text
Abstract:
Machine learning-based feature extraction and classification models play a vital role in evaluating and detecting patterns in multivariate facial expressions. Most conventional feature extraction and multi-modal pattern detection models are independent of filters for multi-class classification problems. In traditional multi-modal facial feature extraction models, it is difficult to detect dependent, correlated feature sets and to use ensemble classification processes. This study used advanced feature filtering, feature extraction measures, and ensemble multi-class expression prediction to optimize the efficiency of feature classification. A filter-based multi-feature ranking voting framework was implemented on several multi-class classifiers. Experimental results were evaluated on different multi-modal facial features for an automatic emotion listener using a speech synthesis library. The evaluation showed that the proposed model achieved better feature classification, feature selection, prediction, and runtime than traditional approaches on heterogeneous facial databases.
14

Sevim, Yusuf. "A New Feature Extraction Method for EMG Signals." Traitement du Signal 39, no. 5 (2022): 1615–20. http://dx.doi.org/10.18280/ts.390518.

Full text
Abstract:
Surface electromyography (sEMG) is an important tool for gesture recognition. Features and classification methods have to be carefully selected to succeed in recognizing electromyographic signals. In most sEMG studies, time- and frequency-domain features have been extracted and classified with a single classifier, but neither one feature nor one classifier alone has achieved high classification accuracy. Using a combination of features and classifiers is a solution to this problem and increases accuracy. As a contribution to this field, a new time-domain EMG feature is suggested, and its classification performance is examined for feature and classifier combinations in this study. According to the results, the new feature has high classification accuracy, and when it is used with the AR and ST features, the average classification accuracy reaches 99.57% for a multiple-SVM classifier. Moreover, the new feature + AR + ST combination shows high classification accuracy for a single classifier, which eliminates the need for multiple classifiers.
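The paper's proposed feature is not specified in the abstract, but the classic time-domain EMG features this literature combines are standard and easy to compute; a NumPy sketch:

```python
# Common time-domain sEMG features; the paper's proposed feature is not
# reproduced here, these are the standard ones such features are combined with.
import numpy as np

def time_domain_features(x: np.ndarray, thresh: float = 0.01) -> np.ndarray:
    mav = np.mean(np.abs(x))                          # mean absolute value
    wl = np.sum(np.abs(np.diff(x)))                   # waveform length
    zc = np.sum((x[:-1] * x[1:] < 0) &                # zero crossings
                (np.abs(x[:-1] - x[1:]) > thresh))
    ssc = np.sum((np.diff(x)[:-1] * np.diff(x)[1:]) < 0)  # slope sign changes
    return np.array([mav, wl, zc, ssc], dtype=float)

window = np.random.randn(256)  # one 256-sample sEMG window
print(time_domain_features(window))
```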
15

Nurul Khotimah, Wijayanti, Nanik Suciati, and Tiara Anggita. "Indonesian sign language recognition using kinect and dynamic time warping." Indonesian Journal of Electrical Engineering and Computer Science 15, no. 1 (2019): 495. http://dx.doi.org/10.11591/ijeecs.v15.i1.pp495-503.

Full text
Abstract:
A Sign Language Recognition System (SLRS) recognizes sign language and translates it into text; such a system can be developed using a sensor-based technique. Previous studies have implemented various feature extraction and classification methods to recognize the sign languages of different countries. However, their systems were user dependent: accuracy was high when the trained and tested user were the same person, but it degraded when the tested user differed from the trained user. Therefore, in this study we propose a feature extraction method that is invariant to the user. We used distances between pairs of the user's skeleton joints instead of the joint positions themselves, because these distances are independent of the user's posture; forty-five features were extracted in the proposed method. We then classified the features with a method suited to the characteristics of sign language gestures (time-dependent sequence data): Dynamic Time Warping. For the experiment, we used twenty Indonesian sign language gestures from different semantic groups (greetings, questions, pronouns, places, family, and others) and with different characteristics (static and dynamic gestures). The system was then tested by a user different from the one who performed the training. The results were promising: the proposed method produced accuracy as high as 91%, which shows that it is user independent.
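Dynamic Time Warping aligns two sequences of different lengths by minimising the cumulative frame-to-frame distance. A self-contained sketch over 45-dimensional per-frame feature vectors, like the joint-distance features described above (values synthetic):

```python
# Plain dynamic-programming DTW between two gesture feature sequences.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])   # frame distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

seq_a = np.random.randn(30, 45)  # 30 frames x 45 joint-distance features
seq_b = np.random.randn(42, 45)  # a second take, different length
print(dtw_distance(seq_a, seq_b))
```

A nearest-neighbour classifier then assigns a test sequence the label of the training sequence with the smallest DTW distance.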
16

Li, Jianyong, Chengbei Li, Jihui Han, Yuefeng Shi, Guibin Bian, and Shuai Zhou. "Robust Hand Gesture Recognition Using HOG-9ULBP Features and SVM Model." Electronics 11, no. 7 (2022): 988. http://dx.doi.org/10.3390/electronics11070988.

Full text
Abstract:
Hand gesture recognition is an area of study that attempts to identify human gestures through mathematical algorithms, and can be used in several fields, such as communication between deaf-mute people, human–computer interaction, intelligent driving, and virtual reality. However, changes in scale and angle, as well as complex skin-like backgrounds, make gesture recognition quite challenging. In this paper, we propose a robust recognition approach for multi-scale as well as multi-angle hand gestures against complex backgrounds. First, hand gestures are segmented from complex backgrounds using the single Gaussian model and K-means algorithm. Then, the HOG feature and an improved 9ULBP feature are fused into the HOG-9ULBP feature, which is invariant in scale and rotation and enables accurate feature extraction. Finally, SVM is adopted to complete the hand gesture classification. Experimental results show that the proposed method achieves the highest accuracy of 99.01%, 97.50%, and 98.72% on the self-collected dataset, the NUS dataset, and the MU HandImages ASL dataset, respectively.
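The HOG-plus-SVM half of this pipeline (without the paper's 9ULBP extension) can be sketched with scikit-image and scikit-learn; the HOG parameters and hand crops below are illustrative:

```python
# HOG features + SVM classifier, the baseline half of the HOG-9ULBP pipeline.
# The 9ULBP descriptor from the paper is not reproduced here.
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def hog_vector(img: np.ndarray) -> np.ndarray:
    # 64x64 grayscale hand crop -> HOG descriptor
    return hog(img, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

rng = np.random.default_rng(2)
images = rng.random((100, 64, 64))        # stand-in for segmented hand crops
labels = rng.integers(0, 5, size=100)

X = np.stack([hog_vector(im) for im in images])
clf = SVC(kernel="rbf", C=10.0).fit(X, labels)
print(f"training accuracy: {clf.score(X, labels):.3f}")
```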
17

Li, Ling Hua, and Ji Fang Du. "Visual Based Hand Gesture Recognition Systems." Applied Mechanics and Materials 263-266 (December 2012): 2422–25. http://dx.doi.org/10.4028/www.scientific.net/amm.263-266.2422.

Full text
Abstract:
This paper describes the techniques used in visual based hand gesture recognition systems. The study is discussed from three aspects: the two categories, the five components, and the methods of feature extraction of visual based hand gesture recognition systems. The two categories are 3D model based systems and appearance model based systems. The five components are image sequences capture, pre-processing, hand regions detection, feature extraction and gesture classification. The methods of feature extraction are Hidden Markov Model (HMM), Artificial Neural Networks (ANN), and Support Vector Machines (SVM). The main ideas of each technique are described in detail.
18

Olaniyi, Abiodun AYENI. "A Robust Facial Expression Recognition System for Android Devices." J. of Advancement in Engineering and Technology 7, no. 3 (2020): 04. https://doi.org/10.5281/zenodo.3750534.

Full text
Abstract:
This research work presents an approach for detecting an unknown human face in input imagery and recognizing its facial expression. The objective of this research is to develop a highly intelligent Android application for facial expression recognition. A facial expression recognition system needs to solve the following problems: detection and location of faces in a cluttered scene, facial feature extraction, and facial expression classification. In this work, three basic expressions were considered: happy, sad, and angry. The Georgia Tech face detection dataset and some locally captured faces of Federal University of Technology, Akure students were used. Local Binary Patterns (LBP) and Random Forest (RF) were used for feature extraction and classification, respectively, for the three emotions. The experiments show that the proposed facial expression recognition framework yields 80% accuracy for the angry gesture, 60% for the sad gesture, and 73.33% for the happy gesture.
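The LBP-plus-Random-Forest combination named above maps directly onto scikit-image and scikit-learn; a sketch with synthetic stand-ins for the detected face crops:

```python
# LBP histogram features + random forest for expression classification.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import RandomForestClassifier

P, R = 8, 1  # uniform LBP with 8 neighbors, radius 1

def lbp_histogram(img: np.ndarray) -> np.ndarray:
    lbp = local_binary_pattern(img, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

rng = np.random.default_rng(3)
faces = rng.random((90, 48, 48))          # stand-in for detected face crops
labels = rng.integers(0, 3, size=90)      # happy / sad / angry

X = np.stack([lbp_histogram(f) for f in faces])
rf = RandomForestClassifier(n_estimators=200, random_state=3).fit(X, labels)
print(f"training accuracy: {rf.score(X, labels):.3f}")
```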
19

Lu, Ming-Xing, Guo-Zhen Du, and Zhan-Fang Li. "Multimode Gesture Recognition Algorithm Based on Convolutional Long Short-Term Memory Network." Computational Intelligence and Neuroscience 2022 (March 2, 2022): 1–10. http://dx.doi.org/10.1155/2022/4068414.

Full text
Abstract:
Gesture recognition with deep learning network models can automatically extract deep features of the data, whereas traditional machine learning algorithms rely on manual feature extraction and have poor model generalization ability. In this paper, a multimodal gesture recognition algorithm based on a convolutional long short-term memory network is proposed. First, a convolutional neural network (CNN) is employed to automatically extract the deeply hidden features of multimodal gesture data. Then, a time series model is constructed using a long short-term memory (LSTM) network to learn the long-term dependence of multimodal gesture features over the time series. On this basis, the classification of multimodal gestures is realized by a softmax classifier. Finally, the method is evaluated experimentally on two dynamic gesture datasets, VIVA and NVGesture. Experimental results indicate that the accuracy rates of the proposed method on the VIVA and NVGesture datasets are 92.55% and 87.38%, respectively, and that its recognition accuracy and convergence performance are better than those of the comparison algorithms.
20

Zheng, Lianqing, Jie Bai, Xichan Zhu, et al. "Dynamic Hand Gesture Recognition in In-Vehicle Environment Based on FMCW Radar and Transformer." Sensors 21, no. 19 (2021): 6368. http://dx.doi.org/10.3390/s21196368.

Full text
Abstract:
Hand gesture recognition technology plays an important role in human-computer interaction and in-vehicle entertainment. Under in-vehicle conditions, it is a great challenge to design gesture recognition systems due to variable driving conditions, complex backgrounds, and diversified gestures. In this paper, we propose a gesture recognition system based on frequency-modulated continuous-wave (FMCW) radar and transformer for an in-vehicle environment. Firstly, the original range-Doppler maps (RDMs), range-azimuth maps (RAMs), and range-elevation maps (REMs) of the time sequence of each gesture are obtained by radar signal processing. Then we preprocess the obtained data frames by region of interest (ROI) extraction, vibration removal algorithm, background removal algorithm, and standardization. We propose a transformer-based radar gesture recognition network named RGTNet. It fully extracts and fuses the spatial-temporal information of radar feature maps to complete the classification of various gestures. The experimental results show that our method can better complete the eight gesture classification tasks in the in-vehicle environment. The recognition accuracy is 97.56%.
21

Arozi, Moh, Wahyu Caesarendra, Mochammad Ariyanto, M. Munadi, Joga D. Setiawan, and Adam Glowacz. "Pattern Recognition of Single-Channel sEMG Signal Using PCA and ANN Method to Classify Nine Hand Movements." Symmetry 12, no. 4 (2020): 541. http://dx.doi.org/10.3390/sym12040541.

Full text
Abstract:
Many researchers prefer multi-channel surface electromyography (sEMG) pattern recognition for hand gesture recognition because it increases classification accuracy, but this method can lead to computational complexity. Hand gesture classification using single-channel sEMG acquisition is quite challenging, especially at low sampling frequencies. In this paper, a study of a pattern recognition method for the sEMG signals of nine finger movements is presented. Common single-channel surface electromyography was used to measure five different subjects with no neurological or muscular disorders performing nine hand movements. This research comprised several sequential processes (i.e., feature extraction, feature reduction, and feature classification). Sixteen time-domain features were employed for feature extraction. The features were then reduced using principal component analysis (PCA) into two- and three-dimensional feature spaces. The artificial neural network (ANN) classifier was tested on two different feature sets: (1) all principal components obtained from PCA (PC1–PC3) and (2) selected principal components (PC2 and PC3). The best two or three principal components were then used for classification with the ANN. The average accuracy over all subjects' signals was 86.7% in discriminating the nine finger movements.
22

Hellara, Hiba, Rim Barioul, Salwa Sahnoun, Ahmed Fakhfakh, and Olfa Kanoun. "Comparative Study of sEMG Feature Evaluation Methods Based on the Hand Gesture Classification Performance." Sensors 24, no. 11 (2024): 3638. http://dx.doi.org/10.3390/s24113638.

Full text
Abstract:
Effective feature extraction and selection are crucial for the accurate classification and prediction of hand gestures based on electromyographic signals. In this paper, we systematically compare six filter and wrapper feature evaluation methods and investigate their respective impacts on the accuracy of gesture recognition. The investigation is based on several benchmark datasets and one real hand gesture dataset, including 15 hand force exercises collected from 14 healthy subjects using eight commercial sEMG sensors. A total of 37 time- and frequency-domain features were extracted from each sEMG channel. The benchmark dataset revealed that the minimum Redundancy Maximum Relevance (mRMR) feature evaluation method had the poorest performance, resulting in a decrease in classification accuracy. However, the RFE method demonstrated the potential to enhance classification accuracy across most of the datasets. It selected a feature subset comprising 65 features, which led to an accuracy of 97.14%. The Mutual Information (MI) method selected 200 features to reach an accuracy of 97.38%. The Feature Importance (FI) method reached a higher accuracy of 97.62% but selected 140 features. Further investigations have shown that selecting 65 and 75 features with the RFE methods led to an identical accuracy of 97.14%. A thorough examination of the selected features revealed the potential for three additional features from three specific sensors to enhance the classification accuracy to 97.38%. These results highlight the significance of employing an appropriate feature selection method to significantly reduce the number of necessary features while maintaining classification accuracy. They also underscore the necessity for further analysis and refinement to achieve optimal solutions.
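Recursive feature elimination of the kind compared in this study is a short scikit-learn recipe. The 8 x 37 feature layout and the 65-feature target follow the abstract; the synthetic data and the linear-SVM ranking estimator are assumptions:

```python
# Recursive feature elimination (RFE) down to 65 of 296 sEMG features.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X = rng.normal(size=(210, 8 * 37))     # samples x (8 sensors x 37 features)
y = rng.integers(0, 15, size=210)      # 15 hand force exercises

selector = RFE(SVC(kernel="linear"), n_features_to_select=65, step=10)
selector.fit(X, y)
print("kept feature indices:", np.flatnonzero(selector.support_)[:10], "...")
```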
23

Theresa, W. Gracy, S. Santhana Prabha, D. Thilagavathy, and S. Pournima. "Analysis of the Efficacy of Real-Time Hand Gesture Detection with Hog and Haar-Like Features Using SVM Classification." International Journal on Recent and Innovation Trends in Computing and Communication 10, no. 2s (2022): 199–207. http://dx.doi.org/10.17762/ijritcc.v10i2s.5929.

Full text
Abstract:
The field of hand gesture recognition has recently reached new heights thanks to its widespread use in domains like remote sensing, robotic control, and smart home appliances, among others. Despite this, identifying gestures is difficult because of the intransigent features of the human hand, which make the codes used to decode them illegible and impossible to compare. Differentiating regional patterns is the job of pattern recognition. Pattern recognition is at the heart of sign language. People who are deaf or mute may understand the spoken language of the rest of the world by learning sign language. Any part of the body may be used to create signs in sign language. The suggested system employs a gesture recognition system trained on Indian sign language. The methods of preprocessing, hand segmentation, feature extraction, gesture identification, and classification of hand gestures are discussed in this work as they pertain to hand gesture sign language. A hybrid approach is used to extract the features, which combines the usage of Haar-like features with the application of Histogram of Oriented Gradients (HOG).The SVM classifier is then fed the characteristics it has extracted from the pictures in order to make an accurate classification. A false rejection error rate of 8% is achieved while the accuracy of hand gesture detection is improved by 93.5%.
24

Saykol, Ediz, Halit Talha Türe, Ahmet Mert Sirvanci, and Mert Turan. "Posture labeling based gesture classification for Turkish sign language using depth values." Kybernetes 45, no. 4 (2016): 604–21. http://dx.doi.org/10.1108/k-04-2015-0107.

Full text
Abstract:
Purpose – The purpose of this paper is to classify a set of Turkish sign language (TSL) gestures by posture-labeling-based finite-state automata (FSA) that utilize depth values in location-based features. Gesture classification/recognition is crucial not only in communicating with visually impaired people but also for educational purposes. The paper also demonstrates the practical use of the techniques for TSL. Design/methodology/approach – Gesture classification is based on the sequence of posture labels that are assigned by location-based features, which are invariant under rotation and scale. A grid-based signing space clustering scheme is proposed to guide the feature extraction step. Gestures are then recognized by FSA that process temporally ordered posture labels. Findings – Gesture classification accuracies and posture labeling performance are compared with k-nearest neighbor to show that the technique provides a reasonable framework for recognition of TSL gestures. A challenging set of gestures is tested; however, the technique is extendible, and extending the training set will increase performance. Practical implications – The outcomes can be utilized in a system for educational purposes, especially for visually impaired children. Besides, a communication system could be designed based on this framework. Originality/value – The posture labeling scheme, which is inspired by the keyframe labeling concept of video processing, is the original part of the proposed gesture classification framework. The search space is reduced to a single dimension instead of the 3D signing space, which also facilitates the design of recognition schemes. The grid-based clustering scheme and location-based features are also new, and depth values are received from Kinect. The paper is of interest to researchers in pattern recognition and computer vision.
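The core idea, treating a gesture as a finite-state automaton that consumes a temporally ordered sequence of posture labels, fits in a few lines of Python. The posture labels and the gesture definition below are invented for illustration:

```python
# Toy finite-state automaton over posture labels (labels are invented here).
# A gesture is accepted when its posture labels occur in order; repetitions
# of a held posture between transitions leave the state unchanged.
def fsa_accepts(gesture: list[str], observed: list[str]) -> bool:
    state = 0
    for label in observed:
        if state < len(gesture) and label == gesture[state]:
            state += 1                      # advance on the expected posture
    return state == len(gesture)

HELLO = ["hand_up", "palm_out", "wave_left", "wave_right"]
stream = ["hand_up", "hand_up", "palm_out", "wave_left", "wave_left", "wave_right"]
print(fsa_accepts(HELLO, stream))  # True
```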
25

Zhao, Chuanxin, Fei Xiong, Taochun Wang, Yang Wang, Fulong Chen, and Zhiqiang Xu. "Wear-free gesture recognition based on residual features of RFID signals." Intelligent Data Analysis 26, no. 4 (2022): 1051–70. http://dx.doi.org/10.3233/ida-215972.

Full text
Abstract:
Traditionally, RFID is used for identification and localization; in this paper, an extended application of RFID is designed to recognize gestures. Currently, gesture recognition mainly performs feature extraction through wearable sensors and video cameras, which have shortcomings such as being inconvenient to carry and suffering interference from obstacles. This paper proposes a gesture recognition system based on radio frequency identification (RFID) in which users do not need to wear devices. In the proposed model, the interference that a gesture imposes on the tag signal is used as the fingerprint feature of the action. To obtain satisfactory recognition, signal diversity is first increased through a tag array. Then, the RSSI and phase signals are normalized to eliminate offset and noise before training. Furthermore, a residual neural network (ResNet) is carefully built as the gesture classification model. The experimental results show that the recognition system achieves higher accuracy than existing methods, with an average gesture recognition accuracy of 95.5%.
26

Candrasari, Erizka Banuwati, Ledya Novamizanti, and Suci Aulia. "Hand gesture recognition using discrete wavelet transform and hidden Markov models." TELKOMNIKA Telecommunication, Computing, Electronics and Control 18, no. 5 (2020): 2265–75. https://doi.org/10.12928/TELKOMNIKA.v18i5.13725.

Full text
Abstract:
Gesture recognition based on computer vision is an important part of human-computer interaction, but it has several weak points: image brightness, recognition time, and accuracy. The goal of this research was therefore to create a hand gesture recognition system with good performance using the discrete wavelet transform and hidden Markov models. The first step was pre-processing, done by resizing the image to 128x128 pixels and then segmenting the skin color. The second step was feature extraction using the discrete wavelet transform, which yields a feature vector for each image. The last step was gesture classification using hidden Markov models, which calculate the highest probability of the feature matrix obtained from the feature extraction step. The system achieved 72% accuracy using 150 training and 100 test images covering five gestures. The novel finding of this experiment was the effect of acquisition and pre-processing: accuracy increased by 14 percentage points over the 58% obtained on Sebastien's dataset, an improvement driven by the brightness and contrast values.
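The two learning stages map onto PyWavelets and hmmlearn: a 2D DWT turns each segmented frame into a coefficient vector, and a Gaussian HMM scores sequences of such vectors (in a full classifier, one HMM per gesture is trained and the highest likelihood wins). Frame sizes and HMM settings below are illustrative:

```python
# 2D DWT features (PyWavelets) scored by a Gaussian HMM (hmmlearn).
import numpy as np
import pywt
from hmmlearn import hmm

def dwt_features(img: np.ndarray) -> np.ndarray:
    cA, (cH, cV, cD) = pywt.dwt2(img, "haar")   # one-level 2D Haar DWT
    return cA.ravel()                           # approximation coefficients

rng = np.random.default_rng(5)
frames = rng.random((20, 32, 32))               # one gesture clip, downsampled
seq = np.stack([dwt_features(f) for f in frames])

# Train one HMM per gesture class; classify by the highest log-likelihood.
model = hmm.GaussianHMM(n_components=4, covariance_type="diag", n_iter=50)
model.fit(seq)
print(f"log-likelihood: {model.score(seq):.1f}")
```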
27

Wang, Yu. "Research on the Construction of Human-Computer Interaction System Based on a Machine Learning Algorithm." Journal of Sensors 2022 (January 10, 2022): 1–11. http://dx.doi.org/10.1155/2022/3817226.

Full text
Abstract:
In this paper, we use machine learning algorithms to conduct in-depth research and analysis on the construction of human-computer interaction systems, and we propose a simple and effective method for extracting salient features based on contextual information. The method retains the dynamic and static information of gestures intact, which results in a richer and more robust feature representation. Secondly, this paper proposes a dynamic programming algorithm based on feature matching, which uses the consistency and accuracy of feature matching to measure the similarity of two frames and then finds the optimal matching distance between two gesture sequences. The algorithm ensures the continuity and accuracy of the gesture description and makes full use of the spatiotemporal location information of the features. The characteristics and limitations of common moving-target detection methods for gesture detection and of common machine learning tracking methods for gesture tracking are first analyzed; the kernelized correlation filter method is then improved by designing a confidence model and introducing a scale filter, and comparison experiments on a self-built gesture dataset verify the effectiveness of the improvement. During training and validation of the model on the corpus, ablation studies of the complementary feature extraction methods are performed, and the results are compared with three baseline methods. Because GMMs cannot model temporal structure, a support vector machine, which uses a kernel function to transform the original input into a high-dimensional feature space and is widely used in classification tasks, is adopted. In experiments, the speech emotion recognition method proposed in this paper outperforms the baseline methods, proving the effectiveness of complementary feature extraction and the superiority of the deep learning model. Speech is used as the system input, emotion recognition is performed on it, and the recognized emotion is successfully applied to the human-computer dialogue system in combination with online speech recognition, demonstrating the application value of speech emotion recognition in human-computer dialogue systems.
28

Paraskevopoulos, Georgios, Evaggelos Spyrou, Dimitrios Sgouropoulos, Theodoros Giannakopoulos, and Phivos Mylonas. "Real-Time Arm Gesture Recognition Using 3D Skeleton Joint Data." Algorithms 12, no. 5 (2019): 108. http://dx.doi.org/10.3390/a12050108.

Full text
Abstract:
In this paper we present an approach towards real-time arm gesture recognition using the Kinect sensor, investigating several machine learning techniques. We propose a novel approach to feature extraction based on measurements of the joints of the extracted skeletons: the proposed features capture angles and displacements of skeleton joints as the latter move through 3D space. We define a set of gestures and construct a real-life dataset. We train gesture classifiers under the assumption that they shall be applied to and evaluated on both known and unknown users. Experimental results with 11 classification approaches demonstrate the effectiveness and potential of our approach, both on the proposed dataset and in comparison with state-of-the-art research works.
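Angle and displacement features on skeleton joints reduce to a few vector operations per frame; a NumPy sketch with an invented three-joint arm chain:

```python
# Angle + displacement features from 3D skeleton joints (synthetic frames).
import numpy as np

def joint_angle(a: np.ndarray, b: np.ndarray, c: np.ndarray) -> float:
    # Angle at joint b formed by segments b->a and b->c, in radians.
    u, v = a - b, c - b
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)
    return float(np.arccos(np.clip(cosang, -1.0, 1.0)))

frames = np.random.randn(30, 3, 3)   # 30 frames x 3 joints x (x, y, z)
shoulder, elbow, wrist = frames[:, 0], frames[:, 1], frames[:, 2]

elbow_angles = np.array([joint_angle(s, e, w)
                         for s, e, w in zip(shoulder, elbow, wrist)])
wrist_displacement = np.linalg.norm(np.diff(wrist, axis=0), axis=1)
features = np.concatenate([elbow_angles[1:], wrist_displacement])
print(features.shape)                # per-clip feature vector for a classifier
```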
29

Dunai, Larisa, Isabel Seguí Verdú, Dinu Turcanu, and Viorel Bostan. "Prosthetic Hand Based on Human Hand Anatomy Controlled by Surface Electromyography and Artificial Neural Network." Technologies 13, no. 1 (2025): 21. https://doi.org/10.3390/technologies13010021.

Full text
Abstract:
Humans have a complex way of expressing their intuitive intentions through real gestures, which is why many gesture detection and recognition techniques have been studied and developed. There are many methods of reading human hand signals, such as electroencephalography, electrocorticography, and electromyography, as well as methods for gesture recognition. In this paper, we present a method for real-time hand gesture recognition based on surface electromyography using a multilayer neural network. For this purpose, the sEMG signals were amplified, filtered, and sampled; the data were then segmented, features were extracted, and each gesture was classified. To validate the method, 100 signals for three gestures, with 64 samples per signal, were recorded from 2 users with OYMotion sensors, and 100 signals for three gestures from 4 users with MyoWare sensors. These signals were used for feature extraction and classification with an artificial neural network. The model converges after 10 sessions, achieving 98% accuracy. As a result, an algorithm was developed to recognize two specific gestures (handling a bottle and pointing with the index finger) in real time with 95% accuracy.
30

Wang, Yong, Di Wang, Yunhai Fu, Dengke Yao, Liangbo Xie, and Mu Zhou. "Multi-Hand Gesture Recognition Using Automotive FMCW Radar Sensor." Remote Sensing 14, no. 10 (2022): 2374. http://dx.doi.org/10.3390/rs14102374.

Full text
Abstract:
With the development of human-computer interaction (HCI), hand gestures are playing increasingly important roles in our daily lives. With hand gesture recognition (HGR), users can play virtual games together, control smart equipment, etc. This paper therefore presents a multi-hand gesture recognition system using automotive frequency-modulated continuous-wave (FMCW) radar. Specifically, we first construct the range-Doppler map (RDM) and range-angle map (RAM) and then suppress spectral leakage as well as dynamic and static interference. Since the received echo signals of multiple hand gestures are mixed together, we propose a spatiotemporal path selection algorithm to separate them. A dual 3D convolutional neural network-based feature fusion network is proposed for feature extraction and classification. We developed an FMCW radar-based platform to evaluate the proposed multi-hand gesture recognition method; the experimental results show that it achieves an average recognition accuracy of 93.12% when eight gestures are performed with two hands simultaneously.
31

Ewe, Edmond Li Ren, Chin Poo Lee, Lee Chung Kwek, and Kian Ming Lim. "Hand Gesture Recognition via Lightweight VGG16 and Ensemble Classifier." Applied Sciences 12, no. 15 (2022): 7643. http://dx.doi.org/10.3390/app12157643.

Full text
Abstract:
Gesture recognition has been studied for a while within the fields of computer vision and pattern recognition. A gesture can be defined as a meaningful physical movement of the fingers, hands, arms, or other parts of the body with the purpose to convey information for the environment interaction. For instance, hand gesture recognition (HGR) can be used to recognize sign language which is the primary means of communication by the deaf and mute. Vision-based HGR is critical in its application; however, there are challenges that will need to be overcome such as variations in the background, illuminations, hand orientation and size and similarities among gestures. The traditional machine learning approach has been widely used in vision-based HGR in recent years but the complexity of its processing has been a major challenge—especially on the handcrafted feature extraction. The effectiveness of the handcrafted feature extraction technique was not proven across various datasets in comparison to deep learning techniques. Therefore, a hybrid network architecture dubbed as Lightweight VGG16 and Random Forest (Lightweight VGG16-RF) is proposed for vision-based hand gesture recognition. The proposed model adopts feature extraction techniques via the convolutional neural network (CNN) while using the machine learning method to perform classification. Experiments were carried out on publicly available datasets such as American Sign Language (ASL), ASL Digits and NUS Hand Posture dataset. The experimental results demonstrate that the proposed model, a combination of lightweight VGG16 and random forest, outperforms other methods.
32

Nagadeepa, Ch., N. Balaji, and V. Padmaja. "Analysis of Inertial Sensor Data Using Trajectory Recognition Algorithm." International Journal on Cybernetics & Informatics (IJCI) 5, no. 4 (2017): 101–7. https://doi.org/10.5121/ijci.2016.5412.

Full text
Abstract:
This paper describes a digital pen based on an IMU sensor for gesture and handwritten digit trajectory recognition applications, allowing human-PC interaction. Handwriting recognition is mainly used for applications in the field of security and authentication. Using the embedded pen, the user can make a hand gesture or write a digit or an alphabetic character. The embedded pen contains an inertial sensor, a microcontroller, and a Zigbee wireless transmitter module for creating handwriting and trajectories using gestures. The proposed trajectory recognition algorithm comprises sensing-signal acquisition, pre-processing, feature generation, feature extraction, and classification. The user's hand motion is measured using the sensor, and the sensed information is transmitted wirelessly to a PC for recognition. The process first extracts time-domain and frequency-domain features from the pre-processed signal and then performs linear discriminant analysis to represent the features with reduced dimension. The dimensionally reduced features are processed with two classifiers: the Support Vector Machine (SVM) and k-Nearest Neighbour (kNN). With the SVM classifier, this algorithm achieves a recognition rate of 98.5%; with the kNN classifier, the recognition rate is 95.5%.
33

IZADPANAHI, SHIMA, and ÖNSEN TOYGAR. "HUMAN AGE CLASSIFICATION WITH OPTIMAL GEOMETRIC RATIOS AND WRINKLE ANALYSIS." International Journal of Pattern Recognition and Artificial Intelligence 28, no. 02 (2014): 1456003. http://dx.doi.org/10.1142/s0218001414560035.

Full text
Abstract:
This paper presents a geometric feature-based model for age group classification of facial images. The feature extraction is performed considering the significant effects that age has on facial anthropometry. A Particle Swarm Optimization (PSO) technique is used to find an optimized subset of geometric features, and the relevance and importance of the features' age differentiation capability are evaluated using a support vector classifier. The facial images are categorized into seven major age groups. The effectiveness and accuracy of the proposed feature extraction are demonstrated with experiments conducted on two publicly available databases, namely the Face and Gesture Recognition Research Network (FGNET) Aging Database and the Iranian Face Database (IFDB). The results show a classification success rate of 92.62%, a significant improvement over state-of-the-art models.
34

Kaushik, Kartik. "Hand Gestures for Personal Computer Control." International Journal for Research in Applied Science and Engineering Technology 12, no. 4 (2024): 5183–92. http://dx.doi.org/10.22214/ijraset.2024.60894.

Full text
Abstract:
Hand gestures have emerged as a promising modality for enhancing personal computer (PC) control, offering intuitive and natural interaction. This research paper explores the design, implementation, and evaluation of a hand gesture recognition system for PC interaction. We review existing methods of PC control and discuss the limitations of traditional input modalities. The paper outlines the process of hand gesture recognition, including data acquisition, preprocessing, feature extraction, and classification. We describe the design considerations and implementation details of the gesture recognition system, highlighting the hardware and software requirements. Empirical evaluations are presented to assess the system's performance in terms of recognition accuracy, response time, and user satisfaction. Furthermore, we explore potential applications of hand gesture control across various domains and discuss challenges and future directions in the field. This research contributes to advancing the state of the art in HCI by demonstrating the feasibility and effectiveness of hand gestures for PC interaction.
35

Lou, Xinyue. "Vision-based Hand Gesture Recognition Technology." Applied and Computational Engineering 141, no. 1 (2025): 54–59. https://doi.org/10.54254/2755-2721/2025.21696.

Full text
Abstract:
Human-computer interaction has wide application prospects in many fields, such as medicine, entertainment, industry, and education. Gesture recognition is one of the most important technologies for gesture interaction between humans and robots, and visual gesture recognition increases the user's comfort and freedom compared with data-glove recognition. This paper summarizes the general process of visual gesture recognition based on the literature, comprising three steps: pre-processing, feature extraction, and gesture classification. It also defines static and dynamic gestures and compares their differences and recognition emphases. On this basis, the paper summarizes commonly used visual gesture recognition methods: for static gesture recognition, methods such as template matching and AdaBoost-based methods; for dynamic gesture recognition, methods such as the hidden Markov model and dynamic time warping. Finally, some applications of visual gesture recognition are introduced, for example, a non-contact system for operating rooms and smart home control.
APA, Harvard, Vancouver, ISO, and other styles
37

Ismail, Mohammad H., Shefa A. Dawwd, and Fakhradeen H. Ali. "Static hand gesture recognition of Arabic sign language by using deep CNNs." Indonesian Journal of Electrical Engineering and Computer Science 24, no. 1 (2021): 178–88. https://doi.org/10.11591/ijeecs.v24.i1.pp178-188.

Full text
Abstract:
An Arabic sign language recognition system using two concatenated deep convolutional neural network models, DenseNet121 and VGG16, is presented. The pre-trained models are fed with images, and the system then automatically recognizes the Arabic sign language. To evaluate the performance of the two concatenated models, red-green-blue (RGB) images of various static signs were collected in a dataset. The dataset comprises 220,000 images covering 44 categories: 32 letters, 11 numbers (0–10), and 1 for none. For each static sign, 5,000 images were collected from different volunteers. The pre-trained models were modified slightly and trained on the prepared Arabic sign language data. In addition, two of the pre-trained models were run in parallel as deep feature extractors, and their outputs were combined for the classification stage. The results compare the performance of single models and the multi-model, showing that the multi-model is generally better at feature extraction and classification than the single models. Judged by the total number of incorrectly recognized sign images across the training, validation, and testing datasets, the best convolutional neural network (CNN) configuration for Arabic sign language feature extraction and classification is DenseNet121 among the single models and the DenseNet121 & VGG16 combination among the multi-models.
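A sketch of the two-backbone idea follows (input size and training hyperparameters are assumptions, not the paper's configuration): DenseNet121 and VGG16 run in parallel as feature extractors, their pooled embeddings are concatenated, and a softmax head classifies the 44 sign categories.

```python
# Concatenated DenseNet121 + VGG16 feature extractors with a shared input.
import tensorflow as tf

inputs = tf.keras.Input(shape=(224, 224, 3))   # RGB sign image (size assumed)
densenet = tf.keras.applications.DenseNet121(include_top=False, weights="imagenet", pooling="avg")
vgg = tf.keras.applications.VGG16(include_top=False, weights="imagenet", pooling="avg")

# Each backbone uses its own input-scaling convention, applied via Lambda layers.
f1 = densenet(tf.keras.layers.Lambda(tf.keras.applications.densenet.preprocess_input)(inputs))
f2 = vgg(tf.keras.layers.Lambda(tf.keras.applications.vgg16.preprocess_input)(inputs))

features = tf.keras.layers.Concatenate()([f1, f2])                    # parallel deep features
outputs = tf.keras.layers.Dense(44, activation="softmax")(features)   # 44 sign categories

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```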
APA, Harvard, Vancouver, ISO, and other styles
39

Aljumaily, Mustafa S., and Ghaida A. Al-Suhail. "Towards ubiquitous human gestures recognition using wireless networks." International Journal of Pervasive Computing and Communications 13, no. 4 (2017): 408–18. http://dx.doi.org/10.1108/ijpcc-d-17-00005.

Full text
Abstract:
Purpose Recently, many researchers have studied the possibility of using the wireless signals of Wi-Fi networks for human-gesture recognition. They focus on classifying gestures regardless of who is performing them, and only a few previous works make use of the wireless channel state information to identify humans. This paper aims to recognize different humans and their multiple gestures in an indoor environment. Design/methodology/approach The authors designed a gesture recognition system that consists of channel state information data collection, preprocessing, feature extraction and classification to identify both the human and the gesture in the vicinity of a Wi-Fi-enabled device, using a modified Wi-Fi device driver to collect the channel state information and process it in real time. Findings The proposed system proved to work well for different humans and different gestures, with an accuracy that ranges from 87 per cent for multiple humans and multiple gestures to 98 per cent for individual humans’ gesture recognition. Originality/value This paper used new preprocessing and filtering techniques, proposed new features to be extracted from the data and a new classification method that had not been used in this field before.
APA, Harvard, Vancouver, ISO, and other styles
40

Sharma, Naina, Vaishali Nirgude, Tanya Shah, Chirag Bhagat, Amithesh Gupta, and Yash Gupta. "GESTURE RECOGNITION FOR TOUCH-FREE PC CONTROL USING A NEURAL NETWORK APPROACH." ICTACT Journal on Data Science and Machine Learning 5, no. 4 (2024): 690–97. https://doi.org/10.21917/ijdsml.2024.0142.

Full text
Abstract:
In the pursuit of advancing the field of touch-free human-computer interaction, this paper focuses on developing a gesture-enabled PC control system that aims to enhance user engagement and provide intuitive, flexible control methods across various applications, particularly those benefiting individuals with mobility impairments. The system also has expanding potential use in virtual and augmented reality environments. This study describes a unique method for temporal gesture identification that employs gesture kinematics for feature extraction and classification. Real-time hand tracking and key-point identification were performed using MediaPipe. The Euclidean distances between the key points were normalised and input into a multilayer perceptron model, which classified the gestures and mapped them to specific commands for controlling PC functions. This approach performed well over a large dataset, improving accuracy and usability. The gesture recognition system achieved an average accuracy of 97%, with precision, recall, and F1 score of 0.924, 0.924, and 0.926, respectively, across the five gestures. The system also offers customization, allowing users to create and map their own gestures to specific commands in addition to using predefined ones. This level of personalization and flexibility is a significant advancement over existing systems, which typically offer fixed gesture-command mappings.
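A condensed sketch of the described pipeline follows (hidden-layer sizes and the pairwise-distance feature set are assumptions): MediaPipe extracts 21 hand key points per frame, their pairwise Euclidean distances are normalised, and an MLP maps them to gesture commands.

```python
# MediaPipe hand landmarks -> normalised pairwise distances -> MLP classifier.
import itertools
import numpy as np
import mediapipe as mp
from sklearn.neural_network import MLPClassifier

hands = mp.solutions.hands.Hands(static_image_mode=False, max_num_hands=1)

def distance_features(rgb_frame):
    # rgb_frame: an RGB uint8 image; returns 210 normalised pairwise distances
    result = hands.process(rgb_frame)
    if not result.multi_hand_landmarks:
        return None
    pts = np.array([[lm.x, lm.y] for lm in result.multi_hand_landmarks[0].landmark])
    d = np.array([np.linalg.norm(pts[i] - pts[j])
                  for i, j in itertools.combinations(range(len(pts)), 2)])
    return d / d.max()                                  # scale normalisation

# Offline training on labelled frames (X: n_samples x 210, y: gesture commands):
mlp = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500)
# mlp.fit(X, y); command = mlp.predict([distance_features(frame)])
```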
APA, Harvard, Vancouver, ISO, and other styles
41

Magrofuoco, Nathan, Paolo Roselli, and Jean Vanderdonckt. "Two-dimensional Stroke Gesture Recognition." ACM Computing Surveys 54, no. 7 (2021): 1–36. http://dx.doi.org/10.1145/3465400.

Full text
Abstract:
The expansion of touch-sensitive technologies, ranging from smartwatches to wall screens, triggered a wider use of gesture-based user interfaces and encouraged researchers to invent recognizers that are fast and accurate for end-users while being simple enough for practitioners. Since the pioneering work on two-dimensional (2D) stroke gesture recognition based on feature extraction and classification, numerous approaches and techniques have been introduced to classify uni- and multi-stroke gestures, satisfying various properties of articulation-, rotation-, scale-, and translation-invariance. As the domain abounds in different recognizers, it becomes difficult for the practitioner to choose the right recognizer depending on the application, and for the researcher to understand the state of the art. To address these needs, a targeted literature review identified 16 significant 2D stroke gesture recognizers that were submitted to a descriptive analysis discussing their algorithm, performance, and properties, and a comparative analysis discussing their similarities and differences. Finally, some opportunities for expanding 2D stroke gesture recognition are drawn from these analyses.
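The template-matching core shared by many of the surveyed recognizers can be sketched as follows; the resampling length N and the specific normalisation steps are assumptions in the spirit of this recognizer family, not the implementation of any single one.

```python
# Minimal stroke-gesture template matcher: resample, normalise, compare.
import numpy as np

N = 64  # common resampling length in this literature

def resample(points, n=N):
    points = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])          # arc length at each point
    ts = np.linspace(0.0, t[-1], n)
    return np.column_stack([np.interp(ts, t, points[:, 0]),
                            np.interp(ts, t, points[:, 1])])

def normalise(points):
    p = resample(points)
    p -= p.mean(axis=0)                                  # translation invariance
    scale = np.ptp(p, axis=0).max()
    return p / scale if scale > 0 else p                 # scale invariance

def path_distance(a, b):
    return np.linalg.norm(a - b, axis=1).mean()          # mean point-wise distance

def recognise(stroke, templates):
    # templates: dict mapping label -> normalised template points
    q = normalise(stroke)
    return min(((path_distance(q, t), label) for label, t in templates.items()))[1]
```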
APA, Harvard, Vancouver, ISO, and other styles
42

Priya, Sathiya, and Sumathi B. "Feature Extraction Based on Cellular Particle Swarm Optimization Algorithm for American Sign Language Images." International Journal of Innovative Research in Information Security 09, no. 03 (2023): 204–11. http://dx.doi.org/10.26562/ijiris.2023.v0903.27.

Full text
Abstract:
Image recognition is becoming a critical task, and many problem-solving systems and approaches for image detection, analysis, and classification have been introduced by modern researchers. These techniques should be user-friendly and easy to interpret for both hearing and hearing-impaired people. Sign language acts as a medium to transfer and exchange messages, information, knowledge, and ideas from deaf people to the general population; it is the only mode of communication between hearing-impaired people and others, and each individual gesture in it is called a sign. This work presents an American Sign Language recognition system intended to help educate special children. American Sign Language (ASL) images of special people, collected under some constraints, form the dataset. The research work proceeds in three phases: Phase 1 is pre-processing, Phase 2 is segmentation, and Phase 3 is feature extraction for static hand gestures with the maximum possible accuracy rate. The proposed system's novel structural feature extraction uses eight structural features: Bounding Box, Area, Perimeter, Centroid, Roundness, EquiDiameter, Number of Boundaries, and Angle. F-measure, recall, and precision, together with accuracy, sensitivity, and specificity, are measured on the feature-extracted images. The accuracy of the proposed approach is found to be significantly higher.
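The eight structural features named above can be computed from a binarised hand image with OpenCV, as in the sketch below (the Otsu threshold and the choice of the largest contour as the hand region are assumptions).

```python
# Structural shape features from a binarised hand-gesture image.
import cv2
import numpy as np

def structural_features(gray):
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    c = max(contours, key=cv2.contourArea)               # dominant hand region
    area = cv2.contourArea(c)
    perimeter = cv2.arcLength(c, True)
    m = cv2.moments(c)
    return {
        "bounding_box": cv2.boundingRect(c),             # (x, y, width, height)
        "area": area,
        "perimeter": perimeter,
        "centroid": (m["m10"] / m["m00"], m["m01"] / m["m00"]),
        "roundness": 4 * np.pi * area / perimeter ** 2,  # 1.0 for a perfect circle
        "equi_diameter": np.sqrt(4 * area / np.pi),      # diameter of equal-area circle
        "num_boundaries": len(contours),
        "angle": cv2.minAreaRect(c)[2],                  # orientation of the region
    }
```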
APA, Harvard, Vancouver, ISO, and other styles
43

Akhtar, Zain Ul Abiden, and Hongyu Wang. "WiFi-Based Gesture Recognition for Vehicular Infotainment System—An Integrated Approach." Applied Sciences 9, no. 24 (2019): 5268. http://dx.doi.org/10.3390/app9245268.

Full text
Abstract:
In the realm of intelligent vehicles, gestures can be leveraged to promote automotive interfaces that control in-vehicle functions without diverting the driver’s visual attention from the road. Driver gesture recognition has gained attention in advanced vehicular technology because of its substantial safety benefits. This research work demonstrates a novel WiFi-based, device-free approach to driver gesture recognition for an automotive interface controlling secondary systems in a vehicle. Our proposed wireless model can recognize human gestures very accurately for in-vehicle infotainment applications, leveraging Channel State Information (CSI). This computationally efficient framework is based on the properties of K Nearest Neighbors (KNN), induced in sparse representation coefficients, for significant improvement in gesture classification. In this approach, we explore the mean of nearest neighbors to address the computational complexity of Sparse Representation based Classification (SRC). The presented scheme leads to an efficient integrated classification model with reduced execution time. KNN and SRC are complementary candidates for integration in the sense that KNN is simple yet optimized, whereas SRC is computationally complex but efficient. More specifically, we exploit the mean-based nearest neighbor rule to further improve the efficiency of SRC. The ultimate goal of this framework is to propose a better feature extraction and classification model than the traditional algorithms already used for WiFi-based device-free gesture recognition. Our proposed method improves gesture recognition significantly across a diverse range of applications, with an average accuracy of 91.4%.
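The mean-based nearest neighbor rule mentioned above can be sketched as a local-mean variant of KNN (k is an assumption): for each gesture class, average the k training CSI feature vectors closest to the query, then assign the class whose local mean is nearest.

```python
# Local mean-based nearest neighbour classification (illustrative sketch).
import numpy as np

def local_mean_classify(x, X_train, y_train, k=5):
    best_label, best_dist = None, np.inf
    for label in np.unique(y_train):
        Xc = X_train[y_train == label]                   # samples of one gesture class
        d = np.linalg.norm(Xc - x, axis=1)
        mean_k = Xc[np.argsort(d)[:k]].mean(axis=0)      # local class mean
        dist = np.linalg.norm(x - mean_k)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label
```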
APA, Harvard, Vancouver, ISO, and other styles
44

Freitas, Melissa La Banca, José Jair Alves Mendes, Thiago Simões Dias, Hugo Valadares Siqueira, and Sergio Luiz Stevan. "Surgical Instrument Signaling Gesture Recognition Using Surface Electromyography Signals." Sensors 23, no. 13 (2023): 6233. http://dx.doi.org/10.3390/s23136233.

Full text
Abstract:
Surgical Instrument Signaling (SIS) comprises specific hand gestures used for communication between the surgeon and the surgical instrumentator. With SIS, the surgeon executes signals representing particular instruments in order to avoid errors and communication failures. This work presented the feasibility of an SIS gesture recognition system using surface electromyographic (sEMG) signals acquired from the Myo armband, aiming to build a processing routine that aids telesurgery or robotic surgery applications. Unlike other works that use up to 10 gestures to represent and classify SIS gestures, a database with 14 selected gestures for SIS was recorded from 10 volunteers, with 30 repetitions per user. Segmentation, feature extraction, feature selection, and classification were performed, and several parameters were evaluated. These steps were performed with a wearable application in mind, for which the complexity of pattern recognition algorithms is crucial. The system was tested offline and evaluated both on the full database and for each volunteer individually. An automatic segmentation algorithm was applied to identify muscle activation; 13 feature sets and 6 classifiers were tested, and 2 ensemble techniques aided in separating the sEMG signals into the 14 SIS gestures. An accuracy of 76% was obtained with the Support Vector Machine classifier on the full database and 88% when analyzing volunteers individually. The system was demonstrated to be suitable for SIS gesture recognition using sEMG signals in wearable applications.
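A sketch of a typical sEMG time-domain feature set follows; the paper evaluated 13 feature sets, so this particular combination (MAV, RMS, waveform length, zero crossings) is an assumption illustrating the feature-extraction step ahead of the SVM.

```python
# Time-domain sEMG features per Myo channel, fed to an SVM classifier.
import numpy as np
from sklearn.svm import SVC

def td_features(window):
    # window: array of shape (samples, 8) for the Myo armband's 8 channels
    mav = np.mean(np.abs(window), axis=0)                      # mean absolute value
    rms = np.sqrt(np.mean(window ** 2, axis=0))                # root mean square
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)       # waveform length
    signs = np.signbit(window).astype(np.int8)
    zc = np.sum(np.diff(signs, axis=0) != 0, axis=0)           # zero crossings
    return np.concatenate([mav, rms, wl, zc])                  # 32-d feature vector

# X = np.array([td_features(w) for w in segmented_windows]); y = gesture_labels
clf = SVC(kernel="rbf")
# clf.fit(X, y)
```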
APA, Harvard, Vancouver, ISO, and other styles
45

Alabdullah, Bayan Ibrahimm, Hira Ansar, Naif Al Mudawi, et al. "Smart Home Automation-Based Hand Gesture Recognition Using Feature Fusion and Recurrent Neural Network." Sensors 23, no. 17 (2023): 7523. http://dx.doi.org/10.3390/s23177523.

Full text
Abstract:
Gestures have been used for nonverbal communication for a long time, but human–computer interaction (HCI) via gestures is becoming more common in the modern era. To obtain a greater recognition rate, the traditional interface comprises various devices, such as gloves, physical controllers, and markers. This study provides a new markerless technique for capturing gestures without the need for any barriers or pricey hardware. In this paper, dynamic gestures are first converted into frames. The noise is removed, and intensity is adjusted for feature extraction. The hand gesture is first detected in the images, and the skeleton is computed through mathematical computations. From the skeleton, the features are extracted; these features include joint color cloud, neural gas, and directional active model. The features are then optimized, and a selective feature set is passed through a recurrent neural network (RNN) classifier to obtain classification results with higher accuracy. The proposed model is experimentally assessed and trained over three datasets: HaGRI, Egogesture, and Jester, achieving an accuracy of 92.57% on HaGRI, 91.86% on Egogesture, and 91.57% on the Jester dataset. Also, to check the model's reliability, the proposed method was tested on the WLASL dataset, attaining 90.43% accuracy. This paper also includes a comparison with other state-of-the-art recognition methods. Our model achieves a higher accuracy rate with a markerless approach, saving money and time while classifying gestures for better interaction.
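The final classification stage could look like the following sketch; the sequence length, feature dimension, and class count are all assumptions, since the paper does not publish its exact RNN configuration.

```python
# A recurrent classifier over per-frame skeleton feature sequences (assumed shapes).
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(30, 128)),                   # 30 frames x 128 selected features
    tf.keras.layers.SimpleRNN(64),                     # recurrent layer over the sequence
    tf.keras.layers.Dense(20, activation="softmax"),   # assumed number of gesture classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```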
APA, Harvard, Vancouver, ISO, and other styles
46

Bhuiyan, Rasel Ahmed, Abdul Matin, Md Shafiur Raihan Shafi, and Amit Kumar Kundu. "A Bag-of-Words Based Feature Extraction Scheme for American Sign Language Number Recognition from Hand Gesture Images." International Journal of Machine Learning and Computing 11, no. 1 (2021): 85–91. http://dx.doi.org/10.18178/ijmlc.2021.11.1.1018.

Full text
Abstract:
Human Computer Interaction (HCI) focuses on the interaction between humans and machines. An extensive list of applications exists for hand gesture recognition techniques, which are major candidates for HCI. The list covers various fields, one of which is sign language recognition. In this field, however, both high accuracy and robustness are needed, and both present a major challenge. In addition, feature extraction from hand gesture images is a tough task because of the many parameters associated with them. This paper proposes an approach based on a bag-of-words (BoW) model for automatic recognition of American Sign Language (ASL) numbers. In this method, the first step is to obtain the set of representative vocabularies by applying a K-means clustering algorithm to a few randomly chosen images. Next, the vocabularies are used as bin centers for BoW histogram construction. The proposed histograms are shown to provide distinguishable features for classification of ASL numbers. For classification, the K-nearest neighbors (kNN) classifier is employed, using the BoW histogram bin frequencies as features. For validation, extensive experiments are conducted on two large ASL number-recognition datasets; the proposed method shows superior performance in classifying the numbers, achieving an F1 score of 99.92% on the Kaggle ASL numbers dataset.
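The BoW pipeline can be sketched as below; ORB stands in for the unspecified local descriptor, and the vocabulary size K is an assumption.

```python
# Bag-of-words: K-means vocabulary -> per-image histogram -> kNN classifier.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

orb = cv2.ORB_create()
K = 100                                                # vocabulary size (assumed)

def descriptors(gray):
    _, des = orb.detectAndCompute(gray, None)
    return des if des is not None else np.empty((0, 32), np.uint8)

def bow_histogram(gray, kmeans):
    des = descriptors(gray).astype(np.float32)
    if len(des) == 0:
        return np.zeros(K)
    words = kmeans.predict(des)                        # assign descriptors to bin centers
    return np.bincount(words, minlength=K) / len(words)

# Vocabulary from descriptors of a few randomly chosen training images:
# kmeans = KMeans(n_clusters=K).fit(np.vstack([descriptors(im) for im in sample_images]))
# X = np.array([bow_histogram(im, kmeans) for im in train_images])
# knn = KNeighborsClassifier(n_neighbors=5).fit(X, train_labels)
```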
APA, Harvard, Vancouver, ISO, and other styles
47

Zhou, Qizhen, Jianchun Xing, Wei Chen, Xuewei Zhang, and Qiliang Yang. "From Signal to Image: Enabling Fine-Grained Gesture Recognition with Commercial Wi-Fi Devices." Sensors 18, no. 9 (2018): 3142. http://dx.doi.org/10.3390/s18093142.

Full text
Abstract:
Gesture recognition acts as a key enabler for user-friendly human-computer interfaces (HCI). To bridge the human-computer barrier, numerous efforts have been devoted to designing accurate fine-grained gesture recognition systems. Recent advances in wireless sensing hold promise for a ubiquitous, non-invasive and low-cost system built on existing Wi-Fi infrastructure. In this paper, we propose DeepNum, which enables fine-grained finger gesture recognition with only a pair of commercial Wi-Fi devices. The key insight of DeepNum is to incorporate the quintessence of deep learning-based image processing so as to better depict the influence induced by subtle finger movements. In particular, we make multiple efforts to transform sensitive Channel State Information (CSI) into depth radio images, including antenna selection, gesture segmentation and image construction, followed by noisy image purification using high-dimensional relations. To fulfill the restrictive size requirements of the deep learning model, we propose a novel region-selection method to constrain the image size and select qualified regions with dominant color and texture features. Finally, a 7-layer Convolutional Neural Network (CNN) and a SoftMax function are adopted to achieve automatic feature extraction and accurate gesture classification. Experimental results demonstrate the excellent performance of DeepNum, which recognizes 10 finger gestures with an overall accuracy of 98% in three typical indoor scenarios.
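A compact CNN of the kind described might look like the following sketch; the 64x64 radio-image shape and the layer sizes are assumptions, not DeepNum's published configuration.

```python
# A small CNN classifying purified CSI radio images into 10 finger gestures.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 64, 3)),                  # purified CSI radio image (assumed)
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(128, 3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),    # 10 finger gestures
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```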
APA, Harvard, Vancouver, ISO, and other styles
48

Remya, P. K., and K. K. Rajkumar. "DHGNet: Devatha Hastha Gesture Network with Advanced Graph Enhancement for Gesture Identification and Recognition." Journal of Information Systems Engineering and Management 10, no. 28s (2025): 520–32. https://doi.org/10.52783/jisem.v10i28s.4353.

Full text
Abstract:
This study aims to develop an AI-powered system to classify and interpret Devatha Hasthas in Indian classical dance. By combining cultural preservation with modern technology, the system enhances accessibility and supports effective learning and documentation of intricate hand gestures, contributing to the promotion and understanding of traditional art forms. The study utilized a dataset of 16 Devatha Hasthas, MediaPipe hand tracking for segmentation, and feature extraction combining Hu moments and VGG19. Dimensionality reduction was performed using an ExtraTree classifier, followed by gesture classification through a Dense Neural Network. A Neo4j graph database was used for structured visualization and interaction. The system achieved an impressive classification accuracy of 96%, highlighting its effectiveness in accurately identifying Devatha Hasthas. Additionally, the integration of Neo4j graph database provided insightful interpretations of gesture relationships, demonstrating the potential of graph-based modeling to enhance the analysis of gesture interactions and cultural dynamics in classical dance. This study holds significant value for fields such as gesture recognition, AI, cultural heritage preservation, dance education, and digital humanities. By bridging traditional art forms with modern technologies, it empowers researchers, educators, and practitioners to enhance learning, fostering a deeper connection between cultural traditions and innovative technological advancements. This study introduces a novel integration of AI, deep learning, and graph-based modeling to interpret classical dance gestures, providing fresh perspectives on gesture interactions. It enhances current knowledge by bridging traditional art forms with advanced technologies, opening new possibilities for cultural studies, gesture recognition, and innovative approaches to preserving and learning intricate dance traditions.
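The fused feature extraction can be sketched as follows; the input size and fusion by simple concatenation are assumptions. Seven Hu moments describe hand shape while a pretrained VGG19 supplies deep appearance features.

```python
# Hu moments + VGG19 embedding as a combined gesture descriptor.
import cv2
import numpy as np
import tensorflow as tf

vgg19 = tf.keras.applications.VGG19(include_top=False, weights="imagenet", pooling="avg")

def fused_features(bgr_image):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    hu = cv2.HuMoments(cv2.moments(gray)).flatten()
    hu = -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)        # log-scale for stability
    rgb = cv2.resize(cv2.cvtColor(bgr_image, cv2.COLOR_BGR2RGB), (224, 224))
    x = tf.keras.applications.vgg19.preprocess_input(rgb.astype(np.float32)[None])
    deep = vgg19.predict(x, verbose=0).flatten()            # 512-d pooled embedding
    return np.concatenate([hu, deep])                       # combined descriptor
```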
APA, Harvard, Vancouver, ISO, and other styles
49

Mopidevi, Suneetha, Shivananda Biradhar, Neha Bobberla, and Kiran Sai Buddati. "Hand gesture recognition and voice conversion for deaf and Dumb." E3S Web of Conferences 391 (2023): 01060. http://dx.doi.org/10.1051/e3sconf/202339101060.

Full text
Abstract:
In this paper, we propose a hand gesture recognition model that can be used in real-time applications. The model is based on Google's MediaPipe framework, TensorFlow, OpenCV, and Python, with classification performed by a feed-forward neural network built as a Keras model. The structure of the proposed work consists of 3 modules: grabbing the frames, detecting hand landmarks, and classification. The proposed model has an accuracy of 95.7% at recognizing 10 kinds of hand gestures (thumbs up, thumbs down, peace, smile, rock, OK, fist, live long, call me, stop). A hand gesture recognition model that reacts rapidly with generally acceptable accuracy, together with a pre-trained model for feature extraction, is one of this work's primary achievements. The distinctive aspect of the suggested approach is that it detects hand landmarks using Google's MediaPipe, which is faster and more accurate than traditional methods that rely on geometry, form, and edge data. For modelling sequence data and recognising gestures, the LSTM model has proven to be quite successful.
APA, Harvard, Vancouver, ISO, and other styles
50

Niu, Yinxi, Wensheng Chen, Hui Zeng, Zhenhua Gan, and Baoping Xiong. "Optimizing sEMG Gesture Recognition: Leveraging Channel Selection and Feature Compression for Improved Accuracy and Computational Efficiency." Applied Sciences 14, no. 8 (2024): 3389. http://dx.doi.org/10.3390/app14083389.

Full text
Abstract:
In the task of upper-limb pattern recognition, effective feature extraction, channel selection, and classification methods are crucial for the construction of an efficient surface electromyography (sEMG) signal classification framework. However, existing deep learning models often face limitations due to improper channel selection methods and overly specific designs, leading to high computational complexity and limited scalability. To address this challenge, this study introduces a deep learning network based on channel feature compression—partial channel selection sEMG net (PCS-EMGNet). This network combines channel feature compression (channel selection) and feature extraction (partial block), aiming to reduce the model’s parameter count while maintaining recognition accuracy. PCS-EMGNet extracts high-dimensional feature vectors from sEMG signals through the partial block, decoding spatial and temporal feature information. Subsequently, channel selection compresses and filters these high-dimensional feature vectors, accurately selecting channel features to reduce the model’s parameter count, thereby decreasing computational complexity and enhancing the model’s processing speed. Moreover, the proposed method ensures the stability of classification, further improving the model’s capability of recognizing features in sEMG signal data. Experimental validation was conducted on five benchmark databases, namely the NinaPro DB4, NinaPro DB5, BioPatRec DB1, BioPatRec DB2, and BioPatRec DB3 datasets. Compared to traditional gesture recognition methods, PCS-EMGNet significantly enhanced recognition accuracy and computational efficiency, broadening its application prospects in real-world settings. The experimental results showed that our model achieved the highest average accuracy of 88.34% across these databases, marking a 9.96% increase in average accuracy compared to models with similar parameter counts. Simultaneously, our model’s parameter size was reduced by an average of 80% compared to previous gesture recognition models, demonstrating the effectiveness of channel feature compression in maintaining recognition accuracy while significantly reducing the parameter count.
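One way to picture the channel-gating idea described above is a standard squeeze-and-excitation-style gate (a generic stand-in, not PCS-EMGNet's published design): per-channel weights compress and re-weight the high-dimensional features so that uninformative channels contribute little.

```python
# A squeeze-and-excitation-style channel gate over sEMG feature maps.
import tensorflow as tf

def channel_gate(x, reduction=4):
    # x: (batch, time, channels) feature map from the feature-extraction block
    c = x.shape[-1]
    s = tf.keras.layers.GlobalAveragePooling1D()(x)              # squeeze
    s = tf.keras.layers.Dense(c // reduction, activation="relu")(s)
    s = tf.keras.layers.Dense(c, activation="sigmoid")(s)        # per-channel weights
    s = tf.keras.layers.Reshape((1, c))(s)
    return tf.keras.layers.Multiply()([x, s])                    # re-weight channels
```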
APA, Harvard, Vancouver, ISO, and other styles