Academic literature on the topic 'Gesture classification and feature extraction'

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Gesture classification and feature extraction.'

Journal articles on the topic "Gesture classification and feature extraction"

1

Wu, Yutong, Xinhui Hu, Ziwei Wang, Jian Wen, Jiangming Kan, and Wenbin Li. "Exploration of Feature Extraction Methods and Dimension for sEMG Signal Classification." Applied Sciences 9, no. 24 (2019): 5343. http://dx.doi.org/10.3390/app9245343.

Full text
Abstract:
It is necessary to complete two parts, gesture recognition and wireless remote control, to realize gesture control of the automatic pruning machine. To realize gesture recognition, this paper investigates gesture recognition technology based on the surface electromyography (sEMG) signal and discusses the influence of different numbers and combinations of gestures on the optimal feature dimension. We calculated a 630-dimensional eigenvector from a benchmark scientific database of sEMG signals and extracted features using principal component analysis (PCA). Discriminant analysis (DA) was used to compare the processing effect of each feature extraction method. The experimental results show that the recognition rate of four gestures can reach 100.0%, the recognition rate of six gestures can reach 98.29%, and the optimal size is 516–523 dimensions. This study lays a foundation for follow-up work on pruning machine gesture control and provides a compelling new way to advance human-computer interaction in forestry machinery.
APA, Harvard, Vancouver, ISO, and other styles
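The PCA-plus-discriminant-analysis pipeline this abstract describes can be sketched as follows; the data here is a synthetic stand-in for the paper's 630-dimensional sEMG feature vectors, and the dimensions, class counts, and component count are illustrative assumptions, not the paper's values:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for sEMG feature vectors: 4 gesture classes,
# 60 samples each, 120-dimensional features with class-dependent means.
n_classes, n_per_class, n_dims = 4, 60, 120
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, n_dims))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)

# PCA reduces the feature dimension before discriminant analysis,
# mirroring the PCA -> DA pipeline described in the abstract.
pca = PCA(n_components=20).fit(X_tr)
clf = LinearDiscriminantAnalysis().fit(pca.transform(X_tr), y_tr)
accuracy = clf.score(pca.transform(X_te), y_te)
print(f"recognition rate: {accuracy:.2%}")
```

On well-separated synthetic classes like these, the recognition rate approaches 100%, echoing the four-gesture result reported above.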
2

Trivedi, Kaustubh, Priyanka Gaikwad, Mahalaxmi Soma, Komal Bhore, and Prof Richa Agarwal. "Improve the Recognition Accuracy of Sign Language Gesture." International Journal for Research in Applied Science and Engineering Technology 10, no. 5 (2022): 4343–47. http://dx.doi.org/10.22214/ijraset.2022.43220.

Full text
Abstract:
Image classification is one of the classical problems in image processing, and various techniques exist for solving it. Sign languages are natural languages used to communicate with deaf and mute people, and there are many different sign languages in the world. The system focuses on Sign Language (SL), which is on the way to standardization, and concentrates on hand gestures only, since hand gestures are a very important means of exchanging ideas, messages, and thoughts among deaf and mute people. The proposed system recognizes the digits 0 to 9 and alphabets from American Sign Language. It is divided into three parts, i.e., preprocessing, feature extraction, and classification. It first identifies the gestures from American Sign Language; the system then processes each gesture to recognize it with the help of classification using a CNN. Additionally, the speech of each identified alphabet is played. Keywords: Hybrid Approach, American Sign Language, Gesture Recognition, Feature Extraction
APA, Harvard, Vancouver, ISO, and other styles
3

Gaikwad, Priyanka, Kaustubh Trivedi, Mahalaxmi Soma, Komal Bhore, and Prof Richa Agarwal. "A Survey on Sign Language Recognition with Efficient Hand Gesture Representation." International Journal for Research in Applied Science and Engineering Technology 10, no. 5 (2022): 21–25. http://dx.doi.org/10.22214/ijraset.2022.41963.

Full text
Abstract:
Image classification is one of the classical problems in image processing, and various techniques exist for solving it. Sign languages are natural languages used to communicate with deaf and mute people, and there are many different sign languages in the world. The system focuses on Sign Language (SL), which is on the way to standardization, and concentrates on hand gestures only; hand gestures are an extremely important part of communication for exchanging ideas, messages, and thoughts among deaf and mute people. The proposed system recognizes the digits 0 to 9 and alphabets from American Sign Language. It is divided into three parts, i.e., pre-processing, feature extraction, and classification. It first identifies the gestures from American Sign Language; the system then processes each gesture to recognize the number with the assistance of classification using a CNN. Additionally, the speech of each identified alphabet is played. Keywords: Hybrid Approach, American Sign Language, Number Gesture Recognition, Feature Extraction.
APA, Harvard, Vancouver, ISO, and other styles
4

Li, Wei, Yang Gao, Jun Chen, Si-Yi Niu, Jia-Hao Jiang, and Qi Li. "Human Gesture Recognition Based on Millimeter-Wave Radar Using Improved C3D Convolutional Neural Network." 電腦學刊 (Journal of Computers) 34, no. 3 (2023): 001–18. http://dx.doi.org/10.53106/199115992023063403001.

Full text
Abstract:
In this paper, we propose a time sequential IC3D convolutional neural network approach for hand gesture recognition based on frequency modulated continuous wave (FMCW) radar. Firstly, the FMCW radar is used to collect the echoes of human hand gestures. A two-dimensional fast Fourier transform calculates the range and velocity information of hand gestures in each frame signal to construct the Range-Doppler heat map dataset of hand gestures. Then, we design an IC3D network for feature extraction and classification of the dynamic gesture heat map. Finally, the experiment results show that the gesture recognition system designed in this paper effectively solves the problems of the difficulty of human gesture feature extraction and low utilization of time series information, and the average recognition accuracy rate can reach more than 99.8%.
APA, Harvard, Vancouver, ISO, and other styles
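The Range-Doppler construction the authors describe (a two-dimensional FFT over fast time and slow time) can be illustrated on a toy signal; the map sizes and target position below are arbitrary assumptions, not the paper's radar parameters:

```python
import numpy as np

# A toy range-Doppler map: one frame of FMCW echoes is modelled as a
# complex sinusoid whose fast-time frequency encodes range and whose
# chirp-to-chirp (slow-time) phase rotation encodes velocity.
n_chirps, n_samples = 64, 128      # slow time x fast time
range_bin, doppler_bin = 30, 10    # where the simulated target should appear

n = np.arange(n_samples)
m = np.arange(n_chirps)[:, None]
signal = np.exp(2j * np.pi * (range_bin * n / n_samples +
                              doppler_bin * m / n_chirps))

# Two-dimensional FFT: fast-time axis -> range, slow-time axis -> Doppler.
rd_map = np.abs(np.fft.fft2(signal))
peak = np.unravel_index(np.argmax(rd_map), rd_map.shape)
print(peak)  # (doppler_bin, range_bin)
```

Stacking one such heat map per frame yields the dynamic-gesture input that a 3D CNN like the IC3D network consumes.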
5

Chang, Ying, Lan Wang, Lingjie Lin, and Ming Liu. "Deep Neural Network for Electromyography Signal Classification via Wearable Sensors." International Journal of Distributed Systems and Technologies 13, no. 3 (2022): 1–11. http://dx.doi.org/10.4018/ijdst.307988.

Full text
Abstract:
Human-computer interaction has been widely used in many fields, such as intelligent prosthetic control, sports medicine, rehabilitation medicine, and clinical medicine, and has gradually become a research focus. In the field of intelligent prostheses, the sEMG signal has become the most widely used control signal source because it is easy to obtain. An off-line sEMG-controlled intelligent prosthesis needs to recognize gestures in order to execute the associated action. To solve this issue, this paper adopts a CNN plus BiLSTM to automatically extract sEMG features and recognize gestures. The CNN plus BiLSTM can overcome the drawbacks of manual feature extraction methods. The experimental results show that the proposed gesture recognition framework can extract overall gesture features, which improves the recognition rate.
APA, Harvard, Vancouver, ISO, and other styles
6

Ansar, Hira, Ahmad Jalal, Munkhjargal Gochoo, and Kibum Kim. "Hand Gesture Recognition Based on Auto-Landmark Localization and Reweighted Genetic Algorithm for Healthcare Muscle Activities." Sustainability 13, no. 5 (2021): 2961. http://dx.doi.org/10.3390/su13052961.

Full text
Abstract:
Due to the constantly increasing demand for the automatic localization of landmarks in hand gesture recognition, there is a need for a more sustainable, intelligent, and reliable system. The main purpose of this study was to develop an accurate hand gesture recognition system capable of error-free auto-landmark localization of any gesture detectable in an RGB image. In this paper, we propose a system based on landmark extraction from RGB images regardless of the environment. The extraction of gestures is performed via two methods, namely, fused and directional image methods; the fused method produced greater extracted gesture recognition accuracy. In the proposed system, hand gesture recognition (HGR) is done via several different methods, namely, (1) HGR via point-based features, which consist of (i) distance features, (ii) angular features, and (iii) geometric features; and (2) HGR via full-hand features, which are composed of (i) SONG mesh geometry and (ii) an active model. To optimize these features, we applied gray wolf optimization. After optimization, a reweighted genetic algorithm was used for classification and gesture recognition. Experimentation was performed on five challenging datasets: Sign Word, Dexter1, Dexter + Object, STB, and NYU. The experimental results proved that auto-landmark localization with the proposed feature extraction technique is an efficient approach towards developing a robust HGR system. The classification results of the reweighted genetic algorithm were compared with an Artificial Neural Network (ANN) and a decision tree. The developed system plays a significant role in healthcare muscle exercise.
APA, Harvard, Vancouver, ISO, and other styles
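The point-based distance and angular features this abstract lists can be sketched roughly as follows; the landmark coordinates and the centroid-based formulation are illustrative assumptions, not the paper's exact definitions:

```python
import numpy as np

def point_features(landmarks):
    """Distance and angular features from 2D hand landmarks, in the
    spirit of the point-based features described in the abstract."""
    landmarks = np.asarray(landmarks, dtype=float)
    center = landmarks.mean(axis=0)
    diffs = landmarks - center
    distances = np.linalg.norm(diffs, axis=1)       # distance features
    angles = np.arctan2(diffs[:, 1], diffs[:, 0])   # angular features
    return np.concatenate([distances, angles])

# Hypothetical 5-point hand outline (wrist plus four fingertips).
pts = [(0, 0), (1, 2), (2, 3), (3, 2), (4, 0)]
feats = point_features(pts)
print(feats.shape)  # (10,)
```

Feature vectors of this kind are what an optimizer (gray wolf optimization in the paper) would then prune before classification.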
7

Satybaldina, Dina, and Gulzia Kalymova. "Deep learning based static hand gesture recognition." Indonesian Journal of Electrical Engineering and Computer Science 21, no. 1 (2021): 398–405. http://dx.doi.org/10.11591/ijeecs.v21.i1.pp398-405.

Full text
Abstract:
Hand gesture recognition has become a popular topic in deep learning; it provides many application fields for bridging the human-computer barrier and has a positive impact on our daily life. The primary idea of our project is static gesture acquisition from a depth camera and processing of the input images to train a deep convolutional neural network pre-trained on the ImageNet dataset. The proposed system consists of a gesture capture device (Intel® RealSense™ depth camera D435), pre-processing and image segmentation algorithms, a feature extraction algorithm, and object classification. For the pre-processing and image segmentation algorithms, computer vision methods from the OpenCV and Intel RealSense libraries are used. The subsystem for feature extraction and gesture classification is based on a modified VGG-16, using the TensorFlow/Keras deep learning framework. Performance of the static gesture recognition system is evaluated using machine learning metrics. Experimental results show that the proposed model, trained on a database of 2000 images, provides high recognition accuracy at both the training and testing stages.
APA, Harvard, Vancouver, ISO, and other styles
9

Bai, Duanyuan, Dong Zhang, Yongheng Zhang, Yingjie Shi, and Tingyi Wu. "Gesture Recognition of sEMG Signals Based on CNN-GRU Network." Journal of Physics: Conference Series 2637, no. 1 (2023): 012054. http://dx.doi.org/10.1088/1742-6596/2637/1/012054.

Full text
Abstract:
To improve the accuracy of surface electromyogram (sEMG) gesture recognition algorithms and avoid the need to manually extract many features, this paper proposes a deep neural network-based gesture recognition method. A neural network integrating a CNN and a GRU was designed. The 8-channel sEMG data collected by the MYO armband is input to the CNN for feature extraction; the obtained feature sequence is then input to the GRU network for gesture classification, and finally the recognition result for the gesture category is output. The experimental findings show that the proposed technique reaches 76.41% recognition accuracy on the MyoUP dataset, demonstrating the practicality of the suggested scheme.
APA, Harvard, Vancouver, ISO, and other styles
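Before a CNN-GRU network of the kind described here can be applied, the 8-channel sEMG stream is typically cut into overlapping windows; a minimal sketch of that segmentation step, with hypothetical window and step sizes:

```python
import numpy as np

def sliding_windows(emg, window, step):
    """Segment multichannel sEMG into overlapping windows, the usual
    preprocessing before feeding a CNN feature extractor."""
    n_samples, n_channels = emg.shape
    starts = range(0, n_samples - window + 1, step)
    return np.stack([emg[s:s + window] for s in starts])

# Hypothetical recording: 1000 samples of 8-channel data
# (the MYO armband used in the paper has 8 electrodes).
emg = np.random.default_rng(1).normal(size=(1000, 8))
windows = sliding_windows(emg, window=200, step=100)
print(windows.shape)  # (9, 200, 8)
```

Each (window, channels) slice then becomes one input sample, and the per-window CNN outputs form the sequence the GRU classifies.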
10

Wang, Zhiyuan, Chongyuan Bi, Songhui You, and Junjie Yao. "Hidden Markov Model-Based Video Recognition for Sports." Advances in Mathematical Physics 2021 (December 20, 2021): 1–12. http://dx.doi.org/10.1155/2021/5183088.

Full text
Abstract:
In this paper, we conduct an in-depth study and analysis of sports video recognition with an improved hidden Markov model. The feature module is a complex gesture recognition module based on hidden Markov model gesture features, which applies hidden Markov model features to gesture recognition and recognizes complex gestures formed by combining simple gestures, building on simple gesture recognition. The combination of the two modules forms the overall technology of this paper, which can be applied to many scenarios, including special scenarios with high security levels that require real-time feedback and public indoor scenarios, achieving different prevention measures and services for different age groups. As the depth of the feature extraction network increases, the experimental effect improves; however, a two-dimensional convolutional neural network loses temporal information when extracting features, so a three-dimensional convolutional network is used in this paper to extract features from the video in time and space. Multiple binary classifications of the extracted features are performed to achieve multilabel classification. A multistream residual neural network is used to extract features from video data of three modalities, and the extracted feature vectors are fed into an attention mechanism network; the information most critical for video recognition is then selected from a large amount of spatiotemporal information, the temporal dependencies between consecutive video frames are learned, and the multistream network outputs are finally fused to obtain the predicted category. By training and optimizing the model in an end-to-end manner, recognition accuracies of 92.7% and 64.4% are achieved on the two datasets, respectively.
APA, Harvard, Vancouver, ISO, and other styles
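The core HMM computation behind this kind of recognition, scoring an observation sequence under a model via the forward algorithm, can be sketched as follows; the two-state model below is a hypothetical toy, not the paper's model:

```python
import numpy as np

def forward_log_likelihood(pi, A, B, obs):
    """Scaled forward algorithm: log P(obs | HMM). A gesture classifier
    evaluates this under one HMM per gesture and picks the best score."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # propagate, then weight by emission
        loglik += np.log(alpha.sum())   # accumulate the log scale factor
        alpha = alpha / alpha.sum()
    return loglik

# Hypothetical 2-state model over 2 observation symbols.
pi = np.array([0.6, 0.4])                # initial state distribution
A = np.array([[0.7, 0.3], [0.4, 0.6]])   # state transition matrix
B = np.array([[0.9, 0.1], [0.2, 0.8]])   # emission probabilities
score = forward_log_likelihood(pi, A, B, [0, 0, 1])
print(score)
```

Scaling the forward variables at each step avoids the numerical underflow that plagues naive implementations on long sequences.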

Dissertations / Theses on the topic "Gesture classification and feature extraction"

1

Goodman, Steve. "Feature extraction and classification." Thesis, University of Sunderland, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.301872.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Liu, Raymond. "Feature extraction in classification." Thesis, Imperial College London, 2013. http://hdl.handle.net/10044/1/23634.

Full text
Abstract:
Feature extraction, or dimensionality reduction, is an essential part of many machine learning applications. The necessity for feature extraction stems from the curse of dimensionality and the high computational cost of manipulating high-dimensional data. In this thesis we focus on feature extraction for classification. There are several approaches, and we will focus on two such: the increasingly popular information-theoretic approach, and the classical distance-based, or variance-based approach. Current algorithms for information-theoretic feature extraction are usually iterative. In contrast, PCA and LDA are popular examples of feature extraction techniques that can be solved by eigendecomposition, and do not require an iterative procedure. We study the behaviour of an example of iterative algorithm that maximises Kapur's quadratic mutual information by gradient ascent, and propose a new estimate of mutual information that can be maximised by closed-form eigendecomposition. This new technique is more computationally efficient than iterative algorithms, and its behaviour is more reliable and predictable than gradient ascent. Using a general framework of eigendecomposition-based feature extraction, we show a connection between information-theoretic and distance-based feature extraction. Using the distance-based approach, we study the effects of high input dimensionality and over-fitting on feature extraction, and propose a family of eigendecomposition-based algorithms that can solve this problem. We investigate the relationship between class-discrimination and over-fitting, and show why the advantages of information-theoretic feature extraction become less relevant in high-dimensional spaces.
APA, Harvard, Vancouver, ISO, and other styles
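The closed-form eigendecomposition route this abstract contrasts with iterative gradient ascent is exactly how PCA is computed; a minimal numpy sketch on synthetic correlated data (the dimensions are illustrative):

```python
import numpy as np

# PCA by closed-form eigendecomposition: no iterative procedure needed,
# unlike gradient-based information-theoretic feature extraction.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10)) @ rng.normal(size=(10, 10))  # correlated data
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (len(Xc) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
W = eigvecs[:, ::-1][:, :3]              # top-3 principal directions
Z = Xc @ W                               # extracted features
print(Z.shape)  # (200, 3)
```

The extracted components are mutually decorrelated by construction, which is what makes the eigendecomposition solution "reliable and predictable" compared with gradient ascent.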
3

Elliott, Rodney Bruce. "Feature extraction techniques for grasp classification." Thesis, University of Canterbury. Mechanical Engineering, 1998. http://hdl.handle.net/10092/3447.

Full text
Abstract:
This thesis examines the ability of four signal parameterisation techniques to provide discriminatory information between six different classes of signal. This was done with a view to assessing the suitability of the four techniques for inclusion in the real-time control scheme of a next-generation robotic prosthesis. Each class of signal correlates to a particular type of grasp that the robotic prosthesis is able to form. Discrimination between the six classes of signal was done on the basis of parameters extracted from four channels of electromyographic (EMG) data recorded from muscles in the forearm. Human skeletal muscle tissue produces EMG signals whenever it contracts. Therefore, provided that the EMG signals of the muscles controlling the movements of the hand vary sufficiently when forming the different grasp types, discrimination between the grasps is possible. While it is envisioned that the chosen command discrimination system will be used by mid-forearm amputees to control a robotic prosthesis, the viability of the different parameterisation techniques was tested on data gathered from able-bodied volunteers in order to establish an upper limit of performance. The muscles from which signals were recorded are: the extensor pollicis brevis and extensor pollicis longus pair (responsible for moving the thumb); the extensor communis digitorum (responsible for moving the middle and index fingers); and the extensor carpi ulnaris (responsible for moving the little finger). The four signal parameterisation techniques that were evaluated are:
1. Envelope Maxima. This method parameterises each EMG signal by the maximum value of a smoothed fitted signal envelope. A tenth-order polynomial is fitted to the rectified EMG signal peaks, and the maximum value of the polynomial is used to parameterise the signal.
2. Orthogonal Decomposition. This method uses a set of orthogonal functions to decompose the EMG signal into a finite set of orthogonal components. Each burst is then parameterised by the coefficients of the set of orthogonal functions. Two sets of orthogonal functions were tested: the Legendre polynomials, and the wavelet packets associated with the scaling functions of the Haar wavelet (referred to as the Haar wavelet for brevity).
3. Global Dynamical Model. This method uses a discretised set of nonlinear ordinary differential equations to model the dynamical processes that produced the recorded EMG signals. The coefficients of this model are then used to parameterise the EMG signal.
4. EMG Histogram. This method formulates a histogram detailing the frequency with which the EMG signal enters particular voltage bins, and uses these frequency measurements to parameterise the signal.
Ten sets of EMG data were gathered and processed to extract the desired parameters. Each data set consisted of 600 grasps: 100 grasp records of four channels of EMG data for each of the six grasp classes. From this data a hit rate statistic was formed for each feature extraction technique. The mean hit rates obtained from the four signal parameterisation techniques that were tested are summarised in Table 1.

Parameterisation Technique    Hit Rate (%)
Envelope Maxima               75
Legendre Polynomials          77
Haar Wavelets                 79
Global Dynamical Model        75
EMG Histogram                 81

Table 1: Hit Rate Summary.

The EMG histogram provided the best mean hit rate of all the signal parameterisation techniques, at 81%. However, like all of the signal parameterisations that were tested, there was considerable variance in hit rates between the ten sets of data. This has been attributed to the manner in which the electrodes used to record the EMG signals were positioned. By locating the muscles of interest more accurately, consistent hit rates of 95% are well within reach. The fact that the EMG histogram produces the best mean hit rate is surprising given its relative simplicity. However, this simplicity makes the EMG histogram feature ideal for inclusion in a real-time control scheme.
APA, Harvard, Vancouver, ISO, and other styles
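The EMG histogram parameterisation, which the thesis found to give the best mean hit rate, is simple enough to sketch directly; the bin count and voltage range below are illustrative choices, not the thesis's values:

```python
import numpy as np

def emg_histogram(signal, n_bins=9, v=3.0):
    """EMG histogram feature: the frequency with which the signal
    falls into fixed voltage bins spanning [-v, +v]."""
    counts, _ = np.histogram(signal, bins=n_bins, range=(-v, v))
    return counts / counts.sum()

# Synthetic stand-in for one recorded EMG burst.
burst = np.random.default_rng(2).normal(scale=0.5, size=2000)
feature = emg_histogram(burst)
print(feature.shape)  # (9,)
```

The resulting fixed-length vector (one value per bin, summing to 1) is cheap to compute, which is why the thesis highlights it for real-time control.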
4

Forsberg, Axel. "A Wavelet-Based Surface Electromyogram Feature Extraction for Hand Gesture Recognition." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-39766.

Full text
Abstract:
The research field of robotic prosthetic hands have expanded immensely in the last couple of decades and prostheses are in more commercial use than ever. Classification of hand gestures using sensory data from electromyographic signals in the forearm are primary for any advanced prosthetic hand. Improving classification accuracy could lead to more user friendly and more naturally controlled prostheses. In this thesis, features were extracted from wavelet transform coefficients of four channel electromyographic data and used for classifying ten different hand gestures. Extensive search for suitable combinations of wavelet transform, feature extraction, feature reduction, and classifier was performed and an in-depth comparison between classification results of selected groups of combinations was conducted. Classification results of combinations were carefully evaluated with extensive statistical analysis. It was shown in this study that logarithmic features outperforms non-logarithmic features in terms of classification accuracy. Then a subset of all combinations containing only suitable combinations based on the statistical analysis is presented and the novelty of these results can direct future work for hand gesture recognition in a promising direction.
APA, Harvard, Vancouver, ISO, and other styles
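A rough sketch of the wavelet-based logarithmic features this thesis studies, using a hand-rolled single-level Haar transform and the log-variance of detail coefficients; the thesis's actual wavelets, decomposition depths, and feature definitions differ:

```python
import numpy as np

def haar_level(x):
    """One level of the Haar wavelet transform: approximation and
    detail coefficients from pairwise averages and differences."""
    x = x[:len(x) // 2 * 2].reshape(-1, 2)
    approx = (x[:, 0] + x[:, 1]) / np.sqrt(2)
    detail = (x[:, 0] - x[:, 1]) / np.sqrt(2)
    return approx, detail

def log_variance_feature(x, levels=3):
    """Logarithmic features of wavelet coefficients; the study above
    found log features to outperform non-logarithmic ones."""
    feats = []
    for _ in range(levels):
        x, detail = haar_level(x)
        feats.append(np.log(np.var(detail)))
    feats.append(np.log(np.var(x)))   # final approximation band
    return np.array(feats)

# Synthetic stand-in for one channel of an sEMG recording.
emg = np.random.default_rng(3).normal(size=512)
feats = log_variance_feature(emg)
print(feats)  # one log-variance per wavelet band
```

Concatenating such per-channel band features across the four EMG channels yields the kind of vector the thesis feeds to feature reduction and a classifier.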
5

Chaofan, Hao, and Yu Haisheng. "Feature Extraction of Gesture Recognition Based on Image Analysis by Using Matlab." Thesis, Högskolan i Gävle, Avdelningen för Industriell utveckling, IT och Samhällsbyggnad, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-17367.

Full text
Abstract:
This thesis mainly focuses on gesture extraction and finger segmentation in gesture recognition. We used image analysis techniques to create an application, coded in Matlab, to segment and extract the finger from one specific gesture (the gesture "one"), and it ran successfully. We explored the success rate of extracting the characteristics of the specific gesture "one" in different natural environments. We divided the natural environment into three different conditions, namely glare and dark conditions, similar-object conditions, and different-distance conditions, and then collected the results to calculate the successful extraction rate. We also evaluated and analyzed the shortcomings of this application and discuss future work.
APA, Harvard, Vancouver, ISO, and other styles
6

Chilo, José. "Feature extraction for low-frequency signal classification /." Stockholm : Fysik, Physics, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4661.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Aktaruzzaman, M. "FEATURE EXTRACTION AND CLASSIFICATION THROUGH ENTROPY MEASURES." Doctoral thesis, Università degli Studi di Milano, 2015. http://hdl.handle.net/2434/277947.

Full text
Abstract:
Entropy is a universal concept that represents the uncertainty of a series of random events. The notion of "entropy" is understood differently in different disciplines. In physics, it represents a thermodynamical state variable; in statistics it measures the degree of disorder. In computer science, on the other hand, it is used as a powerful tool for measuring the regularity (or complexity) of signals or time series. In this work, we have studied entropy-based features in the context of signal processing. The purpose of feature extraction is to select the relevant features from an entity; the type of features depends on the signal characteristics and the classification purpose. Many real-world signals are nonlinear and nonstationary, and they contain information that cannot be described by time- and frequency-domain parameters but might be described well by entropy. In practice, however, estimation of entropy suffers from some limitations and is highly dependent on series length. To reduce this dependence, we have proposed parametric estimation of various entropy indices and have derived analytical expressions (when possible) as well. We have then studied the feasibility of parametric estimation of entropy measures on both synthetic and real signals. The entropy-based features have finally been employed for classification problems related to clinical applications, activity recognition, and handwritten character recognition. Thus, from a methodological point of view our study deals with feature extraction, machine learning, and classification methods. Different versions of entropy measures are found in the literature on signal analysis. Among them, approximate entropy (ApEn) and sample entropy (SampEn), followed by corrected conditional entropy (CcEn), are mostly used for physiological signal analysis. Recently, entropy features have also been used for image segmentation.
A related measure is Lempel-Ziv complexity (LZC), which measures the complexity of a time series, signal, or sequence; the estimation of LZC also relies on the series length. In particular, in this study, analytical expressions have been derived for the ApEn, SampEn, and CcEn of auto-regressive (AR) models. It should be mentioned that AR models have been employed for maximum entropy spectral estimation for many years. The feasibility of parametric estimates of these entropy measures has been studied on both synthetic series and real data. In the feasibility study, the agreement between numerical estimates of entropy and estimates obtained through a certain number of realizations of the AR model using Monte Carlo simulations has been observed. This agreement or disagreement provides information about the nonlinearity, nonstationarity, or non-Gaussianity present in the series. In some classification problems, the probability of agreement or disagreement has proved to be one of the most relevant features. After the feasibility study of the parametric entropy estimates, entropy and related measures have been applied to heart rate and arterial blood pressure variability analysis. The use of entropy and related features has proved most relevant in developing sleep classification, handwritten character recognition, and physical activity recognition systems. The novel feature extraction methods researched in this thesis give a good classification or recognition accuracy, in many cases superior to the features reported in the literature of the application domains concerned, even with lower computational costs.
APA, Harvard, Vancouver, ISO, and other styles
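Sample entropy (SampEn), one of the measures discussed in this abstract, can be sketched in a few lines; this is a simplified version (the template counts for m and m+1 are not matched exactly as in the canonical definition), meant only to show the idea that regular series score lower than irregular ones:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Simplified SampEn: -log of the chance that runs matching for m
    points (Chebyshev distance, tolerance r*std) also match for m+1."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)

    def matches(mm):
        t = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.abs(t[:, None, :] - t[None, :, :]).max(axis=2)
        return (d <= tol).sum() - len(t)   # exclude self-matches

    return -np.log(matches(m + 1) / matches(m))

rng = np.random.default_rng(4)
noise = rng.normal(size=300)                       # irregular series
sine = np.sin(2 * np.pi * np.arange(300) / 30)     # regular series
print(sample_entropy(sine), sample_entropy(noise))
```

The dependence of such estimates on series length is precisely the limitation the thesis's parametric (AR-model-based) estimators address.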
8

Graf, Arnulf B. A. "Classification and feature extraction in man and machine." [S.l. : s.n.], 2004. http://deposit.ddb.de/cgi-bin/dokserv?idn=972533508.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Hamsici, Onur C. "Bayes Optimality in Classification, Feature Extraction and Shape Analysis." The Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=osu1218513562.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Khan, Muhammad. "Hand Gesture Detection & Recognition System." Thesis, Högskolan Dalarna, Datateknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:du-6496.

Full text
Abstract:
The project introduces an application using computer vision for hand gesture recognition. A camera records a live video stream, from which a snapshot is taken with the help of an interface. The system is trained for each type of count hand gesture (one, two, three, four, and five) at least once. After that a test gesture is given to it and the system tries to recognize it. Research was carried out on a number of algorithms that could best differentiate hand gestures, and it was found that the diagonal sum algorithm gave the highest accuracy rate. In the preprocessing phase, a self-developed algorithm removes the background of each training gesture. After that the image is converted into a binary image and the sums of all diagonal elements of the picture are taken; this sum helps in differentiating and classifying different hand gestures. Previous systems have used data gloves or markers for input; this system has no such constraints, and the user can make hand gestures in view of the camera naturally. A completely robust hand gesture recognition system is still under heavy research and development; the implemented system serves as an extendible foundation for future work.
APA, Harvard, Vancouver, ISO, and other styles
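The diagonal sum feature this thesis settles on can be sketched directly: binarize the image, then sum every diagonal. The toy image below is a made-up example, not the thesis's data:

```python
import numpy as np

def diagonal_sums(binary_img):
    """Sum of each diagonal of a binary image, the 'diagonal sum'
    feature the thesis uses to discriminate count gestures."""
    h, w = binary_img.shape
    return np.array([np.trace(binary_img, offset=k)
                     for k in range(-(h - 1), w)])

img = np.zeros((5, 5), dtype=int)
img[2, 2] = img[1, 3] = img[3, 1] = 1   # a small hand-shaped blob
sums = diagonal_sums(img)
print(sums)  # one sum per diagonal, h + w - 1 values in total
```

For an h-by-w image this yields a fixed-length vector of h + w - 1 values, whose pattern shifts with the number of raised fingers.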

Books on the topic "Gesture classification and feature extraction"

1

Lee, Chulhee. Feature extraction and classification algorithms for high dimensional data. School of Electrical Engineering, Purdue University, 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Eyben, Florian. Real-time Speech and Music Classification by Large Audio Feature Space Extraction. Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-27299-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Ding, Yao, Zhili Zhang, Haojie Hu, Fang He, Shuli Cheng, and Yijun Zhang. Graph Neural Network for Feature Extraction and Classification of Hyperspectral Remote Sensing Images. Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-97-8009-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Sharma, R. K., Munish Kumar, Manish Kumar Jindal, Simpel Rani Jindal, and Anupam Garg. Feature Extraction and Classification Techniques for Text Recognition. IGI Global, 2020.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Kumar, Munish. Feature Extraction and Classification Techniques for Text Recognition. IGI Global, 2020.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Leong, Wai Yie, ed. EEG Signal Processing: Feature Extraction, Selection and Classification Methods. Institution of Engineering and Technology, 2019. http://dx.doi.org/10.1049/pbhe016e.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Leong, Wai Yie. EEG Signal Processing: Feature Extraction, Selection and Classification Methods. Institution of Engineering & Technology, 2019.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Gesture classification and feature extraction"

1

Chen, Zengzhao, Cong Wang, Chunlin Deng, Xiaochao Feng, and Chao Zhang. "Feature Extraction and Classification of Dynamic and Static Gestures Based on RealSense." In Advances in Intelligent Systems and Computing. Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-10-8944-2_23.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Barbhuiya, Abul Abbas, Ram Kumar Karsh, and Samiran Dutta. "AlexNet-CNN Based Feature Extraction and Classification of Multiclass ASL Hand Gestures." In Lecture Notes in Electrical Engineering. Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-0275-7_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Amin, Praahas, Airani Mohammad Khan, Akshay Ram Bhat, and Gautham Rao. "Feature Extraction and Classification of Gestures from Myo-Electric Data Using a Neural Network Classifier." In Evolution in Computational Intelligence. Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-5788-0_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Cesare, Silvio, and Yang Xiang. "Feature Extraction." In Software Similarity and Classification. Springer London, 2012. http://dx.doi.org/10.1007/978-1-4471-2909-7_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Dougherty, Geoff. "Feature Extraction and Selection." In Pattern Recognition and Classification. Springer New York, 2012. http://dx.doi.org/10.1007/978-1-4614-5323-9_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Abe, Shigeo. "Feature Selection and Extraction." In Support Vector Machines for Pattern Classification. Springer London, 2010. http://dx.doi.org/10.1007/978-1-84996-098-4_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Verma, B., and S. Kulkarni. "Texture Feature Extraction and Classification." In Computer Analysis of Images and Patterns. Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44692-3_28.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Vallez, Noelia, Anibal Pedraza, Carlos Sánchez, Jesus Salido, Oscar Deniz, and Gloria Bueno. "Diatom Feature Extraction and Classification." In Modern Trends in Diatom Identification. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-39212-3_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Ghosh, Anil Kumar, and Smarajit Bose. "Feature Extraction for Nonlinear Classification." In Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11590316_21.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Ortega, Francisco R., Naphtali Rishe, Armando Barreto, Fatemeh Abyarjoo, and Malek Adjouadi. "Multi-Touch Gesture Recognition Using Feature Extraction." In Lecture Notes in Electrical Engineering. Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-06773-5_39.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Gesture classification and feature extraction"

1

Liu, Dongbo, Yun Liu, Yu Fang, and Minsheng Yi. "A Personalized Feature Extraction Method for Hand Gesture Classification of sEMG Signals." In 2024 3rd International Conference on Electronics and Information Technology (EIT). IEEE, 2024. https://doi.org/10.1109/eit63098.2024.10762663.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Habib, Gousia, Ishfaq Ahmad Malik, and Shaima Qureshi. "Feature Extraction and Classification Using Deep Learning." In 2024 1st International Conference on Logistics (ICL). IEEE, 2024. https://doi.org/10.1109/icl62932.2024.10788640.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Mannam, Abhilashitha, Moulyasri Amudalapalli, Akhila Vudatha, Guru Sai Keerthana Kothuri, Radha Abburi, and Sibendu Samanta. "Feature Extraction and Classification of PCG Signal." In 2024 OITS International Conference on Information Technology (OCIT). IEEE, 2024. https://doi.org/10.1109/ocit65031.2024.00011.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Niese, Robert, Ayoub Al-Hamadi, Faisal Aziz, and Bernd Michaelis. "Robust facial expression recognition based on 3-d supported feature extraction and SVM classification." In Gesture Recognition (FG). IEEE, 2008. http://dx.doi.org/10.1109/afgr.2008.4813427.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Mishra, Saurabh, Neha Singh, Mayank Kumar, Suraj Singh, and Shubham Mishra. "REVIEW ON VISIOSENSE: NAVIGATING THE LANDSCAPE OF INDIAN SIGN LANGUAGE DETECTION AND RECOGNITION." In Computing for Sustainable Innovation: Shaping Tomorrow’s World. Innovative Research Publication, 2024. http://dx.doi.org/10.55524/csistw.2024.12.1.65.

Full text
Abstract:
This paper introduces a system for instant recognition of Indian Sign Language (ISL) and gesture identification using grid-based features. Addressing communication barriers between hearing-impaired individuals and society, the system achieves high accuracy without external devices such as gloves or Microsoft Kinect sensors. The laptop's camera captures ISL gestures, leveraging face detection, object fixation, and skin-color technologies for hand detection and tracking. Grid-based feature extraction represents hand movements as feature vectors, classified through the k-nearest neighbor algorithm. Hand gesture classification employs hidden Markov models, achieving 99.7% accuracy for static tasks and 97.23% for orientation recognition.
APA, Harvard, Vancouver, ISO, and other styles
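As a rough illustration of the grid-based feature idea in the entry above — cell-wise foreground density fed to a k-nearest-neighbor vote — here is a minimal sketch. The grid size, distance metric, and helper names are assumptions, not the paper's implementation:

```python
import numpy as np

def grid_features(mask: np.ndarray, grid=(4, 4)) -> np.ndarray:
    """Fraction of foreground pixels in each cell of a grid over a hand mask."""
    gr, gc = grid
    rows = np.array_split(mask, gr, axis=0)
    cells = [c for r in rows for c in np.array_split(r, gc, axis=1)]
    return np.array([c.mean() for c in cells])

def knn_predict(train_X, train_y, x, k=3):
    """Majority vote among the k nearest training vectors (Euclidean)."""
    d = np.linalg.norm(train_X - x, axis=1)
    nearest = np.asarray(train_y)[np.argsort(d)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]

# Toy 8x8 hand mask occupying the left half of the frame.
mask = np.zeros((8, 8))
mask[:, :4] = 1
print(grid_features(mask, grid=(2, 2)))  # -> [1. 0. 1. 0.]
```

In the paper's pipeline the masks would come from skin-color hand tracking; here they are synthetic.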
6

Dixit, Akanksha, Varun Bajaj, Irshad Ahmad Ansari, and Prabin Kumar Padhy. "Time Frequency Analysis and Deep Feature Extraction for Hand Gesture Classification." In 2024 IEEE International Students' Conference on Electrical, Electronics and Computer Science (SCEECS). IEEE, 2024. http://dx.doi.org/10.1109/sceecs61402.2024.10482258.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

de Lima, Gabriel Molina, Daniel Prado Campos, and Rafael Gomes Mantovani. "A Review on the Recent use of Machine Learning for Gesture Recognition using Myoelectric Signals." In Encontro Nacional de Inteligência Artificial e Computacional. Sociedade Brasileira de Computação - SBC, 2024. https://doi.org/10.5753/eniac.2024.245071.

Full text
Abstract:
Gesture recognition using myoelectric signals (sEMG) is a powerful tool for Human-Machine Interfaces (HMIs). While significant progress has been made with various machine learning algorithms, more recent and robust solutions in the sEMG pipeline must be explored. This study reviews recent gesture recognition research to identify gaps and analyze standard classification and feature extraction approaches from sEMG signals. We performed a review considering studies published between 2018 and 2024. Our findings reveal a prevalence of public datasets and time-domain features. We highlight the need for further research on feature engineering, algorithm exploration beyond traditional choices, and integration of DL for feature extraction.
APA, Harvard, Vancouver, ISO, and other styles
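The time-domain features the review finds most prevalent can be sketched for a single sEMG window as follows; this is a generic illustration, as the exact feature set varies across the surveyed studies:

```python
import numpy as np

def time_domain_features(x: np.ndarray) -> dict:
    """Four classic time-domain sEMG features for one channel window."""
    return {
        "mav": np.mean(np.abs(x)),                     # mean absolute value
        "rms": np.sqrt(np.mean(x ** 2)),               # root mean square
        "wl": np.sum(np.abs(np.diff(x))),              # waveform length
        "zc": int(np.sum(np.signbit(x[:-1]) != np.signbit(x[1:]))),  # zero crossings
    }

# Toy alternating window; a real window would be a few hundred samples.
window = np.array([0.5, -0.5, 0.5, -0.5])
print(time_domain_features(window))
```

Vectors like this, computed per channel and per sliding window, are what typically feeds the classifiers compared in such reviews.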
8

Hozyn, Stanislaw. "Hand Gesture Recognition For Human-Robot Cooperation In Manufacturing Applications." In 37th ECMS International Conference on Modelling and Simulation. ECMS, 2023. http://dx.doi.org/10.7148/2023-0373.

Full text
Abstract:
Human-robot cooperation plays an increasingly important role in manufacturing applications. Together, humans and robots display an exceptional skill level that neither can achieve independently. For such cooperation, hand gesture communication using computer vision has proven to be the most suitable due to its low implementation cost and flexibility. This work therefore focuses on the hand gesture classification problem in the context of human-robot collaboration. To facilitate collaboration, six of the most common gestures applicable in manufacturing applications were selected. The first part of the research was devoted to creating an image dataset using the proposed acquisition system. Then, pre-trained neural networks were designed and tested. In this step, the feature extraction approach was adopted, which utilises the representations learned by a previous network to extract meaningful features. The results suggest that all developed pre-trained networks attained high accuracy (above 98.9%). Among them, VGG19 demonstrated the best performance, achieving an accuracy of 99.63%. The proposed approach can easily be adapted to recognise a larger or different set of gestures: using the proposed vision system and the developed neural network architectures, adaptation demands only acquiring a set of images and retraining the developed networks.
APA, Harvard, Vancouver, ISO, and other styles
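The feature-extraction transfer approach this entry describes — freeze a pre-trained backbone and train only a small classifier on its outputs — can be illustrated in miniature. The random matrix below is merely a stand-in for the frozen VGG19 convolutional base, and the nearest-centroid head is an assumption chosen for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen pre-trained backbone (VGG19 minus its top layers
# in the paper): a fixed, non-trainable mapping from images to features.
W_backbone = rng.normal(size=(64, 16))  # 64-pixel "image" -> 16-dim features

def extract_features(images):
    """Run the frozen backbone; its weights are never updated."""
    return np.maximum(images @ W_backbone, 0.0)  # ReLU activations

def fit_centroids(feats, labels):
    """Tiny trainable head: nearest-centroid classifier in feature space."""
    return {c: feats[labels == c].mean(axis=0) for c in np.unique(labels)}

def predict(feats, centroids):
    classes = list(centroids)
    d = np.stack([np.linalg.norm(feats - centroids[c], axis=1) for c in classes])
    return np.array([classes[i] for i in d.argmin(axis=0)])

# Two synthetic "gesture" classes: dark frames vs bright frames.
X = np.vstack([np.zeros((3, 64)), np.ones((3, 64))])
y = np.array([0, 0, 0, 1, 1, 1])
cents = fit_centroids(extract_features(X), y)
```

Only the head is fit to the new gesture set, which is why such systems can be retrained from a comparatively small image dataset.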
9

Ge, Yuncheng, Yewei Huang, Ye Julei, Huazixi Zeng, Hechong Su, and Zengyao Yang. "DMGR: Divisible Multi-complex Gesture Recognition Based on Word Segmentation Processing." In 2024 AHFE International Conference on Human Factors in Design, Engineering, and Computing (AHFE 2024 Hawaii Edition). AHFE International, 2024. http://dx.doi.org/10.54941/ahfe1005654.

Full text
Abstract:
In the realm of gesture recognition and computer algorithm optimization, traditional approaches have predominantly focused on recognizing isolated gestures. However, this paradigm proves inadequate when confronted with complex gestural sequences, resulting in cumbersome recognition processes and diminished accuracy. Contemporary human-computer interaction (HCI) applications often necessitate users to perform intricate series of gestures, rather than isolated movements. Consequently, there is a pressing need for systems capable of not only recognizing individual gestures but also accurately segmenting and interpreting sequences of complex gestures to infer user intent and provide natural, intuitive responses. Drawing parallels with natural language processing (NLP), where understanding complex sentences requires word segmentation, structural analysis, and contextual comprehension, the field of HCI faces similar challenges in multi-complex dynamic gesture interaction. The cornerstone of effective gesture-based interaction lies in precise gesture segmentation, recognition, and intention understanding. The crux of the matter is developing methods to accurately delineate individual gestures within a continuous sequence and establish contextual relationships between them to discern the user's overarching intent. To address these challenges and facilitate more natural and user-friendly multi-complex dynamic gesture interaction, this paper introduces a novel recognition model and segmentation algorithm. The proposed framework draws inspiration from word processing techniques in NLP, applying a list model to the multi-complex gesture task machine. This approach decomposes complex gestural sequences into constituent operations, which are further subdivided into consecutive actions corresponding to individual gestures. By recognizing each gesture independently and then synthesizing this information, the system can interpret the entire complex gesture task.
The algorithm incorporates the concept of action elements to reduce gesture dimension and employs a probability density distribution-based segmentation and optimization technique to accurately partition gestures within multi-complex tasks. This innovative approach not only enhances recognition accuracy but also significantly reduces computational complexity, as demonstrated by experimental results on a multi-complex gesture task database. The paper is structured as follows: First, it elucidates the algorithm framework for divisible multi-complex dynamic gesture task recognition and the underlying model based on word processing techniques. Subsequently, it provides a detailed exposition of the algorithm's implementation, encompassing feature extraction, gesture classification, segmentation, and optimization methodologies. Finally, the paper presents the experimental design and results, offering empirical validation of the proposed approach's efficacy. This research represents a significant advancement in the field of gesture recognition, particularly in handling complex, multi-gesture sequences. By addressing the limitations of traditional single-gesture recognition systems, this work paves the way for more sophisticated and intuitive human-computer interaction paradigms. The proposed model's ability to accurately segment and interpret complex gesture sequences opens up new possibilities for applications in various domains, from virtual reality interfaces to robotic control systems. The integration of concepts from NLP into gesture recognition underscores the interdisciplinary nature of this research, highlighting the potential for cross-pollination of ideas between different fields of computer science. Furthermore, the emphasis on reducing computational complexity while maintaining high accuracy addresses a crucial concern in real-time interactive systems.
In conclusion, this study makes substantial contributions to the field of gesture recognition and HCI, offering a robust framework for handling multi-complex dynamic gesture tasks. The proposed algorithms and models not only advance the state of the art in gesture recognition but also lay the groundwork for more natural and efficient human-computer interaction modalities in future applications.
APA, Harvard, Vancouver, ISO, and other styles
10

Salunke, Tejashree P., and S. D. Bharkad. "Power point control using hand gesture recognition based on hog feature extraction and k-nn classification." In 2017 International Conference on Computing Methodologies and Communication (ICCMC). IEEE, 2017. http://dx.doi.org/10.1109/iccmc.2017.8282654.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Gesture classification and feature extraction"

1

Carin, Lawrence. ICA Feature Extraction and SVM Classification of FLIR Imagery. Defense Technical Information Center, 2005. http://dx.doi.org/10.21236/ada441506.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Coleman, Olivia, Dan Rosa de Jesus, and Romarie Morales Rosado. Feature Extraction: Improving Remote Sensor Classification of Non-Proliferation. Office of Scientific and Technical Information (OSTI), 2021. http://dx.doi.org/10.2172/2349078.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Bahrampour, Soheil, Asok Ray, Soumalya Sarka, Thyagaraju Damarla, and Nasser M. Nasrabadi. Performance Comparison of Feature Extraction Algorithms for Target Detection and Classification. Defense Technical Information Center, 2013. http://dx.doi.org/10.21236/ada580366.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Hurd, Harry L. Workstation Tools for Feature Extraction and Classification for Nonstationary and Transient Signals. Defense Technical Information Center, 1992. http://dx.doi.org/10.21236/ada255389.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Pasion, Leonard. Feature Extraction and Classification of Magnetic and EMI Data, Camp Beale, CA. Defense Technical Information Center, 2012. http://dx.doi.org/10.21236/ada569666.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Huynh, Quyen Q., Leon N. Cooper, Nathan Intrator, and Harel Shouval. Classification of Underwater Mammals using Feature Extraction Based on Time-Frequency Analysis and BCM Theory. Defense Technical Information Center, 1996. http://dx.doi.org/10.21236/ada316962.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

De Voir, Christopher. Wavelet Based Feature Extraction and Dimension Reduction for the Classification of Human Cardiac Electrogram Depolarization Waveforms. Portland State University Library, 2000. http://dx.doi.org/10.15760/etd.1739.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Richardson, J. Automatic feature extraction and classification from digital x-ray images. Final report, period ending 1 May 1995. Office of Scientific and Technical Information (OSTI), 1995. http://dx.doi.org/10.2172/224901.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Billings, Stephen. Data Modeling, Feature Extraction, and Classification of Magnetic and EMI Data, ESTCP Discrimination Study, Camp Sibert, AL. Demonstration Report. Defense Technical Information Center, 2008. http://dx.doi.org/10.21236/ada495600.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Asari, Vijayan, Paheding Sidike, Binu Nair, Saibabu Arigela, Varun Santhaseelan, and Chen Cui. PR-433-133700-R01 Pipeline Right-of-Way Automated Threat Detection by Advanced Image Analysis. Pipeline Research Council International, Inc. (PRCI), 2015. http://dx.doi.org/10.55274/r0010891.

Full text
Abstract:
A novel algorithmic framework for the robust detection and classification of machinery threats and other potentially harmful objects intruding onto a pipeline right-of-way (ROW) is designed from three perspectives: visibility improvement, context-based segmentation, and object recognition/classification. In the first part of the framework, an adaptive image enhancement algorithm is utilized to improve the visibility of aerial imagery to aid in threat detection. In this technique, a nonlinear transfer function is developed to enhance the processing of aerial imagery with extremely non-uniform lighting conditions. In the second part of the framework, context-based segmentation is developed to eliminate regions from imagery that are not considered to be a threat to the pipeline. Context-based segmentation makes use of a cascade of pre-trained classifiers to search for regions that are not threats; it accelerates threat identification and improves object detection rates. The last phase of the framework is an efficient object detection model that follows a three-stage approach: extraction of the local phase in the image and the use of local phase characteristics to locate machinery threats. The local phase is an image feature extraction technique which partially removes the lighting variance and preserves the edge information of the object. Multiple orientations of the same object are matched and the correct orientation is selected using feature matching by histogram of local phase in a multi-scale framework. The classifier outputs the locations of threats to the pipeline. The advanced automatic image analysis system is intended to be capable of detecting construction equipment along the ROW of pipelines with a very high degree of accuracy in comparison with manual threat identification by a human analyst.
APA, Harvard, Vancouver, ISO, and other styles
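The abstract above does not give the nonlinear transfer function itself; a toy adaptive-gamma transform in the same spirit — brighten dark regions more than bright ones based on a local illumination estimate — might look like the following. The box-blur estimate and the `strength` parameter are assumptions, not the report's method:

```python
import numpy as np

def adaptive_enhance(img: np.ndarray, strength: float = 0.5) -> np.ndarray:
    """Per-pixel gamma correction driven by a crude local-mean estimate
    of the illumination; img is expected in the range [0, 1]."""
    # Crude local illumination estimate: 5x5 box blur with edge padding.
    pad = np.pad(img, 2, mode="edge")
    local = np.mean([pad[i:i + img.shape[0], j:j + img.shape[1]]
                     for i in range(5) for j in range(5)], axis=0)
    # Dark neighborhoods -> gamma < 1 (brighten); bright -> gamma > 1 (darken).
    gamma = strength ** (0.5 - local)
    return np.clip(img ** gamma, 0.0, 1.0)

dark_patch = np.full((8, 8), 0.1)
print(adaptive_enhance(dark_patch).mean())  # lifted above the input mean of 0.1
```

The point is only to show how a single nonlinear transfer function can compensate non-uniform lighting before segmentation and detection run.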