Dissertations / Theses on the topic 'Gesture Recognition'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Gesture Recognition.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Davis, James W. "Gesture recognition." Honors in the Major Thesis, University of Central Florida, 1994. http://digital.library.ucf.edu/cdm/ref/collection/ETH/id/126.
Bachelors
Arts and Sciences
Computer Science
Cheng, You-Chi. "Robust gesture recognition." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/53492.
Kaâniche, Mohamed Bécha. "Human gesture recognition." Nice, 2009. http://www.theses.fr/2009NICE4032.
In this thesis, we aim to recognize gestures (e.g., hand raising) and, more generally, short actions (e.g., falling, bending) performed by an individual. Many techniques have already been proposed for gesture recognition in specific environments (e.g., a laboratory), relying on the cooperation of several sensors (e.g., a camera network, or individuals equipped with markers). Despite these strong hypotheses, gesture recognition is still brittle and often depends on the position of the individual relative to the cameras. We propose to relax these hypotheses in order to design a general algorithm that can recognize the gestures of an individual moving in an unconstrained environment and observed through a limited number of cameras. The goal is to estimate the likelihood of gesture recognition as a function of the observation conditions. Our method classifies a set of gestures by learning motion descriptors: local signatures of the motion of corner points, associated with their local textural description. We demonstrate the effectiveness of our motion descriptors by recognizing the actions of the public KTH database.
Semprini, Mattia. "Gesture Recognition: una panoramica." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/15672/.
Gingir, Emrah. "Hand Gesture Recognition System." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612532/index.pdf.
Dang, Darren Phi Bang. "Template based gesture recognition." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/41404.
Includes bibliographical references (p. 65-66).
by Darren Phi Bang Dang.
M.S.
Wang, Lei. "Personalized Dynamic Hand Gesture Recognition." Thesis, KTH, Medieteknik och interaktionsdesign, MID, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-231345.
Human gestures, with their spatial and temporal variations, are hard to recognize with a generic model or classification method. To address this problem, personalized dynamic gesture recognition approaches are proposed, based on Dynamic Time Warping (DTW) and a new concept, the Subject Relation Network, which describes similarities between subjects performing dynamic gestures and offers a new perspective on gesture recognition. By clustering or ranking the training subjects based on this network, two personalization algorithms are proposed for generative and discriminative models. In addition, three baseline recognition methods, DTW-based template matching, Hidden Markov Models (HMM), and Fisher Vector classification, are compared and integrated into the proposed personalized gesture recognition approach. The proposed approaches are evaluated on a challenging dynamic hand gesture dataset, DHG14/28, which contains the depth images and skeleton coordinates returned by an Intel RealSense depth camera. Experimental results show that the proposed personalization algorithms outperform the baseline generative and discriminative models, reaching a best accuracy of 86.2%.
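The DTW-based template matching used as one of the baselines above can be sketched in a few lines of Python. This is a generic textbook implementation with a toy template set, not the thesis's code; the gesture names and signals are illustrative.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D sequences.

    Classic O(len(a)*len(b)) dynamic program: D[i, j] is the cost of the
    best warping path aligning a[:i] with b[:j]."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible predecessor paths
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(query, templates):
    """Nearest-template rule: label of the template with minimal DTW cost."""
    return min(templates, key=lambda label: dtw_distance(query, templates[label]))

# toy usage: two gesture "templates" as 1-D motion signals
templates = {"wave": [0, 1, 0, 1, 0], "push": [0, 2, 4, 2, 0]}
print(classify([0, 1, 1, 0, 1, 0], templates))  # → wave
```

Because DTW warps the time axis, the query's repeated sample still aligns with the "wave" template at zero cost, which is exactly why DTW suits gestures with temporal variation.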
Espinoza, Victor. "Gesture Recognition in Tennis Biomechanics." Master's thesis, Temple University Libraries, 2018. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/530096.
M.S.E.E.
The purpose of this study is to create a gesture recognition system that interprets motion-capture data of a tennis player to determine which biomechanical aspects of a tennis swing best correlate with swing efficacy. For its learning set, this work aimed to record 50 tennis athletes of similar competency with the Microsoft Kinect performing standard tennis swings toward different targets. From the acquired data we extracted biomechanical features that hypothetically correlate with ball trajectory under proper technique, and tested them as sequential inputs to our classifiers. This work implements deep learning algorithms as variable-length sequence classifiers, recurrent neural networks (RNN), to predict tennis ball trajectory. In an attempt to learn temporal dependencies within a tennis swing, we implemented gate-augmented RNNs, comparing the baseline RNN to two gated models: gated recurrent units (GRU) and long short-term memory (LSTM) units. We observed similar classification performance across models, while the gated methods reached convergence twice as fast as the baseline RNN. The results showed an entropy loss of 1.2 and 50% classification accuracy, indicating that the hypothesized biomechanical features were only loosely correlated with swing efficacy, or that they were not accurately captured by the sensor.
Temple University--Theses
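The gated recurrent units compared in this study can be made concrete with a minimal GRU cell in numpy, following the Cho et al. (2014) formulation. The dimensions, random weights, and the 30-frame "swing" below are illustrative assumptions, not the thesis's actual setup.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h, params):
    """One GRU step: gates decide how much of the previous hidden state
    to keep, which is what lets gated RNNs track longer temporal
    dependencies than a vanilla RNN."""
    Wz, Uz, Wr, Ur, Wh, Uh = params
    z = sigmoid(Wz @ x + Uz @ h)               # update gate
    r = sigmoid(Wr @ x + Ur @ h)               # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))   # candidate state
    return (1 - z) * h + z * h_tilde           # blend old and candidate state

rng = np.random.default_rng(0)
n_in, n_hid = 6, 8   # e.g. 6 joint-angle features per frame (assumed)
params = [rng.normal(scale=0.1, size=(n_hid, n_in)) if i % 2 == 0
          else rng.normal(scale=0.1, size=(n_hid, n_hid)) for i in range(6)]
h = np.zeros(n_hid)
for frame in rng.normal(size=(30, n_in)):      # a synthetic 30-frame "swing"
    h = gru_cell(frame, h, params)
print(h.shape)  # → (8,)
```

The final hidden state `h` summarizes the whole sequence and would feed a softmax layer for trajectory classification.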
Nygård, Espen Solberg. "Multi-touch Interaction with Gesture Recognition." Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2010. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9126.
This master's thesis explores the world of multi-touch interaction with gesture recognition. The focus is on camera-based multi-touch techniques, as these add a new dimension to multi-touch with their ability to recognize objects. During the project, a multi-touch table based on Diffused Surface Illumination technology was built. In addition to building the table, a complete gesture recognition system was implemented, and different gesture recognition algorithms were successfully tested in a multi-touch environment. The goal of this table, and the accompanying gesture recognition system, is to create an open and affordable multi-touch solution, with the purpose of bringing multi-touch out to the masses. In this way, more people will be able to enjoy the benefits of a more natural interaction with computers. In a larger perspective, multi-touch is just the beginning: by adding further modalities to our applications, such as speech recognition and full-body tracking, a whole new level of computer interaction becomes possible.
Khan, Muhammad. "Hand Gesture Detection & Recognition System." Thesis, Högskolan Dalarna, Datateknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:du-6496.
Gillian, N. E. "Gesture recognition for musician computer interaction." Thesis, Queen's University Belfast, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.546348.
Cairns, Alistair Y. "Towards the automatic recognition of gesture." Thesis, University of Dundee, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.385803.
Harding, Peter Reginald George. "Gesture recognition by Fourier analysis techniques." Thesis, City University London, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.440735.
Tanguay, Donald O. (Donald Ovila). "Hidden Markov models for gesture recognition." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/37796.
Includes bibliographical references (p. 41-42).
by Donald O. Tanguay, Jr.
M.Eng.
Yao, Yi. "Hand gesture recognition in uncontrolled environments." Thesis, University of Warwick, 2014. http://wrap.warwick.ac.uk/74268/.
Glatt, Ruben. "Deep learning architecture for gesture recognition /." Guaratinguetá, 2014. http://hdl.handle.net/11449/115718.
Co-advisor: Daniel Julien Barros da Silva Sampaio
Committee: Galeno José de Sena
Committee: Luiz de Siqueira Martins Filho
Abstract: Activity recognition from computer vision plays an important role in research towards applications like human-computer interfaces, intelligent environments, surveillance, or medical systems. In this work, a gesture recognition system based on a deep learning architecture is proposed. It is used to analyze the performance when trained with multi-modal input data on an Italian sign language dataset. The underlying research area is a field called human-machine interaction. It combines research on natural user interfaces, gesture and activity recognition, machine learning, and sensor technologies, which are used to capture the environmental input for further processing. Those areas are introduced and the basic concepts are described. The development environment for preprocessing data and programming machine learning algorithms with Python is described and the main libraries are discussed. The gathering of the multi-modal data streams is explained and the dataset used is outlined. The proposed learning architecture consists of two steps: the preprocessing of the input data and the actual learning architecture. The preprocessing is limited to three different strategies, which are combined to offer six different preprocessing profiles. In the second step, a Deep Belief Network is introduced and its components are explained. With this setup, 294 experiments are conducted with varying configuration settings. The variables that are altered are the preprocessing settings, the layer structure of the model, the pretraining learning rate, and the fine-tuning learning rate. The evaluation of these experiments shows that using a deep learning architecture on an activity or gesture recognition task yields acceptable results, but has not yet reached a level of maturity that would allow the developed models to be used in serious applications.
Master's
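A Deep Belief Network of the kind used in this work is pretrained by stacking Restricted Boltzmann Machines, trained one layer at a time. Below is a minimal CD-1 RBM sketch; the layer sizes, learning rate, and single training pattern are arbitrary assumptions, not the configurations from the 294 experiments.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Restricted Boltzmann Machine, the building block of a Deep Belief
    Network: DBN pretraining stacks RBMs layer by layer."""
    def __init__(self, n_visible, n_hidden, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W = 0.01 * rng.normal(size=(n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)
        self.b_h = np.zeros(n_hidden)
        self.lr = lr
        self.rng = rng

    def train_step(self, v0):
        """One step of contrastive divergence (CD-1)."""
        p_h0 = sigmoid(v0 @ self.W + self.b_h)                 # positive phase
        h0 = (self.rng.random(p_h0.shape) < p_h0).astype(float)
        p_v1 = sigmoid(h0 @ self.W.T + self.b_v)               # reconstruction
        p_h1 = sigmoid(p_v1 @ self.W + self.b_h)               # negative phase
        self.W += self.lr * (np.outer(v0, p_h0) - np.outer(p_v1, p_h1))
        self.b_v += self.lr * (v0 - p_v1)
        self.b_h += self.lr * (p_h0 - p_h1)
        return np.mean((v0 - p_v1) ** 2)                       # reconstruction error

rbm = RBM(n_visible=16, n_hidden=8)
v = (np.random.default_rng(1).random(16) > 0.5).astype(float)  # one binary pattern
errors = [rbm.train_step(v) for _ in range(200)]
print(rbm.W.shape)  # → (16, 8)
```

After pretraining each layer this way, the stacked weights initialize a feed-forward network that is fine-tuned with backpropagation, which is where the separate pretraining and fine-tuning learning rates in the abstract come in.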
Caceres, Carlos Antonio. "Machine Learning Techniques for Gesture Recognition." Thesis, Virginia Tech, 2014. http://hdl.handle.net/10919/52556.
Master of Science
Pfister, Tomas. "Advancing human pose and gesture recognition." Thesis, University of Oxford, 2015. http://ora.ox.ac.uk/objects/uuid:64e5b1be-231e-49ed-b385-e87db6dbeed8.
Al-Rajab, Moaath. "Hand gesture recognition for multimedia applications." Thesis, University of Leeds, 2008. http://etheses.whiterose.ac.uk/607/.
Jia, Jia. "Interactive Imaging via Hand Gesture Recognition." Thesis, University of Bradford, 2009. http://hdl.handle.net/10454/4259.
Toure, Zikra. "Human-Machine Interface Using Facial Gesture Recognition." Thesis, University of North Texas, 2017. https://digital.library.unt.edu/ark:/67531/metadc1062841/.
Liu, Nianjun. "Hand gesture recognition by Hidden Markov Models /." [St. Lucia, Qld.], 2004. http://www.library.uq.edu.au/pdfserve.php?image=thesisabs/absthe18158.pdf.
Pun, James Chi-Him. "Gesture recognition with application in music arrangement." Diss., University of Pretoria, 2006. http://upetd.up.ac.za/thesis/available/etd-11052007-171910/.
Chan, Siu Chi 1979. "Hand and fingertip tracking for gesture recognition." Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=83855.
Kolesnik, Paul. "Conducting gesture recognition, analysis and performance system." Thesis, McGill University, 2004. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=81499.
King, Rachel C. "Hand gesture recognition for minimally invasive surgery." Thesis, Imperial College London, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.497748.
Puranam, Muthukumar B. "Towards Full-Body Gesture Analysis and Recognition." UKnowledge, 2005. http://uknowledge.uky.edu/gradschool_theses/227.
Zanghieri, Marcello. "sEMG-based hand gesture recognition with deep learning." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/18112/.
Bernard, Arnaud Jean Marc. "Human computer interface based on hand gesture recognition." Thesis, Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/42748.
Zhu, Hong Min. "Real-time hand gesture recognition using motion tracking." Thesis, University of Macau, 2010. http://umaclib3.umac.mo/record=b2182870.
Wilson, Andrew David. "Adaptive models for the recognition of human gesture." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/62951.
Includes bibliographical references (leaves 135-140).
Tomorrow's ubiquitous computing environments will go beyond the keyboard, mouse and monitor paradigm of interaction and will require the automatic interpretation of human motion using a variety of sensors including video cameras. I present several techniques for human motion recognition that are inspired by observations on human gesture, the class of communicative human movement. Typically, gesture recognition systems are unable to handle systematic variation in the input signal, and so are too brittle to be applied successfully in many real-world situations. To address this problem, I present modeling and recognition techniques to adapt gesture models to the situation at hand. A number of systems and frameworks that use adaptive gesture models are presented. First, the parametric hidden Markov model (PHMM) addresses the representation and recognition of gesture families, to extract how a gesture is executed. Second, strong temporal models drawn from natural gesture theory are exploited to segment two kinds of natural gestures from video sequences. Third, a real-time computer vision system learns gesture models online from time-varying context. Fourth, a real-time computer vision system employs hybrid Bayesian networks to unify and extend the previous approaches, as well as point the way for future work.
by Andrew David Wilson.
Ph.D.
Moy, Milyn C. (Milyn Cecilia) 1975. "Real-time hand gesture recognition in complex environments." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/50054.
Includes bibliographical references (leaves 65-68).
by Milyn C. Moy.
S.B. and M.Eng.
Bailey, Sam. "Interactive exploration of historic information via gesture recognition." Thesis, University of East Anglia, 2012. https://ueaeprints.uea.ac.uk/42540/.
李世淵. "Anti-Gesture Model For Gesture Recognition." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/88514897535275004323.
Tsai, Jui-Che, and 蔡睿哲. "Hand Gesture Recognition." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/e6nbcb.
Oriental Institute of Technology
Institute of Information and Communication Engineering
100
In recent years, image processing has developed considerably, and hand recognition systems have attracted many researchers. In this paper, a simple hand gesture recognition algorithm reduces the amount of data and obtains the desired result. First, the computer grabs two pictures from a webcam, with the resolution set to 320*240. Background subtraction between the two pictures is used to reduce the amount of data, and erosion and dilation are then applied to reduce noise, leaving only the hand region in the image. We then find the centroid of the hand region and, from it, search for and record the right-most and left-most coordinates. The distance from the centroid to the right-most and left-most points becomes a radius, and we draw a circle with this radius to remove the palm's connection to the fingers. Finally, we mark the resulting image to count the fingers.
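The first steps of the pipeline above (frame differencing, centroid, palm radius) can be sketched with plain numpy on synthetic frames; a real implementation would use OpenCV equivalents such as cv2.absdiff, cv2.erode/cv2.dilate, and cv2.moments. The threshold and frame content here are illustrative assumptions.

```python
import numpy as np

def hand_mask(background, frame, thresh=30):
    """Background subtraction: pixels that changed by more than `thresh`
    are assumed to belong to the hand."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    return diff > thresh

def centroid(mask):
    """Centroid (row, col) of the foreground region, used to anchor
    the palm circle."""
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

def palm_radius(mask, cy, cx):
    """Distance from the centroid to the left-most and right-most
    foreground pixels; the larger one serves as the palm radius."""
    _, cols = np.nonzero(mask)
    return max(cx - cols.min(), cols.max() - cx)

# synthetic 240x320 frames: a bright rectangle stands in for the hand
bg = np.zeros((240, 320), dtype=np.uint8)
frame = bg.copy()
frame[100:140, 150:210] = 255
mask = hand_mask(bg, frame)
cy, cx = centroid(mask)
print(cy, cx, palm_radius(mask, cy, cx))  # → 119.5 179.5 29.5
```

In the thesis's method, the circle drawn with this radius erases the palm so that only finger blobs remain to be counted.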
Lemarcis, Baptiste. "Towards streaming gesture recognition." Thèse, 2016. http://constellation.uqac.ca/4132/1/Lemarcis_uqac_0862N_10294.pdf.
Sahoo, Lagnajeet. "Hand Gesture Recognition System." Thesis, 2015. http://ethesis.nitrkl.ac.in/7739/1/602.pdf.
Pradhan, Lalit Mohan. "Gesture Based Character Recognition." Thesis, 2015. http://ethesis.nitrkl.ac.in/7806/1/2015_Gesture_Pradhan.pdf.
Wu, Zong-Guei, and 吳宗桂. "Using KINECT Gesture Recognition for User Recognition." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/qau9uf.
National Formosa University
Institute of Electrical Engineering
103
In recent years, secure identification systems for intelligent environments have attracted growing attention, and many similar systems have been proposed. This paper presents posture-based user identification built on the skeleton data obtained from KINECT. It uses two types of features: non-learning features and learning features produced by learning methods. Based on the human skeleton joints, three non-learning user features are proposed: "Adjacency Joint Distance", "Confirm Skeleton Angle", and a combination of the two; and two learning features: "Gravity of Offset" (GLO) and "Transfer Matrix of Offset" (TMLO). All of them are used as features in the user identification system. The paper also uses a Support Vector Machine (SVM), a Gaussian Mixture Model (GMM), and Principal Component Analysis (PCA): the SVM confirms user legality, while GMM and PCA perform user identity recognition. Three types of user recognition models are proposed, GMM-PCA, PCA learning-SVM, and PCA learning-GMM, which modify the original single-model method. The three non-learning features are trained separately in SVM, GMM, and PCA, and the features with the better recognition rates are selected: "Adjacency Joint Distance" for SVM and PCA, and the combined features for GMM. The non-learning features are trained in GMM-PCA, and the score-normalized total of GMM-PCA decides the recognition result. Since a user's actions may change over time and with habit, affecting recognition performance, machine learning is added and two learning algorithms are developed: "Adjacency Joint Distance" is trained in PCA, and, based on the PCA learning method, the two learning offset features PCA-GLO and PCA-TMLO are trained in SVM and GMM.
With 16 rounds of learning, PCA-GLO trained in SVM reached a recognition rate of 94.3%, and PCA-GLO trained in GMM reached 99.8%; with 10 rounds of learning, PCA-TMLO trained in SVM reached 98.9%. In the experiments, PCA-TMLO trained in SVM outperformed a single SVM as learning rounds increased, and required fewer rounds than PCA-GLO trained in SVM; PCA-GLO trained in GMM achieved a better recognition rate than a single GMM. These results demonstrate the learning effect of the features extracted by the PCA learning method.
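The PCA step underlying the learning features above can be sketched with plain numpy via eigendecomposition of the covariance matrix; the 20-dimensional "adjacency joint distance" vectors below are synthetic stand-ins, and the dimensions are assumptions.

```python
import numpy as np

def pca_fit(X, k):
    """Fit PCA by eigendecomposition of the covariance matrix.
    Returns the feature mean and the top-k principal axes (as columns)."""
    mean = X.mean(axis=0)
    cov = np.cov(X - mean, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigh: ascending eigenvalues
    components = eigvecs[:, ::-1][:, :k]     # reverse to take the top-k axes
    return mean, components

def pca_transform(X, mean, components):
    """Project skeleton-feature vectors onto the principal axes."""
    return (X - mean) @ components

# e.g. 100 samples of 20 joint-distance features reduced to 5 dimensions
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 20))
mean, comps = pca_fit(X, k=5)
Z = pca_transform(X, mean, comps)
print(Z.shape)  # → (100, 5)
```

The projected vectors `Z` (or offsets derived from them, in the spirit of GLO/TMLO) would then be fed to the SVM or GMM classifiers.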
Chen, Chih-Yu, and 陳治宇. "Virtual Mouse:Vision-Based Gesture Recognition." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/74539959450046293234.
National Sun Yat-sen University
Department of Computer Science and Engineering
91
The thesis describes a method for human-computer interaction through vision-based gesture recognition and hand tracking, which consists of five phases: image grabbing, image segmentation, feature extraction, gesture recognition, and system mouse control. Unlike most previous works, our method recognizes the hand with just one camera and requires no color markers or mechanical gloves. The primary work of the thesis is improving the accuracy and speed of the gesture recognition. Further, the gesture commands are used to replace the mouse interface on a standard personal computer, controlling application software in a more intuitive manner.
TIWARI, MANU, and 馬麗麗. "Gesture Recognition in Shopping Scenario." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/q8edw2.
National Chiao Tung University
EECS International Graduate Program
107
Smartphones and smart wristbands are being used for effective activity recognition in health management, personal identification, payments, and more. The shopping industry is not far behind in experimenting with these devices to improve the shopping experience for customers, gain more information on their behavior, and benefit businesses. This work aims at recognizing activities performed during shopping using an inertial sensor. Segments were generated and processed to develop a recognition model that is robust and light enough to be built into a real-time activity recognition application. Using graphical features in addition to statistical features successfully increased the accuracy, and the sliding, overlapping window further improved the recognition model.
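The segment generation this kind of work relies on is typically a sliding, overlapping window over the inertial stream, with features computed per window. A minimal sketch follows, with an assumed window size, overlap, and purely statistical features (the thesis's actual parameters and "graphical" features are not specified here).

```python
import numpy as np

def sliding_windows(signal, size, overlap):
    """Split a 1-D sensor stream into overlapping windows."""
    step = size - overlap
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, step)]

def window_features(w):
    """Simple per-window statistical features; a real system would add
    more discriminative (e.g. graphical) features."""
    return [w.mean(), w.std(), w.min(), w.max()]

stream = np.sin(np.linspace(0, 20, 200))   # stand-in for accelerometer data
wins = sliding_windows(stream, size=50, overlap=25)
X = np.array([window_features(w) for w in wins])
print(X.shape)  # → (7, 4)
```

Each row of `X` is one segment's feature vector, ready for a lightweight classifier on the device.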
Mandal, Itishree, and Samiksha Ray. "Hand gesture based digit recognition." Thesis, 2014. http://ethesis.nitrkl.ac.in/6488/1/E-30.pdf.
Chen, Jiunn-Yeuo, and 陳俊有. "Hand Gesture Commands for a PC Presentation: Hand Gesture Recognition and Pointing Computation." Thesis, 1997. http://ndltd.ncl.edu.tw/handle/64517050729864669654.
National Chiao Tung University
Department of Computer Science and Information Engineering
85
In a PC presentation system, the speaker must bend down to operate the mouse or keyboard to move the screen up, down, left, or right. This delays or interrupts the presentation. In this thesis, we want to remove this drawback by using human gestures to replace the mouse functions in the PC presentation system. We define eight hand gestures: up, down, left, right, zoom in, zoom out, hold, and point. With two calibrated TV cameras, we capture the hand images with a frame grabber and perform image processing. Finally, we send the hand gesture recognition result to the PC presentation system via an RS232 network.
Chen, Feng-Sheng, and 陳豐生. "Gesture Recognition Using Hidden Markov Models." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/39515573353219025950.
National Tsing Hua University
Department of Electrical Engineering
87
In this thesis, we introduce a hand gesture recognition system that recognizes continuous gestures against a simple background. The system consists of three modules: feature extraction, hidden Markov model (HMM) training, and gesture recognition using the HMMs. First, we apply motion information to extract the hand shape and use the scale- and rotation-invariant Fourier descriptor to characterize hand figures; we then combine the Fourier descriptor and the motion information of the input image sequence into our feature vector. After extracting the feature vector, we train the system using the HMM approach and then use the trained HMMs to recognize the input gesture. In the training phase, we apply a hidden Markov model to describe each gesture's properties (generating the initial state probability distribution, the state transition probability distribution, and the observation probability distribution). To recognize a gesture, the gesture to be recognized is separately scored against the different HMMs, and the model with the highest score is selected as the recognized gesture. Our system covers 20 different hand gestures. The experimental results show an average recognition rate of 88.5%.
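The decision rule described here (score the input against each gesture's HMM and keep the highest-scoring model) reduces to the forward algorithm. Below is a discrete-observation sketch with two made-up toy models, not the thesis's 20-gesture system with Fourier-descriptor features.

```python
import numpy as np

def forward_likelihood(obs, pi, A, B):
    """P(observation sequence | HMM) via the forward algorithm.
    pi: initial state distribution, A: state transitions, B: emission
    probabilities (states x symbols)."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # propagate, then weight by emission
    return alpha.sum()

def recognize(obs, models):
    """Pick the gesture whose HMM assigns the sequence the highest likelihood."""
    return max(models, key=lambda g: forward_likelihood(obs, *models[g]))

# two toy 2-state HMMs over a binary observation alphabet
models = {
    "wave": (np.array([0.6, 0.4]),
             np.array([[0.7, 0.3], [0.4, 0.6]]),
             np.array([[0.5, 0.5], [0.1, 0.9]])),
    "stop": (np.array([0.5, 0.5]),
             np.array([[0.9, 0.1], [0.2, 0.8]]),
             np.array([[0.9, 0.1], [0.8, 0.2]])),
}
print(recognize([0, 1, 1, 1], models))  # → wave
```

A production system would score log-likelihoods (to avoid underflow on long sequences) over quantized feature vectors rather than raw symbols.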
Chen, Kuan-Wei, and 陳冠緯. "Gesture recognition of smart mobile device." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/62wfvm.
Shu-Te University
Master's Program, Department of Computer Science and Information Engineering
104
With technology advancements, today's smart mobile devices are moving towards increasingly higher performance specifications. This thesis takes the neural network training that could previously only run on home computers or higher-level devices and runs it on today's smart mobile devices. For this thesis, a system was designed on an Android smart mobile device. Acceleration values were obtained using the device's built-in gravitational acceleration sensor with finite-state capture, then passed through average filtering and normalization before being sent to a backpropagation neural network for training. Upon completion of the training, recall data for the neural network was likewise obtained via the acceleration sensor for gesture recognition. The results show that using a smart mobile device for real-time acceleration capture is feasible, and that the device remains usable while training runs, thanks to Android's service lifecycle feature. The smart mobile device used was a Sony Z3, and the gestures to be recognized were handwritten Arabic numerals 0~9. Observations were made with different numbers of hidden-layer neurons and training samples. The results reveal that the number of neurons has no significant effect on recognition accuracy, while the impact of training sample size is more evident: the larger the training sample, the higher the accuracy and the longer the training time. It is therefore necessary to balance training time against accuracy. The shortest training time was 86 minutes, using 100 training samples and 50 neurons; the longest was 432 minutes, using 200 training samples and 60 neurons. The training sample sizes for numbers 0~9 were 10, 15, and 20.
With 60 neurons in the hidden layer, the average gesture recognition accuracy for these sample sizes reached 87%, 87.5%, and 89.5%. With 50, 60, 70, and 80 neurons in the hidden layer, the average accuracy reached 85%, 87%, 87%, and 86% respectively. Integrating data capture, neural network training, and gesture recognition in one smart mobile device is therefore feasible. It is recommended to choose data with more strongly fluctuating acceleration values as training samples, which can increase the success rate of gesture recognition, and to choose a higher-performance smart mobile device, which can reduce training time.
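The preprocessing chain described (average filtering followed by normalization before the backpropagation network) might look like the following; the window length, target range, and synthetic accelerometer trace are assumptions, not the thesis's actual settings.

```python
import numpy as np

def average_filter(signal, window=5):
    """Smooth raw accelerometer samples with a simple moving average."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="valid")

def min_max_normalize(signal, lo=-1.0, hi=1.0):
    """Scale a gesture's samples into a fixed range so the network sees
    comparable magnitudes regardless of how forcefully the user moved."""
    smin, smax = signal.min(), signal.max()
    return lo + (signal - smin) * (hi - lo) / (smax - smin)

# one axis of a synthetic accelerometer trace for a handwritten digit
rng = np.random.default_rng(42)
raw = np.sin(np.linspace(0, 6, 120)) + 0.2 * rng.normal(size=120)
features = min_max_normalize(average_filter(raw))
print(features.shape)  # → (116,)
```

The normalized vector would then be the input layer of the backpropagation network, one network input per sample.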
Chao-Hui, Huang, and 黃朝暉. "Silhouette-Based Hand Gesture Recognition System." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/93267423724812911907.
Full textChung Hua University (中華大學)
Master's Program, Department of Computer Science and Information Engineering
89
Having computers take over some human tasks is a long-standing dream, but duplicating human intuition is very difficult. In this thesis we try to duplicate the visual side of human intuition and demonstrate it through hand gesture recognition with low computational requirements. Typically, hand gesture recognition systems demand either high computational cost or special auxiliary devices, so a faster and more convenient method is needed. For real-time implementation, we developed two main algorithms: the Curve Detection Algorithm (CDA) and the Peak Detection Algorithm (PDA). CDA extracts features from the silhouette pattern of the image, while PDA extracts the specific patterns in the silhouette image that carry peak information. In essence, we have developed a new method for hand gesture extraction with lower computational cost and fewer device requirements, offered as an alternative to existing light-spot-based extraction methods. Using CDA and PDA, we can extract hand gesture features, treating fingertips and valleys as if they were light spots worn on the hand.
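The abstract does not give the internals of CDA and PDA, but the general idea of extracting peak information from a silhouette can be illustrated by a minimal local-maximum search over a 1-D contour signature (for example, the distance of each contour point from the hand centroid, where peaks typically correspond to fingertips); the signature representation and threshold are assumptions for illustration only:

```python
def detect_peaks(signature, threshold):
    """Return indices of local maxima in a 1-D silhouette signature
    (e.g. contour-point distance from the hand centroid) that exceed
    `threshold`; such peaks typically correspond to fingertips."""
    peaks = []
    for i in range(1, len(signature) - 1):
        if (signature[i] > signature[i - 1]      # rising edge
                and signature[i] >= signature[i + 1]  # falling (or flat) edge
                and signature[i] > threshold):   # prominent enough
            peaks.append(i)
    return peaks
```

A valley detector for the gaps between fingers would be the symmetric search for local minima.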
AlSharif, Mohammed H. "Hand Gesture Recognition Using Ultrasonic Waves." Thesis, 2016. http://hdl.handle.net/10754/609434.
Full textBrás, André Filipe Pereira. "Gesture recognition using deep neural networks." Master's thesis, 2017. http://hdl.handle.net/10316/83023.
Full text
This dissertation had as its main goal the development of a method to perform gesture segmentation and recognition. The research was motivated by the significance of human action and gesture recognition in real-world applications, such as Human-Machine Interaction (HMI) and sign language understanding. Furthermore, it is thought that the current state of the art can be improved, since this is an area of research in continuous development, with new methods and ideas emerging frequently. The gesture segmentation involved a set of handcrafted features extracted from 3D skeleton data, which are suited to characterize each frame of any video sequence, and an Artificial Neural Network (ANN) to distinguish resting moments from periods of activity. For the gesture recognition, 3 different models were developed. The first approach used the handcrafted features and a sliding window, which gathers information along the time dimension. Furthermore, the combination of several sliding windows, capturing the influence of different temporal scales, was also explored. Lastly, all the handcrafted features were discarded and a Convolutional Neural Network (CNN) was used with the aim of automatically extracting the most important features and representations from images. All the methods were tested on the 2014 Looking At People Challenge data set and the best one achieved a Jaccard index of 0.71. The performance is almost on par with that of some state-of-the-art techniques.
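The Jaccard index used to score the 2014 Looking At People challenge is the intersection over union between predicted and ground-truth gesture intervals; a minimal sketch, assuming frame-level binary masks (1 = frame belongs to the gesture) as the representation:

```python
def jaccard_index(pred, truth):
    """Frame-level Jaccard index between a predicted and a ground-truth
    binary mask over a video sequence: |intersection| / |union|."""
    inter = sum(1 for p, t in zip(pred, truth) if p and t)
    union = sum(1 for p, t in zip(pred, truth) if p or t)
    # convention: two empty masks agree perfectly
    return inter / union if union else 1.0
```

In the challenge protocol this score is averaged over all gesture instances and sequences to produce the final figure (0.71 here).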
TSAI, HU-CHUNG, and 蔡鵠仲. "Research on Gesture Recognition Controlled Quadcopter." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/24984313688431062793.
Full textNational Kaohsiung Marine University (國立高雄海洋科技大學)
Graduate Institute of Marine Engineering
104
In this thesis, control method analysis and design for a quadcopter are considered. A human-machine interface based on gesture recognition is developed to control the quadcopter. The main system architecture includes the quadcopter, a synchronous attitude simulation system, a proportional-integral-derivative (PID) controller, and the gesture recognition interface. The quadcopter frame measures 330 mm x 330 mm with an X-shaped motor configuration. The synchronous attitude simulation system is mainly used to obtain the quadcopter's flight attitude from the corrected parameter values of the acceleration and gyro sensors. Motor speed control and stable flight are achieved based on the flight attitude and the PID controller design. The gesture recognition interface senses the position of the hand gesture; an interactive graphical interface detects the hand's position and status and controls the flight of the quadcopter. Finally, experimental results validate the proposed gesture-controlled quadcopter approach.
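The PID attitude loop described above can be sketched as follows; the discrete-time form and the gains in the usage note are illustrative assumptions, not the tuned values from the thesis:

```python
class PID:
    """Discrete PID controller: given a setpoint and a measured angle
    (e.g. roll from the fused accelerometer/gyro estimate), produce a
    correction to be mixed into the four motor speeds."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measured, dt):
        error = setpoint - measured
        self.integral += error * dt                      # I term accumulates
        deriv = (0.0 if self.prev_error is None
                 else (error - self.prev_error) / dt)    # D term on error change
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv
```

In use, one such controller per axis (roll, pitch, yaw) would run at the attitude-estimation rate, e.g. `roll_pid = PID(kp=1.0, ki=0.05, kd=0.2)` with hypothetical gains, and its output added to or subtracted from opposite motor pairs of the X configuration.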