Academic literature on the topic 'Vision Based Gesture Recognition'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Vision Based Gesture Recognition.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Vision Based Gesture Recognition"

1

Lou, Xinyue. "Vision-based Hand Gesture Recognition Technology." Applied and Computational Engineering 141, no. 1 (2025): 54–59. https://doi.org/10.54254/2755-2721/2025.21696.

Abstract:
Human-computer interaction has a wide range of application prospects in many fields such as medicine, entertainment, industry and education. Gesture recognition is one of the most important technologies for gesture interaction between humans and robots, and visual gesture recognition increases the user's comfort and freedom compared with data glove recognition. This paper summarizes the general process of visual gesture recognition based on the literature, including three steps: pre-processing, feature extraction, and gesture classification. It also defines static and dynamic gestures and make…
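The three-step pipeline summarized in this abstract (pre-processing, feature extraction, gesture classification) can be sketched as follows; the HSV skin thresholds, Hu-moment features and nearest-neighbour matching below are illustrative stand-ins, not the method of the cited paper.

```python
import cv2
import numpy as np

def extract_features(frame_bgr):
    """Pre-processing + feature extraction: segment the largest skin-coloured blob
    and describe its silhouette with log-scaled Hu moments."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 40, 60), (25, 255, 255))          # rough, illustrative skin threshold
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)
    hu = cv2.HuMoments(cv2.moments(hand)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)

def classify(features, templates):
    """Gesture classification: nearest neighbour against labelled (label, vector) templates."""
    return min(templates, key=lambda item: np.linalg.norm(features - item[1]))[0]
```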
2

P, Hrishikesh, Akshay V, Anugraha K, T. R. Hari Subramaniam, and Jyothisha J. Nair. "Vision Based Gesture Recognition." Procedia Computer Science 235 (2024): 303–15. http://dx.doi.org/10.1016/j.procs.2024.04.031.

3

Jasim, Mahmood, Tao Zhang, and Md Hasanuzzaman. "A Real-Time Computer Vision-Based Static and Dynamic Hand Gesture Recognition System." International Journal of Image and Graphics 14, no. 01n02 (2014): 1450006. http://dx.doi.org/10.1142/s0219467814500065.

Abstract:
This paper presents a novel method for computer vision-based static and dynamic hand gesture recognition. Haar-like feature-based cascaded classifier is used for hand area segmentation. Static hand gestures are recognized using linear discriminant analysis (LDA) and local binary pattern (LBP)-based feature extraction methods. Static hand gestures are classified using nearest neighbor (NN) algorithm. Dynamic hand gestures are recognized using the novel text-based principal directional features (PDFs), which are generated from the segmented image sequences. Longest common subsequence (LCS) algor…
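As a rough illustration of the static-gesture branch described above (LBP features matched with a nearest-neighbour rule), a uniform LBP histogram can be computed from a segmented hand crop and compared to labelled templates; this sketch is not the authors' implementation, and the chi-square distance is an assumption.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_hand, points=8, radius=1):
    """Uniform LBP histogram of a grayscale hand crop (the crop would come from
    a hand-segmentation step such as the Haar cascade mentioned in the abstract)."""
    lbp = local_binary_pattern(gray_hand, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

def nearest_neighbour(hist, templates):
    """Assign the label of the closest stored gesture histogram (chi-square distance)."""
    chi2 = lambda a, b: 0.5 * np.sum((a - b) ** 2 / (a + b + 1e-12))
    return min(templates, key=lambda item: chi2(hist, item[1]))[0]
```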
4

Gupta, Himanshu, Aniruddh Ramjiwal, and Jasmin T. Jose. "Vision Based Approach to Sign Language Recognition." International Journal of Advances in Applied Sciences 7, no. 2 (2018): 156–61. http://dx.doi.org/10.11591/ijaas.v7.i2.pp156-161.

Abstract:
We propose an algorithm for automatically recognizing a certain set of gestures from hand movements to help deaf and hard-of-hearing people. Hand gesture recognition is quite a challenging problem in its own right. We have considered a fixed set of manual commands and a specific environment, and develop an effective procedure for gesture recognition. Our approach contains steps for segmenting the hand region, locating the fingers, and finally classifying the gesture, which in general terms means detecting, tracking and recognising. The algorithm is invariant to rotations, translations…
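The segment-locate-classify steps outlined in this abstract are commonly implemented with convex-hull analysis; the sketch below counts extended fingers from convexity defects of a segmented hand mask, with the depth and angle thresholds chosen arbitrarily for illustration rather than taken from the paper.

```python
import cv2
import numpy as np

def count_fingers(hand_mask):
    """Locate fingertips on a segmented binary hand mask by counting deep convexity defects."""
    contours, _ = cv2.findContours(hand_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    fingers = 0
    for start, end, far, depth in defects[:, 0]:
        a = np.linalg.norm(hand[start][0] - hand[far][0])
        b = np.linalg.norm(hand[end][0] - hand[far][0])
        c = np.linalg.norm(hand[start][0] - hand[end][0])
        angle = np.arccos((a ** 2 + b ** 2 - c ** 2) / (2 * a * b + 1e-12))
        # A deep, narrow defect is the valley between two extended fingers.
        if depth > 10000 and angle < np.pi / 2:
            fingers += 1
    return fingers + 1 if fingers else 0
```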
5

Rautaray, Siddharth S., and Anupam Agrawal. "Vision-Based Application-Adaptive Hand Gesture Recognition System." International Journal of Information Acquisition 09, no. 01 (2013): 1350007. http://dx.doi.org/10.1142/s0219878913500071.

Abstract:
With the increasing role of computing devices, facilitating natural human computer interaction (HCI) will have a positive impact on their usage and acceptance as a whole. For a long time, research on HCI has been restricted to techniques based on the use of keyboard, mouse, etc. Recently, this paradigm has changed. Techniques such as vision, sound, speech recognition allow for a much richer form of interaction between the user and machine. The emphasis is to provide a natural form of interface for interaction. Gestures are one of the natural forms of interaction between humans. As gesture commands…
6

Yang, Lewei. "Real-time gesture-based control of UAVs using multimodal fusion of FMCW radar and vision." Journal of Physics: Conference Series 2664, no. 1 (2023): 012002. http://dx.doi.org/10.1088/1742-6596/2664/1/012002.

Abstract:
Gesture-based control has gained prominence as an intuitive and natural means of interaction with unmanned aerial vehicles (UAVs). This paper presents a real-time gesture-based control system for UAVs that leverages the multimodal fusion of Frequency Modulated Continuous Wave (FMCW) radar and vision sensors, aiming to enhance user experience through precise and responsive UAV control via hand gestures. The research focuses on developing an effective fusion framework that combines the complementary advantages of FMCW radar and vision sensors. FMCW radar provides robust range and veloci…
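The paper's fusion framework is not reproduced here; the snippet below only illustrates a generic late-fusion step (a weighted average of per-gesture scores from the radar and vision branches), with the command set and weight as placeholders.

```python
import numpy as np

GESTURES = ["takeoff", "land", "left", "right", "hover"]   # placeholder command set

def fuse_scores(radar_probs, vision_probs, radar_weight=0.4):
    """Late fusion: weighted average of per-gesture probabilities from the two sensors."""
    fused = radar_weight * np.asarray(radar_probs) + (1.0 - radar_weight) * np.asarray(vision_probs)
    return GESTURES[int(np.argmax(fused))], fused

# Example: radar strongly favours "left", vision mildly favours "hover".
command, scores = fuse_scores([0.1, 0.1, 0.6, 0.1, 0.1], [0.1, 0.1, 0.3, 0.1, 0.4])
print(command, scores)
```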
7

Yu, Hengcheng, and Zhengyu Chen. "Research on contactless control of elevator based on machine vision." Highlights in Science, Engineering and Technology 7 (August 3, 2022): 89–94. http://dx.doi.org/10.54097/hset.v7i.1022.

Abstract:
Aiming at the problem of cross-infection caused by elevator public buttons during the COVID-19 epidemic, a non-contact elevator button control gesture recognition system based on machine vision is designed. In order to improve the detection speed of gesture recognition, an improved YOLOv5_shff algorithm is proposed that combines Spatial Pyramid Pooling (SPP) and replaces the backbone of YOLOv5 with the lightweight ShuffleNetV2 model. After testing, in the task of recognizing gestures, the detection speed of the YOLOv5_shff algorithm is 14% higher than the original model, and the detectio…
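To illustrate how such a detector is typically deployed (not the paper's YOLOv5_shff model, which is not publicly packaged), a custom-trained YOLOv5 gesture detector could be run on an elevator camera frame roughly as follows; the weights file and threshold are hypothetical.

```python
import torch

# Hypothetical checkpoint: a generic custom YOLOv5 gesture detector is loaded here
# purely to illustrate the inference step, not the ShuffleNetV2 + SPP variant itself.
model = torch.hub.load("ultralytics/yolov5", "custom", path="gesture_yolov5.pt")
model.conf = 0.5                               # confidence threshold for detections

results = model("elevator_cam_frame.jpg")      # run detection on a single frame
for *xyxy, conf, cls in results.xyxy[0].tolist():
    print(f"gesture class {int(cls)} at {xyxy} (confidence {conf:.2f})")
```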
8

Komang Somawirata, I., and Fitri Utaminingrum. "Smart wheelchair controlled by head gesture based on vision." Journal of Physics: Conference Series 2497, no. 1 (2023): 012011. http://dx.doi.org/10.1088/1742-6596/2497/1/012011.

Abstract:
Head gesture recognition has been developed using a variety of devices that mostly contain a sensor, such as a gyroscope or an accelerometer, for determining the direction and magnitude of movement. This paper explains how to control a smart wheelchair using head-gesture recognition based on computer vision. Using the Haar Cascade algorithm for determining the position of the face and nose, determining the order of the head gesture is easy to do. We classify head gestures into four classes, namely: Look down, Look up/center, Turn right and Turn left. The four gesture informa…
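A simplified stand-in for the approach described above, using only the stock frontal-face cascade bundled with OpenCV and made-up offset thresholds (the paper additionally locates the nose), might look like this:

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def head_gesture(gray_frame):
    """Map the detected face position to one of four wheelchair commands."""
    faces = face_cascade.detectMultiScale(gray_frame, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return "no face / stop"
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])     # largest detected face
    frame_h, frame_w = gray_frame.shape[:2]
    cx, cy = x + w / 2, y + h / 2
    # Illustrative thresholds: offset of the face from the frame centre decides the command.
    if cx < 0.35 * frame_w:
        return "turn left"
    if cx > 0.65 * frame_w:
        return "turn right"
    if cy > 0.60 * frame_h:
        return "look down"
    return "look up/center"
```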
9

Xu, Yong. "Research on Dynamic Gesture Recognition and Control System based on Machine Vision." Journal of Electrical Systems 20, no. 2 (2024): 616–28. http://dx.doi.org/10.52783/jes.1215.

Abstract:
Hand gesture recognition and control is a new type of human-computer interaction that can provide a more convenient and efficient operation mode by utilizing non-contact gesture recognition technology. This paper presents a lightweight dynamic gesture recognition method for intelligent office presentation control. First, we introduce the concept of hand gesture recognition and go over key gesture recognition technologies like classification. The structure, process, and evaluation index of the gesture recognition algorithm are described in detail using a convolutional neural network model. Duri…
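The abstract describes a convolutional network classifying gestures for presentation control; the PyTorch sketch below is a single-frame toy model with placeholder input size and class count, not the paper's architecture.

```python
import torch
import torch.nn as nn

class GestureCNN(nn.Module):
    """Tiny CNN over 64x64 grayscale hand crops; layer sizes are illustrative only."""
    def __init__(self, num_gestures=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_gestures)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# A batch of 8 single-channel 64x64 crops -> per-gesture logits of shape [8, 5].
print(GestureCNN()(torch.randn(8, 1, 64, 64)).shape)
```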

Dissertations / Theses on the topic "Vision Based Gesture Recognition"

1

Akremi, Mohamed. "Manifold-Based Approaches for Action and Gesture Recognition." Electronic thesis or dissertation, Université Paris-Saclay, 2025. http://www.theses.fr/2025UPAST045.

Abstract:
Human action recognition (HAR) has become an essential research area owing to its many real-world applications, including human-machine interaction, smart healthcare, virtual reality, surveillance, drone (UAV) control, and autonomous systems. Over the past decades, numerous approaches have been developed to recognize human actions from monocular RGB video sequences. More recently, the emergence of depth sensors has fostered the development of 3D activity analysis and the recognition…
2

Crawford, Gordon Finlay. "Vision-based analysis, interpretation and segmentation of hand shape using six key marker points." Thesis, University of Ulster, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.243732.

3

Fritsch, Jan Nikolaus. "Vision based recognition of gestures with context." [S.l. : s.n.], 2003. http://deposit.ddb.de/cgi-bin/dokserv?idn=968577474.

4

Bernard, Arnaud Jean Marc. "Human computer interface based on hand gesture recognition." Thesis, Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/42748.

Abstract:
With the improvement of multimedia technologies such as broadband-enabled HDTV, video on demand and internet TV, the computer and the TV are merging to become a single device. Moreover the previously cited technologies as well as DVD or Blu-ray can provide menu navigation and interactive content. The growing interest in video conferencing led to the integration of the webcam in different devices such as laptop, cell phones and even the TV set. Our approach is to directly use an embedded webcam to remotely control a TV set using hand gestures. Using specific gestures, a user is able to control…
5

Mazhar, Osama. "Vision-based human gestures recognition for human-robot interaction." Thesis, Montpellier, 2019. http://www.theses.fr/2019MONTS044.

Abstract:
With the factories of the future in view, guaranteeing productive, safe and efficient human-robot interaction requires that the robot be able to interpret the information provided by its human collaborator. To address this problem, we explored deep-learning-based solutions and developed a framework for human gesture detection. The proposed framework provides robust detection of static hand gestures and of dynamic upper-body gestures. For static hand gesture detection, OpenPose is…
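For the static hand-gesture branch mentioned in the abstract, a keypoint detector feeds the gesture classifier; the snippet below substitutes MediaPipe Hands for the OpenPose-based detector used in the thesis, purely as an illustration of the keypoint-extraction step.

```python
import cv2
import mediapipe as mp

def hand_keypoints(image_bgr):
    """Return 21 (x, y) hand landmarks in pixel coordinates, or None if no hand is found."""
    with mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
        result = hands.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
    if not result.multi_hand_landmarks:
        return None
    h, w = image_bgr.shape[:2]
    return [(lm.x * w, lm.y * h) for lm in result.multi_hand_landmarks[0].landmark]

# Downstream, a classifier (e.g. a small neural network or a nearest-neighbour matcher)
# would map these 21 keypoints to a static gesture label.
```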
6

Hettiarachchi, Randima. "Multi-Manifold learning and Voronoi region-based segmentation with an application in hand gesture recognition." Elsevier, 2015. http://hdl.handle.net/1993/31969.

Abstract:
A computer vision system consists of many stages, depending on its application. Feature extraction and segmentation are two key stages of a typical computer vision system and hence developments in feature extraction and segmentation are significant in improving the overall performance of a computer vision system. There are many inherent problems associated with feature extraction and segmentation processes of a computer vision system. In this thesis, I propose novel solutions to some of these problems in feature extraction and segmentation. First, I explore manifold learning, which is a no…
7

Blonski, Brian M. "The Use of Contextual Clues in Reducing False Positives in an Efficient Vision-Based Head Gesture Recognition System." DigitalCommons@CalPoly, 2010. https://digitalcommons.calpoly.edu/theses/295.

Abstract:
This thesis explores the use of head gesture recognition as an intuitive interface for computer interaction. This research presents a novel vision-based head gesture recognition system which utilizes contextual clues to reduce false positives. The system is used as a computer interface for answering dialog boxes. This work seeks to validate similar research, but focuses on using more efficient techniques using everyday hardware. A survey of image processing techniques for recognizing and tracking facial features is presented along with a comparison of several methods for tracking and ident…
8

Lam, Benny, and Jakob Nilsson. "Creating Good User Experience in a Hand-Gesture-Based Augmented Reality Game." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-156878.

Abstract:
The dissemination of new innovative technology requires feasibility and simplicity. The problem with marker-based augmented reality is similar to glove-based hand gesture recognition: they both require an additional component to function. This thesis investigates the possibility of combining markerless augmented reality together with appearance-based hand gesture recognition by implementing a game with good user experience. The methods employed in this research consist of a game implementation and a pre-study meant for measuring interactive accuracy and precision, and for deciding upon which g…
9

Kaâniche, Mohamed Bécha. "Human gesture recognition." Nice, 2009. http://www.theses.fr/2009NICE4032.

Abstract:
In this thesis, we aim to recognize gestures (e.g., raising a hand) and, more generally, brief actions (e.g., falling, bending down) performed by an individual. Many approaches have been proposed to recognize gestures in a specific context (e.g., in the laboratory) using a multiplicity of sensors (e.g., camera networks, or markers worn by the observed individual). Despite these simplifying assumptions, gesture recognition often remains ambiguous depending on the individual's position relative to the cameras. We propose to relax these assumptions in order to…
10

Dang, Darren Phi Bang. "Template based gesture recognition." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/41404.

Abstract:
Thesis (M.S.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1996. Includes bibliographical references (p. 65–66). By Darren Phi Bang Dang.

Books on the topic "Vision Based Gesture Recognition"

1

Shah, Mubarak, and Ramesh Jain, eds. Motion-Based Recognition. Kluwer Academic Publishers, 1997.

2

Hu, Zhongxu, and Chen Lv. Vision-Based Human Activity Recognition. Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-2290-9.

3

Shah, Mubarak. Motion-Based Recognition. Springer Netherlands, 1997.

4

Nasr, Hatem N., ed. Selected Papers on Model-Based Vision. SPIE Optical Engineering Press, 1993.

5

Yang, Ming-Hsuan. Face Detection and Gesture Recognition for Human-Computer Interaction. Springer US, 2001.

6

Ahuja, Narendra, ed. Face Detection and Gesture Recognition for Human-Computer Interaction. Kluwer Academic, 2001.

7

Larson, Rodney M., Hatem N. Nasr, and Society of Photo-Optical Instrumentation Engineers, eds. Model-Based Vision Development and Tools: 14–15 November 1991, Boston, Massachusetts. SPIE, 1992.

8

United States National Aeronautics and Space Administration, ed. On Three Dimensional Object Recognition and Pose-Determination: An Abstraction Based Approach. Space Automation & Robotics Center, 1990.

9

Efthimiou, Eleni, Georgios Kouroupetroglou, and Stavroula-Evita Fotinia, eds. Gesture and Sign Language in Human-Computer Interaction and Embodied Communication: 9th International Gesture Workshop, GW 2011, Athens, Greece, May 25–27, 2011, Revised Selected Papers. Springer, 2012.

10

Lee, Raymond Shu Tak. Invariant Object Recognition Based on Elastic Graph Matching: Theory and Applications. IOS Press, 2003.


Book chapters on the topic "Vision Based Gesture Recognition"

1

Bobick, Aaron F., and Andrew D. Wilson. "State-Based Recognition of Gesture." In Computational Imaging and Vision. Springer Netherlands, 1997. http://dx.doi.org/10.1007/978-94-015-8935-2_9.

2

Wu, Ying, and Thomas S. Huang. "Vision-Based Gesture Recognition: A Review." In Gesture-Based Communication in Human-Computer Interaction. Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/3-540-46616-9_10.

3

Kelly, Daniel, John McDonald, and Charles Markham. "Recognition of Spatiotemporal Gestures in Sign Language Using Gesture Threshold HMMs." In Machine Learning for Vision-Based Motion Analysis. Springer London, 2011. http://dx.doi.org/10.1007/978-0-85729-057-1_12.

4

Camgöz, Necati Cihan, Ahmet Alp Kindiroglu, and Lale Akarun. "Gesture Recognition Using Template Based Random Forest Classifiers." In Computer Vision - ECCV 2014 Workshops. Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-16178-5_41.

5

Xie, Ningwei, Wei Yu, Lei Yang, Meng Guo, and Jie Li. "Attention-Based Fusion of Directed Rotation Graphs for Skeleton-Based Dynamic Hand Gesture Recognition." In Pattern Recognition and Computer Vision. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-18907-4_23.

6

Sreekanth, N. S., and N. K. Narayanan. "Dynamic Gesture Recognition—A Machine Vision Based Approach." In Lecture Notes in Electrical Engineering. Springer India, 2016. http://dx.doi.org/10.1007/978-81-322-3592-7_11.

7

Wang, Taiqian, Yande Li, Junfeng Hu, et al. "A Survey on Vision-Based Hand Gesture Recognition." In Lecture Notes in Computer Science. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-04375-9_19.

8

Ting, Huong Yong, Kok Swee Sim, Fazly Salleh Abas, and Rosli Besar. "Vision-Based Human Gesture Recognition Using Kinect Sensor." In Lecture Notes in Electrical Engineering. Springer Singapore, 2014. http://dx.doi.org/10.1007/978-981-4585-42-2_28.

9

Olivera-García, Hiram, Jorge Cervantes-Ojeda, and María C. Gómez-Fuentes. "Vision-Based Gesture Recognition for Smart Light Switching." In Advances in Computational Intelligence. Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19493-1_20.

10

Ren, Yu, and Chengcheng Gu. "Real-Time Hand Gesture Recognition Based on Vision." In Entertainment for Education. Digital Techniques and Systems. Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-14533-9_48.


Conference papers on the topic "Vision Based Gesture Recognition"

1

Swedheetha, C., Palanichamy Naveen, T. Akilan, P. Manikandan, and B. Pushpavanam. "Computer Vision-Based Hand Gesture Recognition." In 2024 4th International Conference on Ubiquitous Computing and Intelligent Information Systems (ICUIS). IEEE, 2024. https://doi.org/10.1109/icuis64676.2024.10866139.

2

Chen, Yanfen, Xianmin Zhang, Yanlin Chen, and Yanjiang Huang. "A Human Gesture Recognition Method Based on Machine Vision." In 2024 International Conference on Advanced Robotics and Mechatronics (ICARM). IEEE, 2024. http://dx.doi.org/10.1109/icarm62033.2024.10715744.

3

Zhu, Xinting, and Ming Fang. "Traffic police gesture recognition based on SlowFast network." In 5th International Conference on Computer Vision and Data Mining (ICCVDM 2024), edited by Xin Zhang and Minghao Yin. SPIE, 2024. http://dx.doi.org/10.1117/12.3048304.

4

Ding, Weili, Shuhui Zhang, and Han Liu. "sEMG-vision Tra: A Gesture Recognition Method Based on Surface EMG Signal-Vision Fusion." In 2024 IEEE International Conference on Service Operations and Logistics, and Informatics (SOLI). IEEE, 2024. https://doi.org/10.1109/soli63266.2024.10956022.

5

Abdalla, Sousannah, and Sabur Baidya. "UAV Control with Vision-Based Hand Gesture Recognition over Edge-Computing." In 2025 International Conference on Unmanned Aircraft Systems (ICUAS). IEEE, 2025. https://doi.org/10.1109/icuas65942.2025.11007931.

6

Dinesh, Balumuri, C. V. Naveeth Reddy, and N. Senthamilarasi. "Computer Vision-Based Hand Recognition and Gesture Control for Dino Games." In 2025 International Conference on Emerging Smart Computing and Informatics (ESCI). IEEE, 2025. https://doi.org/10.1109/esci63694.2025.10988141.

7

Shi, Yinrui, Juanjuan Li, Yuxin Yang, and Yuang Han. "Design of a Solar-Powered Car Based on Gesture Recognition." In 2024 International Conference on Image Processing, Computer Vision and Machine Learning (ICICML). IEEE, 2024. https://doi.org/10.1109/icicml63543.2024.10957989.

8

Wang, Nana, Jianwei Niu, Xuefeng Liu, et al. "BeyondVision: An EMG-driven Micro Hand Gesture Recognition Based on Dynamic Segmentation." In Thirty-Third International Joint Conference on Artificial Intelligence {IJCAI-24}. International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/668.

Abstract:
Hand gesture recognition (HGR) plays a pivotal role in natural and intuitive human-computer interactions. Recent HGR methods focus on recognizing gestures from vision-based images or videos. However, vision-based methods are limited in recognizing micro hand gestures (MHGs) (e.g., pinch within 1cm) and gestures with occluded fingers. To address these issues, combined with the electromyography (EMG) technique, we propose BeyondVision, an EMG-driven MHG recognition system based on deep learning. BeyondVision consists of a wristband-style EMG sampling device and a tailored lightweight neural netw…
9

Zhu, Yanmin, Zhibo Yang, and Bo Yuan. "Vision Based Hand Gesture Recognition." In 2013 International Conference on Service Sciences (ICSS 2013). IEEE, 2013. http://dx.doi.org/10.1109/icss.2013.40.

10

Novikov, Anatoly Ivanovich, Evgeny Yurievich Kholopov, Aleksey Igorevich Efimov, and Mikhail Borisovich Nikiforov. "Palm Gesture Recognition Technology Based on Comparison of Contour Autocorrelation Functions." In 32nd International Conference on Computer Graphics and Vision. Keldysh Institute of Applied Mathematics, 2022. http://dx.doi.org/10.20948/graphicon-2022-477-483.

Abstract:
The technology of detecting static hand gestures is considered. The technology includes four main stages of image processing: detection of palm contours; piecewise linear approximation of the palm contour; construction of the autocorrelation function (ACF) of the contour of the observed gesture; comparison of the calculated ACF with the ACF of reference gestures. Examples of identification of the main gestures with the palm using the proposed technology, as well as the resulting quantitative estimates of the similarity measure of gestures, are given. The proposed technology is characterized by…
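The four stages listed in the abstract can be sketched roughly as follows; the distance-to-centroid contour signature, the fixed resampling length and the Euclidean matching are assumptions for illustration, not the authors' exact formulation (the piecewise-linear approximation step is omitted).

```python
import cv2
import numpy as np

def contour_acf(palm_mask, samples=128):
    """Autocorrelation of a resampled distance-to-centroid signature of the palm contour."""
    contours, _ = cv2.findContours(palm_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contour = max(contours, key=cv2.contourArea).squeeze(1).astype(float)
    signature = np.linalg.norm(contour - contour.mean(axis=0), axis=1)
    # Resample to a fixed length so contours of different sizes are comparable.
    idx = np.linspace(0, len(signature) - 1, samples)
    signature = np.interp(idx, np.arange(len(signature)), signature)
    signature -= signature.mean()
    acf = np.correlate(signature, signature, mode="full")[samples - 1:]
    return acf / acf[0]

def match_gesture(acf, reference_acfs):
    """Pick the reference gesture whose ACF is closest to the observed one."""
    return min(reference_acfs, key=lambda item: np.linalg.norm(acf - item[1]))[0]
```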

Reports on the topic "Vision Based Gesture Recognition"

1

Zhao, Ruyin. CSI-based Gesture Recognition and Object Detection. Iowa State University, 2021. http://dx.doi.org/10.31274/cc-20240624-456.

2

Pasupuleti, Murali Krishna. Next-Generation Extended Reality (XR): A Unified Framework for Integrating AR, VR, and AI-driven Immersive Technologies. National Education Services, 2025. https://doi.org/10.62311/nesx/rrv325.

Abstract:
Extended Reality (XR), encompassing Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR), is evolving into a transformative technology with applications in healthcare, education, industrial training, smart cities, and entertainment. This research presents a unified framework integrating AI-driven XR technologies with computer vision, deep learning, cloud computing, and 5G connectivity to enhance immersion, interactivity, and scalability. AI-powered neural rendering, real-time physics simulation, spatial computing, and gesture recognition enable more realistic and adap…
3

Ferdaus, Md Meftahul, Mahdi Abdelguerfi, Elias Ioup, et al. KANICE : Kolmogorov-Arnold networks with interactive convolutional elements. Engineer Research and Development Center (U.S.), 2025. https://doi.org/10.21079/11681/49791.

Abstract:
We introduce KANICE, a novel neural architecture that combines Convolutional Neural Networks (CNNs) with Kolmogorov-Arnold Network (KAN) principles. KANICE integrates Interactive Convolutional Blocks (ICBs) and KAN linear layers into a CNN framework. This leverages KANs' universal approximation capabilities and ICBs' adaptive feature learning. KANICE captures complex, non-linear data relationships while enabling dynamic, context-dependent feature extraction based on the Kolmogorov-Arnold representation theorem. We evaluated KANICE on four datasets: MNIST, Fashion-MNIST, EMNIST, and SVHN, compa…