Academic literature on the topic 'Real-Time Gesture Detection'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Real-Time Gesture Detection.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Real-Time Gesture Detection"

1

Usha, M., Nithin Gowda H., Santhosh M., Nanda Kumar S., and Nuthan Pawar E. "Real-Time Hand Sign Training and Detection." International Journal for Technological Research in Engineering 11, no. 5 (2024): 149–51. https://doi.org/10.5281/zenodo.10554229.

Abstract:
Real-time hand sign recognition and detection are essential for applications in human-computer interaction, sign language interpretation, and gesture-based control systems. This project focuses on real-time hand gesture and finger gesture annotation using the MediaPipe framework in Python. Hand gestures and finger gestures are recognized from the keypoints and finger coordinates found by the MediaPipe framework. The system offers two machine learning models: one for recognizing hand signs and another for detecting finger gestures. It provides resources, including sample programs, model files, and training data, allowing users to utilize pre-trained models. Key dependencies: MediaPipe, OpenCV, TensorFlow (tf-nightly for TFLite models with LSTM), scikit-learn (for confusion matrix display), and matplotlib (for visualization). The system's structure consists of sample programs, model files, and training data, offering users flexibility in training and utilizing the models. A demo program is also provided for real-time use with a webcam, complete with options for customization. Overall, this project offers a robust solution for training and detection: users can effectively recognize hand signs and finger gestures by integrating the MediaPipe framework and machine learning models, enabling applications in human-computer interaction and beyond.
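
As a rough illustration of the keypoint pipeline this abstract describes, the sketch below extracts MediaPipe hand landmarks from one webcam frame and turns them into a wrist-relative feature vector. The normalization scheme and the downstream classifier are assumptions for illustration, not details taken from the paper.

```python
import cv2
import mediapipe as mp
import numpy as np

mp_hands = mp.solutions.hands

def extract_keypoints(frame_bgr, hands):
    """Return a flat, wrist-relative landmark vector, or None if no hand."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    result = hands.process(rgb)
    if not result.multi_hand_landmarks:
        return None
    lm = result.multi_hand_landmarks[0].landmark
    pts = np.array([[p.x, p.y] for p in lm], dtype=np.float32)  # 21 x 2
    pts -= pts[0]                      # make coordinates relative to the wrist
    scale = np.abs(pts).max() or 1.0   # normalize into [-1, 1]
    return (pts / scale).flatten()     # 42-dim feature vector

with mp_hands.Hands(max_num_hands=1) as hands:
    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    if ok:
        feats = extract_keypoints(frame, hands)
        # feats (if not None) would be fed to the hand-sign classifier,
        # e.g. a TFLite model as in the paper; omitted here.
    cap.release()
```
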
2

Yu, Myoungseok, Narae Kim, Yunho Jung, and Seongjoo Lee. "A Frame Detection Method for Real-Time Hand Gesture Recognition Systems Using CW-Radar." Sensors 20, no. 8 (2020): 2321. http://dx.doi.org/10.3390/s20082321.

Abstract:
This paper describes a method for detecting frames that can be used as hand gesture data when configuring a real-time hand gesture recognition system using continuous-wave (CW) radar. Detecting valid frames raises gesture recognition accuracy, so it is essential in a real-time hand gesture recognition system using CW radar. Previous research on hand gesture recognition systems has not addressed valid-frame detection. We took R-wave detection on the electrocardiogram (ECG) as the conventional method. The detection probability of the conventional method was 85.04%, which is too low for use in a hand gesture recognition system. The proposed method consists of two stages to improve accuracy. We measured the performance of each hand gesture detection method in terms of detection probability and recognition probability, and by comparing the methods we arrived at an optimal one. The proposed method detects valid frames with an accuracy of 96.88%, 11.84% higher than that of the conventional method, and its recognition probability was 94.21%, 3.71% lower than that of the ideal method.
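
The abstract does not spell out how valid frames are detected, so the following is only a generic sketch of the underlying idea: flag radar frames whose energy rises above an estimated noise floor. The frame length and 6 dB margin are illustrative values, not the paper's two-stage method.

```python
import numpy as np

def detect_valid_frames(iq_samples, frame_len=256, threshold_db=6.0):
    """Flag frames whose energy rises a set margin above the noise floor.

    iq_samples: 1-D complex array of CW-radar baseband samples.
    Returns a boolean mask, one entry per frame.
    """
    n_frames = len(iq_samples) // frame_len
    frames = iq_samples[: n_frames * frame_len].reshape(n_frames, frame_len)
    energy_db = 10 * np.log10(np.mean(np.abs(frames) ** 2, axis=1) + 1e-12)
    noise_floor = np.median(energy_db)        # robust noise estimate
    return energy_db > noise_floor + threshold_db
```
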
3

Zheng, Zhuowen. "Gesture recognition real-time control system based on YOLOV4." Journal of Physics: Conference Series 2196, no. 1 (2022): 012026. http://dx.doi.org/10.1088/1742-6596/2196/1/012026.

Abstract:
With the development of industrial information technology in recent years, gesture control has attracted wide attention from scholars, and various gesture control methods have emerged, such as visual control, wearable-device control, and magnetic-field feature extraction control. Building on visual gesture control, this paper proposes a method applied to music box control by combining it with the YOLOv4 object detection network. We design seven main gestures, construct our own gesture datasets, retrain the YOLOv4 object detection network on these self-built datasets, and build a music box gesture control system. After a series of experiments, the object detection network in the gesture control system reaches a recognition accuracy of 97.8%. We recruited eight volunteers to test the self-built gesture-controlled music box system, quantifying the time to execute a single command, attention concentration, and other measures. The results show that, compared with the traditional control method, the visual gesture control method maintains accuracy while responding faster and taking up less of the user's attention.
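
For readers unfamiliar with deploying a retrained YOLOv4 detector, a minimal inference sketch using OpenCV's DNN module is shown below. The cfg/weights file names and the gesture labels are hypothetical, since the paper does not publish them.

```python
import cv2
import numpy as np

# Assumed file names; a YOLOv4 network retrained on seven gestures would
# ship its own cfg/weights and class list.
net = cv2.dnn.readNetFromDarknet("yolov4-gesture.cfg", "yolov4-gesture.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

CLASS_NAMES = ["play", "pause", "next", "prev",
               "vol_up", "vol_down", "stop"]   # hypothetical gesture labels

def detect_gesture(frame_bgr, conf=0.5):
    """Return the most confident gesture label in a BGR frame, or None."""
    class_ids, confidences, _boxes = model.detect(
        frame_bgr, confThreshold=conf, nmsThreshold=0.4)
    if len(class_ids) == 0:
        return None
    ids, scores = np.ravel(class_ids), np.ravel(confidences)
    return CLASS_NAMES[int(ids[int(np.argmax(scores))])]
```

The returned label would then be dispatched to the corresponding music box command.
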
4

Meng, Yuting, Haibo Jiang, Nengquan Duan, and Haijun Wen. "Real-Time Hand Gesture Monitoring Model Based on MediaPipe’s Registerable System." Sensors 24, no. 19 (2024): 6262. http://dx.doi.org/10.3390/s24196262.

Abstract:
Hand gesture recognition plays a significant role in human-to-human and human-to-machine interactions. Currently, most hand gesture detection methods rely on a fixed set of recognizable gestures. Given the diversity and variability of hand gestures in daily life, this paper proposes a registerable hand gesture recognition approach based on triplet loss: by learning the differences between different hand gestures, it can cluster them and identify newly added ones. The paper constructs a registerable gesture dataset (RGDS) for training registerable hand gesture recognition models. Additionally, it proposes a normalization method for transforming hand gesture data and a FingerComb block for combining and extracting hand gesture data to enhance features and accelerate model convergence. It also improves ResNet and introduces FingerNet for registerable single-hand gesture recognition. The proposed model performs well on the RGDS dataset. The system is registerable, allowing users to flexibly register their own hand gestures for personalized gesture recognition.
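
A minimal sketch of the triplet-loss and registration idea the abstract outlines, in plain NumPy; the margin, distance threshold, and gallery structure are assumptions for illustration, not the paper's FingerNet specifics.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Pull same-gesture embeddings together, push different ones apart.

    Each argument is a (batch, dim) array of L2-normalized embeddings.
    """
    d_pos = np.sum((anchor - positive) ** 2, axis=1)
    d_neg = np.sum((anchor - negative) ** 2, axis=1)
    return np.mean(np.maximum(d_pos - d_neg + margin, 0.0))

def register(gallery, name, embedding):
    """A user registers a new gesture by storing its embedding."""
    gallery[name] = embedding

def recognize(gallery, embedding, threshold=0.5):
    """Return the closest registered gesture, or None if nothing is near."""
    if not gallery:
        return None
    name, dist = min(
        ((n, np.sum((e - embedding) ** 2)) for n, e in gallery.items()),
        key=lambda t: t[1],
    )
    return name if dist < threshold else None
```
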
5

Prananta, Gidion Bagas, Hagi Azzam Azzikri, and Chaerur Rozikin. "Real-Time Hand Gesture Detection and Recognition Using Convolutional Artificial Neural Networks." METHODIKA: Jurnal Teknik Informatika dan Sistem Informasi 9, no. 2 (2023): 30–34. http://dx.doi.org/10.46880/mtk.v9i2.1911.

Abstract:
Real-time hand gesture detection is an interesting topic in pattern recognition and computer vision. In this study, we propose the use of a Convolutional Neural Network (CNN) to detect and recognize hands in real time. Our goal is to develop a system that can accurately identify and interpret user gestures as they happen. The proposed approach involves two main stages: hand detection and gesture recognition. For the detection stage, we use a CNN architecture to locate hands in the video, training the model on a dataset containing various hand gestures. Once a hand is detected, we extract the relevant hand region and proceed to the gesture recognition stage, which involves training and testing CNN models to recognize different hand signals using a dataset of common hand signs. The experimental results show that the proposed system can detect and recognize hand movements in real time with satisfactory accuracy. Although some challenges remain, this research provides a solid foundation for further development in real-time hand gesture recognition.
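
A compact Keras CNN of the kind such a recognition stage might use is sketched below; the layer sizes, input resolution, and ten-class output are illustrative assumptions, not the paper's actual architecture.

```python
import tensorflow as tf

def build_gesture_cnn(num_classes=10, input_shape=(64, 64, 1)):
    """Small CNN that classifies a cropped hand region into gesture classes."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

model = build_gesture_cnn()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```
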
6

Bhargavi, Jangam, Chitikala Sairam, and Donga Hemanth. "Real-Time Interface for Deaf-Hearing Communication." International Scientific Journal of Engineering and Management 04, no. 03 (2025): 1–7. https://doi.org/10.55041/isjem02356.

Abstract:
Bridging the communication gap between the deaf and hearing communities using AI is achieved by integrating two key modules: speech-to-sign-language translation and real-time sign gesture detection. The first module translates spoken English into American Sign Language (ASL) animations. It consists of three sub-modules: speech-to-text conversion using the speech recognition module in Python, English text to ASL gloss translation using an NLP model, and ASL gloss to animated video generation, where DWpose pose estimation and an avatar are used for visual representation. The second module focuses on real-time sign gesture detection: a dataset is created from the WLASL and MS-ASL datasets, hand gestures are labeled with an annotation tool, and a YOLO-based model is trained for hand pose detection to enable real-time recognition. The system aims to enhance accessibility and interaction between deaf and hearing users through an efficient, automated translation and recognition pipeline. Keywords: speech-to-sign translation, real-time sign language recognition, ASL gloss, YOLO hand pose detection, AI for accessibility, deep learning for sign language, gesture recognition, DWpose pose estimation, NLP, dataset labeling, real-time gesture recognition.
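
The first sub-module, speech-to-text with Python's speech recognition package, might look like the following sketch; recognize_google calls Google's free web API and needs network access, and anything beyond an unintelligible utterance is left unhandled.

```python
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)  # calibrate to room noise
    audio = recognizer.listen(source)
try:
    text = recognizer.recognize_google(audio)
    print("Heard:", text)
    # `text` would next be converted to ASL gloss by the NLP model.
except sr.UnknownValueError:
    print("Speech was unintelligible.")
```
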
7

Liu, Chang, and Tamás Szirányi. "Real-Time Human Detection and Gesture Recognition for On-Board UAV Rescue." Sensors 21, no. 6 (2021): 2180. http://dx.doi.org/10.3390/s21062180.

Abstract:
Unmanned aerial vehicles (UAVs) play an important role in numerous technical and scientific fields, especially in wilderness rescue. This paper carries out work on real-time UAV-based human detection and recognition of body and hand rescue gestures. We use body-featuring solutions, such as YOLOv3-tiny for human detection, to establish biometric communications. When the presence of a person is detected, the system enters the gesture recognition phase, where the user and the drone can communicate briefly and effectively, avoiding the drawbacks of speech communication. A dataset of ten body rescue gestures (Kick, Punch, Squat, Stand, Attention, Cancel, Walk, Sit, Direction, and PhoneCall) was created with a UAV on-board camera. The two most important gestures are the novel dynamic Attention and Cancel, which represent the set and reset functions, respectively. When the body rescue gesture is recognized as Attention, the drone gradually approaches the user and switches to a larger resolution for hand gesture recognition. Using deep learning, the system achieves 99.80% accuracy on the body gesture test set and 94.71% accuracy on the hand gesture test set. Experiments conducted on real-time UAV cameras confirm that our solution achieves the intended UAV rescue purpose.
8

Anthoniraj, S. "GestureSpeak & Real-Time Virtual Mouse Using Hand Gestures." International Journal of Scientific Research in Engineering and Management 09, no. 04 (2025): 1–9. https://doi.org/10.55041/ijsrem46121.

Abstract:
GestureSpeak presents a new method of virtual mouse control that enables real-time hand gestures to be used to interface with computers. Created to increase accessibility and inclusion, GestureSpeak uses machine learning and computer vision algorithms to identify and decipher hand gestures and convert them into virtual mouse operations. To improve communication for those who use sign language, the system also has a sign language interpreter that translates American Sign Language (ASL) movements into spoken words. By building on standard gesture recognition techniques, GestureSpeak overcomes the drawbacks of physical input devices and conventional mouse systems, offering users with physical disabilities a flexible, hands-free option. By optimizing gesture detection and performance, GestureSpeak aims to provide a smooth user experience in a variety of settings. Keywords: computer vision, accessibility, sign language translation, virtual mice, real-time gesture detection, human-computer interaction.
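
A hands-free virtual mouse of this kind typically maps a tracked fingertip to screen coordinates and a pinch to a click. The sketch below shows that mapping with pyautogui; the pinch threshold is an illustrative assumption, and fingertip coordinates are assumed to arrive normalized to [0, 1] from a hand tracker.

```python
import pyautogui

SCREEN_W, SCREEN_H = pyautogui.size()

def move_cursor(tip_x, tip_y):
    """Map a normalized fingertip position (0..1) to screen coordinates."""
    pyautogui.moveTo(tip_x * SCREEN_W, tip_y * SCREEN_H)

def click_if_pinch(thumb, index, threshold=0.04):
    """Treat a thumb-index pinch (small normalized distance) as a click."""
    dist = ((thumb[0] - index[0]) ** 2 + (thumb[1] - index[1]) ** 2) ** 0.5
    if dist < threshold:
        pyautogui.click()
```
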
9

Shinde, Siddhesh, Vaibhav Sonawane, Om Suryawanshi, R. U. Shekokar, and Prathamesh Mohalkar. "Real-Time American Sign Language Detection System Using Raspberry Pi and Sequential CNN." International Journal of Scientific Research in Engineering and Management 09, no. 04 (2025): 1–9. https://doi.org/10.55041/ijsrem43820.

Abstract:
This paper presents the development of a real-time American Sign Language (ASL) detection system using a Raspberry Pi and a sequential Convolutional Neural Network (CNN) model. The system aims to bridge the communication gap between the Deaf and Hard of Hearing (DHH) community and the hearing population by translating ASL gestures into text and audio outputs. The proposed system leverages the Raspberry Pi 5, a cost-effective and scalable hardware platform, combined with deep learning techniques to achieve high accuracy in gesture recognition. It is trained on a custom dataset and employs hand landmark detection using MediaPipe for real-time gesture analysis. The results demonstrate 85% accuracy in recognizing ASL gestures, with real-time text and audio outputs. The system is designed for personal, educational, and public applications, offering a practical solution for enhancing communication accessibility for the DHH community. Key words: American Sign Language (ASL), Raspberry Pi, machine learning, gesture recognition, sign language translation, text-to-speech conversion.
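
The text-and-audio output step could be implemented offline on a Raspberry Pi with pyttsx3, as in this sketch; the function name and the label argument are hypothetical, standing in for whatever the CNN predicts.

```python
import pyttsx3

# pyttsx3 works offline, which suits a Raspberry Pi deployment.
engine = pyttsx3.init()

def speak_prediction(label):
    """Emit the recognized ASL sign as both text and speech."""
    print("Recognized:", label)   # text output
    engine.say(label)             # audio output
    engine.runAndWait()
```
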
10

Sawarkar, C. D., Vivek Vaidya, Vansh Sharma, Samir Sheikh, Aniket Neware, and Prathmesh Chaudhari. "AI Based Real Time Hand Gesture Recognition System." International Journal of Advanced Innovative Technology in Engineering 9, no. 3 (2024): 320–23. https://doi.org/10.5281/zenodo.12747525.

Abstract:
This research presents a comprehensive approach to real-time hand gesture recognition using a synergistic combination of TensorFlow, OpenCV, and MediaPipe. Hand gesture recognition holds immense potential for natural and intuitive human-computer interaction in applications such as augmented reality, virtual reality, and human-computer interfaces. The proposed system leverages the strengths of TensorFlow for deep learning-based model development, OpenCV for computer vision tasks, and MediaPipe for efficient hand landmark detection. The workflow begins with hand detection using OpenCV, followed by the extraction of hand landmarks through MediaPipe's hand tracking module. These landmarks serve as crucial input features for a custom-trained TensorFlow model designed to recognize a diverse set of hand gestures. The model is trained on a well-curated dataset, ensuring robust performance across different hand shapes, sizes, and orientations.
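
A minimal version of the described real-time loop, assuming MediaPipe's Python Hands solution and a webcam, is sketched below; the TensorFlow classifier itself is left out.

```python
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
mp_draw = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7) as hands:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.multi_hand_landmarks:
            for hand in result.multi_hand_landmarks:
                mp_draw.draw_landmarks(frame, hand, mp_hands.HAND_CONNECTIONS)
            # Landmark coordinates would be fed to the TensorFlow
            # gesture classifier here (model not shown).
        cv2.imshow("hands", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
cap.release()
cv2.destroyAllWindows()
```
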
More sources

Dissertations / Theses on the topic "Real-Time Gesture Detection"

1

Dardas, Nasser Hasan Abdel-Qader. "Real-time Hand Gesture Detection and Recognition for Human Computer Interaction." Thesis, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/23499.

Abstract:
This thesis focuses on bare-hand gesture recognition by proposing a new architecture to solve the problem of real-time vision-based hand detection, tracking, and gesture recognition for interaction with an application via hand gestures. The first stage of our system detects and tracks a bare hand in a cluttered background using face subtraction, skin detection, and contour comparison. The second stage recognizes hand gestures using bag-of-features and multi-class Support Vector Machine (SVM) algorithms. Finally, a grammar has been developed to generate gesture commands for application control. Our hand gesture recognition system consists of two steps: offline training and online testing. In the training stage, after extracting the keypoints of every training image using the Scale-Invariant Feature Transform (SIFT), a vector quantization technique maps the keypoints of every training image into a histogram vector of unified dimension (bag-of-words) after K-means clustering. This histogram is treated as an input vector for a multi-class SVM to build the classifier. In the testing stage, for every frame captured from a webcam, the hand is detected using my algorithm. Then, the keypoints are extracted from the small image that contains the detected hand posture and fed into the cluster model to map them into a bag-of-words vector, which is fed into the multi-class SVM classifier to recognize the hand gesture. Another hand gesture recognition system was proposed using Principal Component Analysis (PCA): the most significant eigenvectors and the weights of the training images are determined; in the testing stage, the hand posture is detected for every frame using my algorithm, the small image that contains the detected hand is projected onto the most significant eigenvectors of the training images to form its test weights, and the minimum Euclidean distance between the test weights and the training weights of each training image is used to recognize the hand gesture. Two applications of gesture-based interaction with a 3D gaming virtual environment were implemented. The exertion videogame uses a stationary bicycle as one of the main inputs for game playing: the user can control and direct left-right movement and shooting actions in the game with a set of hand gesture commands, while in the second game the user can control and direct a helicopter over the city with a set of hand gesture commands.
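
A condensed sketch of the bag-of-features training pipeline the thesis describes (SIFT keypoints, K-means vocabulary, multi-class SVM), using OpenCV and scikit-learn; the vocabulary size and kernel choice are illustrative, and train_images/train_labels are assumed to be grayscale images and labels prepared elsewhere.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

sift = cv2.SIFT_create()

def bow_histogram(image_gray, kmeans):
    """Quantize SIFT descriptors into a normalized bag-of-words histogram."""
    _, desc = sift.detectAndCompute(image_gray, None)
    hist = np.zeros(kmeans.n_clusters)
    if desc is not None:
        for word in kmeans.predict(desc.astype(np.float32)):
            hist[word] += 1
    return hist / max(hist.sum(), 1)

def train(train_images, train_labels, vocab_size=100):
    """Cluster all descriptors into a vocabulary, then fit a multi-class SVM."""
    descriptors = []
    for img in train_images:
        _, d = sift.detectAndCompute(img, None)
        if d is not None:
            descriptors.append(d)
    kmeans = KMeans(n_clusters=vocab_size, n_init=10).fit(np.vstack(descriptors))
    X = np.array([bow_histogram(img, kmeans) for img in train_images])
    svm = SVC(kernel="rbf").fit(X, train_labels)
    return kmeans, svm
```

At test time, each detected hand region goes through bow_histogram and then svm.predict, mirroring the online stage the abstract describes.
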
2

Rehman, Faridi Shah Mohammad Hamoodur. "Artificial Intelligence Based Real-Time Processing of Sterile Preparations Compounding." University of Toledo / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1596595453534505.

3

Liou, Dung-Hua. "A real-time hand gesture recognition system by adaptive skin-color detection and motion history image." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/27730493713146752022.

Abstract:
Master's thesis, Tatung University, Department of Computer Science and Engineering, ROC academic year 97 (2008–2009). In recent years, man-machine interfaces based on hand gesture recognition have been developed vigorously. The most common applications include robot control, TV remote control, slide show control, etc. A man-machine interface driven by hand gestures is both intuitive and friendly for users. Due to the effects of lighting and complex backgrounds, however, most visual hand gesture recognition systems work only in restricted environments, which is why they are still not common in daily life. The purpose of this thesis is to develop a simple, real-time hand gesture recognition system that can overcome variations in camera, lighting, and environment, so users can interact directly with systems through intuitive hand gestures without training. An adaptive skin color detection method based on face detection and color distribution analysis is proposed to obtain a personalized skin color model, which is then applied to detect other skin-colored regions, such as hands, in subsequent frames. In addition, a simple method for detecting the hand's moving direction based on the motion history image (MHI) is proposed, with four groups of directional patterns defined for measuring directions. Six hand gestures are defined in our system: moving up, moving down, moving left, moving right, fist, and waving. They can be bound to hot keys or events for interaction. Five persons were asked to perform 250 hand gestures within two meters of the camera. Experimental results show an average accuracy of 94.1% and a processing time of only 3.81 ms per frame, demonstrating the reliability and robustness of the proposed system.
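
The two building blocks, skin-color thresholding and a motion history image, can be sketched as follows; the HSV bounds shown are generic defaults, whereas the thesis adapts the skin model per user from the detected face, and the caller is assumed to keep mhi as a float32 array of the frame size.

```python
import cv2
import numpy as np

MHI_DURATION = 0.5  # seconds a motion trace persists in the history image

def skin_mask(frame_bgr, lower=(0, 40, 60), upper=(25, 180, 255)):
    """Threshold skin-colored pixels in HSV space (generic bounds)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    return cv2.inRange(hsv, np.array(lower, np.uint8),
                       np.array(upper, np.uint8))

def update_mhi(mhi, prev_mask, mask, timestamp):
    """Simple motion history image: stamp moving pixels with the current
    time and zero out traces older than MHI_DURATION."""
    motion = cv2.absdiff(mask, prev_mask) > 0
    mhi[motion] = timestamp
    mhi[mhi < timestamp - MHI_DURATION] = 0
    return mhi
```

The moving direction would then be read off by matching the MHI gradient against the four directional pattern groups the thesis defines.
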
4

Yan, Yi-wei. "Integration of Human Feature Detection and Geometry Analysis for Real-time Face Pose Estimation and Gesture Recognition." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/63197581381191989404.

Abstract:
Master's thesis, National Cheng Kung University, Department of Electrical Engineering, ROC academic year 97 (2008–2009). In recent years, digital products have become more accessible, and the demand for intelligent human-machine interfaces at different levels has grown steadily, driving more and more research on human face techniques. Face detection and face recognition are applied in systems such as identification and access control monitoring, and Human-Computer Interaction (HCI) is increasingly common in everyday life; within this field, human face pose estimation is a popular research topic. In this thesis, we divide human faces into several viewpoint categories according to their 3D poses and propose a system to estimate face pose based on object detection and geometry analysis. The system architecture has two components: 1) face detection and 2) face pose estimation. The modular design considers not only performance but also the extensibility of the system. We define nine poses, detected through human features such as eyes, head and shoulders, frontal face, and profile face, and organize the corresponding detectors into a detector array. Thanks to the fast object detection algorithm, these features can be detected with a good detection rate at a low resolution of 320x240. To improve the detector array, we design a cascade detector array that examines only the regions of interest in the image and can detect the nine face poses in real time, which speeds up the detection system. We also propose a gesture detection system based on Viola and Jones's object detection, combined with image processing, to recognize defined gestures; two gestures are defined for appliance control. The final chapter of this thesis presents experimental results on test videos we recorded. We then combine the pose estimation and gesture detection systems and apply them to appliance control in the NCKU Aspire Home. The proposed system can not only detect the position and pose of a human face effectively in an image but also command appliances. Adding a pre-training mode to the face model could further increase the detection rate and make the approach more complete.
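
A minimal stand-in for the thesis's detector array, using OpenCV's stock Haar cascades for frontal face, profile face, and eyes; the pose-inference logic that would interpret which detectors fire, and where, is omitted.

```python
import cv2

cascades = {
    "frontal": cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"),
    "profile": cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_profileface.xml"),
    "eyes": cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml"),
}

def detect_features(frame_gray):
    """Run each detector and collect its hits; the face pose would then be
    inferred from which detectors respond and at what positions."""
    return {name: c.detectMultiScale(frame_gray, 1.1, 5)
            for name, c in cascades.items()}
```
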
5

Kibbanahalli, Shivalingappa Marulasidda Swamy. "Real-time human action and gesture recognition using skeleton joints information towards medical applications." Thesis, 2020. http://hdl.handle.net/1866/24320.

Abstract:
There have been significant efforts toward improving the accuracy of human action detection using skeleton joints. Recognizing human activities in a noisy environment is still challenging, since the cartesian coordinates of the skeleton joints provided by a depth camera depend on the camera position and the skeleton position. In some human-computer interaction applications, the skeleton position and the camera position keep changing. The proposed method recommends using relative positional values instead of actual cartesian coordinates. Recent advancements in CNNs help us achieve higher prediction accuracy using input in image format. To represent skeleton joints in image format, we need to arrange the skeleton information as a matrix with equal height and width. With some depth cameras, the number of skeleton joints provided is limited, and we need to depend on relative positional values to obtain a matrix representation of the skeleton joints. With this new representation of skeleton joints, we achieve state-of-the-art prediction accuracy on the MSR dataset. We also use frame shifting instead of interpolation between frames, which helps us reach state-of-the-art performance.
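
The relative-position image representation the abstract describes can be sketched in a few lines of NumPy; the choice of reference joint and the zero-padding scheme are assumptions for illustration.

```python
import numpy as np

def joints_to_image(joints_xyz):
    """Turn one frame of skeleton joints into a square, camera-independent
    matrix by using positions relative to a reference joint.

    joints_xyz: (n_joints, 3) array of cartesian coordinates.
    """
    rel = joints_xyz - joints_xyz[0]   # relative to, e.g., the hip joint
    scale = np.abs(rel).max() or 1.0
    rel = rel / scale                  # normalize into [-1, 1]
    n = rel.size
    side = int(np.ceil(np.sqrt(n)))
    padded = np.zeros(side * side, dtype=np.float32)
    padded[:n] = rel.flatten()
    return padded.reshape(side, side)  # CNN-ready square "image"
```
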

Book chapters on the topic "Real-Time Gesture Detection"

1

Passi, Kalpdrum, and Sandipgiri Goswami. "Real Time Static Gesture Detection Using Deep Learning." In Big Data Analytics. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-37188-3_23.

2

Hou, Weiyan, Huaiyuan Guo, Abdumalik Hussein, Fangyuan Xu, and Zhenlong Wang. "Real-Time Detection System of Bird Flight Gesture." In Communications in Computer and Information Science. Springer Nature Singapore, 2024. https://doi.org/10.1007/978-981-96-0188-2_10.

3

Amatya, Smriti, Ishika, M. V. Manoj Kumar, et al. "Real-Time Hand Gesture Detection Using Convex Hull and Contour Edge Detection." In Emerging Research in Computing, Information, Communication and Applications. Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-1342-5_26.

4

Zhao, Xian, Zhan Song, Jian Guo, Yanguo Zhao, and Feng Zheng. "Real-Time Hand Gesture Detection and Recognition by Random Forest." In Communications in Computer and Information Science. Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-31968-6_89.

5

Zhang, Jiliang, Li Peng, Wei Feng, Zhaojie Ju, and Honghai Liu. "Human-AGV Interaction: Real-Time Gesture Detection Using Deep Learning." In Intelligent Robotics and Applications. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-27541-9_20.

6

Yadav, Kapil, and Jhilik Bhattacharya. "Real-Time Hand Gesture Detection and Recognition for Human Computer Interaction." In Advances in Intelligent Systems and Computing. Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-23036-8_49.

7

Mgbemena, Chika Edith, John Oyekan, Ashutosh Tiwari, et al. "Gesture Detection Towards Real-Time Ergonomic Analysis for Intelligent Automation Assistance." In Advances in Ergonomics of Manufacturing: Managing the Enterprise of the Future. Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-41697-7_20.

8

El Sibai, Rayane, Chady Abou Jaoude, and Jacques Demerjian. "Vision-Based Approach for Real-Time Hand Detection and Gesture Recognition." In Recent Trends in Computer Applications. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-89914-5_5.

9

Yoder, Gabriel, and Lijun Yin. "Real-Time Hand Detection and Gesture Tracking with GMM and Model Adaptation." In Advances in Visual Computing. Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-10520-3_36.

10

Roy, Subhendu, Sraboni Ghosh, Aratrika Barat, Madhurima Chattopadhyay, and Debjyoti Chowdhury. "Real-time Implementation of Electromyography for Hand Gesture Detection Using Micro Accelerometer." In Advances in Intelligent Systems and Computing. Springer India, 2016. http://dx.doi.org/10.1007/978-81-322-2656-7_32.


Conference papers on the topic "Real-Time Gesture Detection"

1

Kulkarni, Manas Girish, Revati Tushar Aute, Rajlakshmi Nilesh Desai, Atharva Vishwas Deshpande, and Dipti Pratik Pandit. "Real-Time Gender Classification Using MiniXception and Hand Gesture Detection Using MediaPipe Framework." In 2025 International Conference on Computational, Communication and Information Technology (ICCCIT). IEEE, 2025. https://doi.org/10.1109/icccit62592.2025.10928023.

2

Darmawan, Irfan, Aulia Melda Meldiawati, Randi Rizal, Alam Rahmatulloh, Erna Haerani, and Rohmat Gunawan. "Hands-On Gaming: Real-Time Gesture Detection in Rock-Paper-Scissors Using Mediapipe and CNN." In 2025 International Conference on Advancement in Data Science, E-learning and Information System (ICADEIS). IEEE, 2025. https://doi.org/10.1109/icadeis65852.2025.10933061.

3

Jain, Khushu, Yuvraj Panwar, Devanshu Vats, and Shuchi Mala. "Real Time Automated Sign Language Detection and Translation Using AI and ML Driven Hand Gesture Recognition." In 2025 3rd International Conference on Disruptive Technologies (ICDT). IEEE, 2025. https://doi.org/10.1109/icdt63985.2025.10986616.

4

Alvarado García, Gabriel Alfredo, and Fávell Eduardo Núñez Rodríguez. "Gesture-Based Control of an OMRON Viper 650 Robot." In I Conferencia Internacional de Ciencia, Tecnología e Innovación. Trans Tech Publications Ltd, 2024. http://dx.doi.org/10.4028/p-ag7cow.

Abstract:
This project focuses on developing and implementing a robotic control system based on detecting signs and gestures using computer vision. The main goal was to create an intuitive and efficient interface for interacting with an OMRON Viper 650 industrial robot. To achieve this, computer vision technologies like Mediapipe and OpenCV were used to detect and recognize the user’s hands and fingers in real-time. The collected data was processed with a Python script and stored in a text file. Additionally, a program was developed in C# using OMRON’s ACE programming interface to extract data from the text file and send commands to the Viper 650 robot, enabling it to interpret the user’s gestures and perform actions accordingly. This project has successfully created an innovative solution that combines computer vision, programming, and industrial robotics to provide an intuitive and efficient control experience, opening up new possibilities in industrial and human-robot interaction applications.
5

Yuasa, Mayumi, and Osamu Yamaguchi. "Real-time face blending by automatic facial feature point detection." In Gesture Recognition (FG). IEEE, 2008. http://dx.doi.org/10.1109/afgr.2008.4813368.

6

Yuasa, Mayumi, and Osamu Yamaguchi. "Real-time face blending by automatic facial feature point detection." In Gesture Recognition (FG). IEEE, 2008. http://dx.doi.org/10.1109/afgr.2008.4813441.

7

Nguyen, Thuy Thi, Nguyen Dang Binh, and Horst Bischof. "An active boosting-based learning framework for real-time hand detection." In Gesture Recognition (FG). IEEE, 2008. http://dx.doi.org/10.1109/afgr.2008.4813315.

8

Li, Guangxiang, Dequan Li, and Anni Yang. "Real-Time Hand Gesture Detection Based on YOLOv5s." In 2022 41st Chinese Control Conference (CCC). IEEE, 2022. http://dx.doi.org/10.23919/ccc55666.2022.9901909.

9

Shamalik, Rameez, and Sanjay Koli. "Real Time Gesture Detection Using Convolutional Neural Network." In 2022 IEEE 4th International Conference on Cybernetics, Cognition and Machine Learning Applications (ICCCMLA). IEEE, 2022. http://dx.doi.org/10.1109/icccmla56841.2022.9989195.

10

Hinduja, Saurabh, and Shaun Canavan. "Real-time Action Unit Intensity Detection." In 2020 15th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2020). IEEE, 2020. http://dx.doi.org/10.1109/fg47880.2020.00026.


Reports on the topic "Real-Time Gesture Detection"

1

Pasupuleti, Murali Krishna. Next-Generation Extended Reality (XR): A Unified Framework for Integrating AR, VR, and AI-driven Immersive Technologies. National Education Services, 2025. https://doi.org/10.62311/nesx/rrv325.

Abstract:
Extended Reality (XR), encompassing Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR), is evolving into a transformative technology with applications in healthcare, education, industrial training, smart cities, and entertainment. This research presents a unified framework integrating AI-driven XR technologies with computer vision, deep learning, cloud computing, and 5G connectivity to enhance immersion, interactivity, and scalability. AI-powered neural rendering, real-time physics simulation, spatial computing, and gesture recognition enable more realistic and adaptive XR environments. Additionally, edge computing and federated learning enhance processing efficiency and privacy in decentralized XR applications, while blockchain and quantum-resistant cryptography secure transactions and digital assets in the metaverse. The study explores the role of AI-enhanced security, deepfake detection, and privacy-preserving AI techniques to mitigate risks associated with AI-driven XR. Case studies in healthcare, smart cities, industrial training, and gaming illustrate real-world applications and future research directions in neuromorphic computing, brain-computer interfaces (BCI), and ethical AI governance in immersive environments. This research lays the foundation for next-generation AI-integrated XR ecosystems, ensuring seamless, secure, and scalable digital experiences. Keywords: Extended Reality (XR), Augmented Reality (AR), Virtual Reality (VR), Mixed Reality (MR), Artificial Intelligence (AI), Neural Rendering, Spatial Computing, Deep Learning, 5G Networks, Cloud Computing, Edge Computing, Federated Learning, Blockchain, Cybersecurity, Brain-Computer Interfaces (BCI), Quantum Computing, Privacy-Preserving AI, Human-Computer Interaction, Metaverse.