Journal articles on the topic 'Indian sign language (ISL)'

Consult the top 50 journal articles for your research on the topic 'Indian sign language (ISL).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Shinde, Aditya. "Indian Sign Language Detection." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 09, no. 01 (2025): 1–9. https://doi.org/10.55041/ijsrem41093.

Abstract:
The communication gap remains one of the most significant barriers between individuals with hearing and speech impairments and the broader society. This project addresses this challenge by developing a real-time Indian Sign Language (ISL) detection system that leverages computer vision and machine learning techniques. By capturing hand gestures from video input, the system translates these movements into text or speech, enabling effective communication between ISL users and those unfamiliar with the language. Additionally, the system incorporates text-to-speech functionality, ensuring a seamless and humanized interaction experience. The proposed model utilizes Convolutional Neural Networks (CNNs) for image processing and gesture recognition, trained on a comprehensive dataset of ISL gestures. The framework employs preprocessing, feature extraction, and classification algorithms to accurately identify static and dynamic gestures. The system is designed to focus on the nuances of ISL, providing accurate recognition of gestures in real time while offering multilingual support. This initiative aspires to create an inclusive environment by empowering the hearing-impaired community and promoting better integration within society. By using cost-effective techniques, the project ensures scalability and practicality for everyday applications, making communication more efficient and inclusive. Keywords: Indian Sign Language (ISL), Gesture Recognition, Convolutional Neural Networks (CNNs), Real-time Communication
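
The paper does not include code; as a rough, hypothetical sketch of the kind of CNN classifier such a system would train, here is a minimal Keras model (the 64x64 grayscale input, layer sizes, and 36-class output are assumptions, not details from the paper):

```python
# Minimal sketch of a CNN gesture classifier of the kind the abstract
# describes; input shape, layer sizes, and class count are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),           # grayscale hand-gesture crops
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(36, activation="softmax"),    # e.g. 26 letters + 10 digits
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```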
2

Mishra, Ravita, Gargi Angne, Nidhi Gawde, Preeti Khamkar, and Sneha Utekar. "SignSpeak: Indian Sign Language Recognition with ML Precision." Indian Journal Of Science And Technology 18, no. 8 (2025): 620–34. https://doi.org/10.17485/ijst/v18i8.4049.

Abstract:
Objectives: To develop an accessible educational platform for Indian Sign Language (ISL) recognition, bridging communication gaps using advanced machine learning techniques, and promoting inclusivity for the hearing-impaired community. Methods: The study utilized Random Forest for classifying ISL letters and numbers with 1200 images per class and Long Short-Term Memory (LSTM)/Large Language Model (LLM) for gesture-based word and sentence recognition using 120 custom images. Feedback from Jhaveri Thanawala School for the Deaf validated the approach. Findings: The Random Forest model achieved 99.98% accuracy in recognizing ISL letters and numbers. LSTM and LLM models demonstrated 87% accuracy in translating gestures into meaningful sentences. The dynamic learning and quiz modules improved user engagement, facilitating effective ISL mastery. Feedback from Jhaveri Thanawala School confirmed its real-world usability. These results enhance prior works by offering an integrated, highly accurate platform to promote ISL adoption, enabling better societal inclusivity for individuals with hearing disabilities. Novelty: Signova integrates gesture recognition with sentence generation which makes it a unique ISL recognition system, achieving high accuracy while providing interactive tools for learning and practicing ISL to foster its accessibility. Keywords: Indian Sign Language, Sign-Language Recognition, Random Forest, Long Short-Term Memory, Large Language Model
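
For orientation only, a hypothetical reconstruction of the Random Forest stage described above; the dataset layout, 64x64 resizing, and flattening are assumptions, and the 99.98% figure is the paper's result, not this sketch's:

```python
# Hypothetical Random Forest baseline for static ISL letters/numbers;
# directory layout and preprocessing are assumptions, not the paper's.
from pathlib import Path
import cv2
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = [], []
for class_dir in Path("isl_dataset").iterdir():      # one folder per letter/number
    if not class_dir.is_dir():
        continue
    for img_path in class_dir.glob("*.jpg"):          # ~1200 images per class
        img = cv2.imread(str(img_path), cv2.IMREAD_GRAYSCALE)
        X.append(cv2.resize(img, (64, 64)).flatten())
        y.append(class_dir.name)

X_tr, X_te, y_tr, y_te = train_test_split(np.array(X), np.array(y), test_size=0.2)
clf = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```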
3

Preeti Jain, Prof. "Healthcare Application Using Indian Sign Language." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 09, no. 06 (2025): 1–9. https://doi.org/10.55041/ijsrem50261.

Abstract:
The absence of standardized and easily available technology solutions for people with disabilities—especially those who use Indian Sign Language (ISL) to access vital services like healthcare—means that there are still significant communication hurdles in India. Unlike American Sign Language (ASL), which is mostly one-handed, ISL relies on intricate two-handed motions, which creates unique difficulties for software-based interpretation systems. The lack of extensive, standardized ISL datasets, which are essential for developing precise machine learning and gesture recognition models, exacerbates these difficulties even further. The lack of a complete ISL-based solution still prevents ISL users from accessing essential services like healthcare, even with improvements in sign language recognition technology. Although several platforms provide sign language translation, the majority are not prepared to deal with the particular needs of ISL. In addition to investigating recent developments in ISL translation, gesture recognition, and letter recognition, this project seeks to create an ISL communication system especially suited for hospital situations. The foundation for improved ISL accessibility in the healthcare industry and beyond will be laid by investigating fundamental techniques including deep learning, machine learning, and real-time processing. Key Words: Indian Sign Language (ISL), gesture recognition, real-time translation, accessibility, deep learning.
4

Shinde, Aditya. "Enhanced Indian Sign Language Detection." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 09, no. 06 (2025): 1–9. https://doi.org/10.55041/ijsrem49874.

Abstract:
The communication problem involving members of society who have speech and hearing impairments is still not fully resolved. In an earlier study, we created a real-time Indian Sign Language (ISL) recognition system which uses LSTM architecture for sequential gesture recognition. The focus of this paper is on further improving this system by changing the architecture from LSTM to CNN to enhance spatial feature extraction and overall system performance. Using a more comprehensive ISL dataset, we trained and tested the model and added new advanced preprocessing techniques such as Gaussian blur and converting the images to grayscale. These modifications improved the accuracy of the model and reduced the processing power needed, allowing for more advanced, rapid, and reliable real-time ISL gesture recognition. The result of this study is a step toward making available an effective, simple, and easy-to-use technological interface for deaf and hearing-impaired people in India. Keywords: Indian Sign Language (ISL), Gesture Recognition, Convolutional Neural Networks (CNNs), Real-time Communication, Image Preprocessing, Gaussian Blur, Grayscale Conversion, Sign Language Translation, Computer Vision, Human-Computer Interaction (HCI), Deep Learning
5

Aadhya, Satrasala, et al. "Indian Sign Language Translator Using CNN." International Journal of Computational Learning & Intelligence 4, no. 4 (2025): 792–98. https://doi.org/10.5281/zenodo.15279424.

Abstract:
The main focus of this paper is to create a real-time Indian Sign Language (ISL) translator designed to bridge the gap between the deaf and hard-of-hearing population and the hearing population. By leveraging computer vision techniques and machine learning models, the system can accurately recognize a wide range of ISL gestures and translate them into corresponding text outputs in English. The application is intended to facilitate seamless communication, enhancing accessibility in various settings such as education, healthcare, and daily interactions. This solution aims to foster greater inclusion and social integration for ISL users while addressing the lack of real-time ISL translation tools in India.
6

Wankhade, Vaishnavi. "Indian Sign Language Detection using Machine Learning." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 04 (2024): 1–5. http://dx.doi.org/10.55041/ijsrem30798.

Abstract:
Indian Sign Language (ISL) serves as a primary means of communication for millions of hearing-impaired individuals in India. However, the lack of comprehensive tools for interpreting ISL poses significant challenges in facilitating effective communication and integration of the deaf community into society. This research paper explores the advancements, challenges, and potential applications of Indian Sign Language detection technology. It provides an overview of existing techniques for ISL detection, including computer vision-based approaches and wearable devices. Additionally, the paper discusses the unique challenges associated with ISL detection, such as variations in gestures and environmental factors. Furthermore, it examines the potential applications of ISL detection technology in various domains, including education, healthcare, and accessibility. By analyzing current research trends and technological developments, this paper aims to contribute to the advancement of ISL detection technology and its societal impact. Keywords: Indian Sign Language, Sign Language Detection, Computer Vision, Wearable Devices, Accessibility, Communication, Deaf Community
7

Patole, Piyush, Mihir Sarawate, and Krushna Joshi. "A Communication Translator Interface for Sign Language Interpretation." International Journal for Research in Applied Science and Engineering Technology 11, no. 5 (2023): 4546–58. http://dx.doi.org/10.22214/ijraset.2023.52325.

Abstract:
Sign language is an essential means of communication for deaf and hard-of-hearing individuals. However, unlike spoken languages, sign languages are not universal; every country has its own native sign language. In India, Indian Sign Language (ISL) is used. This survey aims to provide an overview of the recognition and translation of essential Indian Sign Language. While significant research has been conducted on American Sign Language (ASL), the same cannot be said for Indian Sign Language due to its unique characteristics. The proposed method focuses on designing a tool for translating ISL hand gestures to help the deaf-mute community convey their ideas. A self-created ISL dataset was used to train the model for gesture recognition. The literature contains a plethora of methods for extracting features and classifying sign language, with a majority of them utilizing machine learning techniques. However, this article proposes the adoption of a deep learning method by designing a Convolutional Neural Network (CNN) model for the purpose of extracting sign language features and recognizing them accurately. This CNN model is specifically designed to identify complex patterns in the data and use them to efficiently recognize sign language features. By adopting this approach, it is expected that the recognition of sign language will improve significantly, providing a more effective means of communication for the deaf and hard-of-hearing community.
8

Mishra, Ravita, Gargi Angne, Nidhi Gawde, Preeti Khamkar, and Sneha Utekar. "SignSpeak: Indian Sign Language Recognition with ML Precision." Indian Journal of Science and Technology 18, no. 8 (2025): 620–34. https://doi.org/10.17485/IJST/v18i8.4049.

Abstract:
Objectives: To develop an accessible educational platform for Indian Sign Language (ISL) recognition, bridging communication gaps using advanced machine learning techniques, and promoting inclusivity for the hearing-impaired community. Methods: The study utilized Random Forest for classifying ISL letters and numbers with 1200 images per class and Long Short-Term Memory (LSTM)/Large Language Model (LLM) for gesture-based word and sentence recognition using 120 custom images. Feedback from Jhaveri Thanawala School for the Deaf validated the approach. Findings: The Random Forest model achieved 99.98% accuracy in recognizing ISL letters and numbers. LSTM and LLM models demonstrated 87% accuracy in translating gestures into meaningful sentences. The dynamic learning and quiz modules improved user engagement, facilitating effective ISL mastery. Feedback from Jhaveri Thanawala School confirmed its real-world usability. These results enhance prior works by offering an integrated, highly accurate platform to promote ISL adoption, enabling better societal inclusivity for individuals with hearing disabilities. Novelty: Signova integrates gesture recognition with sentence generation, which makes it a unique ISL recognition system, achieving high accuracy while providing interactive tools for learning and practicing ISL to foster its accessibility. Keywords: Indian Sign Language, Sign-Language Recognition, Random Forest, Long Short-Term Memory, Large Language Model
9

Pawar, Snehal, Pragati Salunke, Arati Mhasavade, Aishwaraya Bhutkar, and K. R. Pathak. "Real Time Identification of American Sign Language for Deaf and Dumb Community." Advancement in Image Processing and Pattern Recognition 2, no. 3 (2020): 1–7. https://doi.org/10.5281/zenodo.3600015.

Abstract:
The only means of communication for the deaf and dumb is sign language, which involves hand gestures. In this system, we work on American Sign Language (ASL) alphabet (A-Z) and digit (0-9) identification, alongside our own word identification dataset for Indian Sign Language (ISL). Sign data samples are used to make our system more faultless, error-free, and unambiguous with the help of a Convolutional Neural Network (CNN). Much research is ongoing in the field of sign language recognition, but existing studies have failed to develop a trustworthy communication interpreter. The motivation of this system is to serve as a real-time two-way communication translator based on Indian Sign Language (ISL) with higher precision, efficiency, and accuracy. Indian Sign Language (ISL), used by the deaf-mute community in India, does have adequate, acceptable, meaningful essential and structural properties.
10

Pradnya D. Bormane. "Indian Sign Language Recognition: Support Vector Machine Approach." Advances in Nonlinear Variational Inequalities 27, no. 3 (2024): 716–27. http://dx.doi.org/10.52783/anvi.v27.1438.

Abstract:
Indian Sign Language (ISL) is the primary form of communication for the deaf and dumb community in India. Recognizing Indian Sign Language plays an imperative part in promoting communication rights, social inclusion, and equality for deaf people, while also contributing to technological advancement and cultural diversity. A system's ability to automatically recognize ISL signs could significantly improve interactions between the deaf community and hearing people. The objective of this research is to design a system that can accurately recognize and interpret Indian Sign Language (ISL), thereby improving communication accessibility for the deaf and dumb community, and to enhance the accuracy of ISL recognition. In this research, a machine learning approach to sign language (SL) recognition using a Support Vector Machine (SVM) is implemented. The Support Vector Machine (SVM) model was trained using a linear kernel and a regularization parameter (C) set to 0.999 on a dataset of sequences for gesture recognition. After training, the model achieved a test accuracy of 86% on the test data. The development and implementation of a gesture recognition system can increase awareness of the communication needs and rights of deaf people.
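
The abstract is unusually specific about the hyperparameters (linear kernel, C = 0.999, 86% test accuracy), which a scikit-learn sketch can mirror directly; the feature file names below are placeholders, not the paper's:

```python
# Linear-kernel SVM with the hyperparameters quoted in the abstract
# (kernel="linear", C=0.999); the feature files are hypothetical.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X = np.load("isl_gesture_features.npy")   # placeholder: extracted gesture features
y = np.load("isl_gesture_labels.npy")

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = SVC(kernel="linear", C=0.999)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))   # the paper reports 86%
```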
11

Gaonkar, Niyati V., and Vishal R. Gori. "Real-Time Bidirectional Translation System Between Text and Indian Sign Language Using Deep Learning and NLP Techniques." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 09, no. 04 (2025): 1–9. https://doi.org/10.55041/ijsrem44150.

Abstract:
In this paper, we present a real-time translation system that bridges the communication gap between the hearing and non-hearing communities. Our system converts English text to Indian Sign Language (ISL) and vice versa, using Natural Language Processing (NLP) techniques and deep learning-based gesture recognition. The system supports video-based gesture recognition for ISL and provides accurate text translations in real-time. This study addresses the technical challenges involved, including feature extraction from gestures and translating complex ISL sentences using neural networks like LSTM. Keywords: Indian Sign Language (ISL), Sign Language Translation, Gesture Recognition, Deep Learning, LSTM Model, Mediapipe Holistic, Text-to-Sign Conversion, Dynamic Gesture Segmentation, Fingerspelling, Natural Language Processing (NLP), Pose Estimation, Hand Landmark Tracking, Real-Time Sign Language Recognition, Data Augmentation, Accessibility Technology
12

N, Sudiksha. "Talking Fingers: Bridging the Communication Gap through Real-Time Speech-to-Indian Sign Language Translation." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 09, no. 01 (2025): 1–9. https://doi.org/10.55041/ijsrem40875.

Abstract:
"Talking Fingers" is an innovative initiative to be developed to facilitate communication between hearing and non-hearing individuals by building a web-based system that can translate spoken language into Indian Sign Language (ISL). Being an essential means of communication among millions in India, ISL remains underdeveloped by technologies that are dominated by American and British Sign Languages. Current tools rely on the basic word-by-word translation with no contextual or grammatical accuracy. The proposed system will thus integrate speech recognition, NLP, and ISL visuals for real-time, context-aware translations. Spoken input will be converted into text through the Google Speech API and then processed using NLP techniques to segment meaningful phrases. The matched phrases are matched with the ISL visual representations, which may be in the form of videos or GIFs, in a comprehensive database. A fallback mechanism ensures seamless communication by spelling out words letter by letter when specific ISL visuals are unavailable. This platform serves as scalable and adaptable solutions for different public and educational spaces, bridging the communication gap for the deaf and hard-of-hearing community. With emphasis on ISL and incorporation of advanced technologies, "Talking Fingers" delivers an inclusive and robust solution, enabling users and bringing greater inclusivity in communication. Keywords: Indian Sign Language (ISL), Natural Language Processing (NLP), Speech-to-Sign Translation, Communication Accessibility, Real-time Translation, Sign Language Automation
13

Patil, Gouri Shanker, R. Rangasayee, and Geetha Mukundan. "Non-fluent aphasia in deaf user of Indian Sign Language." Cognitive Linguistic Studies 1, no. 1 (2014): 147–53. http://dx.doi.org/10.1075/cogls.1.1.07pat.

Abstract:
The current study describes aphasia in a deaf user of Indian Sign Language (ISL). One congenitally deaf adult with LHD was evaluated for signs of aphasia. The tools used were Aphasia Diagnostic Battery in Indian Sign Language (ADB in ISL), Magnetic Resonance Imaging (MRI) investigation, linguistic, and neurobehavioral profile. The results of all investigative procedures revealed signs and symptoms consistent with non-fluent aphasia specifically Broca’s aphasia. The data from ISL in brain damaged individual further emphasize the role of left hemisphere in sign language processing.
14

Haren, Amal, Ann Reny Reema, and Rudra Prathap Boppuru. "Hand Kinesics in Indian Sign Language using NLP Techniques with SVM Based Polarity." International Journal of Engineering and Advanced Technology (IJEAT) 9, no. 4 (2020): 2044–50. https://doi.org/10.35940/ijeat.D8483.049420.

Abstract:
With the advent of new technology every year, human beings continue to make clever innovations to benefit not only themselves but also those with some kind of impairment. Hearing people communicate by talking to each other, but people who are deaf interact through sign language. Taking this problem into account, we propose a methodology that eases communication by translating speech into sign language. This paper explains a methodology that translates speech into the corresponding Indian Sign Language (ISL). In India, almost 28 different languages are spoken, so language has always been a problem. Thus, we have come up with a project just for India in which a person can communicate with the app in any Indian language they know, and it will convert the input into Indian Sign Language. This is applicable not just to literate but also to illiterate people across India. The idea is to take speech input and translate it to text, which then undergoes text pre-processing using NLP for better analysis and is connected to the HamNoSys data for the generation of signs. Polarity detection is also included, implemented using the SVM algorithm for sentiment analysis. Thus, the main objective of this project is to develop a useful system which can capture the whole vocabulary of Indian Sign Language (ISL) and provide access to information and services to mute people in ISL.
15

G., Anvith. "Talking Fingers: A Multilingual Speech-to-Sign Language Converter." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 09, no. 01 (2025): 1–9. https://doi.org/10.55041/ijsrem40782.

Abstract:
Communication is a basic human need, but thousands of people with hearing and speech impairments face limitations in everyday communication. "Talking Fingers" is a modern assistive technology tool that translates spoken or written language into Indian Sign Language (ISL). With multilingual capabilities, the tool combines technologies such as Google ML Kit for language recognition and the MyMemory API for translation, applying ISL grammar rules so that several languages are accessible at a time. It provides a simple and powerful communication channel. The system demonstrates the ability of artificial intelligence (AI) and language generation to promote inclusion and empower individuals with disabilities.
16

Gandhe, Dakshesh, Pranay Mokar, Aniruddha Ramane, and Dr R. M. Chopade. "Sign Language Recognition for Real-time Communication." International Journal for Research in Applied Science and Engineering Technology 12, no. 5 (2024): 288–93. http://dx.doi.org/10.22214/ijraset.2024.61514.

Abstract:
Sign language is an essential communication tool for India's Deaf and Hard of Hearing people. This study introduces a novel approach for recognising and synthesising Indian Sign Language (ISL) using Long Short-Term Memory (LSTM) networks. LSTM, a kind of recurrent neural network (RNN), has demonstrated promising performance in sequential data processing. In this study, we leverage LSTM to develop a robust ISL recognition system, which can accurately interpret sign gestures in real-time. Additionally, we employ LSTM-based models for ISL synthesis, enabling the conversion of spoken language into sign language for improved inclusivity and accessibility. We evaluate the proposed approach on a diverse dataset of ISL signs, achieving high recognition accuracy and natural sign synthesis. The integration of LSTM in ISL technology holds significant potential for breaking down communication barriers and improving the quality of life for India's deaf and hard of hearing people.
17

Zeshan, Ulrike, and Sibaji Panda. "Sign-speaking: The structure of simultaneous bimodal utterances." Applied Linguistics Review 9, no. 1 (2018): 1–34. http://dx.doi.org/10.1515/applirev-2016-1031.

Abstract:
We present data from a bimodal trilingual situation involving Indian Sign Language (ISL), Hindi and English. Signers are co-using these languages while in group conversations with deaf people and hearing non-signers. The data show that in this context, English is an embedded language that does not impact on the grammar of the utterances, while both ISL and Hindi structures are realised throughout. The data show mismatches between the simultaneously expressed ISL and Hindi, such that semantic content and/or syntactic structures are different in both languages, yet are produced at the same time. The data also include instances of different propositions expressed simultaneously in the two languages. This under-documented behaviour is called "sign-speaking" here, and we explore its implications for theories of multilingualism, code-switching, and bilingual language production.
18

Mistree, Kinjal, Devendra Thakor, and Brijesh Bhatt. "A Machine Translation System from Indian Sign Language to English Text." International Journal of Information Technologies and Systems Approach 15, no. 1 (2022): 1–23. http://dx.doi.org/10.4018/ijitsa.313419.

Abstract:
Sign language recognition and translation is a crucial step towards improving communication between the deaf and the rest of society. According to the Indian Sign Language Research and Training Centre (ISLRTC), India has around 300 certified human interpreters. With such a shortage of human interpreters, an alternative service is desired that helps people communicate smoothly with the deaf. In this study, an approach is presented that translates ISL sentences into English text using the MobileNetV2 model and neural machine translation (NMT). The system features an ISL corpus created from the Brown corpus using ISL grammar rules. The approach converts ISL videos into an ISL gloss sequence using the MobileNetV2 model, and the recognised gloss sequence is then fed to the machine translation module. MobileNetV2 proved to be the best-suited model for recognition of ISL sentences, and NMT gives better results than statistical machine translation (SMT) for converting the ISL gloss sequence into English text. The automatic and human evaluation of the proposed approach gives 83.3% and 86.1% accuracy, respectively.
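
A minimal sketch of the recognition half of such a pipeline (MobileNetV2 as a frozen backbone with a gloss-classification head); the input size, head layers, and gloss count are assumptions, and the NMT stage is omitted:

```python
# MobileNetV2 backbone with a small classification head for ISL glosses;
# input size and gloss vocabulary size are assumptions, not the paper's.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    weights="imagenet", include_top=False, pooling="avg",
    input_shape=(224, 224, 3))
base.trainable = False  # transfer learning: freeze the pretrained backbone

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(100, activation="softmax"),  # hypothetical gloss vocab
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```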
19

Das Chakladar, Debashis, Pradeep Kumar, Shubham Mandal, Partha Pratim Roy, Masakazu Iwamura, and Byung-Gyu Kim. "3D Avatar Approach for Continuous Sign Movement Using Speech/Text." Applied Sciences 11, no. 8 (2021): 3439. http://dx.doi.org/10.3390/app11083439.

Abstract:
Sign language is a visual language for communication used by hearing-impaired people with the help of hand and finger movements. Indian Sign Language (ISL) is a well-developed and standard way of communication for hearing-impaired people living in India. However, other people who use spoken language always face difficulty while communicating with a hearing-impaired person due to lack of sign language knowledge. In this study, we have developed a 3D avatar-based sign language learning system that converts the input speech/text into corresponding sign movements for ISL. The system consists of three modules. Initially, the input speech is converted into an English sentence. Then, that English sentence is converted into the corresponding ISL sentence using the Natural Language Processing (NLP) technique. Finally, the motion of the 3D avatar is defined based on the ISL sentence. The translation module achieves a 10.50 SER (Sign Error Rate) score.
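
The English-to-ISL-sentence step relies on the fact that ISL ordering differs from English (roughly subject-object-verb, with articles and copulas dropped); a toy illustration with assumed rules far simpler than the paper's NLP module:

```python
# Toy English -> ISL-style gloss conversion: drop articles/copulas and keep
# content words. Real systems use full parsers; these rules are assumptions.
DROP = {"a", "an", "the", "is", "am", "are", "was", "were", "to", "of"}

def english_to_isl_gloss(sentence: str) -> str:
    words = [w for w in sentence.lower().strip(".?!").split() if w not in DROP]
    return " ".join(w.upper() for w in words)

print(english_to_isl_gloss("The boy is going to school"))  # BOY GOING SCHOOL
```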
20

P, Adithyaraaj R., Mariyammal N, Mohammed Furkhan S, Rathika, and Prof K. Vijayalakshmi. "Indian Sign Language (ISL) Translator: AI-Powered Bidirectional Translation System." International Journal for Research in Applied Science and Engineering Technology 13, no. 3 (2025): 2053–61. https://doi.org/10.22214/ijraset.2025.67589.

Abstract:
This work presents an advanced AI-based translation system designed to bridge communication barriers for the Deaf and Hard-of-Hearing (DHH) community by converting spoken and textual language into Indian Sign Language (ISL) and vice versa. The system leverages deep learning techniques, including computer vision and natural language processing (NLP), to interpret hand gestures and facial expressions accurately. Integrated with real-time processing capabilities, the model enables seamless interaction between ISL users and non-signing individuals. By utilizing a custom-trained Transformer-based NLP model and a Convolutional Neural Network (CNN) for visual recognition, the system ensures accurate and efficient translation. The prototype has been developed using VS Code, with datasets managed in local storage to optimize performance. This work aims to enhance accessibility, promote inclusivity, and facilitate effortless communication through a robust and scalable ISL translation model. The importance of an efficient ISL translation system extends beyond accessibility—it fosters independence, enhances social inclusion, and bridges the gap between the DHH community and the hearing population. Many Deaf individuals struggle with traditional text-based communication due to differences in sentence structures and grammar between ISL and spoken languages. By incorporating deep learning models for gesture recognition and NLP-based translation, our system provides a user-friendly solution for effective communication. Additionally, this system has the potential to be implemented in educational institutions, workplaces, and public services, ensuring better integration of the Deaf community into society. By addressing existing gaps and leveraging AI, our translator serves as a critical step toward an inclusive digital ecosystem.
21

Attar, Rakesh Kumar, Vishal Goyal, and Lalit Goyal. "Development of Airport Terminology based Synthetic Animated Indian Sign Language Dictionary." Journal of Scientific Research 66, no. 05 (2022): 88–94. http://dx.doi.org/10.37398/jsr.2022.660512.

Abstract:
In the current era of computerization, the development of a synthetic animated Indian Sign Language (ISL) dictionary could prove very beneficial for deaf people to share their ideas, views, and thoughts with hearing people. Although many human-based video dictionaries are available, no synthetic animated ISL dictionary solely for public places has been developed yet. The development of an ISL dictionary of 1200 words using synthetic animation for airport terminology is reported in this article. The most frequently used words at airports are categorized and then translated into Signing Gesture Markup Language (SiGML), which generates the signs using synthetic animations through a virtual avatar. The developed ISL dictionary can be used by automatic sign translation systems at airports, animating signs from written or spoken announcements. This dictionary has been used in the development of an airport announcement system for the deaf that is capable of displaying spoken airport announcements in ISL using synthetic animations. Moreover, the developed dictionary can prove very beneficial for educating deaf people and for assisting them while visiting public places.
22

Yerpude, Poonam. "Non-Verbal (Sign Language) To Verbal Language Translator Using Convolutional Neural Network." International Journal for Research in Applied Science and Engineering Technology 10, no. 1 (2022): 269–73. http://dx.doi.org/10.22214/ijraset.2022.39820.

Abstract:
Communication is very imperative for daily life. Normal people use verbal language for communication, while people with disabilities use sign language. Sign language is a way of communicating by using hand gestures and parts of the body instead of speaking and listening. As not all people are familiar with sign language, a language barrier exists, and there has been much research in this field to remove it. There are mainly two ways in which we can convert sign language into speech or text to close the gap, i.e., sensor-based techniques and image processing. In this paper we look at the image processing technique, for which we use a Convolutional Neural Network (CNN). We have built a sign detector that recognises the sign numbers from 1 to 10; it can easily be extended to recognise other hand gestures, including alphabets (A-Z) and expressions. We created this model based on Indian Sign Language (ISL). Keywords: Multi-Layer Perceptron (MLP), Convolutional Neural Network (CNN), Indian Sign Language (ISL), Region of Interest (ROI), Artificial Neural Network (ANN), VGG 16 (CNN vision architecture model), SGD (Stochastic Gradient Descent).
23

Nupur Giri. "Gesturely: A Conversation AI based Indian Sign Language Model." Journal of Information Systems Engineering and Management 10, no. 10s (2025): 576–84. https://doi.org/10.52783/jisem.v10i10s.1421.

Abstract:
The project, Gesturely, aims to improve communication in educational settings for people with hearing impairments. Sign language, notably Indian Sign Language (ISL) in India, serves as a primary mode of expression for the deaf community. The form of expression among the deaf relies on a rich vocabulary of gestures involving fingers, hands, arms, eyes, head, and face. The research endeavors to develop an algorithm capable of translating ISL into English, initially focusing on words within the education domain. Through the integration of advanced computer vision and deep learning methodologies, the objective is to create a system capable of interpreting ISL gestures and converting them into written text. The project involves the creation of a comprehensive dataset, with 50 words and more than 2500 videos. The vision is to empower the deaf community with real-time translation capabilities, promoting inclusivity and accessibility in communication
24

Muthusamy, Prema, and Gomathi Pudupalayam Murugan. "Occlusion Resistant Spatio-Temporal Hybrid Cue Network for Indian Sign Language Recognition and Translation." Indian Journal Of Science And Technology 17, no. 44 (2024): 4590–99. https://doi.org/10.17485/ijst/v17i44.2225.

Abstract:
Objective: To tackle the issues of occlusion in human skeleton extraction and simplify the pixel matching related to the human skeleton structure for efficient Indian Sign Language (ISL) recognition and translation. Methods: This paper presents Occlusion-Resistant STHCN (OSTHCN) to tackle the occlusion problem in human skeleton extraction for effective ISL recognition and translation. This model incorporates Skeleton Occupancy Likelihood Map estimation using B-Spline curves to enhance the skeleton extraction. Due to occlusions caused by fingers and hands, the extracted skeleton is composed of disconnected skeletal subgraphs. Consequently, each observed skeleton is represented as a probability distribution along an ellipsoidal contour, originating from the central points of the skeleton. A heuristic technique estimates occluded skeletons using 3D probability map with an occupancy grid where each voxel indicates skeleton likelihood. The occupancy distribution is updated using observed branch clusters across image sequences, detecting occluded skeletons by finding minimum-cost paths. Finally, Maximally connected subgraphs are merged into a main graph by finding minimum-cost paths in the 3D likelihood map, enabling the prediction of occluded skeleton parts for ISL recognition and translation. Findings: OSTHCN model achieved an accuracy of 96.74% on the ISL Continuous Sign Language Translation Recognition (ISL-CSLTR) dataset outperforming existing prediction models. Novelty: This model employs a unique occlusion-handling strategy for skeleton extraction, estimating occluded part, integrating connected subgraphs via minimal cost path searches for more precise skeleton parts and enhancing accuracy for ISL recognition and translation. Keywords: Sign Language, Graph Convolutional Neural Network, Ellipsoidal Contour, 3D Likelihood Map, Occupancy Probability
25

Dixit, Karishma, and Anand Singh Jalal. "A Vision-Based Approach for Indian Sign Language Recognition." International Journal of Computer Vision and Image Processing 2, no. 4 (2012): 25–36. http://dx.doi.org/10.4018/ijcvip.2012100103.

Abstract:
Sign language is the essential communication method between deaf and dumb people. In this paper, the authors present a vision-based approach which efficiently recognizes signs of Indian Sign Language (ISL) and translates the accurate meaning of those recognized signs. A new feature vector is computed by fusing Hu invariant moments and a structural shape descriptor to recognize signs. A multi-class Support Vector Machine (MSVM) is utilized for training and classifying signs of ISL. The performance of the algorithm is illustrated by simulations carried out on a dataset of 720 images. Experimental results demonstrate that the proposed approach can successfully recognize hand gestures with a 96% recognition rate.
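
The Hu-moment half of the fused feature vector is standard OpenCV; a sketch in which the Otsu thresholding and log scaling are common conventions rather than details taken from the paper:

```python
# Hu invariant moments for a binarised hand image, as used in the fused
# feature vector described above; preprocessing choices are assumptions.
import cv2
import numpy as np

img = cv2.imread("sign.png", cv2.IMREAD_GRAYSCALE)          # hypothetical file
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
hu = cv2.HuMoments(cv2.moments(binary)).flatten()           # 7 invariants
hu = -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)            # usual log scaling
print(hu)  # would be fused with shape descriptors, then fed to the MSVM
```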
26

Sinha, Prof Pragya. "Design and Development of Indian Sign Language Character Recognition System." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 07, no. 12 (2023): 1–13. http://dx.doi.org/10.55041/ijsrem27773.

Abstract:
The purpose of this study is to look into the challenges involved in categorizing Indian Sign Language (ISL) characters. While a lot of research has been done in the related field of American Sign Language (ASL), not as much has been done with ISL. Lack of standard datasets, obscured traits, and variance in language with geography are the key barriers that have hindered much ISL research. Our study aims to progress this field by collecting a dataset from a deaf school and applying various feature extraction techniques to extract useful information, which is then input into a range of supervised learning algorithms. Our current results for each approach include four-fold cross-validation. What sets our work apart from earlier research is that the validation set in our four-fold cross-validation contains photographs of people who do not appear in the training set. Hand gestures and signs are used by those with speech impairments to communicate, and understanding what they are trying to say is challenging for the average person. Systems that convert such data to Hindi do exist, but they are extremely uncommon. Therefore, it is imperative to implement a system that enables the general public to understand and interpret all signals, gestures, and communications; it will close the communication gap that exists between hearing people and those who have speech difficulty. The two primary research approaches centered on human-computer interaction are sign language recognition and learning, and multiple sensors are required for data flow to be understood in sign language. This research paper focuses on the development of a Hindi-language training tool that can detect images and interpret what a person with speech impairments is trying to say. Keywords: Indian Sign Language (ISL), American Sign Language (ASL), Feature Extraction, Supervised Learning, Sign Language, etc.
27

Chakole, Vijay V. "Educational Learning-Based Sign Language System Using Machine Learning." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 03 (2024): 1–5. http://dx.doi.org/10.55041/ijsrem29753.

Abstract:
This study proposes an innovative approach to multicultural education by integrating Indian Sign Language (ISL) and American Sign Language (ASL) through Machine Learning (ML) techniques. By collecting and preprocessing high-quality video data of ISL and ASL, we aim to develop ML models capable of recognizing and generating signs in both languages. Through bidirectional transfer learning and cross-language representation learning, we seek to enhance the learning experience and address common challenges in sign language acquisition. Additionally, personalized learning environments and culturally sensitive design, informed by collaboration with Deaf communities in India and America, ensure inclusivity and accuracy. Evaluation metrics and ethical considerations are integrated into the development process to promote responsible implementation and continuous improvement. Ultimately, this project aims to lay the groundwork for advancing multilingual sign language education globally. By employing advanced ML techniques, this study aims to bridge the gap between Indian Sign Language (ISL) and American Sign Language (ASL) education, fostering inclusivity and accessibility in learning environments. Through meticulous data collection, preprocessing, and collaborative development processes, our approach emphasizes accuracy, cultural sensitivity, and personalized learning experiences. By engaging with Deaf communities in both India and America, we ensure the authenticity and relevance of our platform. Evaluation metrics and ethical considerations are prioritized to uphold privacy, consent, and fairness principles. By establishing a robust foundation for multilingual sign language education, this project contributes to broader discussions on leveraging ML for enhancing accessibility and inclusivity in education systems worldwide. Keywords- Hand Gesture, Sign language Recognition, OpenCV, Media-pipe, tensorflow.
28

Shahi, Mr Shivanshu. "Multilevel Conversion of Indian Sign Language from Gesture to Speech." International Journal for Research in Applied Science and Engineering Technology 13, no. 5 (2025): 5764–70. https://doi.org/10.22214/ijraset.2025.71548.

Abstract:
Indian Sign Language (ISL) serves as a primary mode of communication for Deaf and hard-of-hearing communities in India. However, despite its societal importance, ISL remains largely unsupported by mainstream technological platforms, limiting inclusive communication. This research introduces a real-time ISL recognition and translation system that converts hand gestures into corresponding text and speech outputs, enabling phrase-level communication rather than isolated character interpretation. The architecture uses a modular pipeline approach, with a Convolutional Neural Network (CNN) for accurate gesture classification, a phrase-mapping module to translate gestures into meaningful expressions, MediaPipe for accurate hand landmark detection, and a text-to-speech (TTS) system to turn the generated text into audible speech output. Unlike previous systems restricted to static signs, our approach supports semantically rich, multi-word phrases, enhancing natural communication flow. A specially constructed dataset of ten frequently used Indian Sign Language (ISL) phrases was used to train the model. To improve generalization, 150 samples from each class were taken in various lighting and background conditions. The final system achieved 95% classification accuracy, operated at 60 frames per second, and maintained latency below 100 milliseconds. Usability testing with multiple users confirmed the system's robustness, responsiveness, and accessibility. The findings demonstrate the viability of deploying deep learning-based ISL recognition systems in authentic environments, including public areas, healthcare facilities, and educational institutions.
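
The tail of such a pipeline (mapping the classifier's phrase label to text and speaking it) is easy to illustrate; the label table and the choice of pyttsx3 as the TTS engine are assumptions, since the paper does not name its engine:

```python
# Sketch of the phrase-mapping plus text-to-speech tail of the pipeline.
# The label-to-phrase table and the pyttsx3 engine choice are assumptions.
import pyttsx3

PHRASES = {0: "hello", 1: "thank you", 2: "I need help"}  # hypothetical classes

def speak_prediction(class_id: int) -> None:
    text = PHRASES.get(class_id, "")
    if text:
        engine = pyttsx3.init()
        engine.say(text)
        engine.runAndWait()

speak_prediction(2)  # says "I need help"
```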
29

Ajay M. Pol, Et al. "Enhancing Sign Language Recognition through Fusion of CNN Models." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 10 (2023): 902–10. http://dx.doi.org/10.17762/ijritcc.v11i10.8608.

Abstract:
This study introduces a pioneering hybrid model designed for the recognition of sign language, with a specific focus on American Sign Language (ASL) and Indian Sign Language (ISL). Departing from traditional machine learning methods, the model ingeniously blends hand-crafted techniques with deep learning approaches to surmount inherent limitations. Notably, the hybrid model achieves an exceptional accuracy rate of 96% for ASL and 97% for ISL, surpassing the typical 90-93% accuracy rates of previous models. This breakthrough underscores the efficacy of combining predefined features and rules with neural networks. What sets this hybrid model apart is its versatility in recognizing both ASL and ISL signs, addressing the global variations in sign languages. The elevated accuracy levels make it a practical and accessible tool for the hearing-impaired community. This has significant implications for real-world applications, particularly in education, healthcare, and various contexts where improved communication between hearing-impaired individuals and others is paramount. The study represents a noteworthy stride in sign language recognition, presenting a hybrid model that excels in accurately identifying ASL and ISL signs, thereby contributing to the advancement of communication and inclusivity.
30

Anjali, Mogusala. "Contextual Translation System to Sign Language." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 09, no. 02 (2025): 1–9. https://doi.org/10.55041/ijsrem41320.

Abstract:
People with hearing and speech disabilities face significant challenges in communicating with others, as not everyone understands sign language. This project aims to create a system that helps bridge this communication gap by converting spoken English into Indian Sign Language (ISL). The system works by recognizing voice input; the recognized speech is converted into text, which is then simplified using natural language processing techniques. Finally, the text is translated into ISL and displayed as a series of images or motion videos using Python libraries. This system provides an easy and accessible way for people with hearing or speech disabilities to communicate effectively, promoting inclusivity and understanding in everyday interactions.
31

Sakthivel, Mr E. "Indian Sign Language to Text/Speech Translation." International Journal for Research in Applied Science and Engineering Technology 13, no. 5 (2025): 2964–73. https://doi.org/10.22214/ijraset.2025.70817.

Abstract:
This work showcases the design of a real-time Indian Sign Language (ISL) to text and speech translation system for enhancing communication for those with hearing and speech disability. The system uses MediaPipe to obtain 3D hand landmark coordinates that are saved in a CSV dataset. A Feed-Forward Neural Network, namely a Multi-Layer Perceptron (MLP) in TensorFlow/Keras, is trained to identify 26 alphabetic symbols (A–Z) from 126 input features that describe the x, y, z coordinates of 21 hand landmarks per hand. Two hidden layers with dropout are used to avoid overfitting, and a test accuracy of around 96% is achieved. The model is converted to TensorFlow Lite after training for lightweight integration into mobile environments. A Flask-based server exposed through Cloudflare tunnels accepts real-time landmark information from the mobile app, performs inference on the TFLite model, and returns the predicted result. The app dynamically shows recognized letters, constructs words and sentences, and reads them out as speech using a Text-to-Speech (TTS) engine. Also, Swaram API integration supports multilingual audio output, thus rendering the system accessible to users of varied languages. This end-to-end solution is a scalable, efficient, and accessible method of ISL recognition and speech translation.
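
The abstract gives enough detail (126 landmark features in, 26 letter classes out, two hidden layers with dropout, TFLite export) for a fairly faithful sketch; the hidden-layer widths and dropout rate are still assumptions:

```python
# MLP matching the abstract's description: 126 landmark coordinates in,
# 26 letter classes out, two hidden layers with dropout, TFLite export.
# Hidden widths and dropout rate are assumptions.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(126,)),   # 2 hands x 21 landmarks x (x, y, z)
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(26, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# After training, convert for the mobile app, as the paper describes.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
with open("isl_mlp.tflite", "wb") as f:
    f.write(converter.convert())
```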
32

Shikalgar, Prof S. A. "Automated Sign Language Interpretation." International Journal for Research in Applied Science and Engineering Technology 13, no. 4 (2025): 806–10. https://doi.org/10.22214/ijraset.2025.68336.

Abstract:
Communication plays an essential role in human interaction, allowing individuals to express ideas and emotions. While spoken languages are widely used, individuals with hearing and speech impairments rely on sign language. However, the lack of widespread understanding of sign language creates communication barriers between them and the hearing community. This study presents a real-time Indian Sign Language (ISL) recognition system using the MediaPipe framework and Long Short-Term Memory (LSTM) networks. The approach involves training an LSTM model to distinguish between different signs, utilizing a dataset generated through a pre-trained Holistic model from the MediaPipe framework, which serves as a feature extractor.
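
A common shape for a MediaPipe-Holistic-plus-LSTM recognizer like the one described; the 30-frame window, the 1662-value Holistic keypoint vector (pose, face, and both hands), and all layer sizes are conventional choices, not taken from the paper:

```python
# LSTM sign classifier over MediaPipe Holistic keypoint sequences.
# Sequence length, keypoint count, and layer sizes are assumptions.
import tensorflow as tf

NUM_SIGNS = 10  # hypothetical vocabulary size

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(30, 1662)),   # 30 frames of Holistic keypoints
    tf.keras.layers.LSTM(64, return_sequences=True),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_SIGNS, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```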
33

Nikkam, Pushpalatha S. "Voice To Sign Language Conversion." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 09, no. 05 (2025): 1–9. https://doi.org/10.55041/ijsrem48637.

Abstract:
True incapacity can be seen as the inability to speak, where individuals with speech impairments struggle to communicate verbally or through hearing. To bridge this gap, many rely on sign language, a visual method of communication that uses hand gestures. Although sign language has become more widespread, interaction between those who sign and those who don't can still pose challenges. As communication has grown to be an essential part of daily life, sign language serves as a crucial tool for those with speech and hearing difficulties. Recent advances in computer vision and deep learning have significantly enhanced the ability to recognize gestures and movements. While American Sign Language (ASL) has been thoroughly researched, Indian Sign Language (ISL) remains underexplored. Our proposed approach focuses on recognizing 4972 static hand gesture images representing 24 English letters (excluding J and Z) in ISL. The project aims to build a deep learning-based system that translates these gestures into text and reads it aloud using the "Google Text-to-Speech" API, thereby enabling better interaction between signers and non-signers. Using a dataset from Kaggle and a custom Convolutional Neural Network (CNN), our method achieved a 99% accuracy rate. Key Words: Convolutional Neural Network; Google Text-to-Speech API; Indian Sign Language.
34

Devika, M., SK Aravind, B. Aleena, Mary Philip Anagha, and Azharuddin Sahib Muhammed. "Real-Time Translation of Speech to Indian Sign Language to Facilitate Hearing Impairment." Recent Trends in Androids and IOS Applications 7, no. 1 (2024): 1–7. https://doi.org/10.5281/zenodo.13768072.

Abstract:
Sign language is a visual language utilized by individuals who are deaf as their primary means of communication. Unlike spoken languages, sign language relies on gestures, body movements, and manual communication to effectively convey ideas and thoughts. It can be used by individuals who have difficulty speaking, those who are unable to speak, and by individuals without hearing impairments to communicate with deaf individuals. Access to sign language is crucial for the social, emotional, and linguistic development of deaf individuals. Our project aims to bridge the communication gap between deaf individuals and the general population by leveraging advancements in web applications, machine learning, and natural language processing technologies. The primary objective of this project is to develop an interface capable of converting audio/voice inputs into corresponding sign language for deaf individuals. This is achieved through the simultaneous integration of hand shapes, orientations, and movements of the hands, arms, or body. The interface operates in two phases: first, converting audio to text using speech-to-text APIs (such as Python modules or Google API); and second, representing the text using parse trees and applying the semantics of natural language processing (specifically, NLTK) for the lexical analysis of sign language grammar. This work adheres to the rules of Indian Sign Language (ISL) and follows ISL grammar guidelines.
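
The two phases named above map onto well-known libraries; a sketch assuming the SpeechRecognition package for phase one and NLTK for phase two (both are plausible choices the abstract hints at, not confirmed implementation details):

```python
# Phase 1: speech -> text via the Google recognizer in the SpeechRecognition
# package; Phase 2: NLTK tokenising/tagging as a first step toward ISL grammar.
# Assumes `pip install SpeechRecognition nltk` plus the punkt/tagger NLTK data.
import speech_recognition as sr
import nltk

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    audio = recognizer.listen(source)
text = recognizer.recognize_google(audio)   # phase 1: audio to English text

tokens = nltk.word_tokenize(text)           # phase 2: lexical analysis
tagged = nltk.pos_tag(tokens)               # POS tags drive ISL reordering
print(tagged)
```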
35

Passi, Purvi, Jessica Pereira, Anjelica Misal, Viven Menezese, and Dr M. Kiruthika. "SignEase - Sign Language Interpreter Model An Indian Sign Language Interpretation Model using Machine Learning and Computer Vision Technology." International Journal for Research in Applied Science and Engineering Technology 12, no. 5 (2024): 1–6. http://dx.doi.org/10.22214/ijraset.2024.60728.

Abstract:
This study introduces a real-time system designed to recognize hand poses and gestures from Indian Sign Language (ISL) using grid-based (control points) features. The primary aim is to bridge communication barriers between hearing and speech impaired individuals and the broader society. Existing solutions often struggle with either accuracy or real-time performance, whereas our system excels in both aspects. It can accurately identify hand gestures in Indian Sign Language. In addition to recognition capabilities, our system offers a 'Learning Portal' for users to efficiently learn and practice ISL, ASL, etc., enhancing its accessibility and effectiveness. Notably, the system operates solely on smartphone camera input, eliminating the need for any external hardware like gloves or specialized sensors, thus ensuring user-friendliness. Key techniques employed include hand detection via modules such as MediaPipe and cvzone, and grid-based feature extraction, which transforms hand poses into concise feature vectors. These features are then compared against a TensorFlow-provided database for classification and accurate translation.
36

Rakesh, Ch, G. Madhumitha, S. Meghana, T. Sahithi Niharika, M. Rohith, and Ajay Ram K. "Two Way Indian Sign Language Translator using LSTM and NLP." International Journal for Research in Applied Science and Engineering Technology 11, no. 5 (2023): 5978–83. http://dx.doi.org/10.22214/ijraset.2023.53085.

Abstract:
Sign language is the efficacious medium that connects silent people and the world, and there are many significant sign languages in use across the globe. A lot of research has been done to reduce the difficulties in communication between silent and normal persons, and most of it is based on ASL (American Sign Language). The aspiration to bridge the gap between silent people and normal people in terms of communication using ISL (Indian Sign Language) led to this project. It functions as a two-way sign translator, that is, conversion of sign to text and vice versa. It recognizes various poses as well as gestures and returns appropriate results. The designed translator predicts signs with an accuracy of 88 percent in real time and was trained to recognize 15 actions using LSTM and MediaPipe. The text-to-sign translator works up to paragraph level using NLP.
APA, Harvard, Vancouver, ISO, and other styles
37

Zeshan, Ulrike, and Sibaji Panda. "Two languages at hand." Sign Language and Linguistics 18, no. 1 (2015): 90–131. http://dx.doi.org/10.1075/sll.18.1.03zes.

Full text
Abstract:
This article explores patterns of co-use of two sign languages in casual conversational data from four deaf bilinguals, who are fluent in Indian Sign Language (ISL) and Burundi Sign Language (BuSL). We investigate the contributions that both sign languages make to these conversations at lexical, clause, and discourse level, including a distinction between signs from closed grammatical classes and open lexical classes. The results show that despite individual differences between signers, there are also striking commonalities. Specifically, we demonstrate the shared characteristics of the signers’ bilingual outputs in the domains of negation, where signers prefer negators found in both sign languages, and wh-questions, where signers choose BuSL for specific question words and ISL for general wh-questions. The article thus makes the argument that these signers have developed a fairly stable bilingual variety that is characteristic of this particular community of practice, and we explore theoretical implications arising from these patterns.
APA, Harvard, Vancouver, ISO, and other styles
38

Singh, Akarsh. "AI Tool/Mobile App for Indian Sign Language (ISL)." International Journal for Research in Applied Science and Engineering Technology 13, no. 5 (2025): 2819–24. https://doi.org/10.22214/ijraset.2025.71056.

Full text
Abstract:
Abstract: This project proposes an AI-powered Sign Language Generator for Audio-Visual Content in English/Hindi that leverages cutting-edge technologies to bridge the communication gap between hearing and deaf individuals. The system captures spoken language using advanced speech recognition techniques provided by the Google Speech Recognition API, transcribing speech into text with high accuracy. When inputs are in Hindi, the system employs the Google Translate API to convert the text into English, ensuring a standardized vocabulary that maps to ISL gestures.
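A sketch of the capture-and-standardize flow described above, assuming the SpeechRecognition package and the official google-cloud-translate client with credentials configured; the gesture lookup at the end is a hypothetical placeholder.

```python
import speech_recognition as sr
from google.cloud import translate_v2 as translate

def hindi_audio_to_english(wav_path: str) -> str:
    recognizer = sr.Recognizer()
    with sr.AudioFile(wav_path) as source:
        audio = recognizer.record(source)
    # Transcribe Hindi speech, then standardize to English vocabulary.
    hindi_text = recognizer.recognize_google(audio, language="hi-IN")
    client = translate.Client()
    return client.translate(hindi_text, target_language="en")["translatedText"]

# Each English word would then index a pre-rendered ISL gesture, e.g.:
# clip = ISL_GESTURES.get(word.lower(), fingerspell(word))  # hypothetical map
```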
APA, Harvard, Vancouver, ISO, and other styles
39

Priyal, Vanshika, Khushi Koli, and Khyati Ahlawat. "Indian Sign Language Detection and Translation using Machine Learning in English." International Journal of Scientific Research in Science and Technology 12, no. 2 (2025): 926–37. https://doi.org/10.32628/ijsrst251222642.

Full text
Abstract:
Indian Sign Language (ISL) is a critical means of communication for India's deaf and hard-of-hearing population of millions. However, the lack of accessible tools prevents their communication with non-ISL speakers and leaves them vulnerable to social isolation. To address this gap, this study proposes a gesture recognition system for ISL-to-speech translation using machine learning algorithms: Support Vector Machine (SVM), k-Nearest Neighbors (k-NN), and Recurrent Neural Networks (RNN). A dataset covering all ISL alphabets, comprising 20 frames per sign, was generated for performance evaluation. Evaluation on static and dynamic gesture recognition indicates that the RNN is the most appropriate model for dynamic sequences. Real-time experimentation using webcam input confirmed the flexibility of the developed approach, indicating more than 90% accuracy when preprocessing methods are used. Moreover, integration with Google Text-to-Speech (gTTS) enables real-time translation, making the system applicable to real-world uses. The outputs offer a model for building mobile applications and public service tools to enable inclusive communication. Future work will expand the dataset, improve recognition of complex gestures, and incorporate context-aware comprehension to enhance the functionality of the system.
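A sketch comparing two of the static-gesture classifiers named above on placeholder landmark features (26 letters, 20 frames per sign, matching the abstract), with gTTS voicing a prediction; the feature dimensionality is assumed.

```python
import numpy as np
from gtts import gTTS
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((26 * 20, 42))        # placeholder: flattened landmark features
y = np.repeat(np.arange(26), 20)     # 26 ISL letters, 20 frames per sign
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("k-NN", KNeighborsClassifier(n_neighbors=5))]:
    clf.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, clf.predict(X_te)))

gTTS(text="A", lang="en").save("letter.mp3")   # speak a recognized letter
```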
APA, Harvard, Vancouver, ISO, and other styles
40

Anand, Nisha. "A Sociolinguistic Study of the Use of Indian Sign Language." SMART MOVES JOURNAL IJELLH 8, no. 3 (2020): 37. http://dx.doi.org/10.24113/ijellh.v8i3.10481.

Full text
Abstract:
This paper discusses the “Language Use” pattern of ISL by the deaf community. It aims to understand the vitality of sign language within the community and to foresee whether ISL is likely to be maintained in the coming future. As proposed by Boehm (1997:67), “The choices people make in regard to language use reflect a trend towards either language maintenance or language shift. To some extent, this reveals the vitality of the language.” Fase et al. (1992:6) say, “It has been commonly found that when the mother tongue of the minority language remains dominant in communication within the ethnic group, it can be said that mother tongue has been maintained.” This survey also deals with the major issue faced by the deaf community in this speech-dominant society: the huge “communication gap” with the speaking majority.
APA, Harvard, Vancouver, ISO, and other styles
41

Muthu Mariappan, H., and V. Gomathi. "Indian Sign Language Recognition through Hybrid ConvNet-LSTM Networks." EMITTER International Journal of Engineering Technology 9, no. 1 (2021): 182–203. http://dx.doi.org/10.24003/emitter.v9i1.613.

Full text
Abstract:
Dynamic hand gesture recognition is a challenging task of Human-Computer Interaction (HCI) and Computer Vision. The potential application areas of gesture recognition include sign language translation, video gaming, video surveillance, robotics, and gesture-controlled home appliances. In the proposed research, gesture recognition is applied to recognize sign language words from real-time videos. Classifying the actions from video sequences requires both spatial and temporal features. The proposed system handles the former by the Convolutional Neural Network (CNN), which is the core of several computer vision solutions and the latter by the Recurrent Neural Network (RNN), which is more efficient in handling the sequences of movements. Thus, the real-time Indian sign language (ISL) recognition system is developed using the hybrid CNN-RNN architecture. The system is trained with the proposed CasTalk-ISL dataset. The ultimate purpose of the presented research is to deploy a real-time sign language translator to break the hurdles present in the communication between hearing-impaired people and normal people. The developed system achieves 95.99% top-1 accuracy and 99.46% top-3 accuracy on the test dataset. The obtained results outperform the existing approaches using various deep models on different datasets.
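A sketch of the hybrid ConvNet-LSTM idea above: a small per-frame CNN (wrapped in TimeDistributed) supplies spatial features and an LSTM models the temporal sequence, with top-1 and top-3 accuracy tracked as in the abstract. Frame size, sequence length, and class count are assumed values, not the CasTalk-ISL configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

FRAMES, H, W, CLASSES = 16, 64, 64, 20     # assumed, not the paper's values

frame_cnn = models.Sequential([             # spatial features per frame
    layers.Input(shape=(H, W, 3)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
])

model = models.Sequential([
    layers.Input(shape=(FRAMES, H, W, 3)),
    layers.TimeDistributed(frame_cnn),      # CNN applied to every frame
    layers.LSTM(64),                        # temporal modeling across frames
    layers.Dense(CLASSES, activation="softmax"),
])
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy",
             tf.keras.metrics.SparseTopKCategoricalAccuracy(k=3, name="top3")],
)
model.summary()
```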
APA, Harvard, Vancouver, ISO, and other styles
42

Irlapale, Pranav. "Sign Language Detection." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 09, no. 06 (2025): 1–9. https://doi.org/10.55041/ijsrem50810.

Full text
Abstract:
"Sign Language Detection" is a real-time system developed to bridge the communication gap between the deaf-mute community and the wider population by converting sign language gestures into text. The project leverages MediaPipe for accurate and efficient hand gesture tracking and integrates it with a lightweight TensorFlow model trained on datasets of Indian Sign Language (ISL) and International Sign Language (ASL) sourced from Kaggle. The system takes video input, detects and interprets hand gestures frame by frame, and translates them into meaningful text in real time. The frontend is built using HTML and CSS, with a backend powered by Flask for API integration and MongoDB for managing gesture data and user records. Designed to run efficiently even on low-resource systems, this project provides an accessible and scalable solution for enhancing communication for individuals with hearing and speech impairments.
APA, Harvard, Vancouver, ISO, and other styles
43

Pandey, Subham, Sumaiya Tahseen, Rohit Pathak, Hina Parveen, and Maruti Maurya. "Real-Time Vision-Based Indian Sign Language Translation Using Deep Learning Techniques." International Journal of Innovative Research in Computer Science and Technology 13, no. 3 (2025): 35–46. https://doi.org/10.55524/ijircst.2025.13.3.6.

Full text
Abstract:
This work proposes a vision-based approach to real-time sign language translation for Indian Sign Language (ISL). The system uses state-of-the-art deep learning architectures such as CNNs (Convolutional Neural Networks), LSTM (Long Short-Term Memory) networks, and Transformer-based encoder-decoder models for gesture recognition in both isolated and continuous forms. Data preprocessing techniques such as DTW (Dynamic Time Warping) were applied to augment and normalize gesture sequences from custom ISL and public ASL datasets. Model performance was quantitatively evaluated using precision, recall, F1-score, BLEU, ROUGE, CER (character error rate), and WER (word error rate). A Transformer-based model outperformed the others, achieving a BLEU score of 0.74 and a classification accuracy of 96.1%. The developed desktop application enables real-time ISL-to-English translation at 18 FPS without requiring external sensors, while ablation studies validate the benefits of multimodal fusion and pose-language alignment. This work demonstrates a robust, scalable approach to non-intrusive sign language translation, advancing accessibility for the DHH community.
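A minimal sketch of the DTW preprocessing step named above: textbook dynamic time warping with a Euclidean local cost, used here to compare two gesture keypoint sequences of different lengths. It illustrates only the alignment ingredient, not the paper's full pipeline.

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Align two (frames, features) keypoint sequences and return the cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])   # local match cost
            D[i, j] = cost + min(D[i - 1, j],            # insertion
                                 D[i, j - 1],            # deletion
                                 D[i - 1, j - 1])        # match
    return float(D[n, m])

seq_a = np.random.rand(18, 42)   # e.g. 18 frames of 21 2-D hand landmarks
seq_b = np.random.rand(25, 42)   # a slower performance of the same gesture
print(dtw_distance(seq_a, seq_b))
```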
APA, Harvard, Vancouver, ISO, and other styles
44

G C, Shwethashree. "Inclusive Communication: Leveraging AI for Sign Language Translation and Real-Time Audio Transcription." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 09, no. 05 (2025): 1–9. https://doi.org/10.55041/ijsrem48555.

Full text
Abstract:
Abstract - Humans communicate through both natural language and body language, including gestures, facial expressions, and lip movements. While understanding spoken language is essential, recognizing sign language is equally important, especially for individuals with hearing impairments. Deaf individuals often struggle to communicate with those unfamiliar with sign language, making real-time translation systems invaluable. This paper proposes a real-time meeting platform that recognizes Indian Sign Language (ISL) gestures and converts them into text and speech, enabling smooth interaction between deaf and hearing individuals. The system uses image processing, computer vision, and deep learning—specifically Long Short-Term Memory (LSTM) networks—to analyze hand gestures from a live video stream. LSTM models effectively capture temporal patterns in gesture sequences, enhancing recognition accuracy. The identified gestures are then translated into text and synthesized into speech. This system aims to bridge communication gaps and improve accessibility for the hearing impaired. Key words: Indian Sign Language (ISL), Gesture Recognition, LSTM, Real-Time Translation, Accessibility, Communication, Deep Learning, Speech Synthesis, Computer Vision, Virtual Meetings.
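A sketch of the live-stream loop such a platform implies: each webcam frame's keypoints enter a rolling 30-frame window, and the LSTM classifies the window once it is full. The keypoint extractor and inline model are stand-ins (the real system would use MediaPipe features and trained weights).

```python
from collections import deque

import cv2
import numpy as np
import tensorflow as tf

def extract_keypoints(frame) -> np.ndarray:
    # Stand-in: the real system runs MediaPipe here and returns
    # flattened hand-landmark coordinates for the frame.
    return np.zeros(126, dtype="float32")

# Stand-in for the trained LSTM (load saved weights in practice).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(30, 126)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(15, activation="softmax"),
])

window = deque(maxlen=30)                     # rolling gesture window
cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    window.append(extract_keypoints(frame))
    if len(window) == 30:
        probs = model.predict(np.array(window)[None], verbose=0)[0]
        cv2.putText(frame, f"sign {probs.argmax()} ({probs.max():.2f})",
                    (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("ISL feed", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```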
APA, Harvard, Vancouver, ISO, and other styles
45

Gudi, Swaroop. "Sign Language Detection Using Gloves." International Journal for Research in Applied Science and Engineering Technology 12, no. 11 (2024): 1387–91. http://dx.doi.org/10.22214/ijraset.2024.65315.

Full text
Abstract:
This paper presents a comprehensive system for real-time translation of Indian Sign Language (ISL) gestures into spoken language using gloves equipped with flex sensors. The system incorporates an Arduino Nano microcontroller for data acquisition, an HC-05 Bluetooth module for wireless data transmission, and an Android application for processing. A deep learning model, trained on an ISL dataset using Keras and TensorFlow, classifies the gestures. The processed data is then converted into spoken language using Google Text-to-Speech (gTTS). The gloves measure finger movements through flex sensors, with data transmitted to the Android app for real-time classification and speech synthesis. This system is designed to bridge communication gaps for the hearing-impaired community by providing an intuitive and responsive translation tool. Our evaluation shows high accuracy in gesture recognition, with average latency ensuring near real-time performance. The system's effectiveness is demonstrated through extensive testing, showcasing its potential as an assistive technology. Future improvements include expanding the dataset and incorporating additional sensors to enhance gesture recognition accuracy and robustness. This research highlights the integration of wearable technology and machine learning as a promising solution for enhancing accessibility and communication for sign language users.
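A sketch of the receiving side described above, assuming pyserial: comma-separated flex readings arrive over the HC-05 serial link, are scaled from the 10-bit ADC range, classified, and voiced with gTTS. The port name, label set, and inline stand-in model are assumptions.

```python
import numpy as np
import serial                     # pyserial
import tensorflow as tf
from gtts import gTTS

LABELS = ["HELLO", "WATER", "HELP"]                   # hypothetical signs

# Stand-in for the trained Keras classifier over 5 flex-sensor readings.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(5,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(len(LABELS), activation="softmax"),
])

link = serial.Serial("/dev/rfcomm0", baudrate=9600, timeout=1)  # HC-05 port
while True:
    line = link.readline().decode(errors="ignore").strip()
    if not line:
        continue
    flex = np.array(line.split(","), dtype="float32")  # 5 readings per line
    probs = model.predict((flex / 1023.0)[None], verbose=0)[0]  # ADC scaling
    sign = LABELS[int(probs.argmax())]
    gTTS(text=sign.lower(), lang="en").save("sign.mp3")  # speak the sign
    print(sign, float(probs.max()))
```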
APA, Harvard, Vancouver, ISO, and other styles
46

Kalpesh Pimple, Sania. "S.I.G.N. - Sign Interpretation and Gesture Navigation." INTERANTIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 09, no. 04 (2025): 1–9. https://doi.org/10.55041/ijsrem44476.

Full text
Abstract:
The S.I.G.N (Sign Interpretation using Gesture Navigation) project addresses the communication gap between sign language users and non-users, especially in sectors like education, healthcare, and public services. The project places a strong emphasis on Indian Sign Language (ISL) to cater to the communication needs of the hearing-impaired community in India. By tailoring recognition models and sign databases to ISL, the system ensures higher relevance, usability, and cultural alignment for Indian users. It utilizes machine learning, computer vision, and natural language processing (NLP) to translate sign language gestures into meaningful text or speech in real time. The system comprises two key components: (1) Real-time Sign to Sentence Generation, which converts live gestures into accurate, context-aware sentences, and (2) Word to Sign Generation, which displays typed words as corresponding sign gestures for learning and interaction. Convolutional Neural Networks (CNNs) handle gesture recognition, while MediaPipe and OpenPose ensure precise pose estimation. NLP techniques enhance grammatical accuracy, and the system achieves over 95% accuracy, delivering reliable performance. A Streamlit-based interface enables smooth real-time interaction, making S.I.G.N a scalable, cost-effective, and inclusive alternative to human interpreters. Keywords: Computer Vision, Deep Learning, Convolutional Neural Networks (CNNs), Machine Learning, Natural Language Processing (NLP), Image Processing, Feature Extraction, Hand Tracking.
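A sketch of how the two components could map onto a Streamlit interface: one tab classifies an uploaded gesture frame, the other shows the stored sign image for a typed word. The inline model, labels, and the isl_signs/ image folder are hypothetical stand-ins.

```python
import numpy as np
import streamlit as st
import tensorflow as tf
from PIL import Image

LABELS = ["HELLO", "PLEASE", "SORRY"]        # hypothetical glosses

# Stand-in for the trained CNN (load saved weights in practice).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 3)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(len(LABELS), activation="softmax"),
])

sign_tab, word_tab = st.tabs(["Sign to Sentence", "Word to Sign"])

with sign_tab:
    upload = st.file_uploader("Upload a gesture frame", type=["jpg", "png"])
    if upload:
        img = Image.open(upload).convert("RGB").resize((64, 64))
        probs = model.predict(np.array(img)[None] / 255.0, verbose=0)[0]
        st.write(f"Predicted sign: {LABELS[int(probs.argmax())]}")

with word_tab:
    word = st.text_input("Type a word to see its sign")
    if word:
        st.image(f"isl_signs/{word.lower()}.png")   # hypothetical sign images
```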
APA, Harvard, Vancouver, ISO, and other styles
47

Abhishek, Deshmukh. "Real-Time Indian Sign Language Recognition Using CNNs for Communication Accessibility." INTERNATIONAL JOURNAL OF MULTIDISCIPLINARY RESEARCH AND ANALYSIS 07, no. 09 (2024): 4447–53. https://doi.org/10.5281/zenodo.13831865.

Full text
Abstract:
The challenge of communication for the deaf and mute community continues to pose a barrier in connecting with society. Sign language, a manual communication method, has emerged as an essential tool for this group, yet it remains largely unrecognized by the majority of the population. This research proposes a machine learning-based Indian Sign Language (ISL) detection system utilizing Convolutional Neural Networks (CNN) to bridge this gap. The system is designed to automatically recognize hand gestures representing ISL alphabets in real-time through a camera interface. Key steps include image preprocessing, gesture detection, and classification using a trained CNN model, followed by deployment on mobile platforms via TensorFlow Lite integrated with Flutter. This approach ensures the model is lightweight yet capable of delivering high accuracy in real-world settings. The model achieves impressive results, with accuracy levels exceeding 90% in predicting hand gestures. The application is user-friendly, enabling anyone with a smartphone to recognize ISL symbols and assist in communication with the deaf-mute community. This paper discusses the implementation, performance, and potential extensions of the system, positioning it as an effective tool for improving communication accessibility.
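A sketch of the TensorFlow Lite deployment step this abstract mentions: converting a trained Keras CNN (a small stand-in here) with default optimizations, then writing the .tflite file for the Flutter app to bundle as an asset.

```python
import tensorflow as tf

# Stand-in for the trained ISL alphabet CNN (load saved weights in practice).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(26, activation="softmax"),   # ISL alphabet classes
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # keep the model small
tflite_model = converter.convert()

with open("isl_alphabet.tflite", "wb") as f:
    f.write(tflite_model)      # bundle this file as a Flutter asset
```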
APA, Harvard, Vancouver, ISO, and other styles
48

Prema, Muthusamy, and Pudupalayam Murugan Gomathi. "Occlusion Resistant Spatio-Temporal Hybrid Cue Network for Indian Sign Language Recognition and Translation." Indian Journal of Science and Technology 17, no. 44 (2024): 4590–99. https://doi.org/10.17485/IJST/v17i44.2225.

Full text
Abstract:
Abstract: Objective: To tackle the issues of occlusion in human skeleton extraction and simplify the pixel matching related to the human skeleton structure for efficient Indian Sign Language (ISL) recognition and translation. Methods: This paper presents Occlusion-Resistant STHCN (OSTHCN) to tackle the occlusion problem in human skeleton extraction for effective ISL recognition and translation. The model incorporates Skeleton Occupancy Likelihood Map estimation using B-spline curves to enhance skeleton extraction. Due to occlusions caused by fingers and hands, the extracted skeleton is composed of disconnected skeletal subgraphs. Consequently, each observed skeleton is represented as a probability distribution along an ellipsoidal contour, originating from the central points of the skeleton. A heuristic technique estimates occluded skeletons using a 3D probability map with an occupancy grid where each voxel indicates skeleton likelihood. The occupancy distribution is updated using observed branch clusters across image sequences, detecting occluded skeletons by finding minimum-cost paths. Finally, maximally connected subgraphs are merged into a main graph by finding minimum-cost paths in the 3D likelihood map, enabling the prediction of occluded skeleton parts for ISL recognition and translation. Findings: The OSTHCN model achieved an accuracy of 96.74% on the ISL Continuous Sign Language Translation Recognition (ISL-CSLTR) dataset, outperforming existing prediction models. Novelty: This model employs a unique occlusion-handling strategy for skeleton extraction, estimating occluded parts and integrating connected subgraphs via minimal-cost path searches for more precise skeleton parts, enhancing accuracy for ISL recognition and translation. Keywords: Sign Language, Graph Convolutional Neural Network, Ellipsoidal Contour, 3D Likelihood Map, Occupancy Probability
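A minimal sketch of just the B-spline ingredient named in the Methods: fitting a smooth cubic B-spline through observed skeleton branch points (with an occlusion gap) using SciPy and resampling it densely. The point values are invented for illustration; this is not the paper's 3D likelihood pipeline.

```python
import numpy as np
from scipy.interpolate import splev, splprep

# Observed 2-D branch points along a finger skeleton, with an occlusion gap.
x = np.array([0.00, 0.10, 0.20, 0.55, 0.70, 0.80])
y = np.array([0.00, 0.15, 0.28, 0.52, 0.58, 0.60])

tck, _ = splprep([x, y], s=0.001, k=3)    # cubic B-spline through the points
u = np.linspace(0, 1, 100)
xs, ys = splev(u, tck)                    # dense resampling bridges the gap

print(np.column_stack([xs, ys])[:5])      # first few reconstructed points
```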
APA, Harvard, Vancouver, ISO, and other styles
49

Mangai, V. "Comparative Analysis of Various Yolo Models for Sign Language Recognition with a specific dataset." Journal of Information Systems Engineering and Management 10, no. 43s (2025): 544–50. https://doi.org/10.52783/jisem.v10i43s.8443.

Full text
Abstract:
Understanding and producing sign language are difficult communication tasks between hearing people and deaf and mute people, and vice versa. To enhance sign-language-based communication, several models have been developed to make sign language understandable by translating gestures into words. The ultimate goal of this research paper is to analyse and compare various You Only Look Once (YOLO) models on the SLR problem. YOLO is a fast and efficient convolutional neural network (CNN) variant that provides a better solution for sign language problems. Comparing different YOLO models on an Indian Sign Language (ISL) dataset can identify a suitable YOLO model for SLR. Therefore, the proposed work uses the ISign benchmark dataset. The ISL-based comparative analysis is implemented in Python, where various performance metrics are calculated to select the best YOLO model. This paves the way for a fast and efficient means of recognizing sign gestures.
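A sketch of the comparison methodology above using the Ultralytics API: train several YOLO variants on one ISL dataset config and compare validation mAP. The dataset YAML name is a hypothetical stand-in for the ISign benchmark.

```python
from ultralytics import YOLO

results = {}
for weights in ["yolov5nu.pt", "yolov8n.pt", "yolo11n.pt"]:   # candidates
    model = YOLO(weights)
    model.train(data="isign_isl.yaml", epochs=50, imgsz=640)  # assumed YAML
    metrics = model.val()
    results[weights] = metrics.box.map        # mAP50-95 on validation split

best = max(results, key=results.get)
print("Best model:", best, results[best])
```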
APA, Harvard, Vancouver, ISO, and other styles
50

Sharma, Sakshi, and Sukhwinder Singh. "Recognition of Indian Sign Language (ISL) Using Deep Learning Model." Wireless Personal Communications 123, no. 1 (2021): 671–92. http://dx.doi.org/10.1007/s11277-021-09152-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles