
Journal articles on the topic 'Israeli Sign Language (ISL)'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Israeli Sign Language (ISL).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Jaraisy, Marah, and Rose Stamp. "The Vulnerability of Emerging Sign Languages: (E)merging Sign Languages?" Languages 7, no. 1 (2022): 49. http://dx.doi.org/10.3390/languages7010049.

Abstract:
Emerging sign languages offer linguists an opportunity to observe language emergence in real time, far beyond the capabilities of spoken language studies. Sign languages can emerge in different social circumstances—some in larger heterogeneous communities, while others in smaller and more homogeneous communities. Often, examples of the latter, such as Ban Khor Sign Language (in Thailand), Al Sayyid Bedouin Sign Language (in Israel), and Mardin Sign Language (in Turkey), arise in communities with a high incidence of hereditary deafness. Traditionally, these communities were in limited contact with the wider deaf community in the region, and so the local sign language remained relatively uninfluenced by the surrounding signed language(s). Yet, in recent years, changes in education, mobility, and social communication patterns have resulted in increased interaction between sign languages. Rather than undergoing language emergence, these sign languages are now facing a state of “mergence” with the majority sign language used by the wider deaf community. This study focuses on the language contact situation between two sign languages in Kufr Qassem, Israel. In the current situation, third-generation deaf signers in Kufr Qassem are exposed to the local sign language, Kufr Qassem Sign Language (KQSL), and the dominant sign language of the wider Israeli deaf community, Israeli Sign Language (ISL), both of which emerged around 90 years ago. In the current study, we analyzed the signing of twelve deaf sign-bilinguals from Kufr Qassem whilst they engaged in a semi-spontaneous task in three language conditions: (1) with another bilingual signer, (2) with a monolingual KQSL signer, and (3) with a monolingual ISL signer. The results demonstrate that KQSL-ISL sign-bilinguals show a preference for ISL in all conditions, even when paired with a monolingual KQSL signer. We conclude that the degree of language shift in Kufr Qassem is considerable. KQSL may be endangered due to the risk of social and linguistic mergence of the KQSL community with the ISL community in the near future.
2

Meir, Irit. "A Perfect Marker in Israeli Sign Language." Sign Language and Linguistics 2, no. 1 (1999): 43–62. http://dx.doi.org/10.1075/sll.2.1.04mei.

Abstract:
In this paper I argue for the existence of an aspectual marker in Israeli Sign Language (ISL) denoting perfect constructions. This marker is the sign glossed as ALREADY. Though this sign often occurs in past time contexts, I argue that it is a perfect-aspect marker and not a past tense marker. This claim is supported by the following observations: (a) ALREADY can co-occur with past, present and future time adverbials; (b) its core meaning is to relate a resultant state to a prior event; (c) it occurs much more in dialogues than in narrative contexts. Further examination of the properties and functions of ALREADY in the language reveals that it shares many properties with perfect constructions in other languages. In addition, it is shown that the co-occurrence of ALREADY with various time adverbials, as well as with the durational aspectual modulation, gives rise to a rich aspectual system in the language. This aspectual system is compared to similar systems in other languages. The ISL system turns out to be very different from that of Hebrew on the one hand, while showing significant similarities to that of ASL. However, there are also some differences between ISL and ASL aspectual markers, which might be due to the relative youth of ISL, and to the different source for the aspectual marker: a verb in the case of ASL, and an adverb in ISL.
3

Meir, Irit. "Question and Negation in Israeli Sign Language." Sign Language and Linguistics 7, no. 2 (2006): 97–124. http://dx.doi.org/10.1075/sll.7.2.03mei.

Abstract:
The paper presents the interrogative and negative constructions in Israeli Sign Language (ISL). Both manual and nonmanual components of these constructions are described, revealing a complex and rich system. In addition to the basic lexical terms, ISL uses various morphological devices to expand its basic question and negation vocabulary, such as compounding and suffixation. The nonmanual component consists of specific facial expressions, head and body posture, and mouthing. The use of mouthing is especially interesting, as ISL seems to use it extensively, both as a word formation device and as a grammatical marker for negation. Interrogative and negative constructions interact with other grammatical categories in the language; i.e., the distribution of various negation words is determined by the lexical category of the negated word. Thus, the distribution of negation words provides evidence for the existence of Nouns, Verbs and Adjectives as formal categories in the language. Finally, a diachronic comparison between present day ISL and earlier stages of the language reveals interesting traits in the development of these systems.
4

Lanesman, Sara, and Rose Stamp. "A Sociolinguistic Analysis of Name Signs in Israeli Sign Language." Sign Language Studies 25, no. 2 (2025): 293–324. https://doi.org/10.1353/sls.2025.a953724.

Abstract:
Name sign systems have been described in many deaf communities around the world. The most frequent name sign types are associated with an individual's appearance, for example, a signer's hairstyle, clothes, and physical features such as height, weight, etc. However, a recent study that examined name signs in Swedish Sign Language found a decrease in name signs based on appearance and an increase in person name signs, suggesting that name signs are undergoing changes. This study examines name signs produced by 160 deaf signers of Israeli Sign Language (ISL), a sign language that emerged in Israel around ninety years ago. The findings show that, as in other studies, name signs based on appearance are the most frequent in ISL. However, the distribution of name sign types differed based on signers' age and language background. Older signers and deaf people from hearing families are more likely to have name signs related to their appearance, while younger signers and deaf people from deaf families are more likely to have name signs related to their legal name, including initialized name signs or signs based on the literal translation of the name. The results are discussed in light of changes in society, including changes in deaf education and a rise in political correctness.
5

Novogrodsky, Rama, and Natalia Meir. "Age, frequency, and iconicity in early sign language acquisition: Evidence from the Israeli Sign Language MacArthur–Bates Communicative Developmental Inventory." Applied Psycholinguistics 41, no. 4 (2020): 817–45. http://dx.doi.org/10.1017/s0142716420000247.

Abstract:
The current study described the development of the MacArthur–Bates Communicative Developmental Inventory (CDI) for Israeli Sign Language (ISL) and investigated the effects of age, sign iconicity, and sign frequency on lexical acquisition of bimodal-bilingual toddlers acquiring ISL. Previous findings bring inconclusive evidence on the role of sign iconicity (the relationship between form and meaning) and sign frequency (how often a word/sign is used in the language) in the acquisition of signs. The ISL-CDI consisted of 563 video clips. Iconicity ratings from 41 sign-naïve Hebrew-speaking adults (Study 1A) and sign frequency ratings from 19 native ISL adult signers (Study 1B) were collected. ISL vocabulary was evaluated in 34 toddlers, native signers (Study 2). Results indicated significant effects of age, strong correlations between parental ISL ratings and ISL vocabulary size even when age was controlled for, and strong correlations between naturalistic data and ISL-CDI scores, supporting the validity of the ISL-CDI. Moreover, the results revealed effects of iconicity, frequency, and interactions between age and the iconicity and frequency factors, suggesting that both iconicity and frequency are modulated by age. The findings contribute to the field of sign language acquisition and to our understanding of potential factors affecting human language acquisition beyond language modality.
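The correlation "between parental ISL ratings and ISL vocabulary size even when age was controlled for" is a partial correlation. The Python sketch below shows one minimal way to compute it; the column roles and the simulated toddler data are hypothetical stand-ins, not the study's data.

```python
import numpy as np
from scipy import stats

def partial_corr(x, y, covar):
    """Correlation of x and y after regressing the covariate out of both."""
    rx = x - np.polyval(np.polyfit(covar, x, 1), covar)  # residualize x on covar
    ry = y - np.polyval(np.polyfit(covar, y, 1), covar)  # residualize y on covar
    return stats.pearsonr(rx, ry)

# Hypothetical data for 34 toddlers: age (months), parental rating, CDI vocabulary size.
rng = np.random.default_rng(0)
age = rng.uniform(12, 36, 34)
rating = 2.0 * age + rng.normal(0, 5, 34)
vocab = 3.0 * age + 0.5 * rating + rng.normal(0, 10, 34)

r, p = partial_corr(rating, vocab, age)
print(f"partial r = {r:.2f}, p = {p:.3f}")
```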
6

Fuks, Orit. "Intensifier actions in Israeli Sign Language (ISL) discourse." Gesture 15, no. 2 (2016): 192–223. http://dx.doi.org/10.1075/gest.15.2.03fuk.

Abstract:
The study describes certain structural modifications employed on the citation forms of ISL during signing for intensification purposes. In signed languages, citation forms are considered relatively immune to modifications. Nine signers signed several scenarios describing some intense quality. The signers used conventional adverbs existing in ISL for intensification purposes, yet they also employed idiosyncratic modifications of the formational components of adjectives simultaneously with form realization. These optional modifications enriched the messages conveyed merely by the conventional forms. They show that signers can incorporate gradient modes of expression directly into the production of lexical items to communicate more diverse and explicit messages in context. Using a comparative semiotic approach allowed us to describe the synergetic cooperation, manifested at the stage of utterance construction, between formational elements that were more suited to convey gradient and analog meanings in context and those that were less suited and thus not modified.
7

Sandler, Wendy, Gal Belsitzman, and Irit Meir. "Visual foreign accent in an emerging sign language." Sign Language and Linguistics 23, no. 1-2 (2020): 233–57. http://dx.doi.org/10.1075/sll.00050.san.

Abstract:
In the study of sign language phonology, little attention has been paid to the phonetic detail that distinguishes one sign language from another. We approach this issue by studying the foreign accent of signers of a young sign language – Al-Sayyid Bedouin Sign Language (ABSL) – which is in contact with another sign language in the region, Israeli Sign Language (ISL). By comparing ISL signs and sentences produced by ABSL signers with those of ISL signers, we uncover language particular features at a level of detail typically overlooked in sign language research. For example, within signs we find reduced occlusion (lack of contact), and across phrases there is frequent long distance spreading of the nondominant hand. This novel study of an emerging language in a language contact environment provides a model for comparative sign language phonology, and suggests that a community's signature accent is part of the evolution of a phonological system.
8

Stamp, Rose, Duaa Omar-Hajdawood, and Rama Novogrodsky. "Topical Influence: Reiterative Code-Switching in the Kufr Qassem Deaf Community." Sign Language Studies 24, no. 4 (2024): 771–802. http://dx.doi.org/10.1353/sls.2024.a936333.

Abstract:
Reiterative code-switching, when one lexical item from one language is produced immediately after a semantically equivalent lexical item in another language, is a frequent phenomenon in studies of language contact. Several spoken language studies suggest that reiteration functions as a form of accommodation, amplification (emphasis), reinforcement, or clarification; however, its function in sign language seems less clear. In this study, we investigate reiterative code-switching produced in semispontaneous conversations while manipulating two important factors: interlocutor and topic. Ten bilinguals of Kufr Qassem Sign Language (KQSL), a local sign language used in central Israel, and Israeli Sign Language (ISL), the national sign language of Israel, participated in a semispontaneous conversation task in three interlocutor conditions, with: (1) another bilingual, (2) a KQSL-dominant signer, and (3) an ISL-dominant signer. They were given "local" (e.g., traditions in Kufr Qassem) and "global" (e.g., travel) topics to discuss. A total of 673 code-switches were found in the data, of which sixty-seven were reiterative. Interlocutor was found to be a significant predictor of the presence of reiterative code-switching, with more reiterations observed when participants interacted with a KQSL-dominant signer or bilingual than with an ISL-dominant signer. These results suggest that reiteration serves an accommodative function. Yet, this does not explain reiterations found in the bilingual-bilingual condition. We show that, in these cases, reiteration plays other roles beyond accommodation, including amplification.
9

Tkachman, Oksana, and Wendy Sandler. "The noun–verb distinction in two young sign languages." Gesture 13, no. 3 (2013): 253–86. http://dx.doi.org/10.1075/gest.13.3.02tka.

Abstract:
Many sign languages have semantically related noun-verb pairs, such as ‘hairbrush/brush-hair’, which are similar in form due to iconicity. Researchers studying this phenomenon in sign languages have found that the two are distinguished by subtle differences, for example, in type of movement. Here we investigate two young sign languages, Israeli Sign Language (ISL) and Al-Sayyid Bedouin Sign Language (ABSL), to determine whether they have developed a reliable distinction in the formation of noun-verb pairs, despite their youth, and, if so, how. These two young language communities differ from each other in terms of heterogeneity within the community, contact with other languages, and size of population. Using methodology we developed for cross-linguistic comparison, we identify reliable formational distinctions between nouns and related verbs in ISL, but not in ABSL, although early tendencies can be discerned. Our results show that a formal distinction in noun-verb pairs in sign languages is not necessarily present from the beginning, but may develop gradually instead. Taken together with comparative analyses of other linguistic phenomena, the results lend support to the hypothesis that certain social factors such as population size, domains of use, and heterogeneity/homogeneity of the community play a role in the emergence of grammar.
10

Sandler, Wendy. "The Medium and the Message." Sign Language and Linguistics 2, no. 2 (1999): 187–215. http://dx.doi.org/10.1075/sll.2.2.04san.

Abstract:
In natural communication, the medium through which language is transmitted plays an important and systematic role. Sentences are broken up rhythmically into chunks; certain elements receive special stress; and, in spoken language, intonational tunes are superimposed onto these chunks in particular ways — all resulting in an intricate system of prosody. Investigations of prosody in Israeli Sign Language demonstrate that sign languages have comparable prosodic systems to those of spoken languages, although the phonetic medium is completely different. Evidence for the prosodic word and for the phonological phrase in ISL is examined here within the context of the relationship between the medium and the message. New evidence is offered to support the claim that facial expression in sign languages corresponds to intonation in spoken languages, and the term "superarticulation" is coined to describe this system in sign languages. Interesting formal differences between the intonational tunes of spoken language and the "superarticulatory arrays" of sign language are shown to offer a new perspective on the relation between the phonetic basis of language, its phonological organization, and its communicative content.
11

Fuks, Orit. "The distribution of handshapes in the established lexicon of Israeli Sign Language (ISL)." Semiotica 2021, no. 242 (2021): 101–22. http://dx.doi.org/10.1515/sem-2019-0049.

Abstract:
Our study focuses on the perception of the iconicity of handshapes, one of the formational parameters of the sign in signed language. Seventy Hebrew speakers were asked to match handshapes to Hebrew translations of 45 signs (that varied in degree of iconicity), which are specified for one of the handshapes in Israeli Sign Language (ISL). The results show that participants reliably match handshapes to corresponding sign translations for highly iconic signs, but are less accurate for less iconic signs. This demonstrates that there is a notable degree of iconicity in the lexicon of ISL, which is recognizable even to non-signers. The ability of non-signers to match handshapes to forms is explained by the fact that word meanings are understood by both deaf and hearing people via the mental elaboration of simple iconic sources in which handshape meanings are grounded. The results suggest that while language-external iconic mapping could ease the learning of direct iconic forms, it has a more limited capacity to help hearing non-signers learn indirect and opaque forms. The full semiotic distribution of handshapes in the lexicon and their use in language remain difficult for hearing non-signers to understand and depend on more specific language and cultural knowledge.
12

Ezra, Din, Shai Mastitz, and Irina Rabaev. "Signsability: Enhancing Communication through a Sign Language App." Software 3, no. 3 (2024): 368–79. http://dx.doi.org/10.3390/software3030019.

Abstract:
The integration of sign language recognition systems into digital platforms has the potential to bridge communication gaps between the deaf community and the broader population. This paper introduces an advanced Israeli Sign Language (ISL) recognition system designed to interpret dynamic motion gestures, addressing a critical need for more sophisticated and fluid communication tools. Unlike conventional systems that focus solely on static signs, our approach incorporates both deep learning and Computer Vision techniques to analyze and translate dynamic gestures captured in real-time video. We provide a comprehensive account of our preprocessing pipeline, detailing every stage from video collection to the extraction of landmarks using MediaPipe, including the mathematical equations used for preprocessing these landmarks and the final recognition process. The dataset utilized for training our model is unique in its comprehensiveness and is publicly accessible, enhancing the reproducibility and expansion of future research. The deployment of our model on a publicly accessible website allows users to engage with ISL interactively, facilitating both learning and practice. We discuss the development process, the challenges overcome, and the anticipated societal impact of our system in promoting greater inclusivity and understanding.
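As a rough illustration of the preprocessing pipeline this abstract outlines (per-frame landmark extraction with MediaPipe before recognition), here is a hedged Python sketch. The wrist-relative, scale-normalized encoding and the input file name are assumptions, not the authors' published code.

```python
import cv2
import mediapipe as mp
import numpy as np

hands = mp.solutions.hands.Hands(static_image_mode=False, max_num_hands=2)

def frame_to_landmarks(frame_bgr):
    """Extract 21 (x, y, z) landmarks per detected hand from one video frame."""
    results = hands.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not results.multi_hand_landmarks:
        return None
    vecs = []
    for hand in results.multi_hand_landmarks:
        pts = np.array([[lm.x, lm.y, lm.z] for lm in hand.landmark])
        pts -= pts[0]                       # translate so the wrist is the origin
        scale = np.linalg.norm(pts, axis=1).max()
        if scale > 0:
            pts /= scale                    # normalize for hand size / camera distance
        vecs.append(pts.flatten())
    return np.concatenate(vecs)

cap = cv2.VideoCapture("sign_clip.mp4")     # hypothetical input clip
sequence = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    vec = frame_to_landmarks(frame)
    if vec is not None:
        sequence.append(vec)                # one feature vector per frame
cap.release()
```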
13

Fuks, Orit. "Iconicity Perception under the Lens of Iconicity Rating and Transparency Tasks in Israeli Sign Language (ISL)." Sign Language Studies 24, no. 1 (2023): 46–92. http://dx.doi.org/10.1353/sls.2023.a912330.

Abstract:
This study undertook iconicity ratings and conducted transparency experiments on Israeli Sign Language (ISL). Experiment 1 compared the iconicity ratings of 520 lexical signs of ten Deaf ISL signers and thirteen hearing nonsigners. Ratings were found to be affected by language knowledge, lexical class, and type of iconic mapping, as well as by factors less connected to iconicity, such as a sense of familiarity with a form. In experiment 2, twenty nonsigners guessed the meaning of the 520 signs, and the correct guesses were correlated with the iconicity scores. Overall, nonsigners tended to interpret signs as representing actions. The results demonstrated that (1) signers' ratings reflect the diverse semiotic ways that meanings are represented in the lexicon, predictably more so than nonsigners' ratings, and (2) when meanings are not provided, the perception of iconicity is attuned mostly to the movement aspect of the forms. It is recommended that both studies be conducted together in order to achieve a more nuanced picture concerning the perception of iconicity and its role in the lexicon.
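Experiment 2's analysis (correlating nonsigners' correct guesses with iconicity scores across the 520 signs) is an item-level rank correlation. A minimal SciPy sketch with simulated stand-in data; the 1-7 rating scale and the effect size are assumptions:

```python
import numpy as np
from scipy import stats

# Simulated per-sign data for 520 items: mean iconicity rating (1-7 scale assumed)
# and the proportion of 20 nonsigners who guessed the sign's meaning correctly.
rng = np.random.default_rng(1)
iconicity = rng.uniform(1, 7, 520)
guess_rate = np.clip(0.12 * iconicity + rng.normal(0, 0.15, 520), 0, 1)

rho, p = stats.spearmanr(iconicity, guess_rate)  # rank correlation suits bounded rates
print(f"Spearman rho = {rho:.2f}, p = {p:.1e}")
```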
14

Dachkovsky, Svetlana. "From a demonstrative to a relative clause marker." Sign Language and Linguistics 23, no. 1-2 (2020): 142–70. http://dx.doi.org/10.1075/sll.00047.dac.

Abstract:
Demonstratives provide an important link between gesture, discourse and grammar due to their communicative function to coordinate the interlocutor's focus of attention. This underlies their frequent cross-linguistic development into a wide range of function words and morphemes (Diessel 1999). The present study provides evidence for a link between gesture and grammar by tracking diachronic development of a relative clause marker in Israeli Sign Language (ISL) restrictive relative clauses, which starts as a gestural locative pointing sign, and grammaticalizes into a relative pronoun connecting relative and main clauses and agreeing with referent loci, and then into an invariant relativizer. Diachronic changes are inferred from the data collected from three generations of signers. The results reveal that the behavior of demonstratives in the data varied with the signers' ages according to four diagnostic criteria of grammaticalization (e.g., Hopper & Traugott 2003): increased systematicity, distributional and morphological changes, and phonetic reduction.
15

Swead, Riki Taitelbaum, Yaniv Mama, and Michal Icht. "The Effect of Presentation Mode and Production Type on Word Memory for Hearing Impaired Signers." Journal of the American Academy of Audiology 29, no. 10 (2018): 875–84. http://dx.doi.org/10.3766/jaaa.17030.

Abstract:
Production effect (PE) is a memory phenomenon referring to better memory for produced (vocalized) than for non-produced (silently read) items. Reading aloud was found to improve verbal memory for normal-hearing individuals, as well as for cochlear implant users, studying visually and aurally presented material. The present study tested the effect of presentation mode (written or signed) and production type (vocalization or signing) on word memory in a group of hearing-impaired young adults, sign-language users. A PE paradigm was used, in which participants learned lexical items by two presentation modes, written or signed. We evaluated the efficacy of two types of productions, vocalization and signing, using a free recall test. Twenty hearing-impaired young adults, Israeli Sign Language (ISL) users, participated in the study: ten individuals who mainly use manual communication (MC) (ISL as a first language), and ten who mainly use total communication (TC). For each condition, we calculated the proportion of study words recalled. A mixed-design analysis of variance was conducted, with learning condition (written-vocalize, written-signed, and manual-signed) and production type (production and no-production) as within-subject variables, and group (MC and TC) as a between-subject variable. Production benefit was documented across all learning conditions, with better memory for produced over non-produced words. Recall rates were higher when learning written words relative to signed words. Production by signing yielded better memory relative to vocalizing. The results are explained in light of the encoding distinctiveness account, namely, the larger the number of unique encoding processes involved at study, the better the memory benefit.
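The mixed-design analysis of variance described above crosses within-subject factors with a between-subject group factor. A hedged sketch using the pingouin library on simulated data; for brevity it keeps a single within-subject factor (production), whereas the study also crossed learning condition:

```python
import numpy as np
import pandas as pd
import pingouin as pg

# Simulated long-format recall data: 20 participants (10 MC, 10 TC), proportion
# of words recalled for produced vs. non-produced items.
rng = np.random.default_rng(2)
rows = []
for subj in range(20):
    group = "MC" if subj < 10 else "TC"
    for production in ("produced", "not_produced"):
        mean = 0.55 if production == "produced" else 0.45  # assumed production benefit
        rows.append({"subject": subj, "group": group,
                     "production": production, "recall": rng.normal(mean, 0.08)})
df = pd.DataFrame(rows)

print(pg.mixed_anova(data=df, dv="recall", within="production",
                     subject="subject", between="group"))
```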
16

Jain, Preeti. "Healthcare Application Using Indian Sign Language." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 09, no. 06 (2025): 1–9. https://doi.org/10.55041/ijsrem50261.

Abstract:
The absence of standardized and easily available technology solutions for people with disabilities—especially those who use Indian Sign Language (ISL) to access vital services like healthcare—means that there are still significant communication hurdles in India. Unlike American Sign Language (ASL), which is mostly one-handed, ISL relies on intricate two-handed motions, which creates unique difficulties for software-based interpretation systems. The lack of extensive, standardized ISL datasets, which are essential for developing precise machine learning and gesture recognition models, exacerbates these difficulties even further. The lack of a complete ISL-based solution still prevents ISL users from accessing essential services like healthcare, even with improvements in sign language recognition technology. Although several platforms provide sign language translation, the majority are not prepared to deal with the particular needs of ISL. In addition to investigating recent developments in ISL translation, gesture recognition, and letter recognition, this project seeks to create an ISL communication system especially suited for hospital situations. The foundation for improved ISL accessibility in the healthcare industry and beyond will be laid by investigating fundamental techniques including deep learning, machine learning, and real-time processing. Keywords: Indian Sign Language (ISL), gesture recognition, real-time translation, accessibility, deep learning.
17

Shinde, Aditya. "Indian Sign Language Detection." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 09, no. 01 (2025): 1–9. https://doi.org/10.55041/ijsrem41093.

Abstract:
The communication gap remains one of the most significant barriers between individuals with hearing and speech impairments and the broader society. This project addresses this challenge by developing a real-time Indian Sign Language (ISL) detection system that leverages computer vision and machine learning techniques. By capturing hand gestures from video input, the system translates these movements into text or speech, enabling effective communication between ISL users and those unfamiliar with the language. Additionally, the system incorporates text-to-speech functionality, ensuring a seamless and humanized interaction experience. The proposed model utilizes Convolutional Neural Networks (CNNs) for image processing and gesture recognition, trained on a comprehensive dataset of ISL gestures. The framework employs preprocessing, feature extraction, and classification algorithms to accurately identify static and dynamic gestures. The system is designed to focus on the nuances of ISL, providing accurate recognition of gestures in real time while offering multilingual support. This initiative aspires to create an inclusive environment by empowering the hearing-impaired community and promoting better integration within society. By using cost-effective techniques, the project ensures scalability and practicality for everyday applications, making communication more efficient and inclusive. Keywords: Indian Sign Language (ISL), Gesture Recognition, Convolutional Neural Networks (CNNs), Real-time Communication
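A minimal Keras sketch of the kind of CNN classifier this abstract describes for static gesture frames; the input size, layer widths, and class count are illustrative assumptions, not the paper's configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 35  # assumed: ISL digits plus letters; the paper's class count may differ

model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),           # grayscale gesture crops (assumed size)
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),                       # regularization for a small dataset
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=10, validation_split=0.1)
```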
18

Gandhe, Dakshesh, Pranay Mokar, Aniruddha Ramane, and R. M. Chopade. "Sign Language Recognition for Real-time Communication." International Journal for Research in Applied Science and Engineering Technology 12, no. 5 (2024): 288–93. http://dx.doi.org/10.22214/ijraset.2024.61514.

Abstract:
Sign language is an essential communication tool for India's Deaf and Hard of Hearing people. This study introduces a novel approach for recognising and synthesising Indian Sign Language (ISL) using Long Short-Term Memory (LSTM) networks. LSTM, a kind of recurrent neural network (RNN), has demonstrated promising performance in sequential data processing. In this study, we leverage LSTM to develop a robust ISL recognition system, which can accurately interpret sign gestures in real-time. Additionally, we employ LSTM-based models for ISL synthesis, enabling the conversion of spoken language into sign language for improved inclusivity and accessibility. We evaluate the proposed approach on a diverse dataset of ISL signs, achieving high recognition accuracy and natural sign synthesis. The integration of LSTM in ISL technology holds significant potential for breaking down communication barriers and improving the quality of life for India's deaf and hard of hearing people.
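The LSTM recognizer described above operates on gesture sequences rather than single frames. A hedged Keras sketch, assuming each video is reduced to a fixed-length sequence of per-frame feature vectors (all dimensions below are illustrative):

```python
from tensorflow.keras import layers, models

SEQ_LEN, FEAT_DIM, NUM_SIGNS = 30, 126, 50   # assumed: 30 frames, 2 hands x 21 x 3, 50 signs

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, FEAT_DIM)),  # one landmark vector per frame
    layers.LSTM(64, return_sequences=True),   # model the temporal unfolding of a sign
    layers.LSTM(64),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_SIGNS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```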
19

Aadhya, Satrasala, et al. "Indian Sign Language Translator Using CNN." International Journal of Computational Learning & Intelligence 4, no. 4 (2025): 792–98. https://doi.org/10.5281/zenodo.15279424.

Abstract:
The main focus of this paper is to create a real-time Indian Sign Language (ISL) translator designed to overcome the gap between the deaf and hard-of-hearing population and the hearing population. By leveraging computer vision techniques and machine learning models, the system can accurately recognize a wide range of ISL gestures and translate them into corresponding text outputs in English. The application is intended to facilitate seamless communication, enhancing accessibility in various settings such as education, healthcare, and daily interactions. This solution aims to foster greater inclusion and social integration for ISL users while addressing the lack of real-time ISL translation tools in India.
20

Wankhade, Vaishnavi. "Indian Sign Language Detection using Machine Learning." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 04 (2024): 1–5. http://dx.doi.org/10.55041/ijsrem30798.

Abstract:
Indian Sign Language (ISL) serves as a primary means of communication for millions of hearing-impaired individuals in India. However, the lack of comprehensive tools for interpreting ISL poses significant challenges in facilitating effective communication and integration of the deaf community into society. This research paper explores the advancements, challenges, and potential applications of Indian Sign Language detection technology. It provides an overview of existing techniques for ISL detection, including computer vision-based approaches and wearable devices. Additionally, the paper discusses the unique challenges associated with ISL detection, such as variations in gestures and environmental factors. Furthermore, it examines the potential applications of ISL detection technology in various domains, including education, healthcare, and accessibility. By analyzing current research trends and technological developments, this paper aims to contribute to the advancement of ISL detection technology and its societal impact. Keywords: Indian Sign Language, Sign Language Detection, Computer Vision, Wearable Devices, Accessibility, Communication, Deaf Community
21

Shinde, Aditya. "Enhanced Indian Sign Language Detection." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 09, no. 06 (2025): 1–9. https://doi.org/10.55041/ijsrem49874.

Abstract:
The communication problem involving members of society who have speech and hearing impairments is still not fully resolved. In an earlier study, we created a real-time Indian Sign Language (ISL) recognition system which uses LSTM architecture for sequential gesture recognition. The focus of this paper is on further improving this system by changing the architecture from LSTM to CNN to enhance spatial feature extraction and overall system performance. Using a more comprehensive ISL dataset, we trained and tested the model and added new advanced preprocessing techniques such as Gaussian blur and converting the images to grayscale. These modifications improved the accuracy of the model and reduced the processing power needed, allowing for more advanced, rapid, and reliable real-time ISL gesture recognition. This study is a step toward an effective, simple, and easy-to-use technological interface for deaf and hearing-impaired people in India. Keywords: Indian Sign Language (ISL), Gesture Recognition, Convolutional Neural Networks (CNNs), Real-time Communication, Image Preprocessing, Gaussian Blur, Grayscale Conversion, Sign Language Translation, Computer Vision, Human-Computer Interaction (HCI), Deep Learning
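The two preprocessing steps named in this abstract map directly onto OpenCV calls; the 5x5 kernel size below is an assumed value, not the paper's.

```python
import cv2

def preprocess(frame_bgr):
    """Grayscale conversion followed by Gaussian blur, as described in the abstract."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)  # drop color channels
    return cv2.GaussianBlur(gray, (5, 5), 0)            # smooth out sensor noise
```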
22

Das Chakladar, Debashis, Pradeep Kumar, Shubham Mandal, Partha Pratim Roy, Masakazu Iwamura, and Byung-Gyu Kim. "3D Avatar Approach for Continuous Sign Movement Using Speech/Text." Applied Sciences 11, no. 8 (2021): 3439. http://dx.doi.org/10.3390/app11083439.

Abstract:
Sign language is a visual language for communication used by hearing-impaired people with the help of hand and finger movements. Indian Sign Language (ISL) is a well-developed and standard way of communication for hearing-impaired people living in India. However, other people who use spoken language always face difficulty while communicating with a hearing-impaired person due to lack of sign language knowledge. In this study, we have developed a 3D avatar-based sign language learning system that converts the input speech/text into corresponding sign movements for ISL. The system consists of three modules. Initially, the input speech is converted into an English sentence. Then, that English sentence is converted into the corresponding ISL sentence using the Natural Language Processing (NLP) technique. Finally, the motion of the 3D avatar is defined based on the ISL sentence. The translation module achieves a 10.50 SER (Sign Error Rate) score.
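The middle module (English sentence to ISL sentence) in systems like this is often rule-based reordering into ISL's subject-object-verb order with function words dropped. The spaCy sketch below illustrates that general idea only; it is not the authors' NLP module, and real ISL grammar is far richer.

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def english_to_isl_gloss(sentence):
    """Toy English -> ISL gloss: drop function words, reorder roughly to SOV."""
    subj, obj, verb, rest = [], [], [], []
    for tok in nlp(sentence):
        if tok.is_stop or tok.is_punct:      # ISL gloss omits most function words
            continue
        if "subj" in tok.dep_:
            subj.append(tok.lemma_)
        elif "obj" in tok.dep_:
            obj.append(tok.lemma_)
        elif tok.pos_ == "VERB":
            verb.append(tok.lemma_)
        else:
            rest.append(tok.lemma_)
    return " ".join(w.upper() for w in subj + rest + obj + verb)

print(english_to_isl_gloss("The boy is eating a red apple"))  # e.g. BOY RED APPLE EAT
```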
23

Gaonkar, Niyati V., and Vishal R. Gori. "Real-Time Bidirectional Translation System Between Text and Indian Sign Language Using Deep Learning and NLP Techniques." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 09, no. 04 (2025): 1–9. https://doi.org/10.55041/ijsrem44150.

Abstract:
In this paper, we present a real-time translation system that bridges the communication gap between the hearing and non-hearing communities. Our system converts English text to Indian Sign Language (ISL) and vice versa, using Natural Language Processing (NLP) techniques and deep learning-based gesture recognition. The system supports video-based gesture recognition for ISL and provides accurate text translations in real time. This study addresses the technical challenges involved, including feature extraction from gestures and translating complex ISL sentences using neural networks like LSTM. Keywords: Indian Sign Language (ISL), Sign Language Translation, Gesture Recognition, Deep Learning, LSTM Model, Mediapipe Holistic, Text-to-Sign Conversion, Dynamic Gesture Segmentation, Fingerspelling, Natural Language Processing (NLP), Pose Estimation, Hand Landmark Tracking, Real-Time Sign Language Recognition, Data Augmentation, Accessibility Technology
24

Mishra, Ravita, Gargi Angne, Nidhi Gawde, Preeti Khamkar, and Sneha Utekar. "SignSpeak: Indian Sign Language Recognition with ML Precision." Indian Journal Of Science And Technology 18, no. 8 (2025): 620–34. https://doi.org/10.17485/ijst/v18i8.4049.

Abstract:
Objectives: To develop an accessible educational platform for Indian Sign Language (ISL) recognition, bridging communication gaps using advanced machine learning techniques, and promoting inclusivity for the hearing-impaired community. Methods: The study utilized Random Forest for classifying ISL letters and numbers with 1200 images per class and Long Short-Term Memory (LSTM)/Large Language Model (LLM) for gesture-based word and sentence recognition using 120 custom images. Feedback from Jhaveri Thanawala School for the Deaf validated the approach. Findings: The Random Forest model achieved 99.98% accuracy in recognizing ISL letters and numbers. LSTM and LLM models demonstrated 87% accuracy in translating gestures into meaningful sentences. The dynamic learning and quiz modules improved user engagement, facilitating effective ISL mastery. Feedback from Jhaveri Thanawala School confirmed its real-world usability. These results enhance prior works by offering an integrated, highly accurate platform to promote ISL adoption, enabling better societal inclusivity for individuals with hearing disabilities. Novelty: Signova integrates gesture recognition with sentence generation which makes it a unique ISL recognition system, achieving high accuracy while providing interactive tools for learning and practicing ISL to foster its accessibility. Keywords: Indian Sign Language, Sign-Language Recognition, Random Forest, Long Short-Term Memory, Large Language Model
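A minimal scikit-learn sketch of the Random Forest letter/number classifier reported above; the flattened-image features, 35-class split, and tree count are assumptions rather than the paper's setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-in data: flattened 64x64 grayscale gesture images, 35 classes (letters +
# digits assumed); the paper reports ~1200 images per class.
rng = np.random.default_rng(3)
X = rng.random((3500, 64 * 64))
y = np.repeat(np.arange(35), 100)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y)
clf = RandomForestClassifier(n_estimators=200)   # tree count assumed
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```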
25

Sarmad Khan, Hafiz Muhammad, Simon D. Mcloughlin, and Irene Murtagh. "Comparative Evaluation and Utilization of Convolutional Neural Network Architectures for Irish Sign Language Recognition." International Journal of Combinatorial Optimization Problems and Informatics 16, no. 1 (2025): 123–31. https://doi.org/10.61467/2007.1558.2025.v16i1.550.

Abstract:
Irish Sign Language (ISL) stands as a preferred mode of communication used by the deaf and hard-of-hearing community in Ireland. With its unique grammar, syntax, and lexicon, ISL plays a pivotal role in facilitating communication for thousands of individuals, reflecting centuries of cultural heritage and linguistic development. An estimated 5,000 Deaf individuals utilize ISL, with an additional 40,000 hearing individuals, spanning from regular to occasional users, also engaging with Irish Sign Language. Despite its cultural and linguistic importance, ISL faces numerous challenges in terms of technical accessibility. The exclusion of sign languages from modern language technologies places deaf or hard-of-hearing individuals at a disadvantage, exacerbating the barrier to human-to-human communication and further marginalizing an already under-resourced linguistic subset. This necessitates innovative approaches and technologies to enhance its utilization and promote inclusivity. This research evaluates the performance of various state-of-the-art deep neural network architectures for sign language recognition, in particular Irish Sign Language, as part of progress towards the development of an automatic computational annotation system for Irish Sign Language. Notably, the DenseNet architecture performed better than other architectures in ISL alphabet recognition, with an average accuracy of 99%. Our findings illustrate the potential of sophisticated deep neural networks to overcome constraints relating to the scarcity of ISL-specific data. This contribution provides the potential to further develop natural language processing tools and technologies for Irish Sign Language, which may alleviate the lack of technical communicative accessibility and inclusion for the deaf and hard-of-hearing community in Ireland.
26

Ravita, Mishra, Angne Gargi, Gawde Nidhi, Khamkar Preeti, and Utekar Sneha. "SignSpeak: Indian Sign Language Recognition with ML Precision." Indian Journal of Science and Technology 18, no. 8 (2025): 620–34. https://doi.org/10.17485/IJST/v18i8.4049.

Abstract:
Objectives: To develop an accessible educational platform for Indian Sign Language (ISL) recognition, bridging communication gaps using advanced machine learning techniques, and promoting inclusivity for the hearing-impaired community. Methods: The study utilized Random Forest for classifying ISL letters and numbers with 1200 images per class and Long Short-Term Memory (LSTM)/Large Language Model (LLM) for gesture-based word and sentence recognition using 120 custom images. Feedback from Jhaveri Thanawala School for the Deaf validated the approach. Findings: The Random Forest model achieved 99.98% accuracy in recognizing ISL letters and numbers. LSTM and LLM models demonstrated 87% accuracy in translating gestures into meaningful sentences. The dynamic learning and quiz modules improved user engagement, facilitating effective ISL mastery. Feedback from Jhaveri Thanawala School confirmed its real-world usability. These results enhance prior works by offering an integrated, highly accurate platform to promote ISL adoption, enabling better societal inclusivity for individuals with hearing disabilities. Novelty: Signova integrates gesture recognition with sentence generation, which makes it a unique ISL recognition system, achieving high accuracy while providing interactive tools for learning and practicing ISL to foster its accessibility. Keywords: Indian Sign Language, Sign-Language Recognition, Random Forest, Long Short-Term Memory, Large Language Model
27

N, Sudiksha. "Talking Fingers: Bridging the Communication Gap through Real-Time Speech-to-Indian Sign Language Translation." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 09, no. 01 (2025): 1–9. https://doi.org/10.55041/ijsrem40875.

Abstract:
"Talking Fingers" is an innovative initiative to facilitate communication between hearing and non-hearing individuals by building a web-based system that translates spoken language into Indian Sign Language (ISL). Although ISL is an essential means of communication for millions in India, it remains underserved by technologies dominated by American and British Sign Languages. Current tools rely on basic word-by-word translation with no contextual or grammatical accuracy. The proposed system therefore integrates speech recognition, NLP, and ISL visuals for real-time, context-aware translations. Spoken input is converted into text through the Google Speech API and then processed using NLP techniques to segment meaningful phrases. These phrases are matched with ISL visual representations, in the form of videos or GIFs, in a comprehensive database. A fallback mechanism ensures seamless communication by spelling out words letter by letter when specific ISL visuals are unavailable. The platform is a scalable and adaptable solution for public and educational spaces, bridging the communication gap for the deaf and hard-of-hearing community. With its emphasis on ISL and incorporation of advanced technologies, "Talking Fingers" delivers an inclusive and robust solution, empowering users and bringing greater inclusivity to communication. Keywords: Indian Sign Language (ISL), Natural Language Processing (NLP), Speech-to-Sign Translation, Communication Accessibility, Real-time Translation, Sign Language Automation
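The fallback mechanism described above (spelling a word letter by letter when no ISL visual exists) reduces to a dictionary lookup with a per-letter fallback. The clip names and entries below are hypothetical.

```python
ISL_CLIPS = {"hello": "hello.mp4", "thank": "thank.mp4"}              # hypothetical
ISL_LETTERS = {c: f"letters/{c}.mp4" for c in "abcdefghijklmnopqrstuvwxyz"}

def phrase_to_clips(phrase):
    """Map a phrase to ISL clips, fingerspelling any word with no dictionary entry."""
    clips = []
    for word in phrase.lower().split():
        if word in ISL_CLIPS:
            clips.append(ISL_CLIPS[word])
        else:  # fallback: spell the word out letter by letter
            clips.extend(ISL_LETTERS[c] for c in word if c in ISL_LETTERS)
    return clips

print(phrase_to_clips("hello Marah"))
# ['hello.mp4', 'letters/m.mp4', 'letters/a.mp4', 'letters/r.mp4', ...]
```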
28

Patil, Gouri Shanker, R. Rangasayee, and Geetha Mukundan. "Non-fluent aphasia in deaf user of Indian Sign Language." Cognitive Linguistic Studies 1, no. 1 (2014): 147–53. http://dx.doi.org/10.1075/cogls.1.1.07pat.

Abstract:
The current study describes aphasia in a deaf user of Indian Sign Language (ISL). One congenitally deaf adult with left-hemisphere damage (LHD) was evaluated for signs of aphasia. The tools used were the Aphasia Diagnostic Battery in Indian Sign Language (ADB in ISL), Magnetic Resonance Imaging (MRI) investigation, and a linguistic and neurobehavioral profile. The results of all investigative procedures revealed signs and symptoms consistent with non-fluent aphasia, specifically Broca's aphasia. The data from ISL in a brain-damaged individual further emphasize the role of the left hemisphere in sign language processing.
29

Pawar, Snehal, Pragati Salunke, Arati Mhasavade, Aishwaraya Bhutkar, and K. R. Pathak. "Real Time Identification of American Sign Language for Deaf and Dumb Community." Advancement in Image Processing and Pattern Recognition 2, no. 3 (2020): 1–7. https://doi.org/10.5281/zenodo.3600015.

Abstract:
The only way for the deaf and dumb to communicate is sign language, which involves hand gestures. In this system, we work on American Sign Language (ASL) alphabet (A-Z) and digit (0-9) identification, accompanied by our word-identification dataset of Indian Sign Language (ISL). Sign data samples make our system more faultless, error-free, and unambiguous with the help of a Convolutional Neural Network (CNN). Much research is going on in the field of sign language recognition today, but existing studies have failed to develop a trustworthy communication interpreter. The motivation of this system is to serve as a real-time two-way communication translator based on Indian Sign Language (ISL) with higher precision, efficiency, and accuracy. Indian Sign Language (ISL), used by the deaf-mute community in India, does have adequate, delightful, acceptable, meaningful, essential, and structural properties.
30

R, Thirumahal, Aswath Harish Jayaprakash, Shiva Prakash P, Yuvaraj Kesavan P, Chirenjeevi M, and Siva M. "Machine Learning based ISL Identification and Translation." Journal of Ubiquitous Computing and Communication Technologies 6, no. 4 (2024): 353–67. https://doi.org/10.36548/jucct.2024.4.003.

Abstract:
Sign language is an essential means of communication for the deaf and hard-of-hearing community. However, effective communication between sign language users and those unfamiliar with sign language can be challenging. The primary goal is to utilize machine learning to automatically identify sign language gestures and translate them into easily understandable formats. This research presents a comprehensive sign language detection system that captures sign language gestures, detects them, and provides output as text using LSTM (Long Short-Term Memory) and Transformers, with an accuracy of 79%. This multimodal approach ensures the system helps in understanding sign language.
31

Pradnya D. Bormane. "Indian Sign Language Recognition: Support Vector Machine Approach." Advances in Nonlinear Variational Inequalities 27, no. 3 (2024): 716–27. http://dx.doi.org/10.52783/anvi.v27.1438.

Abstract:
Indian Sign Language (ISL) is the primary form of communication for the deaf and dumb community in India. Recognizing Indian Sign Language plays an imperative part in promoting communication rights, social inclusion, and equality for deaf people, while also contributing to technological advancement and cultural diversity. A system's ability to automatically recognize ISL signs could significantly improve interactions between deaf people and people with hearing loss in the community. The objective of this research is to design a system that can accurately recognize and interpret Indian Sign Language (ISL), thereby improving communication accessibility for the deaf and dumb community, and to enhance the accuracy of ISL recognition. In this research, a machine learning approach for Sign Language (SL) recognition using a Support Vector Machine (SVM) is implemented. The SVM model was trained using a linear kernel and a regularization parameter (C) set to 0.999 on a dataset of sequences for gesture recognition. After training, the model achieved an accuracy of 86% on the test data. The development and implementation of a gesture recognition system can increase awareness of the communication needs and rights of deaf people.
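The abstract is unusually specific about the classifier settings (linear kernel, C = 0.999), which map directly onto scikit-learn's SVC. The random feature matrix below is only a stand-in; real inputs would come from the upstream gesture feature extraction, which the abstract does not specify.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Stand-in data: 200 gesture feature vectors of length 63 (e.g., 21 landmarks x 3),
# 10 sign classes; purely illustrative.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 63))
y = rng.integers(0, 10, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
clf = SVC(kernel="linear", C=0.999)   # the kernel and C value reported in the abstract
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```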
32

Mistree, Kinjal, Devendra Thakor, and Brijesh Bhatt. "A Machine Translation System from Indian Sign Language to English Text." International Journal of Information Technologies and Systems Approach 15, no. 1 (2022): 1–23. http://dx.doi.org/10.4018/ijitsa.313419.

Abstract:
Sign language recognition and translation is a crucial step towards improving communication between the deaf and the rest of society. According to the Indian Sign Language Research and Training Centre (ISLRTC), India has around 300 certified human interpreters. With such a shortage of human interpreters, an alternative service is desired that helps people communicate smoothly with the deaf. In this study, an approach is presented that translates ISL sentences into English text using the MobileNetV2 model and neural machine translation (NMT). The system features an ISL corpus created from the Brown corpus using ISL grammar rules. The approach converts ISL videos into an ISL gloss sequence using the MobileNetV2 model, and the recognized ISL gloss sequence is then fed to the machine translation module. MobileNetV2 proved to be the best-suited model for recognition of ISL sentences, and NMT gives better results than statistical machine translation (SMT) for converting an ISL gloss sequence into English text. The automatic and human evaluation of the proposed approach gives 83.3% and 86.1% accuracy, respectively.
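A hedged sketch of using MobileNetV2 as the per-frame gloss recognizer described above, via transfer learning; the frozen ImageNet base, the classification head, and the gloss-vocabulary size are assumptions, not the paper's exact setup.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import MobileNetV2

NUM_GLOSSES = 100   # assumed size of the ISL gloss vocabulary

base = MobileNetV2(input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False                          # freeze ImageNet features

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(NUM_GLOSSES, activation="softmax"),  # one class per ISL gloss
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```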
33

Pol, Ajay M., et al. "Enhancing Sign Language Recognition through Fusion of CNN Models." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 10 (2023): 902–10. http://dx.doi.org/10.17762/ijritcc.v11i10.8608.

Abstract:
This study introduces a pioneering hybrid model designed for the recognition of sign language, with a specific focus on American Sign Language (ASL) and Indian Sign Language (ISL). Departing from traditional machine learning methods, the model ingeniously blends hand-crafted techniques with deep learning approaches to surmount inherent limitations. Notably, the hybrid model achieves an exceptional accuracy rate of 96% for ASL and 97% for ISL, surpassing the typical 90-93% accuracy rates of previous models. This breakthrough underscores the efficacy of combining predefined features and rules with neural networks. What sets this hybrid model apart is its versatility in recognizing both ASL and ISL signs, addressing the global variations in sign languages. The elevated accuracy levels make it a practical and accessible tool for the hearing-impaired community. This has significant implications for real-world applications, particularly in education, healthcare, and various contexts where improved communication between hearing-impaired individuals and others is paramount. The study represents a noteworthy stride in sign language recognition, presenting a hybrid model that excels in accurately identifying ASL and ISL signs, thereby contributing to the advancement of communication and inclusivity.
34

Patole, Piyush, Mihir Sarawate, and Krushna Joshi. "A Communication Translator Interface for Sign Language Interpretation." International Journal for Research in Applied Science and Engineering Technology 11, no. 5 (2023): 4546–58. http://dx.doi.org/10.22214/ijraset.2023.52325.

Abstract:
Sign language is an essential means of communication for deaf and hard-of-hearing individuals. However, there is no universal sign language; every country has its own native sign language. In India, Indian Sign Language (ISL) is used. This survey aims to provide an overview of the recognition and translation of essential Indian Sign Language. While significant research has been conducted on American Sign Language (ASL), the same cannot be said for Indian Sign Language due to its unique characteristics. The proposed method focuses on designing a tool for translating ISL hand gestures to help the deaf-mute community convey their ideas. A self-created ISL dataset was used to train the model for gesture recognition. The literature contains a plethora of methods for extracting features and classifying sign language, with a majority of them utilizing machine learning techniques. However, this article proposes the adoption of a deep learning method by designing a Convolutional Neural Network (CNN) model for the purpose of extracting sign language features and recognizing them accurately. This CNN model is specifically designed to identify complex patterns in the data and use them to efficiently recognize sign language features. By adopting this approach, it is expected that the recognition of sign language will improve significantly, providing a more effective means of communication for the deaf and hard-of-hearing community.
35

Attar, Rakesh Kumar, Vishal Goyal, and Lalit Goyal. "Development of Airport Terminology based Synthetic Animated Indian Sign Language Dictionary." Journal of Scientific Research 66, no. 05 (2022): 88–94. http://dx.doi.org/10.37398/jsr.2022.660512.

Abstract:
In the current era of computerization, the development of a synthetic animated Indian Sign Language (ISL) dictionary could prove very beneficial for deaf people to share their ideas, views, and thoughts with hearing people. Although many human-based video dictionaries are available, no synthetic animated ISL dictionary solely for public places has been developed yet. The development of an ISL dictionary of 1200 words using synthetic animation for airport terminology is reported in this article. The most frequently used words at airports are categorized and then translated into Signing Gesture Markup Language (SiGML), which generates the signs utilizing synthetic animations through a virtual avatar. The developed ISL dictionary can be used by automatic sign translation systems at airports, animating signs from written or spoken announcements. This ISL dictionary is used in the development of an airport announcement system for the deaf that is capable of displaying spoken airport announcements in ISL using synthetic animations. Moreover, the developed dictionary can prove very beneficial for educating deaf people and for assisting them when visiting public places.
36

Anjali, Mogusala. "Contextual Translation System to Sign Language." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 09, no. 02 (2025): 1–9. https://doi.org/10.55041/ijsrem41320.

Abstract:
People with hearing and speech disabilities face significant challenges in communicating with others, as not everyone understands sign language. This project aims to create a system that helps bridge this communication gap by converting spoken English into Indian Sign Language (ISL). The system works by recognizing voice input; the recognized speech is converted into text, which is then simplified using natural language processing techniques. Finally, the text is translated into ISL and displayed as a series of images or motion videos using Python libraries. This system provides an easy and accessible way for people with hearing or speech disabilities to communicate effectively, promoting inclusivity and understanding in everyday interactions.
37

G., Anvith. "Talking Fingers: A Multilingual Speech-to-Sign Language Converter." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 09, no. 01 (2025): 1–9. https://doi.org/10.55041/ijsrem40782.

Full text
Abstract:
Communication is a basic human need, but thousands of people with hearing and speech impairments face limitations in everyday communication. "Talking Fingers" is a modern assistive technology tool that converts spoken or written language into Indian Sign Language (ISL). With multilingual capabilities, the tool combines technologies such as Google ML Kit for language recognition and the MyMemory API for translation, applies ISL grammar rules, and makes several languages accessible at a time; the translated ISL output provides a simple and powerful communication channel. The system demonstrates the ability of artificial intelligence (AI) and language technology to promote inclusion and empower individuals with disabilities.
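The translation step could look roughly like the following, using MyMemory's public GET endpoint; language codes and error handling are simplified, and this is a sketch rather than the tool's actual code:

```python
# Sketch of a translation call to the public MyMemory REST API (illustrative;
# rate limits and error handling are omitted).
import requests

def translate(text: str, source: str = "hi", target: str = "en") -> str:
    resp = requests.get(
        "https://api.mymemory.translated.net/get",
        params={"q": text, "langpair": f"{source}|{target}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["responseData"]["translatedText"]

# The English output would then be re-ordered by ISL grammar rules before signing.
print(translate("नमस्ते"))
```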
APA, Harvard, Vancouver, ISO, and other styles
38

El Zaar, Abdellah, Nabil Benaya, and Abderrahim El Allati. "Sign Language Recognition: High Performance Deep Learning Approach Applyied To Multiple Sign Languages." E3S Web of Conferences 351 (2022): 01065. http://dx.doi.org/10.1051/e3sconf/202235101065.

Full text
Abstract:
In this paper we present a high-performance deep learning architecture based on a Convolutional Neural Network (CNN). The proposed architecture is effective, as it is capable of recognizing and analyzing different sign language datasets with high accuracy. Sign language recognition is one of the most important tasks that will change the lives of deaf people by facilitating their daily life and their integration into society. Our approach was trained and tested on an American Sign Language (ASL) dataset, an Irish Sign Language (ISL) alphabet dataset, and an Arabic Sign Language Alphabet (ArASL) dataset, and it outperforms state-of-the-art methods by providing a recognition rate of 99% for ASL and ISL, and 98% for ArASL.
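A sketch of how one saved model per dataset might be evaluated across the three benchmarks, assuming Keras directory-style datasets and models compiled with an accuracy metric; all paths are hypothetical:

```python
# Sketch of a cross-dataset evaluation loop (illustrative; directory paths
# and model files are hypothetical placeholders).
import tensorflow as tf

DATASETS = {"ASL": "data/asl", "IrishSL": "data/irish_sl", "ArASL": "data/arasl"}

for name, path in DATASETS.items():
    test_ds = tf.keras.utils.image_dataset_from_directory(
        path, image_size=(64, 64), color_mode="grayscale")
    model = tf.keras.models.load_model(f"models/{name}_cnn.keras")
    loss, acc = model.evaluate(test_ds, verbose=0)
    print(f"{name}: recognition rate = {acc:.2%}")
```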
APA, Harvard, Vancouver, ISO, and other styles
39

Yerpude, Poonam. "Non-Verbal (Sign Language) To Verbal Language Translator Using Convolutional Neural Network." International Journal for Research in Applied Science and Engineering Technology 10, no. 1 (2022): 269–73. http://dx.doi.org/10.22214/ijraset.2022.39820.

Full text
Abstract:
Communication is imperative in daily life. Hearing people use verbal language to communicate, while people with hearing or speech disabilities use sign language, a way of communicating with hand gestures and parts of the body instead of speaking and listening. As not all people are familiar with sign language, a language barrier remains, and there has been much research in this field to remove it. There are mainly two ways to convert sign language into speech or text to close the gap: sensor-based techniques and image processing. In this paper we look at the image processing technique, using a Convolutional Neural Network (CNN). We have built a sign detector that recognizes the sign numbers from 1 to 10; it can easily be extended to recognize other hand gestures, including alphabets (A–Z) and expressions. The model is based on Indian Sign Language (ISL). Keywords: Multilayer Perceptron (MLP), Convolutional Neural Network (CNN), Indian Sign Language (ISL), Region of Interest (ROI), Artificial Neural Network (ANN), VGG 16 (CNN vision architecture model), SGD (Stochastic Gradient Descent).
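The capture side of such a detector is often a fixed region of interest (ROI) cropped from the webcam feed; the sketch below shows that step with OpenCV, with arbitrary ROI coordinates and the trained model call left commented out:

```python
# Sketch of ROI-based capture for a CNN sign detector (illustrative; the ROI
# coordinates and input size are arbitrary assumptions).
import cv2

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    roi = frame[100:300, 100:300]                  # fixed region of interest
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)   # simplify to one channel
    gray = cv2.resize(gray, (64, 64)) / 255.0      # match the CNN's input
    # prediction = model.predict(gray[None, ..., None])  # classes 1..10
    cv2.rectangle(frame, (100, 100), (300, 300), (0, 255, 0), 2)
    cv2.imshow("sign detector", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```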
APA, Harvard, Vancouver, ISO, and other styles
40

Shahi, Mr Shivanshu. "Multilevel Conversion of Indian Sign Language from Gesture to Speech." International Journal for Research in Applied Science and Engineering Technology 13, no. 5 (2025): 5764–70. https://doi.org/10.22214/ijraset.2025.71548.

Full text
Abstract:
Indian Sign Language (ISL) serves as a primary mode of communication for Deaf and hard-of-hearing communities in India. However, despite its societal importance, ISL remains largely unsupported by mainstream technological platforms, limiting inclusive communication. This research introduces a real-time ISL recognition and translation system that converts hand gestures into corresponding text and speech outputs, enabling phrase-level communication rather than isolated character interpretation. The architecture uses a modular pipeline approach, with a Convolutional Neural Network (CNN) for accurate gesture classification, a phrase-mapping module to translate gestures into meaningful expressions, MediaPipe for accurate hand landmark detection, and a text-to-speech (TTS) system to turn the generated text into audible speech output. Unlike previous systems restricted to static signs, our approach supports semantically rich, multi-word phrases, enhancing natural communication flow. A specially constructed dataset of ten frequently used Indian Sign Language (ISL) phrases was used to train the model. To improve generalization, 150 samples from each class were taken in various lighting and background conditions. The final system achieved 95% classification accuracy, operated at 60 frames per second, and maintained latency below 100 milliseconds. Usability testing with multiple users confirmed the system's robustness, responsiveness, and accessibility. The findings demonstrate the viability of deploying deep learning-based ISL recognition systems in authentic environments, including public areas, healthcare facilities, and educational institutions.
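The landmark-extraction stage might be sketched as follows with MediaPipe Hands; the 63-feature layout and the downstream phrase classifier are assumptions, not the paper's exact design:

```python
# Sketch of the MediaPipe landmark-extraction stage of such a pipeline
# (illustrative; the downstream classifier and TTS step are only noted).
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.5)

def landmarks_from_frame(frame):
    """Return a flat list of 21 (x, y, z) hand landmarks, or None."""
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if not result.multi_hand_landmarks:
        return None
    lm = result.multi_hand_landmarks[0].landmark
    return [coord for p in lm for coord in (p.x, p.y, p.z)]  # 63 features

# A phrase classifier would consume these 63-dim vectors per frame, and a TTS
# engine (e.g., pyttsx3 or gTTS) would voice the mapped phrase.
```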
APA, Harvard, Vancouver, ISO, and other styles
41

Sinha, Pragya. "Design and Development of Indian Sign Language Character Recognition System." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 07, no. 12 (2023): 1–13. http://dx.doi.org/10.55041/ijsrem27773.

Full text
Abstract:
The purpose of this study is to investigate the challenges involved in categorizing Indian Sign Language (ISL) characters. While a great deal of research has been done in the related field of American Sign Language (ASL), much less has been done for ISL. The key barriers that have hindered ISL research are the lack of standard datasets, occluded features, and variation in the language across regions. Our study aims to advance this field by collecting a dataset from a deaf school and applying various feature extraction techniques to extract useful information, which is then fed into a range of supervised learning algorithms. Our current results for each approach include four-fold cross-validation. What sets our work apart from earlier research is that the validation set in our four-fold cross-validation contains photographs of people who do not appear in the training set. People with speech impairments communicate using hand gestures and signs, and understanding what they are trying to say is challenging for the average person. Systems that convert such data into Hindi exist but are extremely rare, so it is imperative to implement a system that enables the general public to understand and interpret these signs, gestures, and communications; such a system would close the communication gap between hearing people and those with speech difficulties. Sign language recognition and learning are the two primary research approaches centered on human-computer interaction, and multiple sensors are required for the data flow of sign language to be understood. This research paper focuses on the development of a Hindi-language training tool that can detect images and interpret what a person with speech impairments is trying to say. Keywords: Indian Sign Language (ISL), American Sign Language (ASL), Feature Extraction, Supervised Learning, Sign Language.
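The signer-independent validation protocol described here corresponds to grouped cross-validation; a sketch with scikit-learn's GroupKFold and placeholder data follows (the features, labels, and signer IDs stand in for the collected school dataset):

```python
# Sketch of signer-independent four-fold cross-validation (illustrative;
# X, y, and signer IDs are random placeholders for the real dataset).
import numpy as np
from sklearn.model_selection import GroupKFold
from sklearn.svm import SVC

X = np.random.rand(200, 64)             # placeholder feature vectors
y = np.random.randint(0, 10, 200)       # placeholder sign labels
signers = np.random.randint(0, 8, 200)  # signer ID per sample

for fold, (tr, va) in enumerate(GroupKFold(n_splits=4).split(X, y, groups=signers)):
    clf = SVC().fit(X[tr], y[tr])
    # No signer appears in both train and validation, matching the paper's setup.
    print(f"fold {fold}: accuracy = {clf.score(X[va], y[va]):.2%}")
```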
APA, Harvard, Vancouver, ISO, and other styles
42

Zeshan, Ulrike, and Sibaji Panda. "Sign-speaking: The structure of simultaneous bimodal utterances." Applied Linguistics Review 9, no. 1 (2018): 1–34. http://dx.doi.org/10.1515/applirev-2016-1031.

Full text
Abstract:
We present data from a bimodal trilingual situation involving Indian Sign Language (ISL), Hindi and English. Signers are co-using these languages while in group conversations with deaf people and hearing non-signers. The data show that in this context, English is an embedded language that does not impact on the grammar of the utterances, while both ISL and Hindi structures are realised throughout. The data show mismatches between the simultaneously expressed ISL and Hindi, such that semantic content and/or syntactic structures are different in both languages, yet are produced at the same time. The data also include instances of different propositions expressed simultaneously in the two languages. This under-documented behaviour is called "sign-speaking" here, and we explore its implications for theories of multilingualism, code-switching, and bilingual language production.
APA, Harvard, Vancouver, ISO, and other styles
43

Gudi, Swaroop. "Sign Language Detection Using Gloves." International Journal for Research in Applied Science and Engineering Technology 12, no. 11 (2024): 1387–91. http://dx.doi.org/10.22214/ijraset.2024.65315.

Full text
Abstract:
This paper presents a comprehensive system for real-time translation of Indian Sign Language (ISL) gestures into spoken language using gloves equipped with flex sensors. The system incorporates an Arduino Nano microcontroller for data acquisition, an HC-05 Bluetooth module for wireless data transmission, and an Android application for processing. A deep learning model, trained on an ISL dataset using Keras and TensorFlow, classifies the gestures. The processed data is then converted into spoken language using Google Text-to-Speech (GTTS). The gloves measure finger movements through flex sensors, with data transmitted to the Android app for real-time classification and speech synthesis. This system is designed to bridge communication gaps for the hearing-impaired community by providing an intuitive and responsive translation tool. Our evaluation shows high accuracy in gesture recognition, with average latency ensuring near real-time performance. The system's effectiveness is demonstrated through extensive testing, showcasing its potential as an assistive technology. Future improvements include expanding the dataset and incorporating additional sensors to enhance gesture recognition accuracy and robustness. This research highlights the integration of wearable technology and machine learning as a promising solution for enhancing accessibility and communication for sign language users.
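On the receiving side, the glove pipeline amounts to reading sensor values over the Bluetooth serial link, classifying them, and speaking the result; the sketch below is a desktop-Python approximation (the original runs in an Android app), with the port name, label set, and model file all assumed:

```python
# Sketch of the receiving side of a flex-sensor glove pipeline (illustrative;
# the original system does this inside an Android app, not in Python).
import numpy as np
import serial
import tensorflow as tf
from gtts import gTTS

port = serial.Serial("/dev/rfcomm0", 9600)         # HC-05 Bluetooth serial link
model = tf.keras.models.load_model("glove_signs.keras")  # hypothetical model file
LABELS = ["hello", "thanks", "yes", "no", "help"]  # hypothetical gesture classes

line = port.readline().decode().strip()            # e.g. "512,430,610,220,505"
flex = np.array([[int(v) for v in line.split(",")]], dtype=np.float32) / 1023.0
sign = LABELS[int(model.predict(flex, verbose=0).argmax())]
gTTS(sign).save("out.mp3")                         # speak the recognized sign
```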
APA, Harvard, Vancouver, ISO, and other styles
44

Nupur Giri. "Gesturely: A Conversation AI based Indian Sign Language Model." Journal of Information Systems Engineering and Management 10, no. 10s (2025): 576–84. https://doi.org/10.52783/jisem.v10i10s.1421.

Full text
Abstract:
The project, Gesturely, aims to improve communication in educational settings for people with hearing impairments. Sign language, notably Indian Sign Language (ISL) in India, serves as a primary mode of expression for the deaf community. This form of expression relies on a rich vocabulary of gestures involving fingers, hands, arms, eyes, head, and face. The research endeavors to develop an algorithm capable of translating ISL into English, initially focusing on words within the education domain. Through the integration of advanced computer vision and deep learning methodologies, the objective is to create a system capable of interpreting ISL gestures and converting them into written text. The project involves the creation of a comprehensive dataset with 50 words and more than 2500 videos. The vision is to empower the deaf community with real-time translation capabilities, promoting inclusivity and accessibility in communication.
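Word-level video datasets like this one are typically preprocessed by sampling a fixed number of frames per clip; a sketch of that step follows, with hypothetical file names and clip dimensions:

```python
# Sketch of uniform frame sampling for word-level video classification
# (illustrative preprocessing; the recognition model itself is not shown).
import cv2
import numpy as np

def sample_frames(video_path: str, num_frames: int = 16, size: int = 112):
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    picks = np.linspace(0, max(total - 1, 0), num_frames).astype(int)
    frames = []
    for idx in picks:
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
        ok, frame = cap.read()
        if ok:
            frames.append(cv2.resize(frame, (size, size)))
    cap.release()
    return np.stack(frames)  # (num_frames, size, size, 3) clip for a 3D CNN/RNN

clip = sample_frames("videos/TEACHER_001.mp4")  # hypothetical file name
```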
APA, Harvard, Vancouver, ISO, and other styles
45

Dixit, Karishma, and Anand Singh Jalal. "A Vision-Based Approach for Indian Sign Language Recognition." International Journal of Computer Vision and Image Processing 2, no. 4 (2012): 25–36. http://dx.doi.org/10.4018/ijcvip.2012100103.

Full text
Abstract:
Sign language is an essential communication method for deaf and mute people. In this paper, the authors present a vision-based approach which efficiently recognizes signs of Indian Sign Language (ISL) and translates the accurate meaning of those recognized signs. A new feature vector is computed by fusing Hu invariant moments and a structural shape descriptor to recognize each sign. A multi-class Support Vector Machine (MSVM) is utilized for training and classifying signs of ISL. The performance of the algorithm is illustrated by simulations carried out on a dataset of 720 images. Experimental results demonstrate that the proposed approach can successfully recognize hand gestures with a 96% recognition rate.
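The feature pipeline can be approximated as Hu moments computed from a binarized hand image, fed to a multi-class SVM; the sketch below omits the structural shape descriptor the authors fuse in:

```python
# Sketch of Hu-moment features + multi-class SVM (illustrative; the paper's
# additional structural shape descriptor is omitted here).
import cv2
import numpy as np
from sklearn.svm import SVC

def hu_features(image_path: str) -> np.ndarray:
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    hu = cv2.HuMoments(cv2.moments(binary)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)  # standard log-scaling

# X = np.stack([hu_features(p) for p in image_paths]); y = labels
# SVC(kernel="rbf", decision_function_shape="ovr").fit(X, y)  # multi-class SVM
```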
APA, Harvard, Vancouver, ISO, and other styles
46

Chakole, Vijay V. "Educational Learning-Based Sign Language System Using Machine Learning." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 03 (2024): 1–5. http://dx.doi.org/10.55041/ijsrem29753.

Full text
Abstract:
This study proposes an innovative approach to multicultural education by integrating Indian Sign Language (ISL) and American Sign Language (ASL) through machine learning (ML) techniques. By collecting and preprocessing high-quality video data of ISL and ASL, we aim to develop ML models capable of recognizing and generating signs in both languages. Through bidirectional transfer learning and cross-language representation learning, we seek to enhance the learning experience and address common challenges in sign language acquisition. Personalized learning environments and culturally sensitive design, informed by collaboration with Deaf communities in India and America, ensure inclusivity, authenticity, and accuracy. Evaluation metrics and ethical considerations, including privacy, consent, and fairness, are integrated into the development process to promote responsible implementation and continuous improvement. By establishing a robust foundation for multilingual sign language education, this project contributes to broader discussions on leveraging ML to enhance accessibility and inclusivity in education systems worldwide. Keywords: Hand Gesture, Sign Language Recognition, OpenCV, MediaPipe, TensorFlow.
APA, Harvard, Vancouver, ISO, and other styles
47

Nespor, Marina, and Wendy Sandler. "Prosody in Israeli Sign Language." Language and Speech 42, no. 2-3 (1999): 143–76. http://dx.doi.org/10.1177/00238309990420020201.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Nikkam, Pushpalatha S. "Voice To Sign Language Conversion." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 09, no. 05 (2025): 1–9. https://doi.org/10.55041/ijsrem48637.

Full text
Abstract:
True incapacity can be seen in the inability to speak: individuals with speech impairments struggle to communicate verbally or through hearing. To bridge this gap, many rely on sign language, a visual method of communication that uses hand gestures. Although sign language has become more widespread, interaction between those who sign and those who do not can still pose challenges. As communication has grown to be an essential part of daily life, sign language serves as a crucial tool for those with speech and hearing difficulties. Recent advances in computer vision and deep learning have significantly enhanced the ability to recognize gestures and movements. While American Sign Language (ASL) has been thoroughly researched, Indian Sign Language (ISL) remains underexplored. Our proposed approach focuses on recognizing 4972 static hand gestures representing 24 English alphabets (excluding J and Z) in ISL. The project aims to build a deep learning-based system that translates these gestures into text and voices them using the Google Text-to-Speech API, thereby enabling better interaction between signers and non-signers. Using a dataset from Kaggle and a custom Convolutional Neural Network (CNN), our method achieved a 99% accuracy rate. Keywords: Convolutional Neural Network; Google Text-to-Speech API; Indian Sign Language.
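Once the CNN emits per-frame letter predictions, turning them into speech is a short step; the sketch below assumes a 24-letter index order (J and Z excluded) and uses the gTTS wrapper named in the abstract, with a hypothetical prediction list:

```python
# Sketch of turning per-frame letter predictions into spoken words via gTTS
# (illustrative; `pred_indices` stands in for the CNN's output).
from gtts import gTTS

ALPHABET = list("ABCDEFGHIKLMNOPQRSTUVWXY")  # 24 letters, J and Z excluded

def letters_to_speech(pred_indices: list[int], out_path: str = "word.mp3") -> str:
    word = "".join(ALPHABET[i] for i in pred_indices)
    gTTS(word.lower()).save(out_path)  # synthesize the spelled-out word
    return word

print(letters_to_speech([7, 4, 10, 10, 13]))  # -> "HELLO" (indices illustrative)
```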
APA, Harvard, Vancouver, ISO, and other styles
49

P., Adithyaraaj R., Mariyammal N., Mohammed Furkhan S., Rathika, and K. Vijayalakshmi. "Indian Sign Language (ISL) Translator: AI-Powered Bidirectional Translation System." International Journal for Research in Applied Science and Engineering Technology 13, no. 3 (2025): 2053–61. https://doi.org/10.22214/ijraset.2025.67589.

Full text
Abstract:
This work presents an advanced AI-based translation system designed to bridge communication barriers for the Deaf and Hard-of-Hearing (DHH) community by converting spoken and textual language into Indian Sign Language (ISL) and vice versa. The system leverages deep learning techniques, including computer vision and natural language processing (NLP), to interpret hand gestures and facial expressions accurately. Integrated with real-time processing capabilities, the model enables seamless interaction between ISL users and non-signing individuals. By utilizing a custom-trained Transformer-based NLP model and a Convolutional Neural Network (CNN) for visual recognition, the system ensures accurate and efficient translation. The prototype has been developed using VS Code, with datasets managed in local storage to optimize performance. This work aims to enhance accessibility, promote inclusivity, and facilitate effortless communication through a robust and scalable ISL translation model. The importance of an efficient ISL translation system extends beyond accessibility: it fosters independence, enhances social inclusion, and bridges the gap between the DHH community and the hearing population. Many Deaf individuals struggle with traditional text-based communication due to differences in sentence structure and grammar between ISL and spoken languages. By incorporating deep learning models for gesture recognition and NLP-based translation, our system provides a user-friendly solution for effective communication. Additionally, this system has the potential to be implemented in educational institutions, workplaces, and public services, ensuring better integration of the Deaf community into society. By addressing existing gaps and leveraging AI, our translator serves as a critical step toward an inclusive digital ecosystem.
APA, Harvard, Vancouver, ISO, and other styles
50

Mangai, V. "Comparative Analysis of Various YOLO Models for Sign Language Recognition with a Specific Dataset." Journal of Information Systems Engineering and Management 10, no. 43s (2025): 544–50. https://doi.org/10.52783/jisem.v10i43s.8443.

Full text
Abstract:
Understanding and relaying sign language are core communication tasks between hearing people and deaf or mute people, and vice versa. To enhance sign language-based communication, several models have been developed to make sign language understandable by translating gestures into words. The ultimate goal of this research paper is to analyse and compare various You Only Look Once (YOLO) models on the sign language recognition (SLR) problem. YOLO is a fast and efficient convolutional neural network (CNN) variant that provides a good solution for sign language problems. Comparing different YOLO models on an Indian Sign Language (ISL) dataset can identify the most suitable YOLO model for SLR; the proposed work therefore uses the ISign Benchmark dataset. The ISL-based comparative analysis is implemented in Python, where various performance metrics are calculated to select the best YOLO model. This provides a fast and efficient means of recognizing sign gestures.
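With the Ultralytics API, the comparison loop the paper describes can be sketched as follows; the model line-up and dataset YAML are assumptions, not the paper's exact configuration:

```python
# Sketch of a YOLO-variant comparison loop with the Ultralytics API
# (illustrative; weights list and dataset config are hypothetical).
from ultralytics import YOLO

candidates = ["yolov5nu.pt", "yolov8n.pt", "yolov8s.pt"]  # hypothetical line-up

for weights in candidates:
    model = YOLO(weights)
    model.train(data="isign.yaml", epochs=50, imgsz=640)  # ISL gesture dataset
    metrics = model.val()
    print(weights, "mAP50-95:", metrics.box.map)  # compare detection accuracy
```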
APA, Harvard, Vancouver, ISO, and other styles