
Journal articles on the topic 'Sign language recognition'



Consult the top 50 journal articles for your research on the topic 'Sign language recognition.'


You can also download the full text of each publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

P.R., Mahidar. "Sign Language Recognition Techniques - A Survey." International Journal of Psychosocial Rehabilitation 24, no. 5 (April 20, 2020): 2747–60. http://dx.doi.org/10.37200/ijpr/v24i5/pr201978.

2

P, Keerthana, Nishanth M, Karpaga Vinayagam D, Alfred Daniel J, and Sangeetha K. "Sign Language Recognition." International Research Journal on Advanced Science Hub 3, Special Issue ICARD 3S (March 20, 2021): 41–44. http://dx.doi.org/10.47392/irjash.2021.060.

3

Jadhav, Akshay, Gayatri Tatkar, Gauri Hanwate, and Rutwik Patwardhan. "Sign Language Recognition." International Journal of Advanced Research in Computer Science and Software Engineering 7, no. 3 (March 30, 2017): 109–15. http://dx.doi.org/10.23956/ijarcsse/v7i3/0127.

4

Dubey, Shriya, Smrithi Suryawanshi, Aditya Rachamalla, and K. Madhu Babu. "Sign Language Recognition." International Journal for Research in Applied Science and Engineering Technology 11, no. 1 (January 31, 2023): 386–92. http://dx.doi.org/10.22214/ijraset.2023.48586.

Abstract:
People communicate using sign language by visually conveying sign patterns to express meaning, and sign language is one way of communicating with deaf-mute people. Hand gestures are one of the nonverbal communication strategies used in sign language. Many manufacturers around the world have created various sign language systems, but they are neither adaptable nor cost-effective for end users. In this paper, we present a design that recognizes various static American Sign Language hand gestures in real time using transfer learning, Python, and OpenCV. Our system correctly recognizes the common signs "Hello", "Yes", "No", "Thank You", and "I Love You". The key steps in the system design were as follows: we created our own dataset of prominent American Sign Language gestures, captured images with OpenCV and a webcam, labelled the images for object detection, trained and tested the dataset with transfer learning using SSD MobileNet, and finally recognized the gestures in real time.
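As a rough illustration of the dataset-collection step this abstract describes, the following minimal Python sketch (not the authors' code) captures labelled webcam frames with OpenCV; the gesture names, image count, and output layout are illustrative assumptions, and the saved images would still need annotation and SSD MobileNet training as the paper describes.

```python
# A minimal sketch (not the paper's code) of the dataset-collection step described
# above: capturing labelled webcam frames with OpenCV. The gesture names, image
# count, and output layout are illustrative assumptions.
import os
import cv2

GESTURES = ["hello", "yes", "no", "thank_you", "i_love_you"]
IMAGES_PER_GESTURE = 20

cap = cv2.VideoCapture(0)                      # default webcam
for gesture in GESTURES:
    out_dir = os.path.join("dataset", gesture)
    os.makedirs(out_dir, exist_ok=True)
    print(f"Show the '{gesture}' sign...")
    for i in range(IMAGES_PER_GESTURE):
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imwrite(os.path.join(out_dir, f"{gesture}_{i:03d}.jpg"), frame)
        cv2.imshow("capture", frame)
        cv2.waitKey(250)                       # brief pause between captures
cap.release()
cv2.destroyAllWindows()
```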
5

Tolentino, Lean Karlo S., Ronnie O. Serfa Juan, August C. Thio-ac, Maria Abigail B. Pamahoy, Joni Rose R. Forteza, and Xavier Jet O. Garcia. "Static Sign Language Recognition Using Deep Learning." International Journal of Machine Learning and Computing 9, no. 6 (December 2019): 821–27. http://dx.doi.org/10.18178/ijmlc.2019.9.6.879.

6

Patil, Prof Pritesh, Ruchir Bhagwat, Pratham Padale, Yash Shah, and Hrutik Surwade. "Sign Language Recognition System." International Journal for Research in Applied Science and Engineering Technology 10, no. 5 (May 31, 2022): 1772–76. http://dx.doi.org/10.22214/ijraset.2022.42626.

Abstract:
A large number of deaf and mute people are present around the world, and communicating with them can be difficult because not everyone can understand sign language (a system of communication using visual gestures and signs). In addition, there is a lack of official sign language interpreters; in India, the official number of approved sign language interpreters is only 250 [1]. This makes communication with deaf and mute people very difficult. The majority of teaching methods for deaf and mute people involve accommodating them to people who do not have disabilities while discouraging the use of sign language, so there is a need to encourage its use. People communicate with each other in sign language by using hand and finger gestures, and the language serves its purpose by bridging the gap between the deaf-mute and speaking communities. With recent technological developments, sign language identification remains a hard subject in the field of computer vision with room for further progress. In this project, we propose an optimal recognition engine whose main objective is to translate static American Sign Language alphabets, numbers, and words into human- and machine-understandable English script and the other way around. Using neural networks, we offer a machine learning-based technique for identifying American Sign Language. Keywords: deep learning; convolutional neural network; recognition; comparison; sign language
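For readers unfamiliar with the kind of network such recognition engines typically use, here is a minimal Keras sketch of a small convolutional classifier for static sign images; it is not the authors' model, and the input size, layer widths, and 26-class output are illustrative assumptions.

```python
# A minimal Keras sketch, not the authors' model: a small convolutional classifier
# for static sign images. The 64x64 RGB input size, layer widths, and 26-class
# output are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_sign_cnn(num_classes=26, input_shape=(64, 64, 3)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_sign_cnn()
model.summary()
# model.fit(train_images, train_labels, validation_split=0.1, epochs=10)
```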
7

M R, Dr Pooja, Meghana M, Harshith Bhaskar, Anusha Hulatti, Praful Koppalkar, and Bopanna M J. "Sign Language Recognition System." Indian Journal of Software Engineering and Project Management 1, no. 3 (January 10, 2022): 1–3. http://dx.doi.org/10.54105/ijsepm.c9011.011322.

Abstract:
We witness many people who face disabilities such as being deaf, mute or blind. They face many challenges and difficulties when trying to interact and communicate with others. This paper presents a new technique that provides a virtual solution without making use of any sensors. Histogram of Oriented Gradients (HOG) features along with an Artificial Neural Network (ANN) have been implemented. The user makes use of a web camera, which takes input from the user and processes images of the different gestures. The algorithm recognizes the image and produces the corresponding voice output. The paper describes a two-way means of communication between impaired and normal people, in that the proposed approach can convert sign language to text and voice.
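A minimal sketch of the HOG-plus-ANN pipeline this abstract describes, using scikit-image and scikit-learn rather than the authors' implementation; the image size, HOG parameters, network width, and the dummy data are assumptions.

```python
# A minimal sketch (not the paper's implementation) of a HOG + ANN pipeline:
# HOG features from grayscale gesture frames feed a small multilayer perceptron.
# The image size, HOG parameters, network width, and dummy data are assumptions.
import numpy as np
from skimage.feature import hog
from sklearn.neural_network import MLPClassifier

def hog_features(gray_image_64x64):
    # 9-bin HOG over 8x8-pixel cells, a common default configuration.
    return hog(gray_image_64x64, orientations=9,
               pixels_per_cell=(8, 8), cells_per_block=(2, 2))

# Dummy data standing in for preprocessed webcam gesture frames.
rng = np.random.default_rng(0)
X = np.stack([hog_features(rng.random((64, 64))) for _ in range(200)])
y = rng.integers(0, 5, size=200)               # 5 hypothetical gesture classes

clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=500, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```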
8

M R, Dr Pooja, Meghana M, Harshith Bhaskar, Anusha Hulatti, Praful Koppalkar, and Bopanna M J. "Sign Language Recognition System." Indian Journal of Software Engineering and Project Management 1, no. 3 (January 10, 2022): 1–3. http://dx.doi.org/10.35940/ijsepm.c9011.011322.

Abstract:
We witness many people who face disabilities such as being deaf, mute or blind. They face many challenges and difficulties when trying to interact and communicate with others. This paper presents a new technique that provides a virtual solution without making use of any sensors. Histogram of Oriented Gradients (HOG) features along with an Artificial Neural Network (ANN) have been implemented. The user makes use of a web camera, which takes input from the user and processes images of the different gestures. The algorithm recognizes the image and produces the corresponding voice output. The paper describes a two-way means of communication between impaired and normal people, in that the proposed approach can convert sign language to text and voice.
9

Zaki Abdo, Mahmoud, Alaa Mahmoud Hamdy, Sameh Abd El-Rahman Salem, and El-Sayed Mostafa Saad. "Arabic Sign Language Recognition." International Journal of Computer Applications 89, no. 20 (March 26, 2014): 19–26. http://dx.doi.org/10.5120/15747-4523.

10

Holden, Eun-Jung, Gareth Lee, and Robyn Owens. "Australian sign language recognition." Machine Vision and Applications 16, no. 5 (November 25, 2005): 312–20. http://dx.doi.org/10.1007/s00138-005-0003-1.

11

Deshpande, Padmanabh D., and Sudhir S. Kanade. "Recognition of Indian Sign Language using SVM classifier." International Journal of Trend in Scientific Research and Development 2, no. 3 (April 30, 2018): 1053–58. http://dx.doi.org/10.31142/ijtsrd11104.

12

Malipatil, Sridevi. "Real Time Sign Language Recognition." International Journal for Research in Applied Science and Engineering Technology 10, no. 6 (June 30, 2022): 2032–36. http://dx.doi.org/10.22214/ijraset.2022.44266.

Abstract:
For hearing and vocally impaired individuals, communication with others is a far greater struggle: they are unable to speak with ordinary people properly, and they face difficulties in getting jobs and living a normal life like others. In this paper, we introduce a smart communication system for hearing and vocally impaired individuals as well as for normal people. The overall accuracy of the system is 92.5%, with both hands involved. The main advantage of the proposed system over the former system is that in the former system the signs could be detected by the camera only when the hands were covered in gloves, whereas in the proposed system we have tried to overcome that disadvantage. Keywords: OpenCV, Google API, Raspberry Pi VideoCore GPU, image pre-processing, feature extraction.
13

Ferreira, Pedro M., Diogo Pernes, Ana Rebelo, and Jaime S. Cardoso. "Signer-Independent Sign Language Recognition with Adversarial Neural Networks." International Journal of Machine Learning and Computing 11, no. 2 (March 2021): 121–29. http://dx.doi.org/10.18178/ijmlc.2021.11.2.1024.

Abstract:
Sign Language Recognition (SLR) has become an appealing topic in modern societies because such technology can ideally be used to bridge the gap between deaf and hearing people. Although important steps have been made towards the development of real-world SLR systems, signer-independent SLR is still one of the bottleneck problems of this research field. In this regard, we propose a deep neural network along with an adversarial training objective, specifically designed to address the signer-independent problem. Specifically, the proposed model consists of an encoder, mapping from input images to latent representations, and two classifiers operating on these underlying representations: (i) the sign-classifier, for predicting the class/sign labels, and (ii) the signer-classifier, for predicting their signer identities. During the learning stage, the encoder is simultaneously trained to help the sign-classifier as much as possible while trying to fool the signer-classifier. This adversarial training procedure allows learning signer-invariant latent representations that are in fact highly discriminative for sign recognition. Experimental results demonstrate the effectiveness of the proposed model and its capability of dealing with the large inter-signer variations.
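One common way to realise the adversarial objective described above is a gradient-reversal layer between the encoder and the signer classifier; the PyTorch sketch below is an illustrative assumption of that idea, not the authors' code, and the input size, feature dimension, and class counts are placeholders.

```python
# An illustrative PyTorch sketch (not the authors' code) of one common way to
# realise the adversarial objective above: a gradient-reversal layer between the
# encoder and the signer classifier. Input size, feature dimension, and class
# counts are placeholder assumptions.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing back into the encoder.
        return -ctx.lambd * grad_output, None

class SignerInvariantModel(nn.Module):
    def __init__(self, feat_dim=256, num_signs=100, num_signers=10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Flatten(), nn.Linear(3 * 64 * 64, feat_dim), nn.ReLU())
        self.sign_head = nn.Linear(feat_dim, num_signs)
        self.signer_head = nn.Linear(feat_dim, num_signers)

    def forward(self, x, lambd=1.0):
        z = self.encoder(x)
        # The signer head sees reversed gradients, so the encoder learns to fool it.
        return self.sign_head(z), self.signer_head(GradReverse.apply(z, lambd))

model = SignerInvariantModel()
criterion = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

x = torch.randn(8, 3, 64, 64)                  # dummy batch of sign images
sign_y = torch.randint(0, 100, (8,))           # sign labels
signer_y = torch.randint(0, 10, (8,))          # signer identities
sign_logits, signer_logits = model(x)
loss = criterion(sign_logits, sign_y) + criterion(signer_logits, signer_y)
opt.zero_grad(); loss.backward(); opt.step()
```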
14

N, Kaushik, and Vaidya Rahul. "A Survey of Approaches for Sign Language Recognition System." International Journal of Psychosocial Rehabilitation 24, no. 1 (January 20, 2020): 1775–83. http://dx.doi.org/10.37200/ijpr/v24i1/pr200278.

15

Goyal, Er Kanika, and Amitoj Singh. "Indian Sign Language Recognition System for Differently-able People." Journal on Today's Ideas - Tomorrow's Technologies 2, no. 2 (December 5, 2014): 145–51. http://dx.doi.org/10.15415/jotitt.2014.22011.

16

Alka, Mishra, Jacob Mathew Abhilash, Agrawal Abhyudaya, Quarishi Adnan, Vaishnava Amiy, and Kumar Sparsh. "Sensor based sign language recognition system." i-manager’s Journal on Pattern Recognition 9, no. 1 (2022): 15. http://dx.doi.org/10.26634/jpr.9.1.18757.

Abstract:
Sign language is used as a primary form of communication by many people who are deaf, deafened or non-verbal. Communication barriers exist for members of these populations during daily interactions with those who are unable to understand or use sign language. Advancements in technology and machine learning techniques have enabled the development of innovative approaches to translate sign languages into spoken languages. This paper proposes an intelligent system for translating sign language into text. The approach consists of hardware as well as software: the hardware consists of flex, contact and inertial sensors and an SD card module mounted on a synthetic glove, and if-else (rule-based) learning is performed by an Arduino Nano (ATmega328P). The system is able to recognize static letters and numbers, translating the 26 letters from A to Z and the 10 numbers from 0 to 9 of American Sign Language. A database of the alphabet and numbers was prepared and tested with kNN and the CN2 rule inducer, where kNN showed promising results; the experiments were carried out with the Orange software. Experimental results demonstrate that our system is effective, cheaper, and has high classification accuracy compared to other technology available on the market.
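To make the classification step concrete, here is a minimal scikit-learn sketch of kNN over glove sensor readings; it stands in for the authors' Orange-based experiments, and the eight-value feature layout (five flex plus three inertial readings) and the dummy data are assumptions.

```python
# A minimal scikit-learn sketch standing in for the Orange-based experiments:
# k-nearest-neighbour classification over glove sensor readings. The 8-value
# feature layout and dummy data are assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(42)
X = rng.random((360, 8))                       # logged glove readings (dummy)
y = rng.integers(0, 36, size=360)              # 26 letters + 10 digits

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)
print("test accuracy:", knn.score(X_test, y_test))
```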
17

Karbasi, M., A. Zabidi, I. M. Yassin, A. Waqas, and Z. Bhatti. "Malaysian sign language dataset for automatic sign language recognition system." Journal of Fundamental and Applied Sciences 9, no. 4S (January 23, 2018): 459. http://dx.doi.org/10.4314/jfas.v9i4s.26.

18

Zorins, Aleksejs, and Peter Grabusts. "LATVIAN SIGN LANGUAGE RECOGNITION CLASSIFICATION POSSIBILITIES." Environment. Technology. Resources. Proceedings of the International Scientific and Practical Conference 2 (June 15, 2017): 185. http://dx.doi.org/10.17770/etr2017vol2.2653.

Abstract:
There is a lack of an automated sign language recognition system in Latvia, while many other countries are already equipped with such systems. The Latvian deaf society requires such a system, which would allow people with special needs to enhance their communication in governmental and public places. The aim of this paper is to recognize the Latvian sign language alphabet using a classification approach with artificial neural networks, which is a first step in developing an integral system of Latvian Sign Language recognition. Communication in our daily life is generally vocal, but body language has its own significance; sign languages are used for various purposes, and for people who are deaf and mute, sign language plays an important role. Gestures are the very first form of communication. The paper presents sign language recognition possibilities with the centre of gravity method, which motivated us to carry on further work on hand gesture classification and sign clustering.
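The centre-of-gravity method mentioned above reduces, in its simplest form, to the centroid of the binarised hand region; the OpenCV sketch below illustrates that computation under assumed Otsu thresholding, and is not the authors' implementation.

```python
# A minimal OpenCV sketch of the centre-of-gravity idea: the centroid of a
# binarised hand region, computed from spatial image moments. The thresholding
# and the toy blob are illustrative assumptions.
import cv2
import numpy as np

def hand_centre_of_gravity(gray_image):
    _, mask = cv2.threshold(gray_image, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:                          # no foreground found
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

dummy = np.zeros((100, 100), dtype=np.uint8)
cv2.circle(dummy, (60, 40), 20, 255, -1)       # fake hand blob
print(hand_centre_of_gravity(dummy))           # roughly (60.0, 40.0)
```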
19

Sood, Dhruv. "Sign Language Recognition using Deep Learning." International Journal for Research in Applied Science and Engineering Technology 10, no. 3 (March 31, 2022): 246–49. http://dx.doi.org/10.22214/ijraset.2022.40627.

Abstract:
Millions of people with speech and hearing impairments communicate with sign languages every day. For hearing-impaired people, gesture recognition is a natural way of communicating, much like voice recognition is for most people. In this study, we look at the issue of translating sign language to text and propose an improved solution based on machine learning techniques. We want to establish a system that hearing-impaired people may use in their everyday lives to promote communication and collaboration between hearing-impaired people and people who are not trained in American Sign Language (ASL). To develop a deep learning model for the ASL dataset, we use transfer learning in combination with data augmentation. Keywords: sign language, machine learning, transfer learning, ASL, Inception v3
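A minimal Keras sketch of transfer learning from Inception v3 with simple data augmentation, in the spirit of the approach this abstract outlines; the class count, augmentation choices, and frozen-backbone setup are assumptions rather than the paper's exact configuration.

```python
# A minimal Keras sketch, not the paper's code: transfer learning from Inception v3
# with simple data augmentation. Class count and augmentation choices are
# illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3

NUM_CLASSES = 29                               # e.g. A-Z plus a few extra tokens

base = InceptionV3(weights="imagenet", include_top=False,
                   input_shape=(299, 299, 3), pooling="avg")
base.trainable = False                         # freeze the pretrained backbone

augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.05),
    layers.RandomZoom(0.1),
])

inputs = layers.Input(shape=(299, 299, 3))
x = augment(inputs)
x = layers.Rescaling(scale=1.0 / 127.5, offset=-1.0)(x)  # Inception expects [-1, 1]
x = base(x, training=False)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = models.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)
```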
20

Ketan Dagli, Mokshak, and Dr Preeti Savant. "SIGN LANGUAGE RECOGNITION: A SURVEY." International Journal of Engineering Applied Sciences and Technology 6, no. 8 (December 1, 2021): 196–99. http://dx.doi.org/10.33564/ijeast.2021.v06i08.033.

Abstract:
Sign language recognition is a growing research field in which researchers are trying to develop or propose systems that help specially-abled people interact and communicate with society easily, without requiring others to learn sign language. This language is primarily used by deaf and mute people. This paper describes the various ways in which sign language recognition systems have been built or proposed by different researchers. It also aims to provide a better understanding of the various recognition systems and methods of recognizing and predicting the hand signs and gestures of specially-abled people, in order to predict a value in a human-readable format, that is, text. The paper describes in detail how one method of sign language recognition differs from another in terms of performance, results, difficulty in developing the model, and so on.
21

Wattamwar, Aniket. "Sign Language Recognition using CNN." International Journal for Research in Applied Science and Engineering Technology 9, no. 9 (September 30, 2021): 826–30. http://dx.doi.org/10.22214/ijraset.2021.38058.

Abstract:
This research work presents a prototype system that helps normal people recognize hand gestures in order to communicate more effectively with special-needs people. The work focuses on the problem of real-time recognition of the gestures of the sign language used by the deaf community. The problem is addressed with digital image processing using CNNs (Convolutional Neural Networks), skin detection, and image segmentation techniques. The system recognizes gestures of ASL (American Sign Language), including the alphabet and a subset of its words. Keywords: gesture recognition, digital image processing, CNN (Convolutional Neural Networks), image segmentation, ASL (American Sign Language), alphabet
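As a sketch of the skin-detection and segmentation step mentioned in this abstract, the OpenCV snippet below thresholds in HSV colour space to isolate the hand before classification; the HSV bounds are rough assumptions that normally need tuning per camera and lighting, and this is not the authors' code.

```python
# A sketch of a skin-detection and segmentation step with OpenCV (not the authors'
# code): threshold in HSV colour space to isolate the hand before classification.
# The HSV bounds are rough assumptions.
import cv2
import numpy as np

def segment_skin(bgr_frame):
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    lower = np.array([0, 40, 60], dtype=np.uint8)
    upper = np.array([25, 255, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)
    # Clean the mask a little before using it to crop the hand region.
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return cv2.bitwise_and(bgr_frame, bgr_frame, mask=mask)

frame = cv2.imread("gesture.jpg")              # hypothetical captured frame
if frame is not None:
    cv2.imwrite("gesture_skin.jpg", segment_skin(frame))
```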
22

Versha, Verma, and Patil Sandeep B. "Static Devnagari Sign Language Recognition." i-manager’s Journal on Pattern Recognition 3, no. 3 (2016): 13. http://dx.doi.org/10.26634/jpr.3.3.12406.

23

Hashim, Abdulla D., and Fattah Alizadeh. "Kurdish Sign Language Recognition System." UKH Journal of Science and Engineering 2, no. 1 (June 30, 2018): 1–6. http://dx.doi.org/10.25079/ukhjse.v2n1y2018.pp1-6.

Abstract:
Deaf people all around the world face difficulty communicating with others, and so they use their own language to communicate with each other. On the other hand, it is difficult for deaf people to make use of technological services such as websites, television, mobile applications, and so on. This project aims to design a prototype system for deaf people to help them communicate with other people and computers without relying on human interpreters. The proposed system is for letter-based Kurdish Sign Language (KuSL), which has not been introduced before. It is a real-time system that acts immediately after detecting hand gestures. Three algorithms for detecting KuSL have been implemented and tested: two of them are well-known methods that have been implemented and tested by other researchers, and the third is introduced in this paper for the first time. The new algorithm, named the grid-based gesture descriptor, turned out to be the best method for the recognition of Kurdish hand signs, achieving 67% accuracy in detecting hand gestures. The other well-known algorithms, the scale-invariant feature transform and speeded-up robust features, achieved 42% accuracy.
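The abstract does not spell out the grid-based gesture descriptor, so the sketch below shows one plausible form under assumption: the binarised hand image is divided into a grid and each cell's foreground ratio becomes one feature; the grid size and thresholding are illustrative choices, and the paper's exact definition may differ.

```python
# One plausible form of a grid-based gesture descriptor (an assumption; the paper's
# exact definition may differ): split the binarised hand image into grid cells and
# use each cell's foreground ratio as a feature.
import cv2
import numpy as np

def grid_descriptor(gray_image, grid=8):
    _, mask = cv2.threshold(gray_image, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    h, w = mask.shape
    cell_h, cell_w = h // grid, w // grid
    features = []
    for r in range(grid):
        for c in range(grid):
            cell = mask[r * cell_h:(r + 1) * cell_h,
                        c * cell_w:(c + 1) * cell_w]
            features.append(np.count_nonzero(cell) / cell.size)
    return np.array(features)                  # grid*grid feature vector

dummy = np.zeros((128, 128), dtype=np.uint8)
cv2.rectangle(dummy, (40, 30), (90, 110), 255, -1)   # fake hand silhouette
print(grid_descriptor(dummy).shape)                  # (64,)
```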
24

Hee-Deok Yang. "Sign Language Recognition Using Kinect." Journal of Advanced Engineering and Technology 8, no. 4 (December 2015): 299–303. http://dx.doi.org/10.35272/jaet.2015.8.4.299.

25

Santosa, Paulus Insap. "Isolated Sign Language Characters Recognition." TELKOMNIKA (Telecommunication Computing Electronics and Control) 11, no. 3 (September 1, 2013): 583. http://dx.doi.org/10.12928/telkomnika.v11i3.1142.

26

Wang, Qi, Xilin Chen, Liang-Guo Zhang, Chunli Wang, and Wen Gao. "Viewpoint invariant sign language recognition." Computer Vision and Image Understanding 108, no. 1-2 (October 2007): 87–97. http://dx.doi.org/10.1016/j.cviu.2006.11.009.

27

M. Mullur, Kumara Maruthi. "Indian Sign Language Recognition System." International Journal of Engineering Trends and Technology 21, no. 9 (March 25, 2015): 450–54. http://dx.doi.org/10.14445/22315381/ijett-v21p288.

28

Rokade, Yogeshwar I., and Prashant M. Jadav. "Indian Sign Language Recognition System." International Journal of Engineering and Technology 9, no. 3S (July 17, 2017): 189–96. http://dx.doi.org/10.21817/ijet/2017/v9i3/170903s030.

29

Badarch, Luubaatar, Munkh-Erdene Ganbat, Otgonbayar Altankhuyag, and Amartuvshin Togooch. "Mongolian Sign Language Recognition Model." ICT Focus 1, no. 1 (September 29, 2022): 1–9. http://dx.doi.org/10.58873/sict.v1i1.27.

Abstract:
Sign language is a gesture-based manual language used by people with hearing impairment and spoken language disorder to communicate with others. There is no universal sign language; the most used one is American Sign Language. Mongolian Sign Language (MSL) has hand signs for letters of the alphabet, numbers, and other commonly used words, and there are an estimated 16,000 MSL signers. The lack of means to translate MSL into the Mongolian language, such as professional interpreters or translator applications, hinders MSL signers' freedom of expression and political and public participation. Here, we created an MSL recognition system model that uses a camera to capture the letter symbols of the MSL alphabet and translates them into written Mongolian words. The proposed model uses two machine learning models that 1) recognize, sort, and filter the input, and 2) process the Mongolian language. The model had an F1 score of 0.8678 across 51 distinct hand gestures. The natural language processing model that forms words had sufficient performance, though it can be improved in further work.
30

V Pareddy, Smt Sudha. "Sign Language Recognition using Deep Learning." International Journal for Research in Applied Science and Engineering Technology 10, no. 7 (July 31, 2022): 4173–77. http://dx.doi.org/10.22214/ijraset.2022.45913.

Abstract:
Sign language is a way of communicating using hand gestures and movements, body language and facial expressions instead of spoken words. It can also be defined as any of various formal languages employing a system of hand gestures and their placement relative to the upper body, facial expressions, body postures, and finger spelling, especially for communication by and with deaf people. The project recognizes the action performed by the user in sign language using deep learning. Ordinary people are not well versed in sign language, and the project tries to solve this problem using deep learning, specifically TensorFlow. In the project, an LSTM (long short-term memory) model is built with TensorFlow to categorize the action the user is performing. This will help users with special needs communicate with other people through the application we built, bridging the gap between specially-abled people and ordinary people.
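For orientation, here is a minimal TensorFlow/Keras sketch of an LSTM classifier over keypoint sequences, the general kind of model this abstract describes; the sequence length, keypoint count, number of actions, and layer sizes are assumptions, not the project's actual configuration.

```python
# A minimal TensorFlow/Keras sketch, not the project's code: an LSTM classifier
# over sequences of pose/hand keypoints. Sequence length, keypoint count, class
# count, and layer sizes are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN = 30                                   # frames per clip
NUM_KEYPOINTS = 126                            # e.g. flattened hand landmarks
NUM_ACTIONS = 10                               # number of signs to recognise

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, NUM_KEYPOINTS)),
    layers.LSTM(64, return_sequences=True),
    layers.LSTM(128),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_ACTIONS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(keypoint_sequences, action_labels, epochs=50)
```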
31

Kour, Kamal Preet, and Lini Mathew. "Sign Language Recognition Using Image Processing." International Journal of Advanced Research in Computer Science and Software Engineering 7, no. 8 (August 30, 2017): 142. http://dx.doi.org/10.23956/ijarcsse.v7i8.41.

Abstract:
One of the major drawbacks of our society is the barrier created between disabled or handicapped persons and normal persons. Communication is the only medium by which we can share our thoughts or convey a message, but a person with a disability (deaf and mute) faces difficulty communicating with normal persons. For many deaf and mute people, sign language is the basic means of communication. Sign language recognition (SLR) aims to interpret sign languages automatically by computer in order to help the deaf communicate with the hearing society conveniently. Our aim is to design a system that helps those who train the hearing impaired to communicate with the rest of the world using sign language or hand gesture recognition techniques. In this system, feature detection and feature extraction of hand gestures is done with the help of the SURF algorithm using image processing; all of this work is done in MATLAB. With the help of this algorithm, a person can easily train a deaf and mute person.
32

Vo, Anh H., Van-Huy Pham, and Bao T. Nguyen. "Deep Learning for Vietnamese Sign Language Recognition in Video Sequence." International Journal of Machine Learning and Computing 9, no. 4 (August 2019): 440–45. http://dx.doi.org/10.18178/ijmlc.2019.9.4.823.

33

Trivedi, Kaustubh, Priyanka Gaikwad, Mahalaxmi Soma, Komal Bhore, and Prof Richa Agarwal. "Improve the Recognition Accuracy of Sign Language Gesture." International Journal for Research in Applied Science and Engineering Technology 10, no. 5 (May 31, 2022): 4343–47. http://dx.doi.org/10.22214/ijraset.2022.43220.

Abstract:
Image classification is one of the classical issues of concern in image processing, and there are various techniques for addressing it. Sign languages are natural languages used to communicate with deaf and mute people, and there are many different sign languages in the world. The main focus of the system is on sign language that is on the way to standardization, concentrating on hand gestures only. Hand gestures are a very important means of exchanging ideas, messages, and thoughts among deaf and mute people. The proposed system recognizes the numbers 0 to 9 and alphabets from American Sign Language. It is divided into three parts, i.e. preprocessing, feature extraction, and classification: it first identifies the gestures from American Sign Language, and the system then processes each gesture to recognize the number or letter with the help of classification using a CNN. Additionally, we play the speech for the identified alphabets. Keywords: hybrid approach, American Sign Language, gesture recognition, feature extraction
34

Rambhia, Jainam, Manan Doshi, Rashi Lodha, and Stevina Correia. "Real Time Indian Sign Language Recognition using Deep LSTM Networks." International Journal for Research in Applied Science and Engineering Technology 11, no. 1 (January 31, 2023): 1041–45. http://dx.doi.org/10.22214/ijraset.2023.48695.

Abstract:
People with speech and hearing disabilities are part of society, and communication becomes difficult when they need to interact with the general public. In different regions, people with disabilities practice different sign languages for communication. For people with speech and hearing disabilities, sign language is a basic means of communication in everyday life; however, a large portion of our community is unaware of the sign languages they practice, so bringing them into the mainstream is a considerable challenge. Today, computer vision-based solutions are well received for helping the general public understand sign language, and hand gesture recognition, one such computer vision-based approach, has lately been a popular area of research for the various vernacular sign languages used across the world. Through this research work, we propose a solution to this problem using keypoint identification and a neural network architecture for real-time sign language recognition. The architecture uses Long Short-Term Memory (LSTM) and gives a best accuracy of 91.5% in the prediction of words as well as phrases of Indian Sign Language.
35

De Meulder, Maartje, and Joseph J. Murray. "Buttering their bread on both sides?" Language Problems and Language Planning 41, no. 2 (October 27, 2017): 136–58. http://dx.doi.org/10.1075/lplp.41.2.04dem.

Abstract:
In the past two decades, a wave of campaigns to recognise sign languages has taken place in numerous countries. These campaigns sought official recognition of national sign languages, with the aim of enhancing signers' social mobility and protecting the vitality of sign languages. These activities differ from a long history of sign language planning from a 'language as a problem' approach largely used by educators and policymakers to date. However, the instrumental rights and social mobility obtained as a result have thus far been limited, with educational linguistic and language acquisition rights especially lacking. This article identifies two reasons for this situation. First, a view of Sign Language Peoples (SLPs) from a medical perspective has led to confusion about the meaning of linguistic rights for them and led governments to treat sign language planning differently than that for spoken languages. Furthermore, SLPs' political participation is hindered by recognition being offered by governments without substantial commitments to financial resources, changes in government practices or greater inclusion of sign languages in public life. One exception to this trend is sign language planning bodies, but even these face challenges in the implementation phase. Going forward, we argue that sign language recognition legislation should centre on deaf communities' concerns regarding sign language vitality. In addition to a need to ensure acquisition for deaf signers, we contend that while the expansion of hearing (and deaf) new signers can be interpreted in terms of language endangerment, it can also be seen as strengthening sign languages' vitality.
36

Lupton, Linda, and Macalyne Fristoe. "Sign Vocabulary Recognition in Students of American Sign Language." Sign Language Studies 1076, no. 1 (1992): 215–32. http://dx.doi.org/10.1353/sls.1992.0024.

37

R., Elakkiya, and Selvamani K. "Subunit sign modeling framework for continuous sign language recognition." Computers & Electrical Engineering 74 (March 2019): 379–90. http://dx.doi.org/10.1016/j.compeleceng.2019.02.012.

38

Al-Shamayleh, Ahmad Sami, Rodina Ahmad, Nazean Jomhari, and Mohammad A. M. Abushariah. "AUTOMATIC ARABIC SIGN LANGUAGE RECOGNITION: A REVIEW, TAXONOMY, OPEN CHALLENGES, RESEARCH ROADMAP AND FUTURE DIRECTIONS." Malaysian Journal of Computer Science 33, no. 4 (October 30, 2020): 306–43. http://dx.doi.org/10.22452/mjcs.vol33no4.5.

Abstract:
Sign language is still the best means of communication for deaf and hearing-impaired citizens. Due to advancements in technology, there are various research attempts and efforts on Automatic Sign Language Recognition (ASLR) technology for many languages, including Arabic. Such attempts have simplified and assisted the interpretation between spoken and sign languages, and technologies that translate between spoken and sign languages have become popular today. Being the first comprehensive and up-to-date review that studies state-of-the-art ASLR from the perspective of Arabic Sign Language Recognition (ArSLR), this review is a contribution to the ArSLR research community. In this paper, the research background and fundamentals of ArSLR are provided, and ArSLR research taxonomies, databases, open challenges, future research trends and directions, and a roadmap for ArSLR research are presented. This review investigates two major taxonomies. The primary taxonomy is related to the capturing mechanism of the gestures for ArSLR, which can be either a Vision-Based Recognition (VBR) approach or a Sensor-Based Recognition (SBR) approach. The secondary taxonomy is related to the type and task of the gestures for ArSLR, which can be the Arabic alphabet, isolated words, or continuous sign language recognition. Fewer research attempts have been directed towards the Arabic continuous sign language recognition task compared to other tasks, which marks a research gap that can be considered by the research community. To the best of our knowledge, all previous research attempts and reviews on sign language recognition for ArSL used forehand signs; backhand signs have not been considered for ArSL tasks, which creates another important research gap to be filled. Therefore, we recommend more research initiatives to contribute to these gaps by using an SBR approach for both signer-dependent and signer-independent recognition.
39

S N, Omkar, and Monisha M. "SIGN LANGUAGE RECOGNITION USING THINNING ALGORITHM." ICTACT Journal on Image and Video Processing 02, no. 01 (August 1, 2011): 241–45. http://dx.doi.org/10.21917/ijivp.2011.0035.

40

Ayshee, Tanzila Ferdous, Sadia Afrin Raka, Quazi Ridwan Hasib, Rashedur M. Rahman, and Md Hossain. "Sign Language Recognition for Bengali Characters." International Journal of Fuzzy System Applications 4, no. 4 (October 2015): 1–14. http://dx.doi.org/10.4018/ijfsa.2015100101.

Abstract:
Sign language is the primary means of communication for people having speaking and hearing impairment. This language uses a system of manual, facial, and other body movements as the means of communication, as opposed to acoustically conveyed sound patterns. This paper uses image processing and fuzzy logic to develop an intelligent system to recognize Bengali Sign Language. The proposed system works in two phases. In the first phase, the fuzzification methods are defined. Then in the next phase, the raw images are processed to identify the fuzzy rules. A detailed implementation procedure of the proposed system is demonstrated by describing the recognition process of four Bengali characters.
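To illustrate the fuzzification idea in this abstract, the sketch below defines a triangular membership function and a toy rule that maps a crisp hand-shape measurement to fuzzy labels; the measurement, membership ranges, and rule are invented for illustration and are not the authors' rule base.

```python
# A toy sketch of the fuzzification idea (not the authors' rule base): a triangular
# membership function turns a crisp hand-shape measurement, here an assumed
# finger-spread angle, into fuzzy degrees that a simple rule can combine.
def triangular(x, a, b, c):
    """Triangular membership function peaking at b over the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    if x == b:
        return 1.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def classify_spread(angle_deg):
    degrees = {
        "narrow": triangular(angle_deg, 0, 10, 30),
        "medium": triangular(angle_deg, 20, 45, 70),
        "wide": triangular(angle_deg, 60, 90, 120),
    }
    # A toy rule: the label with the highest membership wins.
    return max(degrees, key=degrees.get), degrees

print(classify_spread(42.0))                   # e.g. ('medium', {...})
```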
41

Hu, Hezhen, Wengang Zhou, and Houqiang Li. "Hand-Model-Aware Sign Language Recognition." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (May 18, 2021): 1558–66. http://dx.doi.org/10.1609/aaai.v35i2.16247.

Abstract:
Hand gestures play a dominant role in the expression of sign language. Current deep-learning based video sign language recognition (SLR) methods usually follow a data-driven paradigm under the supervision of the category label. However, those methods suffer limited interpretability and may encounter the overfitting issue due to limited sign data sources. In this paper, we introduce the hand prior and propose a new hand-model-aware framework for isolated SLR with the modeling hand as the intermediate representation. We first transform the cropped hand sequence into the latent semantic feature. Then the hand model introduces the hand prior and provides a mapping from the semantic feature to the compact hand pose representation. Finally, the inference module enhances the spatio-temporal pose representation and performs the final recognition. Due to the lack of annotation on the hand pose under current sign language datasets, we further guide its learning by utilizing multiple weakly-supervised losses to constrain its spatial and temporal consistency. To validate the effectiveness of our method, we perform extensive experiments on four benchmark datasets, including NMFs-CSL, SLR500, MSASL and WLASL. Experimental results demonstrate that our method achieves state-of-the-art performance on all four popular benchmarks with a notable margin.
42

Yamamoto, Yohsuke, Masafumi Uchida, and Hideto Ide. "Sign Language Recognition by Statistics Method." IEEJ Transactions on Electronics, Information and Systems 117, no. 3 (1997): 334–35. http://dx.doi.org/10.1541/ieejeiss1987.117.3_334.

43

Ghuse, Prof Namrata. "Sign Language Recognition using Smart Glove." International Journal for Research in Applied Science and Engineering Technology 9, no. VII (July 10, 2021): 328–33. http://dx.doi.org/10.22214/ijraset.2021.36347.

Abstract:
Sign language recognition through technology has been a neglected idea, even though an enormous community could benefit from it. More than 3% of the world's population cannot speak or hear properly. Using hand-gesture-based communication, specially-impaired people can communicate with each other and with the rest of the world's population; it is a means of correspondence for those with speaking and hearing incompetency. Normal people generally do not become familiar with sign-language-based communication, which creates a gap between specially-impaired people and ordinary people. Previous versions of the project involved the concepts of image generation and emoji symbols, but those frameworks are neither affordable nor portable for the impaired person. The main purpose of the project has always been to interpret Indian Sign Language and American Sign Language standards, convert gestures into voice and text, and also help the impaired person interact with another person from a remote location. The smart glove has been built with a gyroscope, flex sensors, an ESP32 microcontroller/micro:bit, an accelerometer, a 25-LED matrix actuator/output, and a vibrator.
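As a hardware-side illustration, the MicroPython sketch below reads one flex-sensor channel on an ESP32 and maps it to a rough bend percentage; the firmware choice (MicroPython rather than the Arduino toolchain), pin number, and calibration bounds are assumptions, not the authors' setup.

```python
# A MicroPython sketch for one flex-sensor channel on an ESP32. Pin number and
# calibration bounds are illustrative assumptions, not the authors' configuration.
from machine import ADC, Pin
import time

flex = ADC(Pin(34))                # an ADC1-capable pin on many ESP32 boards
flex.atten(ADC.ATTN_11DB)          # allow roughly the full 0-3.3 V input range

FLAT_RAW, BENT_RAW = 1800, 3000    # calibration readings: straight vs. bent finger

def bend_percent(raw):
    span = BENT_RAW - FLAT_RAW
    return max(0, min(100, 100 * (raw - FLAT_RAW) // span))

while True:
    raw = flex.read()              # 0..4095 on the ESP32's 12-bit ADC
    print("raw:", raw, "bend:", bend_percent(raw), "%")
    time.sleep_ms(200)
```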
44

Ghuse, Prof Namrata. "Sign Language Recognition using Smart Glove." International Journal for Research in Applied Science and Engineering Technology 9, no. VI (July 15, 2021): 789–92. http://dx.doi.org/10.22214/ijraset.2021.36465.

Abstract:
Sign language recognition through technology has been a neglected idea, even though an enormous community could benefit from it. More than 3% of the world's population cannot speak or hear properly. Using hand-gesture-based communication, specially-impaired people can communicate with each other and with the rest of the world's population; it is a means of correspondence for those with speaking and hearing incompetency. Normal people generally do not become familiar with sign-language-based communication, which creates a gap between specially-impaired people and ordinary people. Previous versions of the project involved the concepts of image generation and emoji symbols, but those frameworks are neither affordable nor portable for the impaired person. The main purpose of the project has always been to interpret Indian Sign Language and American Sign Language standards, convert gestures into voice and text, and also help the impaired person interact with another person from a remote location. The smart glove has been built with a gyroscope, flex sensors, an ESP32 microcontroller/micro:bit, an accelerometer, a 25-LED matrix actuator/output, and a vibrator.
45

Raheja, J. L., A. Mishra, and A. Chaudhary. "Indian sign language recognition using SVM." Pattern Recognition and Image Analysis 26, no. 2 (April 2016): 434–41. http://dx.doi.org/10.1134/s1054661816020164.

46

Das, Siddhartha Pratim, Anjan Kumar Talukdar, and Kandarpa Kumar Sarma. "Sign Language Recognition Using Facial Expression." Procedia Computer Science 58 (2015): 210–16. http://dx.doi.org/10.1016/j.procs.2015.08.056.

47

Sidig, Ala addin I., Hamzah Luqman, and Sabri A. Mahmoud. "Transform-based Arabic sign language recognition." Procedia Computer Science 117 (2017): 2–9. http://dx.doi.org/10.1016/j.procs.2017.10.087.

48

Tamura, Shinichi, and Shingo Kawasaki. "Recognition of sign language motion images." Pattern Recognition 21, no. 4 (January 1988): 343–53. http://dx.doi.org/10.1016/0031-3203(88)90048-9.

49

Rastgoo, Razieh, Kourosh Kiani, and Sergio Escalera. "Sign Language Recognition: A Deep Survey." Expert Systems with Applications 164 (February 2021): 113794. http://dx.doi.org/10.1016/j.eswa.2020.113794.

50

El Zaar, Abdellah, Nabil Benaya, and Abderrahim El Allati. "Sign Language Recognition: High Performance Deep Learning Approach Applyied To Multiple Sign Languages." E3S Web of Conferences 351 (2022): 01065. http://dx.doi.org/10.1051/e3sconf/202235101065.

Abstract:
In this paper we present a high-performance deep learning architecture based on a Convolutional Neural Network (CNN). The proposed architecture is effective, as it is capable of recognizing and analyzing different sign language datasets with high accuracy. Sign language recognition is one of the most important tasks that will change the lives of deaf people by facilitating their daily life and their integration into society. Our approach was trained and tested on an American Sign Language (ASL) dataset, an Irish Sign Language alphabets (ISL) dataset, and an Arabic Sign Language alphabet (ArASL) dataset, and it outperforms state-of-the-art methods by providing a recognition rate of 99% for ASL and ISL and 98% for ArASL.
