Academic literature on the topic 'Sign language recognition'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Sign language recognition.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Sign language recognition"

1

Mahidar, P. R. "Sign Language Recognition Techniques - A Survey." International Journal of Psychosocial Rehabilitation 24, no. 5 (April 20, 2020): 2747–60. http://dx.doi.org/10.37200/ijpr/v24i5/pr201978.

2

P, Keerthana, Nishanth M, Karpaga Vinayagam D, Alfred Daniel J, and Sangeetha K. "Sign Language Recognition." International Research Journal on Advanced Science Hub 3, Special Issue ICARD 3S (March 20, 2021): 41–44. http://dx.doi.org/10.47392/irjash.2021.060.

3

Jadhav, Akshay, Gayatri Tatkar, Gauri Hanwate, and Rutwik Patwardhan. "Sign Language Recognition." International Journal of Advanced Research in Computer Science and Software Engineering 7, no. 3 (March 30, 2017): 109–15. http://dx.doi.org/10.23956/ijarcsse/v7i3/0127.

4

Dubey, Shriya, Smrithi Suryawanshi, Aditya Rachamalla, and K. Madhu Babu. "Sign Language Recognition." International Journal for Research in Applied Science and Engineering Technology 11, no. 1 (January 31, 2023): 386–92. http://dx.doi.org/10.22214/ijraset.2023.48586.

Abstract:
People communicate using sign language by visually conveying sign patterns to portray purpose. One method of communicating with deaf-mute people is to use sign language mechanisms. The hand gesture is one of the nonverbal communication strategies used in sign language. Many manufacturers all over the world have created various sign language systems, but they are neither adaptable nor cost-effective for end users. In this paper, we present a design that can recognize various American Sign Language static hand gestures in real time using transfer learning, Python, and OpenCV. Our system correctly acknowledges the prevalent sign language terms "Hello", "Yes", "No", "Thank You", and "I Love You". The key steps in the system design are as follows: we created our own dataset of prominent American Sign Language gestures, captured images with OpenCV and a webcam, labelled the images for object detection, trained and tested the dataset with transfer learning using SSD MobileNet, and finally recognized the gestures successfully in real time.
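The pipeline this abstract outlines (webcam capture with OpenCV, an SSD MobileNet detector fine-tuned by transfer learning, labels drawn on live frames) can be sketched as below. This is a minimal sketch, not the authors' code: the model path, label map, and confidence threshold are illustrative assumptions, and a TensorFlow 2 object-detection SavedModel is presumed.

```python
# Minimal sketch of the described real-time loop. Assumptions: an
# SSD-MobileNet model fine-tuned on custom ASL gesture images has been
# exported to ./exported_model/saved_model (hypothetical path).
import cv2
import numpy as np
import tensorflow as tf

LABELS = {1: "Hello", 2: "Yes", 3: "No", 4: "Thank You", 5: "I Love You"}

model = tf.saved_model.load("exported_model/saved_model")  # assumed path
cap = cv2.VideoCapture(0)  # default webcam

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Detection SavedModels exported by the TF Object Detection API take a
    # batched uint8 tensor and return boxes, classes, and scores.
    inp = tf.convert_to_tensor(frame[np.newaxis, ...], dtype=tf.uint8)
    out = model(inp)
    boxes = out["detection_boxes"][0].numpy()
    classes = out["detection_classes"][0].numpy().astype(int)
    scores = out["detection_scores"][0].numpy()
    h, w = frame.shape[:2]
    for box, cls, score in zip(boxes, classes, scores):
        if score < 0.6:  # illustrative confidence threshold
            continue
        y1, x1, y2, x2 = (box * np.array([h, w, h, w])).astype(int)
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(frame, LABELS.get(cls, "?"), (x1, max(y1 - 8, 0)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    cv2.imshow("ASL detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```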
5

Tolentino, Lean Karlo S., Ronnie O. Serfa Juan, August C. Thio-ac, Maria Abigail B. Pamahoy, Joni Rose R. Forteza, and Xavier Jet O. Garcia. "Static Sign Language Recognition Using Deep Learning." International Journal of Machine Learning and Computing 9, no. 6 (December 2019): 821–27. http://dx.doi.org/10.18178/ijmlc.2019.9.6.879.

6

Patil, Prof Pritesh, Ruchir Bhagwat, Pratham Padale, Yash Shah, and Hrutik Surwade. "Sign Language Recognition System." International Journal for Research in Applied Science and Engineering Technology 10, no. 5 (May 31, 2022): 1772–76. http://dx.doi.org/10.22214/ijraset.2022.42626.

Abstract:
A large number of deaf and mute people are present around the world, and communicating with them can be difficult at times, because not everyone understands sign language (a system of communication using visual gestures and signs). In addition, there is a lack of official sign language interpreters; in India, the official number of approved sign language interpreters is only 250 [1]. This makes communication with deaf and mute people very difficult. The majority of teaching methods for deaf and mute people involve accommodating them to people who do not have disabilities, while discouraging the use of sign language; there is a need to encourage the use of sign language instead. People communicate with each other in sign language by using hand and finger gestures. The language serves its purpose by bridging the gap between the deaf-mute and speaking communities. With recent technological developments, sign language identification is a hard subject in the field of computer vision that has room for further progress. In this project, we propose an optimal recognition engine whose main objective is to translate static American Sign Language alphabets, numbers, and words into human- and machine-understandable English script and the other way around. Using neural networks, we offer a machine learning-based technique for identifying American Sign Language. Keywords: deep learning; convolutional neural network; recognition; comparison; sign language
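As a loose illustration of the convolutional approach the keywords name, here is a minimal Keras CNN for classifying static sign images. The 28x28 grayscale input and 26-letter output are assumptions for the sketch, not the authors' architecture.

```python
# Hedged sketch of a small CNN for static ASL alphabet classification.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_asl_cnn(num_classes: int = 26) -> tf.keras.Model:
    """Build a small CNN over 28x28 grayscale sign images (assumed size)."""
    model = models.Sequential([
        layers.Input(shape=(28, 28, 1)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_asl_cnn()
model.summary()  # then model.fit(images, labels) on a real dataset
```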
7

M R, Dr Pooja, Meghana M, Harshith Bhaskar, Anusha Hulatti, Praful Koppalkar, and Bopanna M J. "Sign Language Recognition System." Indian Journal of Software Engineering and Project Management 1, no. 3 (January 10, 2022): 1–3. http://dx.doi.org/10.54105/ijsepm.c9011.011322.

Abstract:
We witness many people who face disabilities such as deafness, muteness, or blindness. They face many challenges and difficulties in trying to interact and communicate with others. This paper presents a new technique that provides a virtual solution without making use of any sensors. Histogram of Oriented Gradients (HOG) together with an Artificial Neural Network (ANN) has been implemented. The user makes use of a web camera, which takes input from the user and processes the images of different gestures. The algorithm recognizes the image and identifies the corresponding voice output. This paper explains a two-way means of communication between impaired and normal people, which implies that the proposed ideology can convert sign language to text and voice.
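A hedged sketch of the HOG-plus-ANN pipeline the abstract outlines, using scikit-image's HOG descriptor and scikit-learn's MLP classifier; the image size, descriptor parameters, and synthetic stand-in data are illustrative assumptions, not the paper's configuration.

```python
# Sketch: HOG features fed to a small neural network classifier.
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.neural_network import MLPClassifier

def hog_features(img):
    """Resize to a fixed shape, then compute a HOG descriptor."""
    img = resize(img, (64, 64))
    return hog(img, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))

# Synthetic stand-in for webcam gesture crops (real labelled data assumed).
rng = np.random.default_rng(0)
images = rng.random((40, 120, 120))
labels = rng.integers(0, 4, size=40)  # four placeholder gesture classes

X = np.array([hog_features(im) for im in images])
clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=300, random_state=0)
clf.fit(X, labels)
print(clf.predict(X[:5]))
```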
8

Zaki Abdo, Mahmoud, Alaa Mahmoud Hamdy, Sameh Abd El-Rahman Salem, and El-Sayed Mostafa Saad. "Arabic Sign Language Recognition." International Journal of Computer Applications 89, no. 20 (March 26, 2014): 19–26. http://dx.doi.org/10.5120/15747-4523.

9

Holden, Eun-Jung, Gareth Lee, and Robyn Owens. "Australian sign language recognition." Machine Vision and Applications 16, no. 5 (November 25, 2005): 312–20. http://dx.doi.org/10.1007/s00138-005-0003-1.


Dissertations / Theses on the topic "Sign language recognition"

1

Nel, Warren. "An integrated sign language recognition system." Thesis, University of the Western Cape, 2014. http://hdl.handle.net/11394/3584.

Abstract:
Doctor Educationis
Research has shown that five parameters are required to recognize any sign language gesture: hand shape, location, orientation and motion, as well as facial expressions. The South African Sign Language (SASL) research group at the University of the Western Cape has created systems to recognize sign language gestures using single parameters. Using a single parameter can cause ambiguities in the recognition of signs that are similarly signed, resulting in a restriction of the possible vocabulary size. This research pioneers work at the group towards combining multiple parameters to achieve a larger recognition vocabulary set. The proposed methodology combines hand location and hand shape recognition into one combined recognition system. The system is shown to be able to recognize a very large vocabulary of 50 signs at a high average accuracy of 74.1%. This vocabulary size is much larger than that of existing SASL recognition systems, and the system achieves a higher accuracy than these systems in spite of the large vocabulary. It is also shown that the system is highly robust to variations in test subjects such as skin colour, gender and body dimension. Furthermore, the group pioneers research towards continuously recognizing signs from a video stream, whereas existing systems recognized a single sign at a time. To this end, a highly accurate continuous gesture segmentation strategy is proposed and shown to be able to accurately recognize sentences consisting of five isolated SASL gestures.
2

Zafrulla, Zahoor. "Automatic recognition of American sign language classifiers." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/53461.

Abstract:
Automatically recognizing classifier-based grammatical structures of American Sign Language (ASL) is a challenging problem. Classifiers in ASL utilize surrogate hand shapes for people or "classes" of objects and provide information about their location, movement and appearance. In the past researchers have focused on recognition of finger spelling, isolated signs, facial expressions and interrogative words like WH-questions (e.g. Who, What, Where, and When). Challenging problems such as recognition of ASL sentences and classifier-based grammatical structures remain relatively unexplored in the field of ASL recognition.  One application of recognition of classifiers is toward creating educational games to help young deaf children acquire language skills. Previous work developed CopyCat, an educational ASL game that requires children to engage in a progressively more difficult expressive signing task as they advance through the game.   We have shown that by leveraging context we can use verification, in place of recognition, to boost machine performance for determining if the signed responses in an expressive signing task, like in the CopyCat game, are correct or incorrect. We have demonstrated that the quality of a machine verifier's ability to identify the boundary of the signs can be improved by using a novel two-pass technique that combines signed input in both forward and reverse directions. Additionally, we have shown that we can reduce CopyCat's dependency on custom manufactured hardware by using an off-the-shelf Microsoft Kinect depth camera to achieve similar verification performance. Finally, we show how we can extend our ability to recognize sign language by leveraging depth maps to develop a method using improved hand detection and hand shape classification to recognize selected classifier-based grammatical structures of ASL.
3

Nayak, Sunita. "Representation and learning for sign language recognition." [Tampa, Fla] : University of South Florida, 2008. http://purl.fcla.edu/usf/dc/et/SFE0002362.

4

Nurena-Jara, Roberto, Cristopher Ramos-Carrion, and Pedro Shiguihara-Juarez. "Data collection of 3D spatial features of gestures from static Peruvian sign language alphabet for sign language recognition." Institute of Electrical and Electronics Engineers Inc, 2020. http://hdl.handle.net/10757/656634.

Abstract:
The full text of this work is not available in the UPC Academic Repository due to restrictions imposed by the publisher where it was published.
Peruvian Sign Language Recognition (PSL) is approached as a classification problem. Previous work has employed 2D features from the position of the hands to tackle this problem. In this paper, we propose a method to construct a dataset consisting of 3D spatial positions of static gestures from the PSL alphabet, using the HTC Vive device and a well-known technique to extract 21 keypoints from the hand to obtain a feature vector. A dataset of 35,400 instances of gestures for PSL was constructed, and a novel way to extract the data was described. To validate the appropriateness of this dataset, a comparison of four baseline classifiers on the Peruvian Sign Language Recognition (PSLR) task was conducted, achieving an average F1 measure of 99.32% in the best case.
Peer reviewed.
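The dataset-and-baselines design this abstract describes (21 hand keypoints turned into a feature vector, four baseline classifiers compared by F1) might look roughly like this sketch; the particular classifiers, the three-coordinate keypoint encoding, and the random stand-in data are assumptions for illustration.

```python
# Sketch: 21 keypoints x 3 coordinates per gesture, four assumed baseline
# classifiers compared by macro-averaged F1 on synthetic stand-in data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.random((1000, 21 * 3))      # stand-in for HTC Vive keypoint data
y = rng.integers(0, 24, size=1000)  # stand-in for static alphabet labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
baselines = {
    "SVM": SVC(),
    "k-NN": KNeighborsClassifier(),
    "Logistic regression": LogisticRegression(max_iter=1000),
    "Naive Bayes": GaussianNB(),
}
for name, clf in baselines.items():
    clf.fit(X_tr, y_tr)
    print(name, f1_score(y_te, clf.predict(X_te), average="macro"))
```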
5

Cooper, H. M. "Sign language recognition : generalising to more complex corpora." Thesis, University of Surrey, 2010. http://epubs.surrey.ac.uk/843617/.

Abstract:
The aim of this thesis is to find new approaches to Sign Language Recognition (SLR) which are suited to working with the limited corpora currently available. Data available for SLR is of limited quality; low resolution and frame rates make the task of recognition even more complex. The content is rarely natural, concentrating on isolated signs and filmed under laboratory conditions. In addition, the amount of accurately labelled data is minimal. To this end, several contributions are made: tracking the hands is eschewed in favour of detection-based techniques that are more robust to noise, and these are investigated both for signs and for linguistically-motivated sign sub-units, to make best use of limited data sets. Finally, an algorithm is proposed to learn signs from the inset signers on TV, with the aid of the accompanying subtitles, thus increasing the corpus of data available. Tracking fast-moving hands under laboratory conditions is a complex task; with real-world data the challenge is even greater. When using tracked data as a base for SLR, the errors in the tracking are compounded at the classification stage. Proposed instead is a novel sign detection method, which views space-time as a 3D volume and the sign within it as an object to be located. Features are combined into strong classifiers using a novel boosting implementation designed to create optimal classifiers over sparse datasets. Using boosted volumetric features on a robust frame-differenced input, average classification rates reach 71% on seen signers and 66% on a mixture of seen and unseen signers, with individual sign classification rates reaching 95%. Using a classifier-per-sign approach to SLR means that data sets need to contain numerous examples of the signs to be learnt. Instead, this thesis proposes learnt classifiers to detect the common sub-units of sign. The responses of these classifiers can then be combined for recognition at the sign level. This approach requires fewer examples per sign to be learnt, since the sub-unit detectors are trained on data from multiple signs. It is also faster at detection time since there are fewer classifiers to consult, the number of these being limited by the linguistics of sign and not the number of signs being detected. For this method, appearance-based boosted classifiers are introduced to distinguish the sub-units of sign. Results show that, when combined with temporal models, these novel sub-unit classifiers can outperform similar classifiers learnt on tracked results. As an added side effect, since the sub-units are linguistically derived, they can be used independently to help linguistic annotators. Since sign language data sets are costly to collect and annotate, there are not many publicly available. Those which are tend to be constrained in content and often taken under laboratory conditions. However, in the UK, the British Broadcasting Corporation (BBC) regularly produces programs with an inset signer and corresponding subtitles. This provides a natural signer, covering a wide range of topics, in real-world conditions. While it has no ground truth, it is proposed that the translated subtitles can provide weak labels for learning signs. The final contributions of this thesis lead to an innovative approach to learn signs from these co-occurring streams of data. Using a unique, temporally constrained version of the Apriori mining algorithm, similar sections of video are identified as possible sign locations. These estimates are improved upon by introducing the concept of contextual negatives, removing contextually similar noise. Combined with an iterative honing process to enhance the localisation of the target sign, 23 word/sign combinations are learnt from a 30-minute news broadcast, providing a novel method for automatic data set creation.
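The "strong classifiers from boosted features" idea in this abstract can be illustrated, very loosely, with a generic boosting stand-in from scikit-learn; the thesis's own volumetric features and custom boosting implementation are not reproduced here, and the data is synthetic.

```python
# Loose illustration: boosting weak decision stumps into a strong binary
# sign/non-sign classifier, standing in for the thesis's custom scheme.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.random((800, 50))                     # placeholder space-time features
y = (X[:, :5].sum(axis=1) > 2.5).astype(int)  # toy binary labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
# AdaBoost's default weak learner is a depth-1 decision stump.
clf = AdaBoostClassifier(n_estimators=100).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```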
6

Li, Pei. "Hand shape estimation for South African sign language." Thesis, University of the Western Cape, 2012. http://hdl.handle.net/11394/4374.

Abstract:
Magister Scientiae - MSc
Hand shape recognition is a pivotal part of any system that attempts to implement Sign Language recognition. This thesis presents a novel system which recognises hand shapes from a single camera view in 2D. By mapping the recognised hand shape from 2D to 3D,it is possible to obtain 3D co-ordinates for each of the joints within the hand using the kinematics embedded in a 3D hand avatar and smooth the transformation in 3D space between any given hand shapes. The novelty in this system is that it does not require a hand pose to be recognised at every frame, but rather that hand shapes be detected at a given step size. This architecture allows for a more efficient system with better accuracy than other related systems. Moreover, a real-time hand tracking strategy was developed that works efficiently for any skin tone and a complex background.
7

Belissen, Valentin. "From Sign Recognition to Automatic Sign Language Understanding : Addressing the Non-Conventionalized Units." Electronic Thesis or Diss., université Paris-Saclay, 2020. http://www.theses.fr/2020UPASG064.

Abstract:
Sign Languages (SLs) have developed naturally in Deaf communities. With no written form, they are oral languages, using the gestural channel for expression and the visual channel for reception. These poorly endowed languages do not meet with a broad consensus at the linguistic level. These languages make use of lexical signs, i.e. conventionalized units of language whose form is supposed to be arbitrary, but also - and unlike vocal languages, if we don't take into account the co-verbal gestures - iconic structures, using space to organize discourse. Iconicity, which is defined as the existence of a similarity between the form of a sign and the meaning it carries, is indeed used at several levels of SL discourse.Most research in automatic Sign Language Recognition (SLR) has in fact focused on recognizing lexical signs, at first in the isolated case and then within continuous SL. The video corpora associated with such research are often relatively artificial, consisting of the repetition of elicited utterances in written form. Other corpora consist of interpreted SL, which may also differ significantly from natural SL, as it is strongly influenced by the surrounding vocal language.In this thesis, we wish to show the limits of this approach, by broadening this perspective to consider the recognition of elements used for the construction of discourse or within illustrative structures.To do so, we show the interest and the limits of the corpora developed by linguists. In these corpora, the language is natural and the annotations are sometimes detailed, but not always usable as input data for machine learning systems, as they are not necessarily complete or coherent. We then propose the redesign of a French Sign Language dialogue corpus, Dicta-Sign-LSF-v2, with rich and consistent annotations, following an annotation scheme shared by many linguists.We then propose a redefinition of the problem of automatic SLR, consisting in the recognition of various linguistic descriptors, rather than focusing on lexical signs only. At the same time, we discuss adapted metrics for relevant performance assessment.In order to perform a first experiment on the recognition of linguistic descriptors that are not only lexical, we then develop a compact and generalizable representation of signers in videos. This is done by parallel processing of the hands, face and upper body, using existing tools and models that we have set up. Besides, we preprocess these parallel representations to obtain a relevant feature vector. We then present an adapted and modular architecture for automatic learning of linguistic descriptors, consisting of a recurrent and convolutional neural network.Finally, we show through a quantitative and qualitative analysis the effectiveness of the proposed model, tested on Dicta-Sign-LSF-v2. We first carry out an in-depth analysis of the parameterization, evaluating both the learning model and the signer representation. The study of the model predictions then demonstrates the merits of the proposed approach, with a very interesting performance for the continuous recognition of four linguistic descriptors, especially in view of the uncertainty related to the annotations themselves. The segmentation of the latter is indeed subjective, and the very relevance of the categories used is not strongly demonstrated. Indirectly, the proposed model could therefore make it possible to measure the validity of these categories. 
With several areas for improvement being considered, particularly in terms of signer representation and the use of larger corpora, the results are very encouraging and pave the way for a wider understanding of continuous Sign Language Recognition.
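As a very rough sketch of the recurrent-and-convolutional architecture the abstract mentions for continuous recognition of linguistic descriptors, here is an assumed Keras model producing per-frame predictions for four binary descriptors; the feature size, layer sizes, and framing are illustrative guesses, not the thesis's actual design.

```python
# Assumed sketch: per-frame prediction of four binary linguistic
# descriptors from a sequence of signer feature vectors.
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN, FEAT_DIM, N_DESCRIPTORS = 100, 128, 4  # illustrative sizes

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, FEAT_DIM)),
    layers.Conv1D(64, kernel_size=5, padding="same", activation="relu"),
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
    # One independent sigmoid per descriptor, per frame.
    layers.TimeDistributed(layers.Dense(N_DESCRIPTORS, activation="sigmoid")),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```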
8

Rupe, Jonathan C. "Vision-based hand shape identification for sign language recognition." Link to online version, 2005. https://ritdml.rit.edu/dspace/handle/1850/940.

9

Mudduluru, Sravani. "Indian Sign Language Numbers Recognition using Intel RealSense Camera." DigitalCommons@CalPoly, 2017. https://digitalcommons.calpoly.edu/theses/1815.

Abstract:
The use of gesture-based interaction with devices has been a significant area of research in the field of computer science for many years. The main idea of this kind of interaction is to ease the user experience by providing a high degree of freedom and a more natural, interactive way of communicating with technology. The significant areas of application of gesture recognition are in video gaming, human-computer interaction, virtual reality, smart home appliances, medical systems, robotics and several others. With the availability of devices such as the Kinect, Leap Motion and Intel RealSense cameras, accessing depth as well as color information has become available to the public at affordable cost. The Intel RealSense camera is a USB-powered controller that can be supported with few hardware requirements, such as Windows 8 and above. It is one such camera that can be used to track human body information, similar to the Kinect and Leap Motion. It was designed specifically to provide more detailed information about the different parts of the human body, such as the face and hands, and to give users more natural and intuitive interactions with smart devices by providing features such as creating 3D avatars, high-quality 3D prints, high-quality graphic gaming visuals, virtual reality and others. The main aim of this study is to analyze hand tracking information and build a training model in order to decide whether this camera is suitable for sign language. In this study, we extracted the joint information of 22 joint labels per hand. We trained the model to identify the Indian Sign Language (ISL) numbers from 0-9. Through this study we found that the multi-class SVM model showed a higher accuracy of 93.5% when compared to the decision tree and KNN models.
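A minimal sketch of the comparison described above: a multi-class SVM over 22 joints per hand against decision-tree and k-NN baselines. The three-coordinate joint encoding and the random stand-in for RealSense joint data are assumptions.

```python
# Sketch: multi-class SVM vs. decision tree vs. k-NN on per-hand joint
# features (22 joints x 3 coordinates, synthetic stand-in data).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
X = rng.random((500, 22 * 3))       # 22 joints, (x, y, z) each
y = rng.integers(0, 10, size=500)   # ISL digits 0-9 as placeholder labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("Decision tree", DecisionTreeClassifier(random_state=0)),
                  ("k-NN", KNeighborsClassifier(n_neighbors=5))]:
    clf.fit(X_tr, y_tr)
    print(name, clf.score(X_te, y_te))
```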
10

Brashear, Helene Margaret. "Improving the efficacy of automated sign language practice tools." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/34703.

Abstract:
The CopyCat project is an interdisciplinary effort to create a set of computer-aided language learning tools for deaf children. The CopyCat games allow children to interact with characters using American Sign Language (ASL). Through Wizard of Oz pilot studies we have developed a set of games, shown their efficacy in improving young deaf children's language and memory skills, and collected a large corpus of signing examples. Our previous implementation of the automatic CopyCat games uses automatic sign language recognition and verification in the infrastructure of a memory repetition and phrase verification task. The goal of my research is to expand the automatic sign language system to transition the CopyCat games to include the flexibility of a dialogue system. I have created a labeling ontology from analysis of the CopyCat signing corpus, and I have used the ontology to describe the contents of the CopyCat data set. This ontology was used to change and improve the automatic sign language recognition system and to add flexibility to language use in the automatic game.

Books on the topic "Sign language recognition"

1

Grobel, Kirsti. Videobasierte Gebärdenspracherkennung mit Hidden-Markov-Modellen. Düsseldorf: VDI Verlag, 1999.

2

Johnson, Robert E., ed. RSVP: Fingerspelled word recognition through rapid serial visual presentation. San Diego, CA: DawnSignPress, 2011.

3

De Meulder, Maartje, Joseph J. Murray, and Rachel L. McKee, eds. The Legal Recognition of Sign Languages. Bristol, Blue Ridge Summit: Multilingual Matters, 2019. http://dx.doi.org/10.21832/9781788924016.

4

Hutchison, David. Gesture-Based Human-Computer Interaction and Simulation: 7th International Gesture Workshop, GW 2007, Lisbon, Portugal, May 23-25, 2007, Revised Selected Papers. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009.

5

Signs of recognition: Powers and hazards of representation in an Indonesian society. Berkeley: University of California Press, 1997.

6

Shweta, Dour. Real Time Recognition of Indian Sign Language. Blurb, Incorporated, 2022.

7

Murray, Joseph J., Maartje De Meulder, and Rachel L. McKee. Legal Recognition of Sign Languages: Advocacy and Outcomes Around the World. Multilingual Matters, 2019.


Book chapters on the topic "Sign language recognition"

1

Cooper, Helen, Brian Holt, and Richard Bowden. "Sign Language Recognition." In Visual Analysis of Humans, 539–62. London: Springer London, 2011. http://dx.doi.org/10.1007/978-0-85729-997-0_27.

2

Holden, Eun-Jung, and Robyn Owens. "Visual Sign Language Recognition." In Multi-Image Analysis, 270–87. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-45134-x_20.

3

Cooper, Helen, Eng-Jon Ong, Nicolas Pugeault, and Richard Bowden. "Sign Language Recognition Using Sub-units." In Gesture Recognition, 89–118. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-57021-1_3.

4

Lang, Simon, Marco Block, and Raúl Rojas. "Sign Language Recognition Using Kinect." In Artificial Intelligence and Soft Computing, 394–402. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-29347-4_46.

5

Edwards, Alistair D. N. "Progress in sign language recognition." In Gesture and Sign Language in Human-Computer Interaction, 13–21. Berlin, Heidelberg: Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/bfb0052985.

6

Sagar, Laxmi Kant, Kartik Kumar, Akshit Goyal, Riya Singh, and Anubhaw Kumar Soni. "Sign Language Recognition Using AI." In Sustainable Computing, 147–57. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-13577-4_8.

7

De Meulder, Maartje, and Thierry Haesenne. "18. A Belgian Compromise? Recognising French-Belgian Sign Language and Flemish Sign Language." In The Legal Recognition of Sign Languages, edited by Maartje De Meulder, Joseph J. Murray, and Rachel L. McKee, 284–300. Bristol, Blue Ridge Summit: Multilingual Matters, 2019. http://dx.doi.org/10.21832/9781788924016-020.

8

Sang, Haifeng, and Hongjiao Wu. "A Sign Language Recognition System in Complex Background." In Biometric Recognition, 453–61. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46654-5_50.

9

Hong, Sung-Eun, Hyunhwa Lee, Mi-Hye Lee, and Seung-Il Byun. "2. The Korean Sign Language Act." In The Legal Recognition of Sign Languages, edited by Maartje De Meulder, Joseph J. Murray, and Rachel L. McKee, 36–51. Bristol, Blue Ridge Summit: Multilingual Matters, 2019. http://dx.doi.org/10.21832/9781788924016-004.

10

Muruvik Vonen, Arnfinn, and Paal Richard Peterson. "12. Sign Language Legislation in Norway." In The Legal Recognition of Sign Languages, edited by Maartje De Meulder, Joseph J. Murray, and Rachel L. McKee, 191–206. Bristol, Blue Ridge Summit: Multilingual Matters, 2019. http://dx.doi.org/10.21832/9781788924016-014.


Conference papers on the topic "Sign language recognition"

1

Pahlevanzadeh, Maryam, Mansour Vafadoost, and Majid Shahnazi. "Sign language recognition." In 2007 9th International Symposium on Signal Processing and Its Applications (ISSPA). IEEE, 2007. http://dx.doi.org/10.1109/isspa.2007.4555448.

2

Schioppo, Jacob, Zachary Meyer, Diego Fabiano, and Shaun Canavan. "Sign Language Recognition." In CHI '19: CHI Conference on Human Factors in Computing Systems. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3290607.3313025.

3

Karayilan, Tulay, and Ozkan Kilic. "Sign language recognition." In 2017 International Conference on Computer Science and Engineering (UBMK). IEEE, 2017. http://dx.doi.org/10.1109/ubmk.2017.8093509.

4

Kumar, Anup, Karun Thankachan, and Mevin M. Dominic. "Sign language recognition." In 2016 3rd International Conference on Recent Advances in Information Technology (RAIT). IEEE, 2016. http://dx.doi.org/10.1109/rait.2016.7507939.

5

Aishwarya, Aparna, and Divya Jennifer D'Souza. "Sign Language Recognition." In 2021 IEEE International Conference on Distributed Computing, VLSI, Electrical Circuits and Robotics (DISCOVER). IEEE, 2021. http://dx.doi.org/10.1109/discover52564.2021.9663629.

6

Guerra, Rúbia Reis, Tamires Martins Rezende, Frederico Gadelha Guimarães, and Sílvia Grasiella Moreira Almeida. "Facial Expression Analysis in Brazilian Sign Language for Sign Recognition." In XV Encontro Nacional de Inteligência Artificial e Computacional. Sociedade Brasileira de Computação - SBC, 2018. http://dx.doi.org/10.5753/eniac.2018.4418.

Abstract:
Sign language is one of the main forms of communication used by the deaf community. The language’s smallest unit, a “sign”, comprises a series of intricate manual and facial gestures. As opposed to speech recognition, sign language recognition (SLR) lags behind, presenting a multitude of open challenges because this language is visual-motor. This paper aims to explore two novel approaches in feature extraction of facial expressions in SLR, and to propose the use of Random Forest (RF) in Brazilian SLR as a scalable alternative to Support Vector Machines (SVM) and k-Nearest Neighbors (k-NN). Results show that RF’s performance is at least comparable to SVM’s and k-NN’s, and validate non-manual parameter recognition as a consistent step towards SLR.
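The comparison proposed here (Random Forest against SVM and k-NN on facial-expression features) can be sketched with scikit-learn cross-validation; the landmark-based feature dimensionality and synthetic data below are placeholders, not the authors' Brazilian Sign Language dataset.

```python
# Hedged sketch: cross-validated comparison of RF, SVM, and k-NN on
# placeholder facial-expression feature vectors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(7)
X = rng.random((600, 68 * 2))     # e.g. 68 2D facial landmarks (assumed)
y = rng.integers(0, 5, size=600)  # placeholder sign classes

for name, clf in [("Random Forest", RandomForestClassifier(n_estimators=200)),
                  ("SVM", SVC()),
                  ("k-NN", KNeighborsClassifier())]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```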
7

Caridakis, George, Olga Diamanti, Kostas Karpouzis, and Petros Maragos. "Automatic sign language recognition." In the 1st ACM international conference. New York, New York, USA: ACM Press, 2008. http://dx.doi.org/10.1145/1389586.1389687.

8

Deora, Divya, and Nikesh Bajaj. "Indian sign language recognition." In 2012 1st International Conference on Emerging Technology Trends in Electronics, Communication and Networking (ET2ECN). IEEE, 2012. http://dx.doi.org/10.1109/et2ecn.2012.6470093.

9

Pankajakshan, Priyanka C., and Thilagavathi B. "Sign language recognition system." In 2015 International Conference on Innovations in Information,Embedded and Communication Systems (ICIIECS). IEEE, 2015. http://dx.doi.org/10.1109/iciiecs.2015.7192910.

10

Sandjaja, Iwan Njoto, and Nelson Marcos. "Sign Language Number Recognition." In 2009 Fifth International Joint Conference on INC, IMS and IDC. IEEE, 2009. http://dx.doi.org/10.1109/ncm.2009.357.
