Academic literature on the topic 'Sign language – Translating'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Sign language – Translating.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Sign language – Translating"

1

Goyal, Lalit, and Vishal Goyal. "Text to Sign Language Translation System." International Journal of Synthetic Emotions 7, no. 2 (2016): 62–77. http://dx.doi.org/10.4018/ijse.2016070104.

Abstract:
Many machine translation systems for spoken languages are available, but translation systems between spoken and sign languages are limited. Translation from text to sign language differs from translation between spoken languages because sign language is a visual-spatial language that uses the hands, arms, face, head, and body postures to communicate in three dimensions. The translation is also complex because the grammar rules for sign language are not standardized. Still, a number of approaches have been used for translating text to sign language, in which the input is text and the output takes the form of pre-recorded videos or a computer-generated animated character (avatar). This paper reviews the research carried out on automatic translation from text to sign language.
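The lookup-based approach this review surveys can be sketched in a few lines: each known word maps to a pre-recorded clip (or an avatar animation ID), and out-of-vocabulary words fall back to fingerspelling. This is a minimal illustration only; the clip names and dictionary are hypothetical, not taken from any surveyed system.

```python
# Minimal sketch of a lookup-based text-to-sign pipeline. Clip paths
# are hypothetical placeholders for pre-recorded videos or avatar clips.
SIGN_CLIPS = {
    "hello": "clips/hello.mp4",
    "thank": "clips/thank.mp4",
    "you": "clips/you.mp4",
}

def text_to_sign(sentence: str) -> list:
    """Return the ordered list of clips to play for a sentence."""
    plan = []
    for word in sentence.lower().split():
        if word in SIGN_CLIPS:
            plan.append(SIGN_CLIPS[word])
        else:
            # Fingerspell out-of-vocabulary words letter by letter.
            plan.extend(f"clips/letters/{ch}.mp4" for ch in word if ch.isalpha())
    return plan

print(text_to_sign("thank you"))  # → ['clips/thank.mp4', 'clips/you.mp4']
```

Real systems additionally reorder the input according to sign language grammar before lookup, which is where most of the complexity the paper discusses lies.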
2

Wolfe, Rosalee, John C. McDonald, Thomas Hanke, et al. "Sign Language Avatars: A Question of Representation." Information 13, no. 4 (2022): 206. http://dx.doi.org/10.3390/info13040206.

Abstract:
Given the achievements in automatically translating text from one language to another, one would expect to see similar advancements in translating between signed and spoken languages. However, progress in this effort has lagged in comparison. Typically, machine translation consists of processing text from one language to produce text in another. Because signed languages have no generally-accepted written form, translating spoken to signed language requires the additional step of displaying the language visually as animation through the use of a three-dimensional (3D) virtual human commonly known as an avatar. Researchers have been grappling with this problem for over twenty years, and it is still an open question. With the goal of developing a deeper understanding of the challenges posed by this question, this article gives a summary overview of the unique aspects of signed languages, briefly surveys the technology underlying avatars and performs an in-depth analysis of the features in a textual representation for avatar display. It concludes with a comparison of these features and makes observations about future research directions.
3

González-Rodríguez, Jaime-Rodrigo, Diana-Margarita Córdova-Esparza, Juan Terven, and Julio-Alejandro Romero-González. "Towards a Bidirectional Mexican Sign Language–Spanish Translation System: A Deep Learning Approach." Technologies 12, no. 1 (2024): 7. http://dx.doi.org/10.3390/technologies12010007.

Abstract:
People with hearing disabilities often face communication barriers when interacting with hearing individuals. To address this issue, this paper proposes a bidirectional Sign Language Translation System that aims to bridge the communication gap. Deep learning models such as recurrent neural networks (RNN), bidirectional RNN (BRNN), LSTM, GRU, and Transformers are compared to find the most accurate model for sign language recognition and translation. Keypoint detection using MediaPipe is employed to track and understand sign language gestures. The system features a user-friendly graphical interface with modes for translating between Mexican Sign Language (MSL) and Spanish in both directions. Users can input signs or text and obtain corresponding translations. Performance evaluation demonstrates high accuracy, with the BRNN model achieving 98.8% accuracy. The research emphasizes the importance of hand features in sign language recognition. Future developments could focus on enhancing accessibility and expanding the system to support other sign languages. This Sign Language Translation System offers a promising solution to improve communication accessibility and foster inclusivity for individuals with hearing disabilities.
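The recognition side of the system described above rests on keypoint detection: MediaPipe-style hand landmarks are flattened into a feature vector that a sequence model classifies. As a hedged sketch of that idea, a nearest-neighbour matcher over toy feature vectors is shown below; the templates are illustrative 4-dimensional stand-ins, not real landmark data, and the paper's BRNN/LSTM/Transformer models replace this matcher.

```python
import math

# Toy "templates": label -> flattened keypoint feature vector.
# Real MediaPipe hands yield 21 landmarks per hand; these 4-dim
# vectors are illustrative assumptions only.
TEMPLATES = {
    "hola": [0.1, 0.9, 0.2, 0.8],
    "gracias": [0.9, 0.1, 0.8, 0.2],
}

def classify(keypoints: list) -> str:
    """Nearest-neighbour classification by Euclidean distance."""
    return min(TEMPLATES, key=lambda label: math.dist(TEMPLATES[label], keypoints))

print(classify([0.12, 0.88, 0.22, 0.79]))  # → hola
```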
4

De Coster, Mathieu, and Joni Dambre. "Leveraging Frozen Pretrained Written Language Models for Neural Sign Language Translation." Information 13, no. 5 (2022): 220. http://dx.doi.org/10.3390/info13050220.

Abstract:
We consider neural sign language translation: machine translation from signed to written languages using encoder–decoder neural networks. Translating sign language videos to written language text is especially complex because of the difference in modality between source and target language and, consequently, the required video processing. At the same time, sign languages are low-resource languages, their datasets dwarfed by those available for written languages. Recent advances in written language processing and success stories of transfer learning raise the question of how pretrained written language models can be leveraged to improve sign language translation. We apply the Frozen Pretrained Transformer (FPT) technique to initialize the encoder, decoder, or both, of a sign language translation model with parts of a pretrained written language model. We observe that the attention patterns transfer in zero-shot to the different modality and, in some experiments, we obtain higher scores (from 18.85 to 21.39 BLEU-4). Especially when gloss annotations are unavailable, FPTs can increase performance on unseen data. However, current models appear to be limited primarily by data quality and only then by data quantity, limiting potential gains with FPTs. Therefore, in further research, we will focus on improving the representations used as inputs to translation models.
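The BLEU-4 figures the abstract reports (18.85 to 21.39) are geometric means of modified 1–4-gram precisions with a brevity penalty. As a hedged sketch, an unsmoothed sentence-level BLEU-4 can be computed as follows; published results use corpus-level, smoothed BLEU, so this is illustrative only.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu4(candidate: str, reference: str) -> float:
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, 5):
        c, r = ngrams(cand, n), ngrams(ref, n)
        overlap = sum(min(c[g], r[g]) for g in c)
        total = max(sum(c.values()), 1)
        if overlap == 0:
            return 0.0  # unsmoothed: an empty n-gram overlap zeroes the score
        precisions.append(overlap / total)
    # Brevity penalty discourages overly short candidates.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / 4)

print(round(bleu4("the cat sat on the mat", "the cat sat on the mat"), 2))  # → 1.0
```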
5

Dang, Thanh-Vu, JinYoung Kim, Gwang-Hyun Yu, Ji Yong Kim, Young Hwan Park, and ChilWoo Lee. "Korean Text to Gloss: Self-Supervised Learning approach." Korean Institute of Smart Media 12, no. 1 (2023): 32–46. http://dx.doi.org/10.30693/smj.2023.12.1.32.

Abstract:
Natural Language Processing (NLP) has grown tremendously in recent years. Bilingual and multilingual translation models have been deployed widely in machine translation and have gained vast attention from the research community. By contrast, few studies have focused on translating between spoken and sign languages, especially for non-English languages. Prior work on Sign Language Translation (SLT) has shown that a mid-level sign gloss representation enhances translation performance. Therefore, this study presents a new large-scale Korean sign language dataset, the Museum-Commentary Korean Sign Gloss (MCKSG) dataset, comprising 3828 pairs of Korean sentences and their corresponding sign glosses used in museum-commentary contexts. In addition, we propose a translation framework based on self-supervised learning, in which the pretext task is text-to-text translation from a Korean sentence to its back-translated versions; the pre-trained network is then fine-tuned on the MCKSG dataset. Using self-supervised learning helps overcome the shortage of sign language data. In experiments, our proposed model outperforms a baseline BERT model by 6.22%.
6

Korzeniewska, Ewa, Marta Kania, and Rafał Zawiślak. "Textronic Glove Translating Polish Sign Language." Sensors 22, no. 18 (2022): 6788. http://dx.doi.org/10.3390/s22186788.

Abstract:
Communication between people is a basic social skill used to exchange information. It is often used for self-expression and to meet basic human needs, such as the needs for closeness, belonging, and security. This process takes place at different levels, using different means, with specific effects. It generally means a two-way flow of information in the immediate area of contact with another person. When people communicate using the same language, the flow of information is much easier than when two people use different languages from different language families. Social communication with the deaf is similarly difficult. It is therefore essential to use modern technologies to facilitate communication with deaf and non-speaking people. This article presents the results of work on a prototype glove using textronic elements produced by a physical vacuum deposition process. The signal from the sensors, in the form of resistance changes, is read by a microcontroller, then processed and displayed on a smartphone screen as single letters. During the experiment, 520 letters were signed by each author. The signs were interpreted correctly 86.5% of the time, and each letter was recognized within approximately 3 s. One of the main results of the work was also the selection of an appropriate material (Velostat, membrane) that can be used as a sensor in the proposed solution. The proposed glove can enable communication with the deaf using the finger alphabet, which can be used to spell single words or the most important keywords.
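The decoding step described in the abstract can be sketched as thresholding each sensor's resistance change into a bent/straight state and looking the resulting bend pattern up in a letter table. The threshold value and the patterns below are illustrative assumptions, not the paper's calibration data.

```python
# Hypothetical threshold on resistance change (ohms) separating a
# bent finger from a straight one.
BEND_THRESHOLD = 500

# Illustrative bend patterns (thumb..pinky); real fingerspelling
# alphabets need a full calibrated table.
LETTER_PATTERNS = {
    (1, 1, 1, 1, 1): "A",
    (1, 0, 0, 1, 1): "B",
}

def decode(readings: list):
    """Map five sensor readings to a letter, or None if unrecognized."""
    pattern = tuple(int(r > BEND_THRESHOLD) for r in readings)
    return LETTER_PATTERNS.get(pattern)

print(decode([900, 850, 700, 880, 910]))  # → A
```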
7

Sharma, Purushottam, Devesh Tulsian, Chaman Verma, Pratibha Sharma, and Nancy Nancy. "Translating Speech to Indian Sign Language Using Natural Language Processing." Future Internet 14, no. 9 (2022): 253. http://dx.doi.org/10.3390/fi14090253.

Abstract:
Language plays a vital role in communicating ideas, thoughts, and information to others. Hearing-impaired people likewise understand our thoughts using a language known as sign language. Every country has a different sign language based on its native language. In our research paper, the major focus is Indian Sign Language, which is mostly used by the hearing- and speaking-impaired communities in India. While communicating our thoughts and views with others, one of the most essential factors is listening. What if the other party is not able to hear or grasp what you are talking about? This situation is faced by nearly every hearing-impaired person in our society, which led to the idea of an audio-to-Indian Sign Language translation system that can close this gap in communication between hearing-impaired people and society. The system accepts audio and text as input and matches it against the videos in a database created by the authors. If a match is found, it shows the corresponding sign movements based on the grammar rules of Indian Sign Language; if not, the input goes through tokenization and lemmatization. The heart of the system is natural language processing, which equips it with tokenization, parsing, lemmatization, and part-of-speech tagging.
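The match-then-lemmatize flow described above can be sketched as follows: tokenize the input, look each token up in the sign-video database, and retry misses after a lemmatization step. The video names and the toy suffix-stripping lemmatizer are hypothetical stand-ins for the authors' database and NLP stack (which uses a proper lemmatizer and POS tagger).

```python
# Hypothetical sign-video database: lemma -> clip path.
VIDEO_DB = {"go": "isl/go.mp4", "school": "isl/school.mp4"}

def lemmatize(token: str) -> str:
    """Toy suffix-stripping lemmatizer (illustrative only)."""
    for suffix in ("ing", "ed", "es", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 1:
            return token[: -len(suffix)]
    return token

def translate(text: str) -> list:
    plan = []
    for token in text.lower().split():
        # Direct match first, then retry with the lemma; otherwise fingerspell.
        video = VIDEO_DB.get(token) or VIDEO_DB.get(lemmatize(token))
        plan.append(video if video else f"fingerspell:{token}")
    return plan

print(translate("going school"))  # → ['isl/go.mp4', 'isl/school.mp4']
```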
8

Nurgazina, Dana, and Saule Kudubayeva. "Research of semantic aspects of the Kazakh language when translating into the Kazakh sign language." International Journal of Electrical and Computer Engineering (IJECE) 14, no. 4 (2024): 4488. http://dx.doi.org/10.11591/ijece.v14i4.pp4488-4497.

Abstract:
The article discusses the semantic aspects of Kazakh Sign Language and its characteristics. Semantics, a field within linguistics, focuses on examining the meanings conveyed by expressions and combinations of signs. The authors delve into the degree of similarity between verbal and sign languages, highlighting their fundamental distinctions. The primary objective of the research is to scrutinize the characteristics of parts of speech in the Kazakh language when expressed gesturally, along with the principles governing the translation of verbs and adverbial tenses. The article explains in detail the formulas for translating text into sign language, based on subject-object-predicate order. Examples are given that illustrate the subject-object relationship and determine who acts as the speaker: the 'object' or the 'subject' of the utterance. For a successful translation, the meaning of the sentence must first be understood. The article concludes by emphasizing the importance of understanding both structural elements and contextual nuances in the semantics of Kazakh Sign Language, and it inspires further research aimed at uncovering the complexities and exceptions that contribute to a deep understanding of linguistic nuances in this unique form of communication.
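The subject-object-predicate formula the article describes amounts to a reordering step. A hedged sketch, assuming role-tagged input (a real system would obtain the roles from a parser):

```python
def to_gloss(tagged: list) -> list:
    """Reorder (word, role) pairs into subject-object-predicate order.

    `tagged` is a list of (word, role) tuples with roles "subj",
    "obj", "pred" -- an illustrative input format, not the article's.
    """
    order = {"subj": 0, "obj": 1, "pred": 2}
    # sorted() is stable, so words with the same role keep their order.
    return [w for w, role in sorted(tagged, key=lambda t: order.get(t[1], 1))]

# "The child reads a book" with roles marked:
print(to_gloss([("child", "subj"), ("reads", "pred"), ("book", "obj")]))
# → ['child', 'book', 'reads']
```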
9

Wurm, Svenja. "From writing to sign." Signed Language Interpreting and Translation 13, no. 1 (2018): 130–49. http://dx.doi.org/10.1075/tis.00008.wur.

Abstract:
This article investigates the roles that text modalities play in translation from written text into recorded signed language. While written literacy practices have a long history, practices involving recorded signed texts are only beginning to develop. In addition, the specific characteristics of source and target modes offer different potentials and limitations, causing challenges for translation between written and signed language. Drawing on an ideological model of literacy and a social-semiotic multimodality approach, this article presents findings of a qualitative case study analyzing one practitioner’s strategies translating an academic text from written English into British Sign Language. Data generated through interviews and text analysis reveal an event influenced by the affordances of the media and the translator’s consideration of source and target literacy practices.
10

Gonzalez, Hernando, Silvia Hernández, and Oscar Calderón. "Design of a Sign Language-to-Natural Language Translator Using Artificial Intelligence." International Journal of Online and Biomedical Engineering (iJOE) 20, no. 03 (2024): 89–98. http://dx.doi.org/10.3991/ijoe.v20i03.46765.

Abstract:
This paper describes the results obtained from the design and validation of translation gloves for Colombian sign language (LSC) to natural language. The MPU6050 sensors capture finger movements, and the TCA9548a card enables data multiplexing. Additionally, an Arduino Uno board preprocesses the data, and the Raspberry Pi interprets it using central tendency statistics, principal component analysis (PCA), and a neural network structure for pattern recognition. Finally, the sign is reproduced in audio format. The methodology developed below focuses on translating specific preselected words, achieving an average classification accuracy of 88.97%.

Dissertations / Theses on the topic "Sign language – Translating"

1

Zhou, Mingjie. "Deep networks for sign language video caption." HKBU Institutional Repository, 2020. https://repository.hkbu.edu.hk/etd_oa/848.

Abstract:
In the hearing-loss community, sign language is the primary communication tool, yet a communication gap remains between deaf people and hearing people. Sign language differs from spoken language: it has its own vocabulary and grammar. Recent works concentrate on sign language video captioning, which consists of sign language recognition and sign language translation. Continuous sign language recognition, which can bridge the communication gap, is a challenging task because of the weakly supervised ordered annotations, where no frame-level label is provided. To overcome this problem, connectionist temporal classification (CTC) is the most widely used method. However, CTC learning can perform badly if the extracted features are not good. For better feature extraction, this thesis presents novel self-attention-based fully-inception (SAFI) networks for vision-based end-to-end continuous sign language recognition. Considering that the lengths of sign words differ from each other, we introduce a fully inception network with different receptive fields to extract dynamic clip-level features. To further boost performance, the fully inception network is trained with an auxiliary classifier using aggregation cross-entropy (ACE) loss. The encoder of a self-attention network is then used as the global sequential feature extractor to model the clip-level features with CTC. The proposed model is optimized by jointly training with ACE on clip-level feature learning and CTC on global sequential feature learning in an end-to-end fashion. The best baseline method achieves 35.6% WER on the validation set and 34.5% WER on the test set; it employs a better decoding algorithm for generating pseudo-labels to perform EM-like optimization and fine-tune the CNN module. In contrast, our approach focuses on better feature extraction for end-to-end learning.
To alleviate overfitting on the limited dataset, we employ temporal elastic deformation to triple the real-world dataset RWTH-PHOENIX-Weather 2014. Experimental results on this dataset demonstrate the effectiveness of our approach, which achieves 31.7% WER on the validation set and 31.2% WER on the test set. Even though sign language recognition can, to some extent, help bridge the communication gap, its output still follows sign language grammar, which differs from spoken language. Unlike sign language recognition, which recognizes sign gestures, sign language translation (SLT) converts sign language to text in a target spoken language that hearing people commonly use in their daily lives. To achieve this goal, this thesis provides an effective sign language translation approach that attains state-of-the-art performance on the largest real-life German sign language translation database, RWTH-PHOENIX-Weather 2014T. Besides, a direct end-to-end sign language translation approach gives promising results (an impressive gain from 9.94 to 13.75 BLEU on the validation set and from 9.58 to 14.07 BLEU on the test set) without intermediate recognition annotations. The comparative and promising experimental results show the feasibility of direct end-to-end SLT.
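The word error rate (WER) figures reported above (e.g. 31.7%/31.2%) are word-level edit distance between hypothesis and reference, divided by the reference length. A minimal sketch:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words, one row at a time.
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (r != h)))  # substitution (0 if equal)
        prev = cur
    return prev[-1] / len(ref)

print(wer("heavy rain in the north", "heavy rain in north"))  # → 0.2
```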
2

Welgemoed, Johan. "A prototype system for machine translation from English to South African Sign Language using synchronous tree adjoining grammars." Thesis, Stellenbosch : Stellenbosch University, 2007. http://hdl.handle.net/10019.1/19892.

Abstract:
Thesis (MSc)--University of Stellenbosch, 2007.

ENGLISH ABSTRACT: Machine translation, especially machine translation for sign languages, remains an active research area. Sign language machine translation presents unique challenges to the whole machine translation process. In this thesis a prototype machine translation system is presented. This system is designed to translate English text into a gloss based representation of South African Sign Language (SASL). In order to perform the machine translation, a transfer based approach was taken. English text is parsed into an intermediate representation. Translation rules are then applied to this intermediate representation to transform it into an equivalent intermediate representation for the SASL glosses. For both these intermediate representations, a tree adjoining grammar (TAG) formalism is used. As part of the prototype machine translation system, a TAG parser was implemented. The translation rules used by the system were derived from a SASL phrase book. This phrase book was also used to create a small gloss based SASL TAG grammar. Lastly, some additional tools, for the editing of TAG trees, were also added to the prototype system.

AFRIKAANSE OPSOMMING: Masjienvertaling, veral masjienvertaling vir gebaretale, bly ’n aktiewe navorsingsgebied. Masjienvertaling vir gebaretale bied unieke uitdagings tot die hele masjienvertalingproses. In hierdie tesis bied ons ’n prototipe masjienvertalingstelsel aan. Hierdie stelsel is ontwerp om Engelse teks te vertaal na ’n glos gebaseerde voorstelling van Suid-Afrikaanse Gebaretaal (SAG). Ons vertalingstelsel maak gebruik van ’n oorplasingsbenadering tot masjienvertaling. Engelse teks word ontleed na ’n intermediêre vorm. Vertalingreëls word toegepas op hierdie intermediêre vorm om dit te transformeer na ’n ekwivalente intermediêre vorm vir die SAG glosse. Vir beide hierdie intermediêre vorms word boomkoppelingsgrammatikas (BKGs) gebruik.
As deel van die prototipe masjienvertalingstelsel, is ’n BKG sintaksontleder geïmplementeer. Die vertalingreëls wat gebruik word deur die stelsel, is afgelei vanaf ’n SAG fraseboek. Hierdie fraseboek was ook gebruik om ’n klein BKG vir SAG glosse te ontwikkel. Laastens was addisionele nutsfasiliteite, vir die redigering van BKG bome, ontwikkel.
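The transfer step this thesis describes maps an English parse to SASL glosses via translation rules derived from a phrase book. A synchronous TAG pairs elementary trees in both grammars; as a much-simplified, hedged sketch, a rule here just maps a source phrase pattern to a gloss sequence, with both the rule and the glosses as illustrative assumptions.

```python
# Hypothetical phrase-book-derived transfer rules:
# English word pattern -> SASL gloss sequence (illustrative only).
RULES = {
    ("what", "is", "your", "name"): ["YOUR", "NAME", "WHAT"],
}

def transfer(words: list) -> list:
    key = tuple(w.lower() for w in words)
    if key in RULES:
        return RULES[key]
    # Fallback: gloss word by word in source order.
    return [w.upper() for w in words]

print(transfer(["What", "is", "your", "name"]))  # → ['YOUR', 'NAME', 'WHAT']
```

A real STAG transfer operates on parse trees rather than flat word tuples, so rules compose recursively instead of matching whole sentences.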
3

De, Villiers Hendrik Adrianus Cornelis. "A vision-based South African sign language tutor." Thesis, Stellenbosch : Stellenbosch University, 2014. http://hdl.handle.net/10019.1/86333.

Abstract:
Thesis (PhD)--Stellenbosch University, 2014.

ENGLISH ABSTRACT: A sign language tutoring system capable of generating detailed context-sensitive feedback to the user is presented in this dissertation. This stands in contrast with existing sign language tutor systems, which lack the capability of providing such feedback. A domain specific language is used to describe the constraints placed on the user’s movements during the course of a sign, allowing complex constraints to be built through the combination of simpler constraints. This same linguistic description is then used to evaluate the user’s movements, and to generate corrective natural language feedback. The feedback is dynamically tailored to the user’s attempt, and automatically targets that correction which would require the least effort on the part of the user. Furthermore, a procedure is introduced which allows feedback to take the form of a simple to-do list, despite the potential complexity of the logical constraints describing the sign. The system is demonstrated using real video sequences of South African Sign Language signs, exploring the different kinds of advice the system can produce, as well as the accuracy of the comments produced. To provide input for the tutor system, the user wears a pair of coloured gloves, and a video of their attempt is recorded. A vision-based hand pose estimation system is proposed which uses the Earth Mover’s Distance to obtain hand pose estimates from images of the user’s hands. A two-tier search strategy is employed, first obtaining nearest neighbours using a simple, but related, metric. It is demonstrated that the two-tier system’s accuracy approaches that of a global search using only the Earth Mover’s Distance, yet requires only a fraction of the time.
The system is shown to outperform a closely related system on a set of 500 real images of gloved hands.

AFRIKAANSE OPSOMMING: ’n Gebaretaaltutorstelsel met die vermoë om konteks-sensitiewe terugvoer te lewer aan die gebruiker word uiteengesit in hierdie proefskrif. Hierdie staan in kontras met bestaande tutorstelsels, wat nie hierdie kan bied vir die gebruiker nie. ’n Domein-spesifieke taal word gebruik om beperkinge te definieer op die gebruiker se bewegings deur die loop van ’n gebaar. Komplekse beperkinge kan opgebou word uit eenvoudiger beperkinge. Dieselfde linguistieke beskrywing van die gebaar word gebruik om die gebruiker se bewegings te evalueer, en om korrektiewe terugvoer te genereer in teksvorm. Die terugvoer word dinamies aangepas met betrekking tot die gebruiker se probeerslag, en bepaal outomaties die maklikste manier wat die gebruiker sy/haar fout kan korrigeer. ’n Prosedure word uiteengesit om die terugvoer in ’n eenvoudige lysvorm aan te bied, ongeag die kompleksiteit van die linguistieke beskrywing van die gebaar. Die stelsel word gedemonstreer aan die hand van opnames van gebare uit Suid-Afrikaanse Gebaretaal. Die verskeie tipes terugvoer wat die stelsel kan lewer, asook die akkuraatheid van hierdie terugvoer, word ondersoek. Om vir die tutorstelsel intree te bied, dra die gebruiker ’n stel gekleurde handskoene. ’n Visie-gebaseerde handvormafskattingstelsel wat gebruik maak van die Aardverskuiwersafstand (Earth Mover’s Distance) word voorgestel. ’n Twee-vlak soekstrategie word gebruik. ’n Rowwe afstandsmate word gebruik om ’n stel voorlopige handpostuurkandidate te verkry, waarna die stel verfyn word deur gebruik van die Aardverskuiwersafstand. Dit word gewys dat hierdie benaderde strategie se akkuraatheid grens aan die van eksakte soektogte, maar neem slegs ’n fraksie van die tyd. Toetsing op ’n stel van 500 reële beelde, wys dat hierdie stelsel beter presteer as ’n naverwante stelsel uit die literatuur.
4

Combrink, Andries J. "A preprocessor for an English-to-Sign Language Machine Translation system." Thesis, Stellenbosch : University of Stellenbosch, 2005. http://hdl.handle.net/10019.1/2832.

Abstract:
Thesis (MSc (Computer Science))--University of Stellenbosch, 2005.

Sign Languages, such as South African Sign Language, are proper natural languages; they have their own vocabularies, and they make use of their own grammar rules. However, machine translation from a spoken to a signed language creates interesting challenges. These problems result from the differences in character between spoken and signed languages. Sign Languages are classified as visual-spatial languages: a signer makes use of the space around him, and gives visual clues from body language, facial expressions and sign movements to help him communicate. It is the absence of these elements in the written form of a spoken language that causes the contextual ambiguities during machine translation. The work described in this thesis is aimed at resolving the ambiguities caused by a translation from written English to South African Sign Language. We designed and implemented a preprocessor that uses areas of linguistics such as anaphora resolution and a data structure called a scene graph to help with the spatial aspect of the translation. The preprocessor also makes use of semantic and syntactic analysis, together with the help of a semantic relational database, to find emotional context from text. This analysis is then used to suggest body language, facial expressions and sign movement attributes, helping us to address the visual aspect of the translation. The results show that the system is flexible enough to be used with different types of text, and will overall improve the quality of a machine translation from English into a Sign Language.
5

Sinander, Pierre, and Tomas Issa. "Sign Language Translation." Thesis, KTH, Mekatronik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-296169.

Abstract:
The purpose of the thesis was to create a data glove that can translate ASL by reading finger and hand movements. Furthermore, the applicability of conductive fabric as stretch sensors was explored. To read the hand gestures, stretch sensors constructed from conductive fabric were attached to each finger of the glove to distinguish how much they were bent. The hand movements were registered using a 3-axis accelerometer mounted on the glove. The sensor values were read by an Arduino Nano 33 IoT mounted at the wrist of the glove, which processed the readings and translated them into the corresponding sign. The microcontroller would then wirelessly transmit the result to another device through Bluetooth Low Energy. The glove was able to correctly translate all the signs of the ASL alphabet with an average accuracy of 93%. It was found that signs with small differences in hand gestures, such as S and T, were harder to distinguish between, resulting in an accuracy of 70% for these specific signs.

Syftet med uppsatsen var att skapa en datahandske som kan översätta ASL genom att läsa av finger- och handrörelser. Vidare undersöktes om ledande tyg kan användas som sträcksensorer. För att läsa av handgesterna fästes ledande tyg på varje finger på handsken för att urskilja hur mycket de böjdes. Handrörelserna registrerades med en 3-axlig accelerometer som var monterad på handsken. Sensorvärdena lästes av en Arduino Nano 33 IoT monterad på handleden som översatte till de motsvarande tecknen. Mikrokontrollern överförde sedan resultatet trådlöst till en annan enhet via Bluetooth Low Energy. Handsken kunde korrekt översätta alla tecken på ASL-alfabetet med en genomsnittlig exakthet på 93%. Det visade sig att tecken med små skillnader i handgester som S och T var svårare att skilja mellan vilket resulterade i en noggrannhet på 70% för dessa specifika tecken.
6

Haseeb, Ahmed Abdul, and Asim Ilyas. "Speech Translation into Pakistan Sign Language." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-5095.

Abstract:
Context: Communication is a primary human need, and language is its medium. Most people can listen and speak, and they use languages such as Swedish, Urdu, and English to communicate; hearing-impaired people use signs. Pakistan Sign Language (PSL) is the preferred language of the deaf in Pakistan. Currently, human PSL interpreters are required to facilitate communication between deaf and hearing people, but they are not always available, which means that communication may be impaired or nonexistent. In this situation, a system with voice recognition as input and PSL as output would be highly helpful. Objectives: As part of this thesis, we explore the challenges deaf people face in everyday life while interacting with the hearing, and we survey the state of the art in this area. The study explores speech recognition and machine translation techniques to devise a generic, automated system that converts English speech to PSL. A prototype of the proposed solution is developed and validated. Methods: A three-step investigation was carried out. First, to understand the problem itself, interviews were conducted with domain experts. Secondly, a literature review investigated whether any similar or related work had already been done, analyzing state-of-the-art technologies such as machine translation, speech recognition engines, and natural language processing. Thirdly, a prototype was developed and validated by ourselves as well as by domain experts. Results: There is a large communication gap between deaf and hearing people in Pakistan, mainly due to the lack of an automated system that can convert audio speech to PSL and vice versa.
After investigating state-of-the-art work, including solutions in other countries specific to their languages, we found that no generic, automated system exists. Work has already started on PSL-to-English-speech conversion, but not the other way around. As part of this thesis, we discovered that a generic and automated system can be devised using speech recognition and machine translation techniques. Conclusion: Deaf people in Pakistan lack many opportunities, mainly due to the communication gap between deaf and hearing people. We establish that there should be a generic, automated system that can convert English speech to PSL and vice versa, and we worked toward such a system for English speech to PSL. Speech recognition, machine translation, and natural language processing techniques are the core ingredients of such a system. Using a user-centric approach, the prototype was validated iteratively with domain experts.

This research investigated a computer-based solution to facilitate communication between deaf and hearing people. The investigation was performed through a literature review and visits to institutes to gain deeper knowledge about sign language and specifically how it is used in the Pakistani context. Secondly, the challenges deaf people face when interacting with the hearing were analyzed through interviews with domain experts (instructors at deaf institutes) and by directly observing deaf people in everyday situations. We conclude that deaf people rely on sign language to communicate with hearing people. Deaf people in Pakistan use PSL; English is taught as a secondary language in educational institutes all over Pakistan; and the deaf are taught by instructors who need not only expertise in the subject they teach, such as math, history, or science, but also a thorough command of PSL.
It is very difficult for deaf institutes to find instructors who know both. Whenever deaf people need to communicate with unimpaired people, they must either hire a translator or ask the unimpaired to write everything down for them. Translators are hard to find at short notice and are expensive as well. Moreover, writing everything down is a very slow process, and not all unimpaired people are willing to do it. We observed this phenomenon ourselves, as instructors at the institutes gave us the opportunity to work with deaf people to understand their feelings and the challenges they face in everyday life. We accompanied deaf people to shopping malls, banks, post offices, etc., and, with their permission, observed their interactions. We concluded that their interactions with hearing people can be very slow and embarrassing. Based on the above findings, we concluded that there is definitely a need for an automated system that can facilitate communication between deaf and unimpaired people. These factors led to the subsequent objective of this research. The main objective of this thesis is to identify a generic and automated system, requiring no human intervention, that converts English speech into PSL as a solution to bridge the communication gap between the deaf and the unimpaired. We identified that existing work in this problem area does not fulfill our objective. Current solutions are either very specific to a domain, e.g., a post office, or need human intervention, i.e., they are not automatic. We identified that none of the existing systems can be extended towards our desired solution. We explored state-of-the-art techniques such as machine translation, speech recognition, and NLP, and utilized them in our proposed solution. A prototype of the proposed solution was developed, and its functional and non-functional validation was performed.
Since none of the existing work exactly matches our problem statement, we have not compared the validation of our prototype to any existing system. We validated the prototype with respect to our problem domain. Moreover, it was validated iteratively with domain experts, i.e., experts in PSL and English-to-PSL human translators. We found this user-centric approach very useful for better understanding the problem at the ground level, keeping our work user-focused, and assessing user satisfaction throughout the process. This work has opened a new world of opportunities in which the deaf can communicate with others who have no knowledge of PSL. If this system is developed further from a prototype into a functioning system, deaf institutes will have a wider scope for choosing the best instructors for a given domain, even instructors without PSL expertise. Deaf people will have more opportunities to interact with other members of society at every level, as communication is the basic pillar of such interaction. Automatic speech-to-sign-language translation is an attractive prospect, and its prospective applications are worthwhile. We hope that this thesis will be an important addition to ongoing research in the field of Human-Computer Interaction (HCI).<br>Ahmed Abdul Haseeb & Asim Ilyas, Islamabad, Pakistan
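The pipeline this thesis proposes, a speech recognizer feeding a translation stage whose output is rendered as signs, can be illustrated with a toy stage chain. Everything here is an invented placeholder for illustration: the stopword list, the gloss rules, and the sign inventory are not real PSL, and the recognized sentence stands in for a speech recognizer's output.

```python
# Toy illustration of the proposed pipeline: ASR output -> gloss translation -> sign lookup.
# The gloss rules and sign inventory below are invented placeholders, not real PSL.
SIGN_LEXICON = {"HELLO": "hello.mp4", "YOU": "you.mp4", "NAME": "name.mp4", "WHAT": "what.mp4"}

STOPWORDS = {"is", "the", "a", "your"}

def english_to_gloss(text):
    """Drop function words and uppercase content words: a crude gloss stand-in."""
    return [w.upper() for w in text.lower().split() if w not in STOPWORDS]

def gloss_to_signs(gloss):
    """Map each gloss token to a pre-recorded sign clip, falling back to fingerspelling."""
    return [SIGN_LEXICON.get(g, "<FINGERSPELL:%s>" % g) for g in gloss]

recognized = "what is your name"          # stand-in for a speech recognizer's output
signs = gloss_to_signs(english_to_gloss(recognized))
print(signs)  # ['what.mp4', 'name.mp4']
```

A real system would replace the first stage with an actual speech recognition engine and the second with a proper machine translation component, as the thesis discusses; the sketch only shows how the stages compose.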
APA, Harvard, Vancouver, ISO, and other styles
7

Janakiraman, Laxmipreethi. "Deep Directive Attention Network(DDAN) based Sign Language Translation." Thesis, The University of Sydney, 2021. https://hdl.handle.net/2123/26581.

Full text
Abstract:
Sign language is a visual language. It is an effective means of communication for the hearing- and speech-impaired community. In general, all visual languages are multi-modal: they utilize hand gestures, facial expressions and other non-manual features to capture the linguistics of the language while communicating with others. In recent years, due to advances in computer vision and NLP, Sign Language Recognition (SLR) and Sign Language Translation (SLT) have attracted many researchers seeking an effective way to translate sign language videos into spoken language sentences. Over the past decade many approaches have been published, but most of them treated SLR as a mere gesture recognition problem without considering the linguistic structure. In the literature review, we dive deep into the various sensor- and vision-based approaches used in the earlier days, followed by the deep learning techniques that offer state-of-the-art results today. Applying a mid-level sign Gloss representation is a key component of successful SLT. Hence, effective joint learning of the mid-level sign Gloss-to-Text translation is crucial to improve performance. In this dissertation, we propose a Deep Directive Attention Network (DDAN)-based sign translation framework that aligns key tokens in sign Gloss with key words in Text. A directive attention transformer is used in this approach to model better inter- and intra-modal relationships between Gloss sequences and Text sentences, which aids higher translation accuracy from Sign videos to Text sentences. The proposed DDAN contains the Self-Attention (SA) of each sign Gloss and Text, as well as the Gloss Directive-Attention (DA) of Text. These two attention units, SA and DA, can be placed and integrated in three different proposed DDAN variants: DA, SDA and SSDA.
We evaluate the translation performance of our Sign2(Gloss+Text) and Gloss2Text approaches on two challenging benchmark datasets, PHOENIX-Weather 2014T and ASLG-PC12. The data statistics were analyzed as a first step. Then, the three model variants were evaluated on the above-mentioned datasets. The SSDA variant outperformed the baseline models on both datasets, with higher translation accuracy from Sign videos to Text sentences as well as from Gloss sequences to Text sentences. Furthermore, we evaluated various numbers of encoder and decoder layers to find the optimal layer count at which the model outperforms the baselines. The hyper-parameter testing results show the robustness of the proposed framework. In addition to the quantitative analysis, we also provide qualitative results showing that the generated text sentences have translation precision close to the gold-standard text, along with evident improvement in morpho-syntax. Based on all the evaluations and analyses, we demonstrate that our DDAN-based SLT framework outperforms the state-of-the-art SLT models and achieves higher translation accuracy scores.
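The contrast between self-attention (a modality attending to itself) and directive attention (text queries keying into gloss representations) can be sketched with a single-head scaled dot-product attention; the function names, dimensions, and random features below are our assumptions for illustration, not the thesis code.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_attention(q, k, v):
    """Single-head scaled dot-product attention: output is a weighted sum of v."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)          # (T_q, T_k) alignment scores
    return softmax(scores, axis=-1) @ v    # (T_q, d)

rng = np.random.default_rng(0)
text = rng.normal(size=(5, 16))    # 5 text tokens, 16-dim features
gloss = rng.normal(size=(7, 16))   # 7 gloss tokens, 16-dim features

self_att = scaled_dot_attention(text, text, text)      # SA: text attends to itself
directive = scaled_dot_attention(text, gloss, gloss)   # DA: text queries over gloss
print(self_att.shape, directive.shape)  # (5, 16) (5, 16)
```

In the DDAN variants described above, such SA and DA units would be stacked and combined inside a transformer encoder-decoder; the sketch only isolates the attention direction that distinguishes them.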
APA, Harvard, Vancouver, ISO, and other styles
8

Pinheiro, Marcus Weydson. "Tradução como ferramenta de compreensão da língua portuguesa no curso de letras libras da Universidade Federal do Ceará." Universidade Federal do Ceará, 2018. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=20206.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior<br>Este estudo, inserido no âmbito dos Estudos da Tradução, ratifica o reconhecimento da Língua Brasileira de Sinais (Libras), graças à sua inclusão no currículo das instituições de Ensino Superior brasileiras. O objetivo principal deste trabalho é identificar e apontar a função da tradução em Libras como estratégia de compreensão e interpretação do gênero textual artigo científico em LP na disciplina "Psicologia e Educação de Surdos", pertencente ao eixo Fundamentos da Educação de Surdos, ministrada no Curso de Licenciatura em Letras Libras da Universidade Federal do Ceará (UFC). Com um olhar sobre os discentes surdos, esta pesquisa pretende averiguar se e de que forma a tradução de textos científicos na direção LP/Libras serve como instrumento para se alcançarem os níveis de compreensão necessários à realização das atividades que exigem leitura, tradução e produção textual/discursiva com o gênero textual artigo científico escrito originalmente em LP. Para chegar ao objetivo precípuo do trabalho aqui apresentado, esta investigação baseou-se metodologicamente em uma pesquisa-ação (VIEIRA, 2009), partindo, por um lado, de análises de fatores extratextuais e intratextuais de um texto-fonte (TF) em LP e do texto-alvo correspondente (TA) em Libras. Essas análises foram realizadas por 6 (seis) discentes surdos regularmente matriculados na disciplina "Psicologia e Educação de Surdos", que antes foram instruídos sobre os fundamentos teóricos propostos por Nord (2016) no domínio da Abordagem Funcionalista da Tradução.
Por outro lado, como parte da metodologia, foram aplicados questionários aos mesmos 6 (seis) discentes surdos e também à docente ministradora da disciplina "Psicologia e Educação de Surdos". Como suporte teórico no campo tradutório, fundamentamos esta pesquisa mediante a Teoria de Escopo, concebida por Reiss & Vermeer (1996), e a Abordagem Funcionalista da Tradução, com nossa atenção voltada especificamente para os trabalhos de Nord (2012; 2014; 2016). Nossa preocupação parte de uma de nossas hipóteses: a necessidade de preparação dos professores que ministram a disciplina "Psicologia e Educação de Surdos" do Curso de Letras Libras da UFC em questões relativas ao uso da tradução como estratégia para que os alunos surdos possam alcançar melhores objetivos de aprendizagem, ao serem confrontados com textos científicos escritos em LP. Dada a complexidade do gênero textual artigo científico (p. ex.: linguagem técnica e/ou científica, registro de português formal, termos específicos etc.), os docentes precisam apresentar, aos discentes, traduções dos textos em Libras, fornecidas no formato de DVD. Portanto, vislumbramos em nossas hipóteses que os discentes envolvidos normalmente utilizam a LP como segunda língua, carecendo, assim, de conhecimentos aprofundados que lhes permitam compreender com facilidade textos científicos redigidos nessa língua. Também lhes faltam conhecimentos e experiências com análises textuais que levem em consideração diferentes fatores intratextuais e extratextuais como aqueles propostos por Nord (2016). Entendemos essa necessidade como um desafio atual para o Curso de Licenciatura em Libras da UFC. Além disso, compreendemos que, se forem adotadas medidas que conduzam a um entendimento prático desses exercícios, auxiliadas por teorias oriundas do campo dos Estudos da Tradução, os futuros profissionais surdos licenciados em Libras na UFC terão uma formação mais completa e um melhor domínio da compreensão leitora em LP.
Isto também certamente se refletirá no desempenho de sua futura função de educadores, em que deverão estar empenhados na inclusão de crianças e jovens surdos na sociedade em geral, mas sem deixar de lado os fatores próprios da Cultura Surda. De maneira geral, os resultados por nós obtidos apontam que o grupo-alvo participante da pesquisa-ação não está familiarizado com um modelo de análise textual do TF e do TA nos moldes daquele fornecido pela Abordagem Funcionalista de Nord (2016), o que certamente dificulta sua compreensão dos conteúdos apresentados no texto original em português e na tradução em Libras. De maneira específica, podemos concluir, dentre outras coisas, que os alunos surdos examinados/entrevistados: a) precisam se familiarizar mais com as estruturas sintáticas e o vocabulário em LP; b) precisam conscientizar-se das peculiaridades de sua língua natural/materna perante o Português como segunda língua (PSL); c) necessitam fazer leituras mais atentas de textos científicos; d) carecem de conhecimentos mais profundos sobre conceitos típicos da Linguística Textual; e) declaram aumentar seu conhecimento de terminologias especializadas em Libras através dos textos científicos traduzidos da LP para Libras; f) afirmam a importância da escola inclusiva/bilíngue para surdos como preparação para o Ensino Superior; g) lançam mão de diferentes mídias eletrônicas para obterem uma melhor compreensão leitora de textos científicos; h) não se consideram, em sua maioria, capazes de traduzir textos científicos da LP para Libras; i) reconhecem que a tradução é uma ferramenta que traz vantagens para a compreensão leitora.<br>This study, which is embedded in the field of Translation Studies, corroborates the recognition of the Brazilian Sign Language, the so-called Libras, thanks to its inclusion in the Brazilian curricula of Higher Education institutions.
The main objective of this research is to identify and point out the function of translation into Libras as a strategy for understanding and interpreting the text type scientific article in Portuguese in the subject "Psychology and Education of Deaf People", within the framework of the discipline Fundamentals of Education, part of the Libras Undergraduate Course curriculum (Teacher Training Program) at the Federal University of Ceará (UFC). With special attention given to deaf students, this research intends to investigate if and how the translation of scientific texts in the Portuguese-to-Libras direction serves as an instrument to reach the levels of comprehension necessary to carry out the activities that require reading, translation and text/discourse production with the specific text type scientific article originally written in Portuguese. To reach the main objective of this work, this research was methodologically based on action-research (VIEIRA, 2009), starting with, on the one hand, analyses of extratextual and intratextual factors of a source text (ST) in Portuguese and the corresponding target text (TT) in Libras. These analyses were carried out by six (6) deaf students regularly enrolled in the discipline "Psychology and Education of Deaf People" at UFC, who were previously instructed on the theoretical foundations proposed by Nord (2016) in the field of the Functionalist Approach to Translation. On the other hand, as part of the methodology, questionnaires were applied to the same six (6) deaf students, and also to the teacher of the subject "Psychology and Education of Deaf People". As theoretical support in the field of translation, this research draws on the fundamental principles of Skopos Theory, conceived by Reiss & Vermeer (1996), and of the Functionalist Approach to Translation, with our attention focused specifically on Nord's works (2012; 2014; 2016).
Our concern is based on one of our hypotheses: the need to prepare the teachers of the discipline "Psychology and Education of Deaf People" in the UFC Libras Undergraduate Course on questions related to the use of translation as a strategy that allows deaf students to achieve better learning outcomes when confronted with scientific articles written in Portuguese. Because of the complexity of the text type scientific article (e.g. technical and/or scientific language, formal Portuguese register, specific terms, etc.), teachers need to present to the students translations of the texts into Libras, exhibited in DVD format. Thus, in our hypotheses we envisage that the students involved usually use Portuguese as a second language, and therefore lack the in-depth knowledge that could enable them to easily understand scientific articles written in that language. They also lack knowledge of and experience with textual analyses that take into account the different intratextual and extratextual factors such as those proposed by Nord (2016). We understand this need as a current challenge for the UFC Libras Undergraduate Course; we also think that, if measures are adopted leading to a practical understanding of these exercises, aided by theories from the field of Translation Studies, future deaf professionals graduating from the UFC Libras Undergraduate Course will have a more complete training and a better command of reading comprehension in Portuguese.
In general, our results indicate that the target group participating in the action-research is not familiar with a textual analysis model covering both ST and TT, such as that provided by Nord's Functionalist Approach to Translation (2016); this certainly makes it difficult for deaf students to understand the contents presented in the original Portuguese texts and in the corresponding translation into Libras. Specifically, we can conclude, among other things, that the deaf students we examined/interviewed during our action-research: a) need to become better acquainted with Brazilian Portuguese syntactic structures and vocabulary; b) need to be aware of the peculiarities of their natural/mother tongue as compared to Brazilian Portuguese as a second language; c) need to read scientific articles more carefully; d) lack deeper knowledge of typical concepts used in Text Linguistics; e) declare that they increase their knowledge of specialized terminology in Libras through the scientific articles translated from Portuguese into Libras; f) affirm the importance of inclusive/bilingual schooling for deaf people as preparation for Higher Education; g) use different electronic media to obtain better reading comprehension of scientific articles; h) for the most part do not consider themselves capable of translating scientific articles from Portuguese into Libras; i) recognize that translation is a tool that really brings advantages to reading comprehension.
APA, Harvard, Vancouver, ISO, and other styles
9

Chapman, Robbin Nicole 1958. "A lexicon for translation of American Sign Language to English." Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/80082.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, June 1999.<br>Includes bibliographical references (leaves 130-132).<br>by Robbin Nicole Chapman.<br>S.M.
APA, Harvard, Vancouver, ISO, and other styles
10

Almohimeed, Abdulaziz. "Arabic text to Arabic sign language example-based translation system." Thesis, University of Southampton, 2012. https://eprints.soton.ac.uk/345562/.

Full text
Abstract:
This dissertation presents the first corpus-based system for translation from Arabic text into Arabic Sign Language (ArSL) for the deaf and hearing impaired, for whom it can facilitate access to conventional media and allow communication with hearing people. In addition to the familiar technical problems of text-to-text machine translation, building a system for sign language translation requires overcoming some additional challenges. First, the lack of a standard writing system requires building a parallel text-to-sign-language corpus from scratch, as well as computational tools to prepare this parallel corpus. Further, the corpus must facilitate output in visual form, which is clearly far more difficult than producing textual output. The time and effort involved in building such a parallel corpus of text and visual signs from scratch mean that we will inevitably be working with quite small corpora. We have constructed two parallel Arabic text-to-ArSL corpora for our system. The first was built from school-level language instruction material and contains 203 signed sentences and 710 signs. The second was constructed from a children's story and contains 813 signed sentences and 2,478 signs. Working with corpora of limited size means that coverage is a huge issue. A new technique was derived to exploit Arabic morphological information to increase coverage and hence translation accuracy. Further, we employ two different example-based translation methods and combine them to produce more accurate translation output. We have chosen to use concatenated sign video clips as output rather than a signing avatar, both for simplicity and because this allows us to distinguish more easily between translation errors and sign synthesis errors. Using leave-one-out cross-validation on our first corpus, the system produced translated sign sentence outputs with an average word error rate of 36.2% and an average position-independent error rate of 26.9%.
The corresponding figures for our second corpus were an average word error rate of 44.0% and an average position-independent error rate of 28.1%. The most frequent source of errors is missing signs in the corpus; this could be addressed in the future by collecting more corpus material. Finally, it is not possible to compare the performance of our system with any competing Arabic text-to-ArSL machine translation system, since no such systems exist at present.
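The two metrics this abstract reports, word error rate (WER) and position-independent error rate (PER), have standard definitions: WER is the word-level Levenshtein distance normalized by the reference length, while PER ignores word order and compares bags of words. As a hedged illustration (this is the common formulation, not the author's code), they can be computed as:

```python
from collections import Counter

def word_error_rate(reference, hypothesis):
    """Word-level Levenshtein distance normalized by reference length."""
    r, h = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between r[:i] and h[:j]
    dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        dp[i][0] = i
    for j in range(len(h) + 1):
        dp[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(r)][len(h)] / max(len(r), 1)

def position_independent_error_rate(reference, hypothesis):
    """PER ignores word order: count errors between the two bags of words."""
    r, h = Counter(reference.split()), Counter(hypothesis.split())
    correct = sum((r & h).values())           # multiset intersection
    n_ref, n_hyp = sum(r.values()), sum(h.values())
    return (max(n_ref, n_hyp) - correct) / max(n_ref, 1)

print(word_error_rate("a b c", "c b a"))                   # 0.666... (two substitutions)
print(position_independent_error_rate("a b c", "c b a"))   # 0.0 (same bag of words)
```

PER is always at most WER, which is why the reported PER figures (26.9%, 28.1%) sit below the corresponding WER figures (36.2%, 44.0%): sign order mismatches penalize WER but not PER.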
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Sign language – Translating"

1

Ozolins, Uldis. Sign language interpreting in Australia. Language Australia, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Villareal, Corazon D. Translating the Sugilanon: Re-framing the sign. University of the Philippines Press, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Napier, Jemina. Sign language interpreting: Linguistic coping strategies. Douglas McLean, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Napier, Jemina. Sign language interpreting: Linguistic coping strategies. D. McLean, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Stewart, David Alan. Sign language interpreting: Exploring its art and science. Allyn and Bacon, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Napier, Jemina. Sign language interpreting: Theory and practice in Australia and New Zealand. 2nd ed. Federation Press, 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Fontana, Sabina. Tradurre lingue dei segni: Un'analisi multidimensionale. Mucchi editore, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Grbić, Nadja. "Ich habe mich ganz peinlich gefühlt.": Forschung zum Kommunaldolmetschen in Österreich : Problemstellungen, Perspektiven und Potenziale. Institut für Translationswissenschaft, Karl-Franzens-Universität, 2006.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Cartwright, Brenda E. Multiple meanings in American sign language: 1,001 interpreter scenarios. RID Press, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Kalata-Zawłocka, Aleksandra. Społeczne i językowe konteksty tłumaczenia języka migowego w Polsce. Wydano nakładem Wydziału Polonistyki Uniwersytetu Warszawskiego, 2017.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Sign language – Translating"

1

Leeson, Lorraine, and Myriam Vermeerbergen. "Sign language interpreting and translating." In Handbook of Translation Studies. John Benjamins Publishing Company, 2010. http://dx.doi.org/10.1075/hts.1.sig2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Reagan, Timothy. "3. Translating and interpreting sign language." In The Translator as Mediator of Cultures. John Benjamins Publishing Company, 2010. http://dx.doi.org/10.1075/wlp.3.06rea.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Koziol, Wojciech, Hubert Wojtowicz, Daniel Szymczyk, Kazimierz Sikora, and Wieslaw Wajs. "A Machine Translation System for Translating from the Polish Natural Language into the Sign Language." In Intelligent Information and Database Systems. Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-15702-3_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Othman, Achraf, and Mohamed Jemni. "A Novel Approach for Translating English Statements to American Sign Language Gloss." In Lecture Notes in Computer Science. Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-08599-9_65.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Leeson, Lorraine, and Sarah Sheridan. "Sign language interpreting." In Routledge Encyclopedia of Translation Studies, 3rd ed. Routledge, 2019. http://dx.doi.org/10.4324/9781315678627-112.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Abutalipov, Alikhan, Aigerim Janaliyeva, Medet Mukushev, Antonio Cerone, and Anara Sandygulova. "Handshape Classification in a Reverse Dictionary of Sign Languages for the Deaf." In From Data to Models and Back. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-70650-0_14.

Full text
Abstract:
This paper showcases the work that aims at building a user-friendly mobile application of a reverse dictionary to translate sign languages to spoken languages. The concept behind the reverse dictionary is the ability to perform a video-based search by demonstrating a handshape in front of a mobile phone’s camera. The user would be able to use this feature in two ways. Firstly, the user would be able to search for a word by showing a handshape for the application to provide a list of signs that contain that handshape. Secondly, the user could fingerspell the word letter by letter in front of the camera for the application to return the sign that corresponds to that word. The user can then look through the suggested videos and see their written translations. To offer other functionalities, the application also has Search by Category and Search by Word options. Currently, the reverse dictionary supports translations from Russian Sign Language (RSL) to Russian language.
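The first search mode described above, filtering a sign inventory by a handshape recognized from the camera, reduces to a simple query over annotated entries. The inventory, handshape labels and field names below are invented for illustration, not actual RSL dictionary data.

```python
# Toy sketch of a reverse-dictionary query: given a handshape label predicted
# from the camera, return the signs whose forms contain that handshape.
# The inventory below is an invented placeholder, not real RSL data.
SIGN_INVENTORY = [
    {"word": "дом",  "handshapes": {"flat", "fist"}, "video": "dom.mp4"},
    {"word": "мама", "handshapes": {"five"},         "video": "mama.mp4"},
    {"word": "да",   "handshapes": {"fist"},         "video": "da.mp4"},
]

def search_by_handshape(handshape):
    """Return the words of all inventory entries that use the given handshape."""
    return [s["word"] for s in SIGN_INVENTORY if handshape in s["handshapes"]]

print(search_by_handshape("fist"))  # ['дом', 'да']
```

In the application described, the handshape label itself would come from a video classifier; the dictionary lookup is the cheap final step.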
APA, Harvard, Vancouver, ISO, and other styles
7

Llewellyn-Jones, Peter. "Technology and sign language interpreting." In The Routledge Handbook of Translation and Technology. Routledge, 2019. http://dx.doi.org/10.4324/9781315311258-17.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Stone, Christopher. "The UNCRPD and “professional” sign language interpreter provision." In Benjamins Translation Library. John Benjamins Publishing Company, 2013. http://dx.doi.org/10.1075/btl.109.08sto.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Pym, Anthony. "Chapter 1. A naïve inquiry into translation between Aboriginal languages in pre-Invasion Australia." In Translation Flows. John Benjamins Publishing Company, 2023. http://dx.doi.org/10.1075/btl.163.01pym.

Full text
Abstract:
Was there translation between Australian Aboriginal languages prior to the European Invasion dated from 1788? The evidence from archeological research and the accounts of early European settlers would suggest that there were no specialized translators as such between Aboriginal languages, no specific communicative solution that could be called translation in the post-Renaissance Western sense of the term, and no evidence of a dominant lingua franca that might have acted as an alternative communication solution. Instead, we find ample reference to polyglot speakers, to multilingual meeting places for trade, ceremony and dispute resolution, to multilingual narratives, and the use of local sign languages, smoke signals, bush tracks and message sticks, all of which could help in the performance of communication across language borders. Taken together, these practices suggest interlingual communication flows based not on conveying a message clearly or quickly, but on multilayered interlingual practices based on respect for the territorial embeddedness of languages and the active, informed interpretation of data. Unlike Western calls for ever more translations across ever more languages, Indigenous practices might enhance sustainability by teaching us to respect linguistic diversity, translate less, and think more.
APA, Harvard, Vancouver, ISO, and other styles
10

Krňoul, Zdeněk, Pavel Jedlička, Miloš Železný, and Luděk Müller. "Motion Capture 3D Sign Language Resources." In European Language Grid. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-17258-8_21.

Full text
Abstract:
The new 3D motion capture data corpus expands the portfolio of existing language resources by a corpus of 18 hours of Czech sign language. This helps alleviate the current problem, which is a critical lack of quality data necessary for research and subsequent deployment of machine learning techniques in this area. We currently provide the largest collection of annotated sign language recordings acquired by state-of-the-art 3D human body recording technology for the successful future deployment of communication technologies, especially machine translation and sign language synthesis.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Sign language – Translating"

1

Chilukala, Mahender Reddy, and Vishwa Vadalia. "A Report on Translating Sign Language to English Language." In 2022 International Conference on Electronics and Renewable Systems (ICEARS). IEEE, 2022. http://dx.doi.org/10.1109/icears53579.2022.9751846.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Joshi, Abhinav, Susmit Agrawal, and Ashutosh Modi. "ISLTranslate: Dataset for Translating Indian Sign Language." In Findings of the Association for Computational Linguistics: ACL 2023. Association for Computational Linguistics, 2023. http://dx.doi.org/10.18653/v1/2023.findings-acl.665.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Mazumder, Seshadri, Rudrabha Mukhopadhyay, Vinay P. Namboodiri, and C. V. Jawahar. "Translating sign language videos to talking faces." In ICVGIP '21: Indian Conference on Computer Vision, Graphics and Image Processing. ACM, 2021. http://dx.doi.org/10.1145/3490035.3490286.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Hu, Li, Jiahui Li, Jiashuo Zhang, Qi Wang, Bang Zhang, and Ping Tan. "A Speech-driven Sign Language Avatar Animation System for Hearing Impaired Applications." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/852.

Full text
Abstract:
Sign language is the language of communication used in the hearing-impaired community. Recently, research on sign language production has made great progress but still needs to cope with some critical challenges. In this paper, we propose a system-level scheme and push forward the implementation of sign language production for practical usage. We build a system capable of translating speech into a sign language avatar. Different from previous approaches that focus on a single technology, we systematically combine algorithms for language translation, body gesture animation and facial avatar generation. We also develop two applications, the Sign Language Interpretation APP and the Virtual Sign Language Anchor, to facilitate easy and clear communication for hearing-impaired people.
APA, Harvard, Vancouver, ISO, and other styles
5

Zhu, Dele, Vera Czehmann, and Eleftherios Avramidis. "Neural Machine Translation Methods for Translating Text to Sign Language Glosses." In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, 2023. http://dx.doi.org/10.18653/v1/2023.acl-long.700.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Gumuscekicci, Gizem, Ozay Ezerceli, and F. Boray Tek. "Web Service Translating Content into Turkish Sign Language." In 2020 5th International Conference on Computer Science and Engineering (UBMK). IEEE, 2020. http://dx.doi.org/10.1109/ubmk50275.2020.9219479.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Pari, S. Neelavathy, M. Jiavudin Sharafath Ahamed, M. Magarika, and K. Latchiyanathan. "SLatAR - A Sign Language Translating Augmented Reality Application." In 2023 12th International Conference on Advanced Computing (ICoAC). IEEE, 2023. http://dx.doi.org/10.1109/icoac59537.2023.10249271.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Alabbad, Dina A., Nouha O. Alsaleh, Naimah A. Alaqeel, Yara A. Alshehri, Nashwa A. Alzahrani, and Maha K. Alhobaishi. "A Robot-based Arabic Sign Language Translating System." In 2022 7th International Conference on Data Science and Machine Learning Applications (CDMA). IEEE, 2022. http://dx.doi.org/10.1109/cdma54072.2022.00030.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Dweik, Amal, Hanaa Qasrawi, and Dana Shawar. "Smart Glove for Translating Arabic Sign Language “SGTArSL”." In 2021 31st International Conference on Computer Theory and Applications (ICCTA). IEEE, 2021. http://dx.doi.org/10.1109/iccta54562.2021.9916612.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Guo, Dan, Shuo Wang, Qi Tian, and Meng Wang. "Dense Temporal Convolution Network for Sign Language Translation." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/105.

Full text
Abstract:
Sign language translation (SLT), which aims at translating a sign language video into natural language, is a weakly supervised task, given that there is no exact mapping relationship between visual actions and textual words in a sentence label. To align the sign language actions and translate them into the respective words automatically, this paper proposes a dense temporal convolution network, termed DenseTCN, which captures the actions in hierarchical views. Within this network, a temporal convolution (TC) is designed to learn the short-term correlation among adjacent features and is further extended to a dense hierarchical structure. In the k-th TC layer, we integrate the outputs of all preceding layers together: (1) the TC in a deeper layer essentially has a larger receptive field, which captures long-term temporal context through the hierarchical content transition; (2) the integration addresses the SLT problem from different views, including embedded short-term and extended long-term sequential learning. Finally, we adopt the CTC loss and a fusion strategy to learn the feature-wise classification and generate the translated sentence. The experimental results on two popular sign language benchmarks, i.e., PHOENIX and USTC-ConSents, demonstrate the effectiveness of our proposed method in terms of various measurements.
APA, Harvard, Vancouver, ISO, and other styles
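The dense connectivity the abstract describes, where the k-th TC layer integrates the outputs of all preceding layers so that deeper layers see a larger temporal receptive field, can be illustrated with a minimal pure-Python sketch. The averaging kernel, the scalar features, and the layer count below are illustrative assumptions standing in for learned convolutions over real feature channels.

```python
# Minimal sketch of dense temporal connectivity: each layer consumes a
# fusion of all preceding layers' outputs, so deeper layers aggregate a
# growing temporal context. A moving average stands in for a learned
# temporal convolution over feature channels.

def temporal_conv(seq, kernel_size=3):
    """1-D temporal 'convolution': average each frame with its neighbours."""
    out = []
    for t in range(len(seq)):
        lo = max(0, t - kernel_size // 2)
        hi = min(len(seq), t + kernel_size // 2 + 1)
        window = seq[lo:hi]
        out.append(sum(window) / len(window))
    return out

def dense_tcn(frames, num_layers=3):
    """Stack TC layers densely: layer k fuses the outputs of ALL preceding
    layers (element-wise mean here, standing in for channel-wise
    concatenation) before applying its temporal convolution."""
    outputs = [list(frames)]
    for _ in range(num_layers):
        fused = [sum(vals) / len(vals) for vals in zip(*outputs)]
        outputs.append(temporal_conv(fused))
    return outputs[-1]
```

With a 3-frame kernel, layer 1 sees 3 frames of context, layer 2 effectively sees 5, and so on; the dense fusion keeps the short-term views of earlier layers available alongside the long-term view of the deepest one, which is the hierarchical-views idea the paper builds on.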