Journal articles on the topic 'Native deaf signers'

Consult the top 50 journal articles for your research on the topic 'Native deaf signers.'

1

Surian, Luca, Mariantonia Tedoldi, and Michael Siegal. "Sensitivity to conversational maxims in deaf and hearing children." Journal of Child Language 37, no. 4 (2009): 929–43. http://dx.doi.org/10.1017/s0305000909990043.

Abstract:
We investigated whether access to a sign language affects the development of pragmatic competence in three groups of deaf children aged 6 to 11 years: native signers from deaf families receiving bimodal/bilingual instruction, native signers from deaf families receiving oralist instruction, and late signers from hearing families receiving oralist instruction. The performance of these children was compared to a group of hearing children aged 6 to 7 years on a test designed to assess sensitivity to violations of conversational maxims. Native signers with bimodal/bilingual instruction were as able as the hearing children to detect violations that concern truthfulness (Maxim of Quality) and relevance (Maxim of Relation). On items involving these maxims, they outperformed both the late signers and native signers attending oralist schools. These results dovetail with previous findings on mindreading in deaf children and underscore the role of early conversational experience and instructional setting in the development of pragmatics.
2

Roman, Gretchen, Daniel S. Peterson, Edward Ofori, and Meghan E. Vidt. "Upper extremity biomechanics in native and non-native signers." Work 70, no. 4 (2021): 1111–19. http://dx.doi.org/10.3233/wor-213622.

Abstract:
BACKGROUND: Individuals fluent in sign language (signers) born to non-signing, non-deaf parents (non-natives) may have a greater injury risk than signers born to signing, deaf parents (natives). A comprehensive analysis of movement while signing in natives and non-natives has not been completed and could provide insight into the greater injury prevalence of non-natives. OBJECTIVE: The objective of this study was to determine differences in upper extremity biomechanics between non-natives and natives. METHODS: Strength, ‘micro’ rests, muscle activation, ballistic signing, joint angle, and work envelope were captured across groups. RESULTS: Non-natives had fewer rests (p = 0.002) and greater activation (p = 0.008) in non-dominant upper trapezius. For ballistic signing, natives had greater anterior-posterior jerk (p = 0.033) and for joint angle, natives demonstrated greater wrist flexion-extension range of motion (p = 0.040). Natives also demonstrated greater maximum medial-lateral (p = 0.015), and greater minimum medial-lateral (p = 0.019) and superior-inferior (p = 0.027) positions. CONCLUSIONS: We observed that natives presented with more rests and less activation, but greater ballistic tendencies, joint angle, and envelope compared to non-natives. Additional work should explore potential links between these outcomes and injury risk in signers.
3

Quandt, Lorna C., Emily Kubicek, Athena Willis, and Jason Lamberton. "Enhanced biological motion perception in deaf native signers." Neuropsychologia 161 (October 2021): 107996. http://dx.doi.org/10.1016/j.neuropsychologia.2021.107996.

4

Brentari, Diane, Marie A. Nadolske, and George Wolford. "Can experience with co-speech gesture influence the prosody of a sign language? Sign language prosodic cues in bimodal bilinguals." Bilingualism: Language and Cognition 15, no. 2 (2012): 402–12. http://dx.doi.org/10.1017/s1366728911000587.

Abstract:
In this paper the prosodic structure of American Sign Language (ASL) narratives is analyzed in deaf native signers (L1-D), hearing native signers (L1-H), and highly proficient hearing second language signers (L2-H). The results of this study show that the prosodic patterns used by these groups are associated both with their ASL language experience (L1 or L2) and with their hearing status (deaf or hearing), suggesting that experience using co-speech gesture (i.e. gesturing while speaking) may have some effect on the prosodic cues used by hearing signers, similar to the effects of the prosodic structure of an L1 on an L2.
5

Miller, P., T. Kargin, and B. Guldenoglu. "Deaf Native Signers Are Better Readers Than Nonnative Signers: Myth or Truth?" Journal of Deaf Studies and Deaf Education 20, no. 2 (2015): 147–62. http://dx.doi.org/10.1093/deafed/enu044.

6

Hall, Matthew L., Victor S. Ferreira, and Rachel I. Mayberry. "Phonological similarity judgments in ASL." Sign Language and Linguistics 15, no. 1 (2012): 104–27. http://dx.doi.org/10.1075/sll.15.1.05hal.

Abstract:
We created a novel paradigm to investigate phonological processing in sign and asked how age of acquisition (AoA) may affect it. Participants indicated which of two signs was more phonologically similar to a target, and estimated the strength of the resemblance with a mouse click along a continuous scale. We manipulated AoA by testing deaf native and non-native signers, and hearing L2 signers and sign-naïve participants. Consistent with previous research, judgments by the native and L2 signers reflected similarity based on shared phonological features between signs. By contrast, judgments by the non-native signers and sign-naïve participants were influenced by other (potentially visual or somatosensory) properties of signs that native and L2 signers ignored. These results suggest that early exposure to language helps a learner discern which aspects of a linguistic signal are most likely to matter for language learning, even if that language belongs to a different modality.
7

Lu, Jenny, Anna Jones, and Gary Morgan. "The impact of input quality on early sign development in native and non-native language learners." Journal of Child Language 43, no. 3 (2016): 537–52. http://dx.doi.org/10.1017/s0305000915000835.

Abstract:
There is debate about how input variation influences child language. Most deaf children are exposed to a sign language from their non-fluent hearing parents and experience a delay in exposure to accessible language. A small number of children receive language input from their deaf parents who are fluent signers. Thus it is possible to document the impact of quality of input on early sign acquisition. The current study explores the outcomes of differential input in two groups of children aged two to five years: deaf children of hearing parents (DCHP) and deaf children of deaf parents (DCDP). Analysis of child sign language revealed DCDP had a more developed vocabulary and more phonological handshape types compared with DCHP. In naturalistic conversations deaf parents used more sign tokens and more phonological types than hearing parents. Results are discussed in terms of the effects of early input on subsequent language abilities.
8

Timperlake, Erin, Lawrence Pick, Donna Morere, and Pamela Dean. "A-218 Development of an American Sign Language Cognitive Screening Measure for Deaf Adults." Archives of Clinical Neuropsychology 37, no. 6 (2022): 1374. http://dx.doi.org/10.1093/arclin/acac060.218.

Abstract:
Objective: The Deaf community in the United States is recognized as a unique linguistic and cultural minority group. However, there are almost no American Sign Language (ASL) cognitive screening measures for this population (Dean et al., 2009; Atkinson et al., 2015). This study approached the development of a valid and conceptually equivalent measure of cognitive functioning with the involvement of Deaf community support and feedback. Method: Measure development was conducted by a team of neuropsychologists and clinical psychology PhD students with expertise in ASL linguistics and Deaf culture. The formal linguistic development included two native Deaf signers, one Deaf Interpreter, and one Certified Deaf Interpreter. The measure was then administered to a pilot sample of 20 cognitively intact Deaf adults (age: M = 40.10, SD = 5.50) fluent in ASL. Results: The measure showed good internal reliability in preliminary analyses (α = .72, λ-2 = .77, KR20 = .75), and it positively correlated with ASL fluency as determined by the ASL-Comprehension Test (r(18) = .54, p = .01; Hauser et al., 2016). There were no significant correlations with self-reported educational attainment (r(18) = -.21, p = .39) or race/ethnicity (r(18) = -.11, p = .64). Conclusion: Preliminary analysis of an ASL cognitive screening measure developed for use with culturally Deaf signers shows promise. This is one of the first attempts to create such a measure in the United States; the next step is further development and piloting with a sample of cognitively intact older adult Deaf signers.
9

Hauser, P. C., J. Cohen, M. W. G. Dye, and D. Bavelier. "Visual Constructive and Visual-Motor Skills in Deaf Native Signers." Journal of Deaf Studies and Deaf Education 12, no. 2 (2007): 148–57. http://dx.doi.org/10.1093/deafed/enl030.

10

Bogliotti, Caroline, Hatice Aksen, and Frédéric Isel. "Language experience in LSF development: Behavioral evidence from a sentence repetition task." PLOS ONE 15, no. 11 (2020): e0236729. http://dx.doi.org/10.1371/journal.pone.0236729.

Abstract:
In psycholinguistics and clinical linguistics, the Sentence Repetition Task (SRT) is known to be a valuable tool to screen general language abilities in both spoken and signed languages. This task enables users to reliably and quickly assess linguistic abilities at different levels of linguistic analysis such as phonology, morphology, lexicon, and syntax. To evaluate sign language proficiency in deaf children using French Sign Language (LSF), we designed a new SRT comprising 20 LSF sentences. The task was administered to a cohort of 62 children, comprising 34 native signers (6;09–12 years) and 28 non-native signers (6;08–12;08 years), in order to study their general linguistic development as a function of age of sign language acquisition (AOA) and chronological age (CA). Previously, a group of 10 adult native signers was also evaluated with this task. As expected, our results showed a significant effect of AOA, indicating that the native signers repeated more signs and were more accurate than non-native signers. A similar pattern of results was found for CA. Furthermore, native signers made fewer phonological errors (i.e., handshape, movement, and location) than non-native signers. Finally, as shown in previous sign language studies, handshape and movement proved to be the most difficult parameters to master regardless of AOA and CA. Taken together, our findings support the assumption that AOA is a crucial factor in the development of phonological skills regardless of language modality (spoken vs. signed). This study thus constitutes a first step toward a theoretical description of the developmental trajectory in LSF, a hitherto understudied language.
11

Jaeger, Hanna, and Anita Junghanns. "Augenblick mal! Theoretische Überlegungen und methodische Zugänge zur Erforschung sozialer Variation in der Deutschen Gebärdensprache." Zeitschrift für Angewandte Linguistik 2018, no. 69 (2018): 97–128. http://dx.doi.org/10.1515/zfal-2018-0018.

Abstract:
Deaf sign language users oftentimes claim to be able to recognise straight away whether their interlocutors are native signers. To date it is unclear, however, what exactly such judgement calls might be based on. The aim of the research presented was to explore whether specific articulatory features are associated with signers who have (allegedly) acquired German Sign Language (Deutsche Gebärdensprache, DGS) as their first language. The study is based on the analysis of qualitative and quantitative data. Qualitative data were generated in ten focus group settings. Each group was made up of three participants and one facilitator. Deaf participants' meta-linguistic claims concerning linguistic features of 'native signing' (i.e. what native signing looks like) were qualitatively analysed using grounded theory methods. Quantitative data were generated via a language assessment experiment designed around stimulus material extracted from DGS corpus data. Participants were asked to judge whether or not individual clips extracted from a DGS corpus had been produced by a native signer. Against the backdrop of the findings identified in the focus group data, the stimulus material was subsequently linguistically analysed in order to identify specific linguistic features that might account for some clips being judged as 'produced by a native signer' as opposed to others that were claimed to have been 'articulated by a non-native signer'. Through juxtaposing meta-linguistic perspectives, the results of a language perception experiment and the linguistic analysis of the stimulus material, the study brings to the fore specific crystallisation points of linguistic and social features indexing linguistic authenticity. The findings break new ground in that they suggest that the face as articulator in general, and micro-prosodic features expressed in the movement of eyes, eyebrows and mouth in particular, play a significant role in the perception of others as (non-)native signers.
12

Lucas, Ceil, and Clayton Valli. "ASL or contact signing: Issues of judgment." Language in Society 20, no. 2 (1991): 201–16. http://dx.doi.org/10.1017/s0047404500016274.

Abstract:
This article reports on one aspect of an ongoing study of language contact in the American deaf community. A kind of signing that results from the contact between American Sign Language (ASL) and English exhibits features of both languages. The ultimate goal of the study is a linguistic description of contact signing and a reexamination of claims that it is a pidgin. Ten dyads and two triads of native ASL signers (6 white dyads, 4 black dyads, 2 black triads) were videotaped with a deaf interviewer, a hearing interviewer, and alone with each other. The different interview situations induced switching between ASL and contact signing. This article (1) reviews the pattern of language use during the interviews with the white dyads and describes the judgments of selected videotaped segments by 10 native signers; (2) examines the role of demographic information in judgments. For each segment, half of the judges were given one set of demographic information, and the other half were given another set. Indications are that this information does affect judgment, even though the linguistic forms viewed were identical. (American Sign Language, language contact, language judgments, deaf community)
13

Timperlake, Erin C., Lawrence Pick, Pamela Dean, and Donna Morere. "14 A Culturally and Linguistically Informed Approach to the Development of a Cognitive Screener for Deaf Adults using American Sign Language." Journal of the International Neuropsychological Society 29, s1 (2023): 429–30. http://dx.doi.org/10.1017/s1355617723005659.

Abstract:
Objective: When assessing individuals from diverse backgrounds, APA ethical principles emphasize the consideration of language and culture when selecting appropriate measures. Research among hearing, English-speaking individuals has shown the effects on the identification of cognitive deficits when language, culture, and educational background are not considered in the selection and administration of measures (Ardilla, 2007). Among the Deaf community in the US, a minority group with a unique culture and language (American Sign Language: ASL), there have been few attempts to adapt existing English cognitive measures. Factors complicating this include limited research resources, given the small number of neuropsychologists and researchers who understand both the complexities of the measures and the linguistic and cultural factors within the Deaf population. The goal of the current project is to develop a culturally informed interpretation of a cognitive screening tool for appropriate use with older Deaf adults. Participants and Methods: Item selection was informed by MMSE data from Dean et al. (2009) and methods utilized by Atkinson et al. (2015). Item selection occurred through consultation with three neuropsychologists and graduate peers with either native signing abilities or demonstrated ASL fluency, as well as Deaf identities, cultural affiliation and/or community engagement. Selection considered the potential for translation errors, particularly related to equivalence of translation from a spoken modality to a signed one. Items were categorized into the following domains: Orientation, Attention, Memory, Language, Executive Functioning, Visuospatial, and Performance Validity. Two native signers (Deaf interpreters) provided formal translation of the items. The measure was piloted with 20 deaf and hard of hearing (DHH) adult signers (ages M = 41.10, SD = 5.50, range = 31–48). Items were prerecorded to standardize the administration and were shown to participants through the screenshare function of Zoom software. Results: The average performance was 100.80 (SD = 3.91) out of 105 possible points. Within the memory domain, some errors, especially for word selection on delayed recall, were noted, which may be related to sign choice and dialect. Additionally, on culture-specific episodic memory items, 35% of participants were unable to provide a correct answer, with qualitative responses indicating this information may be more familiar to a subset of the Deaf community that had attended Gallaudet University in Washington, D.C. There was a significant positive relationship between ASL fluency, determined by the ASL-Comprehension Test, and performance on the cognitive screener (r(18) = .54, p = .01), while age of onset of deafness (r(18) = -.16, p = .51) and age of ASL acquisition (r(18) = .21, p = .37) were not significant. Conclusions: Results of this preliminary project yielded a measure that benefited from the inclusion of content experts in the field during the process of interpretation and translation. It appears appropriate for Deaf signers who are proficient in ASL. The pattern of correlations suggests the measure may be appropriate for use with fluent signers with experience in ASL acquisition. Further development of the measure should focus on appropriate items that address the diversity of the Deaf experience as well as continue to explore inclusive translation approaches.
14

Hoffmeister, Robert J., Spyridoula Karipi, and Vassilis Kourbetis. "Bilingual Curriculum Materials Supporting Signed Language as a First Language for Deaf Students." Momento - Diálogos em Educação 31, no. 02 (2022): 500–527. http://dx.doi.org/10.14295/momento.v31i02.14506.

Abstract:
Considering Deaf children and adults as bilingual - their first language is a Signed Language (SL) and the second language is learned via print - provides professionals with a paradigm to be used for creating better learning opportunities. In this paper, Greek Sign Language ((G)SL) [1] as a first language (L1) is the base language we use to present certain bilingual methodological teaching and learning considerations. This work is the result of a long journey from the initial thinking of the American Sign Language Curriculum and its influence on the development of the (G)SL curriculum in Greece. The paper offers discussion of innovative educational multimedia material that is easily accessed via online web portals, developed for teaching (G)SL as an L1 to pre-school and primary school Deaf children. In this work, SL as L1 is a resource that fully enables Deaf children to learn an L2 via print, supporting their bilingual acquisition capabilities. In developing curricula and supporting materials, we consider two important foundational components: Deaf native signers and near-native signers as language role models for Deaf children, parents and teachers; and the development of and interaction with digital educational materials. Thus, collaboration between educational and technology professionals and members of the Deaf community is critical. This bilingual model can be incorporated into any SL. (G)SL is used as a model to display innovative practices merging SL (L1), print (L2), technology and creative instructional and assessment materials, maximized by understanding the visual nature of SL and its advantages for school learning. The ultimate goal is for Deaf students to become successful bilingual learners who can fully function in the world today and tomorrow.

[1] In this paper, we will use (G)SL to indicate that we are discussing Greek Signed Language, but the content and technology can be used for any SL.
15

van den Bogaerde, Beppie. "De Nederlandse Gebarentaal En Taalonderwijs." TTW: De nieuwe generatie 39 (January 1, 1991): 75–82. http://dx.doi.org/10.1075/ttwia.39.07bog.

Abstract:
Sign Language of the Netherlands (SLN) is considered to be the native language of many prelingually deaf people in the Netherlands. Although research has provided evidence that sign languages are fully fledged natural languages, many misconceptions still abound about sign languages and deaf people. The low status of sign languages all over the world and the attitude of hearing people towards deaf people and their languages, and the resulting attitude of the deaf towards their own languages, restricted the development of these languages until recently. Due to the poor results of deaf education and the dissatisfaction amongst educators of the deaf, parents of deaf children and deaf people themselves, a change of attitude towards the function of sign language in the interaction with deaf people can be observed; many hearing people dealing with deaf people in one way or another wish to learn the sign language of the deaf community of their country. Many hearing parents of deaf children, teachers of the deaf, student-interpreters and linguists are interested in sign language and want to follow a course to improve their signing ability. In order to develop sign language courses, sign language teachers and teaching materials are needed. And precisely these are missing. This is caused by several factors. First, deaf people in general do not receive the same education as hearing people, due to their inability to learn the spoken language of their environment to such an extent that they have access to the full educational program. Among other things, this prevents them from becoming teachers in elementary and secondary schools, or from becoming sign language teachers. Although they are fluent "signers", they lack the competence in the spoken language of their country to obtain a teacher's degree in their sign language. A second problem is caused by the fact that sign languages are visual languages: no adequate system has yet been found to write down a sign language. So until now hardly any teaching materials were available. Sign language courses should be developed with the help of native signers who should be educated to become language teachers; with their help and with the help of video material and computer software, it will be possible in future to teach sign languages as any other language. But in order to reach this goal, it is imperative that deaf children get a better education so that they can contribute to the emancipation of their language.
16

Efthimiou, Eleni, Stavroula-Evita Fotinea, Theodore Goulas, Anna Vacalopoulou, Kiki Vasilaki, and Athanasia-Lida Dimou. "Sign Language Technologies and the Critical Role of SL Resources in View of Future Internet Accessibility Services." Technologies 7, no. 1 (2019): 18. http://dx.doi.org/10.3390/technologies7010018.

Abstract:
In this paper, we touch upon the requirement for accessibility via Sign Language as regards dynamic composition and exchange of new content in the context of natural language-based human interaction, and also the accessibility of web services and electronic content in written text by deaf and hard-of-hearing individuals. In this framework, one key issue remains the option for composition of signed “text”, along with the ability for the reuse of pre-existing signed “text” by exploiting basic editing facilities similar to those available for written text that serve vocal language representation. An equally critical related issue is accessibility of vocal language text by born or early deaf signers, as well as the use of web-based facilities via Sign Language-supported interfaces, taking into account that the majority of native signers present limited reading skills. It is, thus, demonstrated how Sign Language technologies and resources may be integrated in human-centered applications, enabling web services and content accessibility in the education and an everyday communication context, in order to facilitate integration of signer populations in a societal environment that is strongly defined by smart life style conditions. This potential is also demonstrated by end-user-evaluation results.
17

Goico, Sara. "A helping hand." Research on Children and Social Interaction 7, no. 2 (2023): 262–87. http://dx.doi.org/10.1558/rcsi.23340.

Abstract:
In this article, I examine responses to the taking of food items during snack time in an early childhood education classroom with deaf toddlers (18 months to three years old) who are native signers of American Sign Language (ASL). These children have grown up with exposure to ASL from deaf family members and are attending a classroom where all individuals use ASL. Through the combined analysis of ethnographic and interactional data, I argue that the teachers’ corporeal socialization of the deaf toddlers into a visual orientation leads to their development of a social and moral understanding of ownership rights within the classroom which is displayed in the children’s social awareness, social responsiveness and self-reliance in responding to food takings.
18

Krebs, Julia, Ronnie B. Wilbur, and Dietmar Roehm. "Two agreement markers in Austrian Sign Language (ÖGS)." Sign Language and Linguistics 20, no. 1 (2017): 27–54. http://dx.doi.org/10.1075/sll.20.1.02kre.

Abstract:
For many of the sign languages studied to date, different types of agreement markers have been described which express agreement in transitive constructions involving non-inflecting (plain) verbs and sometimes even inflected agreement verbs. Austrian Sign Language (ÖGS) belongs to the group of sign languages employing two different agreement markers (agrm-bc/agrm-mf), which will be described in this paper. In an online questionnaire, we focused on two questions: (i) whether both forms of agreement markers are rated as equally acceptable by Deaf ÖGS-signers and hearing native signers, and (ii) whether there is a preferred syntactic position (pre- vs. postverbal) for these markers. Data analysis confirmed that both agreement markers are accepted by ÖGS-signers and that both agreement markers are slightly preferred in preverbal position. Further, possible origins of both agreement markers are discussed.
19

Mohr, Susanne. "The visual-gestural modality and beyond." Sign Language and Linguistics 15, no. 2 (2012): 185–211. http://dx.doi.org/10.1075/sll.15.2.01moh.

Abstract:
The article analyses cross-modal language contact between signed and spoken languages with special reference to the Irish Deaf community. This is exemplified by an examination of the phenomenon of mouthings in Irish Sign Language including its origins, dynamics, forms and functions. Initially, the setup of language contact with respect to Deaf communities and the sociolinguistics of the Irish Deaf community are discussed, and in the main part the article analyses elicited data in the form of personal stories by twelve native signers from the Republic of Ireland. The major aim of the investigation is to determine whether mouthings are yet fully integrated into ISL and if so, whether this integration has ultimately caused language change. Finally, it is asked whether traditional sociolinguistic frameworks of language contact can actually tackle issues of cross-modal language contact occurring between signed and spoken languages.
20

Keane, Jonathan, Zed Sevcikova Sehyr, Karen Emmorey, and Diane Brentari. "A theory-driven model of handshape similarity." Phonology 34, no. 2 (2017): 221–41. http://dx.doi.org/10.1017/s0952675717000124.

Abstract:
Following the Articulatory Model of Handshape (Keane 2014), which mathematically defines handshapes on the basis of joint angles, we propose two methods for calculating phonetic similarity: a contour difference method, which assesses the amount of change between handshapes within a fingerspelled word, and a positional similarity method, which compares similarity between pairs of letters in the same position across two fingerspelled words. Both methods are validated with psycholinguistic evidence based on similarity ratings by deaf signers. The results indicate that the positional similarity method more reliably predicts native signer intuition judgements about handshape similarity. This new similarity metric fills a gap in the literature (the lack of a theory-driven similarity metric) that has been empty since effectively the beginning of sign-language linguistics.
21

Capek, C. M., G. Grossi, A. J. Newman, et al. "Brain systems mediating semantic and syntactic processing in deaf native signers: Biological invariance and modality specificity." Proceedings of the National Academy of Sciences 106, no. 21 (2009): 8784–89. http://dx.doi.org/10.1073/pnas.0809609106.

22

Capek, Cheryl M., Bencie Woll, Mairéad MacSweeney, et al. "Superior temporal activation as a function of linguistic knowledge: Insights from deaf native signers who speechread." Brain and Language 112, no. 2 (2010): 129–34. http://dx.doi.org/10.1016/j.bandl.2009.10.004.

23

Evans, Charlotte J., and Kelvin L. Seifert. "Fostering the Development of ESL/ASL Bilinguals." TESL Canada Journal 18, no. 1 (2000): 01. http://dx.doi.org/10.18806/tesl.v18i1.896.

Abstract:
This article provides a bilingual perspective about literacy development in deaf students and uses the bilingual perspective to recommend effective teaching strategies for this group of students with special needs. In the case of deaf students, however, the bilingualism is not between two oral languages, but between American Sign Language (ASL) and written English. The analogy of Deaf education to bilingual education is imperfect, as the article shows, but nonetheless helpful in suggesting educational strategies. One difference from classic bilingual education is the difference in mode of the two languages, with ASL using a haptic mode (signing) and written English using a visual mode. Another difference is the nontraditional nature of Deaf communities. Although ASL communities certainly have histories and traditions, Deaf individuals rarely learn these from family ties or immersion in a kinship-based culture that "speaks" ASL. Despite these differences in language mode and cultural transmission, teaching deaf students benefits from many strategies usually associated with the teaching of second languages, including fostering motivation, developing self-concepts, understanding language development, knowing elements of a student's first language, allowing judicious translation, focusing on comprehension rather than syntax, and incorporating cultural values and native speakers-signers as role models.
24

Fung, Cat H. M., and Gladys Tang. "Code-blending of functional heads in Hong Kong Sign Language and Cantonese: A case study." Bilingualism: Language and Cognition 19, no. 4 (2016): 754–81. http://dx.doi.org/10.1017/s1366728915000747.

Abstract:
In analyzing code-switching in spoken languages, Chan (2003, 2008) proposes that only functional heads with their associated language determine the order of the complement. In this paper, we examine whether Chan's analysis can account for code-blending in Hong Kong Sign Language (HKSL) and Cantonese by a deaf child (2;0.26–6;6.26) and three deaf adult native signers. HKSL and Cantonese differ in head directionality so far as the functional elements of modals, negators, and auxiliaries are concerned. They are head-final in HKSL but head-initial in Cantonese. The HKSL–Cantonese code-blending data in this study largely conform to Chan's analysis, where the order of the complement is determined by which language the functional head appears in. However, code-blending the functional heads of a similar category in both languages leads to either order of the complement. Also, the deaf child's apparent violations of adult HKSL grammar reveal crosslinguistic influence from Cantonese to HKSL during code-blending.
25

Woolfe, Tyron, Rosalind Herman, Penny Roy, and Bencie Woll. "Early vocabulary development in deaf native signers: a British Sign Language adaptation of the communicative development inventories." Journal of Child Psychology and Psychiatry 51, no. 3 (2010): 322–31. http://dx.doi.org/10.1111/j.1469-7610.2009.02151.x.

26

Kanto, Laura, Henna Syrjälä, and Wolfgang Mann. "Assessing Vocabulary in Deaf and Hearing Children using Finnish Sign Language." Journal of Deaf Studies and Deaf Education 26, no. 1 (2020): 147–58. http://dx.doi.org/10.1093/deafed/enaa032.

Abstract:
This study investigates children's vocabulary knowledge in Finnish Sign Language (FinSL), specifically their understanding of different form-meaning mappings, by using a multilayered assessment format originally developed for British Sign Language (BSL). The web-based BSL vocabulary test by Mann (2009) was adapted for FinSL following the steps outlined by Mann, Roy and Morgan (2016) and piloted with a small group of deaf and hearing native signers (N = 24). Findings showed a hierarchy of difficulty between the tasks, which is concordant with results reported previously for BSL and American Sign Language (ASL). Additionally, the reported psychometric properties of the FinSL vocabulary test strengthen previous claims made for BSL and ASL that the underlying construct is appropriate for use with signed languages. Results also add new insights into the adaptation process of tests from one signed language to another and show this process to be a reliable and valid way to develop assessment tools in lesser-researched signed languages such as FinSL.
27

Mastrantuono, Eliana, Michele Burigo, Isabel R. Rodríguez-Ortiz, and David Saldaña. "The Role of Multiple Articulatory Channels of Sign-Supported Speech Revealed by Visual Processing." Journal of Speech, Language, and Hearing Research 62, no. 6 (2019): 1625–56. https://doi.org/10.1044/2019_JSLHR-S-17-0433.

Abstract:
Purpose The use of sign-supported speech (SSS) in the education of deaf students has been recently discussed in relation to its usefulness with deaf children using cochlear implants. To clarify the benefits of SSS for comprehension, 2 eye-tracking experiments aimed to detect the extent to which signs are actively processed in this mode of communication. Method Participants were 36 deaf adolescents, including cochlear implant users and native deaf signers. Experiment 1 attempted to shift observers' foveal attention to the linguistic source in SSS from which most information is extracted, lip movements or signs, by magnifying the face area, thus modifying lip movements perceptual accessibility (magnified condition), and by constraining the visual field to either the face or the sign through a moving window paradigm (gaze contingent condition). Experiment 2 aimed to explore the reliance on signs in SSS by occasionally producing a mismatch between sign and speech. Participants were required to concentrate upon the orally transmitted message. Results In Experiment 1, analyses revealed a greater number of fixations toward the signs and a reduction in accuracy in the gaze contingent condition across all participants. Fixations toward signs were also increased in the magnified condition. In Experiment 2, results indicated less accuracy in the mismatching condition across all participants. Participants looked more at the sign when it was inconsistent with speech. Conclusions All participants, even those with residual hearing, rely on signs when attending SSS, either peripherally or through overt attention, depending on the perceptual conditions.
28

MacSweeney, Mairéad, Bencie Woll, Ruth Campbell, et al. "Neural Correlates of British Sign Language Comprehension: Spatial Processing Demands of Topographic Language." Journal of Cognitive Neuroscience 14, no. 7 (2002): 1064–75. http://dx.doi.org/10.1162/089892902320474517.

Abstract:
In all signed languages used by deaf people, signs are executed in "sign space" in front of the body. Some signed sentences use this space to map detailed "real-world" spatial relationships directly. Such sentences can be considered to exploit sign space "topographically." Using functional magnetic resonance imaging, we explored the extent to which increasing the topographic processing demands of signed sentences was reflected in the differential recruitment of brain regions in deaf and hearing native signers of British Sign Language (BSL). When BSL signers performed a sentence anomaly judgement task, the occipito-temporal junction was activated bilaterally to a greater extent for topographic than nontopographic processing. The differential role of movement in the processing of the two sentence types may account for this finding. In addition, enhanced activation was observed in the left inferior and superior parietal lobules during processing of topographic BSL sentences. We argue that the left parietal lobe is specifically involved in processing the precise configuration and location of hands in space to represent objects, agents, and actions. Importantly, no differences in these regions were observed when hearing people heard and saw English translations of these sentences. Despite the high degree of similarity in the neural systems underlying signed and spoken languages, exploring the linguistic features which are unique to each of these broadens our understanding of the systems involved in language comprehension.
29

Stroh, Anna‐Lena, Konstantin Grin, Frank Rösler, et al. "Developmental experiences alter the temporal processing characteristics of the visual cortex: Evidence from deaf and hearing native signers." European Journal of Neuroscience 55, no. 6 (2022): 1629–44. http://dx.doi.org/10.1111/ejn.15629.

30

Waters, Dafydd, Ruth Campbell, Cheryl M. Capek, et al. "Fingerspelling, signed language, text and picture processing in deaf native signers: The role of the mid-fusiform gyrus." NeuroImage 35, no. 3 (2007): 1287–302. http://dx.doi.org/10.1016/j.neuroimage.2007.01.025.

31

Huenerfauth, Matt. "Spatial, Temporal, and Semantic Models for American Sign Language Generation: Implications for Gesture Generation." International Journal of Semantic Computing 2, no. 1 (2008): 21–45. http://dx.doi.org/10.1142/s1793351x08000336.

Abstract:
Software to generate animations of American Sign Language (ASL) has important accessibility benefits for the significant number of deaf adults with low levels of written language literacy. We have implemented a prototype software system to generate an important subset of ASL phenomena called "classifier predicates," complex and spatially descriptive types of sentences. The output of this prototype system has been evaluated by native ASL signers. Our generator includes several novel models of 3D space, spatial semantics, and temporal coordination motivated by linguistic properties of ASL. These classifier predicates have several similarities to iconic gestures that often co-occur with spoken language; these two phenomena will be compared. This article explores implications of the design of our system for research in multimodal gesture generation systems. A conceptual model of multimodal communication signals is introduced to show how computational linguistic research on ASL relates to the field of multimodal natural language processing.
32

Sze, Felix, Monica Xiao Wei, and David Lam. "Development of the Hong Kong Sign Language Sentence Repetition Test." Journal of Deaf Studies and Deaf Education 25, no. 3 (2020): 298–317. http://dx.doi.org/10.1093/deafed/enaa001.

Abstract:
This paper presents the design and development of the Hong Kong Sign Language Sentence Repetition Test (HKSL-SRT). It will be argued that the test offers evidence of discriminability and reliability, as well as practicality, and can serve as an effective global measurement of individuals' proficiency in HKSL. The full version of the test consists of 40 signed sentences of increasing length and complexity. Specifically, we will evaluate the manual and non-manual components of these sentences to find out whether and to what extent they can differentiate three groups of deaf signers, namely, native signers, early learners and late learners. Statistical analyses show that the test scores based on a correct repetition of the manual signs of each sentence bear a significant negative correlation with signers' age of acquisition. Including the correct repetition of non-manuals in the scoring scheme can result in higher reliability and a higher separation index of the test in the Rasch model. This paper will also discuss how psychometric measures of Rasch analysis, including the concept of fit and the rankings of items/persons in the Wright map, have been applied to the original list of the 40 sentence items for the development of a shortened test.
33

Ivanko, D., D. Ryumin, and A. Karpov. "AUTOMATIC LIP-READING OF HEARING IMPAIRED PEOPLE." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W12 (May 9, 2019): 97–101. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w12-97-2019.

Abstract:
Inability to use speech interfaces greatly limits deaf and hearing impaired people in the possibility of human-machine interaction. To solve this problem and to increase the accuracy and reliability of the automatic Russian sign language recognition system, it is proposed to use lip-reading in addition to hand gesture recognition. Deaf and hearing impaired people use sign language as the main way of communication in everyday life. Sign language is a structured form of hand gestures and lip movements involving visual motions and signs, which is used as a communication system. Since sign language includes not only hand gestures, but also lip movements that mimic vocalized pronunciation, it is of interest to investigate how accurately such visual speech can be recognized by a lip-reading system, especially considering the fact that the visual speech of hearing impaired people is often characterized by hyper-articulation, which should potentially facilitate its recognition. For this purpose, the thesaurus of Russian sign language (TheRusLan) collected at SPIIRAS in 2018–19 was used. The database consists of color optical FullHD video recordings of 13 native Russian sign language signers (11 females and 2 males) from the "Pavlovsk boarding school for the hearing impaired". Each of the signers demonstrated 164 phrases 5 times. This work covers the initial stages of this research, including data collection, data labeling, region-of-interest detection and methods for informative feature extraction. The results of this study can later be used to create assistive technologies for deaf or hearing impaired people.
34

Cardin, Velia, Eleni Orfanidou, Lena Kästner, et al. "Monitoring Different Phonological Parameters of Sign Language Engages the Same Cortical Language Network but Distinctive Perceptual Ones." Journal of Cognitive Neuroscience 28, no. 1 (2016): 20–40. http://dx.doi.org/10.1162/jocn_a_00872.

Abstract:
The study of signed languages allows the dissociation of sensorimotor and cognitive neural components of the language signal. Here we investigated the neurocognitive processes underlying the monitoring of two phonological parameters of sign languages: handshape and location. Our goal was to determine if brain regions processing sensorimotor characteristics of different phonological parameters of sign languages were also involved in phonological processing, with their activity being modulated by the linguistic content of manual actions. We conducted an fMRI experiment using manual actions varying in phonological structure and semantics: (1) signs of a familiar sign language (British Sign Language), (2) signs of an unfamiliar sign language (Swedish Sign Language), and (3) invented nonsigns that violate the phonological rules of British Sign Language and Swedish Sign Language or consist of nonoccurring combinations of phonological parameters. Three groups of participants were tested: deaf native signers, deaf nonsigners, and hearing nonsigners. Results show that the linguistic processing of different phonological parameters of sign language is independent of the sensorimotor characteristics of the language signal. Handshape and location were processed by different perceptual and task-related brain networks but recruited the same language areas. The semantic content of the stimuli did not influence this process, but phonological structure did, with nonsigns being associated with longer RTs and stronger activations in an action observation network in all participants and in the supramarginal gyrus exclusively in deaf signers. These results suggest higher processing demands for stimuli that contravene the phonological rules of a signed language, independently of previous knowledge of signed languages. We suggest that the phonological characteristics of a language may arise as a consequence of more efficient neural processing for its perception and production.
35

Gabarró-López, Sílvia, Laurence Meurant, and Nicolas Hanquet. "Crossing boundaries: Using French Belgian Sign Language (LSFB) and multimodal French corpora for contrastive, translation and interpreting studies." Across Languages and Cultures 25, no. 2 (2024): 268–87. http://dx.doi.org/10.1556/084.2024.00913.

Abstract:
The French Belgian Sign Language (LSFB) corpus is the cornerstone of a unique multilingual data system that includes four distinct corpora. The first is a reference corpus containing dialogical LSFB data produced by deaf signers, which is also translated into written French, and the second one is a comparable multimodal corpus of Belgian French containing dialogical data produced by hearing native speakers. The other two corpora are made up of interpreted data, namely a parallel bidirectional corpus of LSFB > French data produced by hearing bimodal interpreters and another unidirectional parallel corpus of French > LSFB co-interpreted data produced by hearing and deaf interpreters working in tandem. This paper aims to describe these four corpora and to provide an overview of previous contrastive research drawing on their data and applications which have been developed so far. A new study contrasting reformulation structures in semi-spontaneous LSFB dialogical data and co-interpreted LSFB data is presented in order to exemplify how these corpora can be further compared to shed light on issues unknown to date, such as the specificities of co-interpretation.
36

Lutzenberger, Hannah, Roland Pfau, and Connie de Vos. "Emergence or Grammaticalization? The Case of Negation in Kata Kolok." Languages 7, no. 1 (2022): 23. http://dx.doi.org/10.3390/languages7010023.

Abstract:
Typological comparisons have revealed that signers can use manual elements and/or a non-manual marker to express standard negation, but little is known about how such systematic marking emerges from its gestural counterparts as a new sign language arises. We analyzed 1.73 h of spontaneous language data, featuring six deaf native signers from generations III-V of the sign language isolate Kata Kolok (Bali). These data show that Kata Kolok cannot be classified as a manual dominant or non-manual dominant sign language since both the manual negative sign and a side-to-side headshake are used extensively. Moreover, the intergenerational comparisons indicate a considerable increase in the use of headshake spreading for generation V which is unlikely to have resulted from contact with Indonesian Sign Language varieties. We also attest a specialized negative existential marker, namely, tongue protrusion, which does not appear in co-speech gesture in the surrounding community. We conclude that Kata Kolok is uniquely placed in the typological landscape of sign language negation, and that grammaticalization theory is essential to a deeper understanding of the emergence of grammatical structure from gesture.
37

Hickok, G., K. Say, U. Bellugi, and E. S. Klima. "The basis of hemispheric asymmetries for language and spatial cognition: Clues from focal brain damage in two deaf native signers." Aphasiology 10, no. 6 (1996): 577–91. http://dx.doi.org/10.1080/02687039608248438.

38

Mastrantuono, Eliana, Michele Burigo, Isabel R. Rodríguez-Ortiz, and David Saldaña. "The Role of Multiple Articulatory Channels of Sign-Supported Speech Revealed by Visual Processing." Journal of Speech, Language, and Hearing Research 62, no. 6 (2019): 1625–56. http://dx.doi.org/10.1044/2019_jslhr-s-17-0433.

Abstract:
Purpose The use of sign-supported speech (SSS) in the education of deaf students has been recently discussed in relation to its usefulness with deaf children using cochlear implants. To clarify the benefits of SSS for comprehension, 2 eye-tracking experiments aimed to detect the extent to which signs are actively processed in this mode of communication. Method Participants were 36 deaf adolescents, including cochlear implant users and native deaf signers. Experiment 1 attempted to shift observers' foveal attention to the linguistic source in SSS from which most information is extracted, lip movements or signs, by magnifying the face area, thus modifying lip movements perceptual accessibility (magnified condition), and by constraining the visual field to either the face or the sign through a moving window paradigm (gaze contingent condition). Experiment 2 aimed to explore the reliance on signs in SSS by occasionally producing a mismatch between sign and speech. Participants were required to concentrate upon the orally transmitted message. Results In Experiment 1, analyses revealed a greater number of fixations toward the signs and a reduction in accuracy in the gaze contingent condition across all participants. Fixations toward signs were also increased in the magnified condition. In Experiment 2, results indicated less accuracy in the mismatching condition across all participants. Participants looked more at the sign when it was inconsistent with speech. Conclusions All participants, even those with residual hearing, rely on signs when attending SSS, either peripherally or through overt attention, depending on the perceptual conditions. Supplemental Material https://doi.org/10.23641/asha.8121191
39

Proksch, Jason, and Daphne Bavelier. "Changes in the Spatial Distribution of Visual Attention after Early Deafness." Journal of Cognitive Neuroscience 14, no. 5 (2002): 687–701. http://dx.doi.org/10.1162/08989290260138591.

Abstract:
There is much anecdotal suggestion of improved visual skills in congenitally deaf individuals. However, this claim has only been met by mixed results from careful investigations of visual skills in deaf individuals. Psychophysical assessments of visual functions have failed, for the most part, to validate the view of enhanced visual skills after deafness. Only a few studies have shown an advantage for deaf individuals in visual tasks. Interestingly, all of these studies share the requirement that participants process visual information in their peripheral visual field under demanding conditions of attention. This work has led us to propose that congenital auditory deprivation alters the gradient of visual attention from central to peripheral field by enhancing peripheral processing. This hypothesis was tested by adapting a search task from Lavie and colleagues in which the interference from distracting information on the search task provides a measure of attentional resources. These authors have established that during an easy central search for a target, any surplus attention remaining will involuntarily process a peripheral distractor that the subject has been instructed to ignore. Attentional resources can be measured by adjusting the difficulty of the search task to the point at which no surplus resources are available for the distractor. Through modification of this paradigm, central and peripheral attentional resources were compared in deaf and hearing individuals. Deaf individuals possessed greater attentional resources in the periphery but less in the center when compared to hearing individuals. Furthermore, based on results from native hearing signers, it was shown that sign language alone could not be responsible for these changes. We conclude that auditory deprivation from birth leads to compensatory changes within the visual system that enhance attentional processing of the peripheral visual field.
40

Berteletti, Ilaria, Sarah E. Kimbley, SaraBeth J. Sullivan, Lorna C. Quandt, and Makoto Miyakoshi. "Different Language Modalities Yet Similar Cognitive Processes in Arithmetic Fact Retrieval." Brain Sciences 12, no. 2 (2022): 145. http://dx.doi.org/10.3390/brainsci12020145.

Abstract:
Does experience with signed language impact the neurocognitive processes recruited by adults solving arithmetic problems? We used event-related potentials (ERPs) to identify the components that are modulated by operation type and problem size in Deaf American Sign Language (ASL) native signers and in hearing English-speaking participants. Participants were presented with single-digit subtraction and multiplication problems in a delayed verification task. Problem size was manipulated in small and large problems with an additional extra-large subtraction condition to equate the overall magnitude of large multiplication problems. Results show comparable behavioral results and similar ERP dissociations across groups. First, an early operation type effect is observed around 200 ms post-problem onset, suggesting that both groups have a similar attentional differentiation for processing subtraction and multiplication problems. Second, for the posterior-occipital component between 240 ms and 300 ms, subtraction problems show a similar modulation with problem size in both groups, suggesting that only subtraction problems recruit quantity-related processes. Control analyses exclude possible perceptual and cross-operation magnitude-related effects. These results are the first evidence that the two operation types rely on distinct cognitive processes within the ASL native signing population and that they are equivalent to those observed in the English-speaking population.
APA, Harvard, Vancouver, ISO, and other styles
41

Emmorey, Karen, Stephen McCullough, Sonya Mehta, Laura L. B. Ponto, and Thomas J. Grabowski. "The Biology of Linguistic Expression Impacts Neural Correlates for Spatial Language." Journal of Cognitive Neuroscience 25, no. 4 (2013): 517–33. http://dx.doi.org/10.1162/jocn_a_00339.

Full text
Abstract:
Biological differences between signed and spoken languages may be most evident in the expression of spatial information. PET was used to investigate the neural substrates supporting the production of spatial language in American Sign Language as expressed by classifier constructions, in which handshape indicates object type and the location/motion of the hand iconically depicts the location/motion of a referent object. Deaf native signers performed a picture description task in which they overtly named objects or produced classifier constructions that varied in location, motion, or object type. In contrast to the expression of location and motion, the production of both lexical signs and object type classifier morphemes engaged left inferior frontal cortex and left inferior temporal cortex, supporting the hypothesis that unlike the location and motion components of a classifier construction, classifier handshapes are categorical morphemes that are retrieved via left hemisphere language regions. In addition, lexical signs engaged the anterior temporal lobes to a greater extent than classifier constructions, which we suggest reflects increased semantic processing required to name individual objects compared with simply indicating the type of object. Both location and motion classifier constructions engaged bilateral superior parietal cortex, with some evidence that the expression of static locations differentially engaged the left intraparietal sulcus. We argue that bilateral parietal activation reflects the biological underpinnings of sign language. To express spatial information, signers must transform visual–spatial representations into a body-centered reference frame and reach toward target locations within signing space.
APA, Harvard, Vancouver, ISO, and other styles
42

Waters, Dafydd, Ruth Campbell, Cheryl M. Capek, et al. "Corrigendum to “Fingerspelling, signed language, text and picture processing in deaf native signers: The role of the mid-fusiform gyrus” [NeuroImage 35 (2007) 1287–1302]." NeuroImage 40, no. 2 (2008): 984–86. http://dx.doi.org/10.1016/j.neuroimage.2007.12.023.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Jones, Emily, Julie Brown, and Chorong Oh. "Cognitive resource allocation in deaf individuals: Any implications for injurious falls?" Hearing Balance and Communication 22, no. 4 (2024): 129–37. https://doi.org/10.4103/hbc.hbc_28_24.

Full text
Abstract:
Background: This mixed-methods pilot study was designed to investigate the possible risk of injurious falls in American Sign Language (ASL) users. Sign language is highly cognitive, yet the impacts of concurrent cognitive load among ASL users are unclear. Research on the dual task of walking and signing may provide functional and critical information to prevent injurious falls in the Deaf community, where ASL is a primary language. The current study was designed to investigate the potential fall risks due to the simultaneous activity of walking and signing among ASL users. Methods: Five ASL users (4 native ASL users and 1 ASL interpreter) participated in the study. Participants completed three walking tasks, each under a different cognitive load: walking without signing, walking while retelling the Cinderella story using ASL, and walking while having a conversation in ASL. Four gait parameters (functional ambulation profile [FAP], velocity, stride length [SL], and double support time [DST]) were measured using the GAITRite© Portable Walkway System from CIR Systems, Inc. Following the walking tasks, a semi-structured interview was conducted to understand participants’ subjective perception of potential fall risks associated with the dual task. Results: A one-way analysis of variance with the within-subject factor of cognitive load and Tukey post hoc tests revealed that participants’ gait stability decreased as the cognitive load increased, as evidenced by significant decreases in FAP scores, SL, and velocity, and an increase in DST. A qualitative thematic analysis of the interviews identified three themes: walking and signing is a common experience in the Deaf community; signers maintain compensatory strategies; and cultural rules to minimize fall risks while walking and signing exist. Discussion: While the sample size was limited, objective gait deterioration was found in all participants, which implies increased fall risk. In addition, participants described cultural rules and strategies consistent with the presence of such gait changes. The results from this pilot study suggest that while the risk of falls is underperceived, the impact of such an occurrence is felt throughout this sample of Deaf community members. Further investigations in this area are warranted. Conclusion: This pilot study reveals that walking and signing concurrently may increase fall risk among ASL users, as evidenced by objective gait deterioration and participants’ noted compensatory strategies. These findings emphasize the need for greater awareness and further research on fall prevention approaches within the Deaf community.
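A hedged sketch of the within-subject comparison reported above, applied to one gait outcome across the three walking conditions. The data file and column names are assumptions; the Tukey step ignores the repeated-measures structure and is included only as an illustration of the post hoc comparison.

```python
# Sketch under stated assumptions: one-way repeated-measures ANOVA on gait data.
import pandas as pd
from statsmodels.stats.anova import AnovaRM
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# One row per participant x condition; condition is one of
# 'walk_only', 'walk_retell_asl', 'walk_converse_asl'.
gait = pd.read_csv("gait_by_condition.csv")

# Within-subject ANOVA with cognitive load (condition) as the repeated factor.
rm = AnovaRM(gait, depvar="stride_length_cm", subject="participant",
             within=["condition"]).fit()
print(rm)

# Pairwise follow-up comparisons across conditions (illustrative only).
print(pairwise_tukeyhsd(gait["stride_length_cm"], gait["condition"]))
```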
APA, Harvard, Vancouver, ISO, and other styles
44

Matchin, William, Deniz İlkbaşaran, Marla Hatrak, et al. "The Cortical Organization of Syntactic Processing Is Supramodal: Evidence from American Sign Language." Journal of Cognitive Neuroscience 34, no. 2 (2022): 224–35. http://dx.doi.org/10.1162/jocn_a_01790.

Full text
Abstract:
Areas within the left-lateralized neural network for language have been found to be sensitive to syntactic complexity in spoken and written language. Previous research has revealed that these areas are active for sign language as well, but whether these areas are specifically responsive to syntactic complexity in sign language independent of lexical processing has yet to be established. To investigate this question, we used fMRI to neuroimage deaf native signers' comprehension of 180 sign strings in American Sign Language (ASL) with a picture-probe recognition task. The ASL strings were all six signs in length but varied at three levels of syntactic complexity: sign lists, two-word sentences, and complex sentences. Syntactic complexity significantly affected comprehension and memory, both behaviorally and neurally, by facilitating accuracy and response time on the picture-probe recognition task and eliciting a left lateralized activation response pattern in anterior and posterior superior temporal sulcus (aSTS and pSTS). Minimal or absent syntactic structure reduced picture-probe recognition and elicited activation in bilateral pSTS and occipital-temporal cortex. These results provide evidence from a sign language, ASL, that the combinatorial processing of anterior STS and pSTS is supramodal in nature. The results further suggest that the neurolinguistic processing of ASL is characterized by overlapping and separable neural systems for syntactic and lexical processing.
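An illustrative nilearn sketch of a syntactic-complexity contrast of the sort described above (complex sentences versus unstructured sign lists). The image file, events table, and condition labels are hypothetical; this is not the authors' analysis code.

```python
# Sketch under stated assumptions: first-level GLM and a complexity contrast.
import pandas as pd
from nilearn.glm.first_level import FirstLevelModel

# Minimal events table: one row per six-sign stimulus, labeled by complexity.
events = pd.DataFrame({
    "onset":      [0.0, 12.0, 24.0, 36.0],
    "duration":   [6.0, 6.0, 6.0, 6.0],
    "trial_type": ["sign_list", "two_word", "complex", "sign_list"],
})

model = FirstLevelModel(t_r=2.0, hrf_model="spm", smoothing_fwhm=6)
model = model.fit("sub-01_task-asl_bold.nii.gz", events=events)  # hypothetical run

# Contrast complex sentences against sign lists and save the z-map.
z_map = model.compute_contrast("complex - sign_list", output_type="z_score")
z_map.to_filename("complex_minus_list_zmap.nii.gz")
```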
APA, Harvard, Vancouver, ISO, and other styles
45

Fischer, Susan D., Lorraine A. Delhorne, and Charlotte M. Reed. "Effects of Rate of Presentation on the Reception of American Sign Language." Journal of Speech, Language, and Hearing Research 42, no. 3 (1999): 568–82. http://dx.doi.org/10.1044/jslhr.4203.568.

Full text
Abstract:
Previous research on the visual reception of fingerspelled English suggests that communication rates are limited primarily by constraints on production. Studies of artificially accelerated fingerspelling indicate that reception of fingerspelled sentences is highly accurate for rates up to 2 to 3 times those that can be produced naturally. The current paper reports on the results of a comparable study of the reception of American Sign Language (ASL). Fourteen native deaf ASL signers participated in an experiment in which videotaped productions of isolated ASL signs or ASL sentences were presented at normal playback speed and at speeds of 2, 3, 4, and 6 times normal speed. For isolated signs, identification scores decreased from 95% correct to 46% correct across the range of rates that were tested; for sentences, the ability to identify key signs decreased from 88% to 19% over the range of rates tested. The results indicate a breakdown in processing at around 2.5–3 times the normal rate as evidenced both by a substantial drop in intelligibility in this region and by a shift in error patterns away from semantic and toward formational. These results parallel those obtained in previous studies of the intelligibility of the auditory reception of time-compressed speech and the visual reception of accelerated fingerspelling. Taken together, these results suggest a modality-independent upper limit to language processing.
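One simple way to approximate the accelerated presentation rates used in studies like this is to subsample video frames; the sketch below is an assumption for illustration (the original study used videotaped stimuli and playback-speed manipulation), and the file names are placeholders.

```python
# Sketch under stated assumptions: speed up a stimulus by keeping every Nth frame.
import cv2

def accelerate(in_path: str, out_path: str, factor: int) -> None:
    """Write a copy of the video that keeps every `factor`-th frame."""
    cap = cv2.VideoCapture(in_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"),
                             fps, (width, height))
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % factor == 0:
            writer.write(frame)
        index += 1
    cap.release()
    writer.release()

# Produce the 2x, 3x, 4x, and 6x versions used in the rate manipulation.
for speed in (2, 3, 4, 6):
    accelerate("asl_sentence.mp4", f"asl_sentence_{speed}x.mp4", speed)
```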
APA, Harvard, Vancouver, ISO, and other styles
46

Lewin, Donna, and Adam C. Schembri. "Mouth gestures in British Sign Language." Nonmanuals in Sign Language 14, no. 1 (2011): 94–114. http://dx.doi.org/10.1075/sll.14.1.06lew.

Full text
Abstract:
This article investigates the claim that tongue protrusion (‘th’) acts as a nonmanual adverbial morpheme in British Sign Language (BSL) (Brennan 1992; Sutton-Spence & Woll 1999), drawing on narrative data produced by two deaf native signers as part of the European Cultural Heritage Online (ECHO) corpus. Data from ten BSL narratives have been analysed to observe the frequency and form of tongue protrusion. The results from this preliminary investigation indicate tongue protrusion occurs as part of the phonological formation of lexical signs (i.e., ‘echo phonology’, see Woll 2001), as well as a separate meaningful unit that co-occurs (sometimes as part of constructed action) with classifier constructions and lexical verb signs. In the latter cases, the results suggest ‘th’ sometimes appears to function as an adverbial morpheme in BSL, but with a greater variety of meanings than previously suggested in the BSL literature. One use of the adverbial appears similar to a nonmanual signal in American Sign Language described by Liddell (1980), although the form of the mouth gesture in our BSL data differs from what is reported in Liddell’s work. Thus, these findings suggest the mouth gesture ‘th’ in BSL has a broad range of functions. Some uses of tongue protrusion, however, remain difficult to categorise and further research with a larger dataset is needed.
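A toy sketch of the kind of frequency count the study reports: how often the 'th' mouth gesture co-occurs with different manual sign types across narratives. The export format and column names are hypothetical, not the ECHO corpus schema.

```python
# Sketch under stated assumptions: tally 'th' mouth gestures by sign type.
import pandas as pd

ann = pd.read_csv("bsl_annotations.csv")
# Assumed columns: narrative, sign_type ('lexical', 'classifier',
# 'constructed_action'), mouth_gesture ('th', 'mm', ..., or empty).
th = ann[ann["mouth_gesture"] == "th"]

print(th.groupby("narrative").size())   # frequency of 'th' per narrative
print(th["sign_type"].value_counts())   # which sign types 'th' accompanies
```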
APA, Harvard, Vancouver, ISO, and other styles
47

Буркова, Светлана Игоревна. "THE ROLE OF VISUAL MODALITY IN LANGUAGE VITALITY AND MAINTENANCE." Tomsk Journal of Linguistics and Anthropology, no. 3(33) (November 28, 2021): 19–30. http://dx.doi.org/10.23951/2307-6119-2021-3-19-30.

Full text
Abstract:
The paper uses Russian Sign Language (RSL) to show that the tools developed for assessing the vitality and maintenance of spoken languages are not entirely suitable for sign languages. For example, if the vitality of RSL is assessed on the six-point scale of the “nine factors” system proposed by UNESCO (Language vitality…, 2003) and used in the Atlas of the World's Languages in Danger, the score would be no more than 3 points; in other words, RSL would be characterized as an endangered language. It is an unwritten language, mainly used in everyday communication; it exists in the environment of the functionally far more powerful spoken Russian; the overwhelming majority of RSL signers are bilinguals who use spoken Russian to some degree, at least in its written form; most deaf children acquire RSL not in the family, from birth, but later in life, at kindergartens or schools; the conditions of RSL acquisition affect signers’ language proficiency; the surrounding spoken Russian influences RSL’s lexicon and grammar; and RSL remains insufficiently studied and poorly documented. In reality, however, RSL is stably maintained under these conditions and has recently even expanded its vocabulary and spheres of use. The main factor that supports the maintenance of a sign language, and that is not taken into account in existing tools for assessing language vitality, is the modality in which the language exists. Because the auditory modality is inaccessible or poorly accessible to deaf people, they cannot completely shift to a spoken language; the visual modality remains the most natural for their communication. In addition, modern means of communication and the internet provide further opportunities for the maintenance and development of the language in the visual modality.
APA, Harvard, Vancouver, ISO, and other styles
48

Capek, Cheryl M., Dafydd Waters, Bencie Woll, et al. "Hand and Mouth: Cortical Correlates of Lexical Processing in British Sign Language and Speechreading English." Journal of Cognitive Neuroscience 20, no. 7 (2008): 1220–34. http://dx.doi.org/10.1162/jocn.2008.20084.

Full text
Abstract:
Spoken languages use one set of articulators—the vocal tract, whereas signed languages use multiple articulators, including both manual and facial actions. How sensitive are the cortical circuits for language processing to the particular articulators that are observed? This question can only be addressed with participants who use both speech and a signed language. In this study, we used functional magnetic resonance imaging to compare the processing of speechreading and sign processing in deaf native signers of British Sign Language (BSL) who were also proficient speechreaders. The following questions were addressed: To what extent do these different language types rely on a common brain network? To what extent do the patterns of activation differ? How are these networks affected by the articulators that languages use? Common peri-sylvian regions were activated both for speechreading English words and for BSL signs. Distinctive activation was also observed reflecting the language form. Speechreading elicited greater activation in the left mid-superior temporal cortex than BSL, whereas BSL processing generated greater activation at the temporo-parieto-occipital junction in both hemispheres. We probed this distinction further within BSL, where manual signs can be accompanied by different types of mouth action. BSL signs with speech-like mouth actions showed greater superior temporal activation, whereas signs made with non-speech-like mouth actions showed more activation in posterior and inferior temporal regions. Distinct regions within the temporal cortex are not only differentially sensitive to perception of the distinctive articulators for speech and for sign but also show sensitivity to the different articulators within the (signed) language.
APA, Harvard, Vancouver, ISO, and other styles
49

Haug, Tobias, Nivja de Jong, Franz Holzknecht, et al. "Development and validation of a fluency rating scale for Swiss German Sign Language." Frontiers in Education 9 (December 3, 2024): 1466936. https://doi.org/10.3389/feduc.2024.1466936.

Full text
Abstract:
Introduction: Sign language fluency is an area that has received very little attention within research on sign language education and assessment. Therefore, we wanted to develop and validate a rating scale of fluency for Swiss German Sign Language (Deutschschweizerische Gebärdensprache, DSGS). Methods: Different kinds of data were collected to inform the rating scale development. The data were from (1) focus group interviews with sign language teachers (N = 3); (2) annotated DSGS data from users/learners with various levels of proficiency (i.e., deaf native signers of DSGS, hearing sign language interpreters, and beginning learners of DSGS, approximately CEFR level A1-A2) (N = 28) who completed different signing tasks that were manipulated by preparation time; and (3) feedback from raters (N = 3); these sources were (4) complemented with theory on spoken and sign language fluency. Results: In the focus group interviews, sign language teachers identified a number of fluency aspects. The annotated DSGS data were analyzed using different regression models to see how language background and preparation time for the tasks can predict aspects of fluency (e.g., number and duration of pauses). Whereas preparation time showed only a slight effect in the annotated data, language background predicted the occurrence of fluency features that also informed the scale development. The resulting rating scale consisted of six criteria, each on a six-point scale. DSGS performances (N = 162) (the same as the annotated data) from the different groups of DSGS users/learners were rated by three raters. The rated data were analyzed using multi-facet Rasch measurement. Overall, the rating scale functioned well, with each score category being modal at some point on the continuum. Results from correlation and regression analyses of the annotated data and rated DSGS performances complemented the validity evidence for the rating scale. Discussion: We argue that the different sources of data serve as a sound empirical basis for the operationalized “DSGS fluency construct” in the rating scale. The results of the analyses relating performance data to ratings provide strong validity evidence for the second version of the rating scale. Together, the objective fluency measures explained 88% of the variance in the rating scores.
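A hedged sketch of relating objective fluency measures to rating-scale scores, in the spirit of the regression analyses summarized above. The predictor names and data file are placeholders rather than the study's variables.

```python
# Sketch under stated assumptions: regress mean ratings on objective measures.
import pandas as pd
import statsmodels.formula.api as smf

perf = pd.read_csv("dsgs_fluency_measures.csv")
# Assumed columns: mean_rating, pauses_per_min, mean_pause_dur_s, signs_per_min.
model = smf.ols("mean_rating ~ pauses_per_min + mean_pause_dur_s + signs_per_min",
                data=perf).fit()

# The R-squared here plays the role of the "variance explained" figure cited above.
print(model.summary())
```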
APA, Harvard, Vancouver, ISO, and other styles
50

Quinto-Pozos, David, and Frances Cooley. "A Developmental Disorder of Signed Language Production in a Native Deaf Signer of ASL." Languages 5, no. 4 (2020): 40. http://dx.doi.org/10.3390/languages5040040.

Full text
Abstract:
Evidence for a Developmental Language Disorder (DLD) could surface with language processing/comprehension, language production, or a combination of both. Whereas various studies have described cases of DLD in signing deaf children, there exist few detailed examples of deaf children who exhibit production issues in the absence of processing or comprehension challenges or motor deficits. We describe such a situation by detailing a case study of “Gregory”, a deaf native signer of American Sign Language (ASL). We adopt a detailed case-study methodology for obtaining information from Gregory’s family and school, which we combine with linguistic and non-linguistic data that we collected through one-on-one sessions with Gregory. The results provide evidence of persistent issues with language production (in particular, atypical articulation of some phonological aspects of signs), yet typical comprehension skills and unremarkable fine motor skills. We also provide a snapshot of Gregory’s rich linguistic environment, which, we speculate, may serve to attenuate his production deficit. The results of this study have implications for the provision of language services for signing deaf children in schools and also for language therapists. We propose that language therapists who are fluent in signed language be trained to work with signing children.
APA, Harvard, Vancouver, ISO, and other styles