Academic literature on the topic 'Language resource'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Language resource.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
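
As an illustration, the first journal article in the list below would come out roughly as follows in three common styles:

APA: Lin, D., Murakami, Y., & Ishida, T. (2020). Towards language service creation and customization for low-resource languages. Information, 11(2), 67. https://doi.org/10.3390/info11020067
MLA: Lin, Donghui, et al. "Towards Language Service Creation and Customization for Low-Resource Languages." Information, vol. 11, no. 2, 2020, p. 67.
Chicago: Lin, Donghui, Yohei Murakami, and Toru Ishida. "Towards Language Service Creation and Customization for Low-Resource Languages." Information 11, no. 2 (2020): 67.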

Journal articles on the topic "Language resource"

1

Lin, Donghui, Yohei Murakami, and Toru Ishida. "Towards Language Service Creation and Customization for Low-Resource Languages." Information 11, no. 2 (January 27, 2020): 67. http://dx.doi.org/10.3390/info11020067.

Abstract:
The most challenging issue with low-resource languages is the difficulty of obtaining enough language resources. In this paper, we propose a language service framework for low-resource languages that enables the automatic creation and customization of new resources from existing ones. To achieve this goal, we first introduce a service-oriented language infrastructure, the Language Grid; it realizes new language services by supporting the sharing and combining of language resources. We then show the applicability of the Language Grid to low-resource languages. Furthermore, we describe how we can now realize the automation and customization of language services. Finally, we illustrate our design concept by detailing a case study of automating and customizing bilingual dictionary induction for low-resource Turkic languages and Indonesian ethnic languages.
2

Zinn, Claus. "The Language Resource Switchboard." Computational Linguistics 44, no. 4 (December 2018): 631–39. http://dx.doi.org/10.1162/coli_a_00329.

Abstract:
The CLARIN research infrastructure gives users access to an increasingly rich and diverse set of language-related resources and tools. Whereas there is ample support for searching resources using metadata-based search, or full-text search, or for aggregating resources into virtual collections, there is little support for users to help them process resources in one way or another. In spite of the large number of tools that process texts in many different languages, there is no single point of access where users can find tools to fit their needs and the resources they have. In this squib, we present the Language Resource Switchboard (LRS), which helps users to discover tools that can process their resources. For this, the LRS identifies all applicable tools for a given resource, lists the tasks the tools can achieve, and invokes the selected tool in such a way that processing can start immediately with little or no prior tool parameterization.
3

Santos, André C., Luís D. Pedrosa, Martijn Kuipers, and Rui M. Rocha. "Resource Description Language: A Unified Description Language for Network Embedded Resources." International Journal of Distributed Sensor Networks 8, no. 8 (January 2012): 860864. http://dx.doi.org/10.1155/2012/860864.

4

Rickerson, Earl. "Language Resource Center." IALLT Journal of Language Learning Technologies 29, no. 1 (January 1, 1996): 25–34. http://dx.doi.org/10.17161/iallt.v29i1.9605.

5

Lee, Chanhee, Kisu Yang, Taesun Whang, Chanjun Park, Andrew Matteson, and Heuiseok Lim. "Exploring the Data Efficiency of Cross-Lingual Post-Training in Pretrained Language Models." Applied Sciences 11, no. 5 (February 24, 2021): 1974. http://dx.doi.org/10.3390/app11051974.

Abstract:
Language model pretraining is an effective method for improving the performance of downstream natural language processing tasks. Even though language modeling is unsupervised and thus collecting data for it is relatively less expensive, it is still a challenging process for languages with limited resources. This results in great technological disparity between high- and low-resource languages for numerous downstream natural language processing tasks. In this paper, we aim to make this technology more accessible by enabling data efficient training of pretrained language models. It is achieved by formulating language modeling of low-resource languages as a domain adaptation task using transformer-based language models pretrained on corpora of high-resource languages. Our novel cross-lingual post-training approach selectively reuses parameters of the language model trained on a high-resource language and post-trains them while learning language-specific parameters in the low-resource language. We also propose implicit translation layers that can learn linguistic differences between languages at a sequence level. To evaluate our method, we post-train a RoBERTa model pretrained in English and conduct a case study for the Korean language. Quantitative results from intrinsic and extrinsic evaluations show that our method outperforms several massively multilingual and monolingual pretrained language models in most settings and improves the data efficiency by a factor of up to 32 compared to monolingual training.
6

Tune, Kula Kekeba, and Vasudeva Varma. "Building CLIA for Resource-Scarce African Languages." International Journal of Information Retrieval Research 5, no. 1 (January 2015): 48–67. http://dx.doi.org/10.4018/ijirr.2015010104.

Abstract:
Since most of the existing major search engines and commercial Information Retrieval (IR) systems are primarily designed for well-resourced European and Asian languages, they have paid little attention to the development of Cross-Language Information Access (CLIA) technologies for resource-scarce African languages. This paper presents the authors' experience in building CLIA for indigenous African languages, with a special focus on the development and evaluation of Oromo-English-CLIR. The authors have adopted a knowledge-based query translation approach to design and implement their initial Oromo-English CLIR (OMEN-CLIR). Apart from designing and building the first OMEN-CLIR from scratch, another major contribution of this study is assessing the performance of the proposed retrieval system at one of the well-recognized international Cross-Language Evaluation Forums like the CLEF campaign. The overall performance of OMEN-CLIR was found to be very promising and encouraging, given the limited amount of linguistic resources available for severely under-resourced African languages like Afaan Oromo.
7

Ranasinghe, Tharindu, and Marcos Zampieri. "Multilingual Offensive Language Identification for Low-resource Languages." ACM Transactions on Asian and Low-Resource Language Information Processing 21, no. 1 (January 31, 2022): 1–13. http://dx.doi.org/10.1145/3457610.

Abstract:
Offensive content is pervasive in social media and a reason for concern to companies and government organizations. Several studies have been recently published investigating methods to detect the various forms of such content (e.g., hate speech, cyberbullying, and cyberaggression). The clear majority of these studies deal with English, partially because most annotated datasets available contain English data. In this article, we take advantage of available English datasets by applying cross-lingual contextual word embeddings and transfer learning to make predictions in low-resource languages. We project predictions on comparable data in Arabic, Bengali, Danish, Greek, Hindi, Spanish, and Turkish. We report results of 0.8415 F1 macro for Bengali in the TRAC-2 shared task [23], 0.8532 F1 macro for Danish and 0.8701 F1 macro for Greek in OffensEval 2020 [58], 0.8568 F1 macro for Hindi in the HASOC 2019 shared task [27], and 0.7513 F1 macro for Spanish in SemEval-2019 Task 5 (HatEval) [7], showing that our approach compares favorably to the best systems submitted to recent shared tasks on these languages. Additionally, we report competitive performance on Arabic and Turkish using the training and development sets of the OffensEval 2020 shared task. The results for all languages confirm the robustness of cross-lingual contextual embeddings and transfer learning for this task.
8

Rijhwani, Shruti, Jiateng Xie, Graham Neubig, and Jaime Carbonell. "Zero-Shot Neural Transfer for Cross-Lingual Entity Linking." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 6924–31. http://dx.doi.org/10.1609/aaai.v33i01.33016924.

Abstract:
Cross-lingual entity linking maps an entity mention in a source language to its corresponding entry in a structured knowledge base that is in a different (target) language. While previous work relies heavily on bilingual lexical resources to bridge the gap between the source and the target languages, these resources are scarce or unavailable for many low-resource languages. To address this problem, we investigate zero-shot cross-lingual entity linking, in which we assume no bilingual lexical resources are available in the source low-resource language. Specifically, we propose pivot-based entity linking, which leverages information from a high-resource "pivot" language to train character-level neural entity linking models that are transferred to the source low-resource language in a zero-shot manner. With experiments on 9 low-resource languages and transfer through a total of 54 languages, we show that our proposed pivot-based framework improves entity linking accuracy 17% (absolute) on average over the baseline systems for the zero-shot scenario. Further, we also investigate the use of language-universal phonological representations, which improves average accuracy (absolute) by 36% when transferring between languages that use different scripts.
9

McGroarty, Mary. "Home language: Refuge, resistance, resource?" Language Teaching 45, no. 1 (January 27, 2011): 89–104. http://dx.doi.org/10.1017/s0261444810000558.

Abstract:
This presentation builds on the concept of orientations to languages other than English in the US first suggested by Ruíz (1984). Using examples from recent ethnographic, sociolinguistic, and policy-related investigations undertaken principally in North America, the discussion explores possible connections between individual and group language identities. It demonstrates that orientations to languages are dynamic inside and outside speech communities, varying across time and according to multiple contextual factors, including the history and size of local bilingual groups along with the impact of contemporary economic and political conditions. Often the conceptions of multiple languages reflected in policy and pedagogy oversimplify the complexity documented by research and raise questions for teaching practice.
10

Rallo, John A. "Foreign Language Resource Center." IALLT Journal of Language Learning Technologies 4, no. 3 (January 17, 2019): 14–22. http://dx.doi.org/10.17161/iallt.v4i3.8760.

More sources

Dissertations / Theses on the topic "Language resource"

1

Cardillo, Eileen Robin. "Resource limitation approaches to language comprehension." Thesis, University of Oxford, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.418563.

2

Loza, Christian. "Cross Language Information Retrieval for Languages with Scarce Resources." Thesis, University of North Texas, 2009. https://digital.library.unt.edu/ark:/67531/metadc12157/.

Abstract:
Our generation has experienced one of the most dramatic changes in how society communicates. Today, we have online information on almost any imaginable topic. However, most of this information is available in only a few dozen languages. In this thesis, I explore the use of parallel texts to enable cross-language information retrieval (CLIR) for languages with scarce resources. To build the parallel text I use the Bible. I evaluate different variables and their impact on the resulting CLIR system, specifically: (1) the CLIR results when using different amounts of parallel text; (2) the role of paraphrasing on the quality of the CLIR output; (3) the impact on accuracy when translating the query versus translating the collection of documents; and finally (4) how the results are affected by the use of different dialects. The results show that all these variables have a direct impact on the quality of the CLIR system.
3

Jansson, Herman. "Low-resource Language Question Answering System with BERT." Thesis, Mittuniversitetet, Institutionen för informationssystem och –teknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-42317.

Abstract:
The complexity of staying at the forefront of information retrieval systems is constantly increasing. A recent natural language processing technology called BERT has reached superhuman performance on reading comprehension tasks in high-resource languages. However, several researchers have stated that multilingual models are not enough for low-resource languages, since they lack a thorough understanding of those languages. Recently, a Swedish pre-trained BERT model was introduced that is trained on significantly more Swedish data than the multilingual models currently available. This study compares multilingual and Swedish monolingual BERT-based models for question answering, fine-tuning each on both an English and a Swedish machine-translated SQuADv2 data set. The models are evaluated on the SQuADv2 benchmark and within an implemented question answering system built upon the classical retriever-reader methodology. The study introduces a naive and a more robust prediction method for the proposed question answering system, as well as finding a sweet spot for each individual model approach integrated into the system. The question answering system is evaluated and compared against another question answering library at the leading edge within the area, applying a custom-crafted Swedish evaluation data set. The results show that the fine-tuned model based on the Swedish pre-trained model and the Swedish SQuADv2 data set was superior in all evaluation metrics except speed. The comparison between the different systems resulted in a higher evaluation score but a slower prediction time for this study's system.
4

Chavula, Catherine. "Using language similarities in retrieval for resource scarce languages: a study of several southern Bantu languages." Doctoral thesis, Faculty of Science, 2021. http://hdl.handle.net/11427/33614.

Abstract:
Most of the Web is published in languages that are not accessible to many potential users who are only able to read and understand their local languages. Many of these local languages are Resource Scarce Languages (RSLs) and lack the necessary resources, such as machine translation tools, to make available content more accessible. State-of-the-art preprocessing tools and retrieval methods are tailored to Web-dominant languages and, accordingly, documents written in RSLs are ranked low and difficult to access in search results, resulting in a struggling and frustrating search experience for speakers of RSLs. In this thesis, we propose the use of language similarities to match, re-rank and return search results written in closely related languages to improve the quality of search results and user experience. We also explore the use of shared morphological features to build multilingual stemming tools. Focusing on six Bantu languages spoken in Southeastern Africa, we first explore how users would interact with search results written in related languages. We conduct a user study, examining the usefulness and user preferences for ranking search results with different levels of intelligibility, and the types of emotions users experience when interacting with such results. Our results show that users can complete tasks using related language search results but, as intelligibility decreases, more users struggle to complete search tasks and, consequently, experience negative emotions. Concerning ranking, we find that users prefer that relevant documents be ranked higher, and that intelligibility be used as a secondary criterion. Additionally, we use a User-Centered Design (UCD) approach to investigate enhanced interface features that could assist users to effectively interact with such search results. Usability evaluation of our designed interface scored 86% using the System Usability Scale (SUS). We then investigate whether ranking models that integrate relevance and intelligibility features would improve retrieval effectiveness. We develop these features by drawing from traditional Information Retrieval (IR) models and linguistics studies, and employ Learning To Rank (LTR) and unsupervised methods. Our evaluation shows that models that use both relevance and intelligibility feature(s) have better performance when compared to models that use relevance features only. Finally, we propose and evaluate morphological processing approaches that include multilingual stemming, using rules derived from common morphological features across the Bantu family of languages. Our evaluation of the proposed stemming approach shows that its performance is competitive on queries that use general terms. Overall, the thesis provides evidence that considering and matching search results written in closely related languages, as well as ranking and presenting them appropriately, improves the quality of retrieval and user experience for speakers of RSLs.
5

Kolak, Okan. "Rapid resource transfer for multilingual natural language processing." College Park, Md. : University of Maryland, 2005. http://hdl.handle.net/1903/3182.

Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2005.
Thesis research directed by: Dept. of Linguistics. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
6

Zhang, Yuan Ph D. Massachusetts Institute of Technology. "Transfer learning for low-resource natural language analysis." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/108847.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 131-142).
Expressive machine learning models such as deep neural networks are highly effective when they can be trained with large amounts of in-domain labeled training data. While such annotations may not be readily available for the target task, it is often possible to find labeled data for another related task. The goal of this thesis is to develop novel transfer learning techniques that can effectively leverage annotations in source tasks to improve performance of the target low-resource task. In particular, we focus on two transfer learning scenarios: (1) transfer across languages and (2) transfer across tasks or domains in the same language. In multilingual transfer, we tackle challenges from two perspectives. First, we show that linguistic prior knowledge can be utilized to guide syntactic parsing with little human intervention, by using a hierarchical low-rank tensor method. In both unsupervised and semi-supervised transfer scenarios, this method consistently outperforms state-of-the-art multilingual transfer parsers and the traditional tensor model across more than ten languages. Second, we study lexical-level multilingual transfer in low-resource settings. We demonstrate that only a few (e.g., ten) word translation pairs suffice for an accurate transfer for part-of-speech (POS) tagging. Averaged across six languages, our approach achieves a 37.5% improvement over the monolingual top-performing method when using a comparable amount of supervision. In the second monolingual transfer scenario, we propose an aspect-augmented adversarial network that allows aspect transfer over the same domain. We use this method to transfer across different aspects in the same pathology reports, where traditional domain adaptation approaches commonly fail. Experimental results demonstrate that our approach outperforms different baselines and model variants, yielding a 24% gain on this pathology dataset.
7

Kimutis, Michelle T. "Bilingual Education: A Resource for Teachers." Miami University Honors Theses / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=muhonors1302698144.

8

Zouhair, Taha. "Automatic Speech Recognition for low-resource languages using Wav2Vec2 : Modern Standard Arabic (MSA) as an example of a low-resource language." Thesis, Högskolan Dalarna, Institutionen för information och teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:du-37702.

Abstract:
The need for fully automatic translation at DigitalTolk, a Stockholm-based company providing translation services, led to exploring Automatic Speech Recognition as a first step for Modern Standard Arabic (MSA). Facebook AI recently released a second version of its Wav2Vec models, dubbed Wav2Vec 2.0, which uses deep neural networks and provides several English pretrained models along with a multilingual model trained on 53 different languages, referred to as the Cross-Lingual Speech Representation (XLSR-53). The small English and the XLSR-53 pretrained models were tested on the Arabic data from Mozilla Common Voice, and the results stemming from them are discussed. In this research, the small model did not yield any results and may have needed more unlabelled data to train, whereas the large model proved successful in predicting the audio recordings in Arabic, achieving an unprecedented Word Error Rate of 24.40%. The small model turned out to be unsuitable for training, especially on languages other than English where the unlabelled data is insufficient. On the other hand, the large model gave very promising results despite the low amount of data. The large model should be the model of choice for any future training on low-resource languages such as Arabic.
9

Packham, Sean. "Crowdsourcing a text corpus for a low resource language." Master's thesis, University of Cape Town, 2016. http://hdl.handle.net/11427/20436.

Abstract:
Low-resourced languages, such as South Africa's isiXhosa, have a limited number of digitised texts, making it challenging to build language corpora and the information retrieval services, such as search and translation, that depend on them. Researchers have been unable to assemble isiXhosa corpora of sufficient size and quality to produce working machine translation systems; it has been acknowledged that there is little to no training data, and sourcing translations from professionals can be a costly process. A crowdsourcing translation game which paid participants for their contributions was proposed as a solution to source original and relevant parallel corpora for low-resource languages such as isiXhosa. The objective of this dissertation is to report on the four experiments that were conducted to assess user motivation and contribution quantity under various scenarios using the developed crowdsourcing translation game. The first experiment was a pilot study to test a custom-built system and to find out if social network users would volunteer to participate in a translation game for free. The second experiment tested multiple payment schemes with users from the University of Cape Town. The schemes rewarded users with consistent, increasing or decreasing amounts for subsequent contributions. Experiment 3 tested whether the same users from Experiment 2 would continue contributing if payments were taken away. The last experiment tested a payment scheme that did not offer a direct and guaranteed reward. Users were paid based on their leaderboard placement, and only a limited number of the top leaderboard spots were allocated rewards. Experiments 1 and 3 showed that people do not volunteer without financial incentives; experiments 2 and 4 showed that people want increased rewards when putting in increased effort; experiment 3 also showed that people will not continue contributing if the financial incentives are taken away; and experiment 4 showed that the possibility of incentives is as attractive as offering guaranteed incentives.
10

Louvan, Samuel. "Low-Resource Natural Language Understanding in Task-Oriented Dialogue." Doctoral thesis, Università degli studi di Trento, 2022. http://hdl.handle.net/11572/333813.

Abstract:
Task-oriented dialogue (ToD) systems need to interpret the user's input to understand the user's needs (intent) and corresponding relevant information (slots). This process is performed by a Natural Language Understanding (NLU) component, which maps the text utterance into a semantic frame representation, involving two subtasks: intent classification (text classification) and slot filling (sequence tagging). Typically, new domains and languages are regularly added to the system to support more functionalities. Collecting domain-specific data and performing fine-grained annotation of large amounts of data every time a new domain and language is introduced can be expensive. Thus, developing an NLU model that generalizes well across domains and languages with less labeled data (low-resource) is crucial and remains challenging. This thesis focuses on investigating transfer learning and data augmentation methods for low-resource NLU in ToD. Our first contribution is a study of the potential of non-conversational text as a source for transfer. Most transfer learning approaches assume labeled conversational data as the source task and adapt the NLU model to the target task. We show that leveraging similar tasks from non-conversational text improves performance on target slot filling tasks through multi-task learning in low-resource settings. Second, we propose a set of lightweight augmentation methods that apply data transformation on token and sentence levels through slot value substitution and syntactic manipulation. Despite its simplicity, the performance is comparable to deep learning-based augmentation models, and it is effective on six languages on NLU tasks. Third, we investigate the effectiveness of domain adaptive pre-training for zero-shot cross-lingual NLU. In terms of overall performance, continued pre-training in English is effective across languages. This result indicates that the domain knowledge learned in English is transferable to other languages. In addition to that, domain similarity is essential. We show that intermediate pre-training data that is more similar – in terms of data distribution – to the target dataset yields better performance.
More sources

Books on the topic "Language resource"

1

Oral language resource book. Portsmouth, NH: Heinemann, 1994.

2

Leanne, Allen, Dewsbury Alison, and Western Australia. Education Department., eds. Oral language resource book. Melbourne: Longman Australia on behalf of the Education Department of Western Australia, 1994.

3

Leanne, Allen, and Western Australia. Education Department., eds. Oral language: Resource book. Port Melbourne: Rigby Heinemann on behalf of the Education Department of Western Australia, 1997.

4

Traci, Jacobson, and Rider Teri, eds. Sign language classroom resource. Oceanside, CA: Academic Communication Associates, 1992.

5

Canadian Legal Information Centre. Plain Language Centre. Plain Language Resource Centre catalogue. [Ottawa: Multiculturalism and Citizenship Canada], 1992.

6

K, Das Bikram, ed. Language education in human resource development. Singapore: SEAMEO Regional Language Centre, 1987.

7

The language of literature: Grade 10: Unit Six Resource book. Evanston, Ill: McDougal Littell, 2000.

8

Unit Four Resource Book: The language of literature: Grade 10. Evanston, Ill: McDougal Littell, 2000.

9

Bianco, Joseph Lo. Language and literacy: Australia's fundamental resource. [Canberra, Australia?]: National Board of Employment, Education and Training, 1997.

10

A documentation of Rajbanshi language resource. Siliguril: N.L. Publishers, 2011.

More sources

Book chapters on the topic "Language resource"

1

Raman, Rajesh, Marvin Solomon, Miron Livny, and Alain Roy. "The Classads Language." In Grid Resource Management, 255–70. Boston, MA: Springer US, 2004. http://dx.doi.org/10.1007/978-1-4615-0509-9_17.

2

Palakodety, Shriphani, Ashiqur R. KhudaBukhsh, and Guha Jayachandran. "Language Identification." In Low Resource Social Media Text Mining, 27–40. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-5625-5_4.

3

Jayawardena, Dhammika. "The Language of HRM." In Critical Human Resource Management, 37–50. 1st ed. Routledge Studies in Human Resource Development. New York: Routledge, 2021. http://dx.doi.org/10.4324/9781003102434-4.

4

Muskens, Reinhard. "Language, Lambdas, and Logic." In Resource-Sensitivity, Binding and Anaphora, 23–54. Dordrecht: Springer Netherlands, 2003. http://dx.doi.org/10.1007/978-94-010-0037-6_2.

5

Johansson, Richard. "Chapter 7. NLP for resource building*." In Natural Language Processing, 169–90. Amsterdam: John Benjamins Publishing Company, 2021. http://dx.doi.org/10.1075/nlp.14.07joh.

6

Bahri, Afef, Rafik Bouaziz, and Faïez Gargouri. "Towards an Efficient Datalog Based Evaluation of the FSAQL Query Language." In Resource Discovery, 150–80. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-45263-5_7.

7

Mahboob, Ahmar, and Angel M. Y. Lin. "Local Languages as a Resource in (Language) Education." In Conceptual Shifts and Contextualized Practices in Education for Glocal Interaction, 197–217. Singapore: Springer Singapore, 2017. http://dx.doi.org/10.1007/978-981-10-6421-0_10.

8

Mariani, Joseph, and Gil Francopoulo. "Language Matrices and a Language Resource Impact Factor." In Language Production, Cognition, and the Lexicon, 441–71. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-08043-7_25.

9

Johnson, Janelle M., and Julia B. Richards. "Language Orientations in Guatemala: Toward Language as Resource?" In Honoring Richard Ruiz and his Work on Language Planning and Bilingual Education, edited by Nancy H. Hornberger, 316–37. Bristol, Blue Ridge Summit: Multilingual Matters, 2016. http://dx.doi.org/10.21832/9781783096701-024.

10

Turner, Marianne. "Language and Multilingualism." In Multilingualism as a Resource and a Goal, 19–43. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-21591-0_2.


Conference papers on the topic "Language resource"

1

Yang, Jian, Yuwei Yin, Shuming Ma, Dongdong Zhang, Zhoujun Li, and Furu Wei. "High-resource Language-specific Training for Multilingual Neural Machine Translation." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/619.

Abstract:
Multilingual neural machine translation (MNMT) trained in multiple language pairs has attracted considerable attention due to fewer model parameters and lower training costs by sharing knowledge among multiple languages. Nonetheless, multilingual training is plagued by language interference degeneration in shared parameters because of the negative interference among different translation directions, especially on high-resource languages. In this paper, we propose the multilingual translation model with the high-resource language-specific training (HLT-MT) to alleviate the negative interference, which adopts the two-stage training with the language-specific selection mechanism. Specifically, we first train the multilingual model only with the high-resource pairs and select the language-specific modules at the top of the decoder to enhance the translation quality of high-resource directions. Next, the model is further trained on all available corpora to transfer knowledge from high-resource languages (HRLs) to low-resource languages (LRLs). Experimental results show that HLT-MT outperforms various strong baselines on WMT-10 and OPUS-100 benchmarks. Furthermore, the analytic experiments validate the effectiveness of our method in mitigating the negative interference in multilingual training.
2

Sabiiti Bamutura, David. "Ry/Rk-Lex: A Computational Lexicon for Runyankore and Rukiga Languages." In Eighth Swedish Language Technology Conference (SLTC-2020), 25-27 November 2020. Linköping University Electronic Press, 2021. http://dx.doi.org/10.3384/ecp184169.

Abstract:
Current research in computational linguistics and NLP requires the existence of language resources. Whereas these resources are available for only a few well-resourced languages, many languages have been neglected. Among the neglected and/or under-resourced languages are Runyankore and Rukiga (henceforth referred to as Ry/Rk). In this paper, we report on Ry/Rk-Lex, a moderately large computational lexicon for Ry/Rk that we constructed from various existing data sources. Ry/Rk are two under-resourced Bantu languages with virtually no computational resources. About 9,400 lemmata have been entered so far. Ry/Rk-Lex has been enriched with syntactic and lexical semantic features, with the intent of providing a reference computational lexicon for Ry/Rk in (1) other NLP tasks, such as morphological analysis and generation, part-of-speech (POS) tagging, and named entity recognition (NER); and (2) applications, such as spell and grammar checking and cross-lingual information retrieval (CLIR). We have used Ry/Rk-Lex to dramatically increase the lexical coverage of previously developed computational resource grammars for Ry/Rk.
3

Feng, Xiaocheng, Xiachong Feng, Bing Qin, Zhangyin Feng, and Ting Liu. "Improving Low Resource Named Entity Recognition using Cross-lingual Knowledge Transfer." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/566.

Abstract:
Neural networks have been widely used for high-resource language (e.g. English) named entity recognition (NER) and have shown state-of-the-art results. However, for low-resource languages such as Dutch and Spanish, taggers tend to have lower performance due to limited resources and a lack of annotated data. To narrow this gap, we propose three novel strategies to enrich the semantic representations of low-resource languages: we first develop neural networks to improve low-resource word representations by knowledge transfer from a high-resource language using bilingual lexicons. Further, a lexicon extension strategy is designed to address the out-of-lexicon problem by automatically learning semantic projections. Thirdly, we regard word-level entity type distribution features as external language-independent knowledge and incorporate them into our neural architecture. Experiments on two low-resource languages (Dutch and Spanish) demonstrate the effectiveness of these additional semantic representations (average 4.8% improvement). Moreover, on the Chinese OntoNotes 4.0 dataset, our approach achieved an F-score of 83.07% with a 2.91% absolute gain compared to the state-of-the-art results.
4

Gandhe, Ankur, Florian Metze, and Ian Lane. "Neural network language models for low resource languages." In Interspeech 2014. ISCA: ISCA, 2014. http://dx.doi.org/10.21437/interspeech.2014-560.

5

Motlani, Raveesh. "Developing language technology tools and resources for a resource-poor language: Sindhi." In Proceedings of the NAACL Student Research Workshop. Stroudsburg, PA, USA: Association for Computational Linguistics, 2016. http://dx.doi.org/10.18653/v1/n16-2008.

6

Campi, A., and F. Callegati. "Network Resource Description Language." In 2009 IEEE Globecom Workshops. IEEE, 2009. http://dx.doi.org/10.1109/glocomw.2009.5360708.

7

Yu, Jiawei, and Jinsong Zhang. "Zero-resource Language Recognition." In 2019 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC). IEEE, 2019. http://dx.doi.org/10.1109/apsipaasc47483.2019.9023049.

8

Lalitha Devi, Sobha. "Resolving Pronouns for a Resource-Poor Language, Malayalam Using Resource-Rich Language, Tamil." In Recent Advances in Natural Language Processing. Incoma Ltd., Shoumen, Bulgaria, 2019. http://dx.doi.org/10.26615/978-954-452-056-4_072.

9

Isahara, Hitoshi. "Resource-based Natural Language Processing." In 2007 International Conference on Natural Language Processing and Knowledge Engineering. IEEE, 2007. http://dx.doi.org/10.1109/nlpke.2007.4368002.

10

Xu, Shipeng, Hongzhi Yu, Thomas Fang Zheng, Guanyu Li, and Gegeentana. "Language resource construction for Mongolian." In 2017 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC). IEEE, 2017. http://dx.doi.org/10.1109/apsipa.2017.8282132.


Reports on the topic "Language resource"

1

Gross, Thomas, and David O'Hallaron. Resource Management Under Language and Application Control (REMULAC). Fort Belvoir, VA: Defense Technical Information Center, February 2003. http://dx.doi.org/10.21236/ada412385.

2

Goldfine, Alan. Using the information resource dictionary system command language. Gaithersburg, MD: National Bureau of Standards, 1985. http://dx.doi.org/10.6028/nbs.ir.85-3165.

3

Rosenberg, J. Extensible Markup Language (XML) Formats for Representing Resource Lists. RFC Editor, May 2007. http://dx.doi.org/10.17487/rfc4826.

4

Goldfine, Alan. Using the Information Resource Dictionary System Command Language (second edition). Gaithersburg, MD: National Bureau of Standards, 1988. http://dx.doi.org/10.6028/nbs.ir.88-3701.

5

Goldfine, Alan, and Thomasin Kirkendall. The ICST-NBS Information Resource Dictionary System command language prototype. Gaithersburg, MD: National Bureau of Standards, 1988. http://dx.doi.org/10.6028/nbs.ir.88-3830.

6

Tesink, K., and R. Fox. A Uniform Resource Name (URN) Namespace for the Common Language Equipment Identifier (CLEI) Code. RFC Editor, August 2005. http://dx.doi.org/10.17487/rfc4152.

7

Garcia-Martin, M., and G. Camarillo. Extensible Markup Language (XML) Format Extension for Representing Copy Control Attributes in Resource Lists. RFC Editor, October 2008. http://dx.doi.org/10.17487/rfc5364.

8

Kisteleki, R., and B. Haberman. Securing Routing Policy Specification Language (RPSL) Objects with Resource Public Key Infrastructure (RPKI) Signatures. RFC Editor, June 2016. http://dx.doi.org/10.17487/rfc7909.

9

Pikilnyak, Andrey V., Nadia M. Stetsenko, Volodymyr P. Stetsenko, Tetiana V. Bondarenko, and Halyna V. Tkachuk. Comparative analysis of online dictionaries in the context of the digital transformation of education. [N.p.], June 2021. http://dx.doi.org/10.31812/123456789/4431.

Abstract:
The article is devoted to a comparative analysis of popular online dictionaries and an overview of the main tools these resources offer for studying a language. The use of dictionaries in learning a foreign language is an important step toward understanding the language. The effectiveness of this process increases with the use of online dictionaries, which provide many tools for improving the educational process. Based on the Alexa Internet resource, the most popular online dictionaries were identified: Cambridge Dictionary, Wordreference, Merriam-Webster, Wiktionary, TheFreeDictionary, Dictionary.com, Glosbe, Collins Dictionary, Longman Dictionary, and Oxford Dictionary. A deep analysis of these online dictionaries showed that they share standard functions such as word explanations, transcription, audio pronunciation, semantic connections, and examples of use. In the proposed dictionaries, we also found additional tools for learning foreign languages (mostly English) that can be effective. In general, we describe sixteen functions of these online learning platforms that can be useful in learning a foreign language. We compiled a comparison table based on the following functions: machine translation, multilingualism, video of pronunciation, image of a word, discussion, collaborative editing, rank of words, hints, learning tools, thesaurus, paid services, sharing content, hyperlinks in a definition, registration, lists of words, mobile version, etc. Based on the additional tools of the online dictionaries, we created a diagram that shows the functionality of the analyzed platforms.
10

Gurung, M. B., Uma Pratap, N. C. T. D. Shrestha, H. K. Sharma, N. Islam, and N. B. Tamang. Beekeeping Training for Farmers in Afghanistan: Resource Manual for Trainers [in Urdu]. International Centre for Integrated Mountain Development (ICIMOD), 2012. http://dx.doi.org/10.53055/icimod.564.

Abstract:
Beekeeping contributes to rural development by supporting agricultural production through pollination and by providing honey, wax, and other products for home use and sale. It offers a good way for resource-poor farmers in the Hindu Kush Himalayas to obtain income, as it requires only a small start-up investment, can be carried out in a small space close to the home, and generally yields profits within a year of operation. A modern approach to bee management, using frame hives and focusing on high quality, will help farmers benefit most fully from beekeeping. This manual is designed to help provide beekeepers with the up-to-date training they need. It presents an inclusive curriculum developed through ICIMOD’s work with partner organizations in Bangladesh, Bhutan, India, and Nepal, supported by the Austrian Development Agency. A wide range of stakeholders – trainers, trainees, government and non-governmental organizations (NGOs), associations and federations, and private entrepreneurs – were engaged in the identification of curriculum needs and in development and testing of the curriculum. The manual covers the full range of beekeeping-related topics, including the use of bees for crop pollination; production of honey, wax and other hive products; honey quality standards; and using value chain and market management to increase beekeepers’ benefits. It also includes emerging issues and innovations regarding such subjects as indigenous honeybees, gender and equity, integrated pest management, and bee-related policy. The focus is on participatory hands-on training, with clear explanations in simple language and many illustrations. The manual provides a basic resource for trainers and field extension workers in government and NGOs, universities, vocational training institutes, and private sector organizations, and for local trainers in beekeeping groups, beekeeping resource centres, cooperatives, and associations, for use in training Himalayan farmers. Individual ICIMOD regional member countries are planning local language editions adapted for their countries’ specific conditions.