Academic literature on the topic "Cross-Lingual knowledge transfer"

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference papers, and other scholarly sources on the topic "Cross-Lingual knowledge transfer."

Next to every source in the list of references, there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Cross-Lingual knowledge transfer"

1. Wang, Yabing, Fan Wang, Jianfeng Dong, and Hao Luo. "CL2CM: Improving Cross-Lingual Cross-Modal Retrieval via Cross-Lingual Knowledge Transfer." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 6 (2024): 5651–59. http://dx.doi.org/10.1609/aaai.v38i6.28376.

Abstract:
Cross-lingual cross-modal retrieval, which aims to align vision and a target language (V-T) without using any annotated V-T data pairs, has garnered increasing attention recently. Current methods employ machine translation (MT) to construct pseudo-parallel data pairs, which are then used to learn a multi-lingual and multi-modal embedding space that aligns visual and target-language representations. However, the large heterogeneous gap between vision and text, along with the noise present in target-language translations, poses significant challenges in effectively aligning …
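For readers unfamiliar with the recipe this abstract sketches, the usual way to learn such a multi-lingual, multi-modal embedding space over MT-generated pseudo-pairs is a symmetric contrastive objective. The sketch below is a generic CLIP-style loss, not CL2CM's exact objective; the tensor names and the temperature value are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss: pull each image embedding toward the embedding
    of its (pseudo-)paired target-language caption and push apart mismatched
    pairs. A generic recipe, not the paper's exact objective."""
    img = F.normalize(img_emb, dim=-1)               # (batch, dim)
    txt = F.normalize(txt_emb, dim=-1)               # (batch, dim)
    logits = img @ txt.t() / temperature             # pairwise cosine similarities
    targets = torch.arange(img.size(0), device=img.device)
    loss_i2t = F.cross_entropy(logits, targets)      # image -> caption direction
    loss_t2i = F.cross_entropy(logits.t(), targets)  # caption -> image direction
    return 0.5 * (loss_i2t + loss_t2i)
```
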
2. Chai, Linzheng, Jian Yang, Tao Sun, et al. "XCOT: Cross-lingual Instruction Tuning for Cross-lingual Chain-of-Thought Reasoning." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 22 (2025): 23550–58. https://doi.org/10.1609/aaai.v39i22.34524.

Abstract:
Chain-of-thought (CoT) has emerged as a powerful technique to elicit reasoning in large language models and improve a variety of downstream tasks. CoT mainly demonstrates excellent performance in English, but its usage in low-resource languages is constrained due to poor language generalization. To bridge the gap among different languages, we propose a cross-lingual instruction fine-tuning framework (xCoT) to transfer knowledge from high-resource languages to low-resource languages. Specifically, the multilingual instruction training data (xCoT-Instruct) is created to encourage the semantic alignment …
3. Singhal, Abhishek, Happa Khan, and Aditya Sharma. "Empowering Multilingual AI: Cross-Lingual Transfer Learning." Tuijin Jishu/Journal of Propulsion Technology 43, no. 4 (2023): 284–87. http://dx.doi.org/10.52783/tjjpt.v43.i4.2353.

Abstract:
Multilingual Natural Language Processing (NLP) and Cross-Lingual Transfer Learning have emerged as pivotal fields in the realm of language technology. This abstract explores the essential concepts and methodologies behind these areas, shedding light on their significance in a world characterized by linguistic diversity. Multilingual NLP enables machines to process many languages, supporting global collaboration. Cross-lingual transfer learning, on the other hand, leverages knowledge from one language to enhance NLP tasks in another, facilitating efficient resource utilization and improved model performance. …
4. Al-Duwais, Mashael, Hend Al-Khalifa, and Abdulmalik Al-Salman. "A Benchmark Evaluation of Multilingual Large Language Models for Arabic Cross-Lingual Named-Entity Recognition." Electronics 13, no. 17 (2024): 3574. http://dx.doi.org/10.3390/electronics13173574.

Abstract:
Multilingual large language models (MLLMs) have demonstrated remarkable performance across a wide range of cross-lingual Natural Language Processing (NLP) tasks. The emergence of MLLMs made it possible to achieve knowledge transfer from high-resource to low-resource languages. Several MLLMs have been released for cross-lingual transfer tasks. However, no systematic evaluation comparing all models for Arabic cross-lingual Named-Entity Recognition (NER) is available. This paper presents a benchmark evaluation to empirically investigate the performance of the state-of-the-art multilingual large language models …
5. Zhang, Mozhi, Yoshinari Fujinuma, and Jordan Boyd-Graber. "Exploiting Cross-Lingual Subword Similarities in Low-Resource Document Classification." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (2020): 9547–54. http://dx.doi.org/10.1609/aaai.v34i05.6500.

Abstract:
Text classification must sometimes be applied in a low-resource language with no labeled training data. However, training data may be available in a related language. We investigate whether character-level knowledge transfer from a related language helps text classification. We present a cross-lingual document classification framework (CACO) that exploits cross-lingual subword similarity by jointly training a character-based embedder and a word-based classifier. The embedder derives vector representations for input words from their written forms, and the classifier makes predictions based on …
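The joint architecture this abstract describes is easy to picture in code. Below is a minimal PyTorch sketch under stated assumptions: the layer sizes, the mean-pooling choices, and class names such as CharWordEmbedder are my own illustrations, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class CharWordEmbedder(nn.Module):
    """Derives a word vector from its written form (character sequence), so
    related languages with similar orthography share parameters. A minimal
    sketch of the idea, not the paper's exact architecture."""
    def __init__(self, n_chars, char_dim=32, word_dim=64):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim, padding_idx=0)
        self.lstm = nn.LSTM(char_dim, word_dim // 2,
                            bidirectional=True, batch_first=True)

    def forward(self, char_ids):                 # (n_words, max_chars)
        h, _ = self.lstm(self.char_emb(char_ids))
        return h.mean(dim=1)                     # (n_words, word_dim)

class DocClassifier(nn.Module):
    """Word-based classifier trained jointly with the embedder: averages the
    character-derived word vectors of a document and predicts its label."""
    def __init__(self, embedder, word_dim=64, n_labels=4):
        super().__init__()
        self.embedder = embedder
        self.out = nn.Linear(word_dim, n_labels)

    def forward(self, char_ids):
        word_vecs = self.embedder(char_ids)      # one document's words
        return self.out(word_vecs.mean(dim=0))   # logits over labels
```
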
6. Colhon, Mihaela. "Language Engineering for Syntactic Knowledge Transfer." Computer Science and Information Systems 9, no. 3 (2012): 1231–47. http://dx.doi.org/10.2298/csis120130032c.

Abstract:
In this paper we present a method for constructing an English-Romanian treebank, together with the obtained evaluation results. The treebank is built upon a parallel English-Romanian corpus, word-aligned and annotated at the morphological and syntactic levels. The syntactic trees of the Romanian texts are generated from the syntactic phrases of the parallel English texts, obtained automatically through syntactic parsing. The method reuses and adjusts existing tools and algorithms for the cross-lingual transfer of syntactic constituents and the alignment of syntactic trees.
7. Zhan, Qingran, Xiang Xie, Chenguang Hu, Juan Zuluaga-Gomez, Jing Wang, and Haobo Cheng. "Domain-Adversarial Based Model with Phonological Knowledge for Cross-Lingual Speech Recognition." Electronics 10, no. 24 (2021): 3172. http://dx.doi.org/10.3390/electronics10243172.

Abstract:
Phonological features (articulatory features, AFs) describe the movements of the vocal organs, which are shared across languages. This paper investigates a domain-adversarial neural network (DANN) to extract reliable AFs, and different multi-stream techniques are used for cross-lingual speech recognition. First, a novel universal definition of phonological attributes is proposed for Mandarin, English, German and French. Then a DANN-based AFs detector is trained using the source languages (English, German and French). When performing cross-lingual speech recognition, the AFs detectors are used to …
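The domain-adversarial training mentioned here rests on a gradient reversal layer, a standard construction that takes only a few lines of PyTorch. The sketch below shows the generic DANN mechanism; the usage comment about an AF detector and a language discriminator mirrors the abstract's setup with hypothetical module names.

```python
import torch
from torch.autograd import Function

class GradReverse(Function):
    """Gradient reversal layer, the core of a DANN: the forward pass is the
    identity, but gradients are negated on the way back, so the feature
    extractor learns representations the domain classifier cannot separate."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Usage sketch (hypothetical modules): features flow straight to the
# articulatory-feature detector, and through the reversal layer to a
# language discriminator, pushing the AFs to be language-independent.
#   af_logits   = af_detector(features)
#   lang_logits = lang_classifier(grad_reverse(features, 0.5))
```
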
8. Xu, Zenan, Linjun Shou, Jian Pei, et al. "A Graph Fusion Approach for Cross-Lingual Machine Reading Comprehension." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 11 (2023): 13861–68. http://dx.doi.org/10.1609/aaai.v37i11.26623.

Abstract:
Although great progress has been made on Machine Reading Comprehension (MRC) in English, scaling out to a large number of languages remains a huge challenge due to the lack of large amounts of annotated training data in non-English languages. To address this challenge, some recent efforts in cross-lingual MRC employ machine translation to transfer knowledge from English to other languages, through either explicit alignment or implicit attention. For effective knowledge transfer, it is beneficial to leverage both semantic and syntactic information. However, the existing methods fail to explicitly …
9. Li, Xun, and Kun Zhang. "Contrastive Learning Pre-Training and Quantum Theory for Cross-Lingual Aspect-Based Sentiment Analysis." Entropy 27, no. 7 (2025): 713. https://doi.org/10.3390/e27070713.

Abstract:
The cross-lingual aspect-based sentiment analysis (ABSA) task continues to pose a significant challenge, as it involves training a classifier on high-resource source languages and then applying it to classify texts in low-resource target languages, thereby bridging linguistic gaps while preserving accuracy. Most existing methods achieve exceptional performance by relying on multilingual pre-trained language models (mPLMs) and translation systems to transfer knowledge across languages. However, little attention has been paid to factors beyond semantic similarity, which ultimately hinders classification …
10. Rijhwani, Shruti, Jiateng Xie, Graham Neubig, and Jaime Carbonell. "Zero-Shot Neural Transfer for Cross-Lingual Entity Linking." Proceedings of the AAAI Conference on Artificial Intelligence 33, no. 01 (2019): 6924–31. http://dx.doi.org/10.1609/aaai.v33i01.33016924.

Abstract:
Cross-lingual entity linking maps an entity mention in a source language to its corresponding entry in a structured knowledge base that is in a different (target) language. While previous work relies heavily on bilingual lexical resources to bridge the gap between the source and the target languages, these resources are scarce or unavailable for many low-resource languages. To address this problem, we investigate zero-shot cross-lingual entity linking, in which we assume no bilingual lexical resources are available in the source low-resource language. Specifically, we propose pivot-based entity linking …

Dissertations / Theses on the topic "Cross-Lingual knowledge transfer"

1. Aufrant, Lauriane. "Training Parsers for Low-Resourced Languages: Improving Cross-Lingual Transfer with Monolingual Knowledge." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS089/document.

Abstract:
The recent rise of machine learning algorithms has made Natural Language Processing methods all the more sensitive to their most limiting factor: system quality rests entirely on the availability of large amounts of data, which is the case for only a minority of the world's 7,000 languages. The so-called cross-lingual transfer strategy circumvents this limitation: a language with few resources (the target) can be processed by exploiting the resources available in another language (the source). …
2. Raithel, Lisa. "Cross-lingual Information Extraction for the Assessment and Prevention of Adverse Drug Reactions." Electronic thesis or dissertation, Université Paris-Saclay, 2024. http://www.theses.fr/2024UPASG011.

Abstract:
The work described in this thesis focuses on the cross-lingual and multilingual detection and extraction of adverse drug reactions in biomedical texts written by laypeople. I first describe the creation of a new trilingual corpus (German, French, Japanese), centered on German and French, together with the development of annotation guidelines, applicable to all languages, for textual content produced by social media users. Finally, I describe the annotation process and provide an overview of the resulting dataset.

Book chapters on the topic "Cross-Lingual knowledge transfer"

1. Gui, Lin, Qin Lu, Ruifeng Xu, Qikang Wei, and Yuhui Cao. "Improving Transfer Learning in Cross Lingual Opinion Analysis Through Negative Transfer Detection." In Knowledge Science, Engineering and Management. Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-25159-2_36.

2. Tian, Lin, Xiuzhen Zhang, and Jey Han Lau. "Rumour Detection via Zero-Shot Cross-Lingual Transfer Learning." In Machine Learning and Knowledge Discovery in Databases. Research Track. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-86486-6_37.

3. Han, Soyeon Caren, Yingru Lin, Siqu Long, and Josiah Poon. "Low Resource Named Entity Recognition Using Contextual Word Representation and Neural Cross-Lingual Knowledge Transfer." In Neural Information Processing. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-36708-4_25.

4. Daou, Ousmane, Satya Ranjan Dash, and Shantipriya Parida. "Cross-Lingual Transfer Learning for Bambara Leveraging Resources From Other Languages." In Advances in Computational Intelligence and Robotics. IGI Global, 2024. http://dx.doi.org/10.4018/979-8-3693-0728-1.ch009.

Abstract:
Bambara, a language spoken primarily in West Africa, faces resource limitations that hinder the development of natural language processing (NLP) applications. This chapter presents a comprehensive cross-lingual transfer learning (CTL) approach to harness knowledge from other languages and substantially improve the performance of Bambara NLP tasks. The authors meticulously outline the methodology, including the creation of a Bambara corpus, training a CTL classifier, evaluating its performance across different languages, conducting a rigorous comparative analysis against baseline methods, and …

Conference papers on the topic "Cross-Lingual knowledge transfer"

1. Rajaee, Sara, and Christof Monz. "Analyzing the Evaluation of Cross-Lingual Knowledge Transfer in Multilingual Language Models." In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, 2024. https://doi.org/10.18653/v1/2024.eacl-long.177.

2. Yamada, Ikuya, and Ryokan Ri. "LEIA: Facilitating Cross-lingual Knowledge Transfer in Language Models with Entity-based Data Augmentation." In Findings of the Association for Computational Linguistics: ACL 2024. Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.findings-acl.419.

3. Wan, Genshun, and Zhongfu Ye. "Multi-Modal Knowledge Transfer for Target Speaker Lipreading with Improved Audio-Visual Pretraining and Cross-Lingual Fine-Tuning." In 2024 IEEE International Conference on Multimedia and Expo Workshops (ICMEW). IEEE, 2024. http://dx.doi.org/10.1109/icmew63481.2024.10645443.

4. Swietojanski, Pawel, Arnab Ghoshal, and Steve Renals. "Unsupervised Cross-Lingual Knowledge Transfer in DNN-Based LVCSR." In 2012 IEEE Spoken Language Technology Workshop (SLT 2012). IEEE, 2012. http://dx.doi.org/10.1109/slt.2012.6424230.

5. Lu, Di, Xiaoman Pan, Nima Pourdamghani, Shih-Fu Chang, Heng Ji, and Kevin Knight. "A Multi-media Approach to Cross-lingual Entity Knowledge Transfer." In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, 2016. http://dx.doi.org/10.18653/v1/p16-1006.

6. Kuulmets, Hele-Andra, Taido Purason, Agnes Luhtaru, and Mark Fishel. "Teaching Llama a New Language Through Cross-Lingual Knowledge Transfer." In Findings of the Association for Computational Linguistics: NAACL 2024. Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.findings-naacl.210.

7. Singh, Sumit, and Uma Tiwary. "Silp_nlp at SemEval-2023 Task 2: Cross-lingual Knowledge Transfer for Mono-lingual Learning." In Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023). Association for Computational Linguistics, 2023. http://dx.doi.org/10.18653/v1/2023.semeval-1.164.

8. Singh, Sumit, Pankaj Goyal, and Uma Tiwary. "silp_nlp at SemEval-2024 Task 1: Cross-lingual Knowledge Transfer for Mono-lingual Learning." In Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024). Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.semeval-1.174.

9. Li, Zhuoran, Chunming Hu, Junfan Chen, Zhijun Chen, Xiaohui Guo, and Richong Zhang. "Improving Zero-Shot Cross-Lingual Transfer via Progressive Code-Switching." In Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence (IJCAI-24). International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/706.

Abstract:
Code-switching is a data augmentation scheme that mixes words from multiple languages into source-language text. It has achieved considerable generalization performance on cross-lingual transfer tasks by aligning cross-lingual contextual word representations. However, uncontrolled and over-replaced code-switching would add dirty samples to model training. In other words, excessive code-switching text samples will hurt the models' cross-lingual transferability. To this end, we propose a Progressive Code-Switching (PCS) method to gradually generate moderately difficult code-switching …
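Since the abstract defines code-switching only in words, here is a minimal sketch of the basic augmentation step, assuming a word-level bilingual lexicon. The replace_ratio knob is where a progressive curriculum such as PCS would gradually raise the difficulty; this sketch shows only the plain augmentation, not the paper's full method, and the toy lexicon entries are illustrative only.

```python
import random

def code_switch(tokens, lexicon, replace_ratio=0.3, seed=None):
    """Replace a controlled fraction of source-language tokens with
    translations drawn from a bilingual lexicon."""
    rng = random.Random(seed)
    switched = []
    for tok in tokens:
        if tok in lexicon and rng.random() < replace_ratio:
            switched.append(rng.choice(lexicon[tok]))  # pick one translation
        else:
            switched.append(tok)                       # keep source token
    return switched

# Toy English->German lexicon (illustrative entries only):
lexicon = {"house": ["Haus"], "green": ["grün"], "very": ["sehr"]}
print(code_switch("the house is very green".split(), lexicon, 0.5, seed=0))
```
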
10. Feng, Xiaocheng, Xiachong Feng, Bing Qin, Zhangyin Feng, and Ting Liu. "Improving Low Resource Named Entity Recognition using Cross-lingual Knowledge Transfer." In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18). International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/566.

Abstract:
Neural networks have been widely used for named entity recognition (NER) in high-resource languages (e.g. English) and have shown state-of-the-art results. However, for low-resource languages such as Dutch and Spanish, taggers tend to have lower performance due to limited resources and a lack of annotated data. To narrow this gap, we propose three novel strategies to enrich the semantic representations of low-resource languages: we first develop neural networks to improve low-resource word representations by knowledge transfer from a high-resource language using bilingual lexicons. …
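The first of the three strategies, transferring word representations through bilingual lexicons, can be realized in a very simple form: initialize each low-resource word vector from the average of its translations' high-resource embeddings. This is a hedged sketch of that general idea, not the paper's exact method; hi_vecs (a dict of high-resource word vectors) and lexicon (low-resource word to list of translations) are assumed inputs.

```python
import numpy as np

def transfer_embeddings(hi_vecs, lexicon, dim=300):
    """Initialize low-resource word vectors from a high-resource language:
    each target word gets the mean embedding of its dictionary translations."""
    lo_vecs = {}
    for lo_word, translations in lexicon.items():
        known = [hi_vecs[w] for w in translations if w in hi_vecs]
        if known:
            lo_vecs[lo_word] = np.mean(known, axis=0)
        else:
            lo_vecs[lo_word] = np.zeros(dim)  # fallback for uncovered entries
    return lo_vecs
```
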