Ready-made bibliography on the topic "Pretrained language model"

Create an accurate reference in the APA, MLA, Chicago, Harvard, and many other citation styles

Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Pretrained language model".

Next to every work in the bibliography you will find an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, whenever such details are provided in the source's metadata.

Journal articles on the topic "Pretrained language model"

1

Lee, Chanhee, Kisu Yang, Taesun Whang, Chanjun Park, Andrew Matteson, and Heuiseok Lim. "Exploring the Data Efficiency of Cross-Lingual Post-Training in Pretrained Language Models." Applied Sciences 11, no. 5 (2021): 1974. http://dx.doi.org/10.3390/app11051974.

Abstract:
Language model pretraining is an effective method for improving the performance of downstream natural language processing tasks. Even though language modeling is unsupervised and thus collecting data for it is relatively less expensive, it is still a challenging process for languages with limited resources. This results in great technological disparity between high- and low-resource languages for numerous downstream natural language processing tasks. In this paper, we aim to make this technology more accessible by enabling data efficient training of pretrained language models. It is achieved b
2

De Coster, Mathieu, and Joni Dambre. "Leveraging Frozen Pretrained Written Language Models for Neural Sign Language Translation." Information 13, no. 5 (2022): 220. http://dx.doi.org/10.3390/info13050220.

Abstract:
We consider neural sign language translation: machine translation from signed to written languages using encoder–decoder neural networks. Translating sign language videos to written language text is especially complex because of the difference in modality between source and target language and, consequently, the required video processing. At the same time, sign languages are low-resource languages, their datasets dwarfed by those available for written languages. Recent advances in written language processing and success stories of transfer learning raise the question of how pretrained written
3

Kuwana, Ayato, Atsushi Oba, Ranto Sawai, and Incheon Paik. "Automatic Taxonomy Classification by Pretrained Language Model." Electronics 10, no. 21 (2021): 2656. http://dx.doi.org/10.3390/electronics10212656.

Abstract:
In recent years, automatic ontology generation has received significant attention in information science as a means of systemizing vast amounts of online data. As our initial attempt of ontology generation with a neural network, we proposed a recurrent neural network-based method. However, updating the architecture is possible because of the development in natural language processing (NLP). By contrast, the transfer learning of language models trained by a large, unlabeled corpus has yielded a breakthrough in NLP. Inspired by these achievements, we propose a novel workflow for ontology generat
4

Lee, Eunchan, Changhyeon Lee, and Sangtae Ahn. "Comparative Study of Multiclass Text Classification in Research Proposals Using Pretrained Language Models." Applied Sciences 12, no. 9 (2022): 4522. http://dx.doi.org/10.3390/app12094522.

Abstract:
Recently, transformer-based pretrained language models have demonstrated stellar performance in natural language understanding (NLU) tasks. For example, bidirectional encoder representations from transformers (BERT) have achieved outstanding performance through masked self-supervised pretraining and transformer-based modeling. However, the original BERT may only be effective for English-based NLU tasks, whereas its effectiveness for other languages such as Korean is limited. Thus, the applicability of BERT-based language models pretrained in languages other than English to NLU tasks based on t
5

Wang, Canjun, Zhao Li, Tong Chen, Ruishuang Wang, and Zhengyu Ju. "Research on the Application of Prompt Learning Pretrained Language Model in Machine Translation Task with Reinforcement Learning." Electronics 12, no. 16 (2023): 3391. http://dx.doi.org/10.3390/electronics12163391.

Abstract:
With the continuous advancement of deep learning technology, pretrained language models have emerged as crucial tools for natural language processing tasks. However, optimization of pretrained language models is essential for specific tasks such as machine translation. This paper presents a novel approach that integrates reinforcement learning with prompt learning to enhance the performance of pretrained language models in machine translation tasks. In our methodology, a “prompt” string is incorporated into the input of the pretrained language model, to guide the generation of an output that a
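The abstract above describes prepending a "prompt" string to the input of a pretrained model to steer translation. As a rough illustration of that general idea only (not the authors' reinforcement-learning setup), a prompt-prefixed translation call with a public checkpoint might look like the sketch below; the model name and prompt text are assumptions for illustration.

```python
# Minimal sketch: a textual prompt is prepended to the source sentence
# to steer a pretrained seq2seq model. Illustrative only; not the paper's method.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "t5-small"  # assumed public checkpoint, chosen for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

prompt = "translate English to German: "      # the "prompt" string
source = "Pretrained language models transfer well."
inputs = tokenizer(prompt + source, return_tensors="pt")

# Generate a translation conditioned on the prompt-prefixed input.
output_ids = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```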
6

Chen, Zhi, Yuncong Liu, Lu Chen, Su Zhu, Mengyue Wu, and Kai Yu. "OPAL: Ontology-Aware Pretrained Language Model for End-to-End Task-Oriented Dialogue." Transactions of the Association for Computational Linguistics 11 (2023): 68–84. http://dx.doi.org/10.1162/tacl_a_00534.

Abstract:
This paper presents an ontology-aware pretrained language model (OPAL) for end-to-end task-oriented dialogue (TOD). Unlike chit-chat dialogue models, task-oriented dialogue models fulfill at least two task-specific modules: dialogue state tracker (DST) and response generator (RG). The dialogue state consists of the domain-slot-value triples, which are regarded as the user’s constraints to search the domain-related databases. The large-scale task-oriented dialogue data with the annotated structured dialogue state usually are inaccessible. It prevents the development of the pretrained l
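For readers unfamiliar with task-oriented dialogue, the "domain-slot-value triples" mentioned above are simply structured constraints accumulated over a conversation. A purely illustrative sketch follows; the values are invented and not taken from the paper.

```python
# Illustrative only: a task-oriented dialogue state as domain-slot-value triples.
dialogue_state = [
    ("hotel", "area", "centre"),
    ("hotel", "stars", "4"),
    ("restaurant", "food", "italian"),
]
# A dialogue state tracker (DST) predicts such triples turn by turn; the response
# generator (RG) conditions on them and on database results to produce the reply.
```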
7

Xu, Canwen, and Julian McAuley. "A Survey on Model Compression and Acceleration for Pretrained Language Models." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (2023): 10566–75. http://dx.doi.org/10.1609/aaai.v37i9.26255.

Abstract:
Despite achieving state-of-the-art performance on many NLP tasks, the high energy cost and long inference delay prevent Transformer-based pretrained language models (PLMs) from seeing broader adoption including for edge and mobile computing. Efficient NLP research aims to comprehensively consider computation, time and carbon emission for the entire life-cycle of NLP, including data preparation, model training and inference. In this survey, we focus on the inference stage and review the current state of model compression and acceleration for pretrained language models, including benchmarks, met
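Model compression as surveyed here spans many techniques (pruning, distillation, quantization, and more). As a minimal sketch of just one of them, post-training dynamic quantization of a pretrained Transformer's linear layers with PyTorch might look as follows; the checkpoint choice is an assumption for illustration, not a recommendation from the survey.

```python
# Minimal sketch: post-training dynamic quantization of a pretrained model.
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
model.eval()

# Replace nn.Linear weights with int8 versions; activations are quantized on the fly.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# The quantized model keeps the same forward interface but is smaller and
# typically faster for CPU inference.
```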
8

Gu, Yang, and Yanke Hu. "Extractive Summarization with Very Deep Pretrained Language Model." International Journal of Artificial Intelligence & Applications 10, no. 02 (2019): 27–32. http://dx.doi.org/10.5121/ijaia.2019.10203.

9

Qi, Xianglong, Yang Gao, Ruibin Wang, Minghua Zhao, Shengjia Cui, and Mohsen Mortazavi. "Learning High-Order Semantic Representation for Intent Classification and Slot Filling on Low-Resource Language via Hypergraph." Mathematical Problems in Engineering 2022 (September 16, 2022): 1–16. http://dx.doi.org/10.1155/2022/8407713.

Abstract:
Representation of language is the first and critical task for Natural Language Understanding (NLU) in a dialogue system. Pretraining, embedding model, and fine-tuning for intent classification and slot-filling are popular and well-performing approaches but are time consuming and inefficient for low-resource languages. Concretely, the out-of-vocabulary and transferring to different languages are two tough challenges for multilingual pretrained and cross-lingual transferring models. Furthermore, quality-proved parallel data are necessary for the current frameworks. Stepping over these challenges
10

Won, Hyun-Sik, Min-Ji Kim, Dohyun Kim, Hee-Soo Kim, and Kang-Min Kim. "University Student Dropout Prediction Using Pretrained Language Models." Applied Sciences 13, no. 12 (2023): 7073. http://dx.doi.org/10.3390/app13127073.

Abstract:
Predicting student dropout from universities is an imperative but challenging task. Numerous data-driven approaches that utilize both student demographic information (e.g., gender, nationality, and high school graduation year) and academic information (e.g., GPA, participation in activities, and course evaluations) have shown meaningful results. Recently, pretrained language models have achieved very successful results in understanding the tasks associated with structured data as well as textual data. In this paper, we propose a novel student dropout prediction framework based on demographic a
More sources

Dissertations and theses on the topic "Pretrained language model"

1

Pelloin, Valentin. "La compréhension de la parole dans les systèmes de dialogues humain-machine à l'heure des modèles pré-entraînés." Electronic Thesis or Diss., Le Mans, 2024. http://www.theses.fr/2024LEMA1002.

Abstract:
In this thesis, spoken language understanding (SLU) is studied in the applied setting of goal-oriented telephone dialogues (booking hotel rooms, for example). Historically, SLU was carried out as a cascade: a speech recognition system produced a word-level transcription, and an understanding system then attached a semantic annotation to it. The development of deep neural methods gave rise to end-to-end architectures, in which the understanding task is performed by a single system applied directly to the signal
2

Kulhánek, Jonáš. "End-to-end dialogové systémy s předtrénovanými jazykovými modely." Master's thesis, 2021. http://www.nusl.cz/ntk/nusl-448383.

Abstract:
Current dialogue systems typically consist of separate components, which are manually engineered to a large part and need extensive annotation. End-to-end trainable systems exist but produce lower-quality, unreliable outputs. The recent transformer-based pre-trained language models such as GPT-2 brought considerable progress to language modelling, but they rely on huge amounts of textual data, which are not available for common dialogue domains. Therefore, training these models runs a high risk of overfitting. To overcome these obstacles, we propose a novel end-to-end dialogue system cal

Books on the topic "Pretrained language model"

1

Olgiati, Andrea. Pretrain Vision and Large Language Models in Python: End-To-end Techniques for Building and Deploying Foundation Models on AWS. Walter de Gruyter GmbH, 2023.

2

Olgiati, Andrea. Pretrain Vision and Large Language Models in Python: End-To-end Techniques for Building and Deploying Foundation Models on AWS. Packt Publishing, Limited, 2023.


Book chapters on the topic "Pretrained language model"

1

Oba, Atsushi, Incheon Paik, and Ayato Kuwana. "Automatic Classification for Ontology Generation by Pretrained Language Model." In Advances and Trends in Artificial Intelligence. Artificial Intelligence Practices. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-79457-6_18.

2

Yan, Hao, and Yuhong Guo. "Lightweight Unsupervised Federated Learning with Pretrained Vision Language Model." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-82240-7_11.

3

Kong, Jun, Jin Wang, and Xuejie Zhang. "Accelerating Pretrained Language Model Inference Using Weighted Ensemble Self-distillation." In Natural Language Processing and Chinese Computing. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-88480-2_18.

4

Luo, Xudong, Zhiqi Deng, Kaili Sun, and Pingping Lin. "An Emotion-Aware Human-Computer Negotiation Model Powered by Pretrained Language Model." In Knowledge Science, Engineering and Management. Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-97-5501-1_19.

5

Ozgul, Gizem, Şeyma Derdiyok, and Fatma Patlar Akbulut. "Turkish Sign Language Recognition Using a Fine-Tuned Pretrained Model." In Communications in Computer and Information Science. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-50920-9_6.

6

Liu, Chenxu, Hongjie Fan, and Junfei Liu. "Span-Based Nested Named Entity Recognition with Pretrained Language Model." In Database Systems for Advanced Applications. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-73197-7_42.

7

Kucharavy, Andrei. "Adapting LLMs to Downstream Applications." In Large Language Models in Cybersecurity. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-54827-7_2.

Abstract:
By themselves, pretrained Large Language Models (LLMs) are interesting objects of study. However, they need to undergo a subsequent transfer learning phase to make them useful for downstream applications. While historically referred to as “fine-tuning,” the range of the tools available to LLM users to better adapt base models to their applications is now significantly wider than the traditional fine-tuning. In order to provide the reader with an idea of the strengths and weaknesses of each method and allow them to pick one that would suit their needs best, an overview and classificati
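The chapter contrasts newer adaptation methods with classic fine-tuning. As a minimal sketch of that traditional baseline only, a supervised fine-tuning run with the Hugging Face Trainer might look as follows; the dataset, checkpoint, and hyperparameters are illustrative assumptions, not the chapter's recipe.

```python
# Minimal sketch: classic supervised fine-tuning of a pretrained language model.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoint = "distilbert-base-uncased"          # assumed base model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

dataset = load_dataset("imdb")                  # assumed example corpus

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

encoded = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=16),
    # Small subsample so the sketch finishes quickly.
    train_dataset=encoded["train"].shuffle(seed=42).select(range(2000)),
)
trainer.train()
```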
8

Wei, Siwen, Chi Yuan, Zixuan Li, and Huaiyu Wang. "An Unsupervised Clinical Acronym Disambiguation Method Based on Pretrained Language Model." In Communications in Computer and Information Science. Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-99-9864-7_18.

9

Li, Sheng, and Jiyi Li. "Correction while Recognition: Combining Pretrained Language Model for Taiwan-Accented Speech Recognition." In Artificial Neural Networks and Machine Learning – ICANN 2023. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-44195-0_32.

10

Silveira, Raquel, Caio Ponte, Vitor Almeida, Vládia Pinheiro, and Vasco Furtado. "LegalBert-pt: A Pretrained Language Model for the Brazilian Portuguese Legal Domain." In Intelligent Systems. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-45392-2_18.


Conference papers on the topic "Pretrained language model"

1

Zakizadeh, Mahdi, and Mohammad Taher Pilehvar. "Gender Encoding Patterns in Pretrained Language Model Representations." In Proceedings of the 5th Workshop on Trustworthy NLP (TrustNLP 2025). Association for Computational Linguistics, 2025. https://doi.org/10.18653/v1/2025.trustnlp-main.31.

2

Haijie, Shen, and Madhavi Devaraj. "Enhancing sentiment analysis with language model adaptation: leveraging large-scale pretrained models." In Ninth International Symposium on Advances in Electrical, Electronics, and Computer Engineering (ISAEECE 2024), edited by Pierluigi Siano and Wenbing Zhao. SPIE, 2024. http://dx.doi.org/10.1117/12.3033425.

3

Ding, Shuai, Yifei Xu, Zhuang Lu, Fan Tang, Tong Li, and Jingguo Ge. "Power Microservices Troubleshooting by Pretrained Language Model with Multi-source Data." In 2024 IEEE International Symposium on Parallel and Distributed Processing with Applications (ISPA). IEEE, 2024. https://doi.org/10.1109/ispa63168.2024.00241.

4

Zeng, Zhuo, and Yu Wang. "Construction of pretrained language model based on big data of power equipment." In Fourth International Conference on Electronics Technology and Artificial Intelligence (ETAI 2025), edited by Shaohua Luo and Akash Saxena. SPIE, 2025. https://doi.org/10.1117/12.3068739.

5

Ginn, Michael, Lindia Tjuatja, Taiqi He, et al. "GlossLM: A Massively Multilingual Corpus and Pretrained Model for Interlinear Glossed Text." In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.emnlp-main.683.

6

Hou, Lijuan, Hanyan Qin, Xiankun Zhang, and Yiying Zhang. "Protein Function Prediction Based on the Pretrained Language Model ESM2 and Graph Convolutional Networks." In 2024 IEEE International Symposium on Parallel and Distributed Processing with Applications (ISPA). IEEE, 2024. https://doi.org/10.1109/ispa63168.2024.00242.

7

Hindi, Iman I., and Gheith A. Abandah. "Improving Arabic Dialect Text Classification by Finetuning A Pretrained Token-Free Large Language Model." In 2025 1st International Conference on Computational Intelligence Approaches and Applications (ICCIAA). IEEE, 2025. https://doi.org/10.1109/icciaa65327.2025.11013550.

8

Katsumata, Satoru, and Mamoru Komachi. "Stronger Baselines for Grammatical Error Correction Using a Pretrained Encoder-Decoder Model." In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing. Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.aacl-main.83.

9

Safikhani, Parisa, and David Broneske. "AutoML Meets Hugging Face: Domain-Aware Pretrained Model Selection for Text Classification." In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 4: Student Research Workshop). Association for Computational Linguistics, 2025. https://doi.org/10.18653/v1/2025.naacl-srw.45.

10

Zhang, Kaiwen, Feiyu Su, Yixiang Huang, Yanming Li, Fengqi Wu, and Yuhan Mao. "The Application of Fine-Tuning on Pretrained Language Model in Information Extraction for Fault Knowledge Graphs." In 2024 9th International Conference on Intelligent Computing and Signal Processing (ICSP). IEEE, 2024. http://dx.doi.org/10.1109/icsp62122.2024.10743881.
