Ready-made bibliography on the topic "Pretrained language model"
Create a correct reference in APA, MLA, Chicago, Harvard, and many other citation styles
See the lists of current articles, books, dissertations, conference papers, and other scholarly sources on the topic "Pretrained language model".
Next to each work in the bibliography there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, provided the relevant parameters are available in the work's metadata.
Journal articles on the topic "Pretrained language model"
Lee, Chanhee, Kisu Yang, Taesun Whang, Chanjun Park, Andrew Matteson, and Heuiseok Lim. "Exploring the Data Efficiency of Cross-Lingual Post-Training in Pretrained Language Models." Applied Sciences 11, no. 5 (2021): 1974. http://dx.doi.org/10.3390/app11051974.
De Coster, Mathieu, and Joni Dambre. "Leveraging Frozen Pretrained Written Language Models for Neural Sign Language Translation." Information 13, no. 5 (2022): 220. http://dx.doi.org/10.3390/info13050220.
Kuwana, Ayato, Atsushi Oba, Ranto Sawai, and Incheon Paik. "Automatic Taxonomy Classification by Pretrained Language Model." Electronics 10, no. 21 (2021): 2656. http://dx.doi.org/10.3390/electronics10212656.
Lee, Eunchan, Changhyeon Lee, and Sangtae Ahn. "Comparative Study of Multiclass Text Classification in Research Proposals Using Pretrained Language Models." Applied Sciences 12, no. 9 (2022): 4522. http://dx.doi.org/10.3390/app12094522.
Wang, Canjun, Zhao Li, Tong Chen, Ruishuang Wang, and Zhengyu Ju. "Research on the Application of Prompt Learning Pretrained Language Model in Machine Translation Task with Reinforcement Learning." Electronics 12, no. 16 (2023): 3391. http://dx.doi.org/10.3390/electronics12163391.
Chen, Zhi, Yuncong Liu, Lu Chen, Su Zhu, Mengyue Wu, and Kai Yu. "OPAL: Ontology-Aware Pretrained Language Model for End-to-End Task-Oriented Dialogue." Transactions of the Association for Computational Linguistics 11 (2023): 68–84. http://dx.doi.org/10.1162/tacl_a_00534.
Xu, Canwen, and Julian McAuley. "A Survey on Model Compression and Acceleration for Pretrained Language Models." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (2023): 10566–75. http://dx.doi.org/10.1609/aaai.v37i9.26255.
Gu, Yang, and Yanke Hu. "Extractive Summarization with Very Deep Pretrained Language Model." International Journal of Artificial Intelligence & Applications 10, no. 02 (2019): 27–32. http://dx.doi.org/10.5121/ijaia.2019.10203.
Qi, Xianglong, Yang Gao, Ruibin Wang, Minghua Zhao, Shengjia Cui, and Mohsen Mortazavi. "Learning High-Order Semantic Representation for Intent Classification and Slot Filling on Low-Resource Language via Hypergraph." Mathematical Problems in Engineering 2022 (September 16, 2022): 1–16. http://dx.doi.org/10.1155/2022/8407713.
Won, Hyun-Sik, Min-Ji Kim, Dohyun Kim, Hee-Soo Kim, and Kang-Min Kim. "University Student Dropout Prediction Using Pretrained Language Models." Applied Sciences 13, no. 12 (2023): 7073. http://dx.doi.org/10.3390/app13127073.
Doctoral dissertations on the topic "Pretrained language model"
Pelloin, Valentin. "La compréhension de la parole dans les systèmes de dialogues humain-machine à l'heure des modèles pré-entraînés" [Spoken language understanding in human-machine dialogue systems in the era of pretrained models]. Electronic Thesis or Diss., Le Mans, 2024. http://www.theses.fr/2024LEMA1002.
Kulhánek, Jonáš. "End-to-end dialogové systémy s předtrénovanými jazykovými modely" [End-to-end dialogue systems with pretrained language models]. Master's thesis, 2021. http://www.nusl.cz/ntk/nusl-448383.
Pełny tekst źródłaKsiążki na temat "Pretrained language model"
Olgiati, Andrea. Pretrain Vision and Large Language Models in Python: End-To-end Techniques for Building and Deploying Foundation Models on AWS. Walter de Gruyter GmbH, 2023.
Olgiati, Andrea. Pretrain Vision and Large Language Models in Python: End-To-end Techniques for Building and Deploying Foundation Models on AWS. Packt Publishing, Limited, 2023.
Book chapters on the topic "Pretrained language model"
Oba, Atsushi, Incheon Paik, and Ayato Kuwana. "Automatic Classification for Ontology Generation by Pretrained Language Model." In Advances and Trends in Artificial Intelligence. Artificial Intelligence Practices. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-79457-6_18.
Yan, Hao, and Yuhong Guo. "Lightweight Unsupervised Federated Learning with Pretrained Vision Language Model." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-82240-7_11.
Kong, Jun, Jin Wang, and Xuejie Zhang. "Accelerating Pretrained Language Model Inference Using Weighted Ensemble Self-distillation." In Natural Language Processing and Chinese Computing. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-88480-2_18.
Luo, Xudong, Zhiqi Deng, Kaili Sun, and Pingping Lin. "An Emotion-Aware Human-Computer Negotiation Model Powered by Pretrained Language Model." In Knowledge Science, Engineering and Management. Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-97-5501-1_19.
Ozgul, Gizem, Şeyma Derdiyok, and Fatma Patlar Akbulut. "Turkish Sign Language Recognition Using a Fine-Tuned Pretrained Model." In Communications in Computer and Information Science. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-50920-9_6.
Liu, Chenxu, Hongjie Fan, and Junfei Liu. "Span-Based Nested Named Entity Recognition with Pretrained Language Model." In Database Systems for Advanced Applications. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-73197-7_42.
Kucharavy, Andrei. "Adapting LLMs to Downstream Applications." In Large Language Models in Cybersecurity. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-54827-7_2.
Wei, Siwen, Chi Yuan, Zixuan Li, and Huaiyu Wang. "An Unsupervised Clinical Acronym Disambiguation Method Based on Pretrained Language Model." In Communications in Computer and Information Science. Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-99-9864-7_18.
Li, Sheng, and Jiyi Li. "Correction while Recognition: Combining Pretrained Language Model for Taiwan-Accented Speech Recognition." In Artificial Neural Networks and Machine Learning – ICANN 2023. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-44195-0_32.
Silveira, Raquel, Caio Ponte, Vitor Almeida, Vládia Pinheiro, and Vasco Furtado. "LegalBert-pt: A Pretrained Language Model for the Brazilian Portuguese Legal Domain." In Intelligent Systems. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-45392-2_18.
Conference papers on the topic "Pretrained language model"
Zakizadeh, Mahdi, and Mohammad Taher Pilehvar. "Gender Encoding Patterns in Pretrained Language Model Representations." In Proceedings of the 5th Workshop on Trustworthy NLP (TrustNLP 2025). Association for Computational Linguistics, 2025. https://doi.org/10.18653/v1/2025.trustnlp-main.31.
Haijie, Shen, and Madhavi Devaraj. "Enhancing sentiment analysis with language model adaptation: leveraging large-scale pretrained models." In Ninth International Symposium on Advances in Electrical, Electronics, and Computer Engineering (ISAEECE 2024), edited by Pierluigi Siano and Wenbing Zhao. SPIE, 2024. http://dx.doi.org/10.1117/12.3033425.
Ding, Shuai, Yifei Xu, Zhuang Lu, Fan Tang, Tong Li, and Jingguo Ge. "Power Microservices Troubleshooting by Pretrained Language Model with Multi-source Data." In 2024 IEEE International Symposium on Parallel and Distributed Processing with Applications (ISPA). IEEE, 2024. https://doi.org/10.1109/ispa63168.2024.00241.
Zeng, Zhuo, and Yu Wang. "Construction of pretrained language model based on big data of power equipment." In Fourth International Conference on Electronics Technology and Artificial Intelligence (ETAI 2025), edited by Shaohua Luo and Akash Saxena. SPIE, 2025. https://doi.org/10.1117/12.3068739.
Ginn, Michael, Lindia Tjuatja, Taiqi He, et al. "GlossLM: A Massively Multilingual Corpus and Pretrained Model for Interlinear Glossed Text." In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.emnlp-main.683.
Hou, Lijuan, Hanyan Qin, Xiankun Zhang, and Yiying Zhang. "Protein Function Prediction Based on the Pretrained Language Model ESM2 and Graph Convolutional Networks." In 2024 IEEE International Symposium on Parallel and Distributed Processing with Applications (ISPA). IEEE, 2024. https://doi.org/10.1109/ispa63168.2024.00242.
Hindi, Iman I., and Gheith A. Abandah. "Improving Arabic Dialect Text Classification by Finetuning A Pretrained Token-Free Large Language Model." In 2025 1st International Conference on Computational Intelligence Approaches and Applications (ICCIAA). IEEE, 2025. https://doi.org/10.1109/icciaa65327.2025.11013550.
Katsumata, Satoru, and Mamoru Komachi. "Stronger Baselines for Grammatical Error Correction Using a Pretrained Encoder-Decoder Model." In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing. Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.aacl-main.83.
Safikhani, Parisa, and David Broneske. "AutoML Meets Hugging Face: Domain-Aware Pretrained Model Selection for Text Classification." In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 4: Student Research Workshop). Association for Computational Linguistics, 2025. https://doi.org/10.18653/v1/2025.naacl-srw.45.
Zhang, Kaiwen, Feiyu Su, Yixiang Huang, Yanming Li, Fengqi Wu, and Yuhan Mao. "The Application of Fine-Tuning on Pretrained Language Model in Information Extraction for Fault Knowledge Graphs." In 2024 9th International Conference on Intelligent Computing and Signal Processing (ICSP). IEEE, 2024. http://dx.doi.org/10.1109/icsp62122.2024.10743881.