Journal articles on the topic "Pretrained language model"
Create accurate references in APA, MLA, Chicago, Harvard, and many other styles
Check out the top 50 journal articles on the topic "Pretrained language model".
An "Add to bibliography" button is available next to each work listed. Use it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of a publication as a ".pdf" file and read its annotation online, provided the relevant details are available in the metadata.
Browse journal articles from a wide range of disciplines and compile accurate bibliographies.
Lee, Chanhee, Kisu Yang, Taesun Whang, Chanjun Park, Andrew Matteson, and Heuiseok Lim. "Exploring the Data Efficiency of Cross-Lingual Post-Training in Pretrained Language Models." Applied Sciences 11, no. 5 (2021): 1974. http://dx.doi.org/10.3390/app11051974.
De Coster, Mathieu, and Joni Dambre. "Leveraging Frozen Pretrained Written Language Models for Neural Sign Language Translation." Information 13, no. 5 (2022): 220. http://dx.doi.org/10.3390/info13050220.
Kuwana, Ayato, Atsushi Oba, Ranto Sawai, and Incheon Paik. "Automatic Taxonomy Classification by Pretrained Language Model." Electronics 10, no. 21 (2021): 2656. http://dx.doi.org/10.3390/electronics10212656.
Lee, Eunchan, Changhyeon Lee, and Sangtae Ahn. "Comparative Study of Multiclass Text Classification in Research Proposals Using Pretrained Language Models." Applied Sciences 12, no. 9 (2022): 4522. http://dx.doi.org/10.3390/app12094522.
Wang, Canjun, Zhao Li, Tong Chen, Ruishuang Wang, and Zhengyu Ju. "Research on the Application of Prompt Learning Pretrained Language Model in Machine Translation Task with Reinforcement Learning." Electronics 12, no. 16 (2023): 3391. http://dx.doi.org/10.3390/electronics12163391.
Chen, Zhi, Yuncong Liu, Lu Chen, Su Zhu, Mengyue Wu, and Kai Yu. "OPAL: Ontology-Aware Pretrained Language Model for End-to-End Task-Oriented Dialogue." Transactions of the Association for Computational Linguistics 11 (2023): 68–84. http://dx.doi.org/10.1162/tacl_a_00534.
Xu, Canwen, and Julian McAuley. "A Survey on Model Compression and Acceleration for Pretrained Language Models." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (2023): 10566–75. http://dx.doi.org/10.1609/aaai.v37i9.26255.
Gu, Yang, and Yanke Hu. "Extractive Summarization with Very Deep Pretrained Language Model." International Journal of Artificial Intelligence & Applications 10, no. 02 (2019): 27–32. http://dx.doi.org/10.5121/ijaia.2019.10203.
Qi, Xianglong, Yang Gao, Ruibin Wang, Minghua Zhao, Shengjia Cui, and Mohsen Mortazavi. "Learning High-Order Semantic Representation for Intent Classification and Slot Filling on Low-Resource Language via Hypergraph." Mathematical Problems in Engineering 2022 (September 16, 2022): 1–16. http://dx.doi.org/10.1155/2022/8407713.
Won, Hyun-Sik, Min-Ji Kim, Dohyun Kim, Hee-Soo Kim, and Kang-Min Kim. "University Student Dropout Prediction Using Pretrained Language Models." Applied Sciences 13, no. 12 (2023): 7073. http://dx.doi.org/10.3390/app13127073.
Elazar, Yanai, Nora Kassner, Shauli Ravfogel, et al. "Measuring and Improving Consistency in Pretrained Language Models." Transactions of the Association for Computational Linguistics 9 (2021): 1012–31. http://dx.doi.org/10.1162/tacl_a_00410.
Edman, Lukas, Gabriele Sarti, Antonio Toral, Gertjan van Noord, and Arianna Bisazza. "Are Character-level Translations Worth the Wait? Comparing ByT5 and mT5 for Machine Translation." Transactions of the Association for Computational Linguistics 12 (2024): 392–410. http://dx.doi.org/10.1162/tacl_a_00651.
Lu, Kevin, Aditya Grover, Pieter Abbeel, and Igor Mordatch. "Frozen Pretrained Transformers as Universal Computation Engines." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (2022): 7628–36. http://dx.doi.org/10.1609/aaai.v36i7.20729.
Khan, Vasima, and Tariq Azfar Meenai. "Pretrained Natural Language Processing Model for Intent Recognition (BERT-IR)." Human-Centric Intelligent Systems 1, no. 3-4 (2021): 66. http://dx.doi.org/10.2991/hcis.k.211109.001.
Jawale, Shila S., and S. D. Sawarkar. "Exploiting Emotions via Composite Pretrained Embedding and Ensemble Language Model." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 8s (2023): 362–75. http://dx.doi.org/10.17762/ijritcc.v11i8s.7216.
Li, Jiahuan, Hao Zhou, Shujian Huang, Shanbo Cheng, and Jiajun Chen. "Eliciting the Translation Ability of Large Language Models via Multilingual Finetuning with Translation Instructions." Transactions of the Association for Computational Linguistics 12 (2024): 576–92. http://dx.doi.org/10.1162/tacl_a_00655.
Bear Don’t Walk IV, Oliver J., Tony Sun, Adler Perotte, and Noémie Elhadad. "Clinically relevant pretraining is all you need." Journal of the American Medical Informatics Association 28, no. 9 (2021): 1970–76. http://dx.doi.org/10.1093/jamia/ocab086.
Zhang, Yijia, Tiancheng Zhang, Peng Xie, Minghe Yu, and Ge Yu. "BEM-SM: A BERT-Encoder Model with Symmetry Supervision Module for Solving Math Word Problem." Symmetry 15, no. 4 (2023): 916. http://dx.doi.org/10.3390/sym15040916.
Reddy, K. Sahit, N. Ragavenderan, Vasanth K., Ganesh N. Naik, Vishalakshi Prabhu H, and Nagaraja G. S. "MedicalBERT: enhancing biomedical natural language processing using pretrained BERT-based model." IAES International Journal of Artificial Intelligence (IJ-AI) 14, no. 3 (2025): 2367. https://doi.org/10.11591/ijai.v14.i3.pp2367-2378.
Zhang, Wenbo, Xiao Li, Yating Yang, Rui Dong, and Gongxu Luo. "Keeping Models Consistent between Pretraining and Translation for Low-Resource Neural Machine Translation." Future Internet 12, no. 12 (2020): 215. http://dx.doi.org/10.3390/fi12120215.
Javed, Tahir, Sumanth Doddapaneni, Abhigyan Raman, et al. "Towards Building ASR Systems for the Next Billion Users." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (2022): 10813–21. http://dx.doi.org/10.1609/aaai.v36i10.21327.
Kotei, Evans, and Ramkumar Thirunavukarasu. "A Systematic Review of Transformer-Based Pre-Trained Language Models through Self-Supervised Learning." Information 14, no. 3 (2023): 187. http://dx.doi.org/10.3390/info14030187.
Joukhadar, Alaa, Nada Ghneim, and Ghaida Rebdawi. "Impact of Using Bidirectional Encoder Representations from Transformers (BERT) Models for Arabic Dialogue Acts Identification." Ingénierie des systèmes d'information 26, no. 5 (2021): 469–75. http://dx.doi.org/10.18280/isi.260506.
Dudaš, Adam, and Jarmila Skrinarova. "Natural Language Processing in Translation of Relational Languages." IPSI Transactions on Internet Research 19, no. 01 (2023): 17–23. http://dx.doi.org/10.58245/ipsi.tir.2301.04.
Pan, Yu, Ye Yuan, Yichun Yin, et al. "Preparing Lessons for Progressive Training on Language Models." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 17 (2024): 18860–68. http://dx.doi.org/10.1609/aaai.v38i17.29851.
Zheng, Zhe, Xin-Zheng Lu, Ke-Yin Chen, Yu-Cheng Zhou, and Jia-Rui Lin. "Pretrained domain-specific language model for natural language processing tasks in the AEC domain." Computers in Industry 142 (November 2022): 103733. http://dx.doi.org/10.1016/j.compind.2022.103733.
Yigzaw, Netsanet, Million Meshesha, and Chala Diriba. "A Generic Approach towards Amharic Sign Language Recognition." Advances in Human-Computer Interaction 2022 (September 22, 2022): 1–11. http://dx.doi.org/10.1155/2022/1112169.
Yan, Ming, Chenliang Li, Bin Bi, Wei Wang, and Songfang Huang. "A Unified Pretraining Framework for Passage Ranking and Expansion." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 5 (2021): 4555–63. http://dx.doi.org/10.1609/aaai.v35i5.16584.
Li, Juncai, and Xiaofei Jiang. "Mol-BERT: An Effective Molecular Representation with BERT for Molecular Property Prediction." Wireless Communications and Mobile Computing 2021 (September 2, 2021): 1–7. http://dx.doi.org/10.1155/2021/7181815.
Khilji, Muhammad Danial. "Features Matching using Natural Language Processing." International Journal on Cybernetics & Informatics 12, no. 2 (2023): 251–60. http://dx.doi.org/10.5121/ijci.2023.120218.
Zhu, Fangqi, Jun Gao, Changlong Yu, et al. "A Generative Approach for Script Event Prediction via Contrastive Fine-Tuning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 11 (2023): 14056–64. http://dx.doi.org/10.1609/aaai.v37i11.26645.
Nooralahzadeh, Farhad, and Rico Sennrich. "Improving the Cross-Lingual Generalisation in Visual Question Answering." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 11 (2023): 13419–27. http://dx.doi.org/10.1609/aaai.v37i11.26574.
Shu, Peng, and Sun Cuiqin. "A Statistical English Syntax Analysis Model Based on Linguistic Evaluation Information." Security and Communication Networks 2022 (July 30, 2022): 1–7. http://dx.doi.org/10.1155/2022/3766417.
Zhang, Dongqiu, and Wenkui Li. "An Improved Math Word Problem (MWP) Model Using Unified Pretrained Language Model (UniLM) for Pretraining." Computational Intelligence and Neuroscience 2022 (July 14, 2022): 1–9. http://dx.doi.org/10.1155/2022/7468286.
Yang, Jinyu, Ruijia Wang, Cheng Yang, et al. "Harnessing Language Model for Cross-Heterogeneity Graph Knowledge Transfer." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 12 (2025): 13026–34. https://doi.org/10.1609/aaai.v39i12.33421.
Kim, Boeun, Dohaeng Lee, Damrin Kim, et al. "Generative Model Using Knowledge Graph for Document-Grounded Conversations." Applied Sciences 12, no. 7 (2022): 3367. http://dx.doi.org/10.3390/app12073367.
Yu, Hyunwook, Yejin Cho, Geunchul Park, and Mucheol Kim. "KRongBERT: Enhanced factorization-based morphological approach for the Korean pretrained language model." Information Processing & Management 62, no. 3 (2025): 104072. https://doi.org/10.1016/j.ipm.2025.104072.
Zhang, Weihong, Fan Hu, Wang Li, and Peng Yin. "Does protein pretrained language model facilitate the prediction of protein–ligand interaction?" Methods 219 (November 2023): 8–15. http://dx.doi.org/10.1016/j.ymeth.2023.08.016.
Lee, Jaeseong, Dohyeon Lee, and Seung-won Hwang. "Script, Language, and Labels: Overcoming Three Discrepancies for Low-Resource Language Specialization." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 11 (2023): 13004–13. http://dx.doi.org/10.1609/aaai.v37i11.26528.
Delgadillo, Josiel, Johnson Kinyua, and Charles Mutigwe. "FinSoSent: Advancing Financial Market Sentiment Analysis through Pretrained Large Language Models." Big Data and Cognitive Computing 8, no. 8 (2024): 87. http://dx.doi.org/10.3390/bdcc8080087.
Guo, Jianyu, Jingnan Chen, Li Ren, Huanlai Zhou, Wenbo Xu, and Haitao Jia. "Constructing Chinese taxonomy trees from understanding and generative pretrained language models." PeerJ Computer Science 10 (October 3, 2024): e2358. http://dx.doi.org/10.7717/peerj-cs.2358.
Ahuir, Vicent, Lluís-F. Hurtado, José Ángel González, and Encarna Segarra. "NASca and NASes: Two Monolingual Pre-Trained Models for Abstractive Summarization in Catalan and Spanish." Applied Sciences 11, no. 21 (2021): 9872. http://dx.doi.org/10.3390/app11219872.
Mallappa, Satishkumar, Dhandra B.V., and Gururaj Mukarambi. "Script Identification from Camera Captured Indian Document Images with CNN Model." ICTACT Journal on Soft Computing 14, no. 2 (2023): 3232–36. http://dx.doi.org/10.21917/ijsc.2023.0453.
Zhu, Beier, Yulei Niu, Saeil Lee, Minhoe Hur, and Hanwang Zhang. "Debiased Fine-Tuning for Vision-Language Models by Prompt Regularization." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 3 (2023): 3834–42. http://dx.doi.org/10.1609/aaai.v37i3.25496.
Chen, Guanlin, Zhao Cheng, Qi Lu, Wenyong Weng, and Wujian Yang. "Named Entity Recognition of Hazardous Chemical Risk Information Based on Multihead Self-Attention Mechanism and BERT." Wireless Communications and Mobile Computing 2022 (July 7, 2022): 1–8. http://dx.doi.org/10.1155/2022/8300672.
Joshi, Herat, and Shenson Joseph. "ULMFiT: Universal Language Model Fine-Tuning for Text Classification." International Journal of Advanced Medical Sciences and Technology 4, no. 6 (2024): 1–9. http://dx.doi.org/10.54105/ijamst.e3049.04061024.
Budige, Usharani, and Srikar Goud Konda. "Text to Image Generation by Using Stable Diffusion Model with Variational Autoencoder Decoder." International Journal for Research in Applied Science and Engineering Technology 11, no. 10 (2023): 514–19. http://dx.doi.org/10.22214/ijraset.2023.56024.
Alrashidi, Bedour, Amani Jamal, and Ali Alkhathlan. "Abusive Content Detection in Arabic Tweets Using Multi-Task Learning and Transformer-Based Models." Applied Sciences 13, no. 10 (2023): 5825. http://dx.doi.org/10.3390/app13105825.
Jiang, Peihai, Xixiang Lyu, Yige Li, and Jing Ma. "Backdoor Token Unlearning: Exposing and Defending Backdoors in Pretrained Language Models." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 23 (2025): 24285–93. https://doi.org/10.1609/aaai.v39i23.34605.