
Journal articles on the topic 'Large Language'


Consult the top 50 journal articles for your research on the topic 'Large Language.'


Where available in the metadata, you can download the full text of each publication as a PDF and read its abstract online.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Kumar, Deepak, Amandeep, Pinki Pinki, et al. "Visualization by Natural Language Processing and Large Language Model." International Journal of Research Publication and Reviews 6, no. 6 (2025): 11225–31. https://doi.org/10.55248/gengpi.6.0625.2340.

2

Baral, Elina, and Sagar Shrestha. "Large Vocabulary Continuous Speech Recognition for Nepali Language." International Journal of Signal Processing Systems 8, no. 4 (2020): 68–73. http://dx.doi.org/10.18178/ijsps.8.4.68-73.

Abstract:
Speech Recognition is a widely studied topic for high-resource languages like English and Mandarin. A plethora of publications exist that study the performance of several recognition methods for these languages. However, differences in phonetics, accent, language model, etc. between any two different languages demand a study of speech recognition methodologies and components separately for each language. In this paper, we present a comparative study of popular speech recognition methods for Nepali, a low-resource Indo-Aryan language. We describe our approach to building the phonetic dictiona…
3

Sharma Shria Verma, Dhananjai. "Automated Penetration Testing using Large Language Models." International Journal of Science and Research (IJSR) 13, no. 4 (2024): 1826–31. http://dx.doi.org/10.21275/sr24427043741.

4

Pooley, Jefferson. "Large Language Publishing." KULA: Knowledge Creation, Dissemination, and Preservation Studies 7, no. 1 (2024): 1–11. http://dx.doi.org/10.18357/kula.291.

Abstract:
The AI hype cycle has come for scholarly publishing. This essay argues that the industry’s feverish, if mostly aspirational, embrace of artificial intelligence should be read as the latest installment of an ongoing campaign. Led by Elsevier, commercial publishers have, for about a decade, layered a second business on top of their legacy publishing operations. That business is to mine and process scholars’ works and behavior into prediction products, sold back to universities and research agencies. This article focuses on an offshoot of the big firms’ surveillance-publishing businesses: the post-…
5

Cerf, Vinton G. "Large Language Models." Communications of the ACM 66, no. 8 (2023): 7. http://dx.doi.org/10.1145/3606337.

6

Chauhan, Satyam. "Knowledge Discovery in Databases Utilizing Large Language Models." International Journal of Science and Research (IJSR) 13, no. 10 (2024): 1886–94. http://dx.doi.org/10.21275/ms241026170018.

7

Son, Seok-Bin, Joongheon Kim, Changsik Cho, and Soohyun Park. "Trends in Network Optimization Using Large Language Models." Journal of Korean Institute of Communications and Information Sciences 50, no. 7 (2025): 1073–84. https://doi.org/10.7840/kics.2025.50.7.1073.

8

Mishra, Vinaytosh. "Large Language Models in Medical Education and Quality Concerns." Journal of Quality in Health Care & Economics 6, no. 1 (2023): 1–3. http://dx.doi.org/10.23880/jqhe-16000319.

9

Jain, Migul. "Future of Interacting with Computers and Large Language Models." International Journal of Science and Research (IJSR) 12, no. 10 (2023): 1711–12. http://dx.doi.org/10.21275/sr231023121603.

10

Hong, Lichan, Gregorio Convertino, and Ed Chi. "Language Matters In Twitter: A Large Scale Study." Proceedings of the International AAAI Conference on Web and Social Media 5, no. 1 (2021): 518–21. http://dx.doi.org/10.1609/icwsm.v5i1.14184.

Abstract:
Despite the widespread adoption of Twitter internationally, little research has investigated the differences among users of different languages. In prior research, the natural tendency has been to assume that the behaviors of English users generalize to other language users. We studied 62 million tweets collected over a four-week period and found that more than 100 languages were used. Only half of the tweets were in English (51%). Other popular languages, including Japanese, Portuguese, Indonesian, and Spanish, together accounted for 39% of the tweets. Examining users of the top 10 languages, w…
11

Salem, Nadia, Khawla Al-Tarawneh, Amjad Hudaib, et al. "Generating database schema from requirement specification based on natural language processing and large language model." Computer Research and Modeling 16, no. 7 (2024): 1703–13. https://doi.org/10.20537/2076-7633-2024-16-7-1703-1713.

12

Alostad, Hana. "Large Language Models as Kuwaiti Annotators." Big Data and Cognitive Computing 9, no. 2 (2025): 33. https://doi.org/10.3390/bdcc9020033.

Abstract:
Stance detection for low-resource languages, such as the Kuwaiti dialect, poses a significant challenge in natural language processing (NLP) due to the scarcity of annotated datasets and specialized tools. This study addresses these limitations by evaluating the effectiveness of open large language models (LLMs) in automating stance detection through zero-shot and few-shot prompt engineering, with a focus on the potential of open-source models to achieve performance levels comparable to those of closed-source alternatives. We also highlight the critical distinctions between zero- and few-shot…
13

Sokol, Olena. "Chat-based translation of Slavic languages with large language models." Information Technology and Computer Engineering 21, no. 3 (2024): 43–52. https://doi.org/10.63341/vitce/3.2024.43.

Abstract:
Modern large language models (LLMs) have demonstrated significant advances in machine translation, particularly for Slavic languages that are less commonly represented in traditional translation datasets. This study aimed to evaluate the effectiveness of LLMs (ChatGPT, Claude, and Llama) in translating conversational texts in Slavic languages compared to commercial translators and transformer models. The research utilised the OpenSubtitles2018 dataset to test translations in seven Slavic languages (Ukrainian, Czech, Bulgarian, Russian, Albanian, Macedonian, and Slovak), applying semantic and s…
14

Qarah, Faisal, and Tawfeeq Alsanoosy. "Evaluation of Arabic Large Language Models on Moroccan Dialect." Engineering, Technology & Applied Science Research 15, no. 3 (2025): 22478–85. https://doi.org/10.48084/etasr.10331.

Abstract:
Large Language Models (LLMs) have shown outstanding performance in many Natural Language Processing (NLP) tasks for high-resource languages, especially English, primarily because most of them were trained on widely available text resources. As a result, many low-resource languages, such as Arabic and African languages and their dialects, are not well studied, raising concerns about whether LLMs can perform fairly across them. Therefore, evaluating the performance of LLMs for low-resource languages and diverse dialects is crucial. This study investigated the performance of LLMs in Moroccan Arab…
15

Efrat-Kowalsky, Nour. "Text and the City: How the City Shaped Language." Old World: Journal of Ancient Africa and Eurasia 2, no. 1 (2022): 1–19. http://dx.doi.org/10.1163/26670755-01010013.

Abstract:
This paper presents the first case of dramatic and large-scale loss of linguistic diversity. Language death has been part of our history for as long as languages have been spoken, but in the fourth millennium BCE urbanisation and a growing regional economy caused a decrease in both language and typological diversity on a much larger scale than ever before. The first cities in Mesopotamia had developed writing and administration, which centralised power and disseminated its influence. In particular, standard languages that were used for official purposes over large areas emerged. Written standar…
16

Lorenz, Kilian, Pascal Bürklin, Klemens Schnattinger, et al. "Refinetuning Decentralized Large Language Model for Privacy-Sensitive University Data." Journal of Robotics and Automation Research 6, no. 2 (2025): 1–11. https://doi.org/10.33140/jrar.06.02.01.

Abstract:
This work focuses on refining a decentralized large language model (LLM) tailored for fine-tuning on privacy-sensitive university data. Devolved AI models, designed to operate across multiple distributed nodes, offer a promising solution for handling sensitive information by ensuring data remains localized at its source while collaboratively training a global model. The key challenge addressed in this study is the adaptation and fine-tuning of a decentralized LLM to work effectively with heterogeneous, privacy-restricted datasets typical in university environments, such as student records, resea…
17

D’Alessandro, William, Harry R. Lloyd, and Nathaniel Sharadin. "Large Language Models and Biorisk." American Journal of Bioethics 23, no. 10 (2023): 115–18. http://dx.doi.org/10.1080/15265161.2023.2250333.

18

Noever, David. "Large Language Models for Ciphers." International Journal of Artificial Intelligence & Applications 14, no. 3 (2023): 1–20. http://dx.doi.org/10.5121/ijaia.2023.14301.

Abstract:
This study investigates whether transformer models like ChatGPT (GPT4, MAR2023) can generalize beyond their training data by examining their performance on the novel Cipher Dataset, which scrambles token order. The dataset consists of 654 test cases, and the analysis focuses on 51 text examples and 13 algorithmic choices. Results show that the models perform well on low-difficulty ciphers like Caesar and can unscramble tokens in 77% of the cipher examples. Despite their reliance on training data, the model's ability to generalize outside of token order is surprising, especially when leveraging…
19

Cheon, Hyundeuk. "Do Large Language Models Understand?" CHUL HAK SA SANG : Journal of Philosophical Ideas 90 (November 30, 2023): 75–105. http://dx.doi.org/10.15750/chss.90.202311.003.

20

B, Dhanush. "Chatbot Using Large Language Model." International Journal of Scientific Research in Engineering and Management 8, no. 5 (2024): 1–5. http://dx.doi.org/10.55041/ijsrem34001.

Abstract:
Natural Language Processing has seen remarkable advancement in recent years, particularly with the development of Large Language Models (LLMs). Large Language Models are used to develop human-like conversations. LLMs are a part of Natural Language Processing, which focuses on enabling computers to understand, interpret, and generate human language. Existing chatbot systems do not generate human-like responses. The proposed chatbot system uses the power of Large Language Models to generate more human-like responses, providing…
21

Zhang, Chengyi, Xingyu Wang, and Ziyun Wang. "Large language model in electrocatalysis." Chinese Journal of Catalysis 59 (April 2024): 7–14. http://dx.doi.org/10.1016/s1872-2067(23)64612-1.

22

Shanahan, Murray. "Talking about Large Language Models." Communications of the ACM 67, no. 2 (2024): 68–79. http://dx.doi.org/10.1145/3624724.

Abstract:
Interacting with a contemporary LLM-based conversational agent can create an illusion of being in the presence of a thinking creature. Yet, in their very nature, such systems are fundamentally not like us.
23

Choi, Jinwook. "Large Language Models in Medicine." Healthcare Informatics Research 31, no. 2 (2025): 111–13. https://doi.org/10.4258/hir.2025.31.2.111.

24

Lebed, S. V., D. E. Namiot, E. V. Zubareva, P. V. Khenkin, A. A. Vorobeva, and D. A. Svichkar. "Large Language Models in Cyberattacks." Doklady Mathematics 110, S2 (2024): S510–S520. https://doi.org/10.1134/s1064562425700012.

25

Orrù, Graziella, Giulia Melis, and Giuseppe Sartori. "Large language models and psychiatry." International Journal of Law and Psychiatry 101 (July 2025): 102086. https://doi.org/10.1016/j.ijlp.2025.102086.

26

Vági, Renátó, and István Üveges. "Laws Clearly: Large language models and plain language transformation." Magyar Nyelvőr 148, no. 5 (2024): 696–716. http://dx.doi.org/10.38143/nyr.2024.5.696.

27

Youssef, Alaa, Samantha Stein, Justin Clapp, and David Magnus. "The Importance of Understanding Language in Large Language Models." American Journal of Bioethics 23, no. 10 (2023): 6–7. http://dx.doi.org/10.1080/15265161.2023.2256614.

28

Hamaniuk, Vita A. "The potential of Large Language Models in language education." Educational Dimension 5 (December 9, 2021): 208–10. http://dx.doi.org/10.31812/ed.650.

Abstract:
This editorial explores the potential of Large Language Models (LLMs) in language education. It discusses the role of LLMs in machine translation, the concept of ‘prompt programming’, and the inductive bias of LLMs for abstract textual reasoning. The editorial also highlights using LLMs as creative writing tools and their effectiveness in paraphrasing tasks. It concludes by emphasizing the need for responsible and ethical use of these tools in language education.
29

Tsai, Ching-Chih, Jung-Chih Tsai, Huei-Min Lin, Yi-Qing Li, and Shih-Pang Tseng. "Applying a Large Language Model to Second Language Acquisition." Sensors and Materials 37, no. 7 (2025): 2743. https://doi.org/10.18494/sam5172.

30

Zhou, Hao, Zhijun Wang, Shujian Huang, et al. "MoE-LPR: Multilingual Extension of Large Language Models Through Mixture-of-Experts with Language Priors Routing." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 24 (2025): 26092–100. https://doi.org/10.1609/aaai.v39i24.34805.

Abstract:
Large Language Models (LLMs) are often English-centric due to the disproportionate distribution of languages in their pre-training data. Enhancing non-English language capabilities through post-pretraining often results in catastrophic forgetting of high-resource languages. Previous methods either achieve good expansion with severe forgetting or slight forgetting with poor expansion, indicating the challenge of balancing language expansion while preventing forgetting. In this paper, we propose a method called MoE-LPR (Mixture-of-Experts with Language Priors Routing) to alleviate this problem.
31

Hölldobler, Katrin, Bernhard Rumpe, and Andreas Wortmann. "Software language engineering in the large: towards composing and deriving languages." Computer Languages, Systems & Structures 54 (December 2018): 386–405. http://dx.doi.org/10.1016/j.cl.2018.08.002.

32

Singh, Pranaydeep, Orphée De Clercq, and Els Lefever. "Distilling Monolingual Models from Large Multilingual Transformers." Electronics 12, no. 4 (2023): 1022. http://dx.doi.org/10.3390/electronics12041022.

Abstract:
Although language modeling has been trending upwards steadily, models available for low-resourced languages are limited to large multilingual models such as mBERT and XLM-RoBERTa, which come with significant overheads for deployment vis-à-vis their model size, inference speeds, etc. We attempt to tackle this problem by proposing a novel methodology to apply knowledge distillation techniques to filter language-specific information from a large multilingual model into a small, fast monolingual model that can often outperform the teacher model. We demonstrate the viability of this methodology on…
33

Pang, Yafei, Yuyan He, and David Marlow. "Employing Large Language Models in College English Teaching: Insights from China." International Journal of Social Sciences, Language and Linguistics 5, no. 6 (2025): 16–21. https://doi.org/10.55640/ijsll-05-06-03.

Abstract:
This study focuses on the innovative application of large language models (LLMs) in college spoken English teaching. Based on the ever-evolving principles of Human-Computer Interaction (HCI), it systematically analyzes the functional positioning and implementation paths of LLMs in three traditionally core educational roles: language consultant, conversational language partner, and oral production assessor. Through case studies of prompt engineering in teaching scenarios such as listening and speaking training, situational dialogue, and real-time feedback, this paper explores the practical valu…
34

Kusters, Annelies. "Language ideologies in the shared signing community of Adamorobe." Language in Society 43, no. 2 (2014): 139–58. http://dx.doi.org/10.1017/s0047404514000013.

Abstract:
This article analyzes language ideologies with regard to sign language in Adamorobe, a “shared signing community” in southern Ghana. Adamorobe Sign Language (AdaSL) is a “shared sign language,” used by all deaf people and a large number of hearing Akan-speaking people. Deaf schoolchildren from Adamorobe attend a school where Ghanaian Sign Language (GSL) is taught. Hearing interviewees have experiential knowledge that everything can be said in AdaSL, emphasise the shared roots of AdaSL and Akan, and call AdaSL “natural.” Deaf interlocutors describe Akan, AdaSL, and GSL as three distin…
35

Blinn, Andrew, Xiang Li, June Hyung Kim, and Cyrus Omar. "Statically Contextualizing Large Language Models with Typed Holes." Proceedings of the ACM on Programming Languages 8, OOPSLA2 (2024): 468–98. http://dx.doi.org/10.1145/3689728.

Abstract:
Large language models (LLMs) have reshaped the landscape of program synthesis. However, contemporary LLM-based code completion systems often hallucinate broken code because they lack appropriate code context, particularly when working with definitions that are neither in the training data nor near the cursor. This paper demonstrates that tighter integration with the type and binding structure of the programming language in use, as exposed by its language server, can help address this contextualization problem in a token-efficient manner. In short, we contend that AIs need IDEs, too! In particu…
36

Hammarström, Harald. "A full-scale test of the language farming dispersal hypothesis." Diachronica 27, no. 2 (2010): 197–213. http://dx.doi.org/10.1075/dia.27.2.02ham.

Abstract:
One attempt at explaining why some language families are large (while others are small) is the hypothesis that the families that are now large became large because their ancestral speakers had a technological advantage, most often agriculture. Variants of this idea are referred to as the Language Farming Dispersal Hypothesis. Previously, detailed language family studies have uncovered various supporting examples and counterexamples to this idea. In the present paper I weigh the evidence from ALL attested language families. For each family, I use the number of member languages as a measure of c…
37

Douglas, Ross, and Sevil Aliyeva. "Challenges of teaching English as a language online in large groups." Scientific Bulletin 2 (2020): 7–11. http://dx.doi.org/10.54414/jubr3867.

Abstract:
The course of instruction treats English as a language, not a subject. Bearing this in mind, the teachers of the Center of Working Languages have always used modern approaches to teaching the English language. Briefly, these approaches include extensive interaction between students and creating situations in which students are stimulated to produce language (both written and oral). In addition, to learn a language more efficiently it is crucial to integrate all the skills (reading, writing, listening, and speaking) in your teaching.
38

Sagi, Sriram. "Advancing AI: Enhancing Large Language Model Performance through GPU Optimization Techniques." International Journal of Science and Research (IJSR) 13, no. 3 (2024): 630–33. http://dx.doi.org/10.21275/sr24309100709.

39

Lee, Youn-Kyoung. "Integrating the Large Language Model into L2 Pronunciation Teaching and Learning." Studies in Modern Grammar 125 (March 31, 2025): 129–46. https://doi.org/10.14342/smog.2025.125.129.

40

Martínez, Gonzalo, Javier Conde, Elena Merino-Gómez, et al. "Establishing vocabulary tests as a benchmark for evaluating large language models." PLOS ONE 19, no. 12 (2024): e0308259. https://doi.org/10.1371/journal.pone.0308259.

Abstract:
Vocabulary tests, once a cornerstone of language modeling evaluation, have been largely overlooked in the current landscape of Large Language Models (LLMs) like Llama 2, Mistral, and GPT. While most LLM evaluation benchmarks focus on specific tasks or domain-specific knowledge, they often neglect the fundamental linguistic aspects of language understanding. In this paper, we advocate for the revival of vocabulary tests as a valuable tool for assessing LLM performance. We evaluate seven LLMs using two vocabulary test formats across two languages and uncover surprising gaps in their lexical know…
41

Meroz, Yoram. "Large-scale Vocabulary Surveys as a Tool for Linguistic Stratigraphy: A California Case Study." Annual Meeting of the Berkeley Linguistics Society 39, no. 1 (2013): 182. http://dx.doi.org/10.3765/bls.v39i1.3880.

Abstract:
In lieu of an abstract, here is a brief excerpt: This paper presents the results of a comprehensive search for lookalikes among words for plants and animals in California languages. Words in this domain are typically more prone to borrowing than basic vocabulary, especially when speakers of a language move and encounter different species. A survey of such vocabulary is especially suited to identifying and highlighting old language contact. Since a language may be spoken far away from where its ancestor was once in contact with another language, and since words may spread far from their source t…
42

Oğul, İskender Ülgen, Fatih Soygazi, and Belgin Ergenç Bostanoğlu. "TurkMedNLI: a Turkish medical natural language inference dataset through large language model based translation." PeerJ Computer Science 11 (January 30, 2025): e2662. https://doi.org/10.7717/peerj-cs.2662.

Abstract:
Natural language inference (NLI) is a subfield of natural language processing (NLP) that aims to identify the contextual relationship between premise and hypothesis sentences. While high-resource languages like English benefit from robust and rich NLI datasets, creating similar datasets for low-resource languages is challenging due to the cost and complexity of manual annotation. Although translation of existing datasets offers a practical solution, direct translation of domain-specific datasets presents unique challenges, particularly in handling abbreviations, metric conversions, and cultura…
43

Xia, Fei, Carrie Lewis, and William Lewis. "Language ID for a Thousand Languages." LSA Annual Meeting Extended Abstracts 1 (May 2, 2010): 25. http://dx.doi.org/10.3765/exabs.v0i0.504.

Abstract:
ODIN, the Online Database of INterlinear text, is a resource built over language data harvested from linguistic documents (Lewis, 2006). It currently holds approximately 190,000 instances of Interlinear Glossed Text (IGT) from over 1100 languages, automatically extracted from nearly 3000 documents crawled from the Web. A crucial step in building ODIN is identifying the languages of extracted IGT, a challenging task due to the large number of languages and the lack of training data. We demonstrate that a coreference approach to the language ID task significantly outperforms existing algorithms…
44

Lege, Ryan, Euan Bonner, and Takako Aikawa. "Pecha, a language practice peer: Guiding language learning interactions through large language models." Technology in Language Teaching & Learning 6, no. 3 (2024): 1716. https://doi.org/10.29140/tltl.v6n3.1716.

Abstract:
The interaction hypothesis of second language acquisition (Long, 1981) states that negotiated interaction is necessary for language development. In many language learning contexts, educators and stakeholders seek to provide opportunities for learners to engage in meaningful real-life interactions that help them build linguistic, semantic, and rhetorical competence. However, the opportunities provided for interaction can vary in their degree of effectiveness and may only sometimes lead to increased language ability. If these interactions are scaffolded correctly, they can be tuned to maximize t…
45

Kumar, Deepak, Yousef Anees AbuHashem, and Zakir Durumeric. "Watch Your Language: Investigating Content Moderation with Large Language Models." Proceedings of the International AAAI Conference on Web and Social Media 18 (May 28, 2024): 865–78. http://dx.doi.org/10.1609/icwsm.v18i1.31358.

Abstract:
Large language models (LLMs) have exploded in popularity due to their ability to perform a wide array of natural language tasks. Text-based content moderation is one LLM use case that has received recent enthusiasm; however, there is little research investigating how LLMs can help in content moderation settings. In this work, we evaluate a suite of commodity LLMs on two common content moderation tasks: rule-based community moderation and toxic content detection. For rule-based community moderation, we instantiate 95 subcommunity-specific LLMs by prompting GPT-3.5 with rules from 95 Reddit subc…
46

Beurer-Kellner, Luca, Marc Fischer, and Martin Vechev. "Prompting Is Programming: A Query Language for Large Language Models." Proceedings of the ACM on Programming Languages 7, PLDI (2023): 1946–69. http://dx.doi.org/10.1145/3591300.

Abstract:
Large language models have demonstrated outstanding performance on a wide range of tasks such as question answering and code generation. On a high level, given an input, a language model can be used to automatically complete the sequence in a statistically-likely way. Based on this, users prompt these models with language instructions or examples, to implement a variety of downstream tasks. Advanced prompting methods can even imply interaction between the language model, a user, and external tools such as calculators. However, to obtain state-of-the-art performance or adapt language models for…
47

Kulkarni, Chinmay Shripad. "The Evolution of Large Language Models in Natural Language Understanding." Journal of Artificial Intelligence, Machine Learning and Data Science 1, no. 4 (2023): 49–53. http://dx.doi.org/10.51219/jaimld/chinmay-shripad-kulkarni/28.

48

Du, Mengnan, Fengxiang He, Na Zou, Dacheng Tao, and Xia Hu. "Shortcut Learning of Large Language Models in Natural Language Understanding." Communications of the ACM 67, no. 1 (2023): 110–20. http://dx.doi.org/10.1145/3596490.

49

Dong, Jiarui. "SignifAI: French sign language translation based on deep learning and large language models." Applied and Computational Engineering 95, no. 1 (2024): 230–47. http://dx.doi.org/10.54254/2755-2721/95/20241748.

Abstract:
In this project, we propose SignifAI, an AI-powered software tool that translates French Sign Language into spoken French. The goal of SignifAI is to remove the language barrier that separates the deaf and hearing communities. The translation process proves to be a major challenge for existing methods, as most of them rely on traditional Natural Language Processing (NLP) methods which solve the task in a similar way as spoken languages (e.g., paying more attention to the logics and reasoning). The drastic syntactical difference between sign and spoken languages, however, makes such approach…
50

Agüera y Arcas, Blaise. "Do Large Language Models Understand Us?" Daedalus 151, no. 2 (2022): 183–97. http://dx.doi.org/10.1162/daed_a_01909.

Abstract:
Large language models (LLMs) represent a major advance in artificial intelligence and, in particular, toward the goal of human-like artificial general intelligence. It is sometimes claimed, though, that machine learning is “just statistics,” hence that, in this grander ambition, progress in AI is illusory. Here I take the contrary view that LLMs have a great deal to teach us about the nature of language, understanding, intelligence, sociality, and personhood. Specifically: statistics do amount to understanding, in any falsifiable sense. Furthermore, much of what we consider intelligen…