Table of Contents
A selection of scholarly literature on the topic "Large Language Models (LLM)"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Browse the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Large Language Models (LLM)".
Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the citation style you require: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a PDF and read its online annotation, provided the relevant parameters are available in the metadata.
Journal articles on the topic "Large Language Models (LLM)"
Fang, Meng, Shilong Deng, Yudi Zhang, Zijing Shi, Ling Chen, Mykola Pechenizkiy, and Jun Wang. "Large Language Models Are Neurosymbolic Reasoners". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 16 (March 24, 2024): 17985–93. http://dx.doi.org/10.1609/aaai.v38i16.29754.
Wang, Runze, Mingqi Yang, and Yanming Shen. "Bridging Molecular Graphs and Large Language Models". Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 20 (April 11, 2025): 21234–42. https://doi.org/10.1609/aaai.v39i20.35422.
Mochihashi, Daichi. "Large Language Models (LLM) and Robotics". Journal of the Robotics Society of Japan 40, no. 10 (2022): 863–66. http://dx.doi.org/10.7210/jrsj.40.863.
Devyatkin, Dmitry A., Vladimir A. Salimovsky, Natalia V. Chudova, Anastasia A. Ryzhova, and Oleg G. Grigoriev. "Large language models and speech genre systematicity". International Journal "Speech Genres" 20, no. 1 (45) (February 21, 2025): 6–23. https://doi.org/10.18500/2311-0740-2025-20-1-45-6-23.
Yang, Jidong. "Large language models privacy and security". Applied and Computational Engineering 76, no. 1 (July 16, 2024): 177–88. http://dx.doi.org/10.54254/2755-2721/76/20240584.
Shanahan, Murray. "Talking about Large Language Models". Communications of the ACM 67, no. 2 (January 25, 2024): 68–79. http://dx.doi.org/10.1145/3624724.
Liu, Yuxin. "Attention is All Large Language Model Need". ITM Web of Conferences 73 (2025): 02025. https://doi.org/10.1051/itmconf/20257302025.
Ma, Ziyang, Guanrou Yang, Yifan Yang, Zhifu Gao, Jiaming Wang, Zhihao Du, Fan Yu, et al. "Speech Recognition Meets Large Language Model: Benchmarking, Models, and Exploration". Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 23 (April 11, 2025): 24840–48. https://doi.org/10.1609/aaai.v39i23.34666.
Zelenkov, Yuri A. "Knowledge management in organization and the large language models". Russian Management Journal 22, no. 3 (2024): 573–601. https://doi.org/10.21638/spbu18.2024.309.
Martínez, Gonzalo, Javier Conde, Elena Merino-Gómez, Beatriz Bermúdez-Margaretto, José Alberto Hernández, Pedro Reviriego, and Marc Brysbaert. "Establishing vocabulary tests as a benchmark for evaluating large language models". PLOS ONE 19, no. 12 (December 12, 2024): e0308259. https://doi.org/10.1371/journal.pone.0308259.
Dissertations on the topic "Large Language Models (LLM)"
Naqvi, Syed Muhammad Raza. "Exploration des LLM et de l'XAI sémantique pour les capacités des robots industriels et les connaissances communes en matière de fabrication". Electronic Thesis or Diss., Université de Toulouse (2023-....), 2025. http://www.theses.fr/2025TLSEP014.
In Industry 4.0, advanced manufacturing is vital in shaping future factories, enabling enhanced planning, scheduling, and control. The ability to adapt production lines swiftly in response to customer demands or unexpected situations is essential to enhance the future of manufacturing. While AI is emerging as a solution, industries still rely on human expertise due to trust issues and a lack of transparency in AI decisions. Explainable AI integrating commonsense knowledge related to manufacturing is crucial for making AI decisions understandable and trustworthy. Within this context, we propose the S-XAI framework, an integrated solution combining machine specifications with Manufacturing Commonsense Knowledge (MCSK) to provide explainable and transparent decision-making. The focus is on providing real-time machine capabilities to ensure precise decision-making while simultaneously explaining the decision-making process to all involved stakeholders. Accordingly, the first objective was formalizing machine specifications, including capabilities, capacities, functions, quality, and process characteristics, focusing on robotics. To do so, we created a Robot Capability Ontology (RCO) formalizing all relevant aspects of machine specifications, such as Capability, Capacity, Function, Quality, and Process Characteristics. On top of this formalization, the RCO allows manufacturing stakeholders to capture robotic capabilities described in specification manuals (advertised capabilities) and compare them with real-world performance (operational capabilities). The RCO is based on the Machine Service Description Language, a domain reference ontology created for manufacturing services, and is aligned with the Basic Formal Ontology, Industrial Foundry Ontology, Information Artifact Ontology, and Relations Ontology. The second objective was the formalization of MCSK. We introduce MCSK and present a methodology for identifying it, starting with recognizing different commonsense knowledge patterns in manufacturing and aligning them with manufacturing concepts. Extracting MCSK in a usable form is challenging, so our approach structures MCSK into natural language (NL) statements utilizing LLMs to facilitate rule-based reasoning, thereby enhancing decision-making capabilities. The third and final objective is to propose an S-XAI framework utilizing the RCO and MCSK to assess whether existing machines can perform specific tasks and to generate understandable NL explanations. This was achieved by integrating the RCO, which provides operational capabilities like repeatability and precision, with MCSK, which outlines the process requirements. By utilizing MCSK-based semantic reasoning, the S-XAI system seamlessly provides NL explanations that detail each logic and outcome. In the S-XAI framework, a neural network (NN) predicts the operational capabilities of robots, while symbolic AI incorporates these predictions within an MCSK-based reasoning system grounded in the RCO. This hybrid setup maximizes the strengths of each AI system and ensures that predictions support a transparent decision-making process. Additionally, S-XAI enhances the interpretability of NN predictions through XAI techniques such as LIME, SHAP, and PDP, clarifying NN predictions and enabling detailed insights for better calibration and proactive management, ultimately fostering a resilient and informed manufacturing environment.
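The hybrid check described here, rule-based reasoning over predicted operational capabilities, can be pictured as a small feasibility test that also emits its own explanation. Below is a minimal sketch in Python; the capability names, thresholds, and the `assess_task` helper are hypothetical illustrations of the idea, not code from the thesis.

```python
# Minimal sketch of the feasibility-and-explanation step described in the
# abstract: predicted operational capabilities are checked against process
# requirements, and each rule evaluation is rendered as a natural-language
# explanation. All names and values are hypothetical.

def assess_task(capabilities: dict[str, float],
                requirements: dict[str, float]) -> tuple[bool, list[str]]:
    """Return (feasible, explanations) for one robot/task pair.

    capabilities: operational values predicted for the robot, e.g. by a
                  neural network: {"repeatability_mm": 0.03, "payload_kg": 4.2}.
    requirements: thresholds the process demands; here, repeatability must
                  be <= its limit and every other capability >= its limit.
    """
    feasible, explanations = True, []
    for name, required in requirements.items():
        actual = capabilities.get(name)
        if actual is None:
            feasible = False
            explanations.append(f"No operational value known for '{name}'.")
            continue
        smaller_is_better = name.startswith("repeatability")
        ok = actual <= required if smaller_is_better else actual >= required
        feasible = feasible and ok
        explanations.append(
            f"{name}: robot offers {actual}, task requires "
            f"{'<=' if smaller_is_better else '>='} {required} -> "
            f"{'satisfied' if ok else 'violated'}."
        )
    return feasible, explanations

feasible, why = assess_task({"repeatability_mm": 0.03, "payload_kg": 4.2},
                            {"repeatability_mm": 0.05, "payload_kg": 5.0})
print("Task feasible:", feasible)
print("\n".join(why))
```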
Labeau, Matthieu. "Neural language models: Dealing with large vocabularies". Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS313/document.
This work investigates practical methods to ease training and improve performances of neural language models with large vocabularies. The main limitation of neural language models is their expensive computational cost: it depends on the size of the vocabulary, with which it grows linearly. Despite several training tricks, the most straightforward way to limit computation time is to limit the vocabulary size, which is not a satisfactory solution for numerous tasks. Most of the existing methods used to train large-vocabulary language models revolve around avoiding the computation of the partition function, which ensures that output scores are normalized into a probability distribution. Here, we focus on sampling-based approaches, including importance sampling and noise contrastive estimation. These methods allow an approximate computation of the partition function. After examining the mechanism of self-normalization in noise contrastive estimation, we first propose to improve its efficiency with solutions that are adapted to the inner workings of the method and experimentally show that they considerably ease training. Our second contribution is to expand on a generalization of several sampling-based objectives as Bregman divergences, in order to experiment with new objectives. We use Beta divergences to derive a set of objectives from which noise contrastive estimation is a particular case. Finally, we aim at improving performances on full-vocabulary language models by augmenting output word representations with subwords. We experiment on a Czech dataset and show that using character-based representations besides word embeddings for output representations gives better results. We also show that reducing the size of the output look-up table improves results even more.
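To make the sampling idea concrete, the following is a minimal NumPy sketch of the noise contrastive estimation objective the abstract refers to: the model is trained to tell the observed word apart from k words drawn from a known noise distribution q, so the full softmax over the vocabulary is never computed. The scores and probabilities are toy values, and treating model scores as log-probabilities relies on the self-normalization assumption the abstract mentions; this illustrates the general technique, not the author's code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def nce_loss(score_target, scores_noise, q_target, q_noise, k):
    """Negative log-likelihood of the binary 'data vs. noise' classifier.

    score_*: unnormalized model scores s(w) for the target word and the
             k sampled noise words (treated as log-probabilities under
             the self-normalization assumption).
    q_*:     probabilities of those words under the noise distribution q.
    """
    # P(data | w) = sigmoid(s(w) - log(k * q(w)))
    log_p_data = np.log(sigmoid(score_target - np.log(k * q_target)))
    log_p_noise = np.log(1.0 - sigmoid(scores_noise - np.log(k * q_noise)))
    return -(log_p_data + log_p_noise.sum())

# Toy usage: one observed word and k=5 noise words sampled from q.
rng = np.random.default_rng(0)
k = 5
loss = nce_loss(score_target=2.1,
                scores_noise=rng.normal(size=k),
                q_target=1e-4,
                q_noise=np.full(k, 1e-4),
                k=k)
print(f"NCE loss: {loss:.3f}")
```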
Schaeffer, Marion. "Towards efficient Knowledge Graph-based Retrieval Augmented Generation for conversational agents". Electronic Thesis or Diss., Normandie, 2025. http://www.theses.fr/2025NORMIR06.
Conversational agents have become widespread in recent years. Today, they have transcended their initial purpose of simulating a conversation with a computer program and are now valuable tools for accessing information and carrying out various tasks, from customer service to personal assistance. With the rise of text-generative models and Large Language Models (LLMs), the capabilities of conversational agents have increased tenfold. However, they are now subject to hallucinations, producing false information. A popular technique to limit the risk of hallucinations is Retrieval Augmented Generation (RAG), which injects knowledge into a text generation process. Such injected knowledge can be drawn from Knowledge Graphs (KGs), which are structured machine-readable knowledge representations. Therefore, we explore Knowledge Graph-based Retrieval Augmented Generation (KG-RAG) to build trusted conversational agents. We demonstrate our approach on a real-world use case for citizen support by building conversational agents for disability management in cities. We first present a history of conversational agents, introducing the approaches implemented over the years and the evaluation techniques. We then define KGs and ontologies, and explore construction and evaluation techniques. As we could not find a directly exploitable KG, our first contribution introduces the Ontology Learning Applied Framework (OLAF). This modular system is built for automated and repeatable KG construction from unstructured text. OLAF integrates linguistic, statistical, and LLM-based techniques to generate Minimum Viable Ontologies for specific domains. Applied to real-world datasets, OLAF demonstrates robust performance through gold-standard evaluations and task-specific Competency Questions. We detail the construction process for a KG about disability management in a French city. We then propose an architecture for KG-RAG systems to enhance information retrieval by aligning user queries with KG structures through entity linking, graph queries, and LLM-based retrieval approaches. We demonstrate our architecture on different use cases, which we evaluate using criteria such as performance, human preference, and environmental impact. While user preferences favor Text-RAG, KG-RAG's reduced computational footprint underscores its potential for sustainable AI practices. Finally, we identify the retriever as the critical part of the architecture. We therefore tackle the retrieval task by exploring embeddings in various contexts, i.e., improving entity linking and retrieval, and providing a caching system. We also propose mechanisms for handling multi-turn conversations. This work establishes a comprehensive framework for KG-RAG systems, combining the semantic depth of KGs with the generative capabilities of LLMs to deliver accurate, contextual, and sustainable conversational agents. Contributions include OLAF for scalable KG construction, a robust KG-RAG pipeline, and embedding-based enhancements for retrieval and interaction quality. By addressing conversational agents' industrial challenges, such as scalability, retrieval precision, and conversational coherence, this research lays the foundation for deploying KG-RAG systems in diverse and specialised domains.
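The retrieval flow this abstract describes (link entities in the query, pull their neighborhood from the KG, inject the facts into the prompt) can be illustrated in a few lines. In this minimal Python sketch, the string-match linker, the in-memory triple list, and the disability-services facts are all hypothetical stand-ins for the thesis's entity linking, graph queries, and city KG.

```python
# Minimal sketch of a KG-RAG retrieval step: entity linking, neighborhood
# retrieval, and prompt assembly. All components are toy stand-ins.
from dataclasses import dataclass

@dataclass
class Triple:
    subject: str
    predicate: str
    obj: str

def link_entities(query: str, kg_labels: dict[str, str]) -> list[str]:
    """Naive entity linking: match KG labels appearing in the query."""
    return [uri for label, uri in kg_labels.items() if label in query.lower()]

def retrieve_neighborhood(entity: str, kg: list[Triple]) -> list[Triple]:
    """Collect triples where the linked entity is subject or object."""
    return [t for t in kg if entity in (t.subject, t.obj)]

def build_prompt(query: str, facts: list[Triple]) -> str:
    context = "\n".join(f"{t.subject} {t.predicate} {t.obj}." for t in facts)
    return f"Answer using only these facts:\n{context}\n\nQuestion: {query}"

# Toy KG mirroring the citizen-support use case (hypothetical facts).
kg = [Triple("ParkingCard", "issuedBy", "CityHall"),
      Triple("ParkingCard", "requiresDocument", "DisabilityCertificate")]
labels = {"parking card": "ParkingCard"}

query = "How do I get a parking card?"
entities = link_entities(query, labels)
facts = [t for e in entities for t in retrieve_neighborhood(e, kg)]
print(build_prompt(query, facts))  # prompt to pass to the LLM
```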
Zervakis, Georgios. "Enriching large language models with semantic lexicons and analogies". Electronic Thesis or Diss., Université de Lorraine, 2023. http://www.theses.fr/2023LORR0039.
Recent advances in deep learning and neural networks have made it possible to address complex natural language processing tasks, which find application in a plethora of real-world problems ranging from smart assistants in mobile devices to the prediction of cancer. Nonetheless, modern systems based on these frameworks exhibit various limitations that may compromise their performance and trustworthiness, render them unfair towards minorities, or subject them to privacy leakage. It is our belief that integrating symbolic knowledge and reasoning into the deep learning framework is a necessary step towards addressing the aforementioned limitations. For example, lexical resources can enrich deep neural networks with semantic or syntactic knowledge, and logical rules can provide learning and reasoning mechanisms. Therefore, the scope of this thesis is to develop and evaluate ways of integrating different types of symbolic knowledge and reasoning into a widely used language model, Bidirectional Encoder Representations from Transformers (BERT). In a first stage, we consider retrofitting, a simple and popular technique for refining distributional word embeddings based on relations coming from a semantic lexicon. Inspired by this technique, we present two methods for incorporating this knowledge into BERT contextualized embeddings. We evaluate these methods on three biomedical datasets for relation extraction and one movie review dataset for sentiment analysis, and show that they do not substantially impact the performance for these tasks. Furthermore, we conduct a qualitative analysis to provide further insights on this negative result. In a second stage, we integrate analogical reasoning with BERT as a means to improve its performance on the target sense verification task, and make it more robust. To do so, we reformulate target sense verification as an analogy detection task. We present a hybrid model that combines BERT to encode the input data into quadruples and a convolutional neural classifier to decide whether they constitute valid analogies. We test our system on a benchmark dataset, and show that it can outperform existing approaches. Our empirical study shows the importance of the input encoding for BERT, and how this dependence gets alleviated by integrating the axiomatic properties of analogies during training, while preserving performance and improving robustness.
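The retrofitting technique the first stage builds on (Faruqui et al., 2015) has a compact closed-form update: each vector is pulled toward its lexicon neighbors while staying anchored to its original distributional embedding. Here is a minimal NumPy sketch of that update for static vectors; the thesis's contribution adapts the idea to BERT's contextualized embeddings, so this is background illustration only.

```python
import numpy as np

# Retrofitting sketch: iteratively update each word vector toward the
# average of its semantic-lexicon neighbors while keeping it close to
# the original embedding. Weights follow the original paper: alpha = 1
# for the anchor term, beta = 1/degree for each neighbor.

def retrofit(embeddings: dict[str, np.ndarray],
             lexicon: dict[str, list[str]],
             iterations: int = 10,
             alpha: float = 1.0) -> dict[str, np.ndarray]:
    new = {w: v.copy() for w, v in embeddings.items()}
    for _ in range(iterations):
        for word, neighbors in lexicon.items():
            nbrs = [n for n in neighbors if n in new]
            if word not in new or not nbrs:
                continue
            beta = 1.0 / len(nbrs)
            # Weighted average of the original vector and the current
            # vectors of the lexicon neighbors.
            total = alpha * embeddings[word] + beta * sum(new[n] for n in nbrs)
            new[word] = total / (alpha + beta * len(nbrs))
    return new

# Toy usage: two synonyms drift toward each other across iterations.
emb = {"happy": np.array([1.0, 0.0]), "glad": np.array([0.0, 1.0])}
lex = {"happy": ["glad"], "glad": ["happy"]}
print(retrofit(emb, lex)["happy"])
```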
Chadha, Vikrampal. "Simulation of large-scale system-level models". Thesis, This resource online, 1994. http://scholar.lib.vt.edu/theses/available/etd-12162009-020334/.
Bughio, Kulsoom Saima. "IoMT security: A semantic framework for vulnerability detection in remote patient monitoring". Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2024. https://ro.ecu.edu.au/theses/2841.
Hittner, Brian Edward. "Rendering large-scale terrain models and positioning objects in relation to 3D terrain". Thesis, Monterey, Calif.: Springfield, Va.: Naval Postgraduate School; Available from National Technical Information Service, 2003. http://library.nps.navy.mil/uhtbin/hyperion-image/03Dec%5FHittner.pdf.
Thesis advisor(s): Don Brutzman, Curt Blais. Includes bibliographical references (p. 117-118). Also available online.
Kropff, Emilio. "Statistical and dynamical properties of large cortical network models: insights into semantic memory and language". Doctoral thesis, SISSA, 2007. http://hdl.handle.net/20.500.11767/4639.
Zhao, Ying. "Effective Authorship Attribution in Large Document Collections". RMIT University, Computer Science and Information Technology, 2008. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20080730.162501.
Pan, Bi-Yu. "Hierarchical test generation for VHDL behavioral models". Thesis, This resource online, 1992. http://scholar.lib.vt.edu/theses/available/etd-09052009-040449/.
Books on the topic "Large Language Models (LLM)"
Amaratunga, Thimira. Understanding Large Language Models. Berkeley, CA: Apress, 2023. http://dx.doi.org/10.1007/979-8-8688-0017-7.
Martra, Pere. Large Language Models Projects. Berkeley, CA: Apress, 2024. http://dx.doi.org/10.1007/979-8-8688-0515-8.
Kucharavy, Andrei, Octave Plancherel, Valentin Mulder, Alain Mermoud, and Vincent Lenders, eds. Large Language Models in Cybersecurity. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-54827-7.
Marcondes, Francisco S., Adelino Gala, Renata Magalhães, Fernando Perez de Britto, Dalila Durães, and Paulo Novais. Natural Language Analytics with Generative Large-Language Models. Cham: Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-76631-2.
Kamath, Uday, Kevin Keenan, Garrett Somers, and Sarah Sorenson. Large Language Models: A Deep Dive. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-65647-7.
Singh, Bhawna. Building Applications with Large Language Models. Berkeley, CA: Apress, 2024. http://dx.doi.org/10.1007/979-8-8688-0569-1.
Grigorov, Dilyan. Introduction to Python and Large Language Models. Berkeley, CA: Apress, 2024. http://dx.doi.org/10.1007/979-8-8688-0540-0.
Mualla, Yazan, Liuwen Yu, Davide Liga, Igor Tchappi, and Réka Markovich, eds. Advances in Explainability, Agents, and Large Language Models. Cham: Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-89103-8.
Törnberg, Petter. How to Use Large-Language Models for Text Analysis. London: SAGE Publications Ltd, 2024. http://dx.doi.org/10.4135/9781529683707.
Jonnagaddala, Jitendra, Hong-Jie Dai, and Ching-Tai Chen, eds. Large Language Models for Automatic Deidentification of Electronic Health Record Notes. Singapore: Springer Nature Singapore, 2025. https://doi.org/10.1007/978-981-97-7966-6.
Book chapters on the topic "Large Language Models (LLM)"
Da Silva Gameiro, Henrique. "LLM Detectors". In Large Language Models in Cybersecurity, 197–204. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-54827-7_22.
Kucharavy, Andrei. "Overview of Existing LLM Families". In Large Language Models in Cybersecurity, 31–44. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-54827-7_3.
Würsch, Maxime, Dimitri Percia David, and Alain Mermoud. "Monitoring Emerging Trends in LLM Research". In Large Language Models in Cybersecurity, 153–61. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-54827-7_17.
Meier, Raphael. "LLM-Aided Social Media Influence Operations". In Large Language Models in Cybersecurity, 105–12. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-54827-7_11.
Majumdar, Subhabrata. "Standards for LLM Security". In Large Language Models in Cybersecurity, 225–31. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-54827-7_25.
Schillaci, Zachary. "LLM Adoption Trends and Associated Risks". In Large Language Models in Cybersecurity, 121–28. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-54827-7_13.
Martra, Pere. "Creating and Publishing Your Own LLM". In Large Language Models Projects, 297–318. Berkeley, CA: Apress, 2024. http://dx.doi.org/10.1007/979-8-8688-0515-8_7.
Li, Yixuan, Julian Parsert, and Elizabeth Polgreen. "Guiding Enumerative Program Synthesis with Large Language Models". In Computer Aided Verification, 280–301. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-65630-9_15.
Apaydin, Kaan, and Yorck Zisgen. "Local Large Language Models for Business Process Modeling". In Lecture Notes in Business Information Processing, 605–9. Cham: Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-82225-4_44.
Vogelsang, Terry. "LLM Controls Execution Flow Hijacking". In Large Language Models in Cybersecurity, 99–104. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-54827-7_10.
Conference papers on the topic "Large Language Models (LLM)"
Pasupuleti, Rajesh, Ravi Vadapalli, Christopher Mader, and Norris Timothy. "Popular LLM-Large Language Models in Enterprise Applications". In 2024 2nd International Conference on Foundation and Large Language Models (FLLM), 125–31. IEEE, 2024. https://doi.org/10.1109/fllm63129.2024.10852443.
Fernando, Saruni, Robert Kunzelmann, Daniela Sánchez Lopera, Jad Al Halabi, and Wolfgang Ecker. "Boosting Productivity of Hardware Documentation Using Large Language Models". In 2024 IEEE LLM Aided Design Workshop (LAD), 1. IEEE, 2024. http://dx.doi.org/10.1109/lad62341.2024.10691698.
Vijayaraghavan, Prashanth, Luyao Shi, Ehsan Degan, and Xin Zhang. "CircuitSynth: Leveraging Large Language Models for Circuit Topology Synthesis". In 2024 IEEE LLM Aided Design Workshop (LAD), 1–6. IEEE, 2024. http://dx.doi.org/10.1109/lad62341.2024.10691716.
Coppolillo, Erica, Francesco Calimeri, Giuseppe Manco, Simona Perri, and Francesco Ricca. "LLASP: Fine-tuning Large Language Models for Answer Set Programming". In 21st International Conference on Principles of Knowledge Representation and Reasoning (KR 2024), 834–44. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/kr.2024/78.
Jignasu, Anushrut, Kelly Marshall, Baskar Ganapathysubramanian, Aditya Balu, Chinmay Hegde, and Adarsh Krishnamurthy. "Evaluating Large Language Models for G-Code Debugging, Manipulation, and Comprehension". In 2024 IEEE LLM Aided Design Workshop (LAD), 1–5. IEEE, 2024. http://dx.doi.org/10.1109/lad62341.2024.10691700.
Xu, Haocheng, Haotian Hu, and Sitao Huang. "Optimizing High-Level Synthesis Designs with Retrieval-Augmented Large Language Models". In 2024 IEEE LLM Aided Design Workshop (LAD), 1–5. IEEE, 2024. http://dx.doi.org/10.1109/lad62341.2024.10691855.
Petrovic, Nenad, Krzysztof Lebioda, Vahid Zolfaghari, André Schamschurko, Sven Kirchner, Nils Purschke, Fengjunjie Pan, and Alois Knoll. "LLM-Driven Testing for Autonomous Driving Scenarios". In 2024 2nd International Conference on Foundation and Large Language Models (FLLM), 173–78. IEEE, 2024. https://doi.org/10.1109/fllm63129.2024.10852505.
Ru Teh, Jocelyn Shuang, Eng Keong Koay, Shin Wei Lim, Kuan Heng Lee, Mee Sim Lai, Meng Siong Lee, and Yuan Kuok Nee. "Adaptive Composite Accuracy Scoring for Domain-specific LLM Evaluation". In 2024 2nd International Conference on Foundation and Large Language Models (FLLM), 272–79. IEEE, 2024. https://doi.org/10.1109/fllm63129.2024.10852484.
Zolfaghari, Vahid, Nenad Petrovic, Fengjunjie Pan, Krzysztof Lebioda, and Alois Knoll. "Adopting RAG for LLM-Aided Future Vehicle Design". In 2024 2nd International Conference on Foundation and Large Language Models (FLLM), 437–42. IEEE, 2024. https://doi.org/10.1109/fllm63129.2024.10852467.
Loukil, Faiza, Sarah Cadereau, Hervé Verjus, Mattéo Galfre, Kavé Salamatian, David Telisson, Quentin Kembellec, and Olivier Le Van. "LLM-centric pipeline for information extraction from invoices". In 2024 2nd International Conference on Foundation and Large Language Models (FLLM), 569–75. IEEE, 2024. https://doi.org/10.1109/fllm63129.2024.10852504.
Reports of organizations on the topic "Large Language Models (LLM)"
Zhang, Hao. Large Language Model (LLM) Monthly Report (2024 Apr). ResearchHub Technologies, Inc., May 2024. http://dx.doi.org/10.55277/researchhub.0ps6xenm.
Azuara Herrera, Oliver, Laura Ripani, and Eric Torres Ramirez. AI and the Increase of Productivity and Labor Inequality in Latin America: Potential Impact of Large Language Models on Latin American Workforce. Inter-American Development Bank, September 2024. http://dx.doi.org/10.18235/0013152.
Rosenblat, Sruly, Tim O'Reilly, and Ilan Strauss. Beyond Public Access in LLM Pre-Training Data: Non-public book content in OpenAI's Models. AI Disclosures Project, Social Science Research Council, April 2025. https://doi.org/10.35650/aidp.4111.d.2025.
Alonso-Robisco, Andres, and Jose Manuel Carbo. Analysis of CBDC Narrative of Central Banks using Large Language Models. Madrid: Banco de España, August 2023. http://dx.doi.org/10.53479/33412.
Marra de Artiñano, Ignacio, Franco Riottini Depetris, and Christian Volpe Martincus. Automatic Product Classification in International Trade: Machine Learning and Large Language Models. Inter-American Development Bank, July 2023. http://dx.doi.org/10.18235/0005012.
Moreno, Ángel Iván, and Teresa Caminero. Assessing the Data Challenges of Climate-Related Disclosures in European Banks: A Text Mining Study. Madrid: Banco de España, September 2023. http://dx.doi.org/10.53479/33752.
Harrison, Stephen. Wikipedia's Governance Challenge: Policies and Guardrails for New Generative AI Technologies. Balsillie School of International Affairs, November 2024. https://doi.org/10.51644/bcs002.
Maerz, Seraphine. Using AI for Text Analysis in R. Instats Inc., 2024. http://dx.doi.org/10.61700/ti5uexui5ilrd1663.
Korinek, Anton, and Jai Vipra. Concentrating Intelligence: Scaling and Market Structure in Artificial Intelligence. Institute for New Economic Thinking Working Paper Series, October 2024. http://dx.doi.org/10.36687/inetwp228.
Bastidas Ripalda, Rafaela, Stephen Hansen, John Leon-Diaz, and Yabra Muvdi. Tracking the Reform Process from Newspaper Data. Inter-American Development Bank, March 2025. https://doi.org/10.18235/0013467.