Scientific literature on the topic "Large Language Models (LLM)"
Create an accurate reference in the APA, MLA, Chicago, Harvard, and several other citation styles
Consult the thematic lists of journal articles, books, theses, conference reports, and other academic sources on the topic "Large Language Models (LLM)".
Next to every source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever this information is included in the metadata.
Journal articles on the topic "Large Language Models (LLM)"
Fang, Meng, Shilong Deng, Yudi Zhang, Zijing Shi, Ling Chen, Mykola Pechenizkiy, and Jun Wang. "Large Language Models Are Neurosymbolic Reasoners." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 16 (March 24, 2024): 17985–93. http://dx.doi.org/10.1609/aaai.v38i16.29754.
Wang, Runze, Mingqi Yang, and Yanming Shen. "Bridging Molecular Graphs and Large Language Models." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 20 (April 11, 2025): 21234–42. https://doi.org/10.1609/aaai.v39i20.35422.
Mochihashi, Daichi. "Large Language Models (LLM) and Robotics." Journal of the Robotics Society of Japan 40, no. 10 (2022): 863–66. http://dx.doi.org/10.7210/jrsj.40.863.
Devyatkin, Dmitry A., Vladimir A. Salimovsky, Natalia V. Chudova, Anastasia A. Ryzhova, and Oleg G. Grigoriev. "Large language models and speech genre systematicity." International Journal "Speech Genres" 20, no. 1 (45) (February 21, 2025): 6–23. https://doi.org/10.18500/2311-0740-2025-20-1-45-6-23.
Yang, Jidong. "Large language models privacy and security." Applied and Computational Engineering 76, no. 1 (July 16, 2024): 177–88. http://dx.doi.org/10.54254/2755-2721/76/20240584.
Shanahan, Murray. "Talking about Large Language Models." Communications of the ACM 67, no. 2 (January 25, 2024): 68–79. http://dx.doi.org/10.1145/3624724.
Liu, Yuxin. "Attention is All Large Language Model Need." ITM Web of Conferences 73 (2025): 02025. https://doi.org/10.1051/itmconf/20257302025.
Ma, Ziyang, Guanrou Yang, Yifan Yang, Zhifu Gao, Jiaming Wang, Zhihao Du, Fan Yu, et al. "Speech Recognition Meets Large Language Model: Benchmarking, Models, and Exploration." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 23 (April 11, 2025): 24840–48. https://doi.org/10.1609/aaai.v39i23.34666.
Zelenkov, Yuri A. "Knowledge management in organization and the large language models." Russian Management Journal 22, no. 3 (2024): 573–601. https://doi.org/10.21638/spbu18.2024.309.
Martínez, Gonzalo, Javier Conde, Elena Merino-Gómez, Beatriz Bermúdez-Margaretto, José Alberto Hernández, Pedro Reviriego, and Marc Brysbaert. "Establishing vocabulary tests as a benchmark for evaluating large language models." PLOS ONE 19, no. 12 (December 12, 2024): e0308259. https://doi.org/10.1371/journal.pone.0308259.
Theses on the topic "Large Language Models (LLM)"
Naqvi, Syed Muhammad Raza. "Exploration des LLM et de l'XAI sémantique pour les capacités des robots industriels et les connaissances communes en matière de fabrication" [Exploring LLMs and semantic XAI for industrial robot capabilities and manufacturing commonsense knowledge]. Electronic Thesis or Diss., Université de Toulouse (2023-....), 2025. http://www.theses.fr/2025TLSEP014.
In Industry 4.0, advanced manufacturing is vital in shaping future factories, enabling enhanced planning, scheduling, and control. The ability to adapt production lines swiftly in response to customer demands or unexpected situations is essential to the future of manufacturing. While AI is emerging as a solution, industries still rely on human expertise due to trust issues and a lack of transparency in AI decisions. Explainable AI that integrates manufacturing-related commonsense knowledge is crucial for making AI decisions understandable and trustworthy. Within this context, we propose the S-XAI (semantic XAI) framework, an integrated solution combining machine specifications with Manufacturing Commonsense Knowledge (MCSK) to provide explainable and transparent decision-making. The focus is on providing real-time machine capabilities to ensure precise decision-making while simultaneously explaining the decision-making process to all involved stakeholders. Accordingly, the first objective was formalizing machine specifications, including capabilities, capacities, functions, quality, and process characteristics, focusing on robotics. To do so, we created the Robot Capability Ontology (RCO), formalizing all relevant aspects of machine specifications, such as Capability, Capacity, Function, Quality, and Process Characteristics. On top of this formalization, the RCO allows manufacturing stakeholders to capture robotic capabilities described in specification manuals (advertised capabilities) and compare them with real-world performance (operational capabilities). The RCO is based on the Machine Service Description Language, a domain reference ontology created for manufacturing services, and is aligned with the Basic Formal Ontology, the Industrial Foundry Ontology, the Information Artifact Ontology, and the Relations Ontology. The second objective was the formalization of MCSK. We introduce MCSK and present a methodology for identifying it, starting with recognizing different commonsense knowledge (CSK) patterns in manufacturing and aligning them with manufacturing concepts. Extracting MCSK in a usable form is challenging, so our approach structures MCSK into natural-language (NL) statements utilizing LLMs to facilitate rule-based reasoning, thereby enhancing decision-making capabilities. The third and final objective is to propose an S-XAI framework utilizing the RCO and MCSK to assess whether existing machines can perform specific tasks and to generate understandable NL explanations. This was achieved by integrating the RCO, which provides operational capabilities like repeatability and precision, with MCSK, which outlines the process requirements. By utilizing MCSK-based semantic reasoning, the S-XAI system seamlessly provides NL explanations that detail each logic and outcome. In the S-XAI framework, a neural network (NN) predicts the operational capabilities of robots, while symbolic AI incorporates these predictions within an MCSK-based reasoning system grounded in the RCO. This hybrid setup maximizes the strengths of each AI system and ensures that predictions support a transparent decision-making process. Additionally, S-XAI enhances the interpretability of NN predictions through XAI techniques such as LIME, SHAP, and PDP, clarifying NN predictions and enabling detailed insights for better calibration and proactive management, ultimately fostering a resilient and informed manufacturing environment.
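To make the hybrid neuro-symbolic setup described above concrete, here is a minimal sketch of the kind of capability check such a framework performs, with a natural-language explanation as output. The robot name, capability scores, and threshold rule are invented for illustration; they stand in for the thesis's actual RCO-grounded, MCSK-driven reasoning rather than reproducing it.

# Minimal sketch of an S-XAI-style capability check (illustrative only:
# the real framework reasons over the RCO ontology with MCSK rules, and
# the scores here would come from an NN's predictions).
def check_capability(robot, predicted, requirement):
    """Compare a predicted operational capability against a process
    requirement and verbalize the outcome as an NL explanation."""
    cap, required = requirement["capability"], requirement["min_value"]
    actual = predicted[cap]
    verdict = "can" if actual >= required else "cannot"
    return (f"{robot} {verdict} perform the task: predicted {cap} "
            f"{actual} vs. required minimum {required}.")

# Toy usage; higher-is-better scores are an assumption of this example.
print(check_capability("UR5e",
                       {"repeatability": 0.92, "precision": 0.88},
                       {"capability": "precision", "min_value": 0.90}))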
Labeau, Matthieu. "Neural language models: Dealing with large vocabularies." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS313/document.
This work investigates practical methods to ease training and improve the performance of neural language models with large vocabularies. The main limitation of neural language models is their expensive computational cost, which grows linearly with the size of the vocabulary. Despite several training tricks, the most straightforward way to limit computation time is to limit the vocabulary size, which is not a satisfactory solution for numerous tasks. Most of the existing methods used to train large-vocabulary language models revolve around avoiding the computation of the partition function, which ensures that output scores are normalized into a probability distribution. Here, we focus on sampling-based approaches, including importance sampling and noise contrastive estimation. These methods allow an approximate computation of the partition function. After examining the mechanism of self-normalization in noise contrastive estimation, we first propose to improve its efficiency with solutions that are adapted to the inner workings of the method and experimentally show that they considerably ease training. Our second contribution is to expand on a generalization of several sampling-based objectives as Bregman divergences, in order to experiment with new objectives. We use Beta divergences to derive a set of objectives of which noise contrastive estimation is a particular case. Finally, we aim at improving performance on full-vocabulary language models by augmenting output word representations with subwords. We experiment on a Czech dataset and show that using character-based representations besides word embeddings for output representations gives better results. We also show that reducing the size of the output look-up table improves results even more.
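As a pointer to what the sampling-based objectives mentioned in this abstract look like in practice, here is a minimal sketch of the noise contrastive estimation loss for a single target word. The scores, noise probabilities, and number of noise samples are toy values invented for illustration, not material from the thesis.

import numpy as np

def nce_loss(target_score, noise_scores, target_noise_prob, noise_probs, k):
    """Noise contrastive estimation loss for one target word: the model
    learns to discriminate the true word from k noise samples, avoiding
    any computation of the partition function."""
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    # Posterior that the true word is "data" rather than "noise":
    # sigmoid(s(w) - log(k * q(w))), with s the unnormalized model score
    # and q the noise distribution.
    pos = np.log(sigmoid(target_score - np.log(k * target_noise_prob)))
    # Each of the k noise samples should be classified as "noise".
    neg = np.sum(np.log(1.0 - sigmoid(noise_scores - np.log(k * noise_probs))))
    return -(pos + neg)

# Toy usage: one true word scored against k = 3 unigram noise samples.
loss = nce_loss(target_score=2.1,
                noise_scores=np.array([0.3, -1.2, 0.8]),
                target_noise_prob=0.01,
                noise_probs=np.array([0.05, 0.02, 0.04]),
                k=3)
print(round(loss, 3))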
Schaeffer, Marion. "Towards efficient Knowledge Graph-based Retrieval Augmented Generation for conversational agents." Electronic Thesis or Diss., Normandie, 2025. http://www.theses.fr/2025NORMIR06.
Conversational agents have become widespread in recent years. Today, they have transcended their initial purpose of simulating a conversation with a computer program and are now valuable tools for accessing information and carrying out various tasks, from customer service to personal assistance. With the rise of text-generative models and Large Language Models (LLMs), the capabilities of conversational agents have increased tenfold. However, they are now subject to hallucinations, producing false information. A popular technique to limit the risk of hallucinations is Retrieval Augmented Generation (RAG), which injects knowledge into a text generation process. Such injected knowledge can be drawn from Knowledge Graphs (KGs), which are structured, machine-readable knowledge representations. We therefore explore Knowledge Graph-based Retrieval Augmented Generation (KG-RAG) to build trusted conversational agents. We demonstrate our approach on a real-world use case for citizen support by building conversational agents for disability management in cities. We first present a history of conversational agents, introducing the approaches implemented over the years and the evaluation techniques. We then define KGs and ontologies and explore construction and evaluation techniques. As we could not find a directly exploitable KG, our first contribution introduces the Ontology Learning Applied Framework (OLAF), a modular system built for automated and repeatable KG construction from unstructured text. OLAF integrates linguistic, statistical, and LLM-based techniques to generate Minimum Viable Ontologies for specific domains. Applied to real-world datasets, OLAF demonstrates robust performance through gold-standard evaluations and task-specific Competency Questions. We detail the construction process for a KG about disability management in a French city. We then propose an architecture for KG-RAG systems to enhance information retrieval by aligning user queries with KG structures through entity linking (EL), graph queries, and LLM-based retrieval approaches. We demonstrate our architecture on different use cases, which we evaluate using criteria such as performance, human preference, and environmental impact. While user preferences favour Text-RAG, KG-RAG's reduced computational footprint underscores its potential for sustainable AI practices. Finally, we identify the retriever as the critical part of the architecture and tackle the retrieval task by exploring embeddings in various contexts: improving EL, improving retrieval, and providing a caching system. We also propose mechanisms for handling multi-turn conversations. This work establishes a comprehensive framework for KG-RAG systems, combining the semantic depth of KGs with the generative capabilities of LLMs to deliver accurate, contextual, and sustainable conversational agents. Contributions include OLAF for scalable KG construction, a robust KG-RAG pipeline, and embedding-based enhancements for retrieval and interaction quality. By addressing conversational agents' industrial challenges, such as scalability, retrieval precision, and conversational coherence, this research lays the foundation for deploying KG-RAG systems in diverse and specialised domains.
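To illustrate the KG-RAG pattern this abstract describes, here is a minimal sketch of the retrieval step: entity linking, then a graph query whose results ground the generation prompt. The toy entity linker, the example IRI, the SPARQL endpoint parameters, and the llm callable are all hypothetical placeholders under assumed conventions, not the thesis's actual pipeline.

import json
import urllib.parse
import urllib.request

# Placeholder entity linker: a real KG-RAG system aligns query mentions
# with KG nodes; this toy version just maps known surface forms.
TOY_LINKER = {"disability card": "http://example.org/kg/DisabilityCard"}

def link_entities(question: str) -> list[str]:
    return [iri for mention, iri in TOY_LINKER.items()
            if mention in question.lower()]

def kg_rag_answer(question: str, endpoint: str, llm) -> str:
    """Sketch of a KG-RAG pipeline: entity linking -> graph query ->
    prompt grounded in retrieved triples -> LLM generation."""
    triples = []
    for ent in link_entities(question):
        # Retrieve the entity's 1-hop neighborhood from a SPARQL endpoint.
        query = f"SELECT ?p ?o WHERE {{ <{ent}> ?p ?o }} LIMIT 50"
        url = endpoint + "?" + urllib.parse.urlencode(
            {"query": query, "format": "json"})
        with urllib.request.urlopen(url) as resp:
            rows = json.load(resp)["results"]["bindings"]
        triples += [(ent, r["p"]["value"], r["o"]["value"]) for r in rows]
    # Inject the structured knowledge into the generation prompt.
    context = "\n".join(f"{s} {p} {o}" for s, p, o in triples)
    prompt = ("Answer using only the facts below.\n\n"
              f"Facts:\n{context}\n\nQuestion: {question}\nAnswer:")
    return llm(prompt)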
Zervakis, Georgios. "Enriching large language models with semantic lexicons and analogies." Electronic Thesis or Diss., Université de Lorraine, 2023. http://www.theses.fr/2023LORR0039.
Recent advances in deep learning and neural networks have made it possible to address complex natural language processing tasks, which find application in a plethora of real-world problems ranging from smart assistants in mobile devices to the prediction of cancer. Nonetheless, modern systems based on these frameworks exhibit various limitations that may compromise their performance and trustworthiness, render them unfair towards minorities, or subject them to privacy leakage. It is our belief that integrating symbolic knowledge and reasoning into the deep learning framework is a necessary step towards addressing these limitations. For example, lexical resources can enrich deep neural networks with semantic or syntactic knowledge, and logical rules can provide learning and reasoning mechanisms. Therefore, the scope of this thesis is to develop and evaluate ways of integrating different types of symbolic knowledge and reasoning into a widely used language model, Bidirectional Encoder Representations from Transformers (BERT). In a first stage, we consider retrofitting, a simple and popular technique for refining distributional word embeddings based on relations coming from a semantic lexicon. Inspired by this technique, we present two methods for incorporating this knowledge into BERT contextualized embeddings. We evaluate these methods on three biomedical datasets for relation extraction and one movie review dataset for sentiment analysis, and show that they do not substantially impact the performance on these tasks. Furthermore, we conduct a qualitative analysis to provide further insights into this negative result. In a second stage, we integrate analogical reasoning with BERT as a means to improve its performance on the target sense verification task and make it more robust. To do so, we reformulate target sense verification as an analogy detection task. We present a hybrid model that combines BERT, to encode the input data into quadruples, with a convolutional neural classifier that decides whether they constitute valid analogies. We test our system on a benchmark dataset and show that it can outperform existing approaches. Our empirical study shows the importance of the input encoding for BERT, and how this dependence gets alleviated by integrating the axiomatic properties of analogies during training, while preserving performance and improving robustness.
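Since this abstract builds on retrofitting, here is a minimal sketch of the classic retrofitting update the technique starts from (after Faruqui et al., 2015). The two-word lexicon and vectors are toy values, and uniform weights are assumed, so this is the generic word-vector version, not the thesis's BERT-specific variant.

import numpy as np

def retrofit(X, lexicon, alpha=1.0, beta=1.0, iters=10):
    """Retrofit word vectors to a semantic lexicon: each vector is
    iteratively pulled toward both its original value and the vectors
    of its lexicon neighbors (e.g., synonyms)."""
    Q = {w: v.copy() for w, v in X.items()}
    for _ in range(iters):
        for w, neighbors in lexicon.items():
            nbrs = [n for n in neighbors if n in Q]
            if w not in Q or not nbrs:
                continue
            # Weighted average of the original vector and neighbor vectors.
            num = alpha * X[w] + beta * sum(Q[n] for n in nbrs)
            Q[w] = num / (alpha + beta * len(nbrs))
    return Q

# Toy usage with 2-D vectors and a two-entry synonym lexicon: the
# retrofitted vectors for "car" and "auto" move toward each other.
X = {"car": np.array([1.0, 0.0]), "auto": np.array([0.0, 1.0])}
Q = retrofit(X, {"car": ["auto"], "auto": ["car"]})
print(Q["car"], Q["auto"])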
Chadha, Vikrampal. "Simulation of large-scale system-level models." Thesis, Virginia Tech, 1994. http://scholar.lib.vt.edu/theses/available/etd-12162009-020334/.
Bughio, Kulsoom Saima. "IoMT security: A semantic framework for vulnerability detection in remote patient monitoring." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2024. https://ro.ecu.edu.au/theses/2841.
Hittner, Brian Edward. "Rendering large-scale terrain models and positioning objects in relation to 3D terrain." Thesis, Monterey, Calif.: Naval Postgraduate School; Springfield, Va.: Available from National Technical Information Service, 2003. http://library.nps.navy.mil/uhtbin/hyperion-image/03Dec%5FHittner.pdf.
Thesis advisor(s): Don Brutzman, Curt Blais. Includes bibliographical references (p. 117–118). Also available online.
Kropff, Emilio. "Statistical and dynamical properties of large cortical network models: insights into semantic memory and language." Doctoral thesis, SISSA, 2007. http://hdl.handle.net/20.500.11767/4639.
Zhao, Ying. "Effective Authorship Attribution in Large Document Collections." Thesis, RMIT University, Computer Science and Information Technology, 2008. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20080730.162501.
Pan, Bi-Yu. "Hierarchical test generation for VHDL behavioral models." Thesis, Virginia Tech, 1992. http://scholar.lib.vt.edu/theses/available/etd-09052009-040449/.
Books on the topic "Large Language Models (LLM)"
Amaratunga, Thimira. Understanding Large Language Models. Berkeley, CA: Apress, 2023. http://dx.doi.org/10.1007/979-8-8688-0017-7.
Martra, Pere. Large Language Models Projects. Berkeley, CA: Apress, 2024. http://dx.doi.org/10.1007/979-8-8688-0515-8.
Kucharavy, Andrei, Octave Plancherel, Valentin Mulder, Alain Mermoud, and Vincent Lenders, eds. Large Language Models in Cybersecurity. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-54827-7.
Marcondes, Francisco S., Adelino Gala, Renata Magalhães, Fernando Perez de Britto, Dalila Durães, and Paulo Novais. Natural Language Analytics with Generative Large-Language Models. Cham: Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-76631-2.
Kamath, Uday, Kevin Keenan, Garrett Somers, and Sarah Sorenson. Large Language Models: A Deep Dive. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-65647-7.
Singh, Bhawna. Building Applications with Large Language Models. Berkeley, CA: Apress, 2024. http://dx.doi.org/10.1007/979-8-8688-0569-1.
Grigorov, Dilyan. Introduction to Python and Large Language Models. Berkeley, CA: Apress, 2024. http://dx.doi.org/10.1007/979-8-8688-0540-0.
Mualla, Yazan, Liuwen Yu, Davide Liga, Igor Tchappi, and Réka Markovich, eds. Advances in Explainability, Agents, and Large Language Models. Cham: Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-89103-8.
Törnberg, Petter. How to Use Large-Language Models for Text Analysis. London: SAGE Publications Ltd, 2024. http://dx.doi.org/10.4135/9781529683707.
Jonnagaddala, Jitendra, Hong-Jie Dai, and Ching-Tai Chen, eds. Large Language Models for Automatic Deidentification of Electronic Health Record Notes. Singapore: Springer Nature Singapore, 2025. https://doi.org/10.1007/978-981-97-7966-6.
Book chapters on the topic "Large Language Models (LLM)"
Da Silva Gameiro, Henrique. "LLM Detectors." In Large Language Models in Cybersecurity, 197–204. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-54827-7_22.
Kucharavy, Andrei. "Overview of Existing LLM Families." In Large Language Models in Cybersecurity, 31–44. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-54827-7_3.
Würsch, Maxime, Dimitri Percia David, and Alain Mermoud. "Monitoring Emerging Trends in LLM Research." In Large Language Models in Cybersecurity, 153–61. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-54827-7_17.
Meier, Raphael. "LLM-Aided Social Media Influence Operations." In Large Language Models in Cybersecurity, 105–12. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-54827-7_11.
Majumdar, Subhabrata. "Standards for LLM Security." In Large Language Models in Cybersecurity, 225–31. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-54827-7_25.
Schillaci, Zachary. "LLM Adoption Trends and Associated Risks." In Large Language Models in Cybersecurity, 121–28. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-54827-7_13.
Martra, Pere. "Creating and Publishing Your Own LLM." In Large Language Models Projects, 297–318. Berkeley, CA: Apress, 2024. http://dx.doi.org/10.1007/979-8-8688-0515-8_7.
Li, Yixuan, Julian Parsert, and Elizabeth Polgreen. "Guiding Enumerative Program Synthesis with Large Language Models." In Computer Aided Verification, 280–301. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-65630-9_15.
Apaydin, Kaan, and Yorck Zisgen. "Local Large Language Models for Business Process Modeling." In Lecture Notes in Business Information Processing, 605–9. Cham: Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-82225-4_44.
Vogelsang, Terry. "LLM Controls Execution Flow Hijacking." In Large Language Models in Cybersecurity, 99–104. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-54827-7_10.
Conference proceedings on the topic "Large Language Models (LLM)"
Pasupuleti, Rajesh, Ravi Vadapalli, Christopher Mader, and Norris Timothy. "Popular LLM-Large Language Models in Enterprise Applications." In 2024 2nd International Conference on Foundation and Large Language Models (FLLM), 125–31. IEEE, 2024. https://doi.org/10.1109/fllm63129.2024.10852443.
Fernando, Saruni, Robert Kunzelmann, Daniela Sánchez Lopera, Jad Al Halabi, and Wolfgang Ecker. "Boosting Productivity of Hardware Documentation Using Large Language Models." In 2024 IEEE LLM Aided Design Workshop (LAD), 1. IEEE, 2024. http://dx.doi.org/10.1109/lad62341.2024.10691698.
Vijayaraghavan, Prashanth, Luyao Shi, Ehsan Degan, and Xin Zhang. "CircuitSynth: Leveraging Large Language Models for Circuit Topology Synthesis." In 2024 IEEE LLM Aided Design Workshop (LAD), 1–6. IEEE, 2024. http://dx.doi.org/10.1109/lad62341.2024.10691716.
Coppolillo, Erica, Francesco Calimeri, Giuseppe Manco, Simona Perri, and Francesco Ricca. "LLASP: Fine-tuning Large Language Models for Answer Set Programming." In 21st International Conference on Principles of Knowledge Representation and Reasoning (KR 2024), 834–44. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/kr.2024/78.
Jignasu, Anushrut, Kelly Marshall, Baskar Ganapathysubramanian, Aditya Balu, Chinmay Hegde, and Adarsh Krishnamurthy. "Evaluating Large Language Models for G-Code Debugging, Manipulation, and Comprehension." In 2024 IEEE LLM Aided Design Workshop (LAD), 1–5. IEEE, 2024. http://dx.doi.org/10.1109/lad62341.2024.10691700.
Xu, Haocheng, Haotian Hu, and Sitao Huang. "Optimizing High-Level Synthesis Designs with Retrieval-Augmented Large Language Models." In 2024 IEEE LLM Aided Design Workshop (LAD), 1–5. IEEE, 2024. http://dx.doi.org/10.1109/lad62341.2024.10691855.
Petrovic, Nenad, Krzysztof Lebioda, Vahid Zolfaghari, André Schamschurko, Sven Kirchner, Nils Purschke, Fengjunjie Pan, and Alois Knoll. "LLM-Driven Testing for Autonomous Driving Scenarios." In 2024 2nd International Conference on Foundation and Large Language Models (FLLM), 173–78. IEEE, 2024. https://doi.org/10.1109/fllm63129.2024.10852505.
Ru Teh, Jocelyn Shuang, Eng Keong Koay, Shin Wei Lim, Kuan Heng Lee, Mee Sim Lai, Meng Siong Lee, and Yuan Kuok Nee. "Adaptive Composite Accuracy Scoring for Domain-specific LLM Evaluation." In 2024 2nd International Conference on Foundation and Large Language Models (FLLM), 272–79. IEEE, 2024. https://doi.org/10.1109/fllm63129.2024.10852484.
Zolfaghari, Vahid, Nenad Petrovic, Fengjunjie Pan, Krzysztof Lebioda, and Alois Knoll. "Adopting RAG for LLM-Aided Future Vehicle Design." In 2024 2nd International Conference on Foundation and Large Language Models (FLLM), 437–42. IEEE, 2024. https://doi.org/10.1109/fllm63129.2024.10852467.
Loukil, Faiza, Sarah Cadereau, Hervé Verjus, Mattéo Galfre, Kavé Salamatian, David Telisson, Quentin Kembellec, and Olivier Le Van. "LLM-centric pipeline for information extraction from invoices." In 2024 2nd International Conference on Foundation and Large Language Models (FLLM), 569–75. IEEE, 2024. https://doi.org/10.1109/fllm63129.2024.10852504.
Organization reports on the topic "Large Language Models (LLM)"
Zhang, Hao. Large Language Model (LLM) Monthly Report (2024 Apr). ResearchHub Technologies, Inc., May 2024. http://dx.doi.org/10.55277/researchhub.0ps6xenm.
Azuara Herrera, Oliver, Laura Ripani, and Eric Torres Ramirez. AI and the Increase of Productivity and Labor Inequality in Latin America: Potential Impact of Large Language Models on Latin American Workforce. Inter-American Development Bank, September 2024. http://dx.doi.org/10.18235/0013152.
Rosenblat, Sruly, Tim O'Reilly, and Ilan Strauss. Beyond Public Access in LLM Pre-Training Data: Non-public book content in OpenAI's Models. AI Disclosures Project, Social Science Research Council, April 2025. https://doi.org/10.35650/aidp.4111.d.2025.
Alonso-Robisco, Andres, and Jose Manuel Carbo. Analysis of CBDC Narrative of Central Banks using Large Language Models. Madrid: Banco de España, August 2023. http://dx.doi.org/10.53479/33412.
Marra de Artiñano, Ignacio, Franco Riottini Depetris, and Christian Volpe Martincus. Automatic Product Classification in International Trade: Machine Learning and Large Language Models. Inter-American Development Bank, July 2023. http://dx.doi.org/10.18235/0005012.
Moreno, Ángel Iván, and Teresa Caminero. Assessing the data challenges of climate-related disclosures in European banks. A text mining study. Madrid: Banco de España, September 2023. http://dx.doi.org/10.53479/33752.
Harrison, Stephen. Wikipedia's Governance Challenge: Policies and Guardrails for New Generative AI Technologies. Balsillie School of International Affairs, November 2024. https://doi.org/10.51644/bcs002.
Maerz, Seraphine. Using AI for Text Analysis in R. Instats Inc., 2024. http://dx.doi.org/10.61700/ti5uexui5ilrd1663.
Korinek, Anton, and Jai Vipra. Concentrating Intelligence: Scaling and Market Structure in Artificial Intelligence. Institute for New Economic Thinking Working Paper Series, October 2024. http://dx.doi.org/10.36687/inetwp228.
Bastidas Ripalda, Rafaela, Stephen Hansen, John Leon-Diaz, and Yabra Muvdi. Tracking the Reform Process from Newspaper Data. Inter-American Development Bank, March 2025. https://doi.org/10.18235/0013467.