Academic literature on the topic 'Large Language Models (LLM)'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Large Language Models (LLM).'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "Large Language Models (LLM)"
Fang, Meng, Shilong Deng, Yudi Zhang, Zijing Shi, Ling Chen, Mykola Pechenizkiy, and Jun Wang. "Large Language Models Are Neurosymbolic Reasoners." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 16 (March 24, 2024): 17985–93. http://dx.doi.org/10.1609/aaai.v38i16.29754.
Wang, Runze, Mingqi Yang, and Yanming Shen. "Bridging Molecular Graphs and Large Language Models." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 20 (April 11, 2025): 21234–42. https://doi.org/10.1609/aaai.v39i20.35422.
Mochihashi, Daichi. "Large Language Models (LLM) and Robotics." Journal of the Robotics Society of Japan 40, no. 10 (2022): 863–66. http://dx.doi.org/10.7210/jrsj.40.863.
Devyatkin, Dmitry A., Vladimir A. Salimovsky, Natalia V. Chudova, Anastasia A. Ryzhova, and Oleg G. Grigoriev. "Large language models and speech genre systematicity." International Journal “Speech Genres” 20, no. 1 (45) (February 21, 2025): 6–23. https://doi.org/10.18500/2311-0740-2025-20-1-45-6-23.
Yang, Jidong. "Large language models privacy and security." Applied and Computational Engineering 76, no. 1 (July 16, 2024): 177–88. http://dx.doi.org/10.54254/2755-2721/76/20240584.
Shanahan, Murray. "Talking about Large Language Models." Communications of the ACM 67, no. 2 (January 25, 2024): 68–79. http://dx.doi.org/10.1145/3624724.
Liu, Yuxin. "Attention is All Large Language Model Need." ITM Web of Conferences 73 (2025): 02025. https://doi.org/10.1051/itmconf/20257302025.
Ma, Ziyang, Guanrou Yang, Yifan Yang, Zhifu Gao, Jiaming Wang, Zhihao Du, Fan Yu, et al. "Speech Recognition Meets Large Language Model: Benchmarking, Models, and Exploration." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 23 (April 11, 2025): 24840–48. https://doi.org/10.1609/aaai.v39i23.34666.
Zelenkov, Yuri A. "Knowledge management in organization and the large language models." Russian Management Journal 22, no. 3 (2024): 573–601. https://doi.org/10.21638/spbu18.2024.309.
Martínez, Gonzalo, Javier Conde, Elena Merino-Gómez, Beatriz Bermúdez-Margaretto, José Alberto Hernández, Pedro Reviriego, and Marc Brysbaert. "Establishing vocabulary tests as a benchmark for evaluating large language models." PLOS ONE 19, no. 12 (December 12, 2024): e0308259. https://doi.org/10.1371/journal.pone.0308259.
Dissertations / Theses on the topic "Large Language Models (LLM)"
Naqvi, Syed Muhammad Raza. "Exploration des LLM et de l'XAI sémantique pour les capacités des robots industriels et les connaissances communes en matière de fabrication [Exploring LLMs and semantic XAI for industrial robot capabilities and manufacturing commonsense knowledge]." Electronic Thesis or Diss., Université de Toulouse (2023-....), 2025. http://www.theses.fr/2025TLSEP014.
In Industry 4.0, advanced manufacturing is vital in shaping future factories, enabling enhanced planning, scheduling, and control. The ability to adapt production lines swiftly in response to customer demands or unexpected situations is essential to the future of manufacturing. While AI is emerging as a solution, industries still rely on human expertise due to trust issues and a lack of transparency in AI decisions. Explainable AI that integrates manufacturing commonsense knowledge (MCSK) is crucial for making AI decisions understandable and trustworthy. Within this context, we propose the Semantic Explainable AI (S-XAI) framework, an integrated solution combining machine specifications with MCSK to provide explainable and transparent decision-making. The focus is on providing real-time machine capabilities to ensure precise decision-making while simultaneously explaining the decision-making process to all involved stakeholders. Accordingly, the first objective was formalizing machine specifications, including capabilities, capacities, functions, quality, and process characteristics, focusing on robotics. To do so, we created the Robot Capability Ontology (RCO), formalizing all relevant aspects of machine specifications, such as Capability, Capacity, Function, Quality, and Process Characteristics. On top of this formalization, the RCO allows manufacturing stakeholders to capture robotic capabilities described in specification manuals (advertised capabilities) and compare them with real-world performance (operational capabilities). The RCO is based on the Machine Service Description Language, a domain reference ontology created for manufacturing services, and is aligned with the Basic Formal Ontology, Industrial Foundry Ontology, Information Artifact Ontology, and Relations Ontology. The second objective was the formalization of MCSK. We introduce MCSK and present a methodology for identifying it, starting with recognizing different commonsense-knowledge patterns in manufacturing and aligning them with manufacturing concepts. Extracting MCSK in a usable form is challenging, so our approach structures MCSK into natural-language (NL) statements using LLMs to facilitate rule-based reasoning, thereby enhancing decision-making capabilities. The third and final objective is to propose an S-XAI framework utilizing the RCO and MCSK to assess whether existing machines can perform specific tasks and to generate understandable NL explanations. This was achieved by integrating the RCO, which provides operational capabilities like repeatability and precision, with MCSK, which outlines the process requirements. By utilizing MCSK-based semantic reasoning, the S-XAI system seamlessly provides NL explanations that detail each logical step and outcome. In the S-XAI framework, a neural network (NN) predicts the operational capabilities of robots, while symbolic AI incorporates these predictions within an MCSK-based reasoning system grounded in the RCO. This hybrid setup maximizes the strengths of each AI approach and ensures that predictions support a transparent decision-making process. Additionally, S-XAI enhances the interpretability of NN predictions through XAI techniques such as LIME, SHAP, and PDP, clarifying NN predictions and enabling detailed insights for better calibration and proactive management, ultimately fostering a resilient and informed manufacturing environment.
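As an editorial illustration of the capability-matching step this abstract describes, the following minimal Python sketch checks a robot's operational capability against a process requirement and produces a natural-language explanation. The field names (repeatability_mm, etc.) and threshold logic are illustrative assumptions, not the S-XAI framework's actual vocabulary or implementation:

    def can_perform(robot, task):
        # Compare a measured operational capability against the task's requirement
        ok = robot["repeatability_mm"] <= task["required_repeatability_mm"]
        # Emit a natural-language explanation of the decision, in the spirit of S-XAI
        explanation = (
            "Robot %s %s task '%s': measured repeatability %.3f mm %s the required %.3f mm."
            % (robot["name"], "can perform" if ok else "cannot perform", task["name"],
               robot["repeatability_mm"], "meets" if ok else "exceeds",
               task["required_repeatability_mm"]))
        return ok, explanation

    ok, why = can_perform(
        {"name": "UR5e", "repeatability_mm": 0.03},
        {"name": "precision insertion", "required_repeatability_mm": 0.05})
    print(why)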
Labeau, Matthieu. "Neural language models : Dealing with large vocabularies." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS313/document.
This work investigates practical methods to ease training and improve the performance of neural language models with large vocabularies. The main limitation of neural language models is their expensive computational cost, which depends on the size of the vocabulary and grows linearly with it. Despite several training tricks, the most straightforward way to limit computation time is to limit the vocabulary size, which is not a satisfactory solution for numerous tasks. Most of the existing methods used to train large-vocabulary language models revolve around avoiding the computation of the partition function, which ensures that output scores are normalized into a probability distribution. Here, we focus on sampling-based approaches, including importance sampling and noise contrastive estimation. These methods allow an approximate computation of the partition function. After examining the mechanism of self-normalization in noise contrastive estimation, we first propose to improve its efficiency with solutions that are adapted to the inner workings of the method, and we show experimentally that they considerably ease training. Our second contribution is to expand on a generalization of several sampling-based objectives as Bregman divergences, in order to experiment with new objectives. We use Beta divergences to derive a set of objectives of which noise contrastive estimation is a particular case. Finally, we aim at improving performance on full-vocabulary language models by augmenting output word representations with subwords. We experiment on a Czech dataset and show that using character-based representations besides word embeddings for output representations gives better results. We also show that reducing the size of the output look-up table improves results even more.
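For readers unfamiliar with the sampling-based objectives this abstract discusses, here is a minimal PyTorch sketch of the noise contrastive estimation loss: each observed word is discriminated against k words drawn from a noise distribution, so the partition function is never computed. Tensor names and shapes are illustrative assumptions, not the thesis's code:

    import math
    import torch
    import torch.nn.functional as F

    def nce_loss(s_target, log_q_target, s_noise, log_q_noise, k):
        # s_target: (batch,) unnormalized model scores of the observed words
        # log_q_target: (batch,) log-probability of those words under the noise distribution
        # s_noise, log_q_noise: (batch, k) the same quantities for k sampled noise words
        log_k = math.log(k)
        # Log-odds that a word was drawn from the data rather than from the noise distribution
        delta_data = s_target - (log_q_target + log_k)
        delta_noise = s_noise - (log_q_noise + log_k)
        # Binary classification: push data words toward 1 and noise samples toward 0
        return -(F.logsigmoid(delta_data) + F.logsigmoid(-delta_noise).sum(dim=1)).mean()

Self-normalization, examined in the thesis, refers to the tendency of the scores s to behave as if already normalized once this objective converges.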
Schaeffer, Marion. "Towards efficient Knowledge Graph-based Retrieval Augmented Generation for conversational agents." Electronic Thesis or Diss., Normandie, 2025. http://www.theses.fr/2025NORMIR06.
Conversational agents have become widespread in recent years. Today, they have transcended their initial purpose of simulating a conversation with a computer program and are now valuable tools for accessing information and carrying out various tasks, from customer service to personal assistance. With the rise of text-generative models and Large Language Models (LLMs), the capabilities of conversational agents have increased tenfold. However, they are now subject to hallucinations, producing false information. A popular technique to limit the risk of hallucinations is Retrieval Augmented Generation (RAG), which injects knowledge into a text generation process. Such injected knowledge can be drawn from Knowledge Graphs (KGs), which are structured machine-readable knowledge representations. Therefore, we explore Knowledge Graph-based Retrieval Augmented Generation (KG-RAG) to build trusted conversational agents. We demonstrate our approach on a real-world use case for citizen support by building conversational agents for disability management in cities. We first present a history of conversational agents, introducing the approaches implemented over the years and the evaluation techniques. We then define KGs and ontologies, and explore construction and evaluation techniques. As we could not find a directly exploitable KG, our first contribution introduces the Ontology Learning Applied Framework (OLAF), a modular system built for automated and repeatable KG construction from unstructured text. OLAF integrates linguistic, statistical, and LLM-based techniques to generate Minimum Viable Ontologies for specific domains. Applied to real-world datasets, OLAF demonstrates robust performance through gold-standard evaluations and task-specific Competency Questions. We detail the construction process for a KG about disability management in a French city. We then propose an architecture for KG-RAG systems that enhances information retrieval by aligning user queries with KG structures through entity linking (EL), graph queries, and LLM-based retrieval approaches. We demonstrate our architecture on different use cases, which we evaluate using criteria such as performance, human preference, and environmental impact. While users tend to prefer Text-RAG, KG-RAG's reduced computational footprint underscores its potential for sustainable AI practices. Finally, we identify the retriever as the critical part of the architecture. We therefore tackle the retrieval task by exploring embeddings in various contexts, i.e., improving EL and retrieval, and providing a caching system. We also propose mechanisms for handling multi-turn conversations. This work establishes a comprehensive framework for KG-RAG systems, combining the semantic depth of KGs with the generative capabilities of LLMs to deliver accurate, contextual, and sustainable conversational agents. Contributions include OLAF for scalable KG construction, a robust KG-RAG pipeline, and embedding-based enhancements for retrieval and interaction quality. By addressing conversational agents' industrial challenges, such as scalability, retrieval precision, and conversational coherence, this research lays the foundation for deploying KG-RAG systems in diverse and specialised domains.
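As a schematic of the KG-RAG flow this abstract describes (entity linking, a graph query, then LLM generation grounded in the retrieved facts), here is a minimal Python sketch using rdflib. The generate() callable stands in for any LLM API, and the SPARQL pattern is a placeholder; the thesis's actual pipeline (OLAF-built KGs, caching, multi-turn handling) is considerably richer:

    from rdflib import Graph

    def kg_rag_answer(question, entity_uri, kg_path, generate):
        # Load the knowledge graph (format inferred from the file extension)
        g = Graph()
        g.parse(kg_path)
        # Graph query: retrieve triples about the entity linked from the user question
        rows = g.query(f"SELECT ?p ?o WHERE {{ <{entity_uri}> ?p ?o . }} LIMIT 20")
        facts = "\n".join(f"{p} {o}" for p, o in rows)
        # Inject the retrieved facts as grounding context to limit hallucinations
        prompt = (f"Answer the question using only these facts:\n{facts}\n"
                  f"Question: {question}")
        return generate(prompt)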
Zervakis, Georgios. "Enriching large language models with semantic lexicons and analogies." Electronic Thesis or Diss., Université de Lorraine, 2023. http://www.theses.fr/2023LORR0039.
Recent advances in deep learning and neural networks have made it possible to address complex natural language processing tasks, which find application in a plethora of real-world problems ranging from smart assistants in mobile devices to the prediction of cancer. Nonetheless, modern systems based on these frameworks exhibit various limitations that may compromise their performance and trustworthiness, render them unfair towards minorities, or subject them to privacy leakage. It is our belief that integrating symbolic knowledge and reasoning into the deep learning framework is a necessary step towards addressing these limitations. For example, lexical resources can enrich deep neural networks with semantic or syntactic knowledge, and logical rules can provide learning and reasoning mechanisms. Therefore, the scope of this thesis is to develop and evaluate ways of integrating different types of symbolic knowledge and reasoning into a widely used language model, Bidirectional Encoder Representations from Transformers (BERT). In a first stage, we consider retrofitting, a simple and popular technique for refining distributional word embeddings based on relations coming from a semantic lexicon. Inspired by this technique, we present two methods for incorporating this knowledge into BERT contextualized embeddings. We evaluate these methods on three biomedical datasets for relation extraction and one movie review dataset for sentiment analysis, and show that they do not substantially impact the performance on these tasks. Furthermore, we conduct a qualitative analysis to provide further insights into this negative result. In a second stage, we integrate analogical reasoning with BERT as a means to improve its performance on the target sense verification task and make it more robust. To do so, we reformulate target sense verification as an analogy detection task. We present a hybrid model that combines BERT, to encode the input data into quadruples, with a convolutional neural classifier that decides whether they constitute valid analogies. We test our system on a benchmark dataset and show that it can outperform existing approaches. Our empirical study shows the importance of the input encoding for BERT, and how this dependence gets alleviated by integrating the axiomatic properties of analogies during training, while preserving performance and improving robustness.
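The retrofitting technique this abstract takes as its starting point reduces to a short iterative update that pulls each word vector toward its lexical neighbours while keeping it close to its original position. The following NumPy sketch is a generic illustration of that classic algorithm (Faruqui et al.'s formulation, with uniform weights alpha and beta), not the thesis's BERT-specific variants:

    import numpy as np

    def retrofit(vectors, lexicon, iterations=10, alpha=1.0, beta=1.0):
        # vectors: dict word -> np.ndarray (the original distributional embeddings)
        # lexicon: dict word -> list of semantically related words (e.g., synonyms)
        new_vectors = {w: v.copy() for w, v in vectors.items()}
        for _ in range(iterations):
            for word, neighbours in lexicon.items():
                nbrs = [n for n in neighbours if n in new_vectors]
                if word not in new_vectors or not nbrs:
                    continue
                # Weighted average of the original vector and its lexical neighbours
                total = alpha * vectors[word] + beta * sum(new_vectors[n] for n in nbrs)
                new_vectors[word] = total / (alpha + beta * len(nbrs))
        return new_vectors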
Chadha, Vikrampal. "Simulation of large-scale system-level models." Thesis, This resource online, 1994. http://scholar.lib.vt.edu/theses/available/etd-12162009-020334/.
Bughio, Kulsoom Saima. "IoMT security: A semantic framework for vulnerability detection in remote patient monitoring." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2024. https://ro.ecu.edu.au/theses/2841.
Hittner, Brian Edward. "Rendering large-scale terrain models and positioning objects in relation to 3D terrain." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2003. http://library.nps.navy.mil/uhtbin/hyperion-image/03Dec%5FHittner.pdf.
Thesis advisor(s): Don Brutzman, Curt Blais. Includes bibliographical references (p. 117-118). Also available online.
Kropff, Emilio. "Statistical and dynamical properties of large cortical network models: insights into semantic memory and language." Doctoral thesis, SISSA, 2007. http://hdl.handle.net/20.500.11767/4639.
Zhao, Ying. "Effective Authorship Attribution in Large Document Collections." RMIT University. Computer Science and Information Technology, 2008. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20080730.162501.
Pan, Bi-Yu. "Hierarchical test generation for VHDL behavioral models." Thesis, This resource online, 1992. http://scholar.lib.vt.edu/theses/available/etd-09052009-040449/.
Books on the topic "Large Language Models (LLM)"
Amaratunga, Thimira. Understanding Large Language Models. Berkeley, CA: Apress, 2023. http://dx.doi.org/10.1007/979-8-8688-0017-7.
Martra, Pere. Large Language Models Projects. Berkeley, CA: Apress, 2024. http://dx.doi.org/10.1007/979-8-8688-0515-8.
Kucharavy, Andrei, Octave Plancherel, Valentin Mulder, Alain Mermoud, and Vincent Lenders, eds. Large Language Models in Cybersecurity. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-54827-7.
Marcondes, Francisco S., Adelino Gala, Renata Magalhães, Fernando Perez de Britto, Dalila Durães, and Paulo Novais. Natural Language Analytics with Generative Large-Language Models. Cham: Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-76631-2.
Kamath, Uday, Kevin Keenan, Garrett Somers, and Sarah Sorenson. Large Language Models: A Deep Dive. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-65647-7.
Singh, Bhawna. Building Applications with Large Language Models. Berkeley, CA: Apress, 2024. http://dx.doi.org/10.1007/979-8-8688-0569-1.
Grigorov, Dilyan. Introduction to Python and Large Language Models. Berkeley, CA: Apress, 2024. http://dx.doi.org/10.1007/979-8-8688-0540-0.
Mualla, Yazan, Liuwen Yu, Davide Liga, Igor Tchappi, and Réka Markovich, eds. Advances in Explainability, Agents, and Large Language Models. Cham: Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-89103-8.
Törnberg, Petter. How to Use Large-Language Models for Text Analysis. London: SAGE Publications Ltd, 2024. http://dx.doi.org/10.4135/9781529683707.
Jonnagaddala, Jitendra, Hong-Jie Dai, and Ching-Tai Chen, eds. Large Language Models for Automatic Deidentification of Electronic Health Record Notes. Singapore: Springer Nature Singapore, 2025. https://doi.org/10.1007/978-981-97-7966-6.
Book chapters on the topic "Large Language Models (LLM)"
Da Silva Gameiro, Henrique. "LLM Detectors." In Large Language Models in Cybersecurity, 197–204. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-54827-7_22.
Kucharavy, Andrei. "Overview of Existing LLM Families." In Large Language Models in Cybersecurity, 31–44. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-54827-7_3.
Würsch, Maxime, Dimitri Percia David, and Alain Mermoud. "Monitoring Emerging Trends in LLM Research." In Large Language Models in Cybersecurity, 153–61. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-54827-7_17.
Meier, Raphael. "LLM-Aided Social Media Influence Operations." In Large Language Models in Cybersecurity, 105–12. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-54827-7_11.
Majumdar, Subhabrata. "Standards for LLM Security." In Large Language Models in Cybersecurity, 225–31. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-54827-7_25.
Schillaci, Zachary. "LLM Adoption Trends and Associated Risks." In Large Language Models in Cybersecurity, 121–28. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-54827-7_13.
Martra, Pere. "Creating and Publishing Your Own LLM." In Large Language Models Projects, 297–318. Berkeley, CA: Apress, 2024. http://dx.doi.org/10.1007/979-8-8688-0515-8_7.
Li, Yixuan, Julian Parsert, and Elizabeth Polgreen. "Guiding Enumerative Program Synthesis with Large Language Models." In Computer Aided Verification, 280–301. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-65630-9_15.
Apaydin, Kaan, and Yorck Zisgen. "Local Large Language Models for Business Process Modeling." In Lecture Notes in Business Information Processing, 605–9. Cham: Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-82225-4_44.
Vogelsang, Terry. "LLM Controls Execution Flow Hijacking." In Large Language Models in Cybersecurity, 99–104. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-54827-7_10.
Conference papers on the topic "Large Language Models (LLM)"
Pasupuleti, Rajesh, Ravi Vadapalli, Christopher Mader, and Norris Timothy. "Popular LLM-Large Language Models in Enterprise Applications." In 2024 2nd International Conference on Foundation and Large Language Models (FLLM), 125–31. IEEE, 2024. https://doi.org/10.1109/fllm63129.2024.10852443.
Fernando, Saruni, Robert Kunzelmann, Daniela Sánchez Lopera, Jad Al Halabi, and Wolfgang Ecker. "Boosting Productivity of Hardware Documentation Using Large Language Models." In 2024 IEEE LLM Aided Design Workshop (LAD), 1. IEEE, 2024. http://dx.doi.org/10.1109/lad62341.2024.10691698.
Vijayaraghavan, Prashanth, Luyao Shi, Ehsan Degan, and Xin Zhang. "CircuitSynth: Leveraging Large Language Models for Circuit Topology Synthesis." In 2024 IEEE LLM Aided Design Workshop (LAD), 1–6. IEEE, 2024. http://dx.doi.org/10.1109/lad62341.2024.10691716.
Coppolillo, Erica, Francesco Calimeri, Giuseppe Manco, Simona Perri, and Francesco Ricca. "LLASP: Fine-tuning Large Language Models for Answer Set Programming." In 21st International Conference on Principles of Knowledge Representation and Reasoning (KR 2024), 834–44. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/kr.2024/78.
Jignasu, Anushrut, Kelly Marshall, Baskar Ganapathysubramanian, Aditya Balu, Chinmay Hegde, and Adarsh Krishnamurthy. "Evaluating Large Language Models for G-Code Debugging, Manipulation, and Comprehension." In 2024 IEEE LLM Aided Design Workshop (LAD), 1–5. IEEE, 2024. http://dx.doi.org/10.1109/lad62341.2024.10691700.
Xu, Haocheng, Haotian Hu, and Sitao Huang. "Optimizing High-Level Synthesis Designs with Retrieval-Augmented Large Language Models." In 2024 IEEE LLM Aided Design Workshop (LAD), 1–5. IEEE, 2024. http://dx.doi.org/10.1109/lad62341.2024.10691855.
Petrovic, Nenad, Krzysztof Lebioda, Vahid Zolfaghari, André Schamschurko, Sven Kirchner, Nils Purschke, Fengjunjie Pan, and Alois Knoll. "LLM-Driven Testing for Autonomous Driving Scenarios." In 2024 2nd International Conference on Foundation and Large Language Models (FLLM), 173–78. IEEE, 2024. https://doi.org/10.1109/fllm63129.2024.10852505.
Ru Teh, Jocelyn Shuang, Eng Keong Koay, Shin Wei Lim, Kuan Heng Lee, Mee Sim Lai, Meng Siong Lee, and Yuan Kuok Nee. "Adaptive Composite Accuracy Scoring for Domain-Specific LLM Evaluation." In 2024 2nd International Conference on Foundation and Large Language Models (FLLM), 272–79. IEEE, 2024. https://doi.org/10.1109/fllm63129.2024.10852484.
Zolfaghari, Vahid, Nenad Petrovic, Fengjunjie Pan, Krzysztof Lebioda, and Alois Knoll. "Adopting RAG for LLM-Aided Future Vehicle Design." In 2024 2nd International Conference on Foundation and Large Language Models (FLLM), 437–42. IEEE, 2024. https://doi.org/10.1109/fllm63129.2024.10852467.
Loukil, Faiza, Sarah Cadereau, Hervé Verjus, Mattéo Galfre, Kavé Salamatian, David Telisson, Quentin Kembellec, and Olivier Le Van. "LLM-centric pipeline for information extraction from invoices." In 2024 2nd International Conference on Foundation and Large Language Models (FLLM), 569–75. IEEE, 2024. https://doi.org/10.1109/fllm63129.2024.10852504.
Reports on the topic "Large Language Models (LLM)"
Zhang, Hao. Large Language Model (LLM) Monthly Report (2024 Apr). ResearchHub Technologies, Inc., May 2024. http://dx.doi.org/10.55277/researchhub.0ps6xenm.
Azuara Herrera, Oliver, Laura Ripani, and Eric Torres Ramirez. AI and the Increase of Productivity and Labor Inequality in Latin America: Potential Impact of Large Language Models on Latin American Workforce. Inter-American Development Bank, September 2024. http://dx.doi.org/10.18235/0013152.
Rosenblat, Sruly, Tim O'Reilly, and Ilan Strauss. Beyond Public Access in LLM Pre-Training Data: Non-public book content in OpenAI’s Models. AI Disclosures Project, Social Science Research Council, April 2025. https://doi.org/10.35650/aidp.4111.d.2025.
Alonso-Robisco, Andres, and Jose Manuel Carbo. Analysis of CBDC Narrative of Central Banks using Large Language Models. Madrid: Banco de España, August 2023. http://dx.doi.org/10.53479/33412.
Marra de Artiñano, Ignacio, Franco Riottini Depetris, and Christian Volpe Martincus. Automatic Product Classification in International Trade: Machine Learning and Large Language Models. Inter-American Development Bank, July 2023. http://dx.doi.org/10.18235/0005012.
Moreno, Ángel Iván, and Teresa Caminero. Assessing the data challenges of climate-related disclosures in European banks. A text mining study. Madrid: Banco de España, September 2023. http://dx.doi.org/10.53479/33752.
Harrison, Stephen. Wikipedia’s Governance Challenge: Policies and Guardrails for New Generative AI Technologies. Balsillie School of International Affairs, November 2024. https://doi.org/10.51644/bcs002.
Maerz, Seraphine. Using AI for Text Analysis in R. Instats Inc., 2024. http://dx.doi.org/10.61700/ti5uexui5ilrd1663.
Korinek, Anton, and Jai Vipra. Concentrating Intelligence: Scaling and Market Structure in Artificial Intelligence. Institute for New Economic Thinking Working Paper Series, October 2024. http://dx.doi.org/10.36687/inetwp228.
Bastidas Ripalda, Rafaela, Stephen Hansen, John Leon-Diaz, and Yabra Muvdi. Tracking the Reform Process from Newspaper Data. Inter-American Development Bank, March 2025. https://doi.org/10.18235/0013467.