Academic literature on the topic 'XAI Interpretability'

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'XAI Interpretability.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "XAI Interpretability"

1

Thalpage, Nipuna. "Unlocking the Black Box: Explainable Artificial Intelligence (XAI) for Trust and Transparency in AI Systems." Journal of Digital Art & Humanities 4, no. 1 (2023): 31–36. http://dx.doi.org/10.33847/2712-8148.4.1_4.

Abstract:
Explainable Artificial Intelligence (XAI) has emerged as a critical field in AI research, addressing the lack of transparency and interpretability in complex AI models. This conceptual review explores the significance of XAI in promoting trust and transparency in AI systems. The paper analyzes existing literature on XAI, identifies patterns and gaps, and presents a coherent conceptual framework. Various XAI techniques, such as saliency maps, attention mechanisms, rule-based explanations, and model-agnostic approaches, are discussed to enhance interpretability. The paper highlights the challenges …
2

Verma, Ashish. "Advancements in Explainable AI: Bridging the Gap Between Interpretability and Performance in Machine Learning Models." International Journal of Machine Learning, AI & Data Science Evolution 1, no. 1 (2025): 1–8. https://doi.org/10.63665/ijmlaidse.v1i1.01.

Abstract:
The growing adoption of Artificial Intelligence (AI) and Machine Learning (ML) in critical decision-making areas such as healthcare, finance, and autonomous systems has raised concerns regarding the interpretability of these models. While deep learning and other advanced ML models deliver high accuracy, their "black box" nature makes it difficult to explain their decision-making processes. Explainable AI (XAI) aims to bridge this gap by introducing methods that enhance transparency without significantly compromising performance. This paper explores key advancements in XAI, including model-agnostic …
3

Mohan, Raja Pulicharla. "Explainable AI in the Context of Data Engineering: Unveiling the Black Box in the Pipeline." Explainable AI in the Context of Data Engineering: Unveiling the Black Box in the Pipeline 9, no. 1 (2024): 6. https://doi.org/10.5281/zenodo.10623633.

Abstract:
The burgeoning integration of Artificial Intelligence (AI) into data engineering pipelines has spurred phenomenal advancements in automation, efficiency, and insights. However, the opaqueness of many AI models, often referred to as "black boxes," raises concerns about trust, accountability, and interpretability. Explainable AI (XAI) emerges as a critical bridge between the power of AI and the human stakeholders in data engineering workflows. This paper delves into the symbiotic relationship between XAI and data engineering, exploring how XAI tools and techniques can enhance the transparency …
4

Milad, Akram, and Mohamed Whiba. "Exploring Explainable Artificial Intelligence Technologies: Approaches, Challenges, and Applications." International Science and Technology Journal 34, no. 1 (2024): 1–21. http://dx.doi.org/10.62341/amia8430.

Abstract:
This research paper delves into the transformative domain of Explainable Artificial Intelligence (XAI) in response to the evolving complexities of artificial intelligence and machine learning. Navigating through XAI approaches, challenges, applications, and future directions, the paper emphasizes the delicate balance between model accuracy and interpretability. Challenges such as the trade-off between accuracy and interpretability, explaining black-box models, privacy concerns, and ethical considerations are comprehensively addressed. Real-world applications showcase XAI's potential in healthcare …
5

Duggal, Bhanu. "Explainable AI for Fraud Detection in Financial Transactions." International Journal of Scientific Research in Engineering and Management 9, no. 4 (2025): 1–9. https://doi.org/10.55041/ijsrem44356.

Abstract:
Explainable AI (XAI) improves machine learning models' interpretability, especially for detecting financial fraud. Financial fraud is a growing threat, with criminals using increasingly sophisticated methods to circumvent standard security measures. This research article investigates various XAI strategies for increasing transparency and confidence in fraud detection algorithms. The study examines the efficacy of SHAP (Shapley Additive Explanations), LIME (Local Interpretable Model-agnostic Explanations), and attention mechanisms in providing insight into model predictions. …
6

Ramakrishna, Jeevakala Siva, Sonagiri China Venkateswarlu, Kommu Naveen Kumar, and Parikipandla Shreya. "Development of explainable machine intelligence models for heart sound abnormality detection." Indonesian Journal of Electrical Engineering and Computer Science 36, no. 2 (2024): 846–53. http://dx.doi.org/10.11591/ijeecs.v36.i2.pp846-853.

Abstract:
Developing explainable machine intelligence (XAI) models for heart sound abnormality detection is a crucial area of research aimed at improving the interpretability and transparency of machine learning algorithms in medical diagnostics. In this study, we propose a framework for building XAI models that can effectively detect abnormalities in heart sounds while providing interpretable explanations for their predictions. We leverage techniques such as SHapley additive exPlanations (SHAP) and local interpretable model-agnostic explanations (LIME) to generate explanations for model predictions …
7

Ozdemir, Olcar. "Explainable AI (XAI) in Healthcare: Bridging the Gap between Accuracy and Interpretability." Journal of Science, Technology and Engineering Research 1, no. 1 (2024): 32–44. https://doi.org/10.64206/0z78ev10.

Abstract:
Artificial Intelligence (AI) has demonstrated significant potential in revolutionizing healthcare by enhancing diagnostic accuracy, predicting patient outcomes, and optimizing treatment plans. However, the increasing reliance on complex, black-box models has raised critical concerns around transparency, trust, and accountability—particularly in high-stakes medical settings where interpretability is vital for clinical decision-making. This paper explores Explainable AI (XAI) as a solution to bridge the gap between model performance and human interpretability. We review current XAI techniques …
8

Hutke, Ankush, Kiran Sahu, Ameet Mishra, Aniruddha Sawant, and Ruchitha Gowda. "Predict XAI." International Research Journal of Innovations in Engineering and Technology 9, no. 4 (2025): 172–76. https://doi.org/10.47001/irjiet/2025.904026.

Abstract:
Stroke predictors using Explainable Artificial Intelligence (XAI) aim to provide accurate and interpretable stroke risk predictions. This research integrates machine learning models such as Decision Trees, Random Forest, Logistic Regression, and Support Vector Machines, leveraging ensemble learning techniques like stacking and voting to enhance predictive accuracy. The system employs XAI techniques such as SHAP (SHapley Additive Explanations) and LIME (Local Interpretable Model-Agnostic Explanations) to ensure model transparency and interpretability. This paper presents the methodology, implementation …
9

Amirineni, Sreenivasarao. "Enhancing Predictive Analytics in Business Intelligence through Explainable AI: A Case Study in Financial Products." Journal of Artificial Intelligence General Science (JAIGS) 6, no. 1 (2024): 258–88. http://dx.doi.org/10.60087/jaigs.v6i1.251.

Abstract:
Today, when the importance of data-based decision-making is impossible to question, the use of Explainable Artificial Intelligence (XAI) in business intelligence (BI) has inestimable benefits for the financial industry. This paper discusses how XAI influences predictive analytics in BI systems and how it may improve interpretability, and useful suggestions for financial product companies. Thus, within the context of this study, an XAI framework helps the financial institutions to employ higher-performing and more accurate models, like gradient boosting and neural networks, while sustaining interpretability …
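Several of the articles above (Duggal; Ramakrishna et al.; Hutke et al.) apply SHAP to attribute a classifier's predictions to its input features. As a point of reference for those entries, the sketch below shows a generic SHAP workflow on synthetic tabular data; the dataset, model, and feature indices are illustrative assumptions, not the setup of any listed paper.

```python
# Minimal SHAP workflow on synthetic tabular data (illustrative sketch;
# not the pipeline of any paper listed above). Requires shap and scikit-learn.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in for a fraud- or diagnosis-style tabular dataset.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:100])  # shape: (100, 10)

# Averaging |SHAP| over instances gives a global feature-importance ranking.
importance = np.abs(shap_values).mean(axis=0)
for i in np.argsort(importance)[::-1][:5]:
    print(f"feature_{i}: mean |SHAP| = {importance[i]:.4f}")
```

A single row of `shap_values` is a local explanation: its entries, together with `explainer.expected_value`, sum to the model's raw margin output for that instance.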

Dissertations / Theses on the topic "XAI Interpretability"

1

Fall, Ahmad. "Interpretability of Neural Networks applied to Electrocardiograms : Translational Applications in Cardiovascular Diseases." Electronic Thesis or Diss., Sorbonne université, 2023. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2023SORUS473.pdf.

Abstract:
The electrocardiogram (ECG) is a non-invasive tool for assessing the electrical activity of the heart. ECGs are widely used in the detection of cardiac abnormalities. Deep learning algorithms enable the automatic detection of complex patterns in ECG data, which offers significant potential for improving medical diagnosis. However, their adoption is held back by clinicians' low level of trust in the models and by the massive amounts of data required to train them. Artificial intelligence, in particular deep learning, …
2

Seveso, Andrea. "Symbolic Reasoning for Contrastive Explanations." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2023. https://hdl.handle.net/10281/404830.

Abstract:
The need for explanations of Machine Learning (ML) systems is growing as new models outperform their predecessors, becoming more complex and less comprehensible to end users. An essential step in eXplainable Artificial Intelligence (XAI) research is the creation of interpretable models that aim to approximate the decision function of a black-box algorithm. Although several XAI methods have been proposed in recent years, not enough attention has been paid to explaining how models change their …
3

Matz, Filip, and Yuxiang Luo. "Explaining Automated Decisions in Practice : Insights from the Swedish Credit Scoring Industry." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-300897.

Abstract:
The field of explainable artificial intelligence (XAI) has gained momentum in recent years following the increased use of AI systems across industries leading to bias, discrimination, and data security concerns. Several conceptual frameworks for how to reach AI systems that are fair, transparent, and understandable have been proposed, as well as a number of technical solutions improving some of these aspects in a research context. However, there is still a lack of studies examining the implementation of these concepts and techniques in practice. This research aims to bridge the gap between …
4

Laugel, Thibault. "Interprétabilité locale post-hoc des modèles de classification 'boîtes noires'." Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS215.

Abstract:
This thesis focuses on the field of XAI (explainable AI), and more specifically on the paradigm of post-hoc local interpretability, that is, the generation of explanations for a single prediction of a trained classifier. In particular, we study a fully agnostic setting, in which the explanation is generated without using any knowledge of the classification model (treated as a black box) or of the data used to train it. In this thesis, we identify several problems that can arise in this context and that can …
5

Afchar, Darius. "Interpretable Music Recommender Systems." Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS608.

Abstract:
"Why do I always get recommended the same music?" "Why does our system recommend that to users?" Nowadays, streaming platforms are the most common way to listen to recorded music. Yet music recommendation — at the heart of these platforms — is far from a trivial matter. Users and engineers are sometimes equally puzzled by the behaviour of a music recommender system (MRS). MRSs have been successfully used to help explore catalogues of tens of millions …
6

Bhattacharya, Debarpan. "A Learnable Distillation Approach For Model-agnostic Explainability With Multimodal Applications." Thesis, 2023. https://etd.iisc.ac.in/handle/2005/6108.

Abstract:
Deep neural networks are the most widely used examples of sophisticated mapping functions from feature space to class labels. In recent years, several high-impact decisions in domains such as finance, healthcare, law and autonomous driving are made with deep models. In these tasks, the model decisions lack interpretability, and pose difficulties in making the models accountable. Hence, there is a strong demand for developing explainable approaches which can elicit how the deep neural architecture, despite the astounding performance improvements observed in all fields, including computer vision …
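Laugel's thesis above studies post-hoc local interpretability in a fully agnostic setting: explaining a single prediction of a trained classifier without access to the model's internals or training data. LIME is the best-known method in this family; the following is a minimal sketch on synthetic data, assuming the `lime` package, and is not the approach developed in the thesis.

```python
# Post-hoc local explanation of a black-box classifier with LIME
# (illustrative sketch; not the thesis's own method).
# Requires lime and scikit-learn.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)  # the "black box"

# LIME perturbs the instance, queries the black box on the perturbations,
# and fits a locally weighted linear surrogate around the one prediction.
explainer = LimeTabularExplainer(
    X,
    feature_names=[f"f{i}" for i in range(8)],
    class_names=["negative", "positive"],
    mode="classification",
)
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print(explanation.as_list())  # [(feature condition, local weight), ...]
```

The `num_features` parameter caps the size of the local surrogate, trading completeness of the explanation for readability.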

Book chapters on the topic "XAI Interpretability"

1

Gupta, Surbhi, Shubham Gupta, and Ankur Gupta. "XAI: Interpretability and explainability." In Smart Computing and Communication for Sustainable Convergence. CRC Press, 2025. https://doi.org/10.1201/9781003637530-36.

2

Dib, Lynda. "Formal Definition of Interpretability and Explainability in XAI." In Lecture Notes in Networks and Systems. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-66431-1_9.

3

Pereira, João, Filipe Oliveira, Miguel Guimarães, Davide Carneiro, Miguel Ribeiro, and Gilberto Loureiro. "Addressing the Limitations of LIME for Explainable AI in Manufacturing: A Case Study in Textile Defect Detection." In Lecture Notes in Mechanical Engineering. Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-86489-6_27.

Abstract:
Explainable Artificial Intelligence (xAI) techniques are nowadays widely accepted as one of the paths towards addressing the interpretability and transparency issues of using black box models. Such techniques may allow to understand, to a certain extent, how or why a model produced a certain output, which may even help identify problems with the model or the data. As in many other domains, the use of xAI techniques in the context of manufacturing is seen as fundamental towards understanding model outputs, supporting informed decision-making, or enabling more human-centric approaches. …
4

Dinu, Marius-Constantin, Markus Hofmarcher, Vihang P. Patil, et al. "XAI and Strategy Extraction via Reward Redistribution." In xxAI - Beyond Explainable AI. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_10.

Abstract:
In reinforcement learning, an agent interacts with an environment from which it receives rewards, which are then used to learn a task. However, it is often unclear what strategies or concepts the agent has learned to solve the task. Thus, interpretability of the agent's behavior is an important aspect in practical applications, next to the agent's performance at the task itself. However, with the increasing complexity of both tasks and agents, interpreting the agent's behavior becomes much more difficult. Therefore, developing new interpretable RL agents is of high importance. To this end …
5

Sesana, Michele, Sara Cavallaro, Mattia Calabresi, et al. "Process and Product Quality Optimization with Explainable Artificial Intelligence." In Artificial Intelligence in Manufacturing. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-46452-2_26.

Abstract:
In today's rapidly evolving technological landscape, businesses across various industries face a critical challenge: maintaining and enhancing the quality of both their processes and the products they deliver. Traditionally, this task has been tackled through manual analysis, statistical methods, and domain expertise. However, with the advent of artificial intelligence (AI) and machine learning, new opportunities have emerged to revolutionize quality optimization. This chapter explores the process and product quality optimization in a real industrial use case with the help of explainable …
6

Hanif, Ambreen, Amin Beheshti, Boualem Benatallah, et al. "A Comprehensive Survey of Explainable Artificial Intelligence (XAI) Methods: Exploring Transparency and Interpretability." In Web Information Systems Engineering – WISE 2023. Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-7254-8_71.

7

Haji, Abdelilah, and Badr Hssina. "XAI-Driven Credit Risk Modeling: A Comparative Analyses of Model Performance and Interpretability." In Lecture Notes in Networks and Systems. Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-88304-0_13.

8

Stevens, Alexander, Johannes De Smedt, and Jari Peeperkorn. "Quantifying Explainability in Outcome-Oriented Predictive Process Monitoring." In Lecture Notes in Business Information Processing. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-98581-3_15.

Abstract:
The growing interest in applying machine and deep learning algorithms in an Outcome-Oriented Predictive Process Monitoring (OOPPM) context has recently fuelled a shift to use models from the explainable artificial intelligence (XAI) paradigm, a field of study focused on creating explainability techniques on top of AI models in order to legitimize the predictions made. Nonetheless, most classification models are evaluated primarily on a performance level, where XAI requires striking a balance between either simple models (e.g. linear regression) or models using complex inference structures …
9

Hasan, Mahmudul, Abdullah Haque, Md Mahmudul Islam, and Md Al Amin. "How Much Do the Features Affect the Classifiers on UNSW-NB15? An XAI Equipped Model Interpretability." In Cyber Security and Business Intelligence. Routledge, 2023. http://dx.doi.org/10.4324/9781003285854-9.

10

Warmuth, Christian, and Henrik Leopold. "On the Potential of Textual Data for Explainable Predictive Process Monitoring." In Lecture Notes in Business Information Processing. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-27815-0_14.

Abstract:
Predictive process monitoring techniques leverage machine learning (ML) to predict future characteristics of a case, such as the process outcome or the remaining run time. Available techniques employ various models and different types of input data to produce accurate predictions. However, from a practical perspective, explainability is another important requirement besides accuracy since predictive process monitoring techniques frequently support decision-making in critical domains. Techniques from the area of explainable artificial intelligence (XAI) aim to provide this capability …
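Chapter 3 above (Pereira et al.) addresses limitations of LIME; one frequently reported limitation is that its explanations rest on random perturbation sampling and can therefore vary between runs. The sketch below illustrates one simple way to probe that instability by re-explaining the same instance several times; the data and model are synthetic placeholders, and this is not the chapter's own evaluation protocol.

```python
# Probing the run-to-run stability of LIME explanations (illustrative
# sketch, not the chapter's protocol). Requires lime and scikit-learn.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)
explainer = LimeTabularExplainer(X, mode="classification")

def top_features(k=3):
    # Each call draws fresh perturbation samples, so results may differ.
    exp = explainer.explain_instance(X[0], model.predict_proba, num_features=k)
    return [name for name, _ in exp.as_list()]

rankings = [top_features() for _ in range(5)]
for r in rankings:
    print(r)  # identical rankings across runs would indicate stability
```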

Conference papers on the topic "XAI Interpretability"

1

Nascita, Alfredo, Raffaele Carillo, Federica Giampetraglia, Antonio Iacono, Valerio Persico, and Antonio Pescapé. "Interpretability and Complexity Reduction in IoT Network Anomaly Detection via XAI." In 2024 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW). IEEE, 2024. http://dx.doi.org/10.1109/icasspw62465.2024.10626031.

2

Haseena Rahmath, P., Kuldeep Chaurasia, and Anika Gupta. "Unlocking Interpretability: XAI Strategies for Enhanced Insight in GNN-Based Hyperspectral Image Classification." In 2024 IEEE International Conference on Computer Vision and Machine Intelligence (CVMI). IEEE, 2024. https://doi.org/10.1109/cvmi61877.2024.10781678.

3

Fahim, Md, Md Zia Ul Hassan Chowdhury, Md Jiabul Hoque, and Mohammad Nadib Hasan. "Visualizing Crop Disease Detection Exploring Deep Learning with Custom CNN Model and XAI for Enhanced Interpretability." In 2024 International Conference on Innovations in Science, Engineering and Technology (ICISET). IEEE, 2024. https://doi.org/10.1109/iciset62123.2024.10940041.

4

Bobzin, K., H. Heinemann, M. Erck, and G. Nassar. "Reshaping Thermal Spraying: Explainable Artificial Intelligence Meets Plasma Spraying." In ITSC 2025. ASM International, 2025. https://doi.org/10.31399/asm.cp.itsc2025p0237.

Abstract:
This study employs an XAI framework to gain insights into Residual Network and Artificial Neural Network models trained on both simulations and experimental data to predict deposition efficiency (DE) in atmospheric plasma spraying (APS). SHapley Additive exPlanations (SHAP), an interpretability framework, was then applied to help identify which process parameters have the most significant influence on the DE and to reveal how changes in specific parameters affect the DE by elucidating their impact on the model predictions.
5

Chiu, Min-Chi, Tin-Chih Toly Chen, and Hsin-Chieh Wu. "Enhancing the Understandability and Interpretability of Type-II Diabetes Diagnosis Using a Deep Fuzzy Neural and XAI Approach." In 2025 IEEE 15th Symposium on Computer Applications & Industrial Electronics (ISCAIE). IEEE, 2025. https://doi.org/10.1109/iscaie64985.2025.11080779.

6

Arenas, Marcelo, Pablo Barceló, Diego Bustamante, Jose Caraball, and Bernardo Subercaseaux. "A Uniform Language to Explain Decision Trees." In 21st International Conference on Principles of Knowledge Representation and Reasoning (KR 2023). International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/kr.2024/6.

Abstract:
The formal XAI community has studied a plethora of interpretability queries aiming to understand the classifications made by decision trees. However, a more uniform understanding of what questions we can hope to answer about these models, traditionally deemed to be easily interpretable, has remained elusive. In an initial attempt to understand uniform languages for interpretability, Arenas et al. proposed FOIL, a logic for explaining black-box ML models, and showed that it can express a variety of interpretability queries. However, we show that FOIL is limited in two important senses …
7

Vihurskyi, Bohdan. "Credit Card Fraud Detection with XAI: Improving Interpretability and Trust." In 2024 Third International Conference on Distributed Computing and Electrical Circuits and Electronics (ICDCECE). IEEE, 2024. http://dx.doi.org/10.1109/icdcece60827.2024.10548159.

8

Tabassoum, Nafiza, and Md Ali Akber. "Interpretability of Machine Learning Algorithms for News Category Classification Using XAI." In 2024 6th International Conference on Electrical Engineering and Information & Communication Technology (ICEEICT). IEEE, 2024. http://dx.doi.org/10.1109/iceeict62016.2024.10534385.

9

Miyaji, Renato O., and Pedro L. P. Corrêa. "Interpreting ML in Ecology: A RAG-Based Approach to Explainability in Species Distribution Modeling." In Workshop de Computação Aplicada à Gestão do Meio Ambiente e Recursos Naturais. Sociedade Brasileira de Computação - SBC, 2025. https://doi.org/10.5753/wcama.2025.8219.

Abstract:
Species Distribution Modeling (SDM) relies increasingly on Machine Learning (ML), but many models remain opaque, limiting their usability for non-experts. While SHAP and LIME improve interpretability, they still require technical expertise. This study proposes an agentic Retrieval-Augmented Generation (RAG) framework, integrating ML models (Logistic Regression, Random Forests, MLP), XAI techniques (SHAP, LIME), and a LLM-powered explanation system to enhance explainability. Using GoAmazon 2014/15 environmental data and GBIF species occurrences, we evaluated explanations based on completeness, …
10

Phan, Hieu, Loc Le, Mao Nguyen, et al. "XGA-Osteo: Towards XAI-Enabled Knee Osteoarthritis Diagnosis with Adversarial Learning." In Thirty-Third International Joint Conference on Artificial Intelligence (IJCAI-24). International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/1029.

Abstract:
This research introduces XGA-Osteo, an innovative approach that leverages Explainable Artificial Intelligence (XAI) to enhance the accuracy and interpretability of knee osteoarthritis diagnosis. Recent studies have utilized AI approaches to automate the diagnosis using knee joint X-ray images. However, these studies have primarily focused on predicting the severity of osteoarthritis without providing additional information to assist doctors in their diagnoses. In addition to accurately diagnosing the severity of the condition, XGA-Osteo generates an anomaly map, produced from a reconstructed image …
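Several of the papers above pair SHAP with neural networks, for example Bobzin et al.'s deposition-efficiency models. Since TreeExplainer does not apply to such models, a model-agnostic explainer is the usual fallback; the sketch below wraps a small MLP regressor with `shap.KernelExplainer` on synthetic data. The data, architecture, and background-summarization choices are illustrative assumptions, not the paper's actual setup.

```python
# Model-agnostic SHAP for a neural-network regressor via KernelExplainer
# (illustrative sketch; assumed data and model, not the paper's setup).
# Requires shap and scikit-learn.
import shap
from sklearn.datasets import make_regression
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for process parameters -> deposition efficiency.
X, y = make_regression(n_samples=300, n_features=6, noise=5.0, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                     random_state=0).fit(X, y)

# KernelExplainer needs a background dataset; summarizing it with k-means
# keeps the number of model evaluations manageable.
background = shap.kmeans(X, 10)
explainer = shap.KernelExplainer(model.predict, background)
shap_values = explainer.shap_values(X[:5])  # shape: (5, 6)

# Each row decomposes one prediction into additive per-feature contributions
# around the baseline explainer.expected_value.
print(shap_values)
print("baseline:", explainer.expected_value)
```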