Academic literature on the topic 'AI explainability interpretability model-agnostic explanations graph models'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'AI explainability interpretability model-agnostic explanations graph models.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever this information is available in the metadata.

Journal articles on the topic "AI explainability interpretability model-agnostic explanations graph models"

1

Amato, Alba, and Dario Branco. "SemFedXAI: A Semantic Framework for Explainable Federated Learning in Healthcare." Information 16, no. 6 (2025): 435. https://doi.org/10.3390/info16060435.

Full text
Abstract:
Federated Learning (FL) is emerging as an encouraging paradigm for AI model training in healthcare that enables collaboration among institutions without revealing sensitive information. The lack of transparency in federated models makes their deployment in healthcare settings more difficult, as knowledge of the decision process is of primary importance. This paper introduces SemFedXAI, a new framework that combines Semantic Web technologies and federated learning to achieve better explainability of artificial intelligence models in healthcare. SemFedXAI extends traditional FL architectures with three key components: (1) Ontology-Enhanced Federated Learning that enriches models with domain knowledge, (2) a Semantic Aggregation Mechanism that uses semantic technologies to improve the consistency and interpretability of federated models, and (3) a Knowledge Graph-Based Explanation component that provides contextualized explanations of model decisions. We evaluated SemFedXAI within the context of e-health, reporting noteworthy advancements in explanation quality and predictive performance compared to conventional federated learning methods. The findings refer to the prospects of combining semantic technologies and federated learning as an avenue for building more explainable and resilient AI systems in healthcare.
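The knowledge-graph-based explanation component described in this abstract can be pictured, at a much smaller scale, as a lookup that attaches domain concepts to the features a model relies on. The sketch below is a minimal, hypothetical illustration in Python; the attribution scores and the tiny "knowledge graph" are invented placeholders, not part of SemFedXAI:

```python
# Minimal, illustrative sketch: attach domain-knowledge context to feature
# attributions. The knowledge graph and attribution values are hypothetical.

# Hypothetical feature attributions produced by any explainer (SHAP-like scores).
attributions = {"hba1c": 0.42, "bmi": 0.17, "age": 0.05}

# Tiny "knowledge graph" as adjacency lists: feature -> related clinical concepts.
knowledge_graph = {
    "hba1c": ["glycemic_control", "diabetes_mellitus"],
    "bmi": ["obesity", "metabolic_syndrome"],
    "age": ["age_related_risk"],
}

def contextualised_explanation(attributions, kg, top_k=2):
    """Rank features by attribution and enrich each with linked concepts."""
    ranked = sorted(attributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    lines = []
    for feature, score in ranked[:top_k]:
        concepts = ", ".join(kg.get(feature, ["no linked concept"]))
        lines.append(f"{feature} (attribution {score:+.2f}) -> related concepts: {concepts}")
    return "\n".join(lines)

print(contextualised_explanation(attributions, knowledge_graph))
```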
2

Akhtar, Tauqeer. "Explainable AI in E-Commerce: Seller Recommendations with Ethnocentric Transparency." Journal of Electrical Systems 20, no. 11s (2024): 4825–37. https://doi.org/10.52783/jes.8814.

Full text
Abstract:
Personalized seller recommendations are fundamental to enhancing user experiences and increasing sales in e-commerce platforms. Traditional recommendation systems, however, often function as black-box models, offering limited interpretability. This paper explores the integration of Explainable AI (XAI) techniques, particularly Integrated Gradients (IG) and DeepLIFT (Deep Learning Important FeaTures), into a hybrid recommendation system. The proposed approach combines Matrix Factorization (MF) and Graph Neural Networks (GNNs) to deliver personalized and interpretable seller recommendations. Using real-world e-commerce datasets, the study evaluates how different features, such as user interaction history, social connections, and seller reputation contribute to recommendation outcomes. The system addresses the trade-offs between recommendation accuracy and interpretability, ensuring that insights are both actionable and trustworthy. Experimental results demonstrate that the hybrid model achieves substantial improvements in precision, recall, and F1-score compared to standalone MF and GNN-based approaches. Moreover, Integrated Gradients and DeepLIFT provide users with clear and intuitive explanations of the recommendation process, fostering trust in the system. This paper also introduces a comprehensive feature attribution analysis to quantify the impact of key factors, including behavioral patterns and network influence, on recommendation decisions. A comparative evaluation with state-of-the-art neural recommendation models highlights the effectiveness of the proposed system in balancing performance with interpretability. Finally, the study discusses future enhancements, such as incorporating explainability techniques tailored for multimodal data, employing reinforcement learning for adaptive personalization, and extending the model to handle dynamic user preferences. These findings underscore the importance of transparent, user-focused AI in driving innovation in e-commerce recommendation systems.
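Integrated Gradients, one of the attribution methods named in this abstract, averages the model's gradients along a straight path from a baseline to the input and scales them by the input difference. The following sketch approximates it for a toy PyTorch model; the architecture and input are placeholders and do not reflect the paper's hybrid MF/GNN recommender:

```python
import torch
import torch.nn as nn

# Toy scoring model standing in for a recommender's scoring head (placeholder).
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))

def integrated_gradients(model, x, baseline=None, steps=64):
    """Approximate Integrated Gradients for a single input vector x."""
    if baseline is None:
        baseline = torch.zeros_like(x)
    # Interpolation points along the straight line from baseline to x.
    alphas = torch.linspace(0.0, 1.0, steps).unsqueeze(1)   # (steps, 1)
    path = baseline + alphas * (x - baseline)                # (steps, n_features)
    path.requires_grad_(True)
    outputs = model(path).sum()
    grads = torch.autograd.grad(outputs, path)[0]            # (steps, n_features)
    avg_grad = grads.mean(dim=0)
    return (x - baseline) * avg_grad                         # attribution per feature

x = torch.tensor([0.8, -1.2, 0.3, 2.0])
attributions = integrated_gradients(model, x)
print("feature attributions:", attributions.detach().numpy())
```

Increasing `steps` tightens the approximation of the path integral at the cost of more forward and backward passes.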
3

Ali, Ali Mohammed Omar. "Explainability in AI: Interpretable Models for Data Science." International Journal for Research in Applied Science and Engineering Technology 13, no. 2 (2025): 766–71. https://doi.org/10.22214/ijraset.2025.66968.

Full text
Abstract:
As artificial intelligence (AI) continues to drive advancements across various domains, the need for explainability in AI models has become increasingly critical. Many state-of-the-art machine learning models, particularly deep learning architectures, operate as "black boxes," making their decision-making processes difficult to interpret. Explainable AI (XAI) aims to enhance model transparency, ensuring that AI-driven decisions are understandable, trustworthy, and aligned with ethical and regulatory standards. This paper explores different approaches to AI interpretability, including intrinsically interpretable models such as decision trees and logistic regression, as well as post-hoc methods like SHAP (Shapley Additive Explanations) and LIME (Local Interpretable Model-Agnostic Explanations). Additionally, we discuss the challenges of explainability, including the trade-off between accuracy and interpretability, scalability issues, and domain-specific requirements. The paper also highlights real-world applications of XAI in healthcare, finance, and autonomous systems. Finally, we examine future research directions, emphasizing hybrid models, causal explainability, and human-AI collaboration. By fostering more interpretable AI systems, we can enhance trust, fairness, and accountability in data science applications.
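The post-hoc LIME approach surveyed here reduces, in its simplest form, to fitting a locally weighted linear surrogate on perturbations around a single instance. Below is a simplified sketch using scikit-learn and synthetic data, not the reference LIME implementation; the kernel width and noise scale are arbitrary choices:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

def lime_like_explanation(instance, model, n_samples=1000, kernel_width=1.0):
    """Fit a locally weighted linear surrogate around one instance."""
    # Perturb the instance with Gaussian noise.
    perturbed = instance + rng.normal(scale=0.5, size=(n_samples, instance.shape[0]))
    # Query the black box for class-1 probabilities.
    preds = model.predict_proba(perturbed)[:, 1]
    # Weight samples by proximity to the original instance (RBF kernel).
    dists = np.linalg.norm(perturbed - instance, axis=1)
    weights = np.exp(-(dists ** 2) / (kernel_width ** 2))
    # Weighted linear surrogate: its coefficients are the local explanation.
    surrogate = Ridge(alpha=1.0).fit(perturbed, preds, sample_weight=weights)
    return surrogate.coef_

print("local feature weights:", lime_like_explanation(X[0], black_box))
```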
4

Jishnu, Setia. "Explainable AI: Methods and Applications." Explainable AI: Methods and Applications 8, no. 10 (2023): 5. https://doi.org/10.5281/zenodo.10021461.

Full text
Abstract:
Explainable Artificial Intelligence (XAI) has emerged as a critical area of research, ensuring that AI systems are transparent, interpretable, and accountable. This paper provides a comprehensive overview of various methods and applications of Explainable AI. We delve into the importance of interpretability in AI models, explore different techniques for making complex AI models understandable, and discuss real-world applications where explainability is crucial. Through this paper, I aim to shed light on the advancements in the field of XAI and its potential to bridge the gap between AI's predictions and human understanding. Keywords: Explainable AI (XAI), Interpretable Machine Learning, Transparent AI, AI Transparency, Interpretability in AI, Ethical AI, Explainable Machine Learning Models, Model Transparency, AI Accountability, Trustworthy AI, AI Ethics, XAI Techniques, LIME (Local Interpretable Model-agnostic Explanations), SHAP (SHapley Additive exPlanations), Rule-based Explanation, Post-hoc Explanation, AI and Society, Human-AI Collaboration, AI Regulation, Trust in Artificial Intelligence.
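SHAP's theoretical core is the Shapley value, which can be computed exactly by brute force when there are only a few features. The toy sketch below uses an invented coalition value function over three hypothetical credit features; real SHAP implementations approximate this computation for realistic feature counts:

```python
from itertools import combinations
from math import factorial

# Exact Shapley values for a toy value function over 3 "features".
# The value function is hypothetical; in SHAP it would be the model's expected
# output when only the features in the coalition are "known".
features = ["income", "credit_history", "age"]

def value(coalition):
    """Hypothetical payoff of a coalition of features (placeholder numbers)."""
    scores = {frozenset(): 0.0,
              frozenset({"income"}): 0.3,
              frozenset({"credit_history"}): 0.4,
              frozenset({"age"}): 0.1,
              frozenset({"income", "credit_history"}): 0.8,
              frozenset({"income", "age"}): 0.45,
              frozenset({"credit_history", "age"}): 0.55,
              frozenset(features): 1.0}
    return scores[frozenset(coalition)]

def shapley_values(features, value):
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value(set(subset) | {f}) - value(set(subset)))
        phi[f] = total
    return phi

print(shapley_values(features, value))
```

By construction the values sum to the difference between the full coalition's payoff and the empty coalition's payoff, which is the additivity property SHAP relies on.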
5

Gopalan, Ranjith, Dileesh Onniyil, Ganesh Viswanathan, and Gaurav Samdani. "Hybrid models combining explainable AI and traditional machine learning: A review of methods and applications." World Journal of Advanced Engineering Technology and Sciences 15, no. 2 (2025): 1388–402. https://doi.org/10.30574/wjaets.2025.15.2.0635.

Full text
Abstract:
The rapid advancements in artificial intelligence and machine learning have led to the development of highly sophisticated models capable of superhuman performance in a variety of tasks. However, the increasing complexity of these models has also resulted in them becoming "black boxes", where the internal decision-making process is opaque and difficult to interpret. This lack of transparency and explainability has become a significant barrier to the widespread adoption of these models, particularly in sensitive domains such as healthcare and finance. To address this challenge, the field of Explainable AI has emerged, focusing on developing new methods and techniques to improve the interpretability and explainability of machine learning models. This review paper aims to provide a comprehensive overview of the research exploring the combination of Explainable AI and traditional machine learning approaches, known as "hybrid models". This paper discusses the importance of explainability in AI, and the necessity of combining interpretable machine learning models with black-box models to achieve the desired trade-off between accuracy and interpretability. It provides an overview of key methods and applications, integration techniques, implementation frameworks, evaluation metrics, and recent developments in the field of hybrid AI models. The paper also delves into the challenges and limitations in implementing hybrid explainable AI systems, as well as the future trends in the integration of explainable AI and traditional machine learning. Altogether, this paper will serve as a valuable reference for researchers and practitioners working on developing explainable and interpretable AI systems. Keywords: Explainable AI (XAI), Traditional Machine Learning (ML), Hybrid Models, Interpretability, Transparency, Predictive Accuracy, Neural Networks, Ensemble Methods, Decision Trees, Linear Regression, SHAP (Shapley Additive Explanations), LIME (Local Interpretable Model-agnostic Explanations), Healthcare Analytics, Financial Risk Management, Autonomous Systems, Predictive Maintenance, Quality Control, Integration Techniques, Evaluation Metrics, Regulatory Compliance, Ethical Considerations, User Trust, Data Quality, Model Complexity, Future Trends, Emerging Technologies, Attention Mechanisms, Transformer Models, Reinforcement Learning, Data Visualization, Interactive Interfaces, Modular Architectures, Ensemble Learning, Post-Hoc Explainability, Intrinsic Explainability, Combined Models
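One hybrid pattern commonly discussed in reviews of this kind is the global surrogate: an interpretable model trained to mimic a black-box model's predictions and evaluated by its fidelity to the black box. A minimal scikit-learn sketch on synthetic data (not drawn from the review itself) looks like this:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Accurate but opaque model.
black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Interpretable surrogate trained to mimic the black box's *predictions*.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity: how often the surrogate agrees with the black box on held-out data.
fidelity = accuracy_score(black_box.predict(X_test), surrogate.predict(X_test))
print(f"surrogate fidelity to black box: {fidelity:.2f}")
print(export_text(surrogate))  # human-readable decision rules
```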
6

Avuthu, Yogeswara Reddy. "Trustworthy AI in Cloud MLOps: Ensuring Explainability, Fairness, and Security in AI-Driven Applications." Journal of Scientific and Engineering Research 8, no. 1 (2021): 246–55. https://doi.org/10.5281/zenodo.14274110.

Full text
Abstract:
The growing reliance on cloud-native Machine Learning Operations (MLOps) to automate and scale AI-driven applications has raised critical concerns about the trustworthiness of these systems. Specifically, ensuring that AI models deployed in cloud environments are explainable, fair, and secure has become paramount. This paper proposes a comprehensive framework that integrates explainability, fairness, and security into MLOps workflows to address these concerns. The framework utilizes state-of-the-art explainability techniques, such as SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations), to provide continuous interpretability of model predictions. To mitigate bias, the framework includes fairness monitoring tools that assess and mitigate disparities in model outcomes based on demographic attributes. Moreover, the framework enhances the security of AI models by incorporating adversarial training and real-time threat detection mechanisms to defend against adversarial attacks and vulnerabilities in cloud infrastructure. The proposed framework was evaluated in various use cases, including financial risk modeling, healthcare diagnostics, and predictive maintenance, demonstrating improvements in model transparency, reduction in bias, and enhanced security. Our results show that the framework significantly increases the trustworthiness of AI models, making it a practical solution for AI-driven applications in cloud MLOps environments.
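Fairness monitoring of the kind described here can start from a very simple disparity measure, such as the demographic parity difference between groups. The sketch below is a generic illustration with made-up predictions and an arbitrary alerting threshold, not the framework proposed in the paper:

```python
import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """Difference in positive-prediction rates between two demographic groups."""
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rate_a = y_pred[sensitive == 0].mean()
    rate_b = y_pred[sensitive == 1].mean()
    return abs(rate_a - rate_b)

# Hypothetical batch of model decisions and a binary group attribute.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

gap = demographic_parity_difference(y_pred, group)
print(f"demographic parity gap: {gap:.2f}")
if gap > 0.1:  # hypothetical alerting threshold for a monitoring pipeline
    print("fairness alert: group disparity exceeds threshold")
```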
7

Tamir, Qureshi. "Brain Tumor Detection System." International Journal of Scientific Research in Engineering and Management 09, no. 04 (2025): 1–9. https://doi.org/10.55041/ijsrem46252.

Full text
Abstract:
Brain tumor detection systems using Explainable Artificial Intelligence (XAI) aim to provide accurate and interpretable tumor diagnosis. This research integrates machine learning models such as Decision Trees, Random Forest, Logistic Regression, and Support Vector Machines, leveraging ensemble learning techniques like stacking and voting to enhance predictive accuracy. The system employs XAI techniques such as SHAP (Shapley Additive Explanations) and LIME (Local Interpretable Model-agnostic Explanations) to ensure model transparency and interpretability. This paper presents the methodology, implementation, evaluation metrics, and the impact of integrating explainability into brain tumor detection systems, emphasizing how XAI aids clinicians in understanding diagnostic decisions and improving trust in AI-driven outcomes.
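The stacking and voting ensembles mentioned in this abstract map directly onto standard scikit-learn estimators. The sketch below uses synthetic tabular data as a stand-in for the paper's tumor features; the model choices and hyperparameters are illustrative only:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for tabular tumor features (not real imaging data).
X, y = make_classification(n_samples=600, n_features=10, random_state=0)

base_models = [
    ("tree", DecisionTreeClassifier(max_depth=4, random_state=0)),
    ("forest", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("svm", SVC(probability=True, random_state=0)),
]

voting = VotingClassifier(estimators=base_models, voting="soft")
stacking = StackingClassifier(estimators=base_models,
                              final_estimator=LogisticRegression(max_iter=1000))

for name, model in [("voting", voting), ("stacking", stacking)]:
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name} ensemble accuracy: {score:.3f}")
```

Either ensemble can then be passed to a model-agnostic explainer, since both expose the usual `predict_proba` interface.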
8

Pasupuleti, Murali Krishna. "Building Interpretable AI Models for Healthcare Decision Support." International Journal of Academic and Industrial Research Innovations (IJAIRI) 05, no. 05 (2025): 549–60. https://doi.org/10.62311/nesx/rphcr16.

Full text
Abstract:
The increasing integration of artificial intelligence (AI) into healthcare decision-making underscores the urgent need for models that are not only accurate but also interpretable. This study develops and evaluates interpretable AI models designed to support clinical decision-making while maintaining high predictive performance. Utilizing de-identified electronic health records (EHRs), the research implements tree-based algorithms and attention-augmented neural networks to generate clinically meaningful outputs. A combination of explainability tools—SHAP (Shapley Additive Explanations), LIME (Local Interpretable Model-agnostic Explanations), and Integrated Gradients—is employed to provide granular insights into model predictions. The results demonstrate that interpretable models approach the accuracy levels of black-box models, with enhanced transparency and trustworthiness from a clinical perspective. These findings reinforce the value of interpretable AI in facilitating ethical compliance, increasing clinician trust, and enabling actionable insights. The study concludes with recommendations for deploying such models in real-world healthcare environments, advocating for the routine integration of interpretability techniques in clinical AI pipelines. Keywords: Interpretable AI, Healthcare Decision Support, Explainable Models, SHAP, LIME, Predictive Analytics, EHR, Clinical AI
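A tree-based clinical model of the sort described here is often paired with a model-agnostic importance measure such as permutation importance. The following sketch uses synthetic data and invented feature names in place of real EHR variables:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic tabular data standing in for de-identified EHR features.
feature_names = ["age", "hba1c", "bp_systolic", "bmi", "ldl", "smoker"]
X, y = make_classification(n_samples=1500, n_features=6, n_informative=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Model-agnostic permutation importance measured on held-out data.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranking = sorted(zip(feature_names, result.importances_mean),
                 key=lambda kv: kv[1], reverse=True)
for name, importance in ranking:
    print(f"{name:12s} {importance:+.3f}")
```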
9

Rafi, Mainuddin Adel, S M Iftekhar Shaboj, Md Kauser Miah, Iftekhar Rasul, Md Redwanul Islam, and Abir Ahmed. "Explainable AI for Credit Risk Assessment: A Data-Driven Approach to Transparent Lending Decisions." Journal of Economics, Finance and Accounting Studies 6, no. 1 (2024): 108–18. https://doi.org/10.32996/jefas.2024.6.1.11.

Full text
Abstract:
In the era of data-driven decision-making, credit risk assessment plays a pivotal role in ensuring the financial stability of lending institutions. However, traditional machine learning models, while accurate, often function as "black boxes," offering limited interpretability for stakeholders. This paper presents an explainable artificial intelligence (XAI) framework designed to enhance transparency in credit risk evaluation. By integrating interpretable models such as SHAP (SHapley Additive exPlanations), LIME (Local Interpretable Model-agnostic Explanations), and decision trees with robust ensemble methods, we assess creditworthiness using publicly available loan datasets. The proposed approach not only improves predictive accuracy but also offers clear, feature-level insights into lending decisions, fostering trust among loan officers, regulators, and applicants. This study demonstrates that incorporating explainability into AI-driven credit scoring systems bridges the gap between predictive performance and model transparency, paving the way for more ethical and accountable financial practices.
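For tree ensembles used in credit scoring, Shapley-value attributions are typically obtained with the Tree SHAP algorithm. The sketch below assumes the `shap` package is installed and that, for a binary gradient-boosting classifier, `TreeExplainer.shap_values` returns one attribution row per sample in margin (log-odds) space; the loan features and data are synthetic placeholders:

```python
import numpy as np
import shap  # assumption: the shap package is available
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in for a loan dataset; feature names are hypothetical.
feature_names = ["income", "debt_ratio", "credit_history_len",
                 "age", "loan_amount", "employment_years"]
X, y = make_classification(n_samples=1000, n_features=6, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Tree SHAP: Shapley values for tree ensembles, here for a single applicant.
explainer = shap.TreeExplainer(model)
shap_values = np.asarray(explainer.shap_values(X[:1]))
# Flatten defensively in case a per-class axis is present in this shap version.
contrib = shap_values.reshape(-1)[:len(feature_names)]

for name, value in sorted(zip(feature_names, contrib),
                          key=lambda kv: abs(kv[1]), reverse=True):
    print(f"{name:20s} {value:+.3f}")
```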
10

Researcher. "EXPLAINABLE AI IN DATA ANALYTICS: ENHANCING TRANSPARENCY AND TRUST IN COMPLEX MACHINE LEARNING MODELS." International Journal of Computer Engineering and Technology (IJCET) 15, no. 5 (2024): 1054–61. https://doi.org/10.5281/zenodo.14012791.

Full text
Abstract:
This article provides a comprehensive exploration of Explainable AI (XAI) and its critical role in enhancing transparency and interpretability in data analytics, particularly for complex machine learning models. We begin by examining the theoretical framework of XAI, including its definition, importance in machine learning, and regulatory considerations in sectors such as healthcare and finance. The article then delves into key XAI concepts, including feature importance, surrogate models, Local Interpretable Model-agnostic Explanations (LIME), and SHapley Additive exPlanations (SHAP). A detailed case study on implementing an XAI framework for credit scoring models demonstrates the practical application of these techniques, highlighting their potential to improve model transparency and build trust among stakeholders. The article addresses the benefits of XAI in data analytics, current limitations and challenges, ethical considerations, and future research directions. By synthesizing current research and providing practical insights, this article contributes to the ongoing dialogue on responsible AI development and deployment, emphasizing the crucial role of explainability in fostering trust, ensuring fairness, and meeting regulatory requirements in an increasingly AI-driven world.

Book chapters on the topic "AI explainability interpretability model-agnostic explanations graph models"

1

Stevens, Alexander, Johannes De Smedt, and Jari Peeperkorn. "Quantifying Explainability in Outcome-Oriented Predictive Process Monitoring." In Lecture Notes in Business Information Processing. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-98581-3_15.

Full text
Abstract:
The growing interest in applying machine and deep learning algorithms in an Outcome-Oriented Predictive Process Monitoring (OOPPM) context has recently fuelled a shift to use models from the explainable artificial intelligence (XAI) paradigm, a field of study focused on creating explainability techniques on top of AI models in order to legitimize the predictions made. Nonetheless, most classification models are evaluated primarily on a performance level, where XAI requires striking a balance between either simple models (e.g. linear regression) or models using complex inference structures (e.g. neural networks) with post-processing to calculate feature importance. In this paper, a comprehensive overview of predictive models with varying intrinsic complexity are measured based on explainability with model-agnostic quantitative evaluation metrics. To this end, explainability is designed as a symbiosis between interpretability and faithfulness and thereby allowing to compare inherently created explanations (e.g. decision tree rules) with post-hoc explainability techniques (e.g. Shapley values) on top of AI models. Moreover, two improved versions of the logistic regression model capable of capturing non-linear interactions and both inherently generating their own explanations are proposed in the OOPPM context. These models are benchmarked with two common state-of-the-art models with post-hoc explanation techniques in the explainability-performance space.
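The paper's framing of explainability as a symbiosis of interpretability and faithfulness can be made concrete with simple quantitative checks; one common faithfulness probe deletes the most highly attributed features first and tracks how the prediction degrades. The sketch below is a generic deletion test on synthetic data with an invented attribution vector, not the authors' metric:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=800, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

def deletion_faithfulness(model, x, attributions, baseline_value=0.0):
    """Drop features in order of attribution magnitude and track the prediction.

    A faithful explanation should cause the largest drops when the most highly
    attributed features are removed first (compare against a random ordering).
    """
    order = np.argsort(-np.abs(attributions))
    x_work = x.copy()
    probs = [model.predict_proba(x_work.reshape(1, -1))[0, 1]]
    for idx in order:
        x_work[idx] = baseline_value          # "delete" the feature
        probs.append(model.predict_proba(x_work.reshape(1, -1))[0, 1])
    return np.array(probs)

# Hypothetical attribution vector for the first instance (e.g. from SHAP or LIME).
attributions = np.array([0.30, -0.05, 0.22, 0.01, -0.15, 0.02])
print(deletion_faithfulness(model, X[0], attributions))
```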
2

Rane, Jayesh, Ömer Kaya, Suraj Kumar Mallick, and Nitin Liladhar Rane. "Enhancing black-box models: Advances in explainable artificial intelligence for ethical decision-making." In Future Research Opportunities for Artificial Intelligence in Industry 4.0 and 5.0. Deep Science Publishing, 2024. http://dx.doi.org/10.70593/978-81-981271-0-5_4.

Full text
Abstract:
Transparency, trust, and accountability are among the issues raised by artificial intelligence's (AI) growing reliance on black-box models, especially in high-stakes industries like healthcare, finance, and criminal justice. These models, which are frequently distinguished by their intricacy and opacity, are capable of producing extremely accurate forecasts, but users and decision-makers are still unable to fully understand how they operate. In response to this challenge, the field of Explainable AI (XAI) has emerged with the goal of demystifying these models by offering insights into their decision-making processes. Our ability to interpret model behavior has greatly improved with recent developments in XAI techniques, such as SHAP (Shapley Additive Explanations), LIME (Local Interpretable Model-agnostic Explanations), and counterfactual explanations. These instruments make it easier to recognize bias, promote trust, and guarantee adherence to moral principles and laws like the GDPR and the AI Act. Modern XAI techniques are reviewed in this research along with how they are used in moral decision-making. It looks at how explainability can improve fairness, reduce the risks of AI bias and discrimination, and assist well-informed decision-making in a variety of industries. It also examines the trade-offs between performance and interpretability of models, as well as the growing trends toward user-centric explainability techniques. In order to ensure responsible AI development and deployment, XAI's role in fostering accountability and transparency will become increasingly important as AI becomes more integrated into critical systems.
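Counterfactual explanations, one of the XAI techniques highlighted above, answer the question of what minimal change would flip a model's decision. The sketch below performs a deliberately naive gradient-direction search on a logistic regression; production counterfactual methods add sparsity, plausibility, and actionability constraints that are omitted here:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

def simple_counterfactual(model, x, target_class=1, step=0.05, max_iters=500):
    """Nudge an instance along the coefficient direction until the predicted
    class flips. Purely illustrative; real methods constrain the search."""
    x_cf = x.copy()
    # For logistic regression, coef_ gives the direction that raises the class-1 logit.
    direction = model.coef_[0] / np.linalg.norm(model.coef_[0])
    for _ in range(max_iters):
        if model.predict(x_cf.reshape(1, -1))[0] == target_class:
            return x_cf
        x_cf = x_cf + step * direction
    return None

x = X[y == 0][0]                      # pick an instance from the negative class
cf = simple_counterfactual(model, x)
if cf is not None:
    print("feature changes needed:", np.round(cf - x, 3))
```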
3

Upendran, Shantha Visalakshi, Karthiyayini S, and Dinesh Vijay Jamthe. "Explainable AI (XAI) for Cybersecurity Decision-Making Using SHAP and LIME for Transparent Threat Detection." In Artificial Intelligence in Cybersecurity for Risk Assessment and Transparent Threat Detection Frameworks. RADemics Research Institute, 2025. https://doi.org/10.71443/9789349552029-12.

Full text
Abstract:
The increasing complexity and sophistication of cyber threats have necessitated the integration of Explainable Artificial Intelligence (XAI) into cybersecurity frameworks to enhance transparency, trust, and decision-making. Traditional black-box machine learning models, despite their high accuracy, pose significant challenges in understanding threat detection mechanisms, leading to reduced interpretability and limited adoption in critical security applications. This book chapter explores the role of XAI techniques, specifically Shapley Additive Explanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME), in improving the explainability of AI-driven cyber defense systems. A detailed analysis of computational efficiency, real-time applicability, and scalability challenges associated with SHAP and LIME in large-scale cybersecurity environments is provided. In addition, the chapter introduces hardware-accelerated approaches, such as FPGA-based optimization, to mitigate computational overhead while ensuring rapid and interpretable threat detection. Reinforcement learning-based optimization for explainability is examined to enhance adaptive security mechanisms in dynamic threat landscapes. The integration of XAI-driven security information and event management (SIEM) systems is also discussed to bridge the gap between automated cyber threat detection and human-centric decision-making. This chapter provides a comprehensive exploration of state-of-the-art methodologies, challenges, and future research directions in the domain of XAI for cybersecurity, with a focus on balancing detection accuracy, computational efficiency, and interpretability.
4

Rane, Nitin Liladhar, and Mallikarjuna Paramesha. "Explainable Artificial Intelligence (XAI) as a foundation for trustworthy artificial intelligence." In Trustworthy Artificial Intelligence in Industry and Society. Deep Science Publishing, 2024. http://dx.doi.org/10.70593/978-81-981367-4-9_1.

Full text
Abstract:
The rapid integration of artificial intelligence (AI) into various sectors necessitates a focus on trustworthiness, characterized by principles such as fairness, transparency, accountability, robustness, privacy, and ethics. Explainable AI has become essential and central to the achievement of trustworthy AI by addressing the "black box" nature of top-of-the-line AI models through its interpretability. The research further develops the core principles relating to trustworthy AI, providing a comprehensive overview of important techniques falling under the XAI rubric, among them LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations). It then discusses how explainability makes AI agents more trustworthy, better at cooperating with humans, and more compliant with regulations. This is followed by the integration of XAI with other AI paradigms (deep learning, reinforcement learning, and federated learning), contextualizing them in the light of a discussion on the performance-transparency trade-off. This is followed by a review of currently developed regulatory and policy frameworks guiding ethical AI use. Such applications of XAI in domains relevant to healthcare and finance will be presented, demonstrating its impact on diagnosis, trust earned from patients, risk management, and customer engagement. Emerging trends and future directions in XAI research include sophisticated techniques for explainability, causal inference, and ethical considerations. Technical complexities, scalability, and striking a balance between accuracy and interpretability are some of the challenges.

Conference papers on the topic "AI explainability interpretability model-agnostic explanations graph models"

1

Nyaga, Casam, Ruth Wario, Lucy Gitonga, Amos Njeru, and Rosa Njagi. "Integrating Explainable Machine Learning Techniques for Predicting Diabetes: A Transparent Approach to AI-Driven Healthcare." In 16th International Conference on Applied Human Factors and Ergonomics (AHFE 2025). AHFE International, 2025. https://doi.org/10.54941/ahfe1006203.

Full text
Abstract:
Diabetes mellitus is a global health concern affecting millions worldwide, with profound medical and socioeconomic implications. The increasing adoption of machine learning (ML) in healthcare has revolutionized clinical decision-making by enabling predictive diagnostics, personalized treatment plans, and efficient resource allocation. Despite their potential, many ML models are often regarded as "black boxes" due to their lack of transparency, which raises significant challenges in critical fields like healthcare, where explainability is crucial for ethical and accountable decision-making (Hassija et al., 2024). Explainable Artificial Intelligence (XAI) has emerged as a solution to address these challenges by making ML models more interpretable and fostering trust among healthcare practitioners and patients. This paper explores the integration of XAI techniques with ML models for diabetes prediction, emphasizing their potential to enhance transparency, trust, and clinical utility. We present a comparative analysis of popular XAI methods, such as SHAP (Shapley Additive Explanations), LIME (Local Interpretable Model-agnostic Explanations), and attention mechanisms, within the context of healthcare decision support. These techniques are evaluated based on interpretability, computational efficiency, and clinical applicability, highlighting the trade-offs between accuracy and transparency. The study underscores the critical role of interpretability in advancing trust and adoption of AI-driven solutions in healthcare, while addressing challenges such as balancing model performance with explainability. Finally, future directions for deploying explainable ML in healthcare are outlined, aiming to ensure ethical, transparent, and effective AI implementation.
2

Stang, Marco, Marc Schindewolf, and Eric Sax. "Unraveling Scenario-Based Behavior of a Self-Learning Function with User Interaction." In 10th International Conference on Human Interaction and Emerging Technologies (IHIET 2023). AHFE International, 2023. http://dx.doi.org/10.54941/ahfe1004028.

Full text
Abstract:
In recent years, the field of Artificial Intelligence (AI) and Machine Learning (ML) has witnessed remarkable advancements, revolutionizing various industries and domains. The proliferation of data availability, computational power, and algorithmic innovations has propelled the development of highly sophisticated AI models, particularly in the realm of Deep Learning (DL). These DL models have demonstrated unprecedented levels of accuracy and performance across a wide range of tasks, including image recognition, natural language processing, and complex decision-making. However, amidst these impressive achievements, a critical challenge has emerged: the lack of interpretability. Highly accurate AI models, including DL models, are often referred to as black boxes because their internal workings and decision-making processes are not readily understandable to humans. While these models excel in generating accurate predictions or classifications, they do not provide clear explanations for their reasoning, leaving users and stakeholders in the dark about how and why specific decisions are made. This lack of interpretability raises concerns and limits the trust that humans can place in these models, particularly in safety-critical or high-stakes applications where accountability, transparency, and understanding are paramount. To address the challenge of interpretability, Explainable AI (xAI) has emerged as a multidisciplinary field that aims to bridge the gap in understanding between machines and humans. xAI encompasses a collection of methods and techniques designed to shed light on the decision-making processes of AI models, making their outputs more transparent, interpretable, and comprehensible to human users. The main objective of this paper is to enhance the explainability of AI-based systems that involve user interaction by employing various xAI methods. The proposed approach revolves around a comprehensive ML workflow, beginning with the utilization of real-world data to train a machine learning model that learns the behavior of a simulated driver. The training process encompasses a diverse range of real-world driving scenarios, ensuring that the model captures the intricacies and nuances of different driving situations. This training data serves as the foundation for the subsequent phases of the workflow, where the model's predictive performance is evaluated. Following the training and testing phases, the predictions generated by the ML model are subjected to explanation using different xAI methods, such as LIME (Local Interpretable Model-Agnostic Explanations) and SHAP (SHapley Additive exPlanations). These xAI methods operate at both the global and local levels, providing distinct perspectives on the model's decision-making process. Global explanations offer insights into the overall behavior of the ML model, enabling a broader understanding of the patterns, relationships, and features that the model deems significant across different instances. These global explanations contribute to a deeper comprehension of the decision-making process employed by the model, allowing users to gain insights into the underlying factors driving its predictions. In contrast, local explanations offer detailed insights into specific instances or predictions made by the model. By analyzing these local explanations, users can better understand why the model made a particular prediction in a given case. This granular analysis facilitates the identification of potential weaknesses, biases, or areas for improvement in the model's performance. By pinpointing the specific features or factors that contribute to the model's decision in individual instances, local explanations offer valuable insights for refining the model and enhancing its accuracy and reliability. In conclusion, the lack of explainability in AI models, particularly in the realm of DL, presents a significant challenge that hinders trust and understanding between machines and humans. Explainable AI (xAI) has emerged as a vital field of research and practice, aiming to address this challenge by providing methods and techniques to enhance the interpretability and transparency of AI models. This paper focuses on enhancing the explainability of AI-based systems involving user interaction by employing various xAI methods. The proposed ML workflow, coupled with global and local explanations, offers valuable insights into the decision-making processes of the model. By unraveling the scenario-based behavior of a self-learning function with user interaction, this paper aims to contribute to the understanding and interpretability of AI-based systems. The insights gained from this research can pave the way for enhanced user trust, improved model performance, and further advancements in the field of explainable AI.
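The global-versus-local distinction drawn in this abstract can be illustrated with a deliberately simple model: a linear classifier's per-instance feature contributions serve as local explanations, and averaging their magnitudes over a dataset yields a global ranking. The sketch below uses synthetic data and is not the paper's driving-behavior workflow:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X = StandardScaler().fit_transform(X)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Local explanation for one instance: per-feature contribution to the logit.
local_contrib = model.coef_[0] * X[0]
print("local contributions (instance 0):", np.round(local_contrib, 3))

# Global explanation: average magnitude of those contributions over the data,
# i.e. local explanations aggregated into an overall feature ranking.
global_importance = np.mean(np.abs(model.coef_[0] * X), axis=0)
print("global feature ranking (most to least important):",
      np.argsort(-global_importance))
```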