Journal articles on the topic 'AI explainability interpretability model-agnostic explanations graph models'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 41 journal articles for your research on the topic 'AI explainability interpretability model-agnostic explanations graph models.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Amato, Alba, and Dario Branco. "SemFedXAI: A Semantic Framework for Explainable Federated Learning in Healthcare." Information 16, no. 6 (2025): 435. https://doi.org/10.3390/info16060435.

Abstract:
Federated Learning (FL) is emerging as an encouraging paradigm for AI model training in healthcare that enables collaboration among institutions without revealing sensitive information. The lack of transparency in federated models makes their deployment in healthcare settings more difficult, as knowledge of the decision process is of primary importance. This paper introduces SemFedXAI, a new framework that combines Semantic Web technologies and federated learning to achieve better explainability of artificial intelligence models in healthcare. SemFedXAI extends traditional FL architectures with three key components: (1) Ontology-Enhanced Federated Learning that enriches models with domain knowledge, (2) a Semantic Aggregation Mechanism that uses semantic technologies to improve the consistency and interpretability of federated models, and (3) a Knowledge Graph-Based Explanation component that provides contextualized explanations of model decisions. We evaluated SemFedXAI within the context of e-health, reporting noteworthy advancements in explanation quality and predictive performance compared to conventional federated learning methods. The findings refer to the prospects of combining semantic technologies and federated learning as an avenue for building more explainable and resilient AI systems in healthcare.
2

Tauqeer Akhtar. "Explainable AI in E-Commerce: Seller Recommendations with Ethnocentric Transparency." Journal of Electrical Systems 20, no. 11s (2024): 4825–37. https://doi.org/10.52783/jes.8814.

Abstract:
Personalized seller recommendations are fundamental to enhancing user experiences and increasing sales in e-commerce platforms. Traditional recommendation systems, however, often function as black-box models, offering limited interpretability. This paper explores the integration of Explainable AI (XAI) techniques, particularly Integrated Gradients (IG) and DeepLIFT (Deep Learning Important FeaTures), into a hybrid recommendation system. The proposed approach combines Matrix Factorization (MF) and Graph Neural Networks (GNNs) to deliver personalized and interpretable seller recommendations. Using real-world e-commerce datasets, the study evaluates how different features, such as user interaction history, social connections, and seller reputation contribute to recommendation outcomes. The system addresses the trade-offs between recommendation accuracy and interpretability, ensuring that insights are both actionable and trustworthy. Experimental results demonstrate that the hybrid model achieves substantial improvements in precision, recall, and F1-score compared to standalone MF and GNN-based approaches. Moreover, Integrated Gradients and DeepLIFT provide users with clear and intuitive explanations of the recommendation process, fostering trust in the system. This paper also introduces a comprehensive feature attribution analysis to quantify the impact of key factors, including behavioral patterns and network influence, on recommendation decisions. A comparative evaluation with state-of-the-art neural recommendation models highlights the effectiveness of the proposed system in balancing performance with interpretability. Finally, the study discusses future enhancements, such as incorporating explainability techniques tailored for multimodal data, employing reinforcement learning for adaptive personalization, and extending the model to handle dynamic user preferences. These findings underscore the importance of transparent, user-focused AI in driving innovation in e-commerce recommendation systems.
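
For orientation, here is a minimal sketch (not the paper's hybrid MF+GNN system) of how Integrated Gradients and DeepLIFT can be applied with the Captum library to a toy neural scoring head; the model architecture, feature dimensions, and inputs are illustrative assumptions.

```python
# Hedged sketch: Captum's Integrated Gradients and DeepLIFT on a toy scorer that
# stands in for a recommender's feature-based scoring head. All names and sizes
# below are hypothetical, not taken from the paper.
import torch
import torch.nn as nn
from captum.attr import DeepLift, IntegratedGradients

class SellerScorer(nn.Module):
    """Toy scoring head over concatenated user/seller features."""
    def __init__(self, n_features: int = 6):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_features, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)  # one relevance score per (user, seller) pair

model = SellerScorer().eval()
x = torch.rand(4, 6)                 # e.g. interaction history, social ties, reputation, ...
baseline = torch.zeros_like(x)       # "absence of signal" reference point

ig_attr = IntegratedGradients(model).attribute(x, baselines=baseline, n_steps=50)
dl_attr = DeepLift(model).attribute(x, baselines=baseline)

print("Integrated Gradients attributions:", ig_attr[0].detach().numpy())
print("DeepLIFT attributions:", dl_attr[0].detach().numpy())
```

Per-example attributions of this kind are the raw material for the feature attribution analysis the abstract mentions.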
3

Ali, Ali Mohammed Omar. "Explainability in AI: Interpretable Models for Data Science." International Journal for Research in Applied Science and Engineering Technology 13, no. 2 (2025): 766–71. https://doi.org/10.22214/ijraset.2025.66968.

Abstract:
As artificial intelligence (AI) continues to drive advancements across various domains, the need for explainability in AI models has become increasingly critical. Many state-of-the-art machine learning models, particularly deep learning architectures, operate as "black boxes," making their decision-making processes difficult to interpret. Explainable AI (XAI) aims to enhance model transparency, ensuring that AI-driven decisions are understandable, trustworthy, and aligned with ethical and regulatory standards. This paper explores different approaches to AI interpretability, including intrinsically interpretable models such as decision trees and logistic regression, as well as post-hoc methods like SHAP (Shapley Additive Explanations) and LIME (Local Interpretable Model-Agnostic Explanations). Additionally, we discuss the challenges of explainability, including the trade-off between accuracy and interpretability, scalability issues, and domain-specific requirements. The paper also highlights real-world applications of XAI in healthcare, finance, and autonomous systems. Finally, we examine future research directions, emphasizing hybrid models, causal explainability, and human-AI collaboration. By fostering more interpretable AI systems, we can enhance trust, fairness, and accountability in data science applications.
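
As a concrete illustration of the intrinsically interpretable models the survey contrasts with post-hoc methods, the short sketch below fits a shallow decision tree whose learned rules can be printed and audited directly; the dataset and depth are arbitrary choices for illustration.

```python
# Hedged sketch: an intrinsically interpretable model whose full decision logic
# is human-readable without any post-hoc explainer. Iris is used purely as a
# stand-in dataset.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(data.data, data.target)

# Every prediction can be traced to an explicit if/else rule path.
print(export_text(tree, feature_names=list(data.feature_names)))
```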
4

Jishnu, Setia. "Explainable AI: Methods and Applications." Explainable AI: Methods and Applications 8, no. 10 (2023): 5. https://doi.org/10.5281/zenodo.10021461.

Abstract:
Explainable Artificial Intelligence (XAI) has emerged as a critical area of research, ensuring that AI systems are transparent, interpretable, and accountable. This paper provides a comprehensive overview of various methods and applications of Explainable AI. We delve into the importance of interpretability in AI models, explore different techniques for making complex AI models understandable, and discuss real-world applications where explainability is crucial. Through this paper, I aim to shed light on the advancements in the field of XAI and its potential to bridge the gap between AI's predictions and human understanding. Keywords: Explainable AI (XAI), Interpretable Machine Learning, Transparent AI, AI Transparency, Interpretability in AI, Ethical AI, Explainable Machine Learning Models, Model Transparency, AI Accountability, Trustworthy AI, AI Ethics, XAI Techniques, LIME (Local Interpretable Model-agnostic Explanations), SHAP (SHapley Additive exPlanations), Rule-based Explanation, Post-hoc Explanation, AI and Society, Human-AI Collaboration, AI Regulation, Trust in Artificial Intelligence.
5

Ranjith Gopalan, Dileesh Onniyil, Ganesh Viswanathan, and Gaurav Samdani. "Hybrid models combining explainable AI and traditional machine learning: A review of methods and applications." World Journal of Advanced Engineering Technology and Sciences 15, no. 2 (2025): 1388–402. https://doi.org/10.30574/wjaets.2025.15.2.0635.

Abstract:
The rapid advancements in artificial intelligence and machine learning have led to the development of highly sophisticated models capable of superhuman performance in a variety of tasks. However, the increasing complexity of these models has also resulted in them becoming "black boxes", where the internal decision-making process is opaque and difficult to interpret. This lack of transparency and explainability has become a significant barrier to the widespread adoption of these models, particularly in sensitive domains such as healthcare and finance. To address this challenge, the field of Explainable AI has emerged, focusing on developing new methods and techniques to improve the interpretability and explainability of machine learning models. This review paper aims to provide a comprehensive overview of the research exploring the combination of Explainable AI and traditional machine learning approaches, known as "hybrid models". This paper discusses the importance of explainability in AI, and the necessity of combining interpretable machine learning models with black-box models to achieve the desired trade-off between accuracy and interpretability. It provides an overview of key methods and applications, integration techniques, implementation frameworks, evaluation metrics, and recent developments in the field of hybrid AI models. The paper also delves into the challenges and limitations in implementing hybrid explainable AI systems, as well as the future trends in the integration of explainable AI and traditional machine learning. Altogether, this paper will serve as a valuable reference for researchers and practitioners working on developing explainable and interpretable AI systems. Keywords: Explainable AI (XAI), Traditional Machine Learning (ML), Hybrid Models, Interpretability, Transparency, Predictive Accuracy, Neural Networks, Ensemble Methods, Decision Trees, Linear Regression, SHAP (Shapley Additive Explanations), LIME (Local Interpretable Model-agnostic Explanations), Healthcare Analytics, Financial Risk Management, Autonomous Systems, Predictive Maintenance, Quality Control, Integration Techniques, Evaluation Metrics, Regulatory Compliance, Ethical Considerations, User Trust, Data Quality, Model Complexity, Future Trends, Emerging Technologies, Attention Mechanisms, Transformer Models, Reinforcement Learning, Data Visualization, Interactive Interfaces, Modular Architectures, Ensemble Learning, Post-Hoc Explainability, Intrinsic Explainability, Combined Models
6

Avuthu, Yogeswara Reddy. "Trustworthy AI in Cloud MLOps: Ensuring Explainability, Fairness, and Security in AI-Driven Applications." Journal of Scientific and Engineering Research 8, no. 1 (2021): 246–55. https://doi.org/10.5281/zenodo.14274110.

Abstract:
The growing reliance on cloud-native Machine Learning Operations (MLOps) to automate and scale AI-driven applications has raised critical concerns about the trustworthiness of these systems. Specifically, ensuring that AI models deployed in cloud environments are explainable, fair, and secure has become paramount. This paper proposes a comprehensive framework that integrates explainability, fairness, and security into MLOps workflows to address these concerns. The framework utilizes state-of-the-art explainability techniques, such as SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations), to provide continuous interpretability of model predictions. To mitigate bias, the framework includes fairness monitoring tools that assess and mitigate disparities in model outcomes based on demographic attributes. Moreover, the framework enhances the security of AI models by incorporating adversarial training and real-time threat detection mechanisms to defend against adversarial attacks and vulnerabilities in cloud infrastructure. The proposed framework was evaluated in various use cases, including financial risk modeling, healthcare diagnostics, and predictive maintenance, demonstrating improvements in model transparency, reduction in bias, and enhanced security. Our results show that the framework significantly increases the trustworthiness of AI models, making it a practical solution for AI-driven applications in cloud MLOps environments.
7

Qureshi, Tamir. "Brain Tumor Detection System." International Journal of Scientific Research in Engineering and Management 09, no. 04 (2025): 1–9. https://doi.org/10.55041/ijsrem46252.

Abstract:
Brain tumor detection systems using Explainable Artificial Intelligence (XAI) aim to provide accurate and interpretable tumor diagnosis. This research integrates machine learning models such as Decision Trees, Random Forest, Logistic Regression, and Support Vector Machines, leveraging ensemble learning techniques like stacking and voting to enhance predictive accuracy. The system employs XAI techniques such as SHAP (Shapley Additive Explanations) and LIME (Local Interpretable Model-agnostic Explanations) to ensure model transparency and interpretability. This paper presents the methodology, implementation, evaluation metrics, and the impact of integrating explainability into brain tumor detection systems, emphasizing how XAI aids clinicians in understanding diagnostic decisions and improving trust in AI-driven outcomes.
8

Pasupuleti, Murali Krishna. "Building Interpretable AI Models for Healthcare Decision Support." International Journal of Academic and Industrial Research Innovations (IJAIRI) 05, no. 05 (2025): 549–60. https://doi.org/10.62311/nesx/rphcr16.

Abstract:
The increasing integration of artificial intelligence (AI) into healthcare decision-making underscores the urgent need for models that are not only accurate but also interpretable. This study develops and evaluates interpretable AI models designed to support clinical decision-making while maintaining high predictive performance. Utilizing de-identified electronic health records (EHRs), the research implements tree-based algorithms and attention-augmented neural networks to generate clinically meaningful outputs. A combination of explainability tools—SHAP (Shapley Additive Explanations), LIME (Local Interpretable Model-agnostic Explanations), and Integrated Gradients—is employed to provide granular insights into model predictions. The results demonstrate that interpretable models approach the accuracy levels of black-box models, with enhanced transparency and trustworthiness from a clinical perspective. These findings reinforce the value of interpretable AI in facilitating ethical compliance, increasing clinician trust, and enabling actionable insights. The study concludes with recommendations for deploying such models in real-world healthcare environments, advocating for the routine integration of interpretability techniques in clinical AI pipelines. Keywords: Interpretable AI, Healthcare Decision Support, Explainable Models, SHAP, LIME, Predictive Analytics, EHR, Clinical AI
9

Mainuddin Adel Rafi, S M Iftekhar Shaboj, Md Kauser Miah, Iftekhar Rasul, Md Redwanul Islam, and Abir Ahmed. "Explainable AI for Credit Risk Assessment: A Data-Driven Approach to Transparent Lending Decisions." Journal of Economics, Finance and Accounting Studies 6, no. 1 (2024): 108–18. https://doi.org/10.32996/jefas.2024.6.1.11.

Abstract:
In the era of data-driven decision-making, credit risk assessment plays a pivotal role in ensuring the financial stability of lending institutions. However, traditional machine learning models, while accurate, often function as "black boxes," offering limited interpretability for stakeholders. This paper presents an explainable artificial intelligence (XAI) framework designed to enhance transparency in credit risk evaluation. By integrating interpretable models such as SHAP (SHapley Additive exPlanations), LIME (Local Interpretable Model-agnostic Explanations), and decision trees with robust ensemble methods, we assess creditworthiness using publicly available loan datasets. The proposed approach not only improves predictive accuracy but also offers clear, feature-level insights into lending decisions, fostering trust among loan officers, regulators, and applicants. This study demonstrates that incorporating explainability into AI-driven credit scoring systems bridges the gap between predictive performance and model transparency, paving the way for more ethical and accountable financial practices.
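
A minimal sketch of the kind of feature-level lending explanation described above, using SHAP's TreeExplainer on a gradient-boosted classifier; the synthetic applicant features and label rule are hypothetical and not drawn from the paper's loan datasets.

```python
# Hedged sketch: per-applicant SHAP contributions for a synthetic credit model.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
X = pd.DataFrame({
    "income": rng.normal(55000, 15000, n),
    "debt_to_income": rng.uniform(0, 0.8, n),
    "credit_history_years": rng.integers(0, 30, n),
    "late_payments": rng.poisson(1.0, n),
})
# Synthetic default label loosely tied to the features above.
logit = -2 + 3 * X["debt_to_income"] + 0.4 * X["late_payments"] - 0.03 * X["credit_history_years"]
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, _ = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# TreeExplainer gives additive per-feature contributions for each prediction.
shap_values = shap.TreeExplainer(model).shap_values(X_test)
print("Feature contributions for the first test applicant:")
print(dict(zip(X.columns, np.round(shap_values[0], 3))))
```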
10

Researcher. "Explainable AI in Data Analytics: Enhancing Transparency and Trust in Complex Machine Learning Models." International Journal of Computer Engineering and Technology (IJCET) 15, no. 5 (2024): 1054–61. https://doi.org/10.5281/zenodo.14012791.

Abstract:
This article provides a comprehensive exploration of Explainable AI (XAI) and its critical role in enhancing transparency and interpretability in data analytics, particularly for complex machine learning models. We begin by examining the theoretical framework of XAI, including its definition, importance in machine learning, and regulatory considerations in sectors such as healthcare and finance. The article then delves into key XAI concepts, including feature importance, surrogate models, Local Interpretable Model-agnostic Explanations (LIME), and SHapley Additive exPlanations (SHAP). A detailed case study on implementing an XAI framework for credit scoring models demonstrates the practical application of these techniques, highlighting their potential to improve model transparency and build trust among stakeholders. The article addresses the benefits of XAI in data analytics, current limitations and challenges, ethical considerations, and future research directions. By synthesizing current research and providing practical insights, this article contributes to the ongoing dialogue on responsible AI development and deployment, emphasizing the crucial role of explainability in fostering trust, ensuring fairness, and meeting regulatory requirements in an increasingly AI-driven world.
11

Akpan Itoro Udofot, Omotosho Moses Oluseyi, and Edim Bassey Edim. "Explainable AI for Cyber Security: Improving Transparency and Trust in Intrusion Detection Systems." International Journal of Advances in Engineering and Management 06, no. 12 (2024): 229–40. https://doi.org/10.35629/5252-0612229240.

Abstract:
In recent years, the integration of Artificial Intelligence (AI) in cybersecurity has significantly enhanced the capabilities of Intrusion Detection Systems (IDS) to detect and mitigate sophisticated cyber threats. However, the increasing complexity and opaque nature of AI models have led to challenges in understanding, interpreting, and trusting these systems. This paper addresses the critical issue of transparency and trust in IDS by exploring the application of Explainable AI (XAI) techniques. By leveraging XAI, we aim to demystify the decision-making processes of AI-driven IDS, enabling security analysts to comprehend and validate the system's outputs effectively. The proposed framework integrates model-agnostic XAI methods, such as Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), with state-of-the-art IDS algorithms to improve both interpretability and performance. Through comprehensive experiments on benchmark datasets, we demonstrate that our approach not only maintains high detection accuracy but also enhances the explainability of the model's decisions, thereby fostering greater trust among end-users. The findings of this study underscore the potential of XAI to bridge the gap between AI's advanced capabilities and the human need for understanding, ultimately contributing to more secure and reliable cyber defense systems.
12

Başağaoğlu, Hakan, Debaditya Chakraborty, Cesar Do Lago, et al. "A Review on Interpretable and Explainable Artificial Intelligence in Hydroclimatic Applications." Water 14, no. 8 (2022): 1230. http://dx.doi.org/10.3390/w14081230.

Abstract:
This review focuses on the use of Interpretable Artificial Intelligence (IAI) and eXplainable Artificial Intelligence (XAI) models for data imputations and numerical or categorical hydroclimatic predictions from nonlinearly combined multidimensional predictors. The AI models considered in this paper involve Extreme Gradient Boosting, Light Gradient Boosting, Categorical Boosting, Extremely Randomized Trees, and Random Forest. These AI models can transform into XAI models when they are coupled with the explanatory methods such as the Shapley additive explanations and local interpretable model-agnostic explanations. The review highlights that the IAI models are capable of unveiling the rationale behind the predictions while XAI models are capable of discovering new knowledge and justifying AI-based results, which are critical for enhanced accountability of AI-driven predictions. The review also elaborates the importance of domain knowledge and interventional IAI modeling, potential advantages and disadvantages of hybrid IAI and non-IAI predictive modeling, unequivocal importance of balanced data in categorical decisions, and the choice and performance of IAI versus physics-based modeling. The review concludes with a proposed XAI framework to enhance the interpretability and explainability of AI models for hydroclimatic applications.
13

Chavan, Devang, and Shrihari Padatare. "Explainable AI for News Classification." International Journal for Research in Applied Science and Engineering Technology 12, no. 11 (2024): 2400–2408. https://doi.org/10.22214/ijraset.2024.65670.

Abstract:
The proliferation of news content across digital platforms necessitates robust and interpretable machine learning models to classify news into predefined categories effectively. This study investigates the integration of Explainable AI (XAI) techniques within the context of traditional machine learning models, including Naive Bayes, Logistic Regression, and Support Vector Machines (SVM), to achieve interpretable and accurate news classification. Utilizing the News Category Dataset, we preprocess the data to focus on the top 15 categories while addressing class imbalance challenges. Models are trained using Term Frequency-Inverse Document Frequency (TF-IDF) vectorization, achieving an acceptable classification accuracy of 67% across all models despite the complexity introduced by the high number of classes. To elucidate the decision-making processes of these models, we employ feature importance visualizations derived from model coefficients and feature log probabilities, complemented by local interpretability techniques such as LIME (Local Interpretable Model-agnostic Explanations). These methodologies enable granular insights into word-level contributions to predictions for each news category. Comparative heatmaps across models reveal significant consistencies and divergences in feature reliance, highlighting nuanced decision-making patterns. The integration of explainability into news classification provides critical interpretive capabilities, offering transparency and mitigating the risks associated with algorithmic opacity. The findings demonstrate how XAI enhances stakeholder trust by aligning model predictions with human interpretability, particularly in ethically sensitive domains. This work emphasizes the role of XAI in fostering responsible AI deployment and paves the way for future advancements, including deep learning integration and multilingual news classification with inherent interpretability frameworks.
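
The word-level LIME explanations described above can be reproduced in miniature with a TF-IDF pipeline and LimeTextExplainer, as in the sketch below; the toy corpus and the two category labels are invented for illustration and are not the News Category Dataset.

```python
# Hedged sketch: word-level LIME explanations for a tiny TF-IDF + Logistic
# Regression news classifier trained on an invented toy corpus.
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "stock markets rally as investors cheer strong earnings",
    "central bank raises interest rates to curb inflation",
    "local team wins championship after dramatic final",
    "star striker injured ahead of the derby match",
]
labels = [0, 0, 1, 1]  # 0 = business, 1 = sports (toy categories)

pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(texts, labels)

explainer = LimeTextExplainer(class_names=["business", "sports"])
explanation = explainer.explain_instance(
    "markets slide as the bank signals higher rates",
    pipeline.predict_proba,
    num_features=5,
)
print(explanation.as_list())  # (word, weight) pairs driving the predicted category
```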
14

Kantapalli, Bhaskar, Arshia Aamena, Chebrolu Yogavarshinee, Badugu Divya Teja, and Dasari Teja Sri. "Optimized Symptom-Based Deep Learning Framework for Monkeypox Diagnosis with LIME Explainability." Industrial Engineering Journal 54, no. 03 (2025): 84–92. https://doi.org/10.36893/iej.2025.v54i3.009.

Abstract:
Monkeypox is an emerging zoonotic disease that has raised global health concerns due to its increasing transmission rates. Traditional diagnostic methods rely on laboratory testing, which can be time-consuming and inaccessible in resource-limited settings. This study presents an Optimized Deep Neural Framework (ODNF) to diagnose monkeypox based on clinical symptoms, leveraging deep learning for accurate and rapid classification. The research explores various machine learning models, including Random Forest, XGBoost, and CatBoost, before implementing ODNF, which achieved superior performance with a 99% accuracy rate. The dataset underwent preprocessing steps, including handling imbalanced data and feature encoding, ensuring optimal learning. Additionally, Local Interpretable Model-Agnostic Explanations (LIME) was employed to enhance model interpretability, providing insights into symptom-based predictions. Comparative evaluation against traditional models demonstrated that ODNF outperforms existing approaches, making it a viable AI-based diagnostic tool for monkeypox detection.
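
In the same spirit as the pipeline above, the sketch below shows symptom-level LIME explanations on tabular data; the synthetic symptom features, toy labelling rule, and the random forest used in place of the paper's ODNF model are all assumptions for illustration.

```python
# Hedged sketch: LIME on synthetic tabular symptom data with a stand-in
# random forest classifier (not the paper's ODNF model or dataset).
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
feature_names = ["fever", "rash", "swollen_lymph_nodes", "headache", "muscle_pain"]
X = rng.integers(0, 2, size=(500, len(feature_names))).astype(float)
y = ((X[:, 1] + X[:, 2]) >= 2).astype(int)  # toy rule: rash plus swollen lymph nodes

clf = RandomForestClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=feature_names,
    class_names=["negative", "positive"],
    discretize_continuous=False,
)
explanation = explainer.explain_instance(X[0], clf.predict_proba, num_features=3)
print(explanation.as_list())  # symptom-level contributions to this one prediction
```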
15

Vieira, Carla Piazzon Ramos, and Luciano Antonio Digiampietri. "A study about Explainable Artificial Intelligence: using decision tree to explain SVM." Revista Brasileira de Computação Aplicada 12, no. 1 (2020): 113–21. http://dx.doi.org/10.5335/rbca.v12i1.10247.

Abstract:
The technologies supporting Artificial Intelligence (AI) have advanced rapidly over the past few years and AI is becoming a commonplace in every aspect of life like the future of self-driving cars or earlier health diagnosis. For this to occur shortly, the entire community stands in front of the barrier of explainability, an inherent problem of latest models (e.g. Deep Neural Networks) that were not present in the previous hype of AI (linear and rule-based models). Most of these recent models are used as black boxes without understanding partially or even completely how different features influence the model prediction avoiding algorithmic transparency. In this paper, we focus on how much we can understand the decisions made by an SVM Classifier in a post-hoc model agnostic approach. Furthermore, we train a tree-based model (inherently interpretable) using labels from the SVM, called secondary training data to provide explanations and compare permutation importance method to the more commonly used measures such as accuracy and show that our methods are both more reliable and meaningful techniques to use. We also outline the main challenges for such methods and conclude that model-agnostic interpretability is a key component in making machine learning more trustworthy.
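
The surrogate idea described above, training an interpretable tree on the SVM's own predictions (the "secondary training data") and checking it with permutation importance, can be sketched as follows; the dataset and hyperparameters are illustrative rather than the authors' setup.

```python
# Hedged sketch: a global surrogate decision tree fitted to an SVM's predicted
# labels, plus permutation importance of the surrogate as a model-agnostic check.
from sklearn.datasets import load_breast_cancer
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, _ = train_test_split(X, y, random_state=0)

svm = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)

# The SVM's own predictions become the surrogate's training labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, svm.predict(X_train))

fidelity = surrogate.score(X_test, svm.predict(X_test))
print(f"Surrogate fidelity to the SVM on held-out data: {fidelity:.3f}")

perm = permutation_importance(
    surrogate, X_test, svm.predict(X_test), n_repeats=10, random_state=0
)
print("Most influential features for the surrogate:", perm.importances_mean.argsort()[::-1][:5])
```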
16

AlNusif, Mohammed. "Explainable AI in Edge Devices: A Lightweight Framework for Real-Time Decision Transparency." International Journal of Engineering and Computer Science 14, no. 07 (2025): 27447–72. https://doi.org/10.18535/ijecs.v14i07.5181.

Abstract:
The increasing deployment of Artificial Intelligence (AI) models on edge devices—such as Raspberry Pi, NVIDIA Jetson Nano, and Google Coral TPU—has revolutionized real-time decision-making in critical domains including healthcare, autonomous vehicles, and surveillance. However, these edge-based AI systems often function as opaque "black boxes," making it difficult for end-users to understand, verify, or trust their decisions. This lack of interpretability not only undermines user confidence but also poses serious challenges for ethical accountability, regulatory compliance (e.g., GDPR, HIPAA), and safety in mission-critical applications. To address these limitations, this study proposes a lightweight, modular framework that enables the integration of Explainable AI (XAI) techniques into resource-constrained edge environments. We explore and benchmark several state-of-the-art XAI methods—including SHAP (SHapley Additive Explanations), LIME (Local Interpretable Model-agnostic Explanations), and Saliency Maps—by evaluating their performance in terms of inference latency, memory usage, interpretability score, and user trust across real-world edge devices. Multiple lightweight AI models (such as MobileNetV2, TinyBERT, and XGBoost) are trained and deployed on three benchmark datasets: CIFAR-10, EdgeMNIST, and UCI Human Activity Recognition. Experimental results demonstrate that while SHAP offers high-quality explanations, it imposes significant computational overhead, making it suitable for moderately powered platforms like Jetson Nano. In contrast, LIME achieves a balanced trade-off between transparency and resource efficiency, making it the most viable option for real-time inference on lower-end devices like Raspberry Pi. Saliency Maps, though computationally lightweight, deliver limited interpretability, particularly for non-visual data tasks. Furthermore, two real-world case studies—one in smart health monitoring and the other in drone-based surveillance—validate the framework's applicability. In both scenarios, the integration of XAI significantly enhanced user trust and decision reliability without breaching latency thresholds. Ultimately, this paper contributes a scalable, device-agnostic solution for embedding explainability into edge intelligence, enabling transparent AI decisions at the point of data generation. This advancement is crucial for the future of trustworthy edge AI, particularly in regulated and high-risk environments.
17

Przybył, Krzysztof. "Explainable AI: Machine Learning Interpretation in Blackcurrant Powders." Sensors 24, no. 10 (2024): 3198. http://dx.doi.org/10.3390/s24103198.

Abstract:
Recently, explainability in machine and deep learning has become an important area in the field of research as well as interest, both due to the increasing use of artificial intelligence (AI) methods and understanding of the decisions made by models. The explainability of artificial intelligence (XAI) is due to the increasing consciousness in, among other things, data mining, error elimination, and learning performance by various AI algorithms. Moreover, XAI will allow the decisions made by models in problems to be more transparent as well as effective. In this study, models from the ‘glass box’ group of Decision Tree, among others, and the ‘black box’ group of Random Forest, among others, were proposed to understand the identification of selected types of currant powders. The learning process of these models was carried out to determine accuracy indicators such as accuracy, precision, recall, and F1-score. It was visualized using Local Interpretable Model Agnostic Explanations (LIMEs) to predict the effectiveness of identifying specific types of blackcurrant powders based on texture descriptors such as entropy, contrast, correlation, dissimilarity, and homogeneity. Bagging (Bagging_100), Decision Tree (DT0), and Random Forest (RF7_gini) proved to be the most effective models in the framework of currant powder interpretability. The measures of classifier performance in terms of accuracy, precision, recall, and F1-score for Bagging_100, respectively, reached values of approximately 0.979. In comparison, DT0 reached values of 0.968, 0.972, 0.968, and 0.969, and RF7_gini reached values of 0.963, 0.964, 0.963, and 0.963. These models achieved classifier performance measures of greater than 96%. In the future, XAI using agnostic models can be an additional important tool to help analyze data, including food products, even online.
18

Mujo, Ajkuna. "Explainable AI in Credit Scoring: Improving Transparency in Loan Decisions." Journal of Information Systems Engineering and Management 10, no. 27s (2025): 506–15. https://doi.org/10.52783/jisem.v10i27s.4437.

Abstract:
The increasing dependence on Artificial Intelligence (AI) in the realm of credit scoring has led to notable enhancements in loan approval processes, particularly with regard to accuracy, efficiency, and risk evaluation. Yet, due to the opacity of sophisticated AI models, there are worries regarding transparency, fairness, and adherence to regulations. Because traditional black-box models like deep learning and ensemble methods are not interpretable, financial institutions find it challenging to justify credit decisions based on them. This absence of transparency creates difficulties in complying with regulatory standards such as Basel III, the Fair Lending Act, and GDPR, while also heightening the risk of biased or unjust lending practices. This study examines the role of Explainable AI (XAI) in credit scoring to tackle these issues, concentrating on methods that improve model interpretability while maintaining predictive performance. This study puts forward a credit scoring framework driven by XAI, which combines Shapley Additive Explanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) to enhance the transparency of AI-based loan decision-making. Machine learning models such as random forests, gradient boosting, and neural networks are evaluated for their accuracy and explainability using real-world credit risk datasets. The results demonstrate that although AI improves risk prediction, post-hoc interpretability techniques effectively identify the key factors affecting loan approvals, thereby promoting trust and adherence to regulations. This research emphasizes how XAI can reduce bias, enhance fairness, and foster transparency in credit decision-making. These developments open the door to more ethical and accountable AI-based financial systems.
19

Mintu Debnath. "Mathematical Foundations of Explainable AI: A Framework based on Topological Data Analysis." Communications on Applied Nonlinear Analysis 32, no. 9s (2025): 3119–42. https://doi.org/10.52783/cana.v32.4650.

Abstract:
This paper presents a mathematically grounded framework for Explainable Artificial Intelligence (XAI) based on Topological Data Analysis (TDA). By leveraging persistent homology, we construct robust topological feature representations—including persistence images, landscapes, and Betti curves—that enrich traditional machine learning models with geometric and structural insights. We evaluate the framework across five benchmark datasets—Circles, Moons, Iris, MNIST, and Fashion-MNIST—spanning both synthetic and real-world domains with varying dimensionality. Experimental results demonstrate that TDA-derived features significantly enhance both predictive performance and interpretability. Combined models achieved up to +8.9% accuracy improvement, with the highest gains observed in non-linearly separable datasets. Explainability metrics such as Local Fidelity (0.86), Stability (0.92), and Faithfulness (0.91) improved substantially compared to raw-only models. Explanations were also more concise, with sparsity reduced from 5.2 to 3.1 features on average. Sensitivity analysis identified persistence threshold τ = 0.010 as optimal for filtering topological noise. The proposed TDA-XAI framework is model-agnostic, scalable, and compatible with standard interpretability tools like SHAP and LIME. It provides a principled way to bridge data geometry with explainable learning, offering substantial gains in accuracy, robustness, and transparency—particularly in high-stakes or complex decision-making domains.
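
As a small illustration of the persistent-homology features mentioned above, the sketch below computes persistence diagrams with the ripser package (a tooling assumption; the paper does not prescribe a library) and derives a Betti-1 curve and total persistence that could be appended to raw features before classification.

```python
# Hedged sketch: persistent homology summaries (Betti-1 curve, total persistence)
# for a noisy circles dataset, assuming the `ripser` package is available.
import numpy as np
from ripser import ripser
from sklearn.datasets import make_circles

X, _ = make_circles(n_samples=200, noise=0.05, factor=0.5, random_state=0)

# Persistence diagrams up to dimension 1 (connected components and loops).
dgms = ripser(X, maxdim=1)["dgms"]
h1 = dgms[1]  # birth/death pairs of 1-dimensional holes

def betti_curve(diagram: np.ndarray, grid: np.ndarray) -> np.ndarray:
    """Number of topological features alive at each filtration value in `grid`."""
    return np.array([((diagram[:, 0] <= t) & (diagram[:, 1] > t)).sum() for t in grid])

grid = np.linspace(0.0, 1.5, 30)
print("Betti-1 curve:", betti_curve(h1, grid))
print("Total H1 persistence:", float((h1[:, 1] - h1[:, 0]).sum()))
```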
20

Bidve, Vijaykumar, Pathan Mohd Shafi, Pakiriswamy Sarasu, et al. "Use of explainable AI to interpret the results of NLP models for sentimental analysis." Indonesian Journal of Electrical Engineering and Computer Science 35, no. 1 (2024): 511. http://dx.doi.org/10.11591/ijeecs.v35.i1.pp511-519.

Abstract:
The use of artificial intelligence (AI) systems has increased significantly in the past few years. An AI system is expected to provide accurate predictions, and it is also crucial that the decisions made by the AI system are humanly interpretable, i.e., anyone must be able to understand and comprehend the results it produces. AI systems are being implemented even for simple decision support and are easily accessible to the common man at the tip of their fingers. The increase in usage of AI has come with its own limitation, i.e., its interpretability. This work contributes towards the use of explainability methods such as local interpretable model-agnostic explanations (LIME) to interpret the results of various black-box models. The conclusion is that the bidirectional long short-term memory (LSTM) model is superior for sentiment analysis. This work also examines the operation of a random forest classifier, a black-box model, using explainable artificial intelligence (XAI) techniques such as LIME. LIME made it possible to see that the features used by the random forest model for classification are not entirely correct. The proposed model can be used to enhance performance, which raises the trustworthiness and legitimacy of AI systems.
21

Bidve, Vijaykumar, Pathan Mohd Shafi, Pakiriswamy Sarasu, Aruna Pavate, Ashfaq Shaikh, Santosh Borde, Veer Bhadra Pratap Singh, and Rahul Raut. "Use of explainable AI to interpret the results of NLP models for sentimental analysis." Indonesian Journal of Electrical Engineering and Computer Science 35, no. 1 (2024): 511–19. https://doi.org/10.11591/ijeecs.v35.i1.pp511-519.

Abstract:
The use of artificial intelligence (AI) systems has increased significantly in the past few years. An AI system is expected to provide accurate predictions, and it is also crucial that the decisions made by the AI system are humanly interpretable, i.e., anyone must be able to understand and comprehend the results it produces. AI systems are being implemented even for simple decision support and are easily accessible to the common man at the tip of their fingers. The increase in usage of AI has come with its own limitation, i.e., its interpretability. This work contributes towards the use of explainability methods such as local interpretable model-agnostic explanations (LIME) to interpret the results of various black-box models. The conclusion is that the bidirectional long short-term memory (LSTM) model is superior for sentiment analysis. This work also examines the operation of a random forest classifier, a black-box model, using explainable artificial intelligence (XAI) techniques such as LIME. LIME made it possible to see that the features used by the random forest model for classification are not entirely correct. The proposed model can be used to enhance performance, which raises the trustworthiness and legitimacy of AI systems.
22

Zahoor, Kanwal, Narmeen Zakaria Bawany, and Tehreem Qamar. "Evaluating text classification with explainable artificial intelligence." IAES International Journal of Artificial Intelligence (IJ-AI) 13, no. 1 (2024): 278. http://dx.doi.org/10.11591/ijai.v13.i1.pp278-286.

Abstract:
Nowadays, artificial intelligence (AI) in general and machine learning techniques in particular have been widely employed in automated systems. The increasing complexity of these machine-learning-based systems has consequently given rise to black-box models that are typically not understandable or explainable by humans. There is a need to understand the logic and reason behind these automated decision-making black-box models as they are involved in our day-to-day activities such as driving, facial recognition identity systems, and online recruitment. Explainable artificial intelligence (XAI) is an evolving field that makes it possible for humans to evaluate machine learning models for their correctness, fairness, and reliability. We extend our previous research work and perform a detailed analysis of the model created for text classification and sentiment analysis using a popular Explainable AI tool named local interpretable model-agnostic explanations (LIME). The results verify that it is essential to evaluate machine learning models using explainable AI tools, as accuracy and other related metrics do not ensure the correctness, fairness, and reliability of the model. We also present the comparison of explainability and interpretability of various machine learning algorithms using LIME.
23

Zahoor, Kanwal, Narmeen Zakaria Bawany, and Tehreem Qamar. "Evaluating text classification with explainable artificial intelligence." IAES International Journal of Artificial Intelligence (IJ-AI) 13, no. 1 (2024): 278–86. https://doi.org/10.11591/ijai.v13.i1.pp278-286.

Abstract:
Nowadays, artificial intelligence (AI) in general and machine learning techniques in particular have been widely employed in automated systems. The increasing complexity of these machine-learning-based systems has consequently given rise to black-box models that are typically not understandable or explainable by humans. There is a need to understand the logic and reason behind these automated decision-making black-box models as they are involved in our day-to-day activities such as driving, facial recognition identity systems, and online recruitment. Explainable artificial intelligence (XAI) is an evolving field that makes it possible for humans to evaluate machine learning models for their correctness, fairness, and reliability. We extend our previous research work and perform a detailed analysis of the model created for text classification and sentiment analysis using a popular Explainable AI tool named local interpretable model-agnostic explanations (LIME). The results verify that it is essential to evaluate machine learning models using explainable AI tools, as accuracy and other related metrics do not ensure the correctness, fairness, and reliability of the model. We also present the comparison of explainability and interpretability of various machine learning algorithms using LIME.
24

Lokesh Gupta and Dinesh Chandra Misra. "Data-Driven Explainable AI In Cyber Security Awareness In Nepal." International Journal of Information Technology and Computer Engineering 13, no. 2 (2025): 1245–50. https://doi.org/10.62647/ijitce.2025.v13.i2.pp1245-1250.

Abstract:
Cybersecurity expertise is essential in ensuring digital safety, especially in developing economies like Nepal, where the rapid uptake of technology increases vulnerability to cyberattacks. Legacy methodologies such as rule-based systems and human-resource-intensive threat analysis are hampered by limited scalability, lagging response times, and poorly interpretable insights, making them less effective in mitigating dynamically evolving threats. In order to counter such limitations, the current study proposes a Data-Driven Explainable AI (XAI) system to promote cybersecurity awareness in Nepal. The principal aim is to create an explainable AI-based system that guides users, policymakers, and cybersecurity professionals in understanding and preventing cyber threats more efficiently. Unlike black-box models whose internal mechanisms are unknown, the proposed framework utilizes SHAP (Shapley Additive Explanations) and LIME (Local Interpretable Model-agnostic Explanations) to render explainable, actionable knowledge about cyber risks. The architecture includes data preprocessing, model training, integration of an explainability layer, and real-time cyber risk assessment, facilitating continuous learning and adaptive threat intelligence. Early results show that XAI approaches improve model interpretability, build user trust, and enhance cybersecurity consciousness through open threat analysis. The study establishes the revolutionary promise of explainable, data-driven AI solutions to transform Nepal's cybersecurity and prepare users for future digital threats.
25

Olufunke A Akande. "Leveraging explainable AI models to improve predictive accuracy and ethical accountability in healthcare diagnostic decision support systems." World Journal of Advanced Research and Reviews 8, no. 2 (2020): 415–34. https://doi.org/10.30574/wjarr.2020.8.2.0384.

Abstract:
Artificial intelligence (AI) has emerged as a transformative force in healthcare, particularly within diagnostic decision support systems (DDSS). However, the integration of black-box predictive models into clinical workflows has raised critical concerns about trust, transparency, and ethical accountability. This study presents a framework for leveraging explainable AI (XAI) models to enhance both predictive accuracy and interpretability in healthcare diagnostics, ensuring that algorithmic outputs are clinically meaningful, ethically sound, and aligned with evidence-based practices. The paper investigates the application of various XAI techniques—including SHAP (Shapley Additive Explanations), LIME (Local Interpretable Model-agnostic Explanations), and attention mechanisms—in improving transparency and clinician trust during disease risk stratification and diagnostic recommendation processes. Through comparative modeling experiments across multimodal datasets (EHRs, imaging, lab reports), the study demonstrates that XAI-enhanced models maintain competitive predictive performance while offering interpretable insights into feature contributions and decision logic. To address ethical accountability, the framework includes a real-time auditing layer for bias detection and sensitivity analysis across subpopulations, ensuring fair outcomes for marginalized or underrepresented groups. Integration with clinical feedback loops allows models to evolve iteratively, aligning predictions with practitioner expertise and patient-centered goals. The system is also designed to support regulatory compliance by generating traceable, explainable decision pathways essential for validation and accountability in healthcare governance. By embedding explainability into model design and deployment, this research bridges the gap between AI-driven prediction and ethical, informed clinical judgment. It provides a roadmap for the responsible adoption of AI in healthcare, where transparency, fairness, and trust are as critical as technical performance.
26

Haupt, Matteo, Martin H. Maurer, and Rohit Philip Thomas. "Explainable Artificial Intelligence in Radiological Cardiovascular Imaging—A Systematic Review." Diagnostics 15, no. 11 (2025): 1399. https://doi.org/10.3390/diagnostics15111399.

Abstract:
Background: Artificial intelligence (AI) and deep learning are increasingly applied in cardiovascular imaging. However, the “black box” nature of these models raises challenges for clinical trust and integration. Explainable Artificial Intelligence (XAI) seeks to address these concerns by providing insights into model decision-making. This systematic review synthesizes current research on the use of XAI methods in radiological cardiovascular imaging. Methods: A systematic literature search was conducted in PubMed, Scopus, and Web of Science to identify peer-reviewed original research articles published between January 2015 and March 2025. Studies were included if they applied XAI techniques—such as Gradient-Weighted Class Activation Mapping (Grad-CAM), Shapley Additive Explanations (SHAPs), Local Interpretable Model-Agnostic Explanations (LIMEs), or saliency maps—to cardiovascular imaging modalities, including cardiac computed tomography (CT), magnetic resonance imaging (MRI), echocardiography and other ultrasound examinations, and chest X-ray (CXR). Studies focusing on nuclear medicine, structured/tabular data without imaging, or lacking concrete explainability features were excluded. Screening and data extraction followed PRISMA guidelines. Results: A total of 28 studies met the inclusion criteria. Ultrasound examinations (n = 9) and CT (n = 9) were the most common imaging modalities, followed by MRI (n = 6) and chest X-rays (n = 4). Clinical applications included disease classification (e.g., coronary artery disease and valvular heart disease) and the detection of myocardial or congenital abnormalities. Grad-CAM was the most frequently employed XAI method, followed by SHAP. Most studies used saliency-based techniques to generate visual explanations of model predictions. Conclusions: XAI holds considerable promise for improving the transparency and clinical acceptance of deep learning models in cardiovascular imaging. However, the evaluation of XAI methods remains largely qualitative, and standardization is lacking. Future research should focus on the robust, quantitative assessment of explainability, prospective clinical validation, and the development of more advanced XAI techniques beyond saliency-based methods. Strengthening the interpretability of AI models will be crucial to ensuring their safe, ethical, and effective integration into cardiovascular care.
27

Mabokela, Koena Ronny, Mpho Primus, and Turgay Celik. "Explainable Pre-Trained Language Models for Sentiment Analysis in Low-Resourced Languages." Big Data and Cognitive Computing 8, no. 11 (2024): 160. http://dx.doi.org/10.3390/bdcc8110160.

Abstract:
Sentiment analysis is a crucial tool for measuring public opinion and understanding human communication across digital social media platforms. However, due to linguistic complexities and limited data or computational resources, it is under-represented in many African languages. While state-of-the-art Afrocentric pre-trained language models (PLMs) have been developed for various natural language processing (NLP) tasks, their applications in eXplainable Artificial Intelligence (XAI) remain largely unexplored. In this study, we propose a novel approach that combines Afrocentric PLMs with XAI techniques for sentiment analysis. We demonstrate the effectiveness of incorporating attention mechanisms and visualization techniques in improving the transparency, trustworthiness, and decision-making capabilities of transformer-based models when making sentiment predictions. To validate our approach, we employ the SAfriSenti corpus, a multilingual sentiment dataset for South African under-resourced languages, and perform a series of sentiment analysis experiments. These experiments enable comprehensive evaluations, comparing the performance of Afrocentric models against mainstream PLMs. Our results show that the Afro-XLMR model outperforms all other models, achieving an average F1-score of 71.04% across five tested languages, and the lowest error rate among the evaluated models. Additionally, we enhance the interpretability and explainability of the Afro-XLMR model using Local Interpretable Model-Agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP). These XAI techniques ensure that sentiment predictions are not only accurate and interpretable but also understandable, fostering trust and reliability in AI-driven NLP technologies, particularly in the context of African languages.
28

Tasioulis, Thomas, Evangelos Bagkis, Theodosios Kassandros, and Kostas Karatzas. "The Quest for the Best Explanation: Comparing Models and XAI Methods in Air Quality Modeling Tasks." Applied Sciences 15, no. 13 (2025): 7390. https://doi.org/10.3390/app15137390.

Abstract:
Air quality (AQ) modeling is at the forefront of estimating pollution levels in areas where the spatial representativity is low. Large metropolitan areas in Asia such as Beijing face significant pollution issues due to rapid industrialization and urbanization. AQ nowcasting, especially in dense urban centers like Beijing, is crucial for public health and safety. One of the most popular and accurate modeling methodologies relies on black-box models that fail to explain the phenomena in an interpretable way. This study investigates the performance and interpretability of Explainable AI (XAI) applied with the eXtreme Gradient Boosting (XGBoost) algorithm employing the SHapley Additive exPlanations (SHAP) and the Local Interpretable Model-Agnostic Explanations (LIME) for PM2.5 nowcasting. Using a SHAP-based technique for dimensionality reduction, we identified the features responsible for 95% of the target variance, allowing us to perform an effective feature selection with minimal impact on accuracy. In addition, the findings show that SHAP and LIME supported orthogonal insights: SHAP provided a view of the model performance at a high level, identifying interaction effects that are often overlooked using gain-based metrics such as feature importance; while LIME presented an enhanced overlook by justifying its local explanation, providing low-bias estimates of the environmental data values that affect predictions. Our evaluation set included 12 monitoring stations using temporal split methods with or without lagged-feature engineering approaches. Moreover, the evaluation showed that models retained a substantial degree of predictive power (R2 > 0.93) even in a reduced complexity size. The findings provide evidence for deploying interpretable and performant AQ modeling tools where policy interventions cannot solely depend on predictive analytics tools. Overall, the findings demonstrate the large potential of directly incorporating explainability methods during model development for equal and more transparent modeling processes.
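
The SHAP-based dimensionality reduction described above, ranking features by mean absolute SHAP value and keeping the smallest set that accounts for 95% of total attribution, can be sketched as follows; the synthetic regression data, thresholds, and XGBoost hyperparameters are illustrative assumptions rather than the study's configuration.

```python
# Hedged sketch: SHAP-guided feature selection for an XGBoost regressor on
# synthetic data (not the Beijing air-quality dataset used in the study).
import numpy as np
import shap
import xgboost as xgb
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=500, n_features=15, n_informative=5, noise=5.0, random_state=0)

model = xgb.XGBRegressor(n_estimators=200, max_depth=4, random_state=0).fit(X, y)

shap_values = shap.TreeExplainer(model).shap_values(X)  # shape: (n_samples, n_features)
importance = np.abs(shap_values).mean(axis=0)           # global importance per feature

order = np.argsort(importance)[::-1]
cumulative = np.cumsum(importance[order]) / importance.sum()
keep = order[: np.searchsorted(cumulative, 0.95) + 1]   # smallest set covering 95%
print("Features retained:", sorted(keep.tolist()))

reduced = xgb.XGBRegressor(n_estimators=200, max_depth=4, random_state=0).fit(X[:, keep], y)
print("R^2 with the reduced feature set:", round(reduced.score(X[:, keep], y), 3))
```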
29

Dikopoulou, Zoumpolia, Serafeim Moustakidis, and Patrik Karlsson. "gLIME: A New Graphical Methodology for Interpretable Model-Agnostic Explanations." July 22, 2021. https://doi.org/10.5281/zenodo.6487667.

Abstract:
Explainable artificial intelligence (XAI) is an emerging new domain in which a set of processes and tools allow humans to better comprehend the decisions generated by black box models. However, most of the available XAI tools are often limited to simple explanations mainly quantifying the impact of individual features to the models’ output. Therefore, human users are not able to understand how the features are related to each other to make predictions, whereas the inner workings of the trained models remain hidden. This paper contributes to the development of a novel graphical explainability tool that not only indicates the significant features of the model, but also reveals the conditional relationships between features and the inference capturing both the direct and indirect impact of features to the models’ decision. The proposed XAI methodology, termed as gLIME, provides graphical model-agnostic explanations either at the global (for the entire dataset) or the local scale (for specific data points). It relies on a combination of local interpretable model-agnostic explanations (LIME) with graphical least absolute shrinkage and selection operator (GLASSO) producing undirected Gaussian graphical models. Regularization is adopted to shrink small partial correlation coefficients to zero providing sparser and more interpretable graphical explanations. Two well-known classification datasets (BIOPSY and OAI) were selected to confirm the superiority of gLIME over LIME in terms of both robustness and consistency/sensitivity over multiple permutations. Specifically, gLIME accomplished increased stability over the two datasets with respect to features’ importance (76%-96% compared to 52%-77% using LIME). gLIME demonstrates a unique potential to extend the functionality of the current state-of-the-art in XAI by providing informative graphically given explanations that could unlock black boxes.
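
To make the graphical half of gLIME concrete, the sketch below estimates a sparse Gaussian graphical model with scikit-learn's GraphicalLasso and reads off the non-zero partial correlations that would form the explanation graph; the synthetic features and regularization strength are assumptions, and this is the general GLASSO idea rather than the authors' implementation.

```python
# Hedged sketch: a sparse partial-correlation graph via graphical lasso, the
# structural ingredient gLIME combines with LIME feature weights. Feature names
# and the dependency structure are invented for illustration.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + 0.3 * rng.normal(size=n)   # directly related to x1
x3 = 0.7 * x2 + 0.4 * rng.normal(size=n)   # related to x1 only through x2
x4 = rng.normal(size=n)
x5 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3, x4, x5])
names = ["x1", "x2", "x3", "x4", "x5"]

glasso = GraphicalLasso(alpha=0.1).fit(X)
P = glasso.precision_
partial_corr = -P / np.sqrt(np.outer(np.diag(P), np.diag(P)))
np.fill_diagonal(partial_corr, 0.0)

print("Conditional dependencies retained by the sparse graph:")
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        if abs(partial_corr[i, j]) > 0.05:
            print(f"  {names[i]} -- {names[j]}: {partial_corr[i, j]:.2f}")
```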
30

Proietti, Michela, Alessio Ragno, Biagio La Rosa, Rino Ragno, and Roberto Capobianco. "Explainable AI in drug discovery: self-interpretable graph neural network for molecular property prediction using concept whitening." Machine Learning, October 31, 2023. http://dx.doi.org/10.1007/s10994-023-06369-y.

Abstract:
Molecular property prediction is a fundamental task in the field of drug discovery. Several works use graph neural networks to leverage molecular graph representations. Although they have been successfully applied in a variety of applications, their decision process is not transparent. In this work, we adapt concept whitening to graph neural networks. This approach is an explainability method used to build an inherently interpretable model, which allows identifying the concepts and consequently the structural parts of the molecules that are relevant for the output predictions. We test popular models on several benchmark datasets from MoleculeNet. Starting from previous work, we identify the most significant molecular properties to be used as concepts to perform classification. We show that the addition of concept whitening layers brings an improvement in both classification performance and interpretability. Finally, we provide several structural and conceptual explanations for the predictions.
APA, Harvard, Vancouver, ISO, and other styles
31

Bakkiyaraj Kanthimathi Malamuthu, Thripthi P Balakrishnan, J. Deepika, Naveenkumar P, B. Venkataramanaiah, and V. Malathy. "Explainable AI for Decision-Making: A Hybrid Approach to Trustworthy Computing." International Journal of Computational and Experimental Science and Engineering 11, no. 2 (2025). https://doi.org/10.22399/ijcesen.1684.

Full text
Abstract:
In the evolving landscape of intelligent systems, ensuring transparency, fairness, and trust in artificial intelligence (AI) decision-making is paramount. This study presents a hybrid Explainable AI (XAI) framework that integrates rule-based models with deep learning techniques to enhance interpretability and trustworthiness in critical computing environments. The proposed system employs Layer-Wise Relevance Propagation (LRP) and SHAP (SHapley Additive exPlanations) for local and global interpretability, respectively, while leveraging a Convolutional Neural Network (CNN) backbone for accurate decision-making across diverse domains, including healthcare, finance, and cybersecurity. The hybrid model achieved an average accuracy of 94.3%, a precision of 91.8%, and an F1-score of 93.6%, while maintaining a computation overhead of only 6.7% compared to standard deep learning models. The trustworthiness index, computed from interpretability, robustness, and fairness metrics, reached 92.1%, demonstrating a significant improvement over traditional black-box models. This work underscores the importance of explainability in AI-driven decision-making and provides a scalable, domain-agnostic solution for trustworthy computing. The results confirm that integrating explainability mechanisms does not compromise performance and can enhance user confidence, regulatory compliance, and ethical AI deployment.
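The abstract's split between local and global interpretability can be sketched with SHAP alone, as below; the LRP component and the CNN backbone of the hybrid framework are not reproduced, and the classifier and data are stand-ins.

```python
# Sketch of the local/global interpretability split described in the abstract,
# using SHAP only (the LRP half of the hybrid is not shown). Model and data are
# stand-ins for the paper's CNN backbone and domain datasets.
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True)
clf = GradientBoostingClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X[:200])   # per-instance (local) attributions

# Global view: mean absolute SHAP value per feature across the sample.
global_importance = np.abs(shap_values).mean(axis=0)
print("top global feature indices:", np.argsort(global_importance)[::-1][:5])

# Local view: the attribution vector behind a single decision.
print("local attributions for sample 0:", np.round(shap_values[0][:5], 3))
```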
APA, Harvard, Vancouver, ISO, and other styles
32

Nambiar, Athira, Harikrishnaa S, and Sharanprasath S. "Model-agnostic explainable artificial intelligence tools for severity prediction and symptom analysis on Indian COVID-19 data." Frontiers in Artificial Intelligence 6 (December 4, 2023). http://dx.doi.org/10.3389/frai.2023.1272506.

Full text
Abstract:
Introduction: The COVID-19 pandemic had a global impact and created an unprecedented emergency in healthcare and other related frontline sectors. Various Artificial-Intelligence-based models were developed to effectively manage medical resources and identify patients at high risk. However, many of these AI models were limited in their practical high-risk applicability due to their "black-box" nature, i.e., the lack of interpretability of the model. To tackle this problem, Explainable Artificial Intelligence (XAI) was introduced, aiming to explore the "black box" behavior of machine learning models and offer definitive and interpretable evidence. XAI provides interpretable analysis in a human-compliant way, thus boosting our confidence in the successful implementation of AI systems in the wild. Methods: In this regard, this study explores the use of model-agnostic XAI methods, such as SHapley Additive exPlanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME), for COVID-19 symptom analysis in Indian patients toward a COVID severity prediction task. Various classifiers, such as a Decision Tree Classifier, an XGBoost Classifier, and a Neural Network Classifier, are leveraged to develop the machine learning models. Results and discussion: The proposed XAI tools are found to augment the high performance of AI systems with human-interpretable evidence and reasoning, as shown through the interpretation of various explainability plots. Our comparative analysis illustrates the significance of XAI tools and their impact within a healthcare context. The study suggests that SHAP and LIME analysis are promising methods for incorporating explainability in model development and can lead to better and more trustworthy ML models in the future.
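A minimal sketch of the comparative setup the abstract describes, applying SHAP and LIME to the same prediction of a gradient-boosted classifier; the synthetic "symptom" features are placeholders for the Indian COVID-19 dataset, which is not reproduced here.

```python
# Illustrative comparison of SHAP and LIME on one prediction, mirroring the
# model-agnostic setup in the abstract. Synthetic data stands in for the
# Indian COVID-19 symptom dataset, which is not available in this listing.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from xgboost import XGBClassifier

feature_names = [f"symptom_{i}" for i in range(8)]          # placeholder names
X, y = make_classification(n_samples=2000, n_features=8, n_informative=5, random_state=0)
model = XGBClassifier(n_estimators=200, random_state=0).fit(X, y)

# SHAP: exact tree-based attributions for one patient record.
shap_values = shap.TreeExplainer(model).shap_values(X[:1])
print("SHAP attributions:", dict(zip(feature_names, np.round(shap_values[0], 3))))

# LIME: local surrogate fitted around the same record.
lime_exp = LimeTabularExplainer(X, feature_names=feature_names, mode="classification")
print("LIME weights:",
      lime_exp.explain_instance(X[0], model.predict_proba, num_features=8).as_list())
```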
APA, Harvard, Vancouver, ISO, and other styles
33

Kottahachchi Kankanamge Don, Asitha, and Ibrahim Khalil. "QRLaXAI: quantum representation learning and explainable AI." Quantum Machine Intelligence 7, no. 1 (2025). https://doi.org/10.1007/s42484-025-00253-9.

Full text
Abstract:
As machine learning grows increasingly complex due to big data and deep learning, model explainability has become essential to fostering user trust. Quantum machine learning (QML) has emerged as a promising field, leveraging quantum computing to enhance classical machine learning methods, particularly through quantum representation learning (QRL). QRL aims to provide more efficient and powerful machine learning capabilities on noisy intermediate-scale quantum (NISQ) devices. However, interpreting QRL models poses significant challenges due to the reliance on quantum gate-based parameterized circuits, which, while analogous to classical neural network layers, operate in the quantum domain. To address these challenges, we propose an explainable QRL framework combining a quantum autoencoder (QAE) with a variational quantum classifier (VQC) and incorporating theoretical and empirical explainability for image data. Our dual approach enhances model interpretability by integrating visual explanations via local interpretable model-agnostic explanations (LIME) and analytical insights using Shapley Additive Explanations (SHAP). These complementary methods provide a deeper understanding of the model's decision-making process based on prediction outcomes. Experimental evaluations on simulators and superconducting quantum hardware validate the effectiveness of the proposed framework for classification tasks, underscoring the importance of explainable representation learning in advancing QML towards more transparent and reliable applications.
APA, Harvard, Vancouver, ISO, and other styles
34

Altieri, Massimiliano, Michelangelo Ceci, and Roberto Corizzo. "An end-to-end explainability framework for spatio-temporal predictive modeling." Machine Learning 114, no. 4 (2025). https://doi.org/10.1007/s10994-024-06733-6.

Full text
Abstract:
The rising adoption of AI models in real-world applications characterized by sensor data creates an urgent need for inference explanation mechanisms to support domain experts in making informed decisions. Explainable AI (XAI) opens up a new opportunity to extend black-box deep learning models with such inference explanation capabilities. However, existing XAI approaches for tabular, image, and graph data are ineffective in contexts with spatio-temporal data. In this paper, we fill this gap by proposing an XAI method specifically tailored for spatio-temporal data in sensor networks, where observations are collected at regular time intervals and at different locations. Our model-agnostic masking meta-optimization method for deep learning models uncovers global salient factors influencing model predictions, and generates explanations taking into account multiple analytical views, such as features, timesteps, and node locations. Our qualitative and quantitative experiments with real-world forecasting datasets show that our approach effectively extracts explanations of model predictions, and is competitive with state-of-the-art approaches.
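The paper's masking meta-optimization is only summarized above; the sketch below shows the generic idea of learning a sparse saliency mask over nodes, timesteps, and features for a frozen forecaster, with the shapes, toy model, and loss weights all assumed rather than taken from the paper.

```python
# Generic perturbation-mask sketch of the idea described in the abstract:
# learn a sparse mask over (nodes, timesteps, features) such that masked
# inputs still reproduce the frozen model's predictions. The toy model,
# shapes, and loss weights are assumptions, not the authors' method.
import torch
import torch.nn as nn

nodes, timesteps, features = 5, 24, 3
model = nn.Sequential(nn.Flatten(), nn.Linear(nodes * timesteps * features, 1))  # frozen stand-in forecaster
for p in model.parameters():
    p.requires_grad_(False)

x = torch.randn(64, nodes, timesteps, features)            # a batch of sensor-network windows
with torch.no_grad():
    y_ref = model(x)                                       # predictions to be explained

mask_logits = torch.zeros(nodes, timesteps, features, requires_grad=True)
opt = torch.optim.Adam([mask_logits], lr=0.05)

for step in range(300):
    mask = torch.sigmoid(mask_logits)                      # values in (0, 1)
    y_masked = model(x * mask)                             # keep only "salient" input entries
    fidelity = ((y_masked - y_ref) ** 2).mean()            # stay close to original predictions
    sparsity = mask.mean()                                 # encourage small masks
    loss = fidelity + 0.1 * sparsity
    opt.zero_grad()
    loss.backward()
    opt.step()

# Global explanation: per-view saliency obtained by marginalizing the mask.
mask = torch.sigmoid(mask_logits).detach()
print("feature saliency:", mask.mean(dim=(0, 1)))
print("timestep saliency:", mask.mean(dim=(0, 2)))
print("node saliency:", mask.mean(dim=(1, 2)))
```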
APA, Harvard, Vancouver, ISO, and other styles
35

Liu, Bokai, Pengju Liu, Weizhuo Lu, and Thomas Olofsson. "Explainable Artificial Intelligence (XAI) for Material Design and Engineering Applications: A Quantitative Computational Framework." International Journal of Mechanical System Dynamics, May 20, 2025. https://doi.org/10.1002/msd2.70017.

Full text
Abstract:
The advancement of artificial intelligence (AI) in material design and engineering has led to significant improvements in the predictive modeling of material properties. However, the lack of interpretability in machine learning (ML)-based material informatics presents a major barrier to its practical adoption. This study proposes a novel quantitative computational framework that integrates ML models with explainable artificial intelligence (XAI) techniques to enhance both predictive accuracy and interpretability in material property prediction. The framework systematically incorporates a structured pipeline, including data processing, feature selection, model training, performance evaluation, explainability analysis, and real-world deployment. It is validated through a representative case study on the prediction of high-performance concrete (HPC) compressive strength, utilizing a comparative analysis of ML models such as Random Forest, XGBoost, Support Vector Regression (SVR), and Deep Neural Networks (DNNs). The results demonstrate that XGBoost achieves the highest predictive performance, while SHAP (Shapley Additive Explanations) and LIME (Local Interpretable Model-Agnostic Explanations) provide detailed insights into feature importance and material interactions. Additionally, the deployment of the trained model as a cloud-based Flask-Gunicorn API enables real-time inference, ensuring its scalability and accessibility for industrial and research applications. The proposed framework addresses key limitations of existing ML approaches by integrating advanced explainability techniques, systematically handling nonlinear feature interactions, and providing a scalable deployment strategy. This study contributes to the development of interpretable and deployable AI-driven material informatics, bridging the gap between data-driven predictions and fundamental material science principles.
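A compressed sketch of the pipeline stages named in the abstract: train a gradient-boosted regressor, attach SHAP attributions, and expose both through a Flask endpoint. The feature names and data are placeholders for the HPC compressive-strength case study, and the API layout is an assumption.

```python
# Minimal sketch of the pipeline stages named in the abstract: train a gradient-
# boosted regressor, attach SHAP attributions, and serve both via Flask.
# Feature names and data are placeholders for the HPC compressive-strength study.
import numpy as np
import shap
from flask import Flask, jsonify, request
from sklearn.datasets import make_regression
from xgboost import XGBRegressor

feature_names = ["cement", "slag", "fly_ash", "water", "superplasticizer", "age"]  # assumed
X, y = make_regression(n_samples=1000, n_features=len(feature_names), noise=5.0, random_state=0)
model = XGBRegressor(n_estimators=300, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    # Expects a JSON body such as {"features": [300.0, 100.0, 0.0, 180.0, 8.0, 28.0]}.
    row = np.array(request.get_json()["features"], dtype=float).reshape(1, -1)
    pred = float(model.predict(row)[0])
    contrib = dict(zip(feature_names, explainer.shap_values(row)[0].round(3).tolist()))
    return jsonify({"compressive_strength": pred, "shap_contributions": contrib})

if __name__ == "__main__":
    app.run()   # typically run behind Gunicorn in the deployment the paper describes
```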
APA, Harvard, Vancouver, ISO, and other styles
36

Bhargav Sunkara, Vivek Lakshman. "Explainable AI (XAI) for Product Managers: Bridging the Gap between AI Models and Business Needs." International Journal of Scientific Research and Modern Technology, May 29, 2023, 1–6. https://doi.org/10.38124/ijsrmt.v2i5.571.

Full text
Abstract:
Product managers often grapple with integrating opaque AI systems into decision-making while ensuring transparency, trust, and alignment with business goals. Explainable AI (XAI) is an emerging field targeting the transparency and interpretability of AI. While considerable progress has been made in AI technology in recent years, for the average non-technical user the inner workings of AI remain a black box, leaving results opaque and raising concerns about potential misuse of data by AI systems. This paper proposes a robust framework, the XAI-Bridge Framework, which leverages Explainable AI (XAI) to address these challenges. A core principle of the framework is the use of model-agnostic tools such as LIME [1] and SHAP [2], which enable users to understand the high-level behaviour of a model without needing access to its inner structure. These tools provide intuitive explanations and support causal inference for deeper insights. Through real-world case studies spanning e-commerce personalization, retail demand forecasting, loan approval systems, and cancer diagnostics, this paper illustrates XAI's capabilities to enhance model interpretability, reduce biases, and build stakeholder confidence. The results of this work include a practical, end-to-end methodology for setting explainability objectives, selecting optimal XAI tools, and assessing their impact on business metrics and compliance. This work also equips product managers with actionable strategies to connect AI capabilities to organizational success.
APA, Harvard, Vancouver, ISO, and other styles
37

Chowdhury, Prithwijit, Ahmad Mustafa, Mohit Prabhushankar, and Ghassan AlRegib. "A unified framework for evaluating robustness of Machine Learning Interpretability for Prospect Risking." GEOPHYSICS, February 4, 2025, 1–53. https://doi.org/10.1190/geo2024-0020.1.

Full text
Abstract:
In geophysics, hydrocarbon prospect risking involves assessing the risks associated with hydrocarbon exploration by integrating data from various sources. Machine learning-based classifiers trained on tabular data have been recently used to make faster decisions on these prospects. The lack of transparency in the decision-making processes of such models has led to the emergence of explainable AI. Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) are two such examples of these explainability methods which aim to generate insights about a particular decision by ranking the input features in terms of importance. However, results of the same scenario generated by these two different explanation approaches have been shown to disagree or diverge, particularly for complex data. This discrepancy arises because the concepts of “importance” and “relevance” are defined differently across these approaches. Thus, grounding these ranked features using theoretically backed causal notions of necessity and sufficiency can serve as a more reliable and robust way to enhance the trustworthiness of these methodologies. We propose a unified framework to generate counterfactuals, quantify necessity and sufficiency, and use these measures to perform a robustness evaluation of the insights provided by LIME and SHAP on high-dimensional structured prospect risking data. This robustness test yields deeper insights into the models’ capabilities to handle erroneous data and reveals which explainability module pairs most effectively with which model for our dataset for hydrocarbon indicators.
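The sketch below gives simplified Monte-Carlo estimates of per-feature sufficiency and necessity for one prediction, in the spirit of the framework described above; the estimators and synthetic data are illustrative assumptions, not the authors' formal causal definitions or their prospect-risking dataset.

```python
# Simplified Monte-Carlo estimates of "sufficiency" and "necessity" for one
# feature of one instance, in the spirit of the framework described above.
# The estimators and synthetic data are illustrative assumptions only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=3000, n_features=10, n_informative=6, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

x0 = X[0]
pred0 = clf.predict(x0.reshape(1, -1))[0]
feature = 3                                   # e.g. a feature ranked important by LIME/SHAP
background = X[rng.integers(0, len(X), size=500)]

# Sufficiency: keep the feature at its observed value, resample everything else;
# how often does the original prediction survive?
suff_samples = background.copy()
suff_samples[:, feature] = x0[feature]
sufficiency = (clf.predict(suff_samples) == pred0).mean()

# Necessity: keep the instance, resample only this feature from the background;
# how often does the prediction flip (i.e., the original value was needed)?
nec_samples = np.tile(x0, (500, 1))
nec_samples[:, feature] = background[:, feature]
necessity = (clf.predict(nec_samples) != pred0).mean()

print(f"feature {feature}: sufficiency~{sufficiency:.2f}, necessity~{necessity:.2f}")
```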
APA, Harvard, Vancouver, ISO, and other styles
38

Umasankar, S., G. Viswanath, V. M. Bharathi, and G. Prathyusha. "Network Intrusion Detection Systems: An Evaluation of Black-Box Explainable AI Frameworks (E-XAI)." International Journal of Applied Engineering and Management Letters, June 28, 2025, 93–107. https://doi.org/10.47992/ijaeml.2581.7000.0235.

Full text
Abstract:
In response to the growing complexity and frequency of network attacks, intrusion detection systems (IDS) have become essential for identifying and mitigating cyber threats. However, the opaque nature of many high-performance AI models, often referred to as "black-box" models, poses a major challenge in terms of interpretability and trust. This study presents a comprehensive framework for evaluating black-box explainable AI (XAI) techniques in the context of network intrusion detection. The proposed E-XAI framework combines global and local interpretability tools such as SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-Agnostic Explanations) to dissect the decision processes of different black-box IDS models. The evaluation is carried out using three widely recognized benchmark datasets: SIMARGL 2021, NSL-KDD, and CIC IDS 2017. The framework not only measures model performance but also examines the transparency and reliability of the decisions made by the models, providing actionable insights for security analysts. Among the tested models, a Voting Classifier that combines boosted decision trees and bagged random forests emerged as the best performer. This ensemble model achieved 97.9% accuracy on the CIC IDS 2017 dataset, 99.4% on NSL-KDD, and a perfect 100% on SIMARGL 2021. These results highlight the framework's ability to maintain high detection accuracy while offering interpretable and trustworthy outputs through integrated XAI techniques. Overall, this work emphasizes the importance of incorporating explainability into high-performing IDS models, ensuring that security analysts can understand and trust the decisions made by AI systems. The E-XAI framework effectively bridges the gap between performance and interpretability, making it a valuable tool for advancing the reliability and usability of modern network security solutions.
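A minimal sketch of the best-performing configuration named in the abstract, a soft-voting ensemble of boosted trees and random forests explained with LIME; synthetic flow features stand in for the SIMARGL 2021, NSL-KDD, and CIC IDS 2017 datasets.

```python
# Sketch of the voting ensemble described in the abstract (boosted trees plus
# random forests) with a LIME explanation of one flow. Synthetic flows stand in
# for SIMARGL 2021 / NSL-KDD / CIC IDS 2017.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier, VotingClassifier

X, y = make_classification(n_samples=5000, n_features=20, n_informative=12, random_state=0)
ensemble = VotingClassifier(
    estimators=[("boosted", GradientBoostingClassifier(random_state=0)),
                ("bagged_rf", RandomForestClassifier(n_estimators=200, random_state=0))],
    voting="soft",
).fit(X, y)

explainer = LimeTabularExplainer(X, feature_names=[f"flow_feat_{i}" for i in range(20)],
                                 class_names=["benign", "attack"], mode="classification")
# Local explanation of a single flow's classification by the ensemble.
print(explainer.explain_instance(X[0], ensemble.predict_proba, num_features=6).as_list())
```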
APA, Harvard, Vancouver, ISO, and other styles
39

Mekonnen, Ephrem Tibebe, Luca Longo, and Pierpaolo Dondio. "A global model-agnostic rule-based XAI method based on Parameterized Event Primitives for time series classifiers." Frontiers in Artificial Intelligence 7 (September 20, 2024). http://dx.doi.org/10.3389/frai.2024.1381921.

Full text
Abstract:
Time series classification is a challenging research area where machine learning and deep learning techniques have shown remarkable performance. However, these models are often seen as black boxes due to their minimal interpretability. On the one hand, there is a plethora of eXplainable AI (XAI) methods designed to elucidate the functioning of models trained on image and tabular data. On the other hand, adapting these methods to explain deep learning-based time series classifiers may not be straightforward due to the temporal nature of time series data. This research proposes a novel global post-hoc explainable method for unearthing the key time steps behind the inferences made by deep learning-based time series classifiers. The approach generates a decision-tree graph, a specific set of rules that can be seen as explanations, potentially enhancing interpretability. The methodology involves two major phases: (1) training and evaluating deep-learning-based time series classification models, and (2) extracting parameterized primitive events, such as increasing, decreasing, local max, and local min, from each instance of the evaluation set and clustering such events to extract prototypical ones. These prototypical primitive events are then used as input to a decision-tree classifier trained to fit the model predictions of the test set rather than the ground-truth data. Experiments were conducted on diverse real-world datasets sourced from the UCR archive, employing metrics such as accuracy, fidelity, robustness, and the number of nodes and depth of the extracted rules. The findings indicate that this global post-hoc method can improve the global interpretability of complex time series classification models.
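A simplified rendering of the two-phase pipeline the abstract outlines: encode each series by primitive events and fit a decision-tree surrogate to the black-box model's predictions, measuring fidelity. The count-based event encoding, toy series, and omission of the clustering step are simplifications, not the authors' exact method.

```python
# Simplified global surrogate in the spirit of the abstract: describe each series
# by primitive events (increasing, decreasing, local max, local min), then fit a
# decision tree to the black-box model's predictions and measure fidelity.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n, length = 600, 50
X_series = np.cumsum(rng.normal(size=(n, length)), axis=1)    # toy time series
y = (X_series[:, -1] > X_series[:, 0]).astype(int)            # toy labels

black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_series, y)
y_model = black_box.predict(X_series)                         # predictions to imitate

def primitive_event_counts(s: np.ndarray) -> list:
    # Count the four primitive event types along the series.
    d = np.diff(s)
    increasing = int((d > 0).sum())
    decreasing = int((d < 0).sum())
    local_max = int(((d[:-1] > 0) & (d[1:] < 0)).sum())
    local_min = int(((d[:-1] < 0) & (d[1:] > 0)).sum())
    return [increasing, decreasing, local_max, local_min]

E = np.array([primitive_event_counts(s) for s in X_series])

# Global surrogate trained on the model's outputs, not the ground truth.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(E, y_model)
fidelity = accuracy_score(y_model, surrogate.predict(E))
print(f"fidelity to the black box: {fidelity:.2f}")
print(export_text(surrogate, feature_names=["increasing", "decreasing", "local_max", "local_min"]))
```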
APA, Harvard, Vancouver, ISO, and other styles
40

Rahimiaghdam, Shakiba, and Hande Alemdar. "Evaluating the quality of visual explanations on chest X-ray images for thorax diseases classification." Neural Computing and Applications, March 10, 2024. http://dx.doi.org/10.1007/s00521-024-09587-0.

Full text
Abstract:
Deep learning models are extensively used but often lack transparency due to their complex internal mechanics. To bridge this gap, the field of explainable AI (XAI) strives to make these models more interpretable. However, a significant obstacle in XAI is the absence of quantifiable metrics for evaluating explanation quality. Existing techniques, reliant on manual assessment or inadequate metrics, face limitations in scalability, reproducibility, and trustworthiness. Recognizing these issues, the current study specifically addresses the quality assessment of visual explanations in medical imaging, where interpretability profoundly influences diagnostic accuracy and trust in AI-assisted decisions. Introducing novel criteria such as informativeness, localization, coverage, multi-target capturing, and proportionality, this work presents a comprehensive method for the objective assessment of various explainability algorithms. These newly introduced criteria aid in identifying optimal evaluation metrics. The study expands the domain’s analytical toolkit by examining existing metrics, which have been prevalent in recent works for similar applications, and proposing new ones. Rigorous analysis led to selecting Jensen–Shannon divergence (JS_DIV) as the most effective metric for visual explanation quality. Applied to the multi-label, multi-class diagnosis of thoracic diseases using a trained classifier on the CheXpert dataset, local interpretable model-agnostic explanations (LIME) with diverse segmentation strategies interpret the classifier’s decisions. A qualitative analysis on an unseen subset of the VinDr-CXR dataset evaluates these metrics, confirming JS_DIV’s superiority. The subsequent quantitative analysis optimizes LIME’s hyper-parameters and benchmarks its performance across various segmentation algorithms, underscoring the utility of an objective assessment metric in practical applications.
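The metric the study settles on can be sketched in a few lines: Jensen–Shannon divergence between a normalized saliency map and a normalized ground-truth mask. The random heat map and rectangular mask below are placeholders for LIME explanations on chest X-rays and radiologist annotations.

```python
# Tiny sketch of the selected metric: Jensen-Shannon divergence between a
# normalized visual explanation and a ground-truth region mask. The random
# "saliency map" and mask are placeholders for LIME heat maps and annotations.
import numpy as np
from scipy.spatial.distance import jensenshannon

rng = np.random.default_rng(0)
saliency = rng.random((224, 224))               # e.g., a LIME heat map
mask = np.zeros((224, 224))
mask[60:150, 80:170] = 1.0                      # e.g., an annotated pathology region

def to_distribution(a: np.ndarray) -> np.ndarray:
    # Flatten and normalize so the array sums to one (a discrete distribution).
    a = a.ravel().astype(float)
    return a / a.sum()

# SciPy returns the JS distance (the square root of the divergence), so square it.
js_div = jensenshannon(to_distribution(saliency), to_distribution(mask)) ** 2
print(f"JS divergence between explanation and ground truth: {js_div:.4f}")
```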
APA, Harvard, Vancouver, ISO, and other styles
41

Patel, Vishva, Hitasvi Shukla, and Aashka Raval. "Enhancing Botnet Detection With Machine Learning And Explainable AI: A Step Towards Trustworthy AI Security." International Journal For Multidisciplinary Research 7, no. 2 (2025). https://doi.org/10.36948/ijfmr.2025.v07i02.39353.

Full text
Abstract:
The rapid proliferation of botnets, armies of compromised machines controlled remotely by malicious actors, has played a pivotal role in the increase in cyber-attacks such as Distributed Denial-of-Service (DDoS) attacks, credential theft, data exfiltration, command-and-control (C2) activity, and automated exploitation of vulnerabilities. Legacy botnet detection methods, founded on signature matching and deep packet inspection (DPI), are rapidly becoming obsolete because of the prevalence of encryption schemes like TLS 1.3, DNS-over-HTTPS (DoH), and encrypted VPN tunneling. These encryption mechanisms conceal packet payloads, making traditional network monitoring technology unsuitable for botnet detection. Faced with this challenge, ML-based botnet detection mechanisms have risen to prominence. Existing ML-based approaches, however, are marred by two inherent weaknesses: (1) lack of granularity in detection, because most models are based on binary classification with no distinction between botnet attack variants, and (2) uninterpretability, where high-performing AI models behave like black-box mechanisms, which limits trust in security automation and leads to high false positives, making threat analysis difficult for security practitioners. To overcome these challenges, this study proposes an AI-based, multi-class botnet detection system for encrypted network traffic that includes Explainable AI (XAI) techniques for improving model explainability and decision transparency. Two datasets, CICIDS-2017 and CTU-NCC, are used in this study, with a systematic data preprocessing step employed to maximise data quality, feature representation, and model performance. Preprocessing included duplicate record removal, missing and infinite value imputation, categorical feature transformation, and removal of highly correlated and zero-variance features to minimise model bias. Dimensionality reduction was performed using Principal Component Analysis (PCA), lowering the features of CICIDS-2017 from 70 to 34 and those of CTU-NCC from 17 to 4 to maximise computational efficiency. Additionally, to deal with skewed class distributions, the Synthetic Minority Over-Sampling Technique (SMOTE) was employed to synthesise minority class samples and provide balanced representation of botnet attack types. For CICIDS-2017, we used three machine learning algorithms: Random Forest (RF) with cross-validation (0.98 accuracy, 100K samples per class), eXtreme Gradient Boosting (XGB) with Bayesian optimisation (0.997 accuracy, 180K samples per class), and our recently introduced hybrid K-Nearest Neighbours (KNN) + Random Forest (RF) model, resulting in a state-of-the-art accuracy of 0.99 (180K samples per class). The CTU-NCC dataset was divided across three network sensors and processed separately. Random Forest (RF), Decision Tree (DT), and KNN models were trained independently for each sensor, and to enhance performance, ensemble learning methods such as stacking and voting were applied to combine the results from the sensors. The resulting accuracies were as follows: Random Forest (stacking: 99.38%, voting: 99.35%), Decision Tree (stacking: 99.68%, voting: 91.65%), and KNN (stacking: 97.53%, voting: 97.11%).
Explainable AI (XAI) techniques, namely SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-Agnostic Explanations), were integrated to provide enhanced interpretability for the eXtreme Gradient Boosting and hybrid KNN + Random Forest models, supplying explanations for model decisions and enhancing analyst confidence in the system's predictions. Our key contribution is the hybrid KNN + Random Forest system, which achieves 0.99 accuracy while providing explainability, illustrating an accurate, scalable, and deployable AI-based defence against botnet attacks. Our experiments show that the multi-class classification approach greatly assists in discriminating between botnet attack types, and that Explainable AI (XAI) enhances clarity, making this a strong, practical solution for botnet detection in real encrypted-network scenarios.
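A condensed sketch of the preprocessing and ensembling steps named in the abstract: PCA reduction, SMOTE balancing, and a stacked KNN + Random Forest classifier. The synthetic multi-class flows stand in for CICIDS-2017/CTU-NCC, and the plain scikit-learn stack shown is an assumed stand-in for the authors' hybrid design.

```python
# Sketch of the preprocessing and ensembling steps named in the abstract:
# PCA for dimensionality reduction, SMOTE for class balancing, and a stacked
# KNN + Random Forest classifier. Synthetic flows stand in for the datasets.
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=6000, n_features=70, n_informative=30,
                           n_classes=4, weights=[0.7, 0.15, 0.1, 0.05], random_state=0)
X = PCA(n_components=34, random_state=0).fit_transform(X)        # 70 -> 34, as in the paper
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
X_train, y_train = SMOTE(random_state=0).fit_resample(X_train, y_train)  # balance attack classes

stack = StackingClassifier(
    estimators=[("knn", KNeighborsClassifier(n_neighbors=5)),
                ("rf", RandomForestClassifier(n_estimators=200, random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000),
).fit(X_train, y_train)

print(classification_report(y_test, stack.predict(X_test)))
```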
APA, Harvard, Vancouver, ISO, and other styles