Journal articles on the topic 'LIME (Local Interpretable Model-agnostic Explanations)'

Consult the top 50 journal articles for your research on the topic 'LIME (Local Interpretable Model-agnostic Explanations).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Zafar, Muhammad Rehman, and Naimul Khan. "Deterministic Local Interpretable Model-Agnostic Explanations for Stable Explainability." Machine Learning and Knowledge Extraction 3, no. 3 (2021): 525–41. http://dx.doi.org/10.3390/make3030027.

Abstract:
Local Interpretable Model-Agnostic Explanations (LIME) is a popular technique used to increase the interpretability and explainability of black box Machine Learning (ML) algorithms. LIME typically creates an explanation for a single prediction by any ML model by learning a simpler interpretable model (e.g., linear classifier) around the prediction through generating simulated data around the instance by random perturbation, and obtaining feature importance through applying some form of feature selection. While LIME and similar local algorithms have gained popularity due to their simplicity, th
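
The workflow this abstract describes (perturb around an instance, query the black box, then fit a weighted interpretable surrogate) is what the open-source lime package implements. A minimal sketch, using scikit-learn's iris data and a random forest purely as illustrative stand-ins rather than anything from the cited paper:

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
X, y = data.data, data.target
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Explain one prediction with a local linear surrogate fitted on perturbed samples.
explainer = LimeTabularExplainer(
    training_data=X,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)
exp = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print(exp.as_list())  # weights of the local surrogate, one per selected feature

Because the perturbation step is random, repeated calls to explain_instance can return different weights, which is the instability this paper sets out to remove.
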
2

Qian, Junyan, Tong Wen, Ming Ling, Xiaofu Du, and Hao Ding. "Pixel-Based Clustering for Local Interpretable Model-Agnostic Explanations." Journal of Artificial Intelligence and Soft Computing Research 15, no. 3 (2025): 257–77. https://doi.org/10.2478/jaiscr-2025-0013.

Abstract:
To enhance the interpretability of black-box machine learning, model-agnostic explanations have become a focal point of interest. This paper introduces Pixel-based Local Interpretable Model-agnostic Explanations (PLIME), a method that generates perturbation samples via pixel clustering to derive raw explanations. Through iterative refinement, it reduces the number of features, culminating in an optimal feature set that best predicts the model’s score. PLIME increases the relevance of features associated with correct predictions in the explanations. A comprehensive evaluation of PLIME
3

M N., Sowmiya, Jaya Sri S., Deepshika S., and Hanushya Devi G. "Credit Risk Analysis using Explainable Artificial Intelligence." Journal of Soft Computing Paradigm 6, no. 3 (2024): 272–83. http://dx.doi.org/10.36548/jscp.2024.3.004.

Abstract:
The proposed research focuses on enhancing the interpretability of risk evaluation in credit approvals within the banking sector. This work employs LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations) to provide explanations for individual predictions: LIME approximates the model locally with an interpretable model, while SHAP offers insights into the contribution of each feature to the prediction through both global and local explanations. The research integrates gradient boosting algorithms (XGBoost, LightGBM) and Random Forest with these Explainabl
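
The SHAP half of the pipeline sketched in this abstract is easy to illustrate; the LIME half would follow the sketch under entry 1. The snippet below pairs shap with xgboost on synthetic data standing in for a credit table (an assumption for illustration, not the study's dataset):

import numpy as np
import shap
import xgboost
from sklearn.datasets import make_classification

# Hypothetical credit-style tabular data.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = xgboost.XGBClassifier(n_estimators=100, max_depth=3).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)                 # local: one attribution row per applicant
global_importance = np.abs(shap_values).mean(axis=0)   # global: mean |SHAP| value per feature
print(global_importance)
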
4

Jiang, Enshuo. "UniformLIME: A Uniformly Perturbed Local Interpretable Model-Agnostic Explanations Approach for Aerodynamics." Journal of Physics: Conference Series 2171, no. 1 (2022): 012025. http://dx.doi.org/10.1088/1742-6596/2171/1/012025.

Abstract:
Machine learning and deep learning are widely used in the field of aerodynamics. But most models are often seen as black boxes due to lack of interpretability. Local Interpretable Model-agnostic Explanations (LIME) is a popular method that uses a local surrogate model to explain a single instance of machine learning. Its main disadvantages are the instability of the explanations and low local fidelity. In this paper, we propose an original modification to LIME by employing a new perturbed sample generation method for aerodynamic tabular data in regression model, which makes the differ
5

Islam, Md Rabiul, Tapan Kumar Godder, Ahsan Ul-Ambia, et al. "Ensemble model-based arrhythmia classification with local interpretable model-agnostic explanations." IAES International Journal of Artificial Intelligence (IJ-AI) 14, no. 3 (2025): 2012. https://doi.org/10.11591/ijai.v14.i3.pp2012-2025.

Abstract:
Arrhythmia can lead to heart failure, stroke, and sudden cardiac arrest. Prompt diagnosis of arrhythmia is crucial for appropriate treatment. This analysis utilized four databases. We utilized seven machine learning (ML) algorithms in our work. These algorithms include logistic regression (LR), decision tree (DT), extreme gradient boosting (XGB), K-nearest neighbors (KNN), naïve Bayes (NB), multilayer perceptron (MLP), AdaBoost, and a bagging ensemble of these approaches. In addition, we conducted an analysis on a stacking ensemble consisting of XGB and bagging XGB. This study examines various
6

Bhatnagar, Shweta, and Rashmi Agrawal. "Understanding explainable artificial intelligence techniques: a comparative analysis for practical application." Bulletin of Electrical Engineering and Informatics 13, no. 6 (2024): 4451–55. http://dx.doi.org/10.11591/eei.v13i6.8378.

Abstract:
Explainable artificial intelligence (XAI) uses artificial intelligence (AI) tools and techniques to build interpretability in black-box algorithms. XAI methods are classified based on their purpose (pre-model, in-model, and post-model), scope (local or global), and usability (model-agnostic and model-specific). XAI methods and techniques were summarized in this paper with real-life examples of XAI applications. Local interpretable model-agnostic explanations (LIME) and shapley additive explanations (SHAP) methods were applied to the moral dataset to compare the performance outcomes of these tw
7

Hutke, Prof Ankush, Kiran Sahu, Ameet Mishra, Aniruddha Sawant, and Ruchitha Gowda. "Predict XAI." International Research Journal of Innovations in Engineering and Technology 09, no. 04 (2025): 172–76. https://doi.org/10.47001/irjiet/2025.904026.

Abstract:
Stroke predictors using Explainable Artificial Intelligence (XAI) aim to provide accurate and interpretable stroke risk predictions. This research integrates machine learning models such as Decision Trees, Random Forest, Logistic Regression, and Support Vector Machines, leveraging ensemble learning techniques like stacking and voting to enhance predictive accuracy. The system employs XAI techniques such as SHAP (SHapley Additive Explanations) and LIME (Local Interpretable Model-Agnostic Explanations) to ensure model transparency and interpretability. This paper presents the methodology, implem
8

Nisha, Mrs M. P. "Interpretable Deep Neural Networks using SHAP and LIME for Decision Making in Smart Home Automation." International Scientific Journal of Engineering and Management 04, no. 05 (2025): 1–7. https://doi.org/10.55041/isjem03409.

Abstract:
Deep Neural Networks (DNNs) are increasingly being used in smart home automation for intelligent decision-making based on IoT sensor data. This project aims to develop an interpretable deep neural network model for decision-making in smart home automation using SHAP (Shapley Additive Explanations) and LIME (Local Interpretable Model-agnostic Explanations). The focus is on enhancing transparency in AI-driven automation systems by providing clear explanations for model predictions. The approach involves collecting IoT sensor data from smart home environments, training a deep learning
9

Abdullah, Talal A. A., Mohd Soperi Mohd Zahid, Waleed Ali, and Shahab Ul Hassan. "B-LIME: An Improvement of LIME for Interpretable Deep Learning Classification of Cardiac Arrhythmia from ECG Signals." Processes 11, no. 2 (2023): 595. http://dx.doi.org/10.3390/pr11020595.

Abstract:
Deep Learning (DL) has gained enormous popularity recently; however, it is an opaque technique that is regarded as a black box. To ensure the validity of the model’s prediction, it is necessary to explain its authenticity. A well-known locally interpretable model-agnostic explanation method (LIME) uses surrogate techniques to simulate reasonable precision and provide explanations for a given ML model. However, LIME explanations are limited to tabular, textual, and image data. They cannot be provided for signal data features that are temporally interdependent. Moreover, LIME suffers from critic
10

Admassu, Tsehay. "Evaluation of Local Interpretable Model-Agnostic Explanation and Shapley Additive Explanation for Chronic Heart Disease Detection." Proceedings of Engineering and Technology Innovation 23 (January 1, 2023): 48–59. http://dx.doi.org/10.46604/peti.2023.10101.

Abstract:
This study aims to investigate the effectiveness of local interpretable model-agnostic explanation (LIME) and Shapley additive explanation (SHAP) approaches for chronic heart disease detection. The efficiency of LIME and SHAP are evaluated by analyzing the diagnostic results of the XGBoost model and the stability and quality of counterfactual explanations. Firstly, 1025 heart disease samples are collected from the University of California Irvine. Then, the performance of LIME and SHAP is compared by using the XGBoost model with various measures, such as consistency and proximity. Finally, Pyth
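
A simple way to quantify the kind of consistency this study measures is to run LIME twice with different random seeds and compare the top-k feature sets; this is only a hypothetical check on synthetic data, not the paper's protocol or its heart-disease table:

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from lime.lime_tabular import LimeTabularExplainer

X, y = make_classification(n_samples=600, n_features=13, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

def top_features(seed, k=5):
    explainer = LimeTabularExplainer(X, mode="classification", random_state=seed)
    exp = explainer.explain_instance(X[0], model.predict_proba, num_features=k)
    return {name for name, _ in exp.as_list()}

# Jaccard overlap of the top-k features across two seeds: 1.0 means fully consistent.
a, b = top_features(seed=1), top_features(seed=2)
print(len(a & b) / len(a | b))
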
11

D, Retika, and Vani N. "Medicare Fraud Detection." International Research Journal of Computer Science 12, no. 04 (2025): 177–82. https://doi.org/10.26562/irjcs.2025.v1204.11.

Abstract:
This paper explores the development of an intelligent Medicare Fraud Detection System using advanced machine learning and deep learning models. The primary objective is to minimize fraudulent healthcare claims and enhance the efficiency of public fund allocation. The proposed system places a strong emphasis on explainability, transparency, and fairness in its decision-making process. To achieve this, interpretable models such as Random Forest and Decision Trees are employed. Additionally, explainable AI (XAI) tools like SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-a
12

Hülsmann, Jonas, Julia Barbosa, and Florian Steinke. "Local Interpretable Explanations of Energy System Designs." Energies 16, no. 5 (2023): 2161. http://dx.doi.org/10.3390/en16052161.

Abstract:
Optimization-based design tools for energy systems often require a large set of parameter assumptions, e.g., about technology efficiencies and costs or the temporal availability of variable renewable energies. Understanding the influence of all these parameters on the computed energy system design via direct sensitivity analysis is not easy for human decision-makers, since they may become overloaded by the multitude of possible results. We thus propose transferring an approach from explaining complex neural networks, so-called locally interpretable model-agnostic explanations (LIME), to this r
13

Hermosilla, Pamela, Sebastián Berríos, and Héctor Allende-Cid. "Explainable AI for Forensic Analysis: A Comparative Study of SHAP and LIME in Intrusion Detection Models." Applied Sciences 15, no. 13 (2025): 7329. https://doi.org/10.3390/app15137329.

Abstract:
The lack of interpretability in AI-based intrusion detection systems poses a critical barrier to their adoption in forensic cybersecurity, which demands high levels of reliability and verifiable evidence. To address this challenge, the integration of explainable artificial intelligence (XAI) into forensic cybersecurity offers a powerful approach to enhancing transparency, trust, and legal defensibility in network intrusion detection. This study presents a comparative analysis of SHapley Additive exPlanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME) applied to Extreme G
14

Jishnu, Setia. "Explainable AI: Methods and Applications." Explainable AI: Methods and Applications 8, no. 10 (2023): 5. https://doi.org/10.5281/zenodo.10021461.

Abstract:
Explainable Artificial Intelligence (XAI) has emerged as a critical area of research, ensuring that AI systems are transparent, interpretable, and accountable. This paper provides a comprehensive overview of various methods and applications of Explainable AI. We delve into the importance of interpretability in AI models, explore different techniques for making complex AI models understandable, and discuss real-world applications where explainability is crucial. Through this paper, I aim to shed light on the advancements in the field of XAI and its potential to bridge the gap between AI's predic
15

Sathyan, Anoop, Abraham Itzhak Weinberg, and Kelly Cohen. "Interpretable AI for bio-medical applications." Complex Engineering Systems 2, no. 4 (2022): 18. http://dx.doi.org/10.20517/ces.2022.41.

Abstract:
This paper presents the use of two popular explainability tools called Local Interpretable Model-Agnostic Explanations (LIME) and Shapley Additive exPlanations (SHAP) to explain the predictions made by a trained deep neural network. The deep neural network used in this work is trained on the UCI Breast Cancer Wisconsin dataset. The neural network is used to classify the masses found in patients as benign or malignant based on 30 features that describe the mass. LIME and SHAP are then used to explain the individual predictions made by the trained neural network model. The explanations provide f
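
Because both tools mentioned here are model-agnostic, the setup is compact to reproduce in spirit. The sketch below uses scikit-learn's copy of the 30-feature Wisconsin diagnostic data with a small MLP standing in for the paper's deep network, and SHAP's KernelExplainer as the model-agnostic attribution step (LIME could be swapped in exactly as in the sketch under entry 1):

import shap
from sklearn.datasets import load_breast_cancer
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X = StandardScaler().fit_transform(data.data)
y = data.target
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0).fit(X, y)

# KernelExplainer only needs a prediction function and a small background sample.
background = shap.kmeans(X, 10)
explainer = shap.KernelExplainer(model.predict_proba, background)
shap_values = explainer.shap_values(X[:1], nsamples=200)  # attributions for one case
print(shap_values)
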
16

Tamir, Qureshi. "Brain Tumor Detection System." International Journal of Scientific Research in Engineering and Management 09, no. 04 (2025): 1–9. https://doi.org/10.55041/ijsrem46252.

Abstract:
Brain tumor detection systems using Explainable Artificial Intelligence (XAI) aim to provide accurate and interpretable tumor diagnosis. This research integrates machine learning models such as Decision Trees, Random Forest, Logistic Regression, and Support Vector Machines, leveraging ensemble learning techniques like stacking and voting to enhance predictive accuracy. The system employs XAI techniques such as SHAP (Shapley Additive Explanations) and LIME (Local Interpretable Model-agnostic Explanations) to ensure model transparency and interpretability. This paper presents the metho
17

Damilare Tiamiyu, Seun Oluwaremilekun Aremu, Igba Emmanuel, Chidimma Judith Ihejirika, Michael Babatunde Adewoye, and Adeshina Akin Ajayi. "Interpretable Data Analytics in Blockchain Networks Using Variational Autoencoders and Model-Agnostic Explanation Techniques for Enhanced Anomaly Detection." International Journal of Scientific Research in Science and Technology 11, no. 6 (2024): 152–83. http://dx.doi.org/10.32628/ijsrst24116170.

Abstract:
The rapid growth of blockchain technology has brought about increased transaction volumes and complexity, leading to challenges in detecting fraudulent activities and understanding data patterns. Traditional data analytics approaches often fall short in providing both accurate anomaly detection and interpretability, especially in decentralized environments. This paper explores the integration of Variational Autoencoders (VAEs), a deep learning-based anomaly detection technique, with model-agnostic explanation methods such as SHAP (SHapley Additive Explanations) and LIME (Local Interpretable Mo
18

Xu, Jiaxiang, Zhanhao Zhang, Junfei Wang, et al. "Towards Faithful Local Explanations: Leveraging SVM to Interpret Black-Box Machine Learning Models." Symmetry 17, no. 6 (2025): 950. https://doi.org/10.3390/sym17060950.

Abstract:
Although machine learning (ML) models are widely used in many fields, their prediction processes are often hard to understand. This lack of transparency makes it harder for people to trust them, especially in high-stakes fields like healthcare and finance. Human-interpretable explanations for model predictions are crucial in these contexts. While existing local interpretation methods have been proposed, many suffer from low local fidelity, instability, and limited effectiveness when applied to highly nonlinear models. This paper presents SVM-X, a model-agnostic local explanation approach desig
19

An, Junkang, Yiwan Zhang, and Inwhee Joe. "Specific-Input LIME Explanations for Tabular Data Based on Deep Learning Models." Applied Sciences 13, no. 15 (2023): 8782. http://dx.doi.org/10.3390/app13158782.

Abstract:
Deep learning researchers believe that as deep learning models evolve, they can perform well on many tasks. However, the complex parameters of deep learning models make it difficult for users to understand how deep learning models make predictions. In this paper, we propose the specific-input local interpretable model-agnostic explanations (LIME) model, a novel interpretable artificial intelligence (XAI) method that interprets deep learning models of tabular data. The specific-input process uses feature importance and partial dependency plots (PDPs) to select the “what” and “how”. In our exper
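
The “what” and “how” selection described here builds on two standard ingredients, feature importance and partial dependence, both available in scikit-learn. The snippet below shows only those generic building blocks on synthetic data; it is not the authors' specific-input procedure:

import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import partial_dependence, permutation_importance

X, y = make_regression(n_samples=400, n_features=10, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

# "What": rank features by permutation importance.
imp = permutation_importance(model, X, y, n_repeats=5, random_state=0)
top = np.argsort(imp.importances_mean)[::-1][:3]

# "How": inspect the partial dependence of the prediction on each selected feature.
for f in top:
    pd = partial_dependence(model, X, features=[int(f)], grid_resolution=20)
    print(int(f), pd["average"].ravel()[:5])
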
20

Akkem, Yaganteeswarudu, Saroj Kumar Biswas, and Aruna Varanasi. "Role of Explainable AI in Crop Recommendation Technique of Smart Farming." International Journal of Intelligent Systems and Applications 17, no. 1 (2025): 31–52. https://doi.org/10.5815/ijisa.2025.01.03.

Abstract:
Smart farming is undergoing a transformation with the integration of machine learning (ML) and artificial intelligence (AI) to improve crop recommendations. Despite the advancements, a critical gap exists in opaque ML models that need to explain their predictions, leading to a trust deficit among farmers. This research addresses the gap by implementing explainable AI (XAI) techniques, specifically focusing on the crop recommendation technique in smart farming. An experiment was conducted using a Crop recommendation dataset, applying XAI algorithms such as Local Interpretable Model-agnostic Exp
21

Hasan, Md Mahmudul. "Understanding Model Predictions: A Comparative Analysis of SHAP and LIME on Various ML Algorithms." Journal of Scientific and Technological Research 5, no. 1 (2024): 17–26. http://dx.doi.org/10.59738/jstr.v5i1.23(17-26).eaqr5800.

Abstract:
To guarantee the openness and dependability of prediction systems across multiple domains, machine learning model interpretation is essential. In this study, a variety of machine learning algorithms are subjected to a thorough comparative examination of two model-agnostic explainability methodologies, SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-Agnostic Explanations). The study focuses on the performance of the algorithms on a dataset in order to offer subtle insights on the interpretability of models when faced with various algorithms. Intriguing new information o
22

Ma, Xianlin, Mengyao Hou, Jie Zhan, and Zhenzhi Liu. "Interpretable Predictive Modeling of Tight Gas Well Productivity with SHAP and LIME Techniques." Energies 16, no. 9 (2023): 3653. http://dx.doi.org/10.3390/en16093653.

Abstract:
Accurately predicting well productivity is crucial for optimizing gas production and maximizing recovery from tight gas reservoirs. Machine learning (ML) techniques have been applied to build predictive models for the well productivity, but their high complexity and low interpretability can hinder their practical application. This study proposes using interpretable ML solutions, SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), to provide explicit explanations of the ML prediction model. The study uses data from the Eastern Sulige tight gas field
23

T. Vengatesh. "Transparent Decision-Making with Explainable Ai (Xai): Advances in Interpretable Deep Learning." Journal of Information Systems Engineering and Management 10, no. 4 (2025): 1295–303. https://doi.org/10.52783/jisem.v10i4.10584.

Abstract:
As artificial intelligence (AI) systems, particularly deep learning models, become increasingly integrated into critical decision-making processes, the demand for transparency and interpretability grows. Explainable AI (XAI) addresses the "black-box" nature of deep learning by developing methods that make AI decisions understandable to humans. This paper explores recent advances in interpretable deep learning models, focusing on techniques such as attention mechanisms, SHAP (SHapley Additive exPlanations), LIME (Local Interpretable Model-agnostic Explanations), and self-explaining neural netwo
24

Viswan, Vimb, Shaffi Noushath, and Mahmud Mufti. "Interpreting artificial intelligence models: a systematic review on the application of LIME and SHAP in Alzheimer's disease detection." Brain Informatics 11 (April 5, 2024): A10. https://doi.org/10.1186/s40708-024-00222-1.

Abstract:
Explainable artificial intelligence (XAI) has gained much interest in recent years for its ability to explain the complex decision-making process of machine learning (ML) and deep learning (DL) models. The Local Interpretable Model-agnostic Explanations (LIME) and Shapley Additive exPlanation (SHAP) frameworks have grown as popular interpretive tools for ML and DL models. This article provides a systematic review of the application of LIME and SHAP in interpreting the detection of Alzheimer’s disease (AD). Adhering to PRISMA and Kitchenham’s guidelines, we identified 23 relevant art
25

Wangui Wanjohi, Jane, Berthine Nyunga Mpinda, and Olushina Olawale Awe. "Comparing logistic regression and neural networks for predicting skewed credit score: a LIME-based explainability approach." Statistics in Transition new series 25, no. 3 (2024): 49–67. http://dx.doi.org/10.59170/stattrans-2024-027.

Abstract:
Over the past years, machine learning emerged as a powerful tool for credit scoring, producing high-quality results compared to traditional statistical methods. However, literature shows that statistical methods are still being used because they still perform and can be interpretable compared to neural network models, considered to be black boxes. This study compares the predictive power of logistic regression and multilayer perceptron algorithms on two credit-risk datasets by applying the Local Interpretable Model-Agnostic Explanations (LIME) explainability technique. Our results show that mu
26

Pasupuleti, Murali Krishna. "Building Interpretable AI Models for Healthcare Decision Support." International Journal of Academic and Industrial Research Innovations (IJAIRI) 05, no. 05 (2025): 549–60. https://doi.org/10.62311/nesx/rphcr16.

Abstract:
The increasing integration of artificial intelligence (AI) into healthcare decision-making underscores the urgent need for models that are not only accurate but also interpretable. This study develops and evaluates interpretable AI models designed to support clinical decision-making while maintaining high predictive performance. Utilizing de-identified electronic health records (EHRs), the research implements tree-based algorithms and attention-augmented neural networks to generate clinically meaningful outputs. A combination of explainability tools—SHAP (Shapley Additive Explanation
27

Changtor, Phanupong, Wachiraphong Ratiphaphongthon, Maturada Saengthong, Kittisak Buddhachat, and Nonglak Yimtragool. "Hybrid Cassava Identification from Morphometric Analysis to Deep Convolutional Neural Networks and Confirmation Strategies." Trends in Sciences 22, no. 5 (2025): 9475. https://doi.org/10.48048/tis.2025.9475.

Abstract:
The correct identification of cassava varieties is critical for crop management, such as for developing high-value products or against agricultural pests. In this study, plant characteristic regions used for classification were verified by principal component analysis (PCA) techniques. A deep learning method was applied using well-known pretrained models to identify hybrid cassava through image classification. The models employed—ResNet-18, VGG-16, AlexNet, and GoogLeNet—yielded impressive accuracies in three-fold cross-validation experiments, achieving 100, 99.06, 99.06, and 98.59 % averaged
28

Bandstra, Mark S., Joseph C. Curtis, James M. Ghawaly, A. Chandler Jones, and Tenzing H. Y. Joshi. "Explaining machine-learning models for gamma-ray detection and identification." PLOS ONE 18, no. 6 (2023): e0286829. http://dx.doi.org/10.1371/journal.pone.0286829.

Abstract:
As more complex predictive models are used for gamma-ray spectral analysis, methods are needed to probe and understand their predictions and behavior. Recent work has begun to bring the latest techniques from the field of Explainable Artificial Intelligence (XAI) into the applications of gamma-ray spectroscopy, including the introduction of gradient-based methods like saliency mapping and Gradient-weighted Class Activation Mapping (Grad-CAM), and black box methods like Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP). In addition, new sources of s
29

Lee, Jaeseung, and Jehyeok Rew. "Vision-Language Model-Based Local Interpretable Model-Agnostic Explanations Analysis for Explainable In-Vehicle Controller Area Network Intrusion Detection." Sensors 25, no. 10 (2025): 3020. https://doi.org/10.3390/s25103020.

Abstract:
The Controller Area Network (CAN) facilitates efficient communication among vehicle components. While it ensures fast and reliable data transmission, its lightweight design makes it susceptible to data manipulation in the absence of security layers. To address these vulnerabilities, machine learning (ML)-based intrusion detection systems (IDS) have been developed and shown to be effective in identifying anomalous CAN traffic. However, these models often function as black boxes, offering limited transparency into their decision-making processes, which hinders trust in safety-critical environmen
30

Nakano, Shou, and Yang Liu. "Interpreting Temporal Shifts in Global Annual Data Using Local Surrogate Models." Mathematics 13, no. 4 (2025): 626. https://doi.org/10.3390/math13040626.

Abstract:
This paper focuses on explaining changes over time in globally sourced annual temporal data with the specific objective of identifying features in black-box models that contribute to these temporal shifts. Leveraging local explanations, a part of explainable machine learning/XAI, can yield explanations behind a country’s growth or downfall after making economic or social decisions. We employ a Local Interpretable Model-Agnostic Explanation (LIME) to shed light on national happiness indices, economic freedom, and population metrics, spanning variable time frames. Acknowledging the presence of m
31

Sahu, Diptimayee, and Satya Tripathy. "A comprehensive exploration and interpretability of Android malware with explainable deep learning techniques." International Journal on Information Technologies and Security 16, no. 4 (2024): 117–28. https://doi.org/10.59035/dovz3535.

Abstract:
This study introduces an innovative approach to tackle evolving Android malware threats using explainable artificial intelligence (XAI) methods combined with deep learning techniques. The framework enhances detection accuracy and provides interpretable insights into the model's decision-making process. The research utilizes the CICInvesAndMal2019 dataset for training with Deep Neural Network (DNN) techniques. It incorporates Shapley Additive Explanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) XAI techniques to refine the model's features and understand its prediction
32

Sarvesh Koli, Komal Bhat, Prajwal Korade, Deepak Mane, Anand Magar, and Om Khode. "Unlocking Machine Learning Model Decisions: A Comparative Analysis of LIME and SHAP for Enhanced Interpretability." Journal of Electrical Systems 20, no. 2s (2024): 598–613. http://dx.doi.org/10.52783/jes.1480.

Abstract:
XAI is critical for establishing trust and enabling the appropriate development of machine learning models. By offering transparency into how these models make judgements, XAI enables researchers and users to uncover potential biases, admit limits, and eventually enhance the fairness and dependability of AI systems. In this paper, we demonstrate two techniques, LIME and SHAP, used to improve the interpretability of machine learning models. Assessing Explainable AI (XAI) approaches is critical in searching for transparent and interpretable artificial intelligence (AI) models. Explainable AI (X
33

Sarvesh Koli, Komal Bhat, Prajwal Korade, Deepak Mane, Anand Magar, and Om Khode. "Unlocking Machine Learning Model Decisions: A Comparative Analysis of LIME and SHAP for Enhanced Interpretability." Journal of Electrical Systems 20, no. 2s (2024): 1252–67. http://dx.doi.org/10.52783/jes.1768.

Abstract:
XAI is critical for establishing trust and enabling the appropriate development of machine learning models. By offering transparency into how these models make judgements, XAI enables researchers and users to uncover potential biases, admit limits, and eventually enhance the fairness and dependability of AI systems. In this paper, we demonstrate two techniques, LIME and SHAP, used to improve the interpretability of machine learning models. Assessing Explainable AI (XAI) approaches is critical in searching for transparent and interpretable artificial intelligence (AI) models. Explainable AI (X
34

Bacevicius, Mantas, Agne Paulauskaite-Taraseviciene, Gintare Zokaityte, Lukas Kersys, and Agne Moleikaityte. "Comparative Analysis of Perturbation Techniques in LIME for Intrusion Detection Enhancement." Machine Learning and Knowledge Extraction 7, no. 1 (2025): 21. https://doi.org/10.3390/make7010021.

Abstract:
The growing sophistication of cyber threats necessitates robust and interpretable intrusion detection systems (IDS) to safeguard network security. While machine learning models such as Decision Tree (DT), Random Forest (RF), k-Nearest Neighbors (K-NN), and XGBoost demonstrate high effectiveness in detecting malicious activities, their interpretability decreases as their complexity and accuracy increase, posing challenges for critical cybersecurity applications. Local Interpretable Model-agnostic Explanations (LIME) is widely used to address this limitation; however, its reliance on normal dist
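
The perturbation step this paper analyzes sits at the heart of LIME. A bare-bones, textbook-style reconstruction of that core (Gaussian sampling around the instance, an exponential locality kernel, a weighted ridge surrogate) is shown below; it is a generic illustration, not the authors' modified sampling scheme:

import numpy as np
from sklearn.linear_model import Ridge

def local_surrogate(predict_fn, x, num_samples=2000, kernel_width=0.75, sigma=1.0):
    """Gaussian perturbation around x, exponential kernel weighting,
    weighted linear surrogate fitted to the black-box outputs."""
    rng = np.random.default_rng(0)
    Z = x + rng.normal(scale=sigma, size=(num_samples, x.shape[0]))
    preds = predict_fn(Z)                                   # query the black box
    dists = np.linalg.norm(Z - x, axis=1)
    weights = np.exp(-(dists ** 2) / (kernel_width ** 2))   # closer samples count more
    surrogate = Ridge(alpha=1.0).fit(Z, preds, sample_weight=weights)
    return surrogate.coef_                                  # local feature attributions

# Toy black box (hypothetical) to exercise the sketch:
black_box = lambda Z: Z[:, 0] ** 2 + 3.0 * Z[:, 1]
print(local_surrogate(black_box, np.array([1.0, 2.0, 0.0])))
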
35

Researcher. "Building User Trust in Conversational AI: The Role of Explainable AI in Chatbot Transparency." International Journal of Computer Engineering and Technology (IJCET) 15, no. 5 (2024): 406–13. https://doi.org/10.5281/zenodo.13833413.

Abstract:
This article explores the application of Explainable AI (XAI) techniques to enhance transparency and trust in chatbot decision-making processes. As chatbots become increasingly sophisticated, understanding their internal reasoning remains a significant challenge. We investigate the implementation of three key XAI methods—LIME (Local Interpretable Model-agnostic Explanations), SHAP (SHapley Additive exPlanations), and counterfactual explanations—in the context of modern chatbot systems. Through a comprehensive analysis involving multiple chatbot models and user studies, we demonstrat
36

Igoche, Bern Igoche, Olumuyiwa Matthew, Peter Bednar, and Alexander Gegov. "Integrating Structural Causal Model Ontologies with LIME for Fair Machine Learning Explanations in Educational Admissions." Journal of Computing Theories and Applications 2, no. 1 (2024): 65–85. http://dx.doi.org/10.62411/jcta.10501.

Abstract:
This study employed knowledge discovery in databases (KDD) to extract and discover knowledge from the Benue State Polytechnic (Benpoly) admission database and used a structural causal model (SCM) ontological framework to represent the admission process in the Nigerian polytechnic education system. The SCM ontology identified important causal relations in features needed to model the admission process and was validated using the conditional independence test (CIT) criteria. The SCM ontology was further employed to identify and constrain input features causing bias in the local interpretable mod
37

Vardhan, B. Ajay, K. P. Rushil Phanindra, K. Sumith, T. Sirisha, and V. Kakulapati. "Explainable AI for Livestock Disease Detection: An Integrated ML/DL Framework." Asian Journal of Research in Computer Science 18, no. 6 (2025): 147–53. https://doi.org/10.9734/ajrcos/2025/v18i6686.

Abstract:
Livestock diseases lead to significant economic loss and threaten food security. With the increasing demand for dairy and meat products, maintaining animal health has become a critical global priority. Although farmers and agricultural workers often lack deep technical understanding of data processing, modern AI and ML technologies are now central to early disease detection in livestock. Interpretable Machine Learning (IML) and Explainable AI (XAI) provide opportunities to build trust by making model predictions transparent and understandable. This article explores XAI and IML approaches for h
38

Knapič, Samanta, Avleen Malhi, Rohit Saluja, and Kary Främling. "Explainable Artificial Intelligence for Human Decision Support System in the Medical Domain." Machine Learning and Knowledge Extraction 3, no. 3 (2021): 740–70. http://dx.doi.org/10.3390/make3030037.

Abstract:
In this paper, we present the potential of Explainable Artificial Intelligence methods for decision support in medical image analysis scenarios. Using three types of explainable methods applied to the same medical image data set, we aimed to improve the comprehensibility of the decisions provided by the Convolutional Neural Network (CNN). In vivo gastral images obtained by a video capsule endoscopy (VCE) were the subject of visual explanations, with the goal of increasing health professionals’ trust in black-box predictions. We implemented two post hoc interpretable machine learning methods, c
39

Ramakrishna, Jeevakala Siva, Sonagiri China Venkateswarlu, Kommu Naveen Kumar, and Parikipandla Shreya. "Development of explainable machine intelligence models for heart sound abnormality detection." Indonesian Journal of Electrical Engineering and Computer Science 36, no. 2 (2024): 846. http://dx.doi.org/10.11591/ijeecs.v36.i2.pp846-853.

Abstract:
Developing explainable machine intelligence (XAI) models for heart sound abnormality detection is a crucial area of research aimed at improving the interpretability and transparency of machine learning algorithms in medical diagnostics. In this study, we propose a framework for building XAI models that can effectively detect abnormalities in heart sounds while providing interpretable explanations for their predictions. We leverage techniques such as SHapley additive exPlanations (SHAP) and local interpretable model-agnostic explanations (LIME) to generate explanations for model predictions, en
40

Jeevakala, Siva Ramakrishna, Sonagiri China Venkateswarlu, Kommu Naveen Kumar, and Parikipandla Shreya. "Development of explainable machine intelligence models for heart sound abnormality detection." Indonesian Journal of Electrical Engineering and Computer Science 36, no. 2 (2024): 846–53. https://doi.org/10.11591/ijeecs.v36.i2.pp846-853.

Abstract:
Developing explainable machine intelligence (XAI) models for heart sound abnormality detection is a crucial area of research aimed at improving the interpretability and transparency of machine learning algorithms in medical diagnostics. In this study, we propose a framework for building XAI models that can effectively detect abnormalities in heart sounds while providing interpretable explanations for their predictions. We leverage techniques such as SHapley additive exPlanations (SHAP) and local interpretable model-agnostic explanations (LIME) to generate explanations for model predictions, en
41

Venkatsubramaniam, Bhaskaran, and Pallav Kumar Baruah. "Comparative Study of XAI Using Formal Concept Lattice and LIME." ICTACT Journal on Soft Computing 13, no. 1 (2022): 2782–91. http://dx.doi.org/10.21917/ijsc.2022.0396.

Abstract:
Local Interpretable Model Agnostic Explanation (LIME) is a technique to explain a black box machine learning model using a surrogate model approach. While this technique is very popular, inherent to its approach, explanations are generated from the surrogate model and not directly from the black box model. In sensitive domains like healthcare, this need not be acceptable as trustworthy. These techniques also assume that features are independent and provide feature weights of the surrogate linear model as feature importance. In real life datasets, features may be dependent and a combination of
42

Hermosilla, Pamela, Mauricio Díaz, Sebastián Berríos, and Héctor Allende-Cid. "Use of Explainable Artificial Intelligence for Analyzing and Explaining Intrusion Detection Systems." Computers 14, no. 5 (2025): 160. https://doi.org/10.3390/computers14050160.

Abstract:
The increase in malicious cyber activities has generated the need to produce effective tools for the field of digital forensics and incident response. Artificial intelligence (AI) and its fields, specifically machine learning (ML) and deep learning (DL), have shown great potential to aid the task of processing and analyzing large amounts of information. However, models generated by DL are often considered “black boxes”, a name derived due to the difficulties faced by users when trying to understand the decision-making process for obtaining results. This research seeks to address the challenges
43

Anil, Bellary Chiterki, Jayasimha Sondekoppa Rajkumar, T. L. Divya, Samitha Khaiyum, Rakshitha Kiran P., and Balakrishnan Ramadoss. "A Radiomics-based Framework for Liver Cancer Analysis using Explainable Artificial Intelligence (XAI) Methods." Engineering, Technology & Applied Science Research 15, no. 3 (2025): 24098–103. https://doi.org/10.48084/etasr.10377.

Abstract:
This study presents a radiomics-based framework for liver cancer analysis, integrating imaging techniques with Explainable Artificial Intelligence (XAI) methods. The workflow involves collecting imaging data, extracting radiomics features to quantify tumor characteristics, and training Machine Learning (ML) models with Shapley Additive Explanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) to enhance interpretability. Its results demonstrate improved predictive performance, with significant imaging biomarkers identified for disease progression and classification. The in
44

Hong, Sin Yi, and Lih Poh Lin. "Skin Lesion Classification: A Deep Learning Approach with Local Interpretable Model-Agnostic Explanations (LIME) for Explainable Artificial Intelligence (XAI)." JOIV: International Journal on Informatics Visualization 8, no. 3-2 (2024): 1536. https://doi.org/10.62527/joiv.8.3-2.3022.

Abstract:
The classification of skin cancer is crucial as the chance of survival increases significantly with timely and accurate treatment. Convolution Neural Networks (CNNs) have proven effective in classifying skin cancer. However, CNN models are often regarded as "black boxes”, due to the lack of transparency in the decision-making. Therefore, explainable artificial intelligence (XAI) has emerged as a tool for understanding AI decisions. This study employed a CNN model, VGG16, to classify five skin lesion classes. The hyperparameters were adjusted to optimize its classification performance. The best
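
For image models such as the VGG16 used here, the lime package exposes a dedicated image explainer that perturbs superpixels rather than tabular features. The sketch below uses a stock ImageNet VGG16 and a random placeholder image as stand-ins for the paper's fine-tuned skin-lesion classifier and dermoscopic inputs; it only illustrates the API:

import numpy as np
from lime import lime_image
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input

model = VGG16(weights="imagenet")
image = np.random.randint(0, 255, size=(224, 224, 3)).astype(np.uint8)  # placeholder image

def predict_fn(images):
    # lime passes batches of perturbed RGB images; adapt them to VGG16's input.
    return model.predict(preprocess_input(np.array(images, dtype=np.float32)))

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(image, predict_fn, top_labels=3,
                                         hide_color=0, num_samples=300)
# Superpixels that most support the top predicted class.
img, mask = explanation.get_image_and_mask(explanation.top_labels[0],
                                           positive_only=True, num_features=5)
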
45

Ali, Ali Mohammed Omar. "Explainability in AI: Interpretable Models for Data Science." International Journal for Research in Applied Science and Engineering Technology 13, no. 2 (2025): 766–71. https://doi.org/10.22214/ijraset.2025.66968.

Abstract:
As artificial intelligence (AI) continues to drive advancements across various domains, the need for explainability in AI models has become increasingly critical. Many state-of-the-art machine learning models, particularly deep learning architectures, operate as "black boxes," making their decision-making processes difficult to interpret. Explainable AI (XAI) aims to enhance model transparency, ensuring that AI-driven decisions are understandable, trustworthy, and aligned with ethical and regulatory standards. This paper explores different approaches to AI interpretability, including intrinsic
46

Zhu, Xiyue, Yu Cheng, Jiafeng He, and Juan Guo. "Adaptive Mask-Based Interpretable Convolutional Neural Network (AMI-CNN) for Modulation Format Identification." Applied Sciences 14, no. 14 (2024): 6302. http://dx.doi.org/10.3390/app14146302.

Abstract:
Recently, various deep learning methods have been applied to Modulation Format Identification (MFI). The interpretability of deep learning models is important. However, this interpretability is challenged due to the black-box nature of deep learning. To deal with this difficulty, we propose an Adaptive Mask-Based Interpretable Convolutional Neural Network (AMI-CNN) that utilizes a mask structure for feature selection during neural network training and feeds the selected features into the classifier for decision making. During training, the masks are updated dynamically with parameters to optim
47

Tasioulis, Thomas, Evangelos Bagkis, Theodosios Kassandros, and Kostas Karatzas. "The Quest for the Best Explanation: Comparing Models and XAI Methods in Air Quality Modeling Tasks." Applied Sciences 15, no. 13 (2025): 7390. https://doi.org/10.3390/app15137390.

Abstract:
Air quality (AQ) modeling is at the forefront of estimating pollution levels in areas where the spatial representativity is low. Large metropolitan areas in Asia such as Beijing face significant pollution issues due to rapid industrialization and urbanization. AQ nowcasting, especially in dense urban centers like Beijing, is crucial for public health and safety. One of the most popular and accurate modeling methodologies relies on black-box models that fail to explain the phenomena in an interpretable way. This study investigates the performance and interpretability of Explainable AI (XAI) app
48

Sazib, Ahmmod Musa, Joynul Arefin, Sabab Al Farabi, Fozlur Rayhan, Md Asif Karim, and Shamima Akhter. "Advancing Renewable Energy Systems through Explainable Artificial Intelligence: A Comprehensive Review and Interdisciplinary Framework." Journal of Computer Science and Technology Studies 7, no. 2 (2025): 56–70. https://doi.org/10.32996/jcsts.2025.7.2.5.

Abstract:
Explainable Artificial Intelligence (XAI) plays a pivotal role in advancing transparency, reliability, and informed decision-making in renewable energy systems. This review provides a comprehensive analysis of state-of-the-art XAI methodologies—including Shapley Additive Explanations (SHAP), Local Interpretable Model-agnostic Explanations (LIME), Deep Learning Important FeaTures (DeepLIFT), and rule-based models—by critically evaluating their applications, advantages, and limitations within renewable energy research. Despite notable progress, significant challenges persist, including computati
49

R. S. Deshpande, P. V. Ambatkar. "Interpretable Deep Learning Models: Enhancing Transparency and Trustworthiness in Explainable AI." Proceeding International Conference on Science and Engineering 11, no. 1 (2023): 1352–63. http://dx.doi.org/10.52783/cienceng.v11i1.286.

Abstract:
Explainable AI (XAI) aims to address the opacity of deep learning models, which can limit their adoption in critical decision-making applications. This paper presents a novel framework that integrates interpretable components and visualization techniques to enhance the transparency and trustworthiness of deep learning models. We propose a hybrid explanation method combining saliency maps, feature attribution, and local interpretable model-agnostic explanations (LIME) to provide comprehensive insights into the model's decision-making process.
 Our experiments with convolutional neural netw
50

Chowdhury, NHM Hassan Imam, Md Ezharul Islam, Md Sunjid Hasan, and Abdullah Al Mehedi. "AI-Powered Behavior-Based Malware Detection Using Advanced Temporal and Process-State Features: A Robust Explainable Framework." Cuestiones de Fisioterapia 54, no. 3 (2025): 3883–902. https://doi.org/10.48047/z6pyk694.

Abstract:
Malware detection remains a critical challenge in modern cybersecurity due to the rapid evolution of attack techniques and the proliferation of adversarial threats. This study introduces a robust, explainable framework for malware detection that leverages advanced temporal and process-state features. Using VirusTotal, a large dataset of practical metrics, including system, memory, and process metrics, was created. Gated Recurrent Units (GRU) and Long Short-Term Memory (LSTM) architectures were implemented and frequently tested to model this sequential behavioral data. GRU outperformed the others reg