Academic literature on the topic 'LIME (Local Interpretable Model-agnostic Explanations)'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'LIME (Local Interpretable Model-agnostic Explanations).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "LIME (Local Interpretable Model-agnostic Explanations)"

1

Zafar, Muhammad Rehman, and Naimul Khan. "Deterministic Local Interpretable Model-Agnostic Explanations for Stable Explainability." Machine Learning and Knowledge Extraction 3, no. 3 (2021): 525–41. http://dx.doi.org/10.3390/make3030027.

Abstract:
Local Interpretable Model-Agnostic Explanations (LIME) is a popular technique used to increase the interpretability and explainability of black box Machine Learning (ML) algorithms. LIME typically creates an explanation for a single prediction by any ML model by learning a simpler interpretable model (e.g., linear classifier) around the prediction through generating simulated data around the instance by random perturbation, and obtaining feature importance through applying some form of feature selection. While LIME and similar local algorithms have gained popularity due to their simplicity, th
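The workflow this abstract describes — perturb an instance, query the black-box model on the perturbed samples, fit a weighted interpretable surrogate, and read feature importances off its coefficients — is what the reference `lime` package automates. Below is a minimal sketch on an illustrative scikit-learn model and dataset (an assumption for demonstration, not the setup of the cited paper).

```python
# Minimal illustrative sketch: explain one prediction of a stand-in classifier
# with the reference `lime` package (model and data are placeholders).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# LIME perturbs the instance, queries the black box on the perturbed samples,
# and fits a locally weighted linear model; its coefficients rank the features.
explanation = explainer.explain_instance(data.data[0], model.predict_proba, num_features=5)
print(explanation.as_list())
```

Because the perturbation is random, repeated calls to explain_instance can return different feature rankings for the same instance — the instability that deterministic variants such as the DLIME approach above target.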
2

Qian, Junyan, Tong Wen, Ming Ling, Xiaofu Du, and Hao Ding. "Pixel-Based Clustering for Local Interpretable Model-Agnostic Explanations." Journal of Artificial Intelligence and Soft Computing Research 15, no. 3 (2025): 257–77. https://doi.org/10.2478/jaiscr-2025-0013.

Abstract:
To enhance the interpretability of black-box machine learning, model-agnostic explanations have become a focal point of interest. This paper introduces Pixel-based Local Interpretable Model-agnostic Explanations (PLIME), a method that generates perturbation samples via pixel clustering to derive raw explanations. Through iterative refinement, it reduces the number of features, culminating in an optimal feature set that best predicts the model’s score. PLIME increases the relevance of features associated with correct predictions in the explanations. A comprehensive evaluation of PLIME
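For images, LIME perturbs groups of pixels rather than individual features, and methods such as PLIME change how those groups are formed. The sketch below shows only the standard `lime_image` interface with a user-supplied segmentation function standing in for that grouping step; the segmentation, classifier, and image are illustrative assumptions and do not reproduce the clustering proposed in the cited paper.

```python
# Hedged sketch of image LIME with a custom segmentation function; the
# placeholder predict_fn and random image stand in for a real classifier.
import numpy as np
from skimage.segmentation import slic
from lime import lime_image

def segmentation_fn(image):
    # Group pixels into regions; a PLIME-style method would replace this
    # superpixel step with its own pixel clustering.
    return slic(image, n_segments=50, compactness=10)

def predict_fn(images):
    # Placeholder black box: substitute your model's batch predict_proba here.
    rng = np.random.default_rng(0)
    return rng.dirichlet(np.ones(2), size=len(images))

image = np.random.rand(64, 64, 3)
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image, predict_fn, top_labels=1, num_samples=200, segmentation_fn=segmentation_fn
)
# Recover the regions that most support the top predicted class.
img, mask = explanation.get_image_and_mask(explanation.top_labels[0], num_features=5)
```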
3

M N., Sowmiya, Jaya Sri S., Deepshika S., and Hanushya Devi G. "Credit Risk Analysis using Explainable Artificial Intelligence." Journal of Soft Computing Paradigm 6, no. 3 (2024): 272–83. http://dx.doi.org/10.36548/jscp.2024.3.004.

Abstract:
The proposed research focuses on enhancing the interpretability of risk evaluation in credit approvals within the banking sector. This work employs LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations) to provide explanations for individual predictions: LIME approximates the model locally with an interpretable model, while SHAP offers insights into the contribution of each feature to the prediction through both global and local explanations. The research integrates gradient boosting algorithms (XGBoost, LightGBM) and Random Forest with these Explainabl
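As a rough sketch of the combined LIME + SHAP workflow described above, the snippet below trains an XGBoost classifier on synthetic stand-in data, produces a local LIME explanation for one instance, and computes SHAP attributions that can also be aggregated globally. Data, feature names, and model settings are illustrative assumptions, not those of the cited study.

```python
# Rough sketch: local LIME explanation plus SHAP attributions for an XGBoost
# model trained on synthetic placeholder data.
import shap
import xgboost as xgb
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]
model = xgb.XGBClassifier(n_estimators=100, max_depth=4).fit(X, y)

# Local surrogate explanation for a single (applicant-like) instance.
lime_explainer = LimeTabularExplainer(X, feature_names=feature_names, mode="classification")
lime_exp = lime_explainer.explain_instance(X[0], model.predict_proba, num_features=5)
print(lime_exp.as_list())

# SHAP values give per-feature contributions for each prediction; averaging
# their magnitudes over the dataset yields a global importance view.
shap_values = shap.TreeExplainer(model).shap_values(X)
print(shap_values[0])
```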
4

Jiang, Enshuo. "UniformLIME: A Uniformly Perturbed Local Interpretable Model-Agnostic Explanations Approach for Aerodynamics." Journal of Physics: Conference Series 2171, no. 1 (2022): 012025. http://dx.doi.org/10.1088/1742-6596/2171/1/012025.

Abstract:
Machine learning and deep learning are widely used in the field of aerodynamics, but most models are often seen as black boxes due to a lack of interpretability. Local Interpretable Model-agnostic Explanations (LIME) is a popular method that uses a local surrogate model to explain a single instance of machine learning. Its main disadvantages are the instability of the explanations and low local fidelity. In this paper, we propose an original modification to LIME by employing a new perturbed sample generation method for aerodynamic tabular data in a regression model, which makes the differ
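To make the perturbed-sample-generation step concrete, here is a from-scratch sketch of LIME for a regression model: draw perturbations around the instance, weight them by proximity, and fit a weighted linear surrogate whose coefficients serve as the explanation. The Gaussian sampling shown is the step a modification like the one above would swap out; the toy black box and data are purely illustrative assumptions.

```python
# From-scratch sketch of LIME for regression, assuming a toy black-box function.
import numpy as np
from sklearn.linear_model import Ridge

def lime_regression(black_box, x, scale, num_samples=5000, kernel_width=0.75):
    rng = np.random.default_rng(0)
    # 1. Perturb the instance (standard LIME draws Gaussian noise per feature).
    Z = x + rng.normal(0.0, scale, size=(num_samples, x.shape[0]))
    # 2. Query the black box on the perturbed samples.
    y = black_box(Z)
    # 3. Weight samples by proximity to the original instance.
    d = np.linalg.norm((Z - x) / scale, axis=1)
    w = np.exp(-(d ** 2) / kernel_width ** 2)
    # 4. Fit a weighted linear surrogate; its coefficients are the explanation.
    return Ridge(alpha=1.0).fit(Z, y, sample_weight=w).coef_

# Toy regression black box and instance, purely for illustration.
black_box = lambda Z: np.sin(Z[:, 0]) + 0.5 * Z[:, 1] ** 2
x = np.array([0.3, -1.2, 0.8])
print(lime_regression(black_box, x, scale=np.array([1.0, 1.0, 1.0])))
```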
5

Islam, Md Rabiul, Tapan Kumar Godder, Ahsan Ul-Ambia, et al. "Ensemble model-based arrhythmia classification with local interpretable model-agnostic explanations." IAES International Journal of Artificial Intelligence (IJ-AI) 14, no. 3 (2025): 2012. https://doi.org/10.11591/ijai.v14.i3.pp2012-2025.

Abstract:
Arrhythmia can lead to heart failure, stroke, and sudden cardiac arrest. Prompt diagnosis of arrhythmia is crucial for appropriate treatment. This analysis utilized four databases. We utilized seven machine learning (ML) algorithms in our work. These algorithms include logistic regression (LR), decision tree (DT), extreme gradient boosting (XGB), K-nearest neighbors (KNN), naïve Bayes (NB), multilayer perceptron (MLP), AdaBoost, and a bagging ensemble of these approaches. In addition, we conducted an analysis on a stacking ensemble consisting of XGB and bagging XGB. This study examines various
6

Bhatnagar, Shweta, and Rashmi Agrawal. "Understanding explainable artificial intelligence techniques: a comparative analysis for practical application." Bulletin of Electrical Engineering and Informatics 13, no. 6 (2024): 4451–55. http://dx.doi.org/10.11591/eei.v13i6.8378.

Abstract:
Explainable artificial intelligence (XAI) uses artificial intelligence (AI) tools and techniques to build interpretability in black-box algorithms. XAI methods are classified based on their purpose (pre-model, in-model, and post-model), scope (local or global), and usability (model-agnostic and model-specific). XAI methods and techniques were summarized in this paper with real-life examples of XAI applications. Local interpretable model-agnostic explanations (LIME) and Shapley additive explanations (SHAP) methods were applied to the moral dataset to compare the performance outcomes of these tw
7

Hutke, Prof Ankush, Kiran Sahu, Ameet Mishra, Aniruddha Sawant, and Ruchitha Gowda. "Predict XAI." International Research Journal of Innovations in Engineering and Technology 09, no. 04 (2025): 172–76. https://doi.org/10.47001/irjiet/2025.904026.

Abstract:
Stroke predictors using Explainable Artificial Intelligence (XAI) aim to provide accurate and interpretable stroke risk predictions. This research integrates machine learning models such as Decision Trees, Random Forest, Logistic Regression, and Support Vector Machines, leveraging ensemble learning techniques like stacking and voting to enhance predictive accuracy. The system employs XAI techniques such as SHAP (SHapley Additive Explanations) and LIME (Local Interpretable Model-Agnostic Explanations) to ensure model transparency and interpretability. This paper presents the methodology, implem
8

Abdullah, Talal A. A., Mohd Soperi Mohd Zahid, Waleed Ali, and Shahab Ul Hassan. "B-LIME: An Improvement of LIME for Interpretable Deep Learning Classification of Cardiac Arrhythmia from ECG Signals." Processes 11, no. 2 (2023): 595. http://dx.doi.org/10.3390/pr11020595.

Abstract:
Deep Learning (DL) has gained enormous popularity recently; however, it is an opaque technique that is regarded as a black box. To ensure the validity of the model’s prediction, it is necessary to explain its authenticity. A well-known locally interpretable model-agnostic explanation method (LIME) uses surrogate techniques to simulate reasonable precision and provide explanations for a given ML model. However, LIME explanations are limited to tabular, textual, and image data. They cannot be provided for signal data features that are temporally interdependent. Moreover, LIME suffers from critic
9

Nisha, Mrs M. P. "Interpretable Deep Neural Networks using SHAP and LIME for Decision Making in Smart Home Automation." International Scientific Journal of Engineering and Management 04, no. 05 (2025): 1–7. https://doi.org/10.55041/isjem03409.

Abstract:
Deep Neural Networks (DNNs) are increasingly being used in smart home automation for intelligent decision-making based on IoT sensor data. This project aims to develop an interpretable deep neural network model for decision-making in smart home automation using SHAP (Shapley Additive Explanations) and LIME (Local Interpretable Model-agnostic Explanations). The focus is on enhancing transparency in AI-driven automation systems by providing clear explanations for model predictions. The approach involves collecting IoT sensor data from smart home environments, training a deep learning
10

Admassu, Tsehay. "Evaluation of Local Interpretable Model-Agnostic Explanation and Shapley Additive Explanation for Chronic Heart Disease Detection." Proceedings of Engineering and Technology Innovation 23 (January 1, 2023): 48–59. http://dx.doi.org/10.46604/peti.2023.10101.

Abstract:
This study aims to investigate the effectiveness of local interpretable model-agnostic explanation (LIME) and Shapley additive explanation (SHAP) approaches for chronic heart disease detection. The efficiency of LIME and SHAP are evaluated by analyzing the diagnostic results of the XGBoost model and the stability and quality of counterfactual explanations. Firstly, 1025 heart disease samples are collected from the University of California Irvine. Then, the performance of LIME and SHAP is compared by using the XGBoost model with various measures, such as consistency and proximity. Finally, Pyth
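One simple way to probe the kind of consistency discussed above is to explain the same instance several times and measure how much the top-ranked features agree across runs. The sketch below does this with an XGBoost model on synthetic stand-in data and a Jaccard overlap; it is an assumed, illustrative proxy rather than the consistency and proximity measures used in the cited study.

```python
# Illustrative stability probe: repeated LIME explanations of one instance.
import numpy as np
from sklearn.datasets import make_classification
from xgboost import XGBClassifier
from lime.lime_tabular import LimeTabularExplainer

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
model = XGBClassifier(n_estimators=100).fit(X, y)
explainer = LimeTabularExplainer(
    X, feature_names=[f"f{i}" for i in range(X.shape[1])], mode="classification"
)

def top_features(instance, k=5):
    exp = explainer.explain_instance(instance, model.predict_proba, num_features=k)
    return {name for name, _ in exp.as_list()}

# Pairwise Jaccard overlap between five repeated explanations of the same
# instance; values well below 1.0 indicate unstable feature rankings.
runs = [top_features(X[0]) for _ in range(5)]
pairs = [(a, b) for i, a in enumerate(runs) for b in runs[i + 1:]]
print(np.mean([len(a & b) / len(a | b) for a, b in pairs]))
```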

Dissertations / Theses on the topic "LIME (Local Interpretable Model-agnostic Explanations)"

1

Fjellström, Lisa. "The Contribution of Visual Explanations in Forensic Investigations of Deepfake Video : An Evaluation." Thesis, Umeå universitet, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-184671.

Abstract:
Videos manipulated by machine learning have rapidly increased online in the past years. So-called deepfakes can depict people who never participated in a video recording by transposing their faces onto others in it. This raises concerns about the authenticity of media, which demands higher-performing detection methods in forensics. The introduction of AI detectors has been of interest, but is held back today by their lack of interpretability. The objective of this thesis was therefore to examine what the explainable AI method local interpretable model-agnostic explanations (LIME) could contribute
2

Norrie, Christian. "Explainable AI techniques for sepsis diagnosis : Evaluating LIME and SHAP through a user study." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-19845.

Abstract:
Artificial intelligence has had a large impact on many industries and transformed some domains quite radically. There is tremendous potential in applying AI to the field of medical diagnostics. A major issue with applying these techniques to some domains is an inability for AI models to provide an explanation or justification for their predictions. This creates a problem wherein a user may not trust an AI prediction, or there are legal requirements for justifying decisions that are not met. This thesis overviews how two explainable AI techniques (Shapley Additive Explanations and Local Interpretable
3

Malmberg, Jacob, Öhman Marcus Nystad, and Alexandra Hotti. "Implementing Machine Learning in the Credit Process of a Learning Organization While Maintaining Transparency Using LIME." Thesis, KTH, Industriell ekonomi och organisation (Inst.), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-232579.

Abstract:
To determine whether a credit limit for a corporate client should be changed, a financial institution writes a PM containing text and financial data that is then assessed by a credit committee, which decides whether to increase the limit or not. To make this process more efficient, machine learning algorithms were used to classify the credit PMs instead of a committee. Since most machine learning algorithms are black boxes, the LIME framework was used to find the most important features driving the classification. The results of this study show that credit memos can be classified with high accuracy
4

Stanzione, Vincenzo Maria. "Developing a new approach for machine learning explainability combining local and global model-agnostic approaches." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amslaurea.unibo.it/25480/.

Abstract:
The last couple of decades have seen a new flourishing season for Artificial Intelligence, in particular for Machine Learning (ML). This is reflected in the great number of fields that are employing ML solutions to overcome a broad spectrum of problems. However, most of the recently employed ML models exhibit black-box behavior. This means that, given a certain input, we are not able to understand why one of these models produced a certain output or made a certain decision. Most of the time, we are not interested in knowing what and how the model is thinking, but if we think of a mo
5

Saluja, Rohit. "Interpreting Multivariate Time Series for an Organization Health Platform." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-289465.

Abstract:
Machine learning-based systems are rapidly becoming popular because it has been realized that machines are more efficient and effective than humans at performing certain tasks. Although machine learning algorithms are extremely popular, they are also very literal and undeviating. This has led to a huge research surge in the field of interpretability in machine learning to ensure that machine learning models are reliable, fair, and can be held liable for their decision-making process. Moreover, in most real-world problems just making predictions using machine learning algorithms only solves the

Book chapters on the topic "LIME (Local Interpretable Model-agnostic Explanations)"

1

Sivaprasad, Adarsa, Ehud Reiter, Nava Tintarev, and Nir Oren. "Evaluation of Human-Understandability of Global Model Explanations Using Decision Tree." In Communications in Computer and Information Science. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-50396-2_3.

Abstract:
In explainable artificial intelligence (XAI) research, the predominant focus has been on interpreting models for experts and practitioners. Model agnostic and local explanation approaches are deemed interpretable and sufficient in many applications. However, in domains like healthcare, where end users are patients without AI or domain expertise, there is an urgent need for model explanations that are more comprehensible and instil trust in the model’s operations. We hypothesise that generating model explanations that are narrative, patient-specific and global (holistic of the model) wo
2

Recio-García, Juan A., Belén Díaz-Agudo, and Victor Pino-Castilla. "CBR-LIME: A Case-Based Reasoning Approach to Provide Specific Local Interpretable Model-Agnostic Explanations." In Case-Based Reasoning Research and Development. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58342-2_12.

3

Ohara, Genji, Keigo Kimura, and Mineichi Kudo. "R-LIME: Rectangular Constraints and Optimization for Local Interpretable Model-agnostic Explanation Methods." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2024. https://doi.org/10.1007/978-3-031-78354-8_6.

4

Karna, Siwani, Poluru Reddy Jahanve, S. Yadukul, Sreebha Bhaskaran, Susmitha Vekkot, and Shinu M. Rajagopal. "DDoS Detection in SDN Environments: Leveraging Machine Learning Algorithms with Local Interpretable Model-Agnostic Explanations (LIME) for Enhanced Security." In Smart Innovation, Systems and Technologies. Springer Nature Singapore, 2025. https://doi.org/10.1007/978-981-97-7717-4_38.

5

Holm, Sarah, and Luis Macedo. "The Accuracy and Faithfullness of AL-DLIME - Active Learning-Based Deterministic Local Interpretable Model-Agnostic Explanations: A Comparison with LIME and DLIME in Medicine." In Communications in Computer and Information Science. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-44064-9_31.

6

Cinquini, Martina, and Riccardo Guidotti. "Causality-Aware Local Interpretable Model-Agnostic Explanations." In Communications in Computer and Information Science. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-63800-8_6.

7

Davagdorj, Khishigsuren, Meijing Li, and Keun Ho Ryu. "Local Interpretable Model-Agnostic Explanations of Predictive Models for Hypertension." In Advances in Intelligent Information Hiding and Multimedia Signal Processing. Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-33-6757-9_53.

8

Aelgani, Vivekanand, Suneet K. Gupta, and V. A. Narayana. "Local Agnostic Interpretable Model for Diabetes Prediction with Explanations Using XAI." In Lecture Notes in Networks and Systems. Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-19-8563-8_40.

9

Thanh-Hai, Nguyen, Toan Bao Tran, An Cong Tran, and Nguyen Thai-Nghe. "Feature Selection Using Local Interpretable Model-Agnostic Explanations on Metagenomic Data." In Future Data and Security Engineering. Big Data, Security and Privacy, Smart City and Industry 4.0 Applications. Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-33-4370-2_24.

10

Graziani, Mara, Iam Palatnik de Sousa, Marley M. B. R. Vellasco, Eduardo Costa da Silva, Henning Müller, and Vincent Andrearczyk. "Sharpening Local Interpretable Model-Agnostic Explanations for Histopathology: Improved Understandability and Reliability." In Medical Image Computing and Computer Assisted Intervention – MICCAI 2021. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-87199-4_51.


Conference papers on the topic "LIME (Local Interpretable Model-agnostic Explanations)"

1

Fan, Qingwu, and Jiayuan Li. "M-LIME: a marginalized local interpretable model-agnostic explanations." In 2024 6th International Conference on Robotics, Intelligent Control and Artificial Intelligence (RICAI). IEEE, 2024. https://doi.org/10.1109/ricai64321.2024.10911256.

2

Acampora, Giovanni, and Autilia Vitiello. "Local Interpretable Model-agnostic Explanations for Crime Prediction." In 2025 IEEE Symposium on Trustworthy, Explainable and Responsible Computational Intelligence (CITREx Companion). IEEE, 2025. https://doi.org/10.1109/citrexcompanion65208.2025.10981498.

3

Hjuler, Maja J., Line H. Clemmensen, and Sneha Das. "Exploring Local Interpretable Model-Agnostic Explanations for Speech Emotion Recognition with Distribution-Shift." In ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2025. https://doi.org/10.1109/icassp49660.2025.10889825.

4

Luqman, Muhammad, and Muhammad Arif. "Interpretable Machine Learning Modeling for Surface Chloride Concentration Prediction in Marine Concrete." In 14th International Civil Engineering Conference. Trans Tech Publications Ltd, 2025. https://doi.org/10.4028/p-r9up7x.

Abstract:
The chloride concentration (Cs) on the concrete surface is an indispensable metric for designing resilience and estimating the lifespan of concrete structures in aquatic settings. Consequently, due to chloride action, many reinforced concrete constructions cannot reach their intended or planned life span and experience early degradation. This study utilizes three independent machine learning techniques, Extreme Gradient Boosting (XGBoost), Multi Expression Programming (MEP), and Decision Tree (DT), to forecast the concrete’s surface chloride concentration (Cs). To achieve this objective, a com
5

Rahnama, Amir Hossein Akhavan, Laura Galera Alfaro, Zhendong Wang, and Maria Movin. "Local Interpretable Model-Agnostic Explanations for Neural Ranking Models." In 14th Scandinavian Conference on Artificial Intelligence SCAI 2024, June 10-11, 2024, Jönköping, Sweden. Linköping University Electronic Press, 2024. http://dx.doi.org/10.3384/ecp208017.

Abstract:
Neural Ranking Models have shown state-of-the-art performance in Learning-To-Rank (LTR) tasks. However, they are considered black-box models. Understanding the logic behind the predictions of such black-box models is paramount for their adaptability in real-world and high-stakes decision-making domains. Local explanation techniques can help us understand the importance of features in the dataset relative to the predicted output of these black-box models. This study investigates new adaptations of Local Interpretable Model-Agnostic Explanation (LIME) explanation for explaining Neural ranking
6

Shi, Peichang, Aryya Gangopadhyay, and Ping Yu. "LIVE: A Local Interpretable model-agnostic Visualizations and Explanations." In 2022 IEEE 10th International Conference on Healthcare Informatics (ICHI). IEEE, 2022. http://dx.doi.org/10.1109/ichi54592.2022.00045.

7

Hamilton, Nicholas, Adam Webb, Matt Wilder, et al. "Enhancing Visualization and Explainability of Computer Vision Models with Local Interpretable Model-Agnostic Explanations (LIME)." In 2022 IEEE Symposium Series on Computational Intelligence (SSCI). IEEE, 2022. http://dx.doi.org/10.1109/ssci51031.2022.10022096.

8

Awadallah, Mohamed Salah, Francisco de Arriba-Pérez, Enrique Costa-Montenegro, Mohamed Kholief, and Nashwa El-Bendary. "Investigation of Local Interpretable Model-Agnostic Explanations (LIME) Framework with Multi-Dialect Arabic Text Sentiment Classification." In 2022 32nd International Conference on Computer Theory and Applications (ICCTA). IEEE, 2022. http://dx.doi.org/10.1109/iccta58027.2022.10206274.

9

Miyaji, Renato Okabayashi, Felipe Valencia Almeida, and Pedro Luiz Pizzigatti Corrêa. "Evaluating the Explainability of Machine Learning Classifiers: A case study of Species Distribution Modeling in the Amazon." In Symposium on Knowledge Discovery, Mining and Learning. Sociedade Brasileira de Computação - SBC, 2023. http://dx.doi.org/10.5753/kdmile.2023.232929.

Abstract:
Machine Learning Models are widely used in Computational Ecology. They can be applied for Species Distribution Modeling, which aims to determine the probability of occurrence of a species, given the environmental conditions. However, for ecologists, these models are considered as "black boxes", since basic Machine Learning knowledge is necessary to interpret them. Thus, in this work four Explainable Artificial Intelligence techniques - Local Interpretable Model-Agnostic Explanation (LIME), SHapley Additive exPlanations (SHAP), BreakDown and Partial Dependence Plots - were evaluated to the Rand
10

Protopapadakis, Giorgois, Asteris Apostolidis, and Anestis I. Kalfas. "Explainable and Interpretable AI-Assisted Remaining Useful Life Estimation for Aeroengines." In ASME Turbo Expo 2022: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2022. http://dx.doi.org/10.1115/gt2022-80777.

Abstract:
Remaining Useful Life (RUL) estimation is directly related with the application of predictive maintenance. When RUL estimation is performed via data-driven methods and Artificial Intelligence algorithms, explainability and interpretability of the model are necessary for trusted predictions. This is especially important when predictive maintenance is applied to gas turbines or aeroengines, as they have high operational and maintenance costs, while their safety standards are strict and highly regulated. The objective of this work is to study the explainability of a Deep Neural Network (