
Journal articles on the topic 'SHAP (SHapley Additive exPlanations)'



Consult the top 50 journal articles for your research on the topic 'SHAP (SHapley Additive exPlanations).'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Utkin, Lev, and Andrei Konstantinov. "Ensembles of Random SHAPs." Algorithms 15, no. 11 (2022): 431. http://dx.doi.org/10.3390/a15110431.

Abstract:
The ensemble-based modifications of the well-known SHapley Additive exPlanations (SHAP) method for the local explanation of a black-box model are proposed. The modifications aim to simplify the SHAP which is computationally expensive when there is a large number of features. The main idea behind the proposed modifications is to approximate the SHAP by an ensemble of SHAPs with a smaller number of features. According to the first modification, called the ER-SHAP, several features are randomly selected many times from the feature set, and the Shapley values for the features are computed by means
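The construction described in this abstract lends itself to a compact illustration. The sketch below averages exact Shapley values computed on many small random feature subsets, in the spirit of ER-SHAP; the dataset, classifier, background values, and subset size are stand-in assumptions, not the authors' implementation.

```python
# Minimal sketch of the ensemble-of-random-SHAPs idea: average exact Shapley
# values computed on many small random feature subsets. The dataset, model,
# and subset size are stand-ins, not the authors' setup.
import itertools
import math

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
baseline = X.mean(axis=0)      # background values used for "absent" features
x = X[0]                       # single instance to explain
n_features = X.shape[1]

def coalition_value(features_on):
    """Model output when only the coalition's features take their values from x."""
    z = baseline.copy()
    idx = list(features_on)
    z[idx] = x[idx]
    return model.predict_proba(z.reshape(1, -1))[0, 1]

def exact_shapley(subset):
    """Brute-force Shapley values of the features in `subset` (small, hence tractable)."""
    m = len(subset)
    phi = {}
    for i in subset:
        others = [j for j in subset if j != i]
        total = 0.0
        for r in range(m):
            for S in itertools.combinations(others, r):
                weight = math.factorial(r) * math.factorial(m - r - 1) / math.factorial(m)
                total += weight * (coalition_value(list(S) + [i]) - coalition_value(S))
        phi[i] = total
    return phi

# Ensemble step: many small random feature subsets, attributions averaged per feature.
n_rounds, subset_size = 100, 4
sums, counts = np.zeros(n_features), np.zeros(n_features)
for _ in range(n_rounds):
    subset = rng.choice(n_features, size=subset_size, replace=False).tolist()
    for i, value in exact_shapley(subset).items():
        sums[i] += value
        counts[i] += 1

approx_shap = np.divide(sums, counts, out=np.zeros_like(sums), where=counts > 0)
print("top features by |attribution|:", np.argsort(-np.abs(approx_shap))[:5])
```

Because each subset is small, the exact Shapley computation inside it stays cheap, which is what makes the ensemble less expensive than computing SHAP over the full feature set.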
2

Chituru, Chinwe Miracle, Sin-Ban Ho, and Ian Chai. "Diabetes Risk Prediction using Shapley Additive Explanations for Feature Engineering." Journal of Informatics and Web Engineering 4, no. 2 (2025): 18–35. https://doi.org/10.33093/jiwe.2025.4.2.2.

Abstract:
Diabetes is prevalent globally, expected to increase in the next few years. This includes people with different types of diabetes including type 1 diabetes and type 2 diabetes. There are several causes for the increase: dietary decisions and lack of exercise as the main ones. This global health challenge calls for effective prediction and early management of the disease. This research focuses on the decision tree algorithm utilization to predict the risk of diabetes and model interpretability with the integration of SHapley Additive exPlanations (SHAP) for feature engineering. Random forest an
3

Sullivan, Robert S., and Luca Longo. "Explaining Deep Q-Learning Experience Replay with SHapley Additive exPlanations." Machine Learning and Knowledge Extraction 5, no. 4 (2023): 1433–55. http://dx.doi.org/10.3390/make5040072.

Abstract:
Reinforcement Learning (RL) has shown promise in optimizing complex control and decision-making processes but Deep Reinforcement Learning (DRL) lacks interpretability, limiting its adoption in regulated sectors like manufacturing, finance, and healthcare. Difficulties arise from DRL’s opaque decision-making, hindering efficiency and resource use, this issue is amplified with every advancement. While many seek to move from Experience Replay to A3C, the latter demands more resources. Despite efforts to improve Experience Replay selection strategies, there is a tendency to keep the capacity high.
4

Admassu, Tsehay. "Evaluation of Local Interpretable Model-Agnostic Explanation and Shapley Additive Explanation for Chronic Heart Disease Detection." Proceedings of Engineering and Technology Innovation 23 (January 1, 2023): 48–59. http://dx.doi.org/10.46604/peti.2023.10101.

Abstract:
This study aims to investigate the effectiveness of local interpretable model-agnostic explanation (LIME) and Shapley additive explanation (SHAP) approaches for chronic heart disease detection. The efficiency of LIME and SHAP are evaluated by analyzing the diagnostic results of the XGBoost model and the stability and quality of counterfactual explanations. Firstly, 1025 heart disease samples are collected from the University of California Irvine. Then, the performance of LIME and SHAP is compared by using the XGBoost model with various measures, such as consistency and proximity. Finally, Pyth
5

Baniecki, Hubert, and Przemyslaw Biecek. "Manipulating SHAP via Adversarial Data Perturbations (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (2022): 12907–8. http://dx.doi.org/10.1609/aaai.v36i11.21590.

Abstract:
We introduce a model-agnostic algorithm for manipulating SHapley Additive exPlanations (SHAP) with perturbation of tabular data. It is evaluated on predictive tasks from healthcare and financial domains to illustrate how crucial is the context of data distribution in interpreting machine learning models. Our method supports checking the stability of the explanations used by various stakeholders apparent in the domain of responsible AI; moreover, the result highlights the explanations' vulnerability that can be exploited by an adversary.
6

Bandstra, Mark S., Joseph C. Curtis, James M. Ghawaly, A. Chandler Jones, and Tenzing H. Y. Joshi. "Explaining machine-learning models for gamma-ray detection and identification." PLOS ONE 18, no. 6 (2023): e0286829. http://dx.doi.org/10.1371/journal.pone.0286829.

Abstract:
As more complex predictive models are used for gamma-ray spectral analysis, methods are needed to probe and understand their predictions and behavior. Recent work has begun to bring the latest techniques from the field of Explainable Artificial Intelligence (XAI) into the applications of gamma-ray spectroscopy, including the introduction of gradient-based methods like saliency mapping and Gradient-weighted Class Activation Mapping (Grad-CAM), and black box methods like Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP). In addition, new sources of s
7

Younisse, Remah, Ashraf Ahmad, and Qasem Abu Al-Haija. "Explaining Intrusion Detection-Based Convolutional Neural Networks Using Shapley Additive Explanations (SHAP)." Big Data and Cognitive Computing 6, no. 4 (2022): 126. http://dx.doi.org/10.3390/bdcc6040126.

Abstract:
Artificial intelligence (AI) and machine learning (ML) models have become essential tools used in many critical systems to make significant decisions; the decisions taken by these models need to be trusted and explained on many occasions. On the other hand, the performance of different ML and AI models varies with the same used dataset. Sometimes, developers have tried to use multiple models before deciding which model should be used without understanding the reasons behind this variance in performance. Explainable artificial intelligence (XAI) models have presented an explanation for the mode
8

Hermosilla, Pamela, Sebastián Berríos, and Héctor Allende-Cid. "Explainable AI for Forensic Analysis: A Comparative Study of SHAP and LIME in Intrusion Detection Models." Applied Sciences 15, no. 13 (2025): 7329. https://doi.org/10.3390/app15137329.

Abstract:
The lack of interpretability in AI-based intrusion detection systems poses a critical barrier to their adoption in forensic cybersecurity, which demands high levels of reliability and verifiable evidence. To address this challenge, the integration of explainable artificial intelligence (XAI) into forensic cybersecurity offers a powerful approach to enhancing transparency, trust, and legal defensibility in network intrusion detection. This study presents a comparative analysis of SHapley Additive exPlanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME) applied to Extreme G
9

Bifarin, Olatomiwa O. "Interpretable machine learning with tree-based shapley additive explanations: Application to metabolomics datasets for binary classification." PLOS ONE 18, no. 5 (2023): e0284315. http://dx.doi.org/10.1371/journal.pone.0284315.

Abstract:
Machine learning (ML) models are used in clinical metabolomics studies most notably for biomarker discoveries, to identify metabolites that discriminate between a case and control group. To improve understanding of the underlying biomedical problem and to bolster confidence in these discoveries, model interpretability is germane. In metabolomics, partial least square discriminant analysis (PLS-DA) and its variants are widely used, partly due to the model’s interpretability with the Variable Influence in Projection (VIP) scores, a global interpretable method. Herein, Tree-based Shapley Additive
10

M N., Sowmiya, Jaya Sri S., Deepshika S., and Hanushya Devi G. "Credit Risk Analysis using Explainable Artificial Intelligence." Journal of Soft Computing Paradigm 6, no. 3 (2024): 272–83. http://dx.doi.org/10.36548/jscp.2024.3.004.

Abstract:
The proposed research focuses on enhancing the interpretability of risk evaluation in credit approvals within the banking sector. This work employs LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations) to provide explanations for individual predictions: LIME approximates the model locally with an interpretable model, while SHAP offers insights into the contribution of each feature to the prediction through both global and local explanations. The research integrates gradient boosting algorithms (XGBoost, LightGBM) and Random Forest with these Explainabl
11

Iffadah, Adhisa Shilfadianis, Trimono, and Dwi Arman Prasetya. "Shapley Additive Explanations Interpretation of the XGBoost Model in Predicting Air Quality in Jakarta." Jurnal Riset Informatika 7, no. 3 (2025): 119–27. https://doi.org/10.34288/jri.v7i3.366.

Abstract:
Air quality degradation has become an increasing global problem since 2008, including in Jakarta. By 2024, air pollution in Jakarta is estimated to cause 8,400 deaths and losses of around 34 billion rupiah. To address air pollution, air quality prediction is needed using historical data of Jakarta Air Quality Index from January 2021 to May 2024. The XGBoost ensemble model was chosen for its ability to handle complex data and prevent overfitting. And Shapley Additive Explanations (SHAP) to understand how the model makes decisions. Results showed the XGBoost model achieved MAPE 4.44%. Analysis w
12

Al-Fayoumi, Mustafa, Bushra Alhijawi, Qasem Abu Al-Haija, and Rakan Armoush. "XAI-PhD: Fortifying Trust of Phishing URL Detection Empowered by Shapley Additive Explanations." International Journal of Online and Biomedical Engineering (iJOE) 20, no. 11 (2024): 80–101. http://dx.doi.org/10.3991/ijoe.v20i11.49533.

Abstract:
The rapid growth of the Internet has led to an increased demand for online services. However, this surge in online activity has also brought about a new threat: phishing attacks. Phishing is a type of cyberattack that utilizes social engineering techniques and technological manipulations to steal crucial information from unsuspecting individuals. Consequently, there is a rising necessity to create dependable phishing URL detection models that can effectively identify phishing URLs with enhanced accuracy and reduced prediction overhead. This study introduces XAI-PhD, an innovative phishing dete
13

Hartati, Hartati, Rudy Herteno, Mohammad Reza Faisal, Fatma Indriani, and Friska Abadi. "Recursive Feature Elimination Optimization Using Shapley Additive Explanations in Software Defect Prediction with LightGBM Classification." JURNAL INFOTEL 17, no. 1 (2025): 1–16. https://doi.org/10.20895/infotel.v17i1.1159.

Abstract:
Software defect refers to issues where the software does not function properly. The mistakes in the software development process are the reasons for software defects. Software defect prediction is performed to ensure the software is defect-free. Machine learning classification is used to classify defects in software. To improve the classification model, it is necessary to select the best features from the dataset. Recursive Feature Elimination (RFE) is a feature selection method. Shapley Additive Explanations (SHAP) is a method that can optimize feature selection algorithms to produce better r
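A minimal sketch of a SHAP-guided elimination loop is shown below. The dataset, the number of features kept, and the classifier settings are placeholders rather than the paper's defect datasets and configuration; at each step the feature with the smallest mean absolute SHAP value under a LightGBM model is dropped.

```python
# Sketch of SHAP-guided recursive feature elimination: at every step, drop the
# feature with the smallest mean |SHAP| value. Dataset, target size, and model
# settings are placeholders, not the paper's software-defect datasets.
import numpy as np
import shap
from lightgbm import LGBMClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
remaining = list(range(X.shape[1]))
target_n_features = 10

while len(remaining) > target_n_features:
    model = LGBMClassifier(n_estimators=200).fit(X[:, remaining], y)
    shap_values = shap.TreeExplainer(model).shap_values(X[:, remaining])
    if isinstance(shap_values, list):         # some versions return one array per class
        shap_values = shap_values[1]
    importance = np.abs(shap_values).mean(axis=0)     # mean |SHAP| per remaining feature
    remaining.pop(int(np.argmin(importance)))         # eliminate the weakest feature

final_model = LGBMClassifier(n_estimators=200)
score = cross_val_score(final_model, X[:, remaining], y, cv=5).mean()
print(f"kept {len(remaining)} features, 5-fold CV accuracy = {score:.3f}")
```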
14

Al-Najjar, Husam, Bahareh Kalantar, Biswajeet Pradhan, Ghassan Beydoun, and Naonori Ueda. "SHapley Additive exPlanations (SHAP) for Landslide Susceptibility Models: Shedding Light on Explainable AI." ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences X-G-2025 (July 10, 2025): 81–85. https://doi.org/10.5194/isprs-annals-x-g-2025-81-2025.

Abstract:
This research examines the effectiveness of the SHapley Additive exPlanations (SHAP) approach in enhancing the interpretability of landslide susceptibility models. With the growing popularity of machine learning, we aim to understand how geoenvironmental and physically based factors impact modelling and to explain their interactions. The study focuses on the landslide-prone region of Bhutan and compares the performance of two approaches. The first approach incorporates geoenvironmental factors, while the second integrates both geoenvironmental factors and an additional physically bas
15

Kim, Yesuel, and Youngchul Kim. "Explainable heat-related mortality with random forest and SHapley Additive exPlanations (SHAP) models." Sustainable Cities and Society 79 (April 2022): 103677. http://dx.doi.org/10.1016/j.scs.2022.103677.

16

Santos, Mailson Ribeiro, Affonso Guedes, and Ignacio Sanchez-Gendriz. "SHapley Additive exPlanations (SHAP) for Efficient Feature Selection in Rolling Bearing Fault Diagnosis." Machine Learning and Knowledge Extraction 6, no. 1 (2024): 316–41. http://dx.doi.org/10.3390/make6010016.

Abstract:
This study introduces an efficient methodology for addressing fault detection, classification, and severity estimation in rolling element bearings. The methodology is structured into three sequential phases, each dedicated to generating distinct machine-learning-based models for the tasks of fault detection, classification, and severity estimation. To enhance the effectiveness of fault diagnosis, information acquired in one phase is leveraged in the subsequent phase. Additionally, in the pursuit of attaining models that are both compact and efficient, an explainable artificial intelligence (XA
17

Nguyen, Hung Viet, and Haewon Byeon. "Predicting Depression during the COVID-19 Pandemic Using Interpretable TabNet: A Case Study in South Korea." Mathematics 11, no. 14 (2023): 3145. http://dx.doi.org/10.3390/math11143145.

Abstract:
COVID-19 has further aggravated problems by compelling people to stay indoors and limit social interactions, leading to a worsening of the depression situation. This study aimed to construct a TabNet model combined with SHapley Additive exPlanations (SHAP) to predict depression in South Korean society during the COVID-19 pandemic. We used a tabular dataset extracted from the Seoul Welfare Survey with a total of 3027 samples. The TabNet model was trained on this dataset, and its performance was compared to that of several other machine learning models, including Random Forest, eXtreme Gradient
18

Sarder Abdulla Al Shiam, Md Mahdi Hasan, Md Jubair Pantho, et al. "Credit Risk Prediction Using Explainable AI." Journal of Business and Management Studies 6, no. 2 (2024): 61–66. http://dx.doi.org/10.32996/jbms.2024.6.2.6.

Abstract:
Despite advancements in machine-learning prediction techniques, the majority of lenders continue to rely on conventional methods for predicting credit defaults, largely due to their lack of transparency and explainability. This reluctance to embrace newer approaches persists as there is a compelling need for credit default prediction models to be explainable. This study introduces credit default prediction models employing several tree-based ensemble methods, with the most effective model, XGBoost, being further utilized to enhance explainability. We implement SHapley Additive exPlanations (SH
19

Alba, Eduardo Luiz, Gilson Adamczuk Oliveira, Matheus Henrique Dal Molin Ribeiro, and Érick Oliveira Rodrigues. "Electricity Consumption Forecasting: An Approach Using Cooperative Ensemble Learning with SHapley Additive exPlanations." Forecasting 6, no. 3 (2024): 839–63. http://dx.doi.org/10.3390/forecast6030042.

Abstract:
Electricity expense management presents significant challenges, as this resource is susceptible to various influencing factors. In universities, the demand for this resource is rapidly growing with institutional expansion and has a significant environmental impact. In this study, the machine learning models long short-term memory (LSTM), random forest (RF), support vector regression (SVR), and extreme gradient boosting (XGBoost) were trained with historical consumption data from the Federal Institute of Paraná (IFPR) over the last seven years and climatic variables to forecast electricity cons
20

Knapič, Samanta, Avleen Malhi, Rohit Saluja, and Kary Främling. "Explainable Artificial Intelligence for Human Decision Support System in the Medical Domain." Machine Learning and Knowledge Extraction 3, no. 3 (2021): 740–70. http://dx.doi.org/10.3390/make3030037.

Abstract:
In this paper, we present the potential of Explainable Artificial Intelligence methods for decision support in medical image analysis scenarios. Using three types of explainable methods applied to the same medical image data set, we aimed to improve the comprehensibility of the decisions provided by the Convolutional Neural Network (CNN). In vivo gastral images obtained by a video capsule endoscopy (VCE) were the subject of visual explanations, with the goal of increasing health professionals’ trust in black-box predictions. We implemented two post hoc interpretable machine learning methods, c
21

Chowdhury, Shihab Uddin, Sanjana Sayeed, Iktisad Rashid, Md Golam Rabiul Alam, Abdul Kadar Muhammad Masum, and M. Ali Akber Dewan. "Shapley-Additive-Explanations-Based Factor Analysis for Dengue Severity Prediction using Machine Learning." Journal of Imaging 8, no. 9 (2022): 229. http://dx.doi.org/10.3390/jimaging8090229.

Abstract:
Dengue is a viral disease that primarily affects tropical and subtropical regions and is especially prevalent in South-East Asia. This mosquito-borne disease sometimes triggers nationwide epidemics, which results in a large number of fatalities. The development of Dengue Haemorrhagic Fever (DHF) is where most cases occur, and a large portion of them are detected among children under the age of ten, with severe conditions often progressing to a critical state known as Dengue Shock Syndrome (DSS). In this study, we analysed two separate datasets from two different countries– Vietnam and Banglade
22

Mohanty, Prasant Kumar, Sharmila Anand John Francis, Rabindra Kumar Barik, Diptendu Sinha Roy, and Manob Jyoti Saikia. "Leveraging Shapley Additive Explanations for Feature Selection in Ensemble Models for Diabetes Prediction." Bioengineering 11, no. 12 (2024): 1215. https://doi.org/10.3390/bioengineering11121215.

Abstract:
Diabetes, a significant global health crisis, is primarily driven in India by unhealthy diets and sedentary lifestyles, with rapid urbanization amplifying these effects through convenience-oriented living and limited physical activity opportunities, underscoring the need for advanced preventative strategies and technology for effective management. This study integrates Shapley Additive explanations (SHAPs) into ensemble machine learning models to improve the accuracy and efficiency of diabetes predictions. By identifying the most influential features using SHAP, this study examined their role
23

Lamens, Alec, and Jürgen Bajorath. "Explaining Multiclass Compound Activity Predictions Using Counterfactuals and Shapley Values." Molecules 28, no. 14 (2023): 5601. http://dx.doi.org/10.3390/molecules28145601.

Abstract:
Most machine learning (ML) models produce black box predictions that are difficult, if not impossible, to understand. In pharmaceutical research, black box predictions work against the acceptance of ML models for guiding experimental work. Hence, there is increasing interest in approaches for explainable ML, which is a part of explainable artificial intelligence (XAI), to better understand prediction outcomes. Herein, we have devised a test system for the rationalization of multiclass compound activity prediction models that combines two approaches from XAI for feature relevance or importance
24

Ikhlass, Boukrouh, and Azmani Abdellah. "Explainable machine learning models applied to predicting customer churn for e-commerce." IAES International Journal of Artificial Intelligence (IJ-AI) 14, no. 1 (2025): 286–97. https://doi.org/10.11591/ijai.v14.i1.pp286-297.

Abstract:
Precise identification of customer churn is crucial for e-commerce companies due to the high costs associated with acquiring new customers. In this sector, where revenues are affected by customer churn, the challenge is intensified by the diversity of product choices offered on various marketplaces. Customers can easily switch from one platform to another, emphasizing the need for accurate churn classification to anticipate revenue fluctuations in e-commerce. In this context, this study proposes seven machine learning classification models to predict customer churn, including decision tree (DT
25

Scheda, Riccardo, and Stefano Diciotti. "Explanations of Machine Learning Models in Repeated Nested Cross-Validation: An Application in Age Prediction Using Brain Complexity Features." Applied Sciences 12, no. 13 (2022): 6681. http://dx.doi.org/10.3390/app12136681.

Abstract:
SHAP (Shapley additive explanations) is a framework for explainable AI that makes explanations locally and globally. In this work, we propose a general method to obtain representative SHAP values within a repeated nested cross-validation procedure and separately for the training and test sets of the different cross-validation rounds to assess the real generalization abilities of the explanations. We applied this method to predict individual age using brain complexity features extracted from MRI scans of 159 healthy subjects. In particular, we used four implementations of the fractal dimension
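A simplified sketch of the aggregation pattern follows. The inner hyperparameter-search loop of a full nested cross-validation is omitted and synthetic regression data stand in for the brain-complexity features, but the idea of computing SHAP values only on each held-out fold and then summarising them across repeats is preserved.

```python
# Simplified sketch of aggregating SHAP explanations over repeated cross-validation.
# The inner hyperparameter-search loop of a full nested CV is omitted, and synthetic
# regression data stand in for the brain-complexity features used in the paper.
import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RepeatedKFold

X, y = make_regression(n_samples=200, n_features=8, noise=0.5, random_state=0)
cv = RepeatedKFold(n_splits=5, n_repeats=3, random_state=0)

fold_importances = []
for train_idx, test_idx in cv.split(X):
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    # Explain only the held-out fold so the importances reflect generalisation.
    shap_values = shap.TreeExplainer(model).shap_values(X[test_idx])
    fold_importances.append(np.abs(shap_values).mean(axis=0))

fold_importances = np.array(fold_importances)          # shape: (n_rounds, n_features)
mean_imp, std_imp = fold_importances.mean(axis=0), fold_importances.std(axis=0)
for f in np.argsort(-mean_imp):
    print(f"feature {f}: mean |SHAP| = {mean_imp[f]:.3f} (sd {std_imp[f]:.3f})")
```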
26

Sharipov, D. K., and A. D. Saidov. "Modified SHAP approach for interpretable prediction of cardiovascular complications." Проблемы вычислительной и прикладной математики, no. 2(64) (May 15, 2025): 114–22. https://doi.org/10.71310/pcam.2_64.2025.10.

Abstract:
This article explores the significance of modifying SHAP (SHapley Additive exPlanations) values to enhance model interpretability in machine learning. SHAP values provide a fair attribution of feature contributions, making AI-driven decision-making more transparent and reliable. However, raw SHAP values can sometimes be difficult to interpret due to feature interactions, noise, and inconsistencies in scale. The article discusses key techniques for modifying SHAP values, including feature aggregation, normalization, custom weighting, and noise reduction, to improve clarity and relevance in e
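For context, the attribution that SHAP assigns to a feature i is the classical Shapley value of a cooperative game whose players are the features; this is the standard definition rather than something taken from the article:

```latex
\phi_i(f, x) \;=\; \sum_{S \subseteq F \setminus \{i\}}
  \frac{|S|!\,\bigl(|F| - |S| - 1\bigr)!}{|F|!}
  \left[ f_{S \cup \{i\}}\bigl(x_{S \cup \{i\}}\bigr) - f_{S}\bigl(x_{S}\bigr) \right]
```

Here F is the full feature set and f_S denotes the model's expected output when only the features in S are known; the individual attributions plus a base value sum to the prediction for x, which is the additive property the name refers to. The modifications discussed in the article (feature aggregation, normalization, custom weighting, noise reduction) are post-processing steps applied to these attributions after they have been computed.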
27

Assegie, Tsehay Admassu. "Evaluation of the Shapley Additive Explanation Technique for Ensemble Learning Methods." Proceedings of Engineering and Technology Innovation 21 (April 22, 2022): 20–26. http://dx.doi.org/10.46604/peti.2022.9025.

Abstract:
This study aims to explore the effectiveness of the Shapley additive explanation (SHAP) technique in developing a transparent, interpretable, and explainable ensemble method for heart disease diagnosis using random forest algorithms. Firstly, the features with high impact on the heart disease prediction are selected by SHAP using 1025 heart disease datasets, obtained from a publicly available Kaggle data repository. After that, the features which have the greatest influence on the heart disease prediction are used to develop an interpretable ensemble learning model to automate the heart diseas
28

El Jihaoui, Mohamed, Oum El Kheir Abra, and Khalifa Mansouri. "Predicting and Interpreting Student Academic Performance: A Deep Learning and Shapley Additive Explanations Approach." SHS Web of Conferences 214 (2025): 01001. https://doi.org/10.1051/shsconf/202521401001.

Abstract:
Predicting students' performance in high-risk exams, such as the baccalaureate, is essential for early identification of at-risk students and designing targeted interventions. This study introduces a deep learning approach to predict final baccalaureate outcomes among Moroccan high school students based on their current performance in the first semester and previous academic achievements. The dataset comprises 182,636 records containing demographic, socioeconomic, and prior academic performance features. We used a neural network model to predict the cumulative grade point average (CGPA). In th
29

Nisha, Mrs M. P. "Interpretable Deep Neural Networks using SHAP and LIME for Decision Making in Smart Home Automation." International Scientific Journal of Engineering and Management 04, no. 05 (2025): 1–7. https://doi.org/10.55041/isjem03409.

Abstract:
Deep Neural Networks (DNNs) are increasingly being used in smart home automation for intelligent decision-making based on IoT sensor data. This project aims to develop an interpretable deep neural network model for decision-making in smart home automation using SHAP (Shapley Additive Explanations) and LIME (Local Interpretable Model-agnostic Explanations). The focus is on enhancing transparency in AI-driven automation systems by providing clear explanations for model predictions. The approach involves collecting IoT sensor data from smart home environments, training a deep learning
30

Oiza-Zapata, Irati, and Ascensión Gallardo-Antolín. "Alzheimer’s Disease Detection from Speech Using Shapley Additive Explanations for Feature Selection and Enhanced Interpretability." Electronics 14, no. 11 (2025): 2248. https://doi.org/10.3390/electronics14112248.

Abstract:
Smart cities provide an ideal framework for the integration of advanced healthcare applications, such as early Alzheimer’s Disease (AD) detection that is essential to facilitate timely interventions and slow its progression. In this context, speech analysis, combined with Artificial Intelligence (AI) techniques, has emerged as a promising approach for the automatic detection of AD, as vocal biomarkers can provide valuable indicators of cognitive decline. The proposed approach focuses on two key goals: minimizing computational overhead while maintaining high accuracy, and improving model interp
31

Zhang, Ling, Ning Lin, and Lu Yang. "Machine Learning Approaches for Predicting the Elastic Modulus of Basalt Fibers Combined with SHapley Additive exPlanations Analysis." Minerals 15, no. 4 (2025): 387. https://doi.org/10.3390/min15040387.

Abstract:
The elastic modulus of basalt fibers is closely associated with their chemical composition. In this study, eight machine learning models were developed to predict the elastic modulus, with hyper-parameter tuning implemented through the GridSearchCV technique. Model performance was evaluated using the coefficient of determination (R2), root-mean-square error (RMSE), and mean absolute error (MAE). SHAP analysis was employed to uncover the relevance of oxide compositions and their interactions with the elastic modulus. Among these models, the Categorical Boosting algorithm exhibited the best resu
32

Pezoa, R., L. Salinas, and C. Torres. "Explainability of High Energy Physics events classification using SHAP." Journal of Physics: Conference Series 2438, no. 1 (2023): 012082. http://dx.doi.org/10.1088/1742-6596/2438/1/012082.

Abstract:
Complex machine learning models have been fundamental for achieving accurate results regarding events classification in High Energy Physics (HEP). However, these complex models or black-box systems lack transparency and interpretability. In this work, we use the SHapley Additive exPlanations (SHAP) method for explaining the output of two event machine learning classifiers, based on eXtreme Gradient Boost (XGBoost) and deep neural networks (DNN). We compute SHAP values to interpret the results and analyze the importance of individual features, and the experiments show that SHAP method
33

Gebreyesus, Yibrah, Damian Dalton, Davide De Chiara, Marta Chinnici, and Andrea Chinnici. "AI for Automating Data Center Operations: Model Explainability in the Data Centre Context Using Shapley Additive Explanations (SHAP)." Electronics 13, no. 9 (2024): 1628. http://dx.doi.org/10.3390/electronics13091628.

Abstract:
The application of Artificial Intelligence (AI) and Machine Learning (ML) models is increasingly leveraged to automate and optimize Data Centre (DC) operations. However, the interpretability and transparency of these complex models pose critical challenges. Hence, this paper explores the Shapley Additive exPlanations (SHAP) values model explainability method for addressing and enhancing the critical interpretability and transparency challenges of predictive maintenance models. This method computes and assigns Shapley values for each feature, then quantifies and assesses their impact on the mod
34

Wang, Zhen, Xiangnan He, Yuting Wang, and Xian Li. "Multi-Modal Vision Transformer with Explainable Shapley Additive Explanations Value Embedding for Cymbidium goeringii Quality Grading." Applied Sciences 14, no. 22 (2024): 10157. http://dx.doi.org/10.3390/app142210157.

Abstract:
Cymbidium goeringii (Rchb. f.) is a traditional Chinese flower with highly valued biological, cultural, and artistic properties. However, the valuation of Rchb. f. mainly relies on subjective judgment, lacking a standardized digital evaluation and grading methods. Traditional grading methods solely rely on unimodal data and are based on fuzzy grading standards; the key features for values are especially inexplicable. Accurately evaluating Rchb. f. quality through multi-modal algorithms and clarifying the impact mechanism of key features on Rchb. f. value is essential for providing scientific r
35

Fernández-Loría, Carlos, Foster Provost, and Xintian Han. "Explaining Data-Driven Decisions made by AI Systems: The Counterfactual Approach." MIS Quarterly 45, no. 3 (2022): 1635–60. http://dx.doi.org/10.25300/misq/2022/16749.

Abstract:
We examine counterfactual explanations for explaining the decisions made by model-based AI systems. The counterfactual approach we consider defines an explanation as a set of the system’s data inputs that causally drives the decision (i.e., changing the inputs in the set changes the decision) and is irreducible (i.e., changing any subset of the inputs does not change the decision). We (1) demonstrate how this framework may be used to provide explanations for decisions made by general data-driven AI systems that can incorporate features with arbitrary data types and multiple predictive models,
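The irreducibility requirement can be made concrete with a small brute-force search: candidate input sets are tried in order of increasing size, and the first set whose change to reference values flips the decision is returned (a minimum-size flipping set has no flipping proper subset). The model, instance, and reference values in the sketch below are toy assumptions for illustration, not the systems studied in the paper.

```python
# Sketch of the counterfactual definition above: search, in order of increasing
# size, for a set of inputs whose change to reference values flips the decision.
# A minimum-size flipping set has no flipping proper subset, so it is irreducible.
# Model, instance, and reference values are toy assumptions for illustration.
import itertools

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)
model = LogisticRegression(max_iter=5000).fit(X, y)
x = X[0]
reference = X.mean(axis=0)            # the "changed" value for each input
original_decision = model.predict(x.reshape(1, -1))[0]

def decision_after_changing(features):
    z = x.copy()
    idx = list(features)
    z[idx] = reference[idx]
    return model.predict(z.reshape(1, -1))[0]

explanation = None
for size in range(1, 4):              # smallest candidate sets first
    for candidate in itertools.combinations(range(X.shape[1]), size):
        if decision_after_changing(candidate) != original_decision:
            explanation = candidate
            break
    if explanation is not None:
        break

print("counterfactual explanation (feature indices):", explanation)
```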
36

Dandotiya, Monika, Ajay Khunteta, and Rajni Ranjan Singh Makwana. "Enhancing SDN security : Mitigating DDoS attacks with robust authentication and shapley analysis." Journal of Discrete Mathematical Sciences and Cryptography 28, no. 1 (2025): 249–65. https://doi.org/10.47974/jdmsc-2219.

Abstract:
SDN (Software-Defined Networking) revolutionized network administration by splitting the control plane from the data plane, allowing for centralized control also improved network flexibility. However, SDN also introduces security vulnerabilities, particularly Distributed Denial of Service (DDoS) attacks, which can significantly disrupt network services. This research introduces a novel two-phase method to improve the security of SDN environments through the use of SHAP (Shapley Additive Explanations). To mitigate the risk of unauthorized access, initial phase entails the establishment of a sec
37

Researcher. "BUILDING USER TRUST IN CONVERSATIONAL AI: THE ROLE OF EXPLAINABLE AI IN CHATBOT TRANSPARENCY." International Journal of Computer Engineering and Technology (IJCET) 15, no. 5 (2024): 406–13. https://doi.org/10.5281/zenodo.13833413.

Abstract:
This article explores the application of Explainable AI (XAI) techniques to enhance transparency and trust in chatbot decision-making processes. As chatbots become increasingly sophisticated, understanding their internal reasoning remains a significant challenge. We investigate the implementation of three key XAI methods—LIME (Local Interpretable Model-agnostic Explanations), SHAP (SHapley Additive exPlanations), and counterfactual explanations—in the context of modern chatbot systems. Through a comprehensive analysis involving multiple chatbot models and user studies, we demonstrat
38

Akkem, Yaganteeswarudu, Saroj Kumar Biswas, and Aruna Varanasi. "Role of Explainable AI in Crop Recommendation Technique of Smart Farming." International Journal of Intelligent Systems and Applications 17, no. 1 (2025): 31–52. https://doi.org/10.5815/ijisa.2025.01.03.

Abstract:
Smart farming is undergoing a transformation with the integration of machine learning (ML) and artificial intelligence (AI) to improve crop recommendations. Despite the advancements, a critical gap exists in opaque ML models that need to explain their predictions, leading to a trust deficit among farmers. This research addresses the gap by implementing explainable AI (XAI) techniques, specifically focusing on the crop recommendation technique in smart farming. An experiment was conducted using a Crop recommendation dataset, applying XAI algorithms such as Local Interpretable Model-agnostic Exp
39

Mun, Seongil, and Jehyeung Yoo. "Operating Key Factor Analysis of a Rotary Kiln Using a Predictive Model and Shapley Additive Explanations." Electronics 13, no. 22 (2024): 4413. http://dx.doi.org/10.3390/electronics13224413.

Abstract:
The global smelting business of nickel using rotary kilns and electric furnaces is expanding due to the growth of the secondary battery market. Efficient operation of electric furnaces requires consistent calcine temperature in rotary kilns. Direct measurement of calcine temperature in rotary kilns presents challenges due to inaccuracies and operational limitations, and while AI predictions are feasible, reliance on them without understanding influencing factors is risky. To address this challenge, various algorithms including XGBoost, LightGBM, CatBoost, and GRU were employed for calcine temp
40

Agarwal, Devansh. "Explainable AI in Cancer Diagnosis: Enhancing Interpretability with SHAP on Benign and Malignant Tumor Detection." International Journal for Research in Applied Science and Engineering Technology 13, no. 1 (2025): 1394–402. https://doi.org/10.22214/ijraset.2025.66580.

Abstract:
Machine learning (ML) is revolutionizing cancer diagnosis by providing advanced algorithms capable of detecting and classifying tumors with high accuracy. However, these models are often perceived as "black-boxes" due to their lack of transparency and interpretability, which limits their adoption in clinical settings where understanding the reasoning behind a diagnosis is vital for decision-making. In critical fields like oncology, the opacity of ML models undermines trust among medical professionals. This research applies Explainable Artificial Intelligence (XAI) techniques to a hybrid ML mod
41

Shi, Zhongji, Yingping Wang, Dong Guo, Fangtong Jiao, Hu Zhang, and Feng Sun. "The Urban Intersection Accident Detection Method Based on the GAN-XGBoost and Shapley Additive Explanations Hybrid Model." Sustainability 17, no. 2 (2025): 453. https://doi.org/10.3390/su17020453.

Abstract:
Traffic accidents at urban intersections may lead to severe traffic congestion, necessitating effective detection and timely intervention. To achieve real-time traffic accident monitoring at intersections more effectively, this paper proposes an urban road intersection accident detection method based on Generative Adversarial Networks (GANs), Extreme Gradient Boosting (XGBoost), and the SHAP interpretability framework. Data extraction and processing methods are described, and a brief analysis of accident impact features is provided. To address the issue of data imbalance, GAN is used to genera
42

Hasan, Md Mahmudul. "Understanding Model Predictions: A Comparative Analysis of SHAP and LIME on Various ML Algorithms." Journal of Scientific and Technological Research 5, no. 1 (2024): 17–26. http://dx.doi.org/10.59738/jstr.v5i1.23(17-26).eaqr5800.

Abstract:
To guarantee the openness and dependability of prediction systems across multiple domains, machine learning model interpretation is essential. In this study, a variety of machine learning algorithms are subjected to a thorough comparative examination of two model-agnostic explainability methodologies, SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-Agnostic Explanations). The study focuses on the performance of the algorithms on a dataset in order to offer subtle insights on the interpretability of models when faced with various algorithms. Intriguing new information o
43

Sathyan, Anoop, Abraham Itzhak Weinberg, and Kelly Cohen. "Interpretable AI for bio-medical applications." Complex Engineering Systems 2, no. 4 (2022): 18. http://dx.doi.org/10.20517/ces.2022.41.

Abstract:
This paper presents the use of two popular explainability tools called Local Interpretable Model-Agnostic Explanations (LIME) and Shapley Additive exPlanations (SHAP) to explain the predictions made by a trained deep neural network. The deep neural network used in this work is trained on the UCI Breast Cancer Wisconsin dataset. The neural network is used to classify the masses found in patients as benign or malignant based on 30 features that describe the mass. LIME and SHAP are then used to explain the individual predictions made by the trained neural network model. The explanations provide f
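Since the UCI Breast Cancer Wisconsin data ships with scikit-learn, the workflow can be sketched compactly. In the sketch below a small scikit-learn MLP stands in for the paper's deep neural network, and SHAP's model-agnostic KernelExplainer and LIME's tabular explainer each explain a single prediction; the background size, network architecture, and sampling settings are illustrative assumptions.

```python
# Sketch of explaining a neural classifier on the Breast Cancer Wisconsin data
# with both SHAP and LIME. A small scikit-learn MLP stands in for the paper's
# deep network; background size and other settings are illustrative assumptions.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=0)
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0),
).fit(X_train, y_train)

# SHAP: model-agnostic KernelExplainer on the probability of the "benign" class.
predict_benign = lambda rows: model.predict_proba(rows)[:, 1]
explainer = shap.KernelExplainer(predict_benign, X_train[:50])
shap_values = explainer.shap_values(X_test[:1], nsamples=200)
top = np.argsort(-np.abs(shap_values[0]))[:5]
print("SHAP top features:", [data.feature_names[i] for i in top])

# LIME: local surrogate explanation of the same prediction.
lime_explainer = LimeTabularExplainer(X_train,
                                      feature_names=list(data.feature_names),
                                      class_names=list(data.target_names),
                                      mode="classification")
lime_exp = lime_explainer.explain_instance(X_test[0], model.predict_proba,
                                           num_features=10)
print("LIME explanation:", lime_exp.as_list())
```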
44

Yang, Changlan, Xuefeng Guan, Qingyang Xu, et al. "How can SHAP (SHapley Additive exPlanations) interpretations improve deep learning based urban cellular automata model?" Computers, Environment and Urban Systems 111 (July 2024): 102133. http://dx.doi.org/10.1016/j.compenvurbsys.2024.102133.

45

Hutke, Prof Ankush, Kiran Sahu, Ameet Mishra, Aniruddha Sawant, and Ruchitha Gowda. "Predict XAI." International Research Journal of Innovations in Engineering and Technology 09, no. 04 (2025): 172–76. https://doi.org/10.47001/irjiet/2025.904026.

Abstract:
Stroke predictors using Explainable Artificial Intelligence (XAI) aim to provide accurate and interpretable stroke risk predictions. This research integrates machine learning models such as Decision Trees, Random Forest, Logistic Regression, and Support Vector Machines, leveraging ensemble learning techniques like stacking and voting to enhance predictive accuracy. The system employs XAI techniques such as SHAP (SHapley Additive Explanations) and LIME (Local Interpretable Model-Agnostic Explanations) to ensure model transparency and interpretability. This paper presents the methodology, implem
46

Cynthia, C., Debayani Ghosh, and Gopal Krishna Kamath. "Detection of DDoS Attacks Using SHAP-Based Feature Reduction." International Journal of Machine Learning 13, no. 4 (2023): 173–80. http://dx.doi.org/10.18178/ijml.2023.13.4.1147.

Abstract:
Machine learning techniques are widely used to protect cyberspace against malicious attacks. In this paper, we propose a machine learning-based intrusion detection system to alleviate Distributed Denial-of-Service (DDoS) attacks, which is one of the most prevalent attacks that disrupt the normal traffic of the targeted network. The model prediction is interpreted using the SHapley Additive exPlanations (SHAP) technique, which also provides the most essential features with the highest Shapley values. For the proposed model, the CICIDS2017 dataset from Kaggle is used for training the classificat
47

Miranda, Eka, Suko Adiarto, Faqir M. Bhatti, Alfi Yusrotis Zakiyyah, Mediana Aryuni, and Charles Bernando. "Understanding Arteriosclerotic Heart Disease Patients Using Electronic Health Records: A Machine Learning and Shapley Additive exPlanations Approach." Healthcare Informatics Research 29, no. 3 (2023): 228–38. http://dx.doi.org/10.4258/hir.2023.29.3.228.

Abstract:
Objectives: The number of deaths from cardiovascular disease is projected to reach 23.3 million by 2030. As a contribution to preventing this phenomenon, this paper proposed a machine learning (ML) model to predict patients with arteriosclerotic heart disease (AHD). We also interpreted the prediction model results based on the ML approach and deployed model-agnostic ML methods to identify informative features and their interpretations. Methods: We used a hematology Electronic Health Record (EHR) with information on erythrocytes, hematocrit, hemoglobin, mean corpuscular hemoglobin, mean corpuscul
48

Jishnu, Setia. "Explainable AI: Methods and Applications." Explainable AI: Methods and Applications 8, no. 10 (2023): 5. https://doi.org/10.5281/zenodo.10021461.

Abstract:
Explainable Artificial Intelligence (XAI) has emerged as a critical area of research, ensuring that AI systems are transparent, interpretable, and accountable. This paper provides a comprehensive overview of various methods and applications of Explainable AI. We delve into the importance of interpretability in AI models, explore different techniques for making complex AI models understandable, and discuss real-world applications where explainability is crucial. Through this paper, I aim to shed light on the advancements in the field of XAI and its potentialto bridge the gap between AI's predic
49

Wieland, Ralf, Tobia Lakes, and Claas Nendel. "Using Shapley additive explanations to interpret extreme gradient boosting predictions of grassland degradation in Xilingol, China." Geoscientific Model Development 14, no. 3 (2021): 1493–510. http://dx.doi.org/10.5194/gmd-14-1493-2021.

Abstract:
Machine learning (ML) and data-driven approaches are increasingly used in many research areas. Extreme gradient boosting (XGBoost) is a tree boosting method that has evolved into a state-of-the-art approach for many ML challenges. However, it has rarely been used in simulations of land use change so far. Xilingol, a typical region for research on serious grassland degradation and its drivers, was selected as a case study to test whether XGBoost can provide alternative insights that conventional land-use models are unable to generate. A set of 20 drivers was analysed using XGBoost, in
50

Guan, Jianhua, Zuguo Yu, Yongan Liao, Runbin Tang, Ming Duan, and Guosheng Han. "Predicting Critical Path of Labor Dispute Resolution in Legal Domain by Machine Learning Models Based on SHapley Additive exPlanations and Soft Voting Strategy." Mathematics 12, no. 2 (2024): 272. http://dx.doi.org/10.3390/math12020272.

Abstract:
The labor dispute is one of the most common civil disputes. It can be resolved in the order of the following steps, which include mediation in arbitration, arbitration award, first-instance mediation, first-instance judgment, and second-instance judgment. The process can cease at any step when it is successfully resolved. In recent years, due to the increasing rights awareness of employees, the number of labor disputes has been rising annually. However, resolving labor disputes is time-consuming and labor-intensive, which brings a heavy burden to employees and dispute resolution institutions.