
Journal articles on the topic 'Model-agnostic Explainability'


Consult the top journal articles for your research on the topic 'Model-agnostic Explainability.'


1

Diprose, William K., Nicholas Buist, Ning Hua, Quentin Thurier, George Shand, and Reece Robinson. "Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator." Journal of the American Medical Informatics Association 27, no. 4 (2020): 592–600. http://dx.doi.org/10.1093/jamia/ocz229.

Abstract:
Abstract Objective Implementation of machine learning (ML) may be limited by patients’ right to “meaningful information about the logic involved” when ML influences healthcare decisions. Given the complexity of healthcare decisions, it is likely that ML outputs will need to be understood and trusted by physicians, and then explained to patients. We therefore investigated the association between physician understanding of ML outputs, their ability to explain these to patients, and their willingness to trust the ML outputs, using various ML explainability methods. Materials and Methods We design
2

Zafar, Muhammad Rehman, and Naimul Khan. "Deterministic Local Interpretable Model-Agnostic Explanations for Stable Explainability." Machine Learning and Knowledge Extraction 3, no. 3 (2021): 525–41. http://dx.doi.org/10.3390/make3030027.

Abstract:
Local Interpretable Model-Agnostic Explanations (LIME) is a popular technique used to increase the interpretability and explainability of black box Machine Learning (ML) algorithms. LIME typically creates an explanation for a single prediction by any ML model by learning a simpler interpretable model (e.g., linear classifier) around the prediction through generating simulated data around the instance by random perturbation, and obtaining feature importance through applying some form of feature selection. While LIME and similar local algorithms have gained popularity due to their simplicity, th
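The perturbation-and-surrogate loop described in this abstract is easy to sketch. The following is a generic, from-scratch illustration of a LIME-style local explanation (not the deterministic DLIME variant proposed by the authors); the dataset, black-box model, kernel width, and number of perturbations are arbitrary assumptions.

```python
# Generic LIME-style local surrogate (illustrative sketch, not the DLIME method):
# perturb one instance, weight the samples by proximity, fit a weighted linear model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

x0 = X[0]                                     # instance whose prediction we explain
rng = np.random.default_rng(0)
Z = x0 + rng.normal(scale=X.std(axis=0), size=(1000, X.shape[1]))   # random perturbations
p = black_box.predict_proba(Z)[:, 1]          # black-box outputs on the perturbed points
dist = np.linalg.norm((Z - x0) / X.std(axis=0), axis=1)
weights = np.exp(-(dist ** 2) / 0.75 ** 2)    # proximity kernel (width chosen arbitrarily)

surrogate = Ridge(alpha=1.0).fit(Z, p, sample_weight=weights)
for name, coef in zip([f"f{i}" for i in range(X.shape[1])], surrogate.coef_):
    print(f"{name}: {coef:+.3f}")             # local feature attributions around x0
```

The coefficients of the weighted linear surrogate serve as the local feature attributions; the deterministic variant in this paper replaces the random sampling step with a deterministic selection of neighbours to stabilise them.
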
3

TOPCU, Deniz. "How to explain a machine learning model: HbA1c classification example." Journal of Medicine and Palliative Care 4, no. 2 (2023): 117–25. http://dx.doi.org/10.47582/jompac.1259507.

Abstract:
Aim: Machine learning tools have various applications in healthcare. However, the implementation of developed models is still limited because of various challenges. One of the most important problems is the lack of explainability of machine learning models. Explainability refers to the capacity to reveal the reasoning and logic behind the decisions made by AI systems, making it straightforward for human users to understand the process and how the system arrived at a specific outcome. The study aimed to compare the performance of different model-agnostic explanation methods using two different
4

Ullah, Ihsan, Andre Rios, Vaibhav Gala, and Susan Mckeever. "Explaining Deep Learning Models for Tabular Data Using Layer-Wise Relevance Propagation." Applied Sciences 12, no. 1 (2021): 136. http://dx.doi.org/10.3390/app12010136.

Abstract:
Trust and credibility in machine learning models are bolstered by the ability of a model to explain its decisions. While explainability of deep learning models is a well-known challenge, a further challenge is clarity of the explanation itself for relevant stakeholders of the model. Layer-wise Relevance Propagation (LRP), an established explainability technique developed for deep models in computer vision, provides intuitive human-readable heat maps of input images. We present the novel application of LRP with tabular datasets containing mixed data (categorical and numerical) using a deep neur
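LRP works by propagating the model's output backwards through the layers so that each input receives a share of the prediction ("relevance"). A minimal NumPy sketch of the epsilon rule on a toy two-layer ReLU network is shown below; the weights and input are made-up assumptions, and the paper's tabular setting additionally involves a trained deep model and mixed categorical/numerical inputs.

```python
# Toy LRP (epsilon rule) for a two-layer ReLU network in plain NumPy.
# Weights and the input are arbitrary assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))                  # input (4 features) -> hidden (3 units)
W2 = rng.normal(size=(3, 1))                  # hidden (3 units)   -> output (1 score)
x = np.array([0.5, -1.0, 2.0, 0.1])

a1 = np.maximum(0.0, x @ W1)                  # forward pass, keeping activations
out = a1 @ W2

def lrp_eps(a, W, R, eps=1e-6):
    """Redistribute relevance R from a layer's outputs back to its inputs (epsilon rule)."""
    z = a @ W                                 # pre-activations (biases omitted for brevity)
    s = R / (z + eps * np.sign(z))            # stabilised output-relevance ratios
    return a * (W @ s)                        # input relevance: a_j * sum_k w_jk * s_k

R_hidden = lrp_eps(a1, W2, out.copy())        # start from the network output itself
R_input = lrp_eps(x, W1, R_hidden)
print("input relevances:", np.round(R_input, 3))
print("sum of relevances ~ output:", R_input.sum(), out.item())
```
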
5

Srinivasu, Parvathaneni Naga, N. Sandhya, Rutvij H. Jhaveri, and Roshani Raut. "From Blackbox to Explainable AI in Healthcare: Existing Tools and Case Studies." Mobile Information Systems 2022 (June 13, 2022): 1–20. http://dx.doi.org/10.1155/2022/8167821.

Abstract:
Introduction. Artificial intelligence (AI) models have been employed to automate decision-making, from commerce to more critical fields directly affecting human lives, including healthcare. Although the vast majority of these proposed AI systems are considered black box models that lack explainability, there is an increasing trend of attempting to create medical explainable Artificial Intelligence (XAI) systems using approaches such as attention mechanisms and surrogate models. An AI system is said to be explainable if humans can tell how the system reached its decision. Various XAI-driven hea
6

Lv, Ge, Chen Jason Zhang, and Lei Chen. "HENCE-X: Toward Heterogeneity-Agnostic Multi-Level Explainability for Deep Graph Networks." Proceedings of the VLDB Endowment 16, no. 11 (2023): 2990–3003. http://dx.doi.org/10.14778/3611479.3611503.

Abstract:
Deep graph networks (DGNs) have demonstrated their outstanding effectiveness on both heterogeneous and homogeneous graphs. However their black-box nature does not allow human users to understand their working mechanisms. Recently, extensive efforts have been devoted to explaining DGNs' prediction, yet heterogeneity-agnostic multi-level explainability is still less explored. Since the two types of graphs are both irreplaceable in real-life applications, having a more general and end-to-end explainer becomes a natural and inevitable choice. In the meantime, feature-level explanation is often ign
7

Fauvel, Kevin, Tao Lin, Véronique Masson, Élisa Fromont, and Alexandre Termier. "XCM: An Explainable Convolutional Neural Network for Multivariate Time Series Classification." Mathematics 9, no. 23 (2021): 3137. http://dx.doi.org/10.3390/math9233137.

Abstract:
Multivariate Time Series (MTS) classification has gained importance over the past decade with the increase in the number of temporal datasets in multiple domains. The current state-of-the-art MTS classifier is a heavyweight deep learning approach, which outperforms the second-best MTS classifier only on large datasets. Moreover, this deep learning approach cannot provide faithful explanations as it relies on post hoc model-agnostic explainability methods, which could prevent its use in numerous applications. In this paper, we present XCM, an eXplainable Convolutional neural network for MTS cla
8

Hassan, Fayaz, Jianguo Yu, Zafi Sherhan Syed, Nadeem Ahmed, Mana Saleh Al Reshan, and Asadullah Shaikh. "Achieving model explainability for intrusion detection in VANETs with LIME." PeerJ Computer Science 9 (June 22, 2023): e1440. http://dx.doi.org/10.7717/peerj-cs.1440.

Abstract:
Vehicular ad hoc networks (VANETs) are intelligent transport subsystems; vehicles can communicate through a wireless medium in this system. There are many applications of VANETs such as traffic safety and preventing the accident of vehicles. Many attacks affect VANETs communication such as denial of service (DoS) and distributed denial of service (DDoS). In the past few years the number of DoS (denial of service) attacks are increasing, so network security and protection of the communication systems are challenging topics; intrusion detection systems need to be improved to identify these attac
9

Vieira, Carla Piazzon Ramos, and Luciano Antonio Digiampietri. "A study about Explainable Artificial Intelligence: using decision tree to explain SVM." Revista Brasileira de Computação Aplicada 12, no. 1 (2020): 113–21. http://dx.doi.org/10.5335/rbca.v12i1.10247.

Abstract:
The technologies supporting Artificial Intelligence (AI) have advanced rapidly over the past few years and AI is becoming a commonplace in every aspect of life like the future of self-driving cars or earlier health diagnosis. For this to occur shortly, the entire community stands in front of the barrier of explainability, an inherent problem of latest models (e.g. Deep Neural Networks) that were not present in the previous hype of AI (linear and rule-based models). Most of these recent models are used as black boxes without understanding partially or even completely how different features infl
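The study's core idea, approximating a black-box SVM with an interpretable decision tree, is an instance of global surrogate modelling. A generic scikit-learn sketch (illustrative data and hyperparameters, not the authors' exact pipeline) is given below; the fidelity score measures how faithfully the tree reproduces the SVM's decisions.

```python
# Global surrogate sketch: explain an SVM by fitting a shallow decision tree
# to the SVM's own predictions. Data and hyperparameters are illustrative assumptions.
from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

svm = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)       # the black-box model
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, svm.predict(X_train))                       # imitate the SVM, not the labels

# Fidelity: how often the tree reproduces the SVM's decisions on unseen data.
fidelity = accuracy_score(svm.predict(X_test), surrogate.predict(X_test))
print(f"surrogate fidelity vs. SVM: {fidelity:.2f}")
print(export_text(surrogate,
                  feature_names=["sepal length", "sepal width", "petal length", "petal width"]))
```
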
10

Nguyen, Hung Viet, and Haewon Byeon. "Prediction of Out-of-Hospital Cardiac Arrest Survival Outcomes Using a Hybrid Agnostic Explanation TabNet Model." Mathematics 11, no. 9 (2023): 2030. http://dx.doi.org/10.3390/math11092030.

Abstract:
Survival after out-of-hospital cardiac arrest (OHCA) is contingent on time-sensitive interventions taken by onlookers, emergency call operators, first responders, emergency medical services (EMS) personnel, and hospital healthcare staff. By building integrated cardiac resuscitation systems of care, measurement systems, and techniques for assuring the correct execution of evidence-based treatments by bystanders, EMS professionals, and hospital employees, survival results can be improved. To aid in OHCA prognosis and treatment, we develop a hybrid agnostic explanation TabNet (HAE-TabNet) model t
11

Szepannek, Gero, and Karsten Lübke. "How much do we see? On the explainability of partial dependence plots for credit risk scoring." Argumenta Oeconomica 2023, no. 2 (2023): 137–50. http://dx.doi.org/10.15611/aoe.2023.1.07.

Abstract:
Risk prediction models in credit scoring have to fulfil regulatory requirements, one of which consists in the interpretability of the model. Unfortunately, many popular modern machine learning algorithms result in models that do not satisfy this business need, whereas the research activities in the field of explainable machine learning have strongly increased in recent years. Partial dependence plots denote one of the most popular methods for model-agnostic interpretation of a feature’s effect on the model outcome, but in practice they are usually applied without answering the question of how
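A partial dependence curve for one feature is simply the average model prediction as that feature is swept across a grid while all other feature values stay at their observed values. The sketch below computes this by hand on illustrative data (not the credit-scoring models examined in the paper); `sklearn.inspection.partial_dependence` implements the same computation.

```python
# Manual partial dependence for one feature: sweep it over a grid, keep all other
# feature values as observed, and average the model's predictions at each grid point.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=800, n_features=5, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

feature = 2                                       # the feature whose marginal effect we want
grid = np.linspace(X[:, feature].min(), X[:, feature].max(), 20)

pdp = []
for value in grid:
    X_mod = X.copy()
    X_mod[:, feature] = value                     # force every observation to this value
    pdp.append(model.predict_proba(X_mod)[:, 1].mean())

for v, p in zip(grid, pdp):
    print(f"feature_2 = {v:+.2f} -> mean P(class 1) = {p:.3f}")
```
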
12

Sovrano, Francesco, Salvatore Sapienza, Monica Palmirani, and Fabio Vitali. "Metrics, Explainability and the European AI Act Proposal." J 5, no. 1 (2022): 126–38. http://dx.doi.org/10.3390/j5010010.

Abstract:
On 21 April 2021, the European Commission proposed the first legal framework on Artificial Intelligence (AI) to address the risks posed by this emerging method of computation. The Commission proposed a Regulation known as the AI Act. The proposed AI Act considers not only machine learning, but expert systems and statistical models long in place. Under the proposed AI Act, new obligations are set to ensure transparency, lawfulness, and fairness. Their goal is to establish mechanisms to ensure quality at launch and throughout the whole life cycle of AI-based systems, thus ensuring legal certaint
13

Kaplun, Dmitry, Alexander Krasichkov, Petr Chetyrbok, Nikolay Oleinikov, Anupam Garg, and Husanbir Singh Pannu. "Cancer Cell Profiling Using Image Moments and Neural Networks with Model Agnostic Explainability: A Case Study of Breast Cancer Histopathological (BreakHis) Database." Mathematics 9, no. 20 (2021): 2616. http://dx.doi.org/10.3390/math9202616.

Abstract:
With the evolution of modern digital pathology, examining cancer cell tissues has paved the way to quantify subtle symptoms, for example, by means of image staining procedures using Eosin and Hematoxylin. Cancer tissues in the case of breast and lung cancer are quite challenging to examine by manual expert analysis of patients suffering from cancer. Merely relying on the observable characteristics by histopathologists for cell profiling may under-constrain the scale and diagnostic quality due to tedious repetition with constant concentration. Thus, automatic analysis of cancer cells has been p
14

Ibrahim, Muhammad Amien, Samsul Arifin, I. Gusti Agung Anom Yudistira, et al. "An Explainable AI Model for Hate Speech Detection on Indonesian Twitter." CommIT (Communication and Information Technology) Journal 16, no. 2 (2022): 175–82. http://dx.doi.org/10.21512/commit.v16i2.8343.

Abstract:
To avoid citizen disputes, hate speech on social media, such as Twitter, must be automatically detected. The current research in Indonesian Twitter focuses on developing better hate speech detection models. However, there is limited study on the explainability aspects of hate speech detection. The research aims to explain issues that previous researchers have not detailed and attempt to answer the shortcomings of previous researchers. There are 13,169 tweets in the dataset with labels like “hate speech” and “abusive language”. The dataset also provides binary labels on whether hate speech is d
15

Manikis, Georgios C., Georgios S. Ioannidis, Loizos Siakallis, et al. "Multicenter DSC–MRI-Based Radiomics Predict IDH Mutation in Gliomas." Cancers 13, no. 16 (2021): 3965. http://dx.doi.org/10.3390/cancers13163965.

Abstract:
To address the current lack of dynamic susceptibility contrast magnetic resonance imaging (DSC–MRI)-based radiomics to predict isocitrate dehydrogenase (IDH) mutations in gliomas, we present a multicenter study that featured an independent exploratory set for radiomics model development and external validation using two independent cohorts. The maximum performance of the IDH mutation status prediction on the validation set had an accuracy of 0.544 (Cohen’s kappa: 0.145, F1-score: 0.415, area under the curve-AUC: 0.639, sensitivity: 0.733, specificity: 0.491), which significantly improved to an
16

Oubelaid, Adel, Abdelhameed Ibrahim, and Ahmed M. Elshewey. "Bridging the Gap: An Explainable Methodology for Customer Churn Prediction in Supply Chain Management." Journal of Artificial Intelligence and Metaheuristics 4, no. 1 (2023): 16–23. http://dx.doi.org/10.54216/jaim.040102.

Abstract:
Customer churn prediction is a critical task for businesses aiming to retain their valuable customers. Nevertheless, the lack of transparency and interpretability in machine learning models hinders their implementation in real-world applications. In this paper, we introduce a novel methodology for customer churn prediction in supply chain management that addresses the need for explainability. Our approach take advantage of XGBoost as the underlying predictive model. We recognize the importance of not only accurately predicting churn but also providing actionable insights into the key factors d
17

Sathyan, Anoop, Abraham Itzhak Weinberg, and Kelly Cohen. "Interpretable AI for bio-medical applications." Complex Engineering Systems 2, no. 4 (2022): 18. http://dx.doi.org/10.20517/ces.2022.41.

Abstract:
This paper presents the use of two popular explainability tools called Local Interpretable Model-Agnostic Explanations (LIME) and Shapley Additive exPlanations (SHAP) to explain the predictions made by a trained deep neural network. The deep neural network used in this work is trained on the UCI Breast Cancer Wisconsin dataset. The neural network is used to classify the masses found in patients as benign or malignant based on 30 features that describe the mass. LIME and SHAP are then used to explain the individual predictions made by the trained neural network model. The explanations provide f
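Of the two tools named in this abstract, SHAP's KernelExplainer is the fully model-agnostic variant. The sketch below applies it to a small neural network trained on the UCI Breast Cancer Wisconsin dataset; the architecture, background-set size, and sampling budget are illustrative assumptions rather than the authors' settings, and the `shap` package must be installed.

```python
# Model-agnostic SHAP (KernelExplainer) for a neural network on the UCI Breast Cancer
# Wisconsin data -- an illustrative sketch, not the authors' architecture or settings.
import shap                                        # assumes the `shap` package is installed
from sklearn.datasets import load_breast_cancer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0))
model.fit(data.data, data.target)

background = data.data[:50]                        # background sample for KernelSHAP
explainer = shap.KernelExplainer(lambda X: model.predict_proba(X)[:, 1], background)
shap_values = explainer.shap_values(data.data[:1], nsamples=200)   # explain one prediction

top = sorted(zip(data.feature_names, shap_values[0]), key=lambda t: abs(t[1]), reverse=True)
for name, value in top[:5]:
    print(f"{name}: {value:+.4f}")                 # features pushing the prediction up or down
```

KernelExplainer treats the model purely as a prediction function, which is what makes it model-agnostic; model-specific explainers (e.g. for tree ensembles) trade that generality for speed.
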
18

Ahmed, Md Sabbir, Md Tasin Tazwar, Haseen Khan, et al. "Yield Response of Different Rice Ecotypes to Meteorological, Agro-Chemical, and Soil Physiographic Factors for Interpretable Precision Agriculture Using Extreme Gradient Boosting and Support Vector Regression." Complexity 2022 (September 19, 2022): 1–20. http://dx.doi.org/10.1155/2022/5305353.

Abstract:
The food security of more than half of the world’s population depends on rice production which is one of the key objectives of precision agriculture. The traditional rice almanac used astronomical and climate factors to estimate yield response. However, this research integrated meteorological, agro-chemical, and soil physiographic factors for yield response prediction. Besides, the impact of those factors on the production of three major rice ecotypes has also been studied in this research. Moreover, this study found a different set of those factors with respect to the yield response of differ
19

Başağaoğlu, Hakan, Debaditya Chakraborty, Cesar Do Lago, et al. "A Review on Interpretable and Explainable Artificial Intelligence in Hydroclimatic Applications." Water 14, no. 8 (2022): 1230. http://dx.doi.org/10.3390/w14081230.

Abstract:
This review focuses on the use of Interpretable Artificial Intelligence (IAI) and eXplainable Artificial Intelligence (XAI) models for data imputations and numerical or categorical hydroclimatic predictions from nonlinearly combined multidimensional predictors. The AI models considered in this paper involve Extreme Gradient Boosting, Light Gradient Boosting, Categorical Boosting, Extremely Randomized Trees, and Random Forest. These AI models can transform into XAI models when they are coupled with the explanatory methods such as the Shapley additive explanations and local interpretable model-a
20

Mehta, Harshkumar, and Kalpdrum Passi. "Social Media Hate Speech Detection Using Explainable Artificial Intelligence (XAI)." Algorithms 15, no. 8 (2022): 291. http://dx.doi.org/10.3390/a15080291.

Abstract:
Explainable artificial intelligence (XAI) characteristics have flexible and multifaceted potential in hate speech detection by deep learning models. Interpreting and explaining decisions made by complex artificial intelligence (AI) models to understand the decision-making process of these model were the aims of this research. As a part of this research study, two datasets were taken to demonstrate hate speech detection using XAI. Data preprocessing was performed to clean data of any inconsistencies, clean the text of the tweets, tokenize and lemmatize the text, etc. Categorical variables were
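The excerpt is cut off before naming the explanation technique used, but a common model-agnostic choice for explaining text classifiers in this setting is LIME's text explainer. The sketch below is a generic illustration on toy sentences (the data, labels, and classifier are assumptions, not the study's datasets or deep models); it highlights which words push a prediction towards the "hateful" class.

```python
# LIME for a text classifier -- a generic model-agnostic sketch on toy sentences,
# not the datasets or deep models used in the study.
from lime.lime_text import LimeTextExplainer       # assumes the `lime` package is installed
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["I hate you and your kind", "have a wonderful day friend",
         "you people are awful", "what a lovely community event"]   # made-up examples
labels = [1, 0, 1, 0]                               # 1 = hateful, 0 = neutral (assumption)

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

explainer = LimeTextExplainer(class_names=["neutral", "hateful"])
explanation = explainer.explain_instance("you and your kind are awful",
                                         clf.predict_proba, num_features=4)
print(explanation.as_list())                        # word-level contributions to the prediction
```
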
21

Lu, Haohui, and Shahadat Uddin. "Explainable Stacking-Based Model for Predicting Hospital Readmission for Diabetic Patients." Information 13, no. 9 (2022): 436. http://dx.doi.org/10.3390/info13090436.

Abstract:
Artificial intelligence is changing the practice of healthcare. While it is essential to employ such solutions, making them transparent to medical experts is more critical. Most of the previous work presented disease prediction models, but did not explain them. Many healthcare stakeholders do not have a solid foundation in these models. Treating these models as ‘black box’ diminishes confidence in their predictions. The development of explainable artificial intelligence (XAI) methods has enabled us to change the models into a ‘white box’. XAI allows human users to comprehend the results from m
22

Abdullah, Talal A. A., Mohd Soperi Mohd Zahid, Waleed Ali, and Shahab Ul Hassan. "B-LIME: An Improvement of LIME for Interpretable Deep Learning Classification of Cardiac Arrhythmia from ECG Signals." Processes 11, no. 2 (2023): 595. http://dx.doi.org/10.3390/pr11020595.

Abstract:
Deep Learning (DL) has gained enormous popularity recently; however, it is an opaque technique that is regarded as a black box. To ensure the validity of the model’s prediction, it is necessary to explain its authenticity. A well-known locally interpretable model-agnostic explanation method (LIME) uses surrogate techniques to simulate reasonable precision and provide explanations for a given ML model. However, LIME explanations are limited to tabular, textual, and image data. They cannot be provided for signal data features that are temporally interdependent. Moreover, LIME suffers from critic
23

Merone, Mario, Alessandro Graziosi, Valerio Lapadula, Lorenzo Petrosino, Onorato d’Angelis, and Luca Vollero. "A Practical Approach to the Analysis and Optimization of Neural Networks on Embedded Systems." Sensors 22, no. 20 (2022): 7807. http://dx.doi.org/10.3390/s22207807.

Abstract:
The exponential increase in internet data poses several challenges to cloud systems and data centers, such as scalability, power overheads, network load, and data security. To overcome these limitations, research is focusing on the development of edge computing systems, i.e., based on a distributed computing model in which data processing occurs as close as possible to where the data are collected. Edge computing, indeed, mitigates the limitations of cloud computing, implementing artificial intelligence algorithms directly on the embedded devices enabling low latency responses without network
24

Kim, Jaehun. "Increasing trust in complex machine learning systems." ACM SIGIR Forum 55, no. 1 (2021): 1–3. http://dx.doi.org/10.1145/3476415.3476435.

Abstract:
Machine learning (ML) has become a core technology for many real-world applications. Modern ML models are applied to unprecedentedly complex and difficult challenges, including very large and subjective problems. For instance, applications towards multimedia understanding have been advanced substantially. Here, it is already prevalent that cultural/artistic objects such as music and videos are analyzed and served to users according to their preference, enabled through ML techniques. One of the most recent breakthroughs in ML is Deep Learning (DL), which has been immensely adopted to tackle suc
25

Du, Yuhan, Anthony R. Rafferty, Fionnuala M. McAuliffe, John Mehegan, and Catherine Mooney. "Towards an explainable clinical decision support system for large-for-gestational-age births." PLOS ONE 18, no. 2 (2023): e0281821. http://dx.doi.org/10.1371/journal.pone.0281821.

Abstract:
A myriad of maternal and neonatal complications can result from delivery of a large-for-gestational-age (LGA) infant. LGA birth rates have increased in many countries since the late 20th century, partially due to a rise in maternal body mass index, which is associated with LGA risk. The objective of the current study was to develop LGA prediction models for women with overweight and obesity for the purpose of clinical decision support in a clinical setting. Maternal characteristics, serum biomarkers and fetal anatomy scan measurements for 465 pregnant women with overweight and obesity before a
26

Antoniadi, Anna Markella, Yuhan Du, Yasmine Guendouz, et al. "Current Challenges and Future Opportunities for XAI in Machine Learning-Based Clinical Decision Support Systems: A Systematic Review." Applied Sciences 11, no. 11 (2021): 5088. http://dx.doi.org/10.3390/app11115088.

Abstract:
Machine Learning and Artificial Intelligence (AI) more broadly have great immediate and future potential for transforming almost all aspects of medicine. However, in many applications, even outside medicine, a lack of transparency in AI applications has become increasingly problematic. This is particularly pronounced where users need to interpret the output of AI systems. Explainable AI (XAI) provides a rationale that allows users to understand why a system has produced a given output. The output can then be interpreted within a given context. One area that is in great need of XAI is that of C
27

Kim, Kipyo, Hyeonsik Yang, Jinyeong Yi, et al. "Real-Time Clinical Decision Support Based on Recurrent Neural Networks for In-Hospital Acute Kidney Injury: External Validation and Model Interpretation." Journal of Medical Internet Research 23, no. 4 (2021): e24120. http://dx.doi.org/10.2196/24120.

Abstract:
Background Acute kidney injury (AKI) is commonly encountered in clinical practice and is associated with poor patient outcomes and increased health care costs. Despite it posing significant challenges for clinicians, effective measures for AKI prediction and prevention are lacking. Previously published AKI prediction models mostly have a simple design without external validation. Furthermore, little is known about the process of linking model output and clinical decisions due to the black-box nature of neural network models. Objective We aimed to present an externally validated recurrent neura
28

Abir, Wahidul Hasan, Md Fahim Uddin, Faria Rahman Khanam, et al. "Explainable AI in Diagnosing and Anticipating Leukemia Using Transfer Learning Method." Computational Intelligence and Neuroscience 2022 (April 27, 2022): 1–14. http://dx.doi.org/10.1155/2022/5140148.

Abstract:
White blood cells (WBCs) are blood cells that fight infections and diseases as a part of the immune system. They are also known as “defender cells.” But the imbalance in the number of WBCs in the blood can be hazardous. Leukemia is the most common blood cancer caused by an overabundance of WBCs in the immune system. Acute lymphocytic leukemia (ALL) usually occurs when the bone marrow creates many immature WBCs that destroy healthy cells. People of all ages, including children and adolescents, can be affected by ALL. The rapid proliferation of atypical lymphocyte cells can cause a reduction in
29

Wikle, Christopher K., Abhirup Datta, Bhava Vyasa Hari, et al. "An illustration of model agnostic explainability methods applied to environmental data." Environmetrics, October 25, 2022. http://dx.doi.org/10.1002/env.2772.

30

Xu, Zhichao, Hansi Zeng, Juntao Tan, Zuohui Fu, Yongfeng Zhang, and Qingyao Ai. "A Reusable Model-agnostic Framework for Faithfully Explainable Recommendation and System Scrutability." ACM Transactions on Information Systems, June 18, 2023. http://dx.doi.org/10.1145/3605357.

Abstract:
State-of-the-art industrial-level recommender system applications mostly adopt complicated model structures such as deep neural networks. While this helps with the model performance, the lack of system explainability caused by these nearly blackbox models also raises concerns and potentially weakens the users’ trust on the system. Existing work on explainable recommendation mostly focuses on designing interpretable model structures to generate model-intrinsic explanations. However, most of them have complex structures and it is difficult to directly apply these designs onto existing recommenda
31

Joyce, Dan W., Andrey Kormilitzin, Katharine A. Smith, and Andrea Cipriani. "Explainable artificial intelligence for mental health through transparency and interpretability for understandability." npj Digital Medicine 6, no. 1 (2023). http://dx.doi.org/10.1038/s41746-023-00751-9.

Abstract:
AbstractThe literature on artificial intelligence (AI) or machine learning (ML) in mental health and psychiatry lacks consensus on what “explainability” means. In the more general XAI (eXplainable AI) literature, there has been some convergence on explainability meaning model-agnostic techniques that augment a complex model (with internal mechanics intractable for human understanding) with a simpler model argued to deliver results that humans can comprehend. Given the differing usage and intended meaning of the term “explainability” in AI and ML, we propose instead to approximate model/algorit
32

Nakashima, Heitor Hoffman, Daielly Mantovani, and Celso Machado Junior. "Users’ trust in black-box machine learning algorithms." Revista de Gestão, October 25, 2022. http://dx.doi.org/10.1108/rege-06-2022-0100.

Abstract:
PurposeThis paper aims to investigate whether professional data analysts’ trust of black-box systems is increased by explainability artifacts.Design/methodology/approachThe study was developed in two phases. First a black-box prediction model was estimated using artificial neural networks, and local explainability artifacts were estimated using local interpretable model-agnostic explanations (LIME) algorithms. In the second phase, the model and explainability outcomes were presented to a sample of data analysts from the financial market and their trust of the models was measured. Finally, inte
33

Szepannek, Gero, and Karsten Lübke. "Explaining Artificial Intelligence with Care." KI - Künstliche Intelligenz, May 16, 2022. http://dx.doi.org/10.1007/s13218-022-00764-8.

Abstract:
AbstractIn the recent past, several popular failures of black box AI systems and regulatory requirements have increased the research interest in explainable and interpretable machine learning. Among the different available approaches of model explanation, partial dependence plots (PDP) represent one of the most famous methods for model-agnostic assessment of a feature’s effect on the model response. Although PDPs are commonly used and easy to apply they only provide a simplified view on the model and thus risk to be misleading. Relying on a model interpretation given by a PDP can be of dramati
34

Sharma, Jeetesh, Murari Lal Mittal, Gunjan Soni, and Arvind Keprate. "Explainable Artificial Intelligence (XAI) Approaches in Predictive Maintenance: A Review." Recent Patents on Engineering 18 (April 17, 2023). http://dx.doi.org/10.2174/1872212118666230417084231.

Abstract:
Background: Predictive maintenance (PdM) is a technique that keeps track of the condition and performance of equipment during normal operation to reduce the possibility of failures. Accurate anomaly detection, fault diagnosis, and fault prognosis form the basis of a PdM procedure. Objective: This paper aims to explore and discuss research addressing PdM using machine learning and complications using explainable artificial intelligence (XAI) techniques. Methods: While machine learning and artificial intelligence techniques have gained great interest in recent years, the absence of model interpr
35

Szczepański, Mateusz, Marek Pawlicki, Rafał Kozik, and Michał Choraś. "New explainability method for BERT-based model in fake news detection." Scientific Reports 11, no. 1 (2021). http://dx.doi.org/10.1038/s41598-021-03100-6.

Abstract:
AbstractThe ubiquity of social media and their deep integration in the contemporary society has granted new ways to interact, exchange information, form groups, or earn money—all on a scale never seen before. Those possibilities paired with the widespread popularity contribute to the level of impact that social media display. Unfortunately, the benefits brought by them come at a cost. Social Media can be employed by various entities to spread disinformation—so called ‘Fake News’, either to make a profit or influence the behaviour of the society. To reduce the impact and spread of Fake News, a
36

ÖZTOPRAK, Samet, and Zeynep ORMAN. "A New Model-Agnostic Method and Implementation for Explaining the Prediction on Finance Data." European Journal of Science and Technology, June 29, 2022. http://dx.doi.org/10.31590/ejosat.1079145.

Abstract:
Artificial neural networks (ANNs) are widely used in critical mission systems such as healthcare, self-driving vehicles and the army, which directly affect human life, and in predicting data related to these systems. However, the black-box nature of ANN algorithms makes their use in mission-critical applications difficult, while raising ethical and forensic concerns that lead to a lack of trust. The development of the Artificial Intelligence (AI) day by day and gaining more space in our lives have revealed that the results obtained from these algorithms should be more explainable and understan
37

Bachoc, François, Fabrice Gamboa, Max Halford, Jean-Michel Loubes, and Laurent Risser. "Explaining machine learning models using entropic variable projection." Information and Inference: A Journal of the IMA 12, no. 3 (2023). http://dx.doi.org/10.1093/imaiai/iaad010.

Abstract:
Abstract In this paper, we present a new explainability formalism designed to shed light on how each input variable of a test set impacts the predictions of machine learning models. Hence, we propose a group explainability formalism for trained machine learning decision rules, based on their response to the variability of the input variables distribution. In order to emphasize the impact of each input variable, this formalism uses an information theory framework that quantifies the influence of all input–output observations based on entropic projections. This is thus the first unified and mode
38

Loveleen, Gaur, Bhandari Mohan, Bhadwal Singh Shikhar, Jhanjhi Nz, Mohammad Shorfuzzaman, and Mehedi Masud. "Explanation-driven HCI Model to Examine the Mini-Mental State for Alzheimer’s Disease." ACM Transactions on Multimedia Computing, Communications, and Applications, April 2022. http://dx.doi.org/10.1145/3527174.

Abstract:
Directing research on Alzheimer’s towards only early prediction and accuracy cannot be considered a feasible approach towards tackling a ubiquitous degenerative disease today. Applying deep learning (DL), Explainable artificial intelligence(XAI) and advancing towards the human-computer interface(HCI) model can be a leap forward in medical research. This research aims to propose a robust explainable HCI model using shapley additive explanation (SHAP), local interpretable model-agnostic explanations (LIME) and DL algorithms. The use of DL algorithms: logistic regression(80.87%), support vector m
39

Vilone, Giulia, and Luca Longo. "A Quantitative Evaluation of Global, Rule-Based Explanations of Post-Hoc, Model Agnostic Methods." Frontiers in Artificial Intelligence 4 (November 3, 2021). http://dx.doi.org/10.3389/frai.2021.717899.

Abstract:
Understanding the inferences of data-driven, machine-learned models can be seen as a process that discloses the relationships between their input and output. These relationships consist and can be represented as a set of inference rules. However, the models usually do not explicit these rules to their end-users who, subsequently, perceive them as black-boxes and might not trust their predictions. Therefore, scholars have proposed several methods for extracting rules from data-driven machine-learned models to explain their logic. However, limited work exists on the evaluation and comparison of
40

Alabi, Rasheed Omobolaji, Mohammed Elmusrati, Ilmo Leivo, Alhadi Almangush, and Antti A. Mäkitie. "Machine learning explainability in nasopharyngeal cancer survival using LIME and SHAP." Scientific Reports 13, no. 1 (2023). http://dx.doi.org/10.1038/s41598-023-35795-0.

Abstract:
AbstractNasopharyngeal cancer (NPC) has a unique histopathology compared with other head and neck cancers. Individual NPC patients may attain different outcomes. This study aims to build a prognostic system by combining a highly accurate machine learning model (ML) model with explainable artificial intelligence to stratify NPC patients into low and high chance of survival groups. Explainability is provided using Local Interpretable Model Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) techniques. A total of 1094 NPC patients were retrieved from the Surveillance, Epidemiol
41

Bogdanova, Anna, Akira Imakura, and Tetsuya Sakurai. "DC-SHAP Method for Consistent Explainability in Privacy-Preserving Distributed Machine Learning." Human-Centric Intelligent Systems, July 6, 2023. http://dx.doi.org/10.1007/s44230-023-00032-4.

Abstract:
AbstractEnsuring the transparency of machine learning models is vital for their ethical application in various industries. There has been a concurrent trend of distributed machine learning designed to limit access to training data for privacy concerns. Such models, trained over horizontally or vertically partitioned data, present a challenge for explainable AI because the explaining party may have a biased view of background data or a partial view of the feature space. As a result, explanations obtained from different participants of distributed machine learning might not be consistent with on
42

Zini, Julia El, and Mariette Awad. "On the Explainability of Natural Language Processing Deep Models." ACM Computing Surveys, July 19, 2022. http://dx.doi.org/10.1145/3529755.

Abstract:
Despite their success, deep networks are used as black-box models with outputs that are not easily explainable during the learning and the prediction phases. This lack of interpretability is significantly limiting the adoption of such models in domains where decisions are critical such as the medical and legal fields. Recently, researchers have been interested in developing methods that help explain individual decisions and decipher the hidden representations of machine learning models in general and deep networks specifically. While there has been a recent explosion of work on Ex plainable A
43

Esam Noori, Worood, and A. S. Albahri. "Towards Trustworthy Myopia Detection: Integration Methodology of Deep Learning Approach, XAI Visualization, and User Interface System." Applied Data Science and Analysis, February 23, 2023, 1–15. http://dx.doi.org/10.58496/adsa/2023/001.

Abstract:
Myopia, a prevalent vision disorder with potential complications if untreated, requires early and accurate detection for effective treatment. However, traditional diagnostic methods often lack trustworthiness and explainability, leading to biases and mistrust. This study presents a four-phase methodology to develop a robust myopia detection system. In the initial phase, the dataset containing training and testing images is located, preprocessed, and balanced. Subsequently, two models are deployed: a pre-trained VGG16 model renowned for image classification tasks, and a sequential CNN with conv
44

Filho, Renato Miranda, Anísio M. Lacerda, and Gisele L. Pappa. "Explainable regression via prototypes." ACM Transactions on Evolutionary Learning and Optimization, December 15, 2022. http://dx.doi.org/10.1145/3576903.

Abstract:
Model interpretability/explainability is increasingly a concern when applying machine learning to real-world problems. In this paper, we are interested in explaining regression models by exploiting prototypes, which are exemplar cases in the problem domain. Previous works focused on finding prototypes that are representative of all training data but ignore the model predictions, i.e., they explain the data distribution but not necessarily the predictions. We propose a two-level model-agnostic method that considers prototypes to provide global and local explanations for regression problems and
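To make the idea concrete, the sketch below shows one generic flavour of prototype-based explanation for regression: exemplar training points are selected globally (here, via k-means) and an individual prediction is then described by its nearest prototypes and their observed targets. This is an assumption-laden illustration, not the two-level method proposed in the paper.

```python
# Prototype-flavoured explanation for a regression model -- a generic sketch,
# not the two-level method proposed in the paper. Data and k are assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=400, n_features=4, noise=10.0, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

# Global step: take the training points closest to the k-means centres as prototypes.
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
proto_idx = [int(np.argmin(np.linalg.norm(X - c, axis=1))) for c in km.cluster_centers_]

# Local step: describe one prediction through its nearest prototypes and their targets.
x0 = X[0]
dists = np.array([np.linalg.norm(x0 - X[i]) for i in proto_idx])
print("model prediction for x0:", float(model.predict([x0])[0]))
for rank in np.argsort(dists)[:2]:
    i = proto_idx[rank]
    print(f"prototype {i}: distance={dists[rank]:.2f}, "
          f"model output={float(model.predict([X[i]])[0]):.2f}, true target={y[i]:.2f}")
```
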
45

Ahmed, Zia U., Kang Sun, Michael Shelly, and Lina Mu. "Explainable artificial intelligence (XAI) for exploring spatial variability of lung and bronchus cancer (LBC) mortality rates in the contiguous USA." Scientific Reports 11, no. 1 (2021). http://dx.doi.org/10.1038/s41598-021-03198-8.

Abstract:
AbstractMachine learning (ML) has demonstrated promise in predicting mortality; however, understanding spatial variation in risk factor contributions to mortality rate requires explainability. We applied explainable artificial intelligence (XAI) on a stack-ensemble machine learning model framework to explore and visualize the spatial distribution of the contributions of known risk factors to lung and bronchus cancer (LBC) mortality rates in the conterminous United States. We used five base-learners—generalized linear model (GLM), random forest (RF), Gradient boosting machine (GBM), extreme Gra
46

Chen, Tao, Meng Song, Hongxun Hui, and Huan Long. "Battery Electrode Mass Loading Prognostics and Analysis for Lithium-Ion Battery–Based Energy Storage Systems." Frontiers in Energy Research 9 (October 5, 2021). http://dx.doi.org/10.3389/fenrg.2021.754317.

Abstract:
With the rapid development of renewable energy, the lithium-ion battery has become one of the most important sources to store energy for many applications such as electrical vehicles and smart grids. As battery performance would be highly and directly affected by its electrode manufacturing process, it is vital to design an effective solution for achieving accurate battery electrode mass loading prognostics at early manufacturing stages and analyzing the effects of manufacturing parameters of interest. To achieve this, this study proposes a hybrid data analysis solution, which integrates the k
47

Javed, Abdul Rehman, Habib Ullah Khan, Mohammad Kamel Bader Alomari, et al. "Toward explainable AI-empowered cognitive health assessment." Frontiers in Public Health 11 (March 9, 2023). http://dx.doi.org/10.3389/fpubh.2023.1024195.

Abstract:
Explainable artificial intelligence (XAI) is of paramount importance to various domains, including healthcare, fitness, skill assessment, and personal assistants, to understand and explain the decision-making process of the artificial intelligence (AI) model. Smart homes embedded with smart devices and sensors enabled many context-aware applications to recognize physical activities. This study presents XAI-HAR, a novel XAI-empowered human activity recognition (HAR) approach based on key features identified from the data collected from sensors located at different places in a smart home. XAI-HA
48

Mustafa, Ahmad, Klaas Koster, and Ghassan AlRegib. "Explainable Machine Learning for Hydrocarbon Risk Assessment." GEOPHYSICS, July 13, 2023, 1–52. http://dx.doi.org/10.1190/geo2022-0594.1.

Abstract:
Hydrocarbon prospect risk assessment is an important process in oil and gas exploration involving the integrated analysis of various geophysical data modalities including seismic data, well logs, and geological information to estimate the likelihood of drilling success for a given drill location. Over the years, geophysicists have attempted to understand the various factors at play influencing the probability of success for hydrocarbon prospects. Towards this end, a large database of prospect drill outcomes and associated attributes has been collected and analyzed via correlation-based techniq
49

Yang, Darrion Bo-Yun, Alexander Smith, Emily J. Smith, et al. "The State of Machine Learning in Outcomes Prediction of Transsphenoidal Surgery: A Systematic Review." Journal of Neurological Surgery Part B: Skull Base, September 12, 2022. http://dx.doi.org/10.1055/a-1941-3618.

Abstract:
The purpose of this analysis is to assess the use of machine learning (ML) algorithms in the prediction of post-operative outcomes, including complications, recurrence, and death in transsphenoidal surgery. Following PRISMA guidelines, we systematically reviewed all papers that used at least one ML algorithm to predict outcomes after transsphenoidal surgery. We searched Scopus, PubMed, and Web of Science databases for studies published prior to May 12th, 2021. We identified 13 studies enrolling 5048 patients. We extracted the general characteristics of each study; the sensitivity, specificity,