
Journal articles on the topic 'Deterministic methods for XAI'



Consult the top 50 journal articles for your research on the topic 'Deterministic methods for XAI.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Bhaskaran, Venkatsubramaniam, and Pallav Kumar Baruah. "A Novel Approach to Explainable AI using Formal Concept Lattice." International Journal of Innovative Technology and Exploring Engineering (IJITEE) 11, no. 7 (2022): 36–48. https://doi.org/10.35940/ijitee.G9992.0611722.

Abstract:
Current approaches in explainable AI use an interpretable model to approximate a black-box model or use gradient techniques to determine the salient parts of the input. While it is true that such approaches provide intuition about the black-box model, the primary purpose of an explanation is to be exact at an individual instance and also from a global perspective, which is difficult to obtain using such model-based approximations or from salient parts. On the other hand, traditional, deterministic approaches satisfy this primary purpose of explainability, being exact at an individual instance and globally, while posing a challenge to scale for large amounts of data. In this work, we propose a novel, deterministic approach to explainability using a formal concept lattice for classification problems that reveals accurate explanations both globally and locally, including the generation of similar and contrastive examples around an instance. This technique consists of preliminary lattice construction, synthetic data generation using implications from the preliminary lattice, followed by actual lattice construction, which is used to generate local, global, similar and contrastive explanations. Using sanity tests like implementation invariance, input transformation invariance, model parameter randomization sensitivity and model-outcome relationship randomization sensitivity, its credibility is proven. Explanations from the lattice are compared to a white-box model in order to prove their trustworthiness.
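The lattice construction described above rests on the standard derivation operators of Formal Concept Analysis. The snippet below is a minimal, illustrative sketch of those operators in plain Python; it is not the authors' implementation, and the toy object-attribute context is invented purely for illustration.

```python
# Minimal Formal Concept Analysis (FCA) sketch: derivation operators and a
# concept check over a toy object-attribute context (illustrative only).

context = {                       # object -> set of binary attributes
    "inst1": {"age>30", "income_high"},
    "inst2": {"age>30", "income_low"},
    "inst3": {"age<=30", "income_high"},
}
attributes = {a for attrs in context.values() for a in attrs}

def common_attributes(objects):
    """A' : attributes shared by every object in the set."""
    sets = [context[o] for o in objects]
    return set.intersection(*sets) if sets else set(attributes)

def common_objects(attrs):
    """B' : objects that possess every attribute in the set."""
    return {o for o, a in context.items() if attrs <= a}

def is_formal_concept(objects, attrs):
    """(A, B) is a formal concept iff A' == B and B' == A (a closed pair)."""
    return common_attributes(objects) == attrs and common_objects(attrs) == objects

extent = {"inst1", "inst3"}
intent = common_attributes(extent)                 # {"income_high"}
print(intent, is_formal_concept(common_objects(intent), intent))
```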
2

Goričan, Tomaž, Milan Terčelj, and Iztok Peruš. "New Approach for Automated Explanation of Material Phenomena (AA6082) Using Artificial Neural Networks and ChatGPT." Applied Sciences 14, no. 16 (2024): 7015. http://dx.doi.org/10.3390/app14167015.

Abstract:
Artificial intelligence methods, especially artificial neural networks (ANNs), have increasingly been utilized for the mathematical description of physical phenomena in (metallic) material processing. Traditional methods often fall short in explaining the complex, real-world data observed in production. While ANN models, typically functioning as “black boxes”, improve production efficiency, a deeper understanding of the phenomena, akin to that provided by explicit mathematical formulas, could enhance this efficiency further. This article proposes a general framework that leverages ANNs (i.e., Conditional Average Estimator—CAE) to explain predicted results alongside their graphical presentation, marking a significant improvement over previous approaches and those relying on expert assessments. Unlike existing Explainable AI (XAI) methods, the proposed framework mimics the standard scientific methodology, utilizing minimal parameters for the mathematical representation of physical phenomena and their derivatives. Additionally, it analyzes the reliability and accuracy of the predictions using well-known statistical metrics, transitioning from deterministic to probabilistic descriptions for better handling of real-world phenomena. The proposed approach addresses both aleatory and epistemic uncertainties inherent in the data. The concept is demonstrated through the hot extrusion of aluminum alloy 6082, where CAE ANN models and predicts key parameters, and ChatGPT explains the results, enabling researchers and/or engineers to better understand the phenomena and outcomes obtained by ANNs.
3

Moradi, A., M. Satari, and M. Momeni. "INDIVIDUAL TREE OF URBAN FOREST EXTRACTION FROM VERY HIGH DENSITY LIDAR DATA." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B3 (June 9, 2016): 337–43. http://dx.doi.org/10.5194/isprsarchives-xli-b3-337-2016.

Abstract:
Airborne LiDAR (Light Detection and Ranging) data have a high potential to provide 3D information about trees. Most proposed methods for extracting individual trees first detect tree-top or tree-bottom points and then use them as starting points in a segmentation algorithm. Hence, in these methods, the number and locations of the detected peak points heavily affect the process of detecting individual trees. In this study, a new method is presented to extract individual tree segments using LiDAR points with a 10 cm point density. In this method, a two-step strategy is performed for the extraction of individual tree LiDAR points: finding deterministic segments of individual tree points and allocating the remaining LiDAR points based on these segments. This research was performed on two study areas in Zeebrugge, Bruges, Belgium (51.33° N, 3.20° E). The accuracy assessment of this method showed that it correctly classified 74.51% of trees, with 21.57% and 3.92% under- and over-segmentation errors, respectively.
4

Moradi, A., M. Satari, and M. Momeni. "INDIVIDUAL TREE OF URBAN FOREST EXTRACTION FROM VERY HIGH DENSITY LIDAR DATA." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B3 (June 9, 2016): 337–43. http://dx.doi.org/10.5194/isprs-archives-xli-b3-337-2016.

Abstract:
Airborne LiDAR (Light Detection and Ranging) data have a high potential to provide 3D information about trees. Most proposed methods for extracting individual trees first detect tree-top or tree-bottom points and then use them as starting points in a segmentation algorithm. Hence, in these methods, the number and locations of the detected peak points heavily affect the process of detecting individual trees. In this study, a new method is presented to extract individual tree segments using LiDAR points with a 10 cm point density. In this method, a two-step strategy is performed for the extraction of individual tree LiDAR points: finding deterministic segments of individual tree points and allocating the remaining LiDAR points based on these segments. This research was performed on two study areas in Zeebrugge, Bruges, Belgium (51.33° N, 3.20° E). The accuracy assessment of this method showed that it correctly classified 74.51% of trees, with 21.57% and 3.92% under- and over-segmentation errors, respectively.
5

Xiang, Yiheng, Yanghe Liu, Xiangxi Zou, Tao Peng, Zhiyuan Yin, and Yufeng Ren. "Post-Processing Ensemble Precipitation Forecasts and Their Applications in Summer Streamflow Prediction over a Mountain River Basin." Atmosphere 14, no. 11 (2023): 1645. http://dx.doi.org/10.3390/atmos14111645.

Abstract:
Ensemble precipitation forecasts (EPFs) can help to extend lead times and provide reliable probabilistic forecasts, which have been widely applied for streamflow predictions by driving hydrological models. Nonetheless, inherent biases and under-dispersion in EPFs require post-processing for accurate application. It is imperative to explore the skillful lead time of post-processed EPFs for summer streamflow predictions, particularly in mountainous regions. In this study, four popular EPFs, i.e., the CMA, ECMWF, JMA, and NCEP, were post-processed by two state-of-the-art methods, i.e., the Bayesian model averaging (BMA) and generator-based post-processing (GPP) methods. These refined forecasts were subsequently integrated with the Xin’anjiang (XAJ) model for summer streamflow prediction. The performances of precipitation forecasts and streamflow predictions were comprehensively evaluated before and after post-processing. The results reveal that raw EPFs frequently deviate from ensemble mean forecasts, particularly underestimating torrential rain. There are also clear underestimations of uncertainty in their probabilistic forecasts. Among the four EPFs, the ECMWF outperforms its peers, delivering skillful precipitation forecasts for 1–7 lead days and streamflow predictions for 1–4 lead days. The effectiveness of post-processing methods varies, yet both GPP and BMA address the under-dispersion of EPFs effectively. The GPP method, recommended as the superior method, can effectively improve both deterministic and probabilistic forecasting accuracy. Moreover, the ECMWF post-processed by GPP extends the effective lead time to seven days and reduces the underestimation of peak flows. The findings of this study underscore the potential benefits of adeptly post-processed EPFs, providing a reference for streamflow prediction over mountain river basins.
6

Kupidura, P. "COMPARISON OF FILTERS DEDICATED TO SPECKLE SUPPRESSION IN SAR IMAGES." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B7 (June 21, 2016): 269–76. http://dx.doi.org/10.5194/isprsarchives-xli-b7-269-2016.

Abstract:
This paper presents the results of research on the effectiveness of different filtering methods dedicated to speckle suppression in SAR images. The tests were performed on RadarSat-2 images and on an artificial image treated with simulated speckle noise. The research analysed the performance of particular filters related to the effectiveness of speckle suppression and to the ability to preserve image details and edges. Speckle is a phenomenon inherent to radar images – a deterministic noise connected with land cover type, but also causing significant changes in the digital numbers of pixels. As a result, it may affect interpretation, classification and other processes concerning radar images. Speckle, resembling “salt and pepper” noise, has the form of a set of relatively small groups of pixels with values markedly different from the values of other pixels representing the same type of land cover. Suppression of this noise may also cause suppression of small image details; therefore, the ability to preserve the important parts of an image was analysed as well. In the present study, selected filters were tested: methods dedicated particularly to speckle noise suppression (Frost, Gamma-MAP, Lee, Lee-Sigma, Local Region), general filtering methods which might be effective in this respect (Mean, Median), and morphological filters (alternate sequential filters with multiple structuring elements and filters by reconstruction). The analysis presented in this paper compared the effectiveness of these different filtering methods. It proved that some of the dedicated radar filters are efficient tools for speckle suppression, but also demonstrated a significant efficiency of the morphological approach, especially its ability to preserve image details.
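For readers unfamiliar with the adaptive filters compared above, the classic Lee filter can be sketched in a few lines with NumPy and SciPy. This is a generic textbook version under simplifying assumptions (square window, crude global noise-variance estimate), not the implementation evaluated in the paper.

```python
# Minimal Lee speckle-filter sketch (single-band intensity image).
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img: np.ndarray, size: int = 7) -> np.ndarray:
    img = img.astype(float)
    local_mean = uniform_filter(img, size)             # local mean in a size x size window
    local_sq_mean = uniform_filter(img * img, size)
    local_var = local_sq_mean - local_mean ** 2        # local variance
    noise_var = img.var()                              # crude global noise-variance estimate
    weight = local_var / (local_var + noise_var + 1e-12)
    return local_mean + weight * (img - local_mean)    # smooth flat areas, keep edges

# Usage on a synthetic speckled image with a step edge:
rng = np.random.default_rng(0)
clean = np.zeros((128, 128)); clean[:, 64:] = 100.0
speckled = clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape)  # multiplicative noise, mean 1
filtered = lee_filter(speckled, size=7)
```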
7

Kupidura, P. "COMPARISON OF FILTERS DEDICATED TO SPECKLE SUPPRESSION IN SAR IMAGES." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B7 (June 21, 2016): 269–76. http://dx.doi.org/10.5194/isprs-archives-xli-b7-269-2016.

Abstract:
This paper presents the results of research on the effectiveness of different filtering methods dedicated to speckle suppression in SAR images. The tests were performed on RadarSat-2 images and on an artificial image treated with simulated speckle noise. The research analysed the performance of particular filters related to the effectiveness of speckle suppression and to the ability to preserve image details and edges. Speckle is a phenomenon inherent to radar images – a deterministic noise connected with land cover type, but also causing significant changes in the digital numbers of pixels. As a result, it may affect interpretation, classification and other processes concerning radar images. Speckle, resembling “salt and pepper” noise, has the form of a set of relatively small groups of pixels with values markedly different from the values of other pixels representing the same type of land cover. Suppression of this noise may also cause suppression of small image details; therefore, the ability to preserve the important parts of an image was analysed as well. In the present study, selected filters were tested: methods dedicated particularly to speckle noise suppression (Frost, Gamma-MAP, Lee, Lee-Sigma, Local Region), general filtering methods which might be effective in this respect (Mean, Median), and morphological filters (alternate sequential filters with multiple structuring elements and filters by reconstruction). The analysis presented in this paper compared the effectiveness of these different filtering methods. It proved that some of the dedicated radar filters are efficient tools for speckle suppression, but also demonstrated a significant efficiency of the morphological approach, especially its ability to preserve image details.
8

Kedar, Mayuri Manish. "Exploring the Effectiveness of SHAP over other Explainable AI Methods." International Journal of Scientific Research in Engineering and Management 8, no. 6 (2024): 1–5. http://dx.doi.org/10.55041/ijsrem35556.

Abstract:
Explainable Artificial Intelligence (XAI) has emerged as a critical domain to demystify the opaque decision-making processes of machine learning models, fostering trust and understanding among users. Among various XAI methods, SHAP (SHapley Additive exPlanations) has gained prominence for its theoretically grounded approach and practical applicability. The paper presents a comprehensive exploration of SHAP’s effectiveness compared to other prominent XAI methods. Methods such as LIME (Local Interpretable Model-agnostic Explanations), permutation importance, Anchors and partial dependence plots are examined for their respective strengths and limitations. Through a detailed analysis of their principles, strengths, and limitations, based on a review of different research papers and some important factors of XAI, the paper aims to provide insights into the effectiveness and suitability of these methods. The study offers valuable guidance for researchers and practitioners seeking to incorporate XAI into their AI systems. Keywords: SHAP, XAI, LIME, permutation importance, Anchors, partial dependence plots.
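A minimal sketch of the kind of side-by-side comparison the abstract discusses, assuming the scikit-learn, shap, and lime packages; the dataset and model below are placeholders, not those used in the paper.

```python
# Sketch: explaining one tree-ensemble prediction with SHAP, LIME and
# permutation importance side by side (packages: scikit-learn, shap, lime).
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Global view: permutation importance on held-out data.
perm = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)

# Local view 1: SHAP values for a single test instance.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te.iloc[[0]])

# Local view 2: LIME explanation for the same instance.
lime_exp = LimeTabularExplainer(
    X_tr.values, feature_names=list(X.columns), mode="classification"
).explain_instance(X_te.iloc[0].values, model.predict_proba, num_features=5)

print(perm.importances_mean.argsort()[::-1][:5])   # top-5 features globally
print(lime_exp.as_list())                          # top-5 features locally (LIME)
```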
9

Jishnu, Setia. "Explainable AI: Methods and Applications." Explainable AI: Methods and Applications 8, no. 10 (2023): 5. https://doi.org/10.5281/zenodo.10021461.

Abstract:
Explainable Artificial Intelligence (XAI) has emerged as a critical area of research, ensuring that AI systems are transparent, interpretable, and accountable. This paper provides a comprehensive overview of various methods and applications of Explainable AI. We delve into the importance of interpretability in AI models, explore different techniques for making complex AI models understandable, and discuss real-world applications where explainability is crucial. Through this paper, I aim to shed light on the advancements in the field of XAI and its potential to bridge the gap between AI's predictions and human understanding. Keywords: Explainable AI (XAI), Interpretable Machine Learning, Transparent AI, AI Transparency, Interpretability in AI, Ethical AI, Explainable Machine Learning Models, Model Transparency, AI Accountability, Trustworthy AI, AI Ethics, XAI Techniques, LIME (Local Interpretable Model-agnostic Explanations), SHAP (SHapley Additive exPlanations), Rule-based Explanation, Post-hoc Explanation, AI and Society, Human-AI Collaboration, AI Regulation, Trust in Artificial Intelligence.
10

Mohseni, Sina, Niloofar Zarei, and Eric D. Ragan. "A Multidisciplinary Survey and Framework for Design and Evaluation of Explainable AI Systems." ACM Transactions on Interactive Intelligent Systems 11, no. 3-4 (2021): 1–45. http://dx.doi.org/10.1145/3387166.

Abstract:
The need for interpretable and accountable intelligent systems grows along with the prevalence of artificial intelligence (AI) applications used in everyday life. Explainable AI (XAI) systems are intended to self-explain the reasoning behind system decisions and predictions. Researchers from different disciplines work together to define, design, and evaluate explainable systems. However, scholars from different disciplines focus on different objectives and fairly independent topics of XAI research, which poses challenges for identifying appropriate design and evaluation methodology and consolidating knowledge across efforts. To this end, this article presents a survey and framework intended to share knowledge and experiences of XAI design and evaluation methods across multiple disciplines. Aiming to support diverse design goals and evaluation methods in XAI research, after a thorough review of XAI-related papers in the fields of machine learning, visualization, and human-computer interaction, we present a categorization of XAI design goals and evaluation methods. Our categorization presents the mapping between design goals for different XAI user groups and their evaluation methods. From our findings, we develop a framework with step-by-step design guidelines paired with evaluation methods to close the iterative design and evaluation cycles in multidisciplinary XAI teams. Further, we provide summarized ready-to-use tables of evaluation methods and recommendations for different goals in XAI research.
11

Nazat, Sazid, Osvaldo Arreche, and Mustafa Abdallah. "On Evaluating Black-Box Explainable AI Methods for Enhancing Anomaly Detection in Autonomous Driving Systems." Sensors 24, no. 11 (2024): 3515. http://dx.doi.org/10.3390/s24113515.

Abstract:
The recent advancements in autonomous driving come with the associated cybersecurity issue of compromising networks of autonomous vehicles (AVs), motivating the use of AI models for detecting anomalies on these networks. In this context, the usage of explainable AI (XAI) for explaining the behavior of these anomaly detection AI models is crucial. This work introduces a comprehensive framework to assess black-box XAI techniques for anomaly detection within AVs, facilitating the examination of both global and local XAI methods to elucidate the decisions made by XAI techniques that explain the behavior of AI models classifying anomalous AV behavior. By considering six evaluation metrics (descriptive accuracy, sparsity, stability, efficiency, robustness, and completeness), the framework evaluates two well-known black-box XAI techniques, SHAP and LIME, involving applying XAI techniques to identify primary features crucial for anomaly classification, followed by extensive experiments assessing SHAP and LIME across the six metrics using two prevalent autonomous driving datasets, VeReMi and Sensor. This study advances the deployment of black-box XAI methods for real-world anomaly detection in autonomous driving systems, contributing valuable insights into the strengths and limitations of current black-box XAI methods within this critical domain.
12

Bhatnagar, Shweta, and Rashmi Agrawal. "Understanding explainable artificial intelligence techniques: a comparative analysis for practical application." Bulletin of Electrical Engineering and Informatics 13, no. 6 (2024): 4451–55. http://dx.doi.org/10.11591/eei.v13i6.8378.

Abstract:
Explainable artificial intelligence (XAI) uses artificial intelligence (AI) tools and techniques to build interpretability in black-box algorithms. XAI methods are classified based on their purpose (pre-model, in-model, and post-model), scope (local or global), and usability (model-agnostic and model-specific). XAI methods and techniques were summarized in this paper with real-life examples of XAI applications. Local interpretable model-agnostic explanations (LIME) and shapley additive explanations (SHAP) methods were applied to the moral dataset to compare the performance outcomes of these two methods. Through this study, it was found that XAI algorithms can be custom-built for enhanced model-specific explanations. There are several limitations to using only one method of XAI and a combination of techniques gives complete insight for all stakeholders.
13

Venkatsubramaniam, Bhaskaran, and Pallav Kumar Baruah. "EVALUATION OF LATTICE BASED XAI." ICTACT Journal on Soft Computing 14, no. 2 (2023): 3180–87. http://dx.doi.org/10.21917/ijsc.2023.0445.

Abstract:
With multiple methods to extract explanations from a black-box model, it becomes significant to evaluate the correctness of these Explainable AI (XAI) techniques themselves. While there are many XAI evaluation methods that need manual intervention, in order to be objective, we use computable XAI evaluation methods to test the basic nature and sanity of an XAI technique. We pick four basic axioms and three sanity tests from the existing literature that XAI techniques are expected to satisfy. Axioms like feature sensitivity, implementation invariance and symmetry preservation, and sanity tests like model parameter randomization, model-outcome relationship and input transformation invariance, are used. After reviewing the axioms and sanity tests, we apply them to existing XAI techniques to check whether they satisfy them. Thereafter, we evaluate our lattice-based XAI technique with these axioms and sanity tests using a mathematical approach. This work proves that these axioms and sanity tests hold, establishing the correctness of the explanations extracted from our lattice-based XAI technique.
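The model-parameter-randomization sanity test named above can be made concrete with a short sketch: attributions computed on a trained model should decorrelate from attributions computed after its weights are randomized. The code below is an illustrative version that uses permutation importance as a stand-in attribution method; it does not reproduce the paper's lattice-based attributions.

```python
# Sketch: model-parameter-randomization sanity test for an attribution method.
# If explanations are faithful, randomizing the model's weights should change them.
import numpy as np
from scipy.stats import spearmanr
from sklearn.datasets import load_breast_cancer
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X_tr, y_tr)

def attribution(m):
    # Stand-in attribution: mean permutation importance per feature.
    return permutation_importance(m, X_te, y_te, n_repeats=5, random_state=0).importances_mean

before = attribution(model)

# Randomize the trained weights in place (destroys the learned function).
rng = np.random.default_rng(0)
model.coefs_ = [rng.normal(size=w.shape) for w in model.coefs_]
model.intercepts_ = [rng.normal(size=b.shape) for b in model.intercepts_]
after = attribution(model)

rho, _ = spearmanr(before, after)
print(f"rank correlation before vs after randomization: {rho:.2f}")  # should be near 0
```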
14

Lopes, Pedro, Eduardo Silva, Cristiana Braga, Tiago Oliveira, and Luís Rosado. "XAI Systems Evaluation: A Review of Human and Computer-Centred Methods." Applied Sciences 12, no. 19 (2022): 9423. http://dx.doi.org/10.3390/app12199423.

Abstract:
The lack of transparency of powerful Machine Learning systems paired with their growth in popularity over the last decade led to the emergence of the eXplainable Artificial Intelligence (XAI) field. Instead of focusing solely on obtaining highly performing models, researchers also develop explanation techniques that help better understand the system’s reasoning for a particular output. An explainable system can be designed, developed, and evaluated from different perspectives, which enables researchers from different disciplines to work together on this topic. However, the multidisciplinary nature of XAI systems creates new challenges for condensing and structuring adequate methodologies to design and evaluate such systems. This paper presents a survey of Human-centred and Computer-centred methods to evaluate XAI systems. We propose a new taxonomy to categorize XAI evaluation methods more clearly and intuitively. This categorization gathers knowledge from different disciplines and organizes the evaluation methods according to a set of categories that represent key properties of XAI systems. Possible ways to use the proposed taxonomy in the design and evaluation of XAI systems are also discussed, alongside some concluding remarks and future directions of research.
15

Tulsani, Vijya, Prashant Sahatiya, Jignasha Parmar, and Jayshree Parmar. "XAI Applications in Medical Imaging: A Survey of Methods and Challenges." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 9 (2023): 181–86. http://dx.doi.org/10.17762/ijritcc.v11i9.8332.

Abstract:
Medical imaging plays a pivotal role in modern healthcare, aiding in the diagnosis, monitoring, and treatment of various medical conditions. With the advent of Artificial Intelligence (AI), medical imaging has witnessed remarkable advancements, promising more accurate and efficient analysis. However, the black-box nature of many AI models used in medical imaging has raised concerns regarding their interpretability and trustworthiness. In response to these challenges, Explainable AI (XAI) has emerged as a critical field, aiming to provide transparent and interpretable solutions for medical image analysis. This survey paper comprehensively explores the methods and challenges associated with XAI applications in medical imaging. The survey begins with an introduction to the significance of XAI in medical imaging, emphasizing the need for transparent and interpretable AI solutions in healthcare. We delve into the background of medical imaging in healthcare and discuss the increasing role of AI in this domain. The paper then presents a detailed survey of various XAI techniques, ranging from interpretable machine learning models to deep learning approaches with built-in interpretability and post hoc interpretation methods. Furthermore, the survey outlines a wide range of applications where XAI is making a substantial impact, including disease diagnosis and detection, medical image segmentation, radiology reports, surgical planning, and telemedicine. Real-world case studies illustrate successful applications of XAI in medical imaging. The challenges associated with implementing XAI in medical imaging are thoroughly examined, addressing issues related to data quality, ethics, regulation, clinical integration, model robustness, and human-AI interaction. The survey concludes by discussing emerging trends and future directions in the field, highlighting the ongoing efforts to enhance XAI methods for medical imaging and the critical role XAI will play in the future of healthcare. This survey paper serves as a comprehensive resource for researchers, clinicians, and policymakers interested in the integration of Explainable AI into medical imaging, providing insights into the latest methods, successful applications, and the challenges that lie ahead.
16

Kuppa, Aditya, and Nhien-An Le-Khac. "Adversarial XAI Methods in Cybersecurity." IEEE Transactions on Information Forensics and Security 16 (2021): 4924–38. http://dx.doi.org/10.1109/tifs.2021.3117075.

17

Damaševičius, Robertas. "Explainable Artificial Intelligence Methods for Breast Cancer Recognition." Innovation Discovery 1, no. 3 (2024): 25. http://dx.doi.org/10.53964/id.2024025.

Abstract:
Breast cancer remains a leading cause of cancer-related mortality among women worldwide, necessitating early and accurate detection for effective treatment and improved survival rates. Artificial intelligence (AI) has shown significant potential in enhancing the diagnostic and prognostic capabilities in breast cancer recognition. However, the black-box nature of many AI models poses challenges for their clinical adoption due to the lack of transparency and interpretability. Explainable AI (XAI) methods address these issues by providing human-understandable explanations of AI models’ decision-making processes, thereby increasing trust, accountability, and ethical compliance. This review explores the current state of XAI methods (Local Interpretable Model-agnostic Explanations, Shapley Additive explanations, Gradient-weighted Class Activation Mapping) in breast cancer recognition, detailing their applications in various tasks such as classification, detection, segmentation, prognosis, and biomarker discovery. By integrating domain-specific knowledge and developing visualization techniques, XAI methods enhance the usability and interpretability of AI systems in clinical settings. The study also identifies the key challenges and future directions in the evaluation of XAI methods, the development of standardized metrics, and the seamless integration of XAI into clinical workflows.
18

Owens, Emer, Barry Sheehan, Martin Mullins, Martin Cunneen, Juliane Ressel, and German Castignani. "Explainable Artificial Intelligence (XAI) in Insurance." Risks 10, no. 12 (2022): 230. http://dx.doi.org/10.3390/risks10120230.

Abstract:
Explainable Artificial Intelligence (XAI) models allow for a more transparent and understandable relationship between humans and machines. The insurance industry represents a fundamental opportunity to demonstrate the potential of XAI, with the industry’s vast stores of sensitive data on policyholders and centrality in societal progress and innovation. This paper analyses current Artificial Intelligence (AI) applications in insurance industry practices and insurance research to assess their degree of explainability. Using search terms representative of (X)AI applications in insurance, 419 original research articles were screened from IEEE Xplore, ACM Digital Library, Scopus, Web of Science and Business Source Complete and EconLit. The resulting 103 articles (between the years 2000–2021) representing the current state-of-the-art of XAI in insurance literature are analysed and classified, highlighting the prevalence of XAI methods at the various stages of the insurance value chain. The study finds that XAI methods are particularly prevalent in claims management, underwriting and actuarial pricing practices. Simplification methods, called knowledge distillation and rule extraction, are identified as the primary XAI technique used within the insurance value chain. This is important as the combination of large models to create a smaller, more manageable model with distinct association rules aids in building XAI models which are regularly understandable. XAI is an important evolution of AI to ensure trust, transparency and moral values are embedded within the system’s ecosystem. The assessment of these XAI foci in the context of the insurance industry proves a worthwhile exploration into the unique advantages of XAI, highlighting to industry professionals, regulators and XAI developers where particular focus should be directed in the further development of XAI. This is the first study to analyse XAI’s current applications within the insurance industry, while simultaneously contributing to the interdisciplinary understanding of applied XAI. Advancing the literature on adequate XAI definitions, the authors propose an adapted definition of XAI informed by the systematic review of XAI literature in insurance.
19

Qian, Jinzhao, Hailong Li, Junqi Wang, and Lili He. "Recent Advances in Explainable Artificial Intelligence for Magnetic Resonance Imaging." Diagnostics 13, no. 9 (2023): 1571. http://dx.doi.org/10.3390/diagnostics13091571.

Abstract:
Advances in artificial intelligence (AI), especially deep learning (DL), have facilitated magnetic resonance imaging (MRI) data analysis, enabling AI-assisted medical image diagnoses and prognoses. However, most of the DL models are considered as “black boxes”. There is an unmet need to demystify DL models so domain experts can trust these high-performance DL models. This has resulted in a sub-domain of AI research called explainable artificial intelligence (XAI). In the last decade, many experts have dedicated their efforts to developing novel XAI methods that are competent at visualizing and explaining the logic behind data-driven DL models. However, XAI techniques are still in their infancy for medical MRI image analysis. This study aims to outline the XAI applications that are able to interpret DL models for MRI data analysis. We first introduce several common MRI data modalities. Then, a brief history of DL models is discussed. Next, we highlight XAI frameworks and elaborate on the principles of multiple popular XAI methods. Moreover, studies on XAI applications in MRI image analysis are reviewed across the tissues/organs of the human body. A quantitative analysis is conducted to reveal the insights of MRI researchers on these XAI techniques. Finally, evaluations of XAI methods are discussed. This survey presents recent advances in the XAI domain for explaining the DL models that have been utilized in MRI applications.
20

S. Suriya. "Credit Card Fraud Detection using Explainable AI Methods." Journal of Information Systems Engineering and Management 10, no. 24s (2025): 415–28. https://doi.org/10.52783/jisem.v10i24s.3917.

Abstract:
Explainable AI (XAI) systems assist users in understanding the underlying processes of AI's decision making. XAI algorithms differ from conventional AI algorithms in that XAI systems highlight decision-making processes and can therefore be regarded as trustworthy. Fraud detection in credit card transactions should be precise, as the volume of global transactions is enormous. Most of these transactions are legitimate, but an alarming number of them are fraudulent. Detecting these fraudulent transactions enables banks and consumers to save enormous amounts of resources that would have otherwise been spent on compensation. Tools like Watson OpenScale by companies like IBM are designed to ensure that AI models are unbiased and transparent. The proposed project relies on the use of XAI methods such as LIME and SHAP designed to identify fraud in credit card transactions. LIME captures the reason an AI model made a decision and presents that rationale in a simplified manner. SHAP illustrates how transaction features, such as transaction amount or location, affect the model's choice. These XAI-enabled methods improve the comprehension of automated fraud detection systems and of why certain transactions were unsuccessfully authenticated. Furthermore, the dataset is balanced using SMOTE, because there could be an imbalance between legitimate and fraudulent transactions. XGBoost handles large datasets well, which is why the predictive model is built with that algorithm. The project merges XAI with powerful fraud detection approaches like SMOTE and hyperparameter tuning to build a system that can be easily evaluated for its effectiveness.
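A minimal sketch of the pipeline the abstract outlines (SMOTE for class balance, XGBoost as the classifier, SHAP for per-transaction explanations), assuming the imbalanced-learn, xgboost, and shap packages; the file name and 'Class' column are hypothetical placeholders.

```python
# Sketch: SMOTE-balanced XGBoost fraud model explained with SHAP
# (assumes a credit-card dataset with a binary 'Class' column; names are placeholders).
import pandas as pd
import shap
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

df = pd.read_csv("creditcard.csv")                  # placeholder path
X, y = df.drop(columns=["Class"]), df["Class"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Balance only the training split so evaluation reflects the real class ratio.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

model = XGBClassifier(n_estimators=300, max_depth=4, eval_metric="logloss")
model.fit(X_bal, y_bal)

# SHAP: per-transaction feature contributions on the held-out set.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
print("test accuracy:", model.score(X_te, y_te))
```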
21

Metta, Carlo, Andrea Beretta, Roberto Pellungrini, Salvatore Rinzivillo, and Fosca Giannotti. "Towards Transparent Healthcare: Advancing Local Explanation Methods in Explainable Artificial Intelligence." Bioengineering 11, no. 4 (2024): 369. http://dx.doi.org/10.3390/bioengineering11040369.

Abstract:
This paper focuses on the use of local Explainable Artificial Intelligence (XAI) methods, particularly the Local Rule-Based Explanations (LORE) technique, within healthcare and medical settings. It emphasizes the critical role of interpretability and transparency in AI systems for diagnosing diseases, predicting patient outcomes, and creating personalized treatment plans. While acknowledging the complexities and inherent trade-offs between interpretability and model performance, our work underscores the significance of local XAI methods in enhancing decision-making processes in healthcare. By providing granular, case-specific insights, local XAI methods like LORE enhance physicians’ and patients’ understanding of machine learning models and their outcome. Our paper reviews significant contributions to local XAI in healthcare, highlighting its potential to improve clinical decision making, ensure fairness, and comply with regulatory standards.
22

Undie, Franka Anyama, Larisa Vladimirovna Kruglova, Matthew Okache Okache, Victor Agorye Undie, and Racheal Aniah Aloye. "Exploring Explainable Artificial Intelligence (XAI) to Enhance Healthcare Decision Support Systems in Nigeria." Journal of Innovative Research 2, no. 3 (2024): 41–48. http://dx.doi.org/10.54536/jir.v2i3.3450.

Abstract:
In Nigeria, the healthcare sector faces big challenges. Limited access to quality services and not enough resources are major issues. Using Artificial Intelligence (AI) could help improve healthcare. But understanding AI predictions is hard, especially in healthcare where transparency is crucial. This article looks at Explainable AI (XAI) to help with this problem in Nigeria. It talks about XAI techniques like feature importance examination, model-agnostic methods (e.g., LIME, SHAP), and interactive visualization tools. These tools can make AI models easier to understand and help with decision-making. A literature review was done to see how XAI can help healthcare in Nigeria. The review included scholarly articles, books, and reports on AI in Nigerian healthcare. We looked at methods from past XAI studies to find common approaches and best practices. XAI offers techniques that make AI models easier to understand in healthcare systems. These techniques include feature importance examination, model-agnostic methods, and interactive visualization tools. Case studies from Nigeria show how XAI is used in areas like disease diagnosis, treatment recommendations, and public health interventions. The findings show the importance of XAI in solving interpretability issues in healthcare AI, especially in places with limited resources like Nigeria. By explaining why AI makes certain predictions, XAI helps healthcare workers make better decisions for Nigerian patients. However, more research is needed to improve XAI techniques for Nigeria’s healthcare system. Policymakers and healthcare leaders should focus on using XAI-enabled systems to drive innovation and improve healthcare outcomes in Nigeria.
23

E. Ben George. "Explainable AI Methods for Predicting Student Grades and Improving Academic Success." Journal of Information Systems Engineering and Management 10, no. 23s (2025): 117–26. https://doi.org/10.52783/jisem.v10i23s.3680.

Abstract:
Introduction: This study explores applying Explainable Artificial Intelligence (XAI) techniques to predict student performance in educational settings. Predicting student outcomes in advance has become more accurate with the help of AI and machine learning. However, there is a lack of clarity in many AI models and their predictions, which are termed black box models. This is a significant problem in the education industry because it can erode administrators' and educators' faith in the explainability or openness of predicted outcomes. Objectives: This research aims to reduce the shortcomings of traditional AI models by making them more understandable using XAI. XAI provides stakeholders with a better understanding of the underlying logic of the predictions to make better decisions. By utilizing XAI techniques, this paper provided valuable and reasonable intelligence-driven student grade predictions to increase confidence in AI systems. These interpretable predictions will guide students who may perform poorly at the very early stage. Methods: This research employs XAI techniques such as SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) to explain the predictions. Students' performance scores, such as quizzes, midterm examinations, practical tests, assignments, and activities were used as features to predict the final grades using the Random Forest Classifier (RFC). The investigation uses Partial Dependence Plots (PDPs), SHAP, and LIME to improve the comprehension of the model's predictions. Results: Applying these XAI techniques will enhance comprehension of the critical features impacting student performance. The results provide clear insights into the areas students can improve to achieve higher grades. They also provide a broader view of the factors that influence academic accomplishment or failure, aiding educators and stakeholders in making proper decisions. Conclusions: The findings demonstrate that using XAI in student performance data will provide transparency in predicting results. The outcome of this research will help create more effective instructional techniques, and students can improve their weaknesses.
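A short sketch of how the partial dependence plots mentioned in the abstract can be produced for a Random Forest grade predictor with scikit-learn; the assessment-score columns, file name, and binary pass/fail target are hypothetical placeholders.

```python
# Sketch: partial dependence of a predicted final grade on assessment scores
# (RandomForestClassifier + scikit-learn PDP; data and column names are placeholders).
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import PartialDependenceDisplay

df = pd.read_csv("student_scores.csv")              # placeholder path
features = ["quiz_avg", "midterm", "practical", "assignment_avg"]   # hypothetical columns
X, y = df[features], df["final_grade"]              # assumed binary pass/fail label

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Partial dependence shows how each score shifts the predicted pass probability,
# the kind of global insight the study reports alongside SHAP and LIME.
PartialDependenceDisplay.from_estimator(model, X, features=["midterm", "quiz_avg"])
plt.show()
```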
24

McDermid, John A., Yan Jia, Zoe Porter, and Ibrahim Habli. "Artificial intelligence explainability: the technical and ethical dimensions." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 379, no. 2207 (2021): 20200363. http://dx.doi.org/10.1098/rsta.2020.0363.

Abstract:
In recent years, several new technical methods have been developed to make AI-models more transparent and interpretable. These techniques are often referred to collectively as ‘AI explainability’ or ‘XAI’ methods. This paper presents an overview of XAI methods, and links them to stakeholder purposes for seeking an explanation. Because the underlying stakeholder purposes are broadly ethical in nature, we see this analysis as a contribution towards bringing together the technical and ethical dimensions of XAI. We emphasize that use of XAI methods must be linked to explanations of human decisions made during the development life cycle. Situated within that wider accountability framework, our analysis may offer a helpful starting point for designers, safety engineers, service providers and regulators who need to make practical judgements about which XAI methods to employ or to require. This article is part of the theme issue ‘Towards symbiotic autonomous systems’.
25

Zhang, Yiming, Ying Weng, and Jonathan Lund. "Applications of Explainable Artificial Intelligence in Diagnosis and Surgery." Diagnostics 12, no. 2 (2022): 237. http://dx.doi.org/10.3390/diagnostics12020237.

Abstract:
In recent years, artificial intelligence (AI) has shown great promise in medicine. However, explainability issues make AI applications in clinical usages difficult. Some research has been conducted into explainable artificial intelligence (XAI) to overcome the limitation of the black-box nature of AI methods. Compared with AI techniques such as deep learning, XAI can provide both decision-making and explanations of the model. In this review, we conducted a survey of the recent trends in medical diagnosis and surgical applications using XAI. We have searched articles published between 2019 and 2021 from PubMed, IEEE Xplore, Association for Computing Machinery, and Google Scholar. We included articles which met the selection criteria in the review and then extracted and analyzed relevant information from the studies. Additionally, we provide an experimental showcase on breast cancer diagnosis, and illustrate how XAI can be applied in medical XAI applications. Finally, we summarize the XAI methods utilized in the medical XAI applications, the challenges that the researchers have met, and discuss the future research directions. The survey result indicates that medical XAI is a promising research direction, and this study aims to serve as a reference to medical experts and AI scientists when designing medical XAI applications.
26

Liao, Q. Vera, Yunfeng Zhang, Ronny Luss, Finale Doshi-Velez, and Amit Dhurandhar. "Connecting Algorithmic Research and Usage Contexts: A Perspective of Contextualized Evaluation for Explainable AI." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 10, no. 1 (2022): 147–59. http://dx.doi.org/10.1609/hcomp.v10i1.21995.

Abstract:
Recent years have seen a surge of interest in the field of explainable AI (XAI), with a plethora of algorithms proposed in the literature. However, a lack of consensus on how to evaluate XAI hinders the advancement of the field. We highlight that XAI is not a monolithic set of technologies---researchers and practitioners have begun to leverage XAI algorithms to build XAI systems that serve different usage contexts, such as model debugging and decision-support. Algorithmic research of XAI, however, often does not account for these diverse downstream usage contexts, resulting in limited effectiveness or even unintended consequences for actual users, as well as difficulties for practitioners to make technical choices. We argue that one way to close the gap is to develop evaluation methods that account for different user requirements in these usage contexts. Towards this goal, we introduce a perspective of contextualized XAI evaluation by considering the relative importance of XAI evaluation criteria for prototypical usage contexts of XAI. To explore the context dependency of XAI evaluation criteria, we conduct two survey studies, one with XAI topical experts and another with crowd workers. Our results urge for responsible AI research with usage-informed evaluation practices, and provide a nuanced understanding of user requirements for XAI in different usage contexts.
27

Fresz, Benjamin, Elena Dubovitskaya, Danilo Brajovic, Marco F. Huber, and Christian Horz. "How Should AI Decisions Be Explained? Requirements for Explanations from the Perspective of European Law." Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society 7 (October 16, 2024): 438–50. http://dx.doi.org/10.1609/aies.v7i1.31648.

Abstract:
This paper investigates the relationship between law and eXplainable Artificial Intelligence (XAI). While there is much discussion about the AI Act, which was adopted by the European Parliament in March 2024, other areas of law seem underexplored. This paper focuses on European (and in part German) law, although with international concepts and regulations such as fiduciary duties, the General Data Protection Regulation (GDPR), and product safety and liability. Based on XAI-taxonomies, requirements for XAI methods are derived from each of the legal fields, resulting in the conclusion that each legal field requires different XAI properties and that the current state of the art does not fulfill these to full satisfaction, especially regarding the correctness (sometimes called fidelity) and confidence estimates of XAI methods.
28

Clement, Tobias, Nils Kemmerzell, Mohamed Abdelaal, and Michael Amberg. "XAIR: A Systematic Metareview of Explainable AI (XAI) Aligned to the Software Development Process." Machine Learning and Knowledge Extraction 5, no. 1 (2023): 78–108. http://dx.doi.org/10.3390/make5010006.

Abstract:
Currently, explainability represents a major barrier that Artificial Intelligence (AI) is facing in regard to its practical implementation in various application domains. To combat the lack of understanding of AI-based systems, Explainable AI (XAI) aims to make black-box AI models more transparent and comprehensible for humans. Fortunately, plenty of XAI methods have been introduced to tackle the explainability problem from different perspectives. However, due to the vast search space, it is challenging for ML practitioners and data scientists to start with the development of XAI software and to optimally select the most suitable XAI methods. To tackle this challenge, we introduce XAIR, a novel systematic metareview of the most promising XAI methods and tools. XAIR differentiates itself from existing reviews by aligning its results to the five steps of the software development process, including requirement analysis, design, implementation, evaluation, and deployment. Through this mapping, we aim to create a better understanding of the individual steps of developing XAI software and to foster the creation of real-world AI applications that incorporate explainability. Finally, we conclude with highlighting new directions for future research.
29

Nguyen, Quoc-Toan. "Advancing Early Alzheimer's Disease Detection in Underdeveloped Areas with Fair Explainable AI Methods." Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society 7, no. 2 (2025): 47–49. https://doi.org/10.1609/aies.v7i2.31907.

Abstract:
Artificial intelligence (AI)-based telemedicine systems for early Alzheimer's detection using low-cost modalities are vital for rural or underdeveloped areas where travelling distance and high-cost devices like MRI are drawbacks. These systems require eXplainable AI (XAI) for reliable outcomes and intuitive explanations. Current XAI evaluations lack input from medical professionals and overlook stakeholder diversity, leading to potential biases. This project aims to develop a cost-effective AI telemedicine system, enhance early AD detection in underdeveloped areas, reduce healthcare disparities, and assess XAI methods with quality and fairness to mitigate biases for high-quality and fair explained outcomes.
30

Stassin, Sédrick, Valentin Corduant, Sidi Ahmed Mahmoudi, and Xavier Siebert. "Explainability and Evaluation of Vision Transformers: An In-Depth Experimental Study." Electronics 13, no. 1 (2023): 175. http://dx.doi.org/10.3390/electronics13010175.

Abstract:
In the era of artificial intelligence (AI), the deployment of intelligent systems for autonomous decision making has surged across diverse fields. However, the widespread adoption of AI technology is hindered by the risks associated with ceding control to autonomous systems, particularly in critical domains. Explainable artificial intelligence (XAI) has emerged as a critical subdomain fostering human understanding and trust. It addresses the opacity of complex models such as vision transformers (ViTs), which have gained prominence lately. With the expanding landscape of XAI methods, selecting the most effective method remains an open question, due to the lack of a ground-truth label for explainability. To avoid subjective human judgment, numerous metrics have been developed, with each aiming to fulfill certain properties required for a valid explanation. This study conducts a detailed evaluation of various XAI methods applied to the ViT architecture, thereby exploring metrics criteria like faithfulness, coherence, robustness, and complexity. We especially study the metric convergence, correlation, discriminative power, and inference time of both XAI methods and metrics. Contrary to expectations, the metrics of each criterion reveal minimal convergence and correlation. This study not only challenges the conventional practice of metric-based ranking of XAI methods but also underscores the dependence of explanations on the experimental environment, thereby presenting crucial considerations for the future development and adoption of XAI methods in real-world applications.
31

Chiaburu, Teodor, Frank Haußer, and Felix Bießmann. "Uncertainty in XAI: Human Perception and Modeling Approaches." Machine Learning and Knowledge Extraction 6, no. 2 (2024): 1170–92. http://dx.doi.org/10.3390/make6020055.

Abstract:
Artificial Intelligence (AI) plays an increasingly integral role in decision-making processes. In order to foster trust in AI predictions, many approaches towards explainable AI (XAI) have been developed and evaluated. Surprisingly, one factor that is essential for trust has been underrepresented in XAI research so far: uncertainty, both with respect to how it is modeled in Machine Learning (ML) and XAI as well as how it is perceived by humans relying on AI assistance. This review paper provides an in-depth analysis of both aspects. We review established and recent methods to account for uncertainty in ML models and XAI approaches and we discuss empirical evidence on how model uncertainty is perceived by human users of XAI systems. We summarize the methodological advancements and limitations of methods and human perception. Finally, we discuss the implications of the current state of the art in model development and research on human perception. We believe highlighting the role of uncertainty in XAI will be helpful to both practitioners and researchers and could ultimately support more responsible use of AI in practical applications.
32

Date, Soham, and Meenakshi Thalor. "AI in Healthcare 5.0: Opportunities and Challenges." International Journal of Applied and Advanced Multidisciplinary Research 2, no. 1 (2024): 39–46. http://dx.doi.org/10.59890/ijaamr.v2i1.281.

Abstract:
The advent of Explainable AI (XAI) in healthcare, often referred to as Healthcare 5.0, presents both significant opportunities and challenges. XAI promises to enhance clinical decision-making by providing transparent and interpretable insights into AI-driven diagnoses and treatment recommendations, thereby increasing trust and adoption among healthcare practitioners. This paper explores the evolving landscape of XAI in healthcare, highlighting its potential to improve patient outcomes, reduce errors, and optimize resource allocation. However, it also addresses the challenges of implementing XAI, including data privacy concerns, regulatory hurdles, and the need for robust validation methods. Balancing these opportunities and challenges is critical for realizing the full potential of XAI in revolutionizing healthcare delivery.
33

Date, Soham, and Meenakshi Thalor. "AI in Healthcare 5.0: Opportunities and Challenges." International Journal of Applied and Advanced Multidisciplinary Research 2, no. 1 (2024): 39–46. https://doi.org/10.59890/ijaamr.v2i1.281.

Abstract:
The advent of Explainable AI (XAI) in healthcare, often referred to as Healthcare 5.0, presents both significant opportunities and challenges. XAI promises to enhance clinical decision-making by providing transparent and interpretable insights into AI-driven diagnoses and treatment recommendations, thereby increasing trust and adoption among healthcare practitioners. This paper explores the evolving landscape of XAI in healthcare, highlighting its potential to improve patient outcomes, reduce errors, and optimize resource allocation. However, it also addresses the challenges of implementing XAI, including data privacy concerns, regulatory hurdles, and the need for robust validation methods. Balancing these opportunities and challenges is critical for realizing the full potential of XAI in revolutionizing healthcare delivery.
34

Matejová, Miroslava, Lucia Gojdičová, and Ján Paralič. "A Study Comparing Explainability Methods: A Medical User Perspective." Acta Electrotechnica et Informatica 25, no. 2 (2025): 3–9. https://doi.org/10.2478/aei-2025-0005.

Abstract:
In recent years, we have witnessed the rapid development of artificial intelligence systems and their presence in various fields. These systems are very efficient and powerful, but often unclear and insufficiently transparent. Explainable artificial intelligence (XAI) methods try to solve this problem. XAI is still a developing area of research, but it already has considerable potential for improving the transparency and trustworthiness of AI models. Thanks to XAI, we can build more responsible and ethical AI systems that better serve people’s needs. The aim of this study is to focus on the role of the user. Part of the work is a comparison of several explainability methods such as LIME, SHAP, ANCHORS and PDP on a selected data set from the field of medicine. The comparison of individual explainability methods from various aspects was carried out using a user study.
35

Korgialas, Christos, Evangelia Pantraki, Angeliki Bolari, Martha Sotiroudi, and Constantine Kotropoulos. "Face Aging by Explainable Conditional Adversarial Autoencoders." Journal of Imaging 9, no. 5 (2023): 96. http://dx.doi.org/10.3390/jimaging9050096.

Full text
Abstract:
This paper deals with Generative Adversarial Networks (GANs) applied to face aging. An explainable face aging framework is proposed that builds on a well-known face aging approach, namely the Conditional Adversarial Autoencoder (CAAE). The proposed framework, namely, xAI-CAAE, couples CAAE with explainable Artificial Intelligence (xAI) methods, such as Saliency maps or Shapley additive explanations, to provide corrective feedback from the discriminator to the generator. xAI-guided training aims to supplement this feedback with explanations that provide a “reason” for the discriminator’s decision. Moreover, Local Interpretable Model-agnostic Explanations (LIME) are leveraged to provide explanations for the face areas that most influence the decision of a pre-trained age classifier. To the best of our knowledge, xAI methods are utilized in the context of face aging for the first time. A thorough qualitative and quantitative evaluation demonstrates that the incorporation of the xAI systems contributed significantly to the generation of more realistic age-progressed and regressed images.
APA, Harvard, Vancouver, ISO, and other styles
36

Altukhi, Zaid M., Sojen Pradhan, and Nasser Aljohani. "A Systematic Literature Review of the Latest Advancements in XAI." Technologies 13, no. 3 (2025): 93. https://doi.org/10.3390/technologies13030093.

Full text
Abstract:
This systematic review details recent advancements in the field of Explainable Artificial Intelligence (XAI) from 2014 to 2024. XAI utilises a wide range of frameworks, techniques, and methods used to interpret machine learning (ML) black-box models. We aim to understand the technical advancements in the field and future directions. We followed the PRISMA methodology and selected 30 relevant publications from three main databases: IEEE Xplore, ACM, and ScienceDirect. Through comprehensive thematic analysis, we categorised the research into three main topics: ‘model developments’, ‘evaluation metrics and methods’, and ‘user-centred and XAI system design’. Our results uncover ‘What’, ‘How’, and ‘Why’ these advancements were developed. We found that 13 papers focused on model developments, 8 studies focused on the XAI evaluation metrics, and 12 papers focused on user-centred and XAI system design. Moreover, it was found that these advancements aimed to bridge the gap between technical model outputs and user understanding.
APA, Harvard, Vancouver, ISO, and other styles
37

Medianovskyi, Kyrylo, and Ahti-Veikko Pietarinen. "On Explainable AI and Abductive Inference." Philosophies 7, no. 2 (2022): 35. http://dx.doi.org/10.3390/philosophies7020035.

Full text
Abstract:
Modern explainable AI (XAI) methods remain far from providing human-like answers to ‘why’ questions, let alone those that satisfactorily agree with human-level understanding. Instead, the results that such methods provide boil down to sets of causal attributions. Currently, the choice of accepted attributions rests largely, if not solely, on the explainee’s understanding of the quality of explanations. The paper argues that such decisions may be transferred from a human to an XAI agent, provided that its machine-learning (ML) algorithms perform genuinely abductive inferences. The paper outlines the key predicament in the current inductive paradigm of ML and the associated XAI techniques, and sketches the desiderata for a truly participatory, second-generation XAI, which is endowed with abduction.
APA, Harvard, Vancouver, ISO, and other styles
38

Aysel, Halil Ibrahim, Xiaohao Cai, and Adam Prugel-Bennett. "Explainable Artificial Intelligence: Advancements and Limitations." Applied Sciences 15, no. 13 (2025): 7261. https://doi.org/10.3390/app15137261.

Full text
Abstract:
Explainable artificial intelligence (XAI) has emerged as a crucial field for understanding and interpreting the decisions of complex machine learning models, particularly deep neural networks. This review presents a structured overview of XAI methodologies, encompassing a diverse range of techniques designed to provide explainability at different levels of abstraction. We cover pixel-level explanation strategies such as saliency maps, perturbation-based methods and gradient-based visualisations, as well as concept-based approaches that align model behaviour with human-understandable semantics. Additionally, we touch upon the relevance of XAI in the context of weakly supervised semantic segmentation. By synthesising recent developments, this paper aims to clarify the landscape of XAI methods and offer insights into their comparative utility and role in fostering trustworthy AI systems.
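As a concrete illustration of the pixel-level strategies this review covers, here is a minimal Python/PyTorch sketch of a vanilla gradient saliency map; the pretrained model and random input are placeholders rather than anything taken from the paper.

# Minimal sketch (illustrative only): a vanilla gradient saliency map,
# one of the pixel-level explanation strategies discussed in this review.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # placeholder input image
scores = model(image)
top_class = scores.argmax(dim=1).item()

# Gradient of the top-class score with respect to the input pixels
scores[0, top_class].backward()
saliency = image.grad.abs().max(dim=1).values  # (1, 224, 224) pixel-importance map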
APA, Harvard, Vancouver, ISO, and other styles
39

Payrovnaziri, Seyedeh Neelufar, Zhaoyi Chen, Pablo Rengifo-Moreno, et al. "Explainable artificial intelligence models using real-world electronic health record data: a systematic scoping review." Journal of the American Medical Informatics Association 27, no. 7 (2020): 1173–85. http://dx.doi.org/10.1093/jamia/ocaa053.

Full text
Abstract:
Objective: To conduct a systematic scoping review of explainable artificial intelligence (XAI) models that use real-world electronic health record data, categorize these techniques according to different biomedical applications, identify gaps of current studies, and suggest future research directions. Materials and Methods: We searched MEDLINE, IEEE Xplore, and the Association for Computing Machinery (ACM) Digital Library to identify relevant papers published between January 1, 2009 and May 1, 2019. We summarized these studies based on the year of publication, prediction tasks, machine learning algorithm, dataset(s) used to build the models, and the scope, category, and evaluation of the XAI methods. We further assessed the reproducibility of the studies in terms of the availability of data and code and discussed open issues and challenges. Results: Forty-two articles were included in this review. We reported the research trend and most-studied diseases. We grouped XAI methods into 5 categories: knowledge distillation and rule extraction (N = 13), intrinsically interpretable models (N = 9), data dimensionality reduction (N = 8), attention mechanism (N = 7), and feature interaction and importance (N = 5). Discussion: XAI evaluation is an open issue that requires a deeper focus in the case of medical applications. We also discuss the importance of reproducibility of research work in this field, as well as the challenges and opportunities of XAI from 2 medical professionals' point of view. Conclusion: Based on our review, we found that XAI evaluation in medicine has not been adequately and formally practiced. Reproducibility remains a critical concern. Ample opportunities exist to advance XAI research in medicine.
APA, Harvard, Vancouver, ISO, and other styles
40

Hoffmann, Rudolf, and Christoph Reich. "A Systematic Literature Review on Artificial Intelligence and Explainable Artificial Intelligence for Visual Quality Assurance in Manufacturing." Electronics 12, no. 22 (2023): 4572. http://dx.doi.org/10.3390/electronics12224572.

Full text
Abstract:
Quality assurance (QA) plays a crucial role in manufacturing to ensure that products meet their specifications. However, manual QA processes are costly and time-consuming, thereby making artificial intelligence (AI) an attractive solution for automation and expert support. In particular, convolutional neural networks (CNNs) have gained a lot of interest in visual inspection. Next to AI methods, explainable artificial intelligence (XAI) systems, which achieve transparency and interpretability by providing insights into the decision-making process of the AI, are interesting methods for achieving quality inspections in manufacturing processes. In this study, we conducted a systematic literature review (SLR) to explore AI and XAI approaches for visual QA (VQA) in manufacturing. Our objective was to assess the current state of the art and identify research gaps in this context. Our findings revealed that AI-based systems predominantly focused on visual quality control (VQC) for defect detection. Research addressing VQA practices, such as process optimization, predictive maintenance, or root cause analysis, is rarer. Least often cited are papers that utilize XAI methods. In conclusion, this survey emphasizes the importance and potential of AI and XAI in VQA across various industries. By integrating XAI, organizations can enhance model transparency, interpretability, and trust in AI systems. Overall, leveraging AI and XAI improves VQA practices and decision-making in industries.
APA, Harvard, Vancouver, ISO, and other styles
41

Du, Yuhan, Anna Markella Antoniadi, Catherine McNestry, Fionnuala M. McAuliffe, and Catherine Mooney. "The Role of XAI in Advice-Taking from a Clinical Decision Support System: A Comparative User Study of Feature Contribution-Based and Example-Based Explanations." Applied Sciences 12, no. 20 (2022): 10323. http://dx.doi.org/10.3390/app122010323.

Full text
Abstract:
Explainable artificial intelligence (XAI) has shown benefits in clinical decision support systems (CDSSs); however, it is still unclear to CDSS developers how to select an XAI method to optimize the advice-taking of healthcare practitioners. We performed a user study on healthcare practitioners based on a machine learning-based CDSS for the prediction of gestational diabetes mellitus to explore and compare two XAI methods: explanation by feature contribution and explanation by example. Participants were asked to make estimates for both correctly and incorrectly predicted cases to determine if there were any over-reliance or self-reliance issues. We examined the weight of advice and healthcare practitioners’ preferences. Our results based on statistical tests showed no significant difference between the two XAI methods regarding the advice-taking. The CDSS explained by either method had a substantial impact on the decision-making of healthcare practitioners; however, both methods may lead to over-reliance issues. We identified the inclination towards CDSS use as a key factor in the advice-taking from an explainable CDSS among obstetricians. Additionally, we found that different types of healthcare practitioners had differing preferences for explanations; therefore, we suggest that CDSS developers should select XAI methods according to their target users.
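For readers unfamiliar with the advice-taking measure mentioned here, the sketch below shows the commonly used judge-advisor definition of weight of advice; this is an assumption about the standard metric, not necessarily the exact formula used in this study.

# Minimal sketch (assumes the standard judge-advisor definition; the study's
# exact operationalization may differ).
def weight_of_advice(initial_estimate: float, advice: float, final_estimate: float) -> float:
    """WOA = (final - initial) / (advice - initial); 0 = advice ignored, 1 = advice fully adopted."""
    if advice == initial_estimate:
        raise ValueError("Weight of advice is undefined when the advice equals the initial estimate.")
    return (final_estimate - initial_estimate) / (advice - initial_estimate)

# Example: a practitioner first estimates 30% risk, the CDSS advises 60%,
# and the practitioner revises to 50%, so WOA = (50 - 30) / (60 - 30), about 0.67.
print(weight_of_advice(30, 60, 50))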
APA, Harvard, Vancouver, ISO, and other styles
42

Joseph, Tibakanya, Male Henry Kenneth, and Nakasi Rose. "Explainable AI for Transparent and Trustworthy Tuberculosis Diagnosis: From Mere Pixels to Actionable Insights." East African Journal of Information Technology 7, no. 1 (2024): 341–54. http://dx.doi.org/10.37284/eajit.7.1.2276.

Full text
Abstract:
Building transparent and trustworthy AI-powered systems for disease diagnosis has become more important than ever due to a lack of understanding of black box models. A lack of transparency and explainability in AI-driven models can propagate biases and erode patients' and medical practitioners' trust. To answer this challenge, Explainable AI (XAI) is rapidly emerging as a practical approach to tackling ethical concerns in the health sector. The overarching purpose of this paper is to highlight the advancement of XAI for tuberculosis (TB) diagnosis and identify the benefits and challenges associated with improved trust in AI-powered TB diagnosis. We explore the potential of XAI in improving TB diagnosis. We attempt to provide a complete plan to promote XAI. We examine the significant problems associated with using XAI in TB diagnosis. We argue that XAI is critical for reliable TB diagnosis by improving the interpretability of AI decision-making processes and recognising possible biases and mistakes. We evaluate techniques and methods for XAI in TB diagnosis and examine the ethical and societal ramifications. By leveraging explainable AI, we can create a more reliable and trustworthy TB diagnostic framework, ultimately improving patient outcomes and global health. Finally, we provide thorough recommendations for developing and implementing XAI in TB diagnosis using X-ray imaging.
APA, Harvard, Vancouver, ISO, and other styles
43

Jain, R. "Transparency in AI Decision Making: A Survey of Explainable AI Methods and Applications." Advances in Robotic Technology 2, no. 1 (2024): 1–10. http://dx.doi.org/10.23880/art-16000110.

Full text
Abstract:
Artificial Intelligence (AI) systems have become pervasive in numerous facets of modern life, wielding considerable influence in critical decision-making realms such as healthcare, finance, criminal justice, and beyond. Yet, the inherent opacity of many AI models presents significant hurdles concerning trust, accountability, and fairness. To address these challenges, Explainable AI (XAI) has emerged as a pivotal area of research, striving to augment the transparency and interpretability of AI systems. This survey paper serves as a comprehensive exploration of the state of the art in XAI methods and their practical applications. We delve into a spectrum of techniques, spanning from model-agnostic approaches to interpretable machine learning models, meticulously scrutinizing their respective strengths, limitations, and real-world implications. The landscape of XAI is rich and varied, with diverse methodologies tailored to address different facets of interpretability. Model-agnostic approaches offer versatility by providing insights into model behavior across various AI architectures. In contrast, interpretable machine learning models prioritize transparency by design, offering inherent understandability at the expense of some predictive performance. Layer-wise Relevance Propagation (LRP) and attention mechanisms delve into the inner workings of neural networks, shedding light on feature importance and decision processes. Additionally, counterfactual explanations open avenues for exploring what-if scenarios, elucidating the causal relationships between input features and model outcomes. In tandem with methodological exploration, this survey scrutinizes the deployment and impact of XAI across multifarious domains. Successful case studies showcase the practical utility of transparent AI in healthcare diagnostics, financial risk assessment, criminal justice systems, and more. By elucidating these use cases, we illuminate the transformative potential of XAI in enhancing decision-making processes while fostering accountability and fairness. Nevertheless, the journey towards fully transparent AI systems is fraught with challenges and opportunities. As we traverse the current landscape of XAI, we identify pressing areas for further research and development. These include refining interpretability metrics, addressing the scalability of XAI techniques to complex models, and navigating the ethical dimensions of transparency in AI decision-making. Through this survey, we endeavor to cultivate a deeper understanding of transparency in AI decision-making, empowering stakeholders to navigate the intricate interplay between accuracy, interpretability, and ethical considerations. By fostering interdisciplinary dialogue and inspiring collaborative innovation, we aspire to catalyze future advancements in Explainable AI, ultimately paving the way towards more accountable and trustworthy AI systems.
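To make one of the surveyed techniques concrete, the following is a minimal Python sketch of a nearest-neighbour counterfactual explanation (the what-if style of explanation mentioned above); the generic model and candidate pool are assumptions for illustration, not artifacts of this survey.

# Minimal sketch (illustrative only): a nearest-neighbour counterfactual,
# i.e. the closest known instance that the model classifies differently.
import numpy as np

def nearest_counterfactual(model, x: np.ndarray, candidates: np.ndarray) -> np.ndarray:
    """Return the candidate closest to x (L2 distance) whose predicted class differs from x's.
    `model` is assumed to expose a scikit-learn-style predict() method."""
    original_class = model.predict(x.reshape(1, -1))[0]
    other = candidates[model.predict(candidates) != original_class]
    if len(other) == 0:
        raise ValueError("No candidate with a different predicted class was found.")
    return other[np.argmin(np.linalg.norm(other - x, axis=1))]

# The feature-wise difference between x and the returned counterfactual indicates
# which changes would flip the model's decision (a what-if explanation).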
APA, Harvard, Vancouver, ISO, and other styles
44

Wyatt, Lucie S., Lennard M. van Karnenbeek, Mark Wijkhuizen, Freija Geldof, and Behdad Dashtbozorg. "Explainable Artificial Intelligence (XAI) for Oncological Ultrasound Image Analysis: A Systematic Review." Applied Sciences 14, no. 18 (2024): 8108. http://dx.doi.org/10.3390/app14188108.

Full text
Abstract:
This review provides an overview of explainable AI (XAI) methods for oncological ultrasound image analysis and compares their performance evaluations. A systematic search of Medline, Embase, and Scopus between 25 March and 14 April 2024 identified 17 studies describing 14 XAI methods, including visualization, semantics, example-based, and hybrid functions. These methods primarily provided specific, local, and post hoc explanations. Performance evaluations focused on AI model performance, with limited assessment of explainability impact. Standardized evaluations incorporating clinical end-users are generally lacking. Enhanced XAI transparency may facilitate AI integration into clinical workflows. Future research should develop real-time methodologies and standardized quantitative evaluative metrics.
APA, Harvard, Vancouver, ISO, and other styles
45

G, Anushree, Suraj B. Madagaonkar, and Ravili C H. "Unveiling the Black Box: A Comprehensive Review of Explainable AI Techniques." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 008 (2024): 1–6. http://dx.doi.org/10.55041/ijsrem37405.

Full text
Abstract:
As artificial intelligence (AI) continues to integrate into various sectors, the complexity and opacity of AI models, particularly in machine learning (ML), pose significant challenges to interpretability and trust. This review paper addresses the critical need for explainable AI (XAI) to enhance understanding and transparency in ML models. We provide a comprehensive survey of state-of-the-art XAI techniques, including feature importance methods such as LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations), as well as perturbation and attention-based mechanisms, to elucidate model decisions. Our analysis spans a diverse range of applications, including finance, education, and healthcare, showcasing the practical utility and impact of XAI methods. We discuss crucial issues such as the trade-offs between model accuracy and interpretability, the design of user-friendly explanations, and the development of comprehensive evaluation metrics. Furthermore, we explore the implications of XAI on user trust and decision-making, emphasizing the importance of reliable and ethical AI systems. This review contributes to the ongoing efforts to make AI systems more interpretable, reliable, and aligned with societal needs, providing a robust foundation for future research and practical implementations of XAI. Keywords: Explainable AI · Machine Learning · Interpretability · Transparency · Ethical AI · XAI Techniques.
APA, Harvard, Vancouver, ISO, and other styles
46

Rahman, Mashfiquer, Shafiq Ullah, Sharmin Nahar, Mohammad Shahadat Hossain, Mostafizur Rahman, and Mostafijur Rahman. "The Role of Explainable AI in cyber threat intelligence: Enhancing transparency and trust in security systems." World Journal of Advanced Research and Reviews 23, no. 2 (2024): 2897–907. https://doi.org/10.30574/wjarr.2024.23.2.2404.

Full text
Abstract:
XAI technology transforms cybersecurity by enabling transparent, secure systems that gain users' trust in AI threat information processes. This research examines how XAI improves cybersecurity systems through CTI by enhancing security models' interpretability and decision-making capabilities based on AI algorithms. The research evaluates how XAI addresses trust problems in typical AI systems because of their "black box" operation. Security frameworks with XAI components enhance user reliability and defensive quality by improving detection methods and response capabilities. Experts have confirmed that transparent artificial intelligence models increase trust between security professionals, policymakers, and organizational units. XAI is vital in modern cybersecurity developments because it strengthens organizational protection while improving decision choices. This study provides reasonable recommendations for industry stakeholders and academic institutions to develop explainable AI strategies for future cybersecurity application development.
APA, Harvard, Vancouver, ISO, and other styles
47

Verma, Ashish. "Advancements in Explainable AI: Bridging the Gap Between Interpretability and Performance in Machine Learning Models." International Journal of Machine Learning, AI & Data Science Evolution 1, no. 01 (2025): 1–8. https://doi.org/10.63665/ijmlaidse.v1i1.01.

Full text
Abstract:
The growing adoption of Artificial Intelligence (AI) and Machine Learning (ML) in critical decision-making areas such as healthcare, finance, and autonomous systems has raised concerns regarding the interpretability of these models. While deep learning and other advanced ML models deliver high accuracy, their "black box" nature makes it difficult to explain their decision-making processes. Explainable AI (XAI) aims to bridge this gap by introducing methods that enhance transparency without significantly compromising performance. This paper explores key advancements in XAI, including model-agnostic and model-specific interpretability techniques, and evaluates their effectiveness in balancing model explainability and performance. We conduct an empirical analysis on commonly used XAI techniques, present a case study on AI-assisted healthcare diagnostics, and analyze stakeholder perspectives through a structured questionnaire. Our findings suggest that while XAI methods improve interpretability and stakeholder trust, they often come with computational and accuracy trade-offs. The study also highlights the challenges and opportunities in integrating XAI into real-world applications and provides recommendations for future research.
APA, Harvard, Vancouver, ISO, and other styles
48

Pellano, Kimji N., Inga Strümke, and Espen A. F. Ihlen. "From Movements to Metrics: Evaluating Explainable AI Methods in Skeleton-Based Human Activity Recognition." Sensors 24, no. 6 (2024): 1940. http://dx.doi.org/10.3390/s24061940.

Full text
Abstract:
The advancement of deep learning in human activity recognition (HAR) using 3D skeleton data is critical for applications in healthcare, security, sports, and human–computer interaction. This paper tackles a well-known gap in the field, which is the lack of testing in the applicability and reliability of XAI evaluation metrics in the skeleton-based HAR domain. We have tested established XAI metrics, namely faithfulness and stability on Class Activation Mapping (CAM) and Gradient-weighted Class Activation Mapping (Grad-CAM) to address this problem. This study introduces a perturbation method that produces variations within the error tolerance of motion sensor tracking, ensuring the resultant skeletal data points remain within the plausible output range of human movement as captured by the tracking device. We used the NTU RGB+D 60 dataset and the EfficientGCN architecture for HAR model training and testing. The evaluation involved systematically perturbing the 3D skeleton data by applying controlled displacements at different magnitudes to assess the impact on XAI metric performance across multiple action classes. Our findings reveal that faithfulness may not consistently serve as a reliable metric across all classes for the EfficientGCN model, indicating its limited applicability in certain contexts. In contrast, stability proves to be a more robust metric, showing dependability across different perturbation magnitudes. Additionally, CAM and Grad-CAM yielded almost identical explanations, leading to closely similar metric outcomes. This suggests a need for the exploration of additional metrics and the application of more diverse XAI methods to broaden the understanding and effectiveness of XAI in skeleton-based HAR.
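As an illustration of the kind of perturbation-based evaluation described here, the sketch below perturbs skeleton coordinates within a small tolerance and scores the stability of an attribution method. It is a simplified stand-in under stated assumptions, not the paper's implementation or its exact metric definitions.

# Minimal sketch (illustrative only): controlled perturbation of 3D skeleton
# data and a simple stability score for an explanation method's attributions.
import numpy as np

def perturb_skeleton(skeleton: np.ndarray, magnitude: float, rng: np.random.Generator) -> np.ndarray:
    """Apply bounded random displacements to every joint coordinate (same shape as input)."""
    return skeleton + rng.uniform(-magnitude, magnitude, size=skeleton.shape)

def stability_score(explain_fn, skeleton: np.ndarray, magnitude: float = 0.01,
                    n_trials: int = 20, seed: int = 0) -> float:
    """Mean L2 distance between attributions for the original and perturbed skeletons
    (lower means more stable). `explain_fn` maps a skeleton array to an attribution array."""
    rng = np.random.default_rng(seed)
    base = explain_fn(skeleton)
    distances = [np.linalg.norm(explain_fn(perturb_skeleton(skeleton, magnitude, rng)) - base)
                 for _ in range(n_trials)]
    return float(np.mean(distances))

# Usage: skeleton shaped (frames, joints, 3); explain_fn could wrap CAM or Grad-CAM
# attributions from a trained HAR model, e.g. score = stability_score(gradcam_fn, skeleton).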
APA, Harvard, Vancouver, ISO, and other styles
49

Lu, Manxue. "An Analysis of Personalization Factors in Explainable Artificial Intelligence (XAI) and Its Correlation with Decision Making." Applied and Computational Engineering 165, no. 1 (2025): 109–18. https://doi.org/10.54254/2755-2721/2025.ld25048.

Full text
Abstract:
This study delves into Explainable Artificial Intelligence (XAI), aiming to address two crucial questions: which aspects should we focus on in personalization, and how can we make XAI more individualized based on them? What is the relationship between XAI and decision making, and how should we balance it? A systematic literature search and the backward snowball method were used to collect 44 core publications. Four types of personalization features were summarized: knowledge, character, XAI understanding, and need. Four common methods to implement personalized XAI were identified: correctness rate, interaction, visualization, and segmentation. The study found that blind pursuit of XAI's credibility can have detrimental effects, suggesting building human-machine systems that use XAI as a tool and allow users to question and overturn XAI's wrong decisions. The study also found that XAI's effectiveness in decision making varies with the user's personalization: for users with limited knowledge, providing more understandable explanations can be beneficial, while for users who only want to decide quickly, XAI can provide a correctness rate and a simple explanation instead of a detailed one. The study acknowledges limitations such as potential sample bias from the backward snowballing and subjective inferences. This research offers a theoretical basis for future personalized XAI research and calls for a more balanced view of XAI's role in decision making.
APA, Harvard, Vancouver, ISO, and other styles
50

Finzel, Bettina. "Current methods in explainable artificial intelligence and future prospects for integrative physiology." Pflügers Archiv - European Journal of Physiology, February 25, 2025. https://doi.org/10.1007/s00424-025-03067-7.

Full text
Abstract:
Explainable artificial intelligence (XAI) is gaining importance in physiological research, where artificial intelligence is now used as an analytical and predictive tool for many medical research questions. The primary goal of XAI is to make AI models understandable for human decision-makers. This can be achieved in particular through providing inherently interpretable AI methods or by making opaque models and their outputs transparent using post hoc explanations. This review introduces XAI core topics and provides a selective overview of current XAI methods in physiology. It further illustrates solved challenges and discusses open ones in XAI research, using existing practical examples from the medical field. The article gives an outlook on two possible future prospects: (1) using XAI methods to provide trustworthy AI for integrative physiological research and (2) integrating physiological expertise about human explanation into XAI method development for useful and beneficial human-AI partnerships.
APA, Harvard, Vancouver, ISO, and other styles