To see the other types of publications on this topic, follow the link: Post-hoc Explainability.

Journal articles on the topic 'Post-hoc Explainability'

Consult the top 50 journal articles for your research on the topic 'Post-hoc Explainability.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Zhang, Xiaopu, Wubing Miao, and Guodong Liu. "Explainable Data Mining Framework of Identifying Root Causes of Rocket Engine Anomalies Based on Knowledge and Physics-Informed Feature Selection." Machines 13, no. 8 (2025): 640. https://doi.org/10.3390/machines13080640.

Full text
Abstract:
Liquid rocket engines occasionally experience abnormal phenomena with unclear mechanisms, causing difficulty in design improvements. To address the above issue, a data mining method that combines ante hoc explainability, post hoc explainability, and prediction accuracy is proposed. For ante hoc explainability, a feature selection method driven by data, models, and domain knowledge is established. Global sensitivity analysis of a physical model combined with expert knowledge and data correlation is utilized to establish the correlations between different types of parameters. Then a two-stage op
2

Acun, Cagla, Ali Ashary, Dan O. Popa, and Olfa Nasraoui. "Optimizing Local Explainability in Robotic Grasp Failure Prediction." Electronics 14, no. 12 (2025): 2363. https://doi.org/10.3390/electronics14122363.

Full text
Abstract:
This paper presents a local explainability mechanism for robotic grasp failure prediction that enhances machine learning transparency at the instance level. Building upon pre hoc explainability concepts, we develop a neighborhood-based optimization approach that leverages the Jensen–Shannon divergence to ensure fidelity between predictor and explainer models at a local level. Unlike traditional post hoc methods such as LIME, our local in-training explainability framework directly optimizes the predictor model during training, then fine-tunes the pre-trained explainer for each test instance wit
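The local-fidelity idea described in this abstract can be illustrated with a small sketch that measures the Jensen–Shannon divergence between a black-box predictor's and a white-box explainer's class distributions over a sampled neighborhood of a test instance. This is a minimal illustration of the general concept under assumed names (predictor, explainer, sigma), not the authors' implementation.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def local_fidelity(predictor, explainer, x, n_samples=200, sigma=0.1, seed=0):
    """Average JS divergence between two models' class distributions on a
    Gaussian neighborhood of instance x (lower = higher local fidelity).
    `predictor` and `explainer` are any objects exposing predict_proba()."""
    rng = np.random.default_rng(seed)
    neighborhood = x + sigma * rng.standard_normal((n_samples, x.shape[0]))
    p = predictor.predict_proba(neighborhood)   # black-box class distributions
    q = explainer.predict_proba(neighborhood)   # white-box class distributions
    # jensenshannon returns the JS *distance* (square root of the divergence)
    return np.mean([jensenshannon(pi, qi, base=2) ** 2 for pi, qi in zip(p, q)])
```

A score near zero means the explainer mimics the predictor closely around x.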
3

Alfano, Gianvincenzo, Sergio Greco, Domenico Mandaglio, Francesco Parisi, Reza Shahbazian, and Irina Trubitsyna. "Even-if Explanations: Formal Foundations, Priorities and Complexity." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 15 (2025): 15347–55. https://doi.org/10.1609/aaai.v39i15.33684.

Full text
Abstract:
Explainable AI has received significant attention in recent years. Machine learning models often operate as black boxes, lacking explainability and transparency while supporting decision-making processes. Local post-hoc explainability queries attempt to answer why individual inputs are classified in a certain way by a given model. While there has been important work on counterfactual explanations, less attention has been devoted to semifactual ones. In this paper, we focus on local post-hoc explainability queries within the semifactual `even-if' thinking and their computational complexity amon
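A semifactual 'even-if' explanation of the kind studied above asserts that the prediction for an instance would not change even if a feature took a different value. A brute-force check of such a statement is easy to sketch for any classifier (an illustration of the idea only; it does not reflect the formal framework or complexity results of the paper):

```python
import numpy as np

def holds_even_if(model, x, feature_idx, candidate_values):
    """Return True if the predicted class of x stays the same for every
    alternative value of feature `feature_idx` in `candidate_values`,
    i.e. a brute-force check of a semifactual 'even-if' statement."""
    base_class = model.predict(x.reshape(1, -1))[0]
    for v in candidate_values:
        x_alt = x.copy()
        x_alt[feature_idx] = v
        if model.predict(x_alt.reshape(1, -1))[0] != base_class:
            return False
    return True
```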
4

Mochaourab, Rami, Arun Venkitaraman, Isak Samsten, Panagiotis Papapetrou, and Cristian R. Rojas. "Post Hoc Explainability for Time Series Classification: Toward a Signal Processing Perspective." IEEE Signal Processing Magazine 39, no. 4 (2022): 119–29. http://dx.doi.org/10.1109/msp.2022.3155955.

Full text
5

Fauvel, Kevin, Tao Lin, Véronique Masson, Élisa Fromont, and Alexandre Termier. "XCM: An Explainable Convolutional Neural Network for Multivariate Time Series Classification." Mathematics 9, no. 23 (2021): 3137. http://dx.doi.org/10.3390/math9233137.

Full text
Abstract:
Multivariate Time Series (MTS) classification has gained importance over the past decade with the increase in the number of temporal datasets in multiple domains. The current state-of-the-art MTS classifier is a heavyweight deep learning approach, which outperforms the second-best MTS classifier only on large datasets. Moreover, this deep learning approach cannot provide faithful explanations as it relies on post hoc model-agnostic explainability methods, which could prevent its use in numerous applications. In this paper, we present XCM, an eXplainable Convolutional neural network for MTS cla
6

Lee, Gin Chong, and Chu Kiong Loo. "On the Post Hoc Explainability of Optimized Self-Organizing Reservoir Network for Action Recognition." Sensors 22, no. 5 (2022): 1905. http://dx.doi.org/10.3390/s22051905.

Full text
Abstract:
This work proposes a novel unsupervised self-organizing network, called the Self-Organizing Convolutional Echo State Network (SO-ConvESN), for learning node centroids and interconnectivity maps compatible with the deterministic initialization of Echo State Network (ESN) input and reservoir weights, in the context of human action recognition (HAR). To ensure stability and echo state property in the reservoir, Recurrent Plots (RPs) and Recurrence Quantification Analysis (RQA) techniques are exploited for explainability and characterization of the reservoir dynamics and hence tuning ESN hyperpara
7

Hildt, Elisabeth. "What Is the Role of Explainability in Medical Artificial Intelligence? A Case-Based Approach." Bioengineering 12, no. 4 (2025): 375. https://doi.org/10.3390/bioengineering12040375.

Full text
Abstract:
This article reflects on explainability in the context of medical artificial intelligence (AI) applications, focusing on AI-based clinical decision support systems (CDSS). After introducing the concept of explainability in AI and providing a short overview of AI-based clinical decision support systems (CDSSs) and the role of explainability in CDSSs, four use cases of AI-based CDSSs will be presented. The examples were chosen to highlight different types of AI-based CDSSs as well as different types of explanations: a machine learning (ML) tool that lacks explainability; an approach with post ho
8

Boya Marqas, Ridwan, Saman M. Almufti, and Rezhna Azad Yusif. "Unveiling explainability in artificial intelligence: a step towards transparent AI." International Journal of Scientific World 11, no. 1 (2025): 13–20. https://doi.org/10.14419/f2agrs86.

Full text
Abstract:
Explainability in artificial intelligence (AI) is an essential factor for building transparent, trustworthy, and ethical systems, particularly in high-stakes domains such as healthcare, finance, justice, and autonomous systems. This study examines the foundations of AI explainability, its critical role in fostering trust, and the current methodologies used to interpret AI models, such as post-hoc techniques, intrinsically interpretable models, and hybrid approaches. Despite these advancements, challenges persist, including trade-offs between accuracy and interpretability, scalability, et
9

Maddala, Suresh Kumar. "Understanding Explainability in Enterprise AI Models." International Journal of Management Technology 12, no. 1 (2025): 58–68. https://doi.org/10.37745/ijmt.2013/vol12n25868.

Full text
Abstract:
This article examines the critical role of explainability in enterprise AI deployments, where algorithmic transparency has emerged as both a regulatory necessity and a business imperative. As organizations increasingly rely on sophisticated machine learning models for consequential decisions, the "black box" problem threatens stakeholder trust, regulatory compliance, and effective model governance. We explore the multifaceted business case for explainable AI across regulated industries, analyze the spectrum of interpretability techniques—from inherently transparent models to post-hoc explanati
10

Kabir, Sami, Mohammad Shahadat Hossain, and Karl Andersson. "An Advanced Explainable Belief Rule-Based Framework to Predict the Energy Consumption of Buildings." Energies 17, no. 8 (2024): 1797. http://dx.doi.org/10.3390/en17081797.

Full text
Abstract:
The prediction of building energy consumption is beneficial to utility companies, users, and facility managers to reduce energy waste. However, due to various drawbacks of prediction algorithms, such as non-transparent output, ad hoc explanation by post hoc tools, low accuracy, and the inability to deal with data uncertainties, such prediction has limited applicability in this domain. As a result, domain knowledge-based explainability with high accuracy is critical for making energy predictions trustworthy. Motivated by this, we propose an advanced explainable Belief Rule-Based Expert System
11

Maree, Charl, and Christian Omlin. "Reinforcement Learning Your Way: Agent Characterization through Policy Regularization." AI 3, no. 2 (2022): 250–59. http://dx.doi.org/10.3390/ai3020015.

Full text
Abstract:
The increased complexity of state-of-the-art reinforcement learning (RL) algorithms has resulted in an opacity that inhibits explainability and understanding. This has led to the development of several post hoc explainability methods that aim to extract information from learned policies, thus aiding explainability. These methods rely on empirical observations of the policy, and thus aim to generalize a characterization of agents’ behaviour. In this study, we have instead developed a method to imbue agents’ policies with a characteristic behaviour through regularization of their objective funct
12

Yan, Fei, Yunqing Chen, Yiwen Xia, Zhiliang Wang, and Ruoxiu Xiao. "An Explainable Brain Tumor Detection Framework for MRI Analysis." Applied Sciences 13, no. 6 (2023): 3438. http://dx.doi.org/10.3390/app13063438.

Full text
Abstract:
Explainability in medical image analysis plays an important role in the accurate diagnosis and treatment of tumors, which can help medical professionals better understand the image analysis results based on deep models. This paper proposes an explainable brain tumor detection framework that can complete the tasks of segmentation, classification, and explainability. The re-parameterization method is applied to our classification network, and the effect of explainable heatmaps is improved by modifying the network architecture. Our classification model also has the advantage of post-hoc explain
13

Maarten Schraagen, Jan, Sabin Kerwien Lopez, Carolin Schneider, Vivien Schneider, Stephanie Tönjes, and Emma Wiechmann. "The Role of Transparency and Explainability in Automated Systems." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 65, no. 1 (2021): 27–31. http://dx.doi.org/10.1177/1071181321651063.

Full text
Abstract:
This study investigates the differences and effects of transparency and explainability on trust, situation awareness, and satisfaction in the context of an automated car. Three groups were compared in a between-subjects design (n = 73). Participants in every group saw six graphically manipulated videos of an automated car from the driver’s perspective with either transparency, post-hoc explanations or both combined. Transparency resulted in higher trust, higher satisfaction and higher level 2 situational awareness (SA) than explainability. Transparency also resulted in higher level 2 SA than t
14

Acun, Cagla, and Olfa Nasraoui. "Pre Hoc and Co Hoc Explainability: Frameworks for Integrating Interpretability into Machine Learning Training for Enhanced Transparency and Performance." Applied Sciences 15, no. 13 (2025): 7544. https://doi.org/10.3390/app15137544.

Full text
Abstract:
Post hoc explanations for black-box machine learning models have been criticized for potentially inaccurate surrogate models and computational burden at prediction time. We propose pre hoc and co hoc explainability frameworks that integrate interpretability directly into the training process through an inherently interpretable white-box model. Pre hoc uses the white-box model to regularize the black-box model, while co hoc jointly optimizes both models with a shared loss function. We extend these frameworks to generate instance-specific explanations using Jensen–Shannon divergence as a regular
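The co hoc objective summarised in this abstract, a shared loss that fits both a black-box model and a white-box explainer while pulling their output distributions together with a Jensen–Shannon penalty, can be written schematically as below. This is a simplified sketch under assumed names (p_black, p_white, lambda_reg), not the authors' code.

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions."""
    p, q = np.asarray(p) + eps, np.asarray(q) + eps
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def co_hoc_loss(y_true, p_black, p_white, lambda_reg=1.0, eps=1e-12):
    """Shared objective for one sample: both models fit the label, and their
    output distributions are pulled together by a JS-divergence penalty."""
    ce = lambda p: -np.log(p[y_true] + eps)   # cross-entropy of one sample
    return ce(p_black) + ce(p_white) + lambda_reg * js_divergence(p_black, p_white)
```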
15

Ali, Ali Mohammed Omar. "Explainability in AI: Interpretable Models for Data Science." International Journal for Research in Applied Science and Engineering Technology 13, no. 2 (2025): 766–71. https://doi.org/10.22214/ijraset.2025.66968.

Full text
Abstract:
As artificial intelligence (AI) continues to drive advancements across various domains, the need for explainability in AI models has become increasingly critical. Many state-of-the-art machine learning models, particularly deep learning architectures, operate as "black boxes," making their decision-making processes difficult to interpret. Explainable AI (XAI) aims to enhance model transparency, ensuring that AI-driven decisions are understandable, trustworthy, and aligned with ethical and regulatory standards. This paper explores different approaches to AI interpretability, including intrinsic
16

Gunasekara, Sachini, and Mirka Saarela. "Explainable AI in Education: Techniques and Qualitative Assessment." Applied Sciences 15, no. 3 (2025): 1239. https://doi.org/10.3390/app15031239.

Full text
Abstract:
Many of the articles on AI in education compare the performance and fairness of different models, but few specifically focus on quantitatively analyzing their explainability. To bridge this gap, we analyzed key evaluation metrics for two machine learning models—ANN and DT—with a focus on their performance and explainability in predicting student outcomes using the OULAD. The methodology involved evaluating the DT, an intrinsically explainable model, against the more complex ANN, which requires post hoc explainability techniques. The results show that, although the feature-based and structured
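The contrast drawn in this entry, an intrinsically explainable decision tree versus a neural network that needs post hoc techniques, can be made concrete with a short scikit-learn sketch. The dataset and models below are illustrative stand-ins, not the OULAD setup used in the paper.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

X, y = load_breast_cancer(return_X_y=True)

# Intrinsically explainable model: the fitted tree itself is the explanation.
dt = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(dt))

# Black-box model: explanations must be produced post hoc,
# here via model-agnostic permutation feature importance.
ann = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0).fit(X, y)
result = permutation_importance(ann, X, y, n_repeats=10, random_state=0)
print(result.importances_mean)
```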
17

Srinivasu, Parvathaneni Naga, N. Sandhya, Rutvij H. Jhaveri, and Roshani Raut. "From Blackbox to Explainable AI in Healthcare: Existing Tools and Case Studies." Mobile Information Systems 2022 (June 13, 2022): 1–20. http://dx.doi.org/10.1155/2022/8167821.

Full text
Abstract:
Introduction. Artificial intelligence (AI) models have been employed to automate decision-making, from commerce to more critical fields directly affecting human lives, including healthcare. Although the vast majority of these proposed AI systems are considered black box models that lack explainability, there is an increasing trend of attempting to create medical explainable Artificial Intelligence (XAI) systems using approaches such as attention mechanisms and surrogate models. An AI system is said to be explainable if humans can tell how the system reached its decision. Various XAI-driven hea
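One approach named above, surrogate models, amounts to fitting an interpretable model to the predictions of a black box and reading the surrogate as an approximate explanation. A minimal global-surrogate sketch on synthetic data follows (generic, not tied to any of the surveyed healthcare systems).

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Global surrogate: train a shallow tree on the *black box's* predictions,
# then read the tree as an approximate explanation of the black box.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity to black box: {fidelity:.2%}")
print(export_text(surrogate))
```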
18

Methuku, Vijayalaxmi, Sharath Chandra Kondaparthy, and Direesh Reddy Aunugu. "Explainability and Transparency in Artificial Intelligence: Ethical Imperatives and Practical Challenges." International Journal of Electrical, Electronics and Computers 8, no. 3 (2023): 7–12. https://doi.org/10.22161/eec.84.2.

Full text
Abstract:
Artificial Intelligence (AI) is increasingly embedded in high-stakes domains such as healthcare, finance, and law enforcement, where opaque decision-making raises significant ethical concerns. Among the core challenges in AI ethics are explainability and transparency—key to fostering trust, accountability, and fairness in algorithmic systems. This review explores the ethical foundations of explainable AI (XAI), surveys leading technical approaches such as model-agnostic interpretability techniques and post-hoc explanation methods and examines their inherent limitations and trade-offs. A real-w
19

Abdelaal, Yasmin, Michaël Aupetit, Abdelkader Baggag, and Dena Al-Thani. "Exploring the Applications of Explainability in Wearable Data Analytics: Systematic Literature Review." Journal of Medical Internet Research 26 (December 24, 2024): e53863. https://doi.org/10.2196/53863.

Full text
Abstract:
Background Wearable technologies have become increasingly prominent in health care. However, intricate machine learning and deep learning algorithms often lead to the development of “black box” models, which lack transparency and comprehensibility for medical professionals and end users. In this context, the integration of explainable artificial intelligence (XAI) has emerged as a crucial solution. By providing insights into the inner workings of complex algorithms, XAI aims to foster trust and empower stakeholders to use wearable technologies responsibly. Objective This paper aims to review t
20

Zhang, Xiaoming, Xilin Hu, and Huiyong Wang. "Post Hoc Multi-Granularity Explanation for Multimodal Knowledge Graph Link Prediction." Electronics 14, no. 7 (2025): 1390. https://doi.org/10.3390/electronics14071390.

Full text
Abstract:
The multimodal knowledge graph link prediction model integrates entity features from multiple modalities, such as text and images, and uses these fused features to infer potential entity links in the knowledge graph. This process is highly dependent on the fitting and generalization capabilities of deep learning models, enabling the models to accurately capture complex semantic and relational patterns. However, it is this deep reliance on the fitting and generalization capabilities of deep learning models that leads to the black-box nature of the decision-making mechanisms and prediction bases
21

Larriva-Novo, Xavier, Luis Pérez Miguel, Victor A. Villagra, Manuel Álvarez-Campana, Carmen Sanchez-Zas, and Óscar Jover. "Post-Hoc Categorization Based on Explainable AI and Reinforcement Learning for Improved Intrusion Detection." Applied Sciences 14, no. 24 (2024): 11511. https://doi.org/10.3390/app142411511.

Full text
Abstract:
The massive usage of Internet services nowadays has led to a drastic increase in cyberattacks, including sophisticated techniques, so that Intrusion Detection Systems (IDSs) need to use AI technologies to enhance their effectiveness. However, this has resulted in a lack of interpretability and explainability in different applications that use AI predictions, making it hard for cybersecurity operators to understand why decisions were made. To address this, the concept of Explainable AI (XAI) has been introduced to make the AI’s decisions more understandable at both global and local levels. Thi
22

Kong, Weihao, Jianping Chen, and Pengfei Zhu. "Machine Learning-Based Uranium Prospectivity Mapping and Model Explainability Research." Minerals 14, no. 2 (2024): 128. http://dx.doi.org/10.3390/min14020128.

Full text
Abstract:
Sandstone-hosted uranium deposits are indeed significant sources of uranium resources globally. They are typically found in sedimentary basins and have been extensively explored and exploited in various countries. They play a significant role in meeting global uranium demand and are considered important resources for nuclear energy production. Erlian Basin, as one of the sedimentary basins in northern China, is known for its uranium mineralization hosted within sandstone formations. In this research, machine learning (ML) methodology was applied to mineral prospectivity mapping (MPM) of the me
23

Biryukov, D. N., and A. S. Dudkin. "Explainability and interpretability are important aspects in ensuring the security of decisions made by intelligent systems (review article)." Scientific and Technical Journal of Information Technologies, Mechanics and Optics 25, no. 3 (2025): 373–86. https://doi.org/10.17586/2226-1494-2025-25-3-373-386.

Full text
Abstract:
The issues of trust in decisions made (formed) by intelligent systems are becoming increasingly relevant. A systematic review of Explainable Artificial Intelligence (XAI) methods and tools aimed at bridging the gap between the complexity of neural networks and the need for interpretability of results for end users is presented. A theoretical analysis of the differences between explainability and interpretability in the context of artificial intelligence, as well as their role in ensuring the security of decisions made by intelligent systems, is carried out. It is shown that explainability implie
24

Cho, Hyeoncheol, Youngrock Oh, and Eunjoo Jeon. "SEEN: Sharpening Explanations for Graph Neural Networks Using Explanations From Neighborhoods." Advances in Artificial Intelligence and Machine Learning 3, no. 2 (2023): 1165–79. http://dx.doi.org/10.54364/aaiml.2023.1168.

Full text
Abstract:
Explaining the foundations for predictions obtained from graph neural networks (GNNs) is critical for credible use of GNN models for real-world problems. Owing to the rapid growth of GNN applications, recent progress in explaining predictions from GNNs, such as sensitivity analysis, perturbation methods, and attribution methods, showed great opportunities and possibilities for explaining GNN predictions. In this study, we propose a method to improve the explanation quality of node classification tasks that can be applied in a post hoc manner through aggregation of auxiliary explanations from i
25

Wyatt, Lucie S., Lennard M. van Karnenbeek, Mark Wijkhuizen, Freija Geldof, and Behdad Dashtbozorg. "Explainable Artificial Intelligence (XAI) for Oncological Ultrasound Image Analysis: A Systematic Review." Applied Sciences 14, no. 18 (2024): 8108. http://dx.doi.org/10.3390/app14188108.

Full text
Abstract:
This review provides an overview of explainable AI (XAI) methods for oncological ultrasound image analysis and compares their performance evaluations. A systematic search of Medline, Embase, and Scopus between 25 March and 14 April 2024 identified 17 studies describing 14 XAI methods, including visualization, semantics, example-based, and hybrid functions. These methods primarily provided specific, local, and post hoc explanations. Performance evaluations focused on AI model performance, with limited assessment of explainability impact. Standardized evaluations incorporating clinical end-users a
26

Ganguly, Rita, Dharmpal Singh, and Rajesh Bose. "The next frontier of explainable artificial intelligence (XAI) in healthcare services: A study on PIMA diabetes dataset." Scientific Temper 16, no. 05 (2025): 4165–70. https://doi.org/10.58414/scientifictemper.2025.16.5.01.

Full text
Abstract:
The integration of Artificial Intelligence (AI) in healthcare has revolutionized disease diagnosis and risk prediction. However, the "black-box" nature of AI models raises concerns about trust, interpretability, and regulatory compliance. Explainable AI (XAI) addresses these issues by enhancing transparency in AI-driven decisions. This study explores the role of XAI in diabetes prediction using the PIMA Diabetes Dataset, evaluating machine learning models—logistic regression, decision trees, random forests, and deep learning—alongside SHAP and LIME explainability techniques. Data pre-processin
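The SHAP and LIME workflow referred to above follows a standard pattern on tabular data. The sketch below shows the general shape of such a pipeline; a random forest stands in for the models compared in the study, and the PIMA file name, column names, and class labels are assumptions.

```python
import pandas as pd
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

# Hypothetical path and column names for the PIMA diabetes CSV.
df = pd.read_csv("pima_diabetes.csv")
X, y = df.drop(columns=["Outcome"]), df["Outcome"]

model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

# Post hoc global and local attributions with SHAP (tree-based explainer).
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Post hoc local explanation of a single patient with LIME.
lime_explainer = LimeTabularExplainer(
    X.values, feature_names=list(X.columns),
    class_names=["no diabetes", "diabetes"], mode="classification")
explanation = lime_explainer.explain_instance(
    X.values[0], model.predict_proba, num_features=5)
print(explanation.as_list())
```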
27

Nogueira, Caio, Luís Fernandes, João N. D. Fernandes, and Jaime S. Cardoso. "Explaining Bounding Boxes in Deep Object Detectors Using Post Hoc Methods for Autonomous Driving Systems." Sensors 24, no. 2 (2024): 516. http://dx.doi.org/10.3390/s24020516.

Full text
Abstract:
Deep learning has rapidly increased in popularity, leading to the development of perception solutions for autonomous driving. The latter field leverages techniques developed for computer vision in other domains for accomplishing perception tasks such as object detection. However, the black-box nature of deep neural models and the complexity of the autonomous driving context motivates the study of explainability in these models that perform perception tasks. Moreover, this work explores explainable AI techniques for the object detection task in the context of autonomous driving. An extensive an
28

Vinayak Pillai. "Enhancing the transparency of data and ml models using explainable AI (XAI)." World Journal of Advanced Engineering Technology and Sciences 13, no. 1 (2024): 397–406. http://dx.doi.org/10.30574/wjaets.2024.13.1.0428.

Full text
Abstract:
To this end, this paper focuses on the increasing demand for the explainability of Machine Learning (ML) models especially in environments where these models are employed to make critical decisions such as in healthcare, finance, and law. Although the typical ML models are considered opaque, XAI provides a set of ways and means to propose making these models more transparent and, thus, easier to explain. This paper describes and analyzes the model-agnostic approach, method of intrinsic explanation, post-hoc explanation, and visualization instruments and demonstrates the use of XAI in various f
29

Ranjith Gopalan, Dileesh Onniyil, Ganesh Viswanathan, and Gaurav Samdani. "Hybrid models combining explainable AI and traditional machine learning: A review of methods and applications." World Journal of Advanced Engineering Technology and Sciences 15, no. 2 (2025): 1388–402. https://doi.org/10.30574/wjaets.2025.15.2.0635.

Full text
Abstract:
The rapid advancements in artificial intelligence and machine learning have led to the development of highly sophisticated models capable of superhuman performance in a variety of tasks. However, the increasing complexity of these models has also resulted in them becoming "black boxes", where the internal decision-making process is opaque and difficult to interpret. This lack of transparency and explainability has become a significant barrier to the widespread adoption of these models, particularly in sensitive domains such as healthcare and finance. To address this challenge, the field of Exp
30

Grozdanovski, Ljupcho. "THE EXPLANATIONS ONE NEEDS FOR THE EXPLANATIONS ONE GIVES—THE NECESSITY OF EXPLAINABLE AI (XAI) FOR CAUSAL EXPLANATIONS OF AI-RELATED HARM: DECONSTRUCTING THE ‘REFUGE OF IGNORANCE’ IN THE EU’S AI LIABILITY REGULATION." International Journal of Law, Ethics, and Technology 2024, no. 2 (2024): 155–262. http://dx.doi.org/10.55574/tqcg5204.

Full text
Abstract:
This paper examines how explanations related to the adverse outcomes of Artificial Intelligence (AI) contribute to the development of causal evidentiary explanations in disputes surrounding AI liability. The study employs a dual approach: first, it analyzes the emerging global caselaw in the field of AI liability, seeking to discern prevailing trends regarding the evidence and explanations considered essential for the fair resolution of disputes. Against the backdrop of those trends, the paper evaluates the upcoming legislation in the European Union (EU) concerning AI liability, namely the AI
31

Sai Teja Boppiniti. "A SURVEY ON EXPLAINABLE AI: TECHNIQUES AND CHALLENGES." International Journal of Innovations in Engineering Research and Technology 7, no. 3 (2020): 57–66. http://dx.doi.org/10.26662/ijiert.v7i3.pp57-66.

Full text
Abstract:
Explainable Artificial Intelligence (XAI) is a rapidly evolving field aimed at making AI systems more interpretable and transparent to human users. As AI technologies become increasingly integrated into critical sectors such as healthcare, finance, and autonomous systems, the need for explanations behind AI decisions has grown significantly. This survey provides a comprehensive review of XAI techniques, categorizing them into post-hoc and intrinsic methods, and examines their application in various domains. Additionally, the paper explores the major challenges in achieving explainability, incl
32

Valentinos, Pariza, Pal Avik, Pawar Madhura, and Serra Faber Quim. "[Re] Reproducibility Study of "Label-Free Explainability for Unsupervised Models"." ReScience C 9, no. 2 (2023): #11. https://doi.org/10.5281/zenodo.8173674.

Full text
33

Trejo-Moncada, Denise M. "The eXplainable Artificial Intelligence Paradox in Law: Technological Limits and Legal Transparency." Journal of Artificial Intelligence and Computing Applications 2, no. 1 (2024): 19–27. https://doi.org/10.5281/zenodo.14692066.

Full text
Abstract:
The integration of Artificial Intelligence (AI) into legal systems offers transformative potential, promising enhanced efficiency and predictive accuracy. However, this progress also brings to the spotlight the explainability paradox: the unavoidable trade-off between the accuracy of complex Machine Learning (ML) and Deep Learning (DL) models and their lack of transparency. This paradox challenges foundational legal principles such as fairness, due process, and the right to explanation. While eXplainable AI (XAI) techniques have emerged to address this issue, their post-hoc nature, limited fid
34

Roscher, R., B. Bohn, M. F. Duarte, and J. Garcke. "EXPLAIN IT TO ME – FACING REMOTE SENSING CHALLENGES IN THE BIO- AND GEOSCIENCES WITH EXPLAINABLE MACHINE LEARNING." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences V-3-2020 (August 3, 2020): 817–24. http://dx.doi.org/10.5194/isprs-annals-v-3-2020-817-2020.

Full text
Abstract:
For some time now, machine learning methods have been indispensable in many application areas. Especially with the recent development of efficient neural networks, these methods are increasingly used in the sciences to obtain scientific outcomes from observational or simulated data. Besides a high accuracy, a desired goal is to learn explainable models. In order to reach this goal and obtain explanation, knowledge from the respective domain is necessary, which can be integrated into the model or applied post-hoc. We discuss explainable machine learning approaches which are used to ta
35

Kulaklıoğlu, Duru. "Explainable AI: Enhancing Interpretability of Machine Learning Models." Human Computer Interaction 8, no. 1 (2024): 91. https://doi.org/10.62802/z3pde490.

Full text
Abstract:
Explainable Artificial Intelligence (XAI) is emerging as a critical field to address the “black box” nature of many machine learning (ML) models. While these models achieve high predictive accuracy, their opacity undermines trust, adoption, and ethical compliance in critical domains such as healthcare, finance, and autonomous systems. This research explores methodologies and frameworks to enhance the interpretability of ML models, focusing on techniques like feature attribution, surrogate models, and counterfactual explanations. By balancing model complexity and transparency, this study highli
36

Apostolopoulos, Ioannis D., Ifigeneia Athanasoula, Mpesi Tzani, and Peter P. Groumpos. "An Explainable Deep Learning Framework for Detecting and Localising Smoke and Fire Incidents: Evaluation of Grad-CAM++ and LIME." Machine Learning and Knowledge Extraction 4, no. 4 (2022): 1124–35. http://dx.doi.org/10.3390/make4040057.

Full text
Abstract:
Climate change is expected to increase fire events and activity with multiple impacts on human lives. Large grids of forest and city monitoring devices can assist in incident detection, accelerating human intervention in extinguishing fires before they get out of control. Artificial Intelligence promises to automate the detection of fire-related incidents. This study enrols 53,585 fire/smoke and normal images and benchmarks seventeen state-of-the-art Convolutional Neural Networks for distinguishing between the two classes. The Xception network proves to be superior to the rest of the CNNs, obt
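Grad-CAM-style localisation, as evaluated in this study, weights a convolutional layer's activation maps by the pooled gradients of a class score. The sketch below implements plain Grad-CAM (a simpler relative of the Grad-CAM++ used in the paper) with PyTorch hooks; the ResNet-18 backbone, layer choice, and random input are stand-ins for the actual Xception model and fire/smoke images.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()      # untrained stand-in backbone
target_layer = model.layer4[-1]            # last convolutional block

acts, grads = {}, {}
def fwd_hook(module, inputs, output):
    acts["a"] = output
    output.register_hook(lambda g: grads.update({"g": g}))   # capture gradients
target_layer.register_forward_hook(fwd_hook)

x = torch.randn(1, 3, 224, 224, requires_grad=True)   # placeholder image batch
score = model(x)[0].max()                  # score of the top-scoring class
model.zero_grad()
score.backward()

# Grad-CAM: channel weights = spatially pooled gradients, then a weighted
# sum of the activation maps, ReLU, upsampling, and normalisation.
weights = grads["g"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * acts["a"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
```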
37

Chatterjee, Soumick, Arnab Das, Chirag Mandal, et al. "TorchEsegeta: Framework for Interpretability and Explainability of Image-Based Deep Learning Models." Applied Sciences 12, no. 4 (2022): 1834. http://dx.doi.org/10.3390/app12041834.

Full text
Abstract:
Clinicians are often very sceptical about applying automatic image processing approaches, especially deep learning-based methods, in practice. One main reason for this is the black-box nature of these approaches and the inherent problem of missing insights of the automatically derived decisions. In order to increase trust in these methods, this paper presents approaches that help to interpret and explain the results of deep learning algorithms by depicting the anatomical areas that influence the decision of the algorithm most. Moreover, this research presents a unified framework, TorchEsegeta,
38

Li, Lu, Jiale Liu, Xingyu Ji, Maojun Wang, and Zeyu Zhang. "Self-Explainable Graph Transformer for Link Sign Prediction." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 11 (2025): 12084–92. https://doi.org/10.1609/aaai.v39i11.33316.

Full text
Abstract:
Signed Graph Neural Networks (SGNNs) have been shown to be effective in analyzing complex patterns in real-world situations where positive and negative links coexist. However, SGNN models suffer from poor explainability, which limits their adoption in critical scenarios that require understanding the rationale behind predictions. To the best of our knowledge, there is currently no research work on the explainability of the SGNN models. Our goal is to address the explainability of decision-making for the downstream task of link sign prediction specific to signed graph neural networks. Since pos
39

Rohr, Maurice, Benedikt Müller, Sebastian Dill, Gökhan Güney, and Christoph Hoog Antink. "Multiple instance learning framework can facilitate explainability in murmur detection." PLOS Digital Health 3, no. 3 (2024): e0000461. http://dx.doi.org/10.1371/journal.pdig.0000461.

Full text
Abstract:
Objective Cardiovascular diseases (CVDs) account for a high fatality rate worldwide. Heart murmurs can be detected from phonocardiograms (PCGs) and may indicate CVDs. Still, they are often overlooked as their detection and correct clinical interpretation require expert skills. In this work, we aim to predict the presence of murmurs and clinical outcomes from multiple PCG recordings employing an explainable multitask model. Approach Our approach consists of a two-stage multitask model. In the first stage, we predict the murmur presence in single PCGs using a multiple instance learning (MIL) fra
40

Antoniadi, Anna Markella, Miriam Galvin, Mark Heverin, Lan Wei, Orla Hardiman, and Catherine Mooney. "A Clinical Decision Support System for the Prediction of Quality of Life in ALS." Journal of Personalized Medicine 12, no. 3 (2022): 435. http://dx.doi.org/10.3390/jpm12030435.

Full text
Abstract:
Amyotrophic Lateral Sclerosis (ALS), also known as Motor Neuron Disease (MND), is a rare and fatal neurodegenerative disease. As ALS is currently incurable, the aim of the treatment is mainly to alleviate symptoms and improve quality of life (QoL). We designed a prototype Clinical Decision Support System (CDSS) to alert clinicians when a person with ALS is experiencing low QoL in order to inform and personalise the support they receive. Explainability is important for the success of a CDSS and its acceptance by healthcare professionals. The aim of this work is to announce our prototype (C-ALS),
41

Jishnu, Setia. "Explainable AI: Methods and Applications." Explainable AI: Methods and Applications 8, no. 10 (2023): 5. https://doi.org/10.5281/zenodo.10021461.

Full text
Abstract:
Explainable Artificial Intelligence (XAI) has emerged as a critical area of research, ensuring that AI systems are transparent, interpretable, and accountable. This paper provides a comprehensive overview of various methods and applications of Explainable AI. We delve into the importance of interpretability in AI models, explore different techniques for making complex AI models understandable, and discuss real-world applications where explainability is crucial. Through this paper, I aim to shed light on the advancements in the field of XAI and its potential to bridge the gap between AI's predic
42

Sudars, Kaspars, Ivars Namatēvs, and Kaspars Ozols. "Improving Performance of the PRYSTINE Traffic Sign Classification by Using a Perturbation-Based Explainability Approach." Journal of Imaging 8, no. 2 (2022): 30. http://dx.doi.org/10.3390/jimaging8020030.

Full text
Abstract:
Model understanding is critical in many domains, particularly those involved in high-stakes decisions, e.g., medicine, criminal justice, and autonomous driving. Explainable AI (XAI) methods are essential for working with black-box models such as convolutional neural networks. This paper evaluates the traffic sign classifier of the Deep Neural Network (DNN) from the Programmable Systems for Intelligence in Automobiles (PRYSTINE) project for explainability. The results of explanations were further used for the CNN PRYSTINE classifier vague kernels’ compression. Then, the precision of the classif
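The perturbation-based approach evaluated above typically occludes parts of the input and records how much the class score drops; regions whose occlusion hurts the score most are those the classifier relies on. A generic occlusion-sensitivity sketch follows (an illustration, not the PRYSTINE pipeline; predict_fn, the patch size, and the baseline value are assumptions).

```python
import numpy as np

def occlusion_sensitivity(predict_fn, image, target_class, patch=8, baseline=0.0):
    """Slide an occluding patch over `image` (H, W, C) and record the drop in the
    target-class probability; larger drops mark more important regions."""
    h, w = image.shape[:2]
    base_score = predict_fn(image[np.newaxis])[0, target_class]
    heatmap = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch, :] = baseline
            score = predict_fn(occluded[np.newaxis])[0, target_class]
            heatmap[i // patch, j // patch] = base_score - score
    return heatmap
```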
43

Moustakidis, Serafeim, Christos Kokkotis, Dimitrios Tsaopoulos, et al. "Identifying Country-Level Risk Factors for the Spread of COVID-19 in Europe Using Machine Learning." Viruses 14, no. 3 (2022): 625. http://dx.doi.org/10.3390/v14030625.

Full text
Abstract:
Coronavirus disease 2019 (COVID-19) has resulted in approximately 5 million deaths around the world with unprecedented consequences in people’s daily routines and in the global economy. Despite vast increases in time and money spent on COVID-19-related research, there is still limited information about the factors at the country level that affected COVID-19 transmission and fatality in the EU. The paper focuses on the identification of these risk factors using a machine learning (ML) predictive pipeline and an associated explainability analysis. To achieve this, a hybrid dataset was created employ
44

Djoumessi, Kerol, Ziwei Huang, Laura Kühlewein, et al. "An inherently interpretable AI model improves screening speed and accuracy for early diabetic retinopathy." PLOS Digital Health 4, no. 5 (2025): e0000831. https://doi.org/10.1371/journal.pdig.0000831.

Full text
Abstract:
Diabetic retinopathy (DR) is a frequent complication of diabetes, affecting millions worldwide. Screening for this disease based on fundus images has been one of the first successful use cases for modern artificial intelligence in medicine. However, current state-of-the-art systems typically use black-box models to make referral decisions, requiring post-hoc methods for AI-human interaction and clinical decision support. We developed and evaluated an inherently interpretable deep learning model, which explicitly models the local evidence of DR as part of its network architecture, for clinical
45

Hong, Jung-Ho, Woo-Jeoung Nam, Kyu-Sung Jeon, and Seong-Whan Lee. "Towards Better Visualizing the Decision Basis of Networks via Unfold and Conquer Attribution Guidance." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 7 (2023): 7884–92. http://dx.doi.org/10.1609/aaai.v37i7.25954.

Full text
Abstract:
Revealing the transparency of Deep Neural Networks (DNNs) has been widely studied to describe the decision mechanisms of network inner structures. In this paper, we propose a novel post-hoc framework, Unfold and Conquer Attribution Guidance (UCAG), which enhances the explainability of the network decision by spatially scrutinizing the input features with respect to the model confidence. Addressing the phenomenon of missing detailed descriptions, UCAG sequentially complies with the confidence of slices of the image, leading to providing an abundant and clear interpretation. Therefore, it is pos
46

Huang, Rundong, Farhad Shirani, and Dongsheng Luo. "Factorized Explainer for Graph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 11 (2024): 12626–34. http://dx.doi.org/10.1609/aaai.v38i11.29157.

Full text
Abstract:
Graph Neural Networks (GNNs) have received increasing attention due to their ability to learn from graph-structured data. To open the black-box of these deep learning models, post-hoc instance-level explanation methods have been proposed to understand GNN predictions. These methods seek to discover substructures that explain the prediction behavior of a trained GNN. In this paper, we show analytically that for a large class of explanation tasks, conventional approaches, which are based on the principle of graph information bottleneck (GIB), admit trivial solutions that do not align with the no
47

Naresh Vurukonda. "A Novel Framework for Inherently Interpretable Deep Neural Networks Using Attention-Based Feature Attribution in High-Dimensional Tabular Data." Journal of Information Systems Engineering and Management 10, no. 50s (2025): 599–604. https://doi.org/10.52783/jisem.v10i50s.10290.

Full text
Abstract:
Deep learning models for tabular data often lack interpretability, posing challenges in domains like healthcare and finance where trust is critical. We propose an attention-augmented neural network architecture that inherently highlights the most informative features, thus providing intrinsic explanations for its predictions. Drawing inspiration from TabNet and Transformer-based models, our model applies multi-head feature-wise attention to automatically weight each feature’s contribution. We incorporate an attention-weight regularization scheme (e.g. sparsemax) to encourage focused attributio
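The architecture described above attaches feature-wise attention weights so that attributions come from the model itself rather than from a post hoc method. A much simplified single-head sketch in PyTorch is given below (softmax in place of the sparsemax regularisation mentioned in the abstract; layer sizes and names are assumptions).

```python
import torch
import torch.nn as nn

class FeatureAttentionNet(nn.Module):
    """Tabular classifier whose per-feature attention weights double as
    intrinsic feature attributions (simplified, single head)."""
    def __init__(self, n_features, n_classes, hidden=64):
        super().__init__()
        self.attn = nn.Linear(n_features, n_features)   # one score per feature
        self.classifier = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(), nn.Linear(hidden, n_classes))

    def forward(self, x):
        weights = torch.softmax(self.attn(x), dim=-1)   # attention over features
        logits = self.classifier(weights * x)           # reweighted features
        return logits, weights                          # weights act as attributions

model = FeatureAttentionNet(n_features=20, n_classes=2)
logits, attributions = model(torch.randn(4, 20))
print(attributions.shape)   # (4, 20): one attribution vector per sample
```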
48

Naresh Vurukonda. "A Novel Framework for Inherently Interpretable Deep Neural Networks Using Attention-Based Feature Attribution in High-Dimensional Tabular Data." Journal of Information Systems Engineering and Management 10, no. 51s (2025): 1076–81. https://doi.org/10.52783/jisem.v10i51s.10626.

Full text
Abstract:
Deep learning models for tabular data often lack interpretability, posing challenges in domains like healthcare and finance where trust is critical. We propose an attention-augmented neural network architecture that inherently highlights the most informative features, thus providing intrinsic explanations for its predictions. Drawing inspiration from TabNet and Transformer-based models, our model applies multi-head feature-wise attention to automatically weight each feature’s contribution. We incorporate an attention-weight regularization scheme (e.g. sparsemax) to encourage focused attributio
49

Singh, Rajeev Kumar, Rohan Gorantla, Sai Giridhar Rao Allada, and Pratap Narra. "SkiNet: A deep learning framework for skin lesion diagnosis with uncertainty estimation and explainability." PLOS ONE 17, no. 10 (2022): e0276836. http://dx.doi.org/10.1371/journal.pone.0276836.

Full text
Abstract:
Skin cancer is considered to be the most common human malignancy. Around 5 million new cases of skin cancer are recorded in the United States annually. Early identification and evaluation of skin lesions are of great clinical significance, but the disproportionate dermatologist-patient ratio poses a significant problem in most developing nations. Therefore, a novel deep architecture, named SkiNet, is proposed to provide a faster screening solution and assistance to newly trained physicians in the process of clinical diagnosis of skin cancer. The main motive behind SkiNet’s design and developme
50

Noriega, Jomark, Luis Rivera, Jorge Castañeda, and José Herrera. "From Crisis to Algorithm: Credit Delinquency Prediction in Peru Under Critical External Factors Using Machine Learning." Data 10, no. 5 (2025): 63. https://doi.org/10.3390/data10050063.

Full text
Abstract:
Robust credit risk prediction in emerging economies increasingly demands the integration of external factors (EFs) beyond borrowers’ control. This study introduces a scenario-based methodology to incorporate EF—namely COVID-19 severity (mortality and confirmed cases), climate anomalies (temperature deviations, weather-induced road blockages), and social unrest—into machine learning (ML) models for credit delinquency prediction. The approach is grounded in a CRISP-DM framework, combining stationarity testing (Dickey–Fuller), causality analysis (Granger), and post hoc explainability (SHAP, LIME)
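The statistical steps named in this abstract have standard library counterparts, and the sketch below strings two of them together on synthetic series purely as an illustration; the column names, lag order, and data are assumptions, and the SHAP/LIME step would mirror the tabular example shown earlier in this list.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller, grangercausalitytests

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "external_factor": rng.standard_normal(200).cumsum(),   # e.g. a severity index
    "delinquency_rate": rng.standard_normal(200).cumsum(),
})

# Stationarity check (augmented Dickey-Fuller); difference if non-stationary.
for col in df.columns:
    stat, pvalue, *_ = adfuller(df[col])
    print(col, "ADF p-value:", round(pvalue, 3))
df_diff = df.diff().dropna()

# Granger causality: does the external factor help predict delinquency?
grangercausalitytests(df_diff[["delinquency_rate", "external_factor"]], maxlag=4)
```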