Journal articles on the topic "Post-hoc interpretability"

Consult the top 50 journal articles for your research on the topic "Post-hoc interpretability".

Explore journal articles across a wide variety of disciplines and organize your bibliography correctly.

1

Feng, Jiangfan, Yukun Liang, and Lin Li. "Anomaly Detection in Videos Using Two-Stream Autoencoder with Post Hoc Interpretability." Computational Intelligence and Neuroscience 2021 (July 26, 2021): 1–15. http://dx.doi.org/10.1155/2021/7367870.

Abstract:
The growing interest in deep learning approaches to video surveillance raises concerns about the accuracy and efficiency of neural networks. However, fast and reliable detection of abnormal events is still a challenging work. Here, we introduce a two-stream approach that offers an autoencoder-based structure for fast and efficient detection to facilitate anomaly detection from surveillance video without labeled abnormal events. Furthermore, we present post hoc interpretability of feature map visualization to show the process of feature learning, revealing uncertain and ambiguous decision bound
2

Sinhamahapatra, Poulami, Suprosanna Shit, Anjany Sekuboyina, et al. "Enhancing Interpretability of Vertebrae Fracture Grading using Human-interpretable Prototypes." Machine Learning for Biomedical Imaging 2, July 2024 (2024): 977–1002. http://dx.doi.org/10.59275/j.melba.2024-258b.

Abstract:
Vertebral fracture grading classifies the severity of vertebral fractures, which is a challenging task in medical imaging and has recently attracted Deep Learning (DL) models. Only a few works attempted to make such models human-interpretable despite the need for transparency and trustworthiness in critical use cases like DL-assisted medical diagnosis. Moreover, such models either rely on post-hoc methods or additional annotations. In this work, we propose a novel interpretable-by-design method, ProtoVerse, to find relevant sub-parts of vertebral fractures (prototypes) that reliably explain th
3

Sarma Borah, Proyash Paban, Devraj Kashyap, Ruhini Aktar Laskar, and Ankur Jyoti Sarmah. "A Comprehensive Study on Explainable AI Using YOLO and Post Hoc Method on Medical Diagnosis." Journal of Physics: Conference Series 2919, no. 1 (2024): 012045. https://doi.org/10.1088/1742-6596/2919/1/012045.

Abstract:
Medical imaging plays a pivotal role in disease detection and intervention. The black-box nature of deep learning models, such as YOLOv8, creates challenges in interpreting their decisions. This paper presents a toolset to enhance interpretability in AI-based diagnostics by integrating Explainable AI (XAI) techniques with YOLOv8. This paper explores implementation of post hoc methods, including Grad-CAM and Eigen CAM, to assist end users in understanding the decision making of the model. This comprehensive evaluation utilises CT-Datasets, demonstrating the efficacy of YOLOv8 for objec
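The post hoc CAM methods this entry integrates with YOLOv8 (Grad-CAM, Eigen CAM) follow a common recipe: take the activations of a late convolutional layer, weight them by gradients of the target score, and upsample the result into a heatmap. The sketch below is a minimal, generic Grad-CAM over a torchvision ResNet-18; the model, target layer, and random input are illustrative assumptions, not the paper's YOLOv8 toolset.

```python
# Minimal Grad-CAM sketch on a generic torchvision ResNet-18 (illustrative;
# not the YOLOv8 toolset described in the cited paper).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None).eval()
target_layer = model.layer4[-1]            # last conv block; an illustrative choice
activations, gradients = {}, {}

def save_activation(module, inputs, output):
    # Keep the layer's activations and hook the gradient flowing back into them.
    activations["a"] = output
    output.register_hook(lambda grad: gradients.update(g=grad))

target_layer.register_forward_hook(save_activation)

x = torch.randn(1, 3, 224, 224)            # placeholder image tensor
scores = model(x)
scores[0, scores.argmax()].backward()      # gradient of the top-scoring class

# Grad-CAM: weight activation maps by channel-averaged gradients, ReLU, upsample.
weights = gradients["g"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["a"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # normalize to [0, 1]
print(cam.shape)                           # torch.Size([1, 1, 224, 224]) heatmap
```

With a real image and pretrained weights, the resulting heatmap is overlaid on the input to show which regions drove the prediction, which is the visual output Grad-CAM-style tools expose to end users.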
4

Zhang, Zaixi, Qi Liu, Hao Wang, Chengqiang Lu, and Cheekong Lee. "ProtGNN: Towards Self-Explaining Graph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (2022): 9127–35. http://dx.doi.org/10.1609/aaai.v36i8.20898.

Abstract:
Despite the recent progress in Graph Neural Networks (GNNs), it remains challenging to explain the predictions made by GNNs. Existing explanation methods mainly focus on post-hoc explanations where another explanatory model is employed to provide explanations for a trained GNN. The fact that post-hoc methods fail to reveal the original reasoning process of GNNs raises the need of building GNNs with built-in interpretability. In this work, we propose Prototype Graph Neural Network (ProtGNN), which combines prototype learning with GNNs and provides a new perspective on the explanations of GNNs.
5

Alfano, Gianvincenzo, Sergio Greco, Domenico Mandaglio, Francesco Parisi, Reza Shahbazian, and Irina Trubitsyna. "Even-if Explanations: Formal Foundations, Priorities and Complexity." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 15 (2025): 15347–55. https://doi.org/10.1609/aaai.v39i15.33684.

Abstract:
Explainable AI has received significant attention in recent years. Machine learning models often operate as black boxes, lacking explainability and transparency while supporting decision-making processes. Local post-hoc explainability queries attempt to answer why individual inputs are classified in a certain way by a given model. While there has been important work on counterfactual explanations, less attention has been devoted to semifactual ones. In this paper, we focus on local post-hoc explainability queries within the semifactual `even-if' thinking and their computational complexity amon
6

Xu, Qian, Wenzhao Xie, Bolin Liao, et al. "Interpretability of Clinical Decision Support Systems Based on Artificial Intelligence from Technological and Medical Perspective: A Systematic Review." Journal of Healthcare Engineering 2023 (February 3, 2023): 1–13. http://dx.doi.org/10.1155/2023/9919269.

Abstract:
Background. Artificial intelligence (AI) has developed rapidly, and its application extends to clinical decision support system (CDSS) for improving healthcare quality. However, the interpretability of AI-driven CDSS poses significant challenges to widespread application. Objective. This study is a review of the knowledge-based and data-based CDSS literature regarding interpretability in health care. It highlights the relevance of interpretability for CDSS and the area for improvement from technological and medical perspectives. Methods. A systematic search was conducted on the interpretabilit
7

Gill, Navdeep, Patrick Hall, Kim Montgomery, and Nicholas Schmidt. "A Responsible Machine Learning Workflow with Focus on Interpretable Models, Post-hoc Explanation, and Discrimination Testing." Information 11, no. 3 (2020): 137. http://dx.doi.org/10.3390/info11030137.

Abstract:
This manuscript outlines a viable approach for training and evaluating machine learning systems for high-stakes, human-centered, or regulated applications using common Python programming tools. The accuracy and intrinsic interpretability of two types of constrained models, monotonic gradient boosting machines and explainable neural networks, a deep learning architecture well-suited for structured data, are assessed on simulated data and publicly available mortgage data. For maximum transparency and the potential generation of personalized adverse action notices, the constrained models are anal
8

Kulaklıoğlu, Duru. "Explainable AI: Enhancing Interpretability of Machine Learning Models." Human Computer Interaction 8, no. 1 (2024): 91. https://doi.org/10.62802/z3pde490.

Abstract:
Explainable Artificial Intelligence (XAI) is emerging as a critical field to address the “black box” nature of many machine learning (ML) models. While these models achieve high predictive accuracy, their opacity undermines trust, adoption, and ethical compliance in critical domains such as healthcare, finance, and autonomous systems. This research explores methodologies and frameworks to enhance the interpretability of ML models, focusing on techniques like feature attribution, surrogate models, and counterfactual explanations. By balancing model complexity and transparency, this study highli
9

Acun, Cagla, and Olfa Nasraoui. "Pre Hoc and Co Hoc Explainability: Frameworks for Integrating Interpretability into Machine Learning Training for Enhanced Transparency and Performance." Applied Sciences 15, no. 13 (2025): 7544. https://doi.org/10.3390/app15137544.

Abstract:
Post hoc explanations for black-box machine learning models have been criticized for potentially inaccurate surrogate models and computational burden at prediction time. We propose pre hoc and co hoc explainability frameworks that integrate interpretability directly into the training process through an inherently interpretable white-box model. Pre hoc uses the white-box model to regularize the black-box model, while co hoc jointly optimizes both models with a shared loss function. We extend these frameworks to generate instance-specific explanations using Jensen–Shannon divergence as a regular
10

Yousufi Aqmal, Shahid, and Fermle Erdely S. "Enhancing Nonparametric Tests: Insights for Computational Intelligence and Data Mining." Researcher Academy Innovation Data Analysis 1, no. 3 (2024): 214–26. https://doi.org/10.69725/raida.v1i3.168.

Abstract:
Objective: With the aim of improving monitoring reliability and interpretability of CI and DM experimental statistical tests, we evaluate the performance of cutting-edge nonparametric tests and post hoc procedures. Methods: A Friedman Aligned Ranks test, Quade test, and multiple post hoc corrections (Bonferroni-Dunn and Holm) were used to comparatively analyze data. These approaches were applied to algorithm performance metrics with varied datasets to evaluate their capability to detect meaningful differences and control Type I errors. Results: Advanced nonparametric methods consistently outperfor
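For readers who want to reproduce the kind of analysis this entry evaluates, the sketch below runs an omnibus Friedman test over per-dataset scores of several algorithms and then Holm-corrects pairwise Wilcoxon signed-rank comparisons. The synthetic scores and the plain Friedman test are illustrative assumptions; the cited study uses the Friedman Aligned Ranks and Quade variants with Bonferroni-Dunn and Holm procedures.

```python
# Omnibus Friedman test plus Holm-corrected pairwise comparisons.
# Illustrative synthetic scores only; not the cited study's data or its exact
# Friedman Aligned Ranks / Quade test variants.
from itertools import combinations
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
# Rows = 12 benchmark datasets, columns = 3 hypothetical algorithms A, B, C.
scores = {
    "A": rng.normal(0.80, 0.03, 12),
    "B": rng.normal(0.82, 0.03, 12),
    "C": rng.normal(0.78, 0.03, 12),
}

stat, p_omnibus = friedmanchisquare(*scores.values())
print(f"Friedman chi2={stat:.3f}, p={p_omnibus:.4f}")

# Post hoc step: pairwise Wilcoxon signed-rank tests with Holm correction.
pairs = list(combinations(scores, 2))
raw_p = [wilcoxon(scores[a], scores[b]).pvalue for a, b in pairs]
reject, adj_p, _, _ = multipletests(raw_p, alpha=0.05, method="holm")
for (a, b), p, r in zip(pairs, adj_p, reject):
    print(f"{a} vs {b}: adjusted p={p:.4f}, significant={r}")
```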
11

Arjunan, Gopalakrishnan. "Implementing Explainable AI in Healthcare: Techniques for Interpretable Machine Learning Models in Clinical Decision-Making." International Journal of Scientific Research and Management (IJSRM) 9, no. 05 (2021): 597–603. http://dx.doi.org/10.18535/ijsrm/v9i05.ec03.

Abstract:
The integration of explainable artificial intelligence (XAI) in healthcare is revolutionizing clinical decision-making by providing clarity around complex machine learning (ML) models. As AI becomes increasingly critical in medical fields—ranging from diagnostics to treatment personalization—the interpretability of these models is crucial for fostering trust, transparency, and accountability among healthcare providers and patients. Traditional "black-box" models, such as deep neural networks, often achieve high accuracy but lack transparency, creating challenges in highly regulated, high-stake
12

Khiem, Phan Xuan, Zurida B. Batchaeva, and Liana K. Katchieva. "HYBRIDIZATION OF MACHINE LEARNING AND STATISTICS METHODS TO IMPROVE MODEL INTERPRETABILITY." EKONOMIKA I UPRAVLENIE: PROBLEMY, RESHENIYA 12/15, no. 153 (2024): 214–20. https://doi.org/10.36871/ek.up.p.r.2024.12.15.025.

Abstract:
The article discusses the hybridization of machine learning and statistics methods to improve the interpretability of models. Interpretability is a key factor for decision making in areas such as medicine, finance, and social sciences, where algorithm transparency is critical. The proposed approach combines the accuracy and flexibility of machine learning with the analytical capabilities of statistical methods. Integration methods are discussed, including the use of confidence intervals, Bayesian methods, principal component analysis, and post-hoc interpretation approaches such as SHAP. The re
13

Marconato, Emanuele, Andrea Passerini, and Stefano Teso. "Interpretability Is in the Mind of the Beholder: A Causal Framework for Human-Interpretable Representation Learning." Entropy 25, no. 12 (2023): 1574. http://dx.doi.org/10.3390/e25121574.

Abstract:
Research on Explainable Artificial Intelligence has recently started exploring the idea of producing explanations that, rather than being expressed in terms of low-level features, are encoded in terms of interpretable concepts learned from data. How to reliably acquire such concepts is, however, still fundamentally unclear. An agreed-upon notion of concept interpretability is missing, with the result that concepts used by both post hoc explainers and concept-based neural networks are acquired through a variety of mutually incompatible strategies. Critically, most of these neglect the human sid
14

Zhang, Xiaopu, Wubing Miao, and Guodong Liu. "Explainable Data Mining Framework of Identifying Root Causes of Rocket Engine Anomalies Based on Knowledge and Physics-Informed Feature Selection." Machines 13, no. 8 (2025): 640. https://doi.org/10.3390/machines13080640.

Abstract:
Liquid rocket engines occasionally experience abnormal phenomena with unclear mechanisms, causing difficulty in design improvements. To address the above issue, a data mining method that combines ante hoc explainability, post hoc explainability, and prediction accuracy is proposed. For ante hoc explainability, a feature selection method driven by data, models, and domain knowledge is established. Global sensitivity analysis of a physical model combined with expert knowledge and data correlation is utilized to establish the correlations between different types of parameters. Then a two-stage op
15

Biryukov, D. N., and A. S. Dudkin. "Explainability and interpretability are important aspects in ensuring the security of decisions made by intelligent systems (review article)." Scientific and Technical Journal of Information Technologies, Mechanics and Optics 25, no. 3 (2025): 373–86. https://doi.org/10.17586/2226-1494-2025-25-3-373-386.

Abstract:
The issues of trust in decisions made (formed) by intelligent systems are becoming more and more relevant. A systematic review of Explainable Artificial Intelligence (XAI) methods and tools aimed at bridging the gap between the complexity of neural networks and the need for interpretability of results for end users is presented. A theoretical analysis of the differences between explainability and interpretability in the context of artificial intelligence as well as their role in ensuring the security of decisions made by intelligent systems is carried out. It is shown that explainability implie
16

Gaurav, Kashyap. "Explainable AI (XAI): Methods and Techniques to Make Deep Learning Models More Interpretable and Their Real-World Implications." International Journal of Innovative Research in Engineering & Multidisciplinary Physical Sciences 11, no. 4 (2023): 1–7. https://doi.org/10.5281/zenodo.14382747.

Abstract:
The goal of the developing field of explainable artificial intelligence (XAI) is to make complex AI models, especially deep learning (DL) models, which are frequently criticized for being "black boxes" more interpretable. Understanding how deep learning models make decisions is becoming crucial for accountability, fairness, and trust as deep learning is used more and more in various industries. This paper offers a thorough analysis of the strategies and tactics used to improve the interpretability of deep learning models, including hybrid approaches, post-hoc explanations, and model-specific s
17

Larriva-Novo, Xavier, Luis Pérez Miguel, Victor A. Villagra, Manuel Álvarez-Campana, Carmen Sanchez-Zas, and Óscar Jover. "Post-Hoc Categorization Based on Explainable AI and Reinforcement Learning for Improved Intrusion Detection." Applied Sciences 14, no. 24 (2024): 11511. https://doi.org/10.3390/app142411511.

Abstract:
The massive usage of Internet services nowadays has led to a drastic increase in cyberattacks, including sophisticated techniques, so that Intrusion Detection Systems (IDSs) need to use AI technologies to enhance their effectiveness. However, this has resulted in a lack of interpretability and explainability from different applications that use AI predictions, making it hard for cybersecurity operators to understand why decisions were made. To address this, the concept of Explainable AI (XAI) has been introduced to make the AI’s decisions more understandable at both global and local levels. Thi
18

Degtiarova, Ganna, Fran Mikulicic, Jan Vontobel, et al. "Post-hoc motion correction for coronary computed tomography angiography without additional radiation dose - Improved image quality and interpretability for “free”." Imaging 14, no. 2 (2022): 82–88. http://dx.doi.org/10.1556/1647.2022.00060.

Abstract:
Objective: To evaluate the impact of a motion-correction (MC) algorithm, applicable post-hoc and not dependent on extended padding, on the image quality and interpretability of coronary computed tomography angiography (CCTA). Methods: Ninety consecutive patients undergoing CCTA on a latest-generation 256-slice CT device were prospectively included. CCTA was performed with prospective electrocardiogram-triggering and the shortest possible acquisition window (without padding) at 75% of the R-R interval. All datasets were reconstructed without and with MC of the coronaries. The latter exploits
19

Mohorčič, Domen, and David Ocepek. "[Re] Hierarchical Shrinkage: Improving the Accuracy and Interpretability of Tree-Based Methods." ReScience C 9, no. 2 (2023): #19. https://doi.org/10.5281/zenodo.8173696.

20

Jishnu, Setia. "Explainable AI: Methods and Applications." Explainable AI: Methods and Applications 8, no. 10 (2023): 5. https://doi.org/10.5281/zenodo.10021461.

Abstract:
Explainable Artificial Intelligence (XAI) has emerged as a critical area of research, ensuring that AI systems are transparent, interpretable, and accountable. This paper provides a comprehensive overview of various methods and applications of Explainable AI. We delve into the importance of interpretability in AI models, explore different techniques for making complex AI models understandable, and discuss real-world applications where explainability is crucial. Through this paper, I aim to shed light on the advancements in the field of XAI and its potential to bridge the gap between AI's predic
21

Lao, Danning, Qi Liu, Jiazi Bu, Junchi Yan, and Wei Shen. "ViTree: Single-Path Neural Tree for Step-Wise Interpretable Fine-Grained Visual Categorization." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (2024): 2866–73. http://dx.doi.org/10.1609/aaai.v38i3.28067.

Abstract:
As computer vision continues to advance and finds widespread applications across various domains, the need for interpretability in deep learning models becomes paramount. Existing methods often resort to post-hoc techniques or prototypes to explain the decision-making process, which can be indirect and lack intrinsic illustration. In this research, we introduce ViTree, a novel approach for fine-grained visual categorization that combines the popular vision transformer as a feature extraction backbone with neural decision trees. By traversing the tree paths, ViTree effectively selects patches f
22

Zhou, Pei-Yuan, Amane Takeuchi, Fernando Martinez-Lopez, Malikeh Ehghaghi, Andrew K. C. Wong, and En-Shiun Annie Lee. "Benchmarking Interpretability in Healthcare Using Pattern Discovery and Disentanglement." Bioengineering 12, no. 3 (2025): 308. https://doi.org/10.3390/bioengineering12030308.

Abstract:
The healthcare industry seeks to integrate AI into clinical applications, yet understanding AI decision making remains a challenge for healthcare practitioners as these systems often function as black boxes. Our work benchmarks the Pattern Discovery and Disentanglement (PDD) system’s unsupervised learning algorithm, which provides interpretable outputs and clustering results from clinical notes to aid decision making. Using the MIMIC-IV dataset, we process free-text clinical notes and ICD-9 codes with Term Frequency-Inverse Document Frequency and Topic Modeling. The PDD algorithm discretizes n
23

Ganguly, Rita, Dharmpal Singh, and Rajesh Bose. "The next frontier of explainable artificial intelligence (XAI) in healthcare services: A study on PIMA diabetes dataset." Scientific Temper 16, no. 05 (2025): 4165–70. https://doi.org/10.58414/scientifictemper.2025.16.5.01.

Abstract:
The integration of Artificial Intelligence (AI) in healthcare has revolutionized disease diagnosis and risk prediction. However, the "black-box" nature of AI models raises concerns about trust, interpretability, and regulatory compliance. Explainable AI (XAI) addresses these issues by enhancing transparency in AI-driven decisions. This study explores the role of XAI in diabetes prediction using the PIMA Diabetes Dataset, evaluating machine learning models—logistic regression, decision trees, random forests, and deep learning—alongside SHAP and LIME explainability techniques. Data pre-processin
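The SHAP side of the pipeline described in this entry can be sketched as follows with a tree ensemble and synthetic stand-in features; the feature names, data, and random forest are placeholders rather than the actual PIMA Diabetes Dataset or the paper's tuned models.

```python
# Post hoc SHAP attribution for a tree ensemble on PIMA-style tabular features.
# Synthetic stand-in data; the cited study uses the real PIMA Diabetes Dataset.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
feature_names = ["pregnancies", "glucose", "blood_pressure", "bmi", "age"]  # illustrative subset
X = rng.normal(size=(500, len(feature_names)))
y = (X[:, 1] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)       # exact SHAP values for tree ensembles
shap_values = explainer.shap_values(X)
# Older shap versions return a per-class list, newer ones a (n, features, classes)
# array; take the positive-class slice either way.
pos = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]
mean_abs = np.abs(pos).mean(axis=0)         # global importance per feature
for name, value in sorted(zip(feature_names, mean_abs), key=lambda t: -t[1]):
    print(f"{name:15s} mean|SHAP| = {value:.4f}")
```

Averaging absolute SHAP values over the dataset gives the global ranking of risk factors that studies of this kind typically report alongside per-patient explanations.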
24

Naresh Vurukonda. "A Novel Framework for Inherently Interpretable Deep Neural Networks Using Attention-Based Feature Attribution in High-Dimensional Tabular Data." Journal of Information Systems Engineering and Management 10, no. 50s (2025): 599–604. https://doi.org/10.52783/jisem.v10i50s.10290.

Abstract:
Deep learning models for tabular data often lack interpretability, posing challenges in domains like healthcare and finance where trust is critical. We propose an attention-augmented neural network architecture that inherently highlights the most informative features, thus providing intrinsic explanations for its predictions. Drawing inspiration from TabNet and Transformer-based models, our model applies multi-head feature-wise attention to automatically weight each feature’s contribution. We incorporate an attention-weight regularization scheme (e.g. sparsemax) to encourage focused attributio
25

Naresh Vurukonda. "A Novel Framework for Inherently Interpretable Deep Neural Networks Using Attention-Based Feature Attribution in High-Dimensional Tabular Data." Journal of Information Systems Engineering and Management 10, no. 51s (2025): 1076–81. https://doi.org/10.52783/jisem.v10i51s.10626.

Abstract:
Deep learning models for tabular data often lack interpretability, posing challenges in domains like healthcare and finance where trust is critical. We propose an attention-augmented neural network architecture that inherently highlights the most informative features, thus providing intrinsic explanations for its predictions. Drawing inspiration from TabNet and Transformer-based models, our model applies multi-head feature-wise attention to automatically weight each feature’s contribution. We incorporate an attention-weight regularization scheme (e.g. sparsemax) to encourage focused attributio
26

Ozdemir, Olcar. "Explainable AI (XAI) in Healthcare: Bridging the Gap between Accuracy and Interpretability." Journal of Science, Technology and Engineering Research 1, no. 1 (2024): 32–44. https://doi.org/10.64206/0z78ev10.

Abstract:
Artificial Intelligence (AI) has demonstrated significant potential in revolutionizing healthcare by enhancing diagnostic accuracy, predicting patient outcomes, and optimizing treatment plans. However, the increasing reliance on complex, black-box models has raised critical concerns around transparency, trust, and accountability—particularly in high-stakes medical settings where interpretability is vital for clinical decision-making. This paper explores Explainable AI (XAI) as a solution to bridge the gap between model performance and human interpretability. We review current XAI techniques, i
27

García-Vicente, Clara, David Chushig-Muzo, Inmaculada Mora-Jiménez, et al. "Evaluation of Synthetic Categorical Data Generation Techniques for Predicting Cardiovascular Diseases and Post-Hoc Interpretability of the Risk Factors." Applied Sciences 13, no. 7 (2023): 4119. http://dx.doi.org/10.3390/app13074119.

Abstract:
Machine Learning (ML) methods have become important for enhancing the performance of decision-support predictive models. However, class imbalance is one of the main challenges for developing ML models, because it may bias the learning process and the model generalization ability. In this paper, we consider oversampling methods for generating synthetic categorical clinical data aiming to improve the predictive performance in ML models, and the identification of risk factors for cardiovascular diseases (CVDs). We performed a comparative study of several categorical synthetic data generation meth
28

Jalali, Anahid, Alexander Schindler, Bernhard Haslhofer, and Andreas Rauber. "Machine Learning Interpretability Techniques for Outage Prediction: A Comparative Study." PHM Society European Conference 5, no. 1 (2020): 10. http://dx.doi.org/10.36001/phme.2020.v5i1.1244.

Abstract:
Interpretable machine learning has recently attracted a lot of interest in the community. Currently, it mainly focuses on models trained on non-time series data. LIME and SHAP are well-known examples and provide visual-explanations of feature contributions to model decisions on an instance basis. Other post-hoc approaches, such as attribute-wise interpretations, also focus on tabular data only. Little research has been done so far on the interpretability of predictive models trained on time series data. Therefore, this work focuses on explaining decisions made by black-box models such as Deep
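As a concrete example of the instance-level, post hoc explanations contrasted in this entry, the sketch below applies LIME to a single prediction of a tabular classifier. The synthetic sensor features and gradient-boosting model are assumptions for illustration; extending such explanations to time-series models, which is the gap this paper addresses, is not attempted here.

```python
# Local post hoc explanation of a single tabular prediction with LIME.
# Synthetic data and model are placeholders for illustration only.
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
feature_names = [f"sensor_{i}" for i in range(6)]
X = rng.normal(size=(400, 6))
y = (X[:, 0] - X[:, 2] > 0).astype(int)
model = GradientBoostingClassifier().fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=feature_names,
    class_names=["normal", "outage"],
    mode="classification",
)
# Explain one prediction: LIME fits a sparse local surrogate around this instance.
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
for feature, weight in explanation.as_list():
    print(f"{feature:25s} {weight:+.3f}")
```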
29

LE, Khanh Giang. "IMPROVING ROAD SAFETY: SUPERVISED MACHINE LEARNING ANALYSIS OF FACTORS INFLUENCING CRASH SEVERITY." Scientific Journal of Silesian University of Technology. Series Transport 127 (June 1, 2025): 129–53. https://doi.org/10.20858/sjsutst.2025.127.8.

Abstract:
Road traffic crash severity is shaped by a complex interplay of human, vehicular, environmental, and infrastructural factors. While machine learning (ML) has shown promise in analyzing crash data, gaps remain in model interpretability and region-specific insights, particularly for the UK context. This study addresses these gaps by evaluating supervised ML models – Decision Tree, Support Vector Machine (SVM), and LightGBM – to predict crash severity using 2022 UK accident data. The research emphasizes interpretability through SHapley Additive exPlanations (SHAP) to identify critical factors inf
30

Zdravkovic, Milan. "On the global feature importance for interpretable and trustworthy heat demand forecasting." Thermal Science, no. 00 (2025): 48. https://doi.org/10.2298/tsci241223048z.

Abstract:
The paper introduces the Explainable AI methodology to assess the global feature importance of the Machine Learning models used for heat demand forecasting in intelligent control of District Heating Systems (DHS), with motivation to facilitate their interpretability and trustworthiness, hence addressing the challenges related to adherence to communal standards, customer satisfaction and liability risks. Methodology involves generation of global feature importance insights by using four different approaches, namely intrinsic (ante-hoc) interpretability of Gradient Boosting method and selected
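One model-agnostic route to the global feature importances discussed in this entry is permutation importance; the sketch below computes it for a gradient-boosting regressor on synthetic stand-in data. The feature names and model are assumptions, not the district heating data or the four specific approaches compared in the paper.

```python
# Global, model-agnostic feature importance via permutation (synthetic data;
# not the district heating dataset or the approaches compared in the paper).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(7)
feature_names = ["outdoor_temp", "hour_of_day", "wind_speed", "prev_demand"]
X = rng.normal(size=(1000, 4))
y = -3.0 * X[:, 0] + 1.0 * X[:, 3] + rng.normal(scale=0.3, size=1000)

model = GradientBoostingRegressor(random_state=0).fit(X, y)
# Shuffle each feature column in turn and measure the drop in model score.
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)

for name, mean, std in zip(feature_names, result.importances_mean,
                           result.importances_std):
    print(f"{name:12s} importance = {mean:.3f} ± {std:.3f}")
```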
31

Gunasekara, Sachini, and Mirka Saarela. "Explainable AI in Education: Techniques and Qualitative Assessment." Applied Sciences 15, no. 3 (2025): 1239. https://doi.org/10.3390/app15031239.

Abstract:
Many of the articles on AI in education compare the performance and fairness of different models, but few specifically focus on quantitatively analyzing their explainability. To bridge this gap, we analyzed key evaluation metrics for two machine learning models—ANN and DT—with a focus on their performance and explainability in predicting student outcomes using the OULAD. The methodology involved evaluating the DT, an intrinsically explainable model, against the more complex ANN, which requires post hoc explainability techniques. The results show that, although the feature-based and structured
32

Maddala, Suresh Kumar. "Understanding Explainability in Enterprise AI Models." International Journal of Management Technology 12, no. 1 (2025): 58–68. https://doi.org/10.37745/ijmt.2013/vol12n25868.

Abstract:
This article examines the critical role of explainability in enterprise AI deployments, where algorithmic transparency has emerged as both a regulatory necessity and a business imperative. As organizations increasingly rely on sophisticated machine learning models for consequential decisions, the "black box" problem threatens stakeholder trust, regulatory compliance, and effective model governance. We explore the multifaceted business case for explainable AI across regulated industries, analyze the spectrum of interpretability techniques—from inherently transparent models to post-hoc explanati
33

Ali, Ali Mohammed Omar. "Explainability in AI: Interpretable Models for Data Science." International Journal for Research in Applied Science and Engineering Technology 13, no. 2 (2025): 766–71. https://doi.org/10.22214/ijraset.2025.66968.

Abstract:
As artificial intelligence (AI) continues to drive advancements across various domains, the need for explainability in AI models has become increasingly critical. Many state-of-the-art machine learning models, particularly deep learning architectures, operate as "black boxes," making their decision-making processes difficult to interpret. Explainable AI (XAI) aims to enhance model transparency, ensuring that AI-driven decisions are understandable, trustworthy, and aligned with ethical and regulatory standards. This paper explores different approaches to AI interpretability, including intrinsic
34

Vinayak Pillai. "Enhancing the transparency of data and ml models using explainable AI (XAI)." World Journal of Advanced Engineering Technology and Sciences 13, no. 1 (2024): 397–406. http://dx.doi.org/10.30574/wjaets.2024.13.1.0428.

Abstract:
To this end, this paper focuses on the increasing demand for the explainability of Machine Learning (ML) models especially in environments where these models are employed to make critical decisions such as in healthcare, finance, and law. Although the typical ML models are considered opaque, XAI provides a set of ways and means to propose making these models more transparent and, thus, easier to explain. This paper describes and analyzes the model-agnostic approach, method of intrinsic explanation, post-hoc explanation, and visualization instruments and demonstrates the use of XAI in various f
35

Wang, Zhengguang. "Validation, Robustness, and Accuracy of Perturbation-Based Sensitivity Analysis Methods for Time-Series Deep Learning Models." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 21 (2024): 23768–70. http://dx.doi.org/10.1609/aaai.v38i21.30559.

Abstract:
This work undertakes studies to evaluate Interpretability Methods for Time Series Deep Learning. Sensitivity analysis assesses how input changes affect the output, constituting a key component of interpretation. Among the post-hoc interpretation methods such as back-propagation, perturbation, and approximation, my work will investigate perturbation-based sensitivity Analysis methods on modern Transformer models to benchmark their performances. Specifically, my work intends to answer three research questions: 1) Do different sensitivity analysis methods yield comparable outputs and attribute im
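Perturbation-based sensitivity analysis of the kind benchmarked in this entry reduces to occluding parts of the input sequence and measuring how much the model output moves. The sketch below does this for an arbitrary callable over a toy univariate series; the window size, zero-valued occlusion, and dummy forecaster are assumptions, not the Transformer models studied in the work.

```python
# Occlusion-style perturbation sensitivity for a time-series model:
# zero out a sliding window and record the change in the model output.
# The "model" here is a toy callable, not the Transformers from the cited work.
import numpy as np

def toy_model(series: np.ndarray) -> float:
    """Placeholder forecaster: weighted mean emphasising recent time steps."""
    weights = np.linspace(0.1, 1.0, series.shape[0])
    return float(np.dot(series, weights) / weights.sum())

def occlusion_sensitivity(model, series, window=5):
    """Return |f(x) - f(x with window w zeroed)| for each window start."""
    baseline = model(series)
    scores = np.zeros(series.shape[0] - window + 1)
    for start in range(scores.shape[0]):
        perturbed = series.copy()
        perturbed[start:start + window] = 0.0   # simple occlusion perturbation
        scores[start] = abs(model(perturbed) - baseline)
    return scores

rng = np.random.default_rng(3)
series = rng.normal(size=100).cumsum()          # toy univariate series
scores = occlusion_sensitivity(toy_model, series)
print("most influential window starts at t =", int(scores.argmax()))
```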
36

Chatterjee, Soumick, Arnab Das, Chirag Mandal, et al. "TorchEsegeta: Framework for Interpretability and Explainability of Image-Based Deep Learning Models." Applied Sciences 12, no. 4 (2022): 1834. http://dx.doi.org/10.3390/app12041834.

Abstract:
Clinicians are often very sceptical about applying automatic image processing approaches, especially deep learning-based methods, in practice. One main reason for this is the black-box nature of these approaches and the inherent problem of missing insights of the automatically derived decisions. In order to increase trust in these methods, this paper presents approaches that help to interpret and explain the results of deep learning algorithms by depicting the anatomical areas that influence the decision of the algorithm most. Moreover, this research presents a unified framework, TorchEsegeta,
37

Sai Teja Boppiniti. "A SURVEY ON EXPLAINABLE AI: TECHNIQUES AND CHALLENGES." International Journal of Innovations in Engineering Research and Technology 7, no. 3 (2020): 57–66. http://dx.doi.org/10.26662/ijiert.v7i3.pp57-66.

Abstract:
Explainable Artificial Intelligence (XAI) is a rapidly evolving field aimed at making AI systems more interpretable and transparent to human users. As AI technologies become increasingly integrated into critical sectors such as healthcare, finance, and autonomous systems, the need for explanations behind AI decisions has grown significantly. This survey provides a comprehensive review of XAI techniques, categorizing them into post-hoc and intrinsic methods, and examines their application in various domains. Additionally, the paper explores the major challenges in achieving explainability, incl
38

Murdoch, W. James, Chandan Singh, Karl Kumbier, Reza Abbasi-Asl, and Bin Yu. "Definitions, methods, and applications in interpretable machine learning." Proceedings of the National Academy of Sciences 116, no. 44 (2019): 22071–80. http://dx.doi.org/10.1073/pnas.1900654116.

Abstract:
Machine-learning models have demonstrated great success in learning complex patterns that enable them to make predictions about unobserved data. In addition to using models for prediction, the ability to interpret what a model has learned is receiving an increasing amount of attention. However, this increased focus has led to considerable confusion about the notion of interpretability. In particular, it is unclear how the wide array of proposed interpretation methods are related and what common concepts can be used to evaluate them. We aim to address these concerns by defining interpretability
39

Aslam, Nida, Irfan Ullah Khan, Samiha Mirza, et al. "Interpretable Machine Learning Models for Malicious Domains Detection Using Explainable Artificial Intelligence (XAI)." Sustainability 14, no. 12 (2022): 7375. http://dx.doi.org/10.3390/su14127375.

Abstract:
With the expansion of the internet, a major threat has emerged involving the spread of malicious domains intended by attackers to perform illegal activities aiming to target governments, violating privacy of organizations, and even manipulating everyday users. Therefore, detecting these harmful domains is necessary to combat the growing network attacks. Machine Learning (ML) models have shown significant outcomes towards the detection of malicious domains. However, the “black box” nature of the complex ML models obstructs their wide-ranging acceptance in some of the fields. The emergence of Ex
40

Roscher, R., B. Bohn, M. F. Duarte, and J. Garcke. "EXPLAIN IT TO ME – FACING REMOTE SENSING CHALLENGES IN THE BIO- AND GEOSCIENCES WITH EXPLAINABLE MACHINE LEARNING." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences V-3-2020 (August 3, 2020): 817–24. http://dx.doi.org/10.5194/isprs-annals-v-3-2020-817-2020.

Abstract:
For some time now, machine learning methods have been indispensable in many application areas. Especially with the recent development of efficient neural networks, these methods are increasingly used in the sciences to obtain scientific outcomes from observational or simulated data. Besides a high accuracy, a desired goal is to learn explainable models. In order to reach this goal and obtain explanation, knowledge from the respective domain is necessary, which can be integrated into the model or applied post-hoc. We discuss explainable machine learning approaches which are used to ta
41

Damilare Tiamiyu, Seun Oluwaremilekun Aremu, Igba Emmanuel, Chidimma Judith Ihejirika, Michael Babatunde Adewoye, and Adeshina Akin Ajayi. "Interpretable Data Analytics in Blockchain Networks Using Variational Autoencoders and Model-Agnostic Explanation Techniques for Enhanced Anomaly Detection." International Journal of Scientific Research in Science and Technology 11, no. 6 (2024): 152–83. http://dx.doi.org/10.32628/ijsrst24116170.

Abstract:
The rapid growth of blockchain technology has brought about increased transaction volumes and complexity, leading to challenges in detecting fraudulent activities and understanding data patterns. Traditional data analytics approaches often fall short in providing both accurate anomaly detection and interpretability, especially in decentralized environments. This paper explores the integration of Variational Autoencoders (VAEs), a deep learning-based anomaly detection technique, with model-agnostic explanation methods such as SHAP (SHapley Additive Explanations) and LIME (Local Interpretable Mo
42

Guo, Jiaxing, Zhiyi Tang, Changxing Zhang, Wei Xu, and Yonghong Wu. "An Interpretable Deep Learning Method for Identifying Extreme Events under Faulty Data Interference." Applied Sciences 13, no. 9 (2023): 5659. http://dx.doi.org/10.3390/app13095659.

Abstract:
Structural health monitoring systems continuously monitor the operational state of structures, generating a large amount of monitoring data during the process. The structural responses of extreme events, such as earthquakes, ship collisions, or typhoons, could be captured and further analyzed. However, it is challenging to identify these extreme events due to the interference of faulty data. Real-world monitoring systems suffer from frequent misidentification and false alarms. Unfortunately, it is difficult to improve the system’s built-in algorithms, especially the deep neural networks, partl
43

Methuku, Vijayalaxmi, Sharath Chandra Kondaparthy, and Direesh Reddy Aunugu. "Explainability and Transparency in Artificial Intelligence: Ethical Imperatives and Practical Challenges." International Journal of Electrical, Electronics and Computers 8, no. 3 (2023): 7–12. https://doi.org/10.22161/eec.84.2.

Abstract:
Artificial Intelligence (AI) is increasingly embedded in high-stakes domains such as healthcare, finance, and law enforcement, where opaque decision-making raises significant ethical concerns. Among the core challenges in AI ethics are explainability and transparency—key to fostering trust, accountability, and fairness in algorithmic systems. This review explores the ethical foundations of explainable AI (XAI), surveys leading technical approaches such as model-agnostic interpretability techniques and post-hoc explanation methods and examines their inherent limitations and trade-offs. A real-w
44

Qian, Wei, Chenxu Zhao, Yangyi Li, Fenglong Ma, Chao Zhang, and Mengdi Huai. "Towards Modeling Uncertainties of Self-Explaining Neural Networks via Conformal Prediction." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 13 (2024): 14651–59. http://dx.doi.org/10.1609/aaai.v38i13.29382.

Abstract:
Despite the recent progress in deep neural networks (DNNs), it remains challenging to explain the predictions made by DNNs. Existing explanation methods for DNNs mainly focus on post-hoc explanations where another explanatory model is employed to provide explanations. The fact that post-hoc methods can fail to reveal the actual original reasoning process of DNNs raises the need to build DNNs with built-in interpretability. Motivated by this, many self-explaining neural networks have been proposed to generate not only accurate predictions but also clear and intuitive insights into why a particu
45

Okajima, Yuzuru, and Kunihiko Sadamasa. "Deep Neural Networks Constrained by Decision Rules." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 2496–505. http://dx.doi.org/10.1609/aaai.v33i01.33012496.

Abstract:
Deep neural networks achieve high predictive accuracy by learning latent representations of complex data. However, the reasoning behind their decisions is difficult for humans to understand. On the other hand, rule-based approaches are able to justify the decisions by showing the decision rules leading to them, but they have relatively low accuracy. To improve the interpretability of neural networks, several techniques provide post-hoc explanations of decisions made by neural networks, but they cannot guarantee that the decisions are always explained in a simple form like decision rules becaus
46

Sateesh Kumar Rongali. "Enhancing machine learning models: addressing challenges and future directions." World Journal of Advanced Research and Reviews 25, no. 1 (2025): 1749–53. https://doi.org/10.30574/wjarr.2025.25.1.0190.

Abstract:
Machine learning is considered as a core of modern artificial intelligence with progressive advancements throughout a spectrum including but not limited to healthcare and finance, natural language processing and self-driving cars. However, several problems remain to affect the efficiency, equal opportunities of users, and adaptability of ML models for an even faster-growing era. The limitations include shortage of high quality and access to training data, model complexity that can lead to overfitting, built in bias of the algorithm, interpretability and finally, the computational density neede
47

Huai, Mengdi, Jinduo Liu, Chenglin Miao, Liuyi Yao, and Aidong Zhang. "Towards Automating Model Explanations with Certified Robustness Guarantees." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (2022): 6935–43. http://dx.doi.org/10.1609/aaai.v36i6.20651.

Abstract:
Providing model explanations has gained significant popularity recently. In contrast with the traditional feature-level model explanations, concept-based explanations can provide explanations in the form of high-level human concepts. However, existing concept-based explanation methods implicitly follow a two-step procedure that involves human intervention. Specifically, they first need the human to be involved to define (or extract) the high-level concepts, and then manually compute the importance scores of these identified concepts in a post-hoc way. This laborious process requires significan
48

Xue, Mufan, Xinyu Wu, Jinlong Li, Xuesong Li, and Guoyuan Yang. "A Convolutional Neural Network Interpretable Framework for Human Ventral Visual Pathway Representation." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 6 (2024): 6413–21. http://dx.doi.org/10.1609/aaai.v38i6.28461.

Abstract:
Recently, convolutional neural networks (CNNs) have become the best quantitative encoding models for capturing neural activity and hierarchical structure in the ventral visual pathway. However, the weak interpretability of these black-box models hinders their ability to reveal visual representational encoding mechanisms. Here, we propose a convolutional neural network interpretable framework (CNN-IF) aimed at providing a transparent interpretable encoding model for the ventral visual pathway. First, we adapt the feature-weighted receptive field framework to train two high-performing ventral vi
49

Kumar, Akshi, Shubham Dikshit, and Victor Hugo C. Albuquerque. "Explainable Artificial Intelligence for Sarcasm Detection in Dialogues." Wireless Communications and Mobile Computing 2021 (July 2, 2021): 1–13. http://dx.doi.org/10.1155/2021/2939334.

Abstract:
Sarcasm detection in dialogues has been gaining popularity among natural language processing (NLP) researchers with the increased use of conversational threads on social media. Capturing the knowledge of the domain of discourse, context propagation during the course of dialogue, and situational context and tone of the speaker are some important features to train the machine learning models for detecting sarcasm in real time. As situational comedies vibrantly represent human mannerism and behaviour in everyday real-life situations, this research demonstrates the use of an ensemble supervised le
50

Fan, Yongxian, Meng Liu, and Guicong Sun. "An interpretable machine learning framework for diagnosis and prognosis of COVID-19." PLOS ONE 18, no. 9 (2023): e0291961. http://dx.doi.org/10.1371/journal.pone.0291961.

Abstract:
Coronaviruses have affected the lives of people around the world. Increasingly, studies have indicated that the virus is mutating and becoming more contagious. Hence, the pressing priority is to swiftly and accurately predict patient outcomes. In addition, physicians and patients increasingly need interpretability when building machine models in healthcare. We propose an interpretable machine framework (KISM) that can diagnose and prognose patients based on blood test datasets. First, we use k-nearest neighbors, isolated forests, and SMOTE to pre-process the original blood test datasets. Seven