Scientific literature on the topic "Post-hoc explainability"

Create an accurate reference in APA, MLA, Chicago, Harvard, and various other styles

Choose a source:

Consult thematic lists of journal articles, books, theses, conference reports, and other academic sources on the topic "Post-hoc explainability".

Next to each source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online when this information is included in the metadata.

Journal articles on the topic "Post-hoc explainability"

1

de-la-Rica-Escudero, Alejandra, Eduardo C. Garrido-Merchán, and María Coronado-Vaca. "Explainable post hoc portfolio management financial policy of a Deep Reinforcement Learning agent." PLOS ONE 20, no. 1 (2025): e0315528. https://doi.org/10.1371/journal.pone.0315528.

Full text
Abstract:
Financial portfolio management investment policies computed quantitatively by modern portfolio theory techniques like the Markowitz model rely on a set of assumptions that are not supported by data in high-volatility markets such as the technological sector or cryptocurrencies. Hence, quantitative researchers are looking for alternative models to tackle this problem. Concretely, portfolio management (PM) is a problem that has recently been successfully addressed by Deep Reinforcement Learning (DRL) approaches. In particular, DRL algorithms train an agent by estimating the distribution of the…
APA, Harvard, Vancouver, ISO, and other styles
2

Viswan, Vimb, Shaffi Noushath, and Mahmud Mufti. "Interpreting artificial intelligence models: a systematic review on the application of LIME and SHAP in Alzheimer's disease detection." Brain Informatics 11 (April 5, 2024): A10. https://doi.org/10.1186/s40708-024-00222-1.

Full text
Abstract:
Explainable artificial intelligence (XAI) has gained much interest in recent years for its ability to explain the complex decision-making processes of machine learning (ML) and deep learning (DL) models. The Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) frameworks have become popular interpretive tools for ML and DL models. This article provides a systematic review of the application of LIME and SHAP in interpreting the detection of Alzheimer's disease (AD). Adhering to PRISMA and Kitchenham's guidelines, we identified 23 relevant articles…
APA, Harvard, Vancouver, ISO, and other styles
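As a companion to the entry above, here is a minimal, hedged sketch of the SHAP workflow such surveys review, assuming the shap and scikit-learn packages are installed; the dataset and model are illustrative stand-ins, not those of the cited study.

```python
# Minimal post-hoc attribution sketch with SHAP; the data/model are stand-ins.
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer computes Shapley-value attributions efficiently for tree
# ensembles; KernelExplainer would be the model-agnostic fallback.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])  # local attributions for 5 samples

# The layout of the result differs across shap versions (list of per-class
# arrays vs. one 3-D array), so inspect the shape before indexing into it.
print(np.shape(shap_values))
```

LIME's tabular explainer (lime.lime_tabular.LimeTabularExplainer) plays the same local role by fitting a sparse linear surrogate model around a single instance.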
3

Alvanpour, Aneseh, Cagla Acun, Kyle Spurlock, et al. "Comparative Analysis of Post Hoc Explainable Methods for Robotic Grasp Failure Prediction." Electronics 14, no. 9 (2025): 1868. https://doi.org/10.3390/electronics14091868.

Full text
Abstract:
In human–robot collaborative environments, predicting and explaining robotic grasp failures is crucial for effective operation. While machine learning models can predict failures accurately, they often lack transparency, limiting their utility in critical applications. This paper presents a comparative analysis of three post hoc explanation methods—Tree-SHAP, LIME, and TreeInterpreter—for explaining grasp failure predictions from white-box and black-box models. Using a simulated robotic grasping dataset, we evaluate these methods based on their agreement in identifying important features…
APA, Harvard, Vancouver, ISO, and other styles
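One simple way to operationalize the agreement criterion mentioned in this abstract is top-k feature overlap between explainers. The sketch below is an illustrative stand-in for such a protocol (not the paper's exact setup), comparing a global mean-|SHAP| ranking against scikit-learn's permutation importance:

```python
# Hedged sketch: agreement between two post-hoc methods as top-k overlap.
# Dataset and model are stand-ins for the robotic grasping setup.
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

X, y = load_breast_cancer(return_X_y=True)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Global SHAP ranking: mean absolute Shapley value per feature. (For binary
# sklearn GBMs, shap_values returns one 2-D array of log-odds contributions;
# other model types may return per-class arrays instead.)
sv = np.asarray(shap.TreeExplainer(model).shap_values(X))
shap_rank = np.argsort(np.abs(sv).mean(axis=0))[::-1]

# Permutation importance: score drop when a feature's column is shuffled.
perm = permutation_importance(model, X, y, n_repeats=5, random_state=0)
perm_rank = np.argsort(perm.importances_mean)[::-1]

k = 5
overlap = len(set(shap_rank[:k]) & set(perm_rank[:k])) / k
print(f"top-{k} feature agreement: {overlap:.2f}")
```

An overlap of 1.0 means both methods nominate the same k features; the cited comparison also examines agreement at the level of individual predictions.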
4

Zednik, Carlos, and Hannes Boelsen. "Scientific Exploration and Explainable Artificial Intelligence." Minds and Machines 32, no. 1 (2022): 219–39. http://dx.doi.org/10.1007/s11023-021-09583-6.

Full text
Abstract:
Models developed using machine learning are increasingly prevalent in scientific research. At the same time, these models are notoriously opaque. Explainable AI aims to mitigate the impact of opacity by rendering opaque models transparent. More than being just the solution to a problem, however, Explainable AI can also play an invaluable role in scientific exploration. This paper describes how post-hoc analytic techniques from Explainable AI can be used to refine target phenomena in medical science, to identify starting points for future investigations of (potentially) causal relations…
APA, Harvard, Vancouver, ISO, and other styles
5

Larriva-Novo, Xavier, Luis Pérez Miguel, Victor A. Villagra, Manuel Álvarez-Campana, Carmen Sanchez-Zas, and Óscar Jover. "Post-Hoc Categorization Based on Explainable AI and Reinforcement Learning for Improved Intrusion Detection." Applied Sciences 14, no. 24 (2024): 11511. https://doi.org/10.3390/app142411511.

Full text
Abstract:
The massive usage of Internet services nowadays has led to a drastic increase in cyberattacks, including sophisticated techniques, so Intrusion Detection Systems (IDSs) need to use AI technologies to enhance their effectiveness. However, this has resulted in a lack of interpretability and explainability in the applications that use AI predictions, making it hard for cybersecurity operators to understand why decisions were made. To address this, the concept of Explainable AI (XAI) has been introduced to make the AI's decisions more understandable at both global and local levels…
APA, Harvard, Vancouver, ISO, and other styles
6

Metsch, Jacqueline Michelle, and Anne-Christin Hauschild. "BenchXAI: Comprehensive benchmarking of post-hoc explainable AI methods on multi-modal biomedical data." Computers in Biology and Medicine 191 (June 2025): 110124. https://doi.org/10.1016/j.compbiomed.2025.110124.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Arjunan, Gopalakrishnan. "Implementing Explainable AI in Healthcare: Techniques for Interpretable Machine Learning Models in Clinical Decision-Making." International Journal of Scientific Research and Management (IJSRM) 9, no. 5 (2021): 597–603. http://dx.doi.org/10.18535/ijsrm/v9i05.ec03.

Full text
Abstract:
The integration of explainable artificial intelligence (XAI) in healthcare is revolutionizing clinical decision-making by providing clarity around complex machine learning (ML) models. As AI becomes increasingly critical in medical fields—ranging from diagnostics to treatment personalization—the interpretability of these models is crucial for fostering trust, transparency, and accountability among healthcare providers and patients. Traditional "black-box" models, such as deep neural networks, often achieve high accuracy but lack transparency, creating challenges in highly regulated, high-stakes…
APA, Harvard, Vancouver, ISO, and other styles
8

Jishnu, Setia. "Explainable AI: Methods and Applications." Explainable AI: Methods and Applications 8, no. 10 (2023): 5. https://doi.org/10.5281/zenodo.10021461.

Full text
Abstract:
Explainable Artificial Intelligence (XAI) has emerged as a critical area of research, ensuring that AI systems are transparent, interpretable, and accountable. This paper provides a comprehensive overview of various methods and applications of Explainable AI. We delve into the importance of interpretability in AI models, explore different techniques for making complex AI models understandable, and discuss real-world applications where explainability is crucial. Through this paper, I aim to shed light on the advancements in the field of XAI and its potential to bridge the gap between AI's predictions…
APA, Harvard, Vancouver, ISO, and other styles
9

Sarma Borah, Proyash Paban, Devraj Kashyap, Ruhini Aktar Laskar, and Ankur Jyoti Sarmah. "A Comprehensive Study on Explainable AI Using YOLO and Post Hoc Method on Medical Diagnosis." Journal of Physics: Conference Series 2919, no. 1 (2024): 012045. https://doi.org/10.1088/1742-6596/2919/1/012045.

Full text
Abstract:
Medical imaging plays a pivotal role in disease detection and intervention. The black-box nature of deep learning models, such as YOLOv8, creates challenges in interpreting their decisions. This paper presents a toolset to enhance interpretability in AI-based diagnostics by integrating Explainable AI (XAI) techniques with YOLOv8. This paper explores the implementation of post hoc methods, including Grad-CAM and Eigen-CAM, to assist end users in understanding the decision making of the model. This comprehensive evaluation utilises CT datasets, demonstrating the efficacy of YOLOv8 for object detection…
APA, Harvard, Vancouver, ISO, and other styles
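Grad-CAM, one of the post hoc methods this toolset integrates, can be sketched with plain PyTorch hooks. The snippet below is a minimal illustration using a torchvision ResNet as a stand-in backbone; wiring it into YOLOv8, as the paper does, takes more work.

```python
# Bare-bones Grad-CAM via forward/backward hooks; the backbone and input
# are stand-ins (random weights, random "image"), purely for illustration.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
feats, grads = {}, {}

target = model.layer4  # last conv stage; the layer choice is a design decision
target.register_forward_hook(lambda m, i, o: feats.update(a=o))
target.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

x = torch.randn(1, 3, 224, 224)            # stand-in preprocessed image
model(x)[0].max().backward()               # backprop the top-class logit

w = grads["a"].mean(dim=(2, 3), keepdim=True)    # per-channel weights
cam = F.relu((w * feats["a"]).sum(dim=1))        # weighted activation map
cam = F.interpolate(cam[None], size=x.shape[-2:], mode="bilinear")[0, 0]
print(cam.shape)  # 224x224 heat map of salient input regions
```

Eigen-CAM, the other method mentioned, replaces the gradient weighting with the principal component of the activations, so it needs no backward pass at all.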
10

Yang, Huijin, Seon Ha Baek, and Sejoong Kim. "Explainable Prediction of Overcorrection in Severe Hyponatremia: A Post Hoc Analysis of the SALSA Trial." Journal of the American Society of Nephrology 32, no. 10S (2021): 377. http://dx.doi.org/10.1681/asn.20213210s1377b.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Theses on the topic "Post-hoc explainability"

1

Seveso, Andrea. "Symbolic Reasoning for Contrastive Explanations." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2023. https://hdl.handle.net/10281/404830.

Full text
Abstract:
The need for explanations of Machine Learning (ML) systems is growing as new models outperform their predecessors, becoming more complex and less comprehensible to end users. An essential step in eXplainable Artificial Intelligence (XAI) research is the creation of interpretable models that aim to approximate the decision function of a black-box algorithm. Although several XAI methods have been proposed in recent years, not enough attention has been paid to explaining how models change their…
APA, Harvard, Vancouver, ISO, and other styles
2

Radulovic, Nedeljko. "Post-hoc Explainable AI for Black Box Models on Tabular Data." Electronic thesis or dissertation, Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAT028.

Full text
Abstract:
Current artificial intelligence (AI) models have proven themselves in solving a variety of tasks, such as classification, regression, natural language processing (NLP), and image processing. The resources available to us today allow us to train very complex AI models to solve problems in almost any domain: medicine, finance, justice, transportation, forecasting, etc. With the popularity and widespread use of AI models, the need to ensure trust in them has also grown.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Post-hoc explainability"

1

Kamath, Uday, and John Liu. "Post-Hoc Interpretability and Explanations." In Explainable Artificial Intelligence: An Introduction to Interpretable Machine Learning. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-83356-5_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Deshpande, Saurabh, Rahee Walambe, Ketan Kotecha, and Marina Marjanović Jakovljević. "Post-hoc Explainable Reinforcement Learning Using Probabilistic Graphical Models." In Communications in Computer and Information Science. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-95502-1_28.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Cánovas-Segura, Bernardo, Antonio Morales, Antonio López Martínez-Carrasco, et al. "Exploring Antimicrobial Resistance Prediction Using Post-hoc Interpretable Methods." In Artificial Intelligence in Medicine: Knowledge Representation and Transparent and Explainable Systems. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-37446-4_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Stevens, Alexander, Johannes De Smedt, and Jari Peeperkorn. "Quantifying Explainability in Outcome-Oriented Predictive Process Monitoring." In Lecture Notes in Business Information Processing. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-98581-3_15.

Full text
Abstract:
The growing interest in applying machine and deep learning algorithms in an Outcome-Oriented Predictive Process Monitoring (OOPPM) context has recently fuelled a shift to use models from the explainable artificial intelligence (XAI) paradigm, a field of study focused on creating explainability techniques on top of AI models in order to legitimize the predictions made. Nonetheless, most classification models are evaluated primarily on a performance level, where XAI requires striking a balance between either simple models (e.g. linear regression) or models using complex inference structures…
APA, Harvard, Vancouver, ISO, and other styles
5

Agiollo, Andrea, Luciano Cavalcante Siebert, Pradeep Kumar Murukannaiah, and Andrea Omicini. "The Quarrel of Local Post-hoc Explainers for Moral Values Classification in Natural Language Processing." In Explainable and Transparent AI and Multi-Agent Systems. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-40878-6_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Neubig, Stefan, Daria Cappey, Nicolas Gehring, Linus Göhl, Andreas Hein, and Helmut Krcmar. "Visualizing Explainable Touristic Recommendations: An Interactive Approach." In Information and Communication Technologies in Tourism 2024. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-58839-6_37.

Full text
Abstract:
Personalized recommendations have played a vital role in tourism, serving various purposes, ranging from an improved visitor experience to addressing sustainability issues. However, research shows that recommendations are more likely to be accepted by visitors if they are comprehensible and appeal to the visitors' common sense. This highlights the importance of explainable recommendations that, according to a previously specified goal, explain an algorithm's inference process, generate trust among visitors, or educate visitors by making them aware of sustainability practices. Based on…
APA, Harvard, Vancouver, ISO, and other styles
7

Mota, Bruno, Pedro Faria, Juan Corchado, and Carlos Ramos. "Explainable Artificial Intelligence Applied to Predictive Maintenance: Comparison of Post-Hoc Explainability Techniques." In Communications in Computer and Information Science. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-63803-9_19.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Oliveira, Pedro, Francisco Franco, Afonso Bessa, Dalila Durães, and Paulo Novais. "Employing Explainable AI Techniques for Air Pollution: An Ante-Hoc and Post-Hoc Approach in Dioxide Nitrogen Forecasting." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-77731-8_30.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Pandey, Chetraj, Rafal A. Angryk, Manolis K. Georgoulis, and Berkay Aydin. "Explainable Deep Learning-Based Solar Flare Prediction with Post Hoc Attention for Operational Forecasting." In Discovery Science. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-45275-8_38.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Nizam, Tasleem, Sherin Zafar, Siddhartha Sankar Biswas, and Imran Hussain. "Investigating the Quality of Explainable Artificial Intelligence: A Survey on Various Techniques of Post hoc." In Intelligent Strategies for ICT. Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-97-1260-1_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference proceedings on the topic "Post-hoc explainability"

1

Xu, Kerui, Jun Xu, Sheng Gao, Si Li, Jun Guo, and Ji-Rong Wen. "A Tag-Based Post-Hoc Framework for Explainable Conversational Recommendation." In ICTIR '22: The 2022 ACM SIGIR International Conference on the Theory of Information Retrieval. ACM, 2022. http://dx.doi.org/10.1145/3539813.3545120.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Deb, Kiron, Xuan Zhang, and Kevin Duh. "Post-Hoc Interpretation of Transformer Hyperparameters with Explainable Boosting Machines." In Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP. Association for Computational Linguistics, 2022. http://dx.doi.org/10.18653/v1/2022.blackboxnlp-1.5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Senevirathna, Thulitha, Bartlomiej Siniarski, Madhusanka Liyanage, and Shen Wang. "Deceiving Post-Hoc Explainable AI (XAI) Methods in Network Intrusion Detection." In 2024 IEEE 21st Consumer Communications & Networking Conference (CCNC). IEEE, 2024. http://dx.doi.org/10.1109/ccnc51664.2024.10454633.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Kenny, Eoin M., Eoin Delaney, and Mark T. Keane. "Advancing Post-Hoc Case-Based Explanation with Feature Highlighting." In Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI-23). International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/48.

Full text
Abstract:
Explainable AI (XAI) has been proposed as a valuable tool to assist in downstream tasks involving human-AI collaboration. Perhaps the most psychologically valid XAI techniques are case-based approaches which display "whole" exemplars to explain the predictions of black-box AI systems. However, for such post-hoc XAI methods dealing with images, there has been no attempt to improve their scope by using multiple clear feature "parts" of the images to explain the predictions while linking back to relevant cases in the training data, thus allowing for more comprehensive explanations that are faithful…
APA, Harvard, Vancouver, ISO, and other styles
5

Demir, Caglar, and Axel-Cyrille Ngonga Ngomo. "Neuro-Symbolic Class Expression Learning." In Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI-23). International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/403.

Full text
Abstract:
Models computed using deep learning have been effectively applied to tackle various problems in many disciplines. Yet, the predictions of these models are often at most post-hoc and locally explainable. In contrast, class expressions in description logics are ante-hoc and globally explainable. Although state-of-the-art symbolic machine learning approaches are being successfully applied to learn class expressions, their application at large scale has been hindered by their impractical runtimes. Arguably, the reliance on myopic heuristic functions contributes to this limitation. We propose a novel…
APA, Harvard, Vancouver, ISO, and other styles
6

Čyras, Kristijonas, Antonio Rago, Emanuele Albini, Pietro Baroni, and Francesca Toni. "Argumentative XAI: A Survey." In Thirtieth International Joint Conference on Artificial Intelligence (IJCAI-21). International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/600.

Full text
Abstract:
Explainable AI (XAI) has been investigated for decades and, together with AI itself, has witnessed unprecedented growth in recent years. Among various approaches to XAI, argumentative models have been advocated in both the AI and social science literature, as their dialectical nature appears to match some basic desirable features of the explanation activity. In this survey we overview XAI approaches built using methods from the field of computational argumentation, leveraging its wide array of reasoning abstractions and explanation delivery methods. We overview the literature focusing on different…
APA, Harvard, Vancouver, ISO, and other styles
7

Aryal, Saugat, and Mark T. Keane. "Even If Explanations: Prior Work, Desiderata & Benchmarks for Semi-Factual XAI." In Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI-23). International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/732.

Full text
Abstract:
Recently, eXplainable AI (XAI) research has focused on counterfactual explanations as post-hoc justifications for AI-system decisions (e.g., a customer refused a loan might be told "if you had asked for a loan with a shorter term, it would have been approved"). Counterfactuals explain what changes to the input features of an AI system would change the output decision. However, there is a sub-type of counterfactual, the semi-factual, that has received less attention in AI (though the Cognitive Sciences have studied it more). This paper surveys semi-factual explanation, summarising historical and recent work…
APA, Harvard, Vancouver, ISO, and other styles
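The counterfactual mechanism this abstract starts from is easy to demonstrate. Below is a toy, purely illustrative sketch (a made-up two-feature "loan" dataset and a brute-force search; none of it comes from the cited paper) that looks for a small feature change flipping a classifier's decision:

```python
# Toy counterfactual search: find a nearby input the model classifies
# differently. Data, features, and search strategy are all illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))            # e.g. standardized (term, income)
y = (X[:, 1] > X[:, 0]).astype(int)      # 1 = approved when income > term
model = LogisticRegression().fit(X, y)

x0 = np.array([1.0, 0.2])                # a refused applicant (class 0)
best, best_dist = None, np.inf
for _ in range(5000):                     # crude random search
    cand = x0 + rng.normal(scale=0.5, size=2)
    if model.predict(cand.reshape(1, -1))[0] == 1:  # decision flipped
        dist = np.linalg.norm(cand - x0)
        if dist < best_dist:
            best, best_dist = cand, dist

print("counterfactual:", np.round(best, 2), "at L2 distance", round(best_dist, 2))
```

A semi-factual, by contrast, is a nearby input for which the decision does not change ("even if you had asked for a shorter term, the loan would still have been refused").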
8

Thendral Surendranath, Ephina. "Explainable Hybrid Machine Learning Technique for Healthcare Service Utilization." In 15th International Conference on Applied Human Factors and Ergonomics (AHFE 2024). AHFE International, 2024. http://dx.doi.org/10.54941/ahfe1004837.

Full text
Abstract:
In the era of data, predictive and prescriptive analytics in healthcare are enabled by machine learning (ML) algorithms. The varied healthcare entities pose challenges for the inclusion of ML predictive models in rule-based claims processing systems. The hybrid ML algorithm proposed in this research article is for handling huge volumes of data in predicting a member's utilization of Medicaid home healthcare services. The member's demographic features, health details, and enrolment details are generally considered for building the utilization model, though health details may not be available for…
APA, Harvard, Vancouver, ISO, and other styles
9

Sattarzadeh, Sam, Mahesh Sudhakar, and Konstantinos N. Plataniotis. "SVEA: A Small-scale Benchmark for Validating the Usability of Post-hoc Explainable AI Solutions in Image and Signal Recognition." In 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW). IEEE, 2021. http://dx.doi.org/10.1109/iccvw54120.2021.00462.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Morais, Lucas Rabelo de Araujo, Gabriel Arnaud de Melo Fragoso, Teresa Bernarda Ludermir, and Claudio Luis Alves Monteiro. "Explainable AI For the Brazilian Stock Market Index: A Post-Hoc Approach to Deep Learning Models in Time-Series Forecasting." In Encontro Nacional de Inteligência Artificial e Computacional. Sociedade Brasileira de Computação - SBC, 2024. https://doi.org/10.5753/eniac.2024.244444.

Full text
Abstract:
Time-series forecasting is challenging when data lacks clear trends or seasonality, making traditional statistical models less effective. Deep Learning models, like Neural Networks, excel at capturing non-linear patterns and offer a promising alternative. The Bovespa Index (Ibovespa), a key indicator of Brazil's stock market, is volatile, leading to potential investor losses due to inaccurate forecasts and limited market insight. Neural Networks can enhance forecast accuracy but reduce model explainability. This study aims to use Deep Learning to forecast the Ibovespa, striving to balance…
APA, Harvard, Vancouver, ISO, and other styles
We offer discounts on all premium plans for authors whose works are included in thematic literature collections. Contact us to get a unique promo code!