Journal articles on the topic 'Post-hoc Explainability'
Create an accurate reference in APA, MLA, Chicago, Harvard, and other citation styles
Consult the top 50 journal articles for your research on the topic 'Post-hoc Explainability.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.
Zhang, Xiaopu, Wubing Miao, and Guodong Liu. "Explainable Data Mining Framework of Identifying Root Causes of Rocket Engine Anomalies Based on Knowledge and Physics-Informed Feature Selection." Machines 13, no. 8 (2025): 640. https://doi.org/10.3390/machines13080640.
Acun, Cagla, Ali Ashary, Dan O. Popa, and Olfa Nasraoui. "Optimizing Local Explainability in Robotic Grasp Failure Prediction." Electronics 14, no. 12 (2025): 2363. https://doi.org/10.3390/electronics14122363.
Alfano, Gianvincenzo, Sergio Greco, Domenico Mandaglio, Francesco Parisi, Reza Shahbazian, and Irina Trubitsyna. "Even-if Explanations: Formal Foundations, Priorities and Complexity." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 15 (2025): 15347–55. https://doi.org/10.1609/aaai.v39i15.33684.
Mochaourab, Rami, Arun Venkitaraman, Isak Samsten, Panagiotis Papapetrou, and Cristian R. Rojas. "Post Hoc Explainability for Time Series Classification: Toward a Signal Processing Perspective." IEEE Signal Processing Magazine 39, no. 4 (2022): 119–29. https://doi.org/10.1109/msp.2022.3155955.
Fauvel, Kevin, Tao Lin, Véronique Masson, Élisa Fromont, and Alexandre Termier. "XCM: An Explainable Convolutional Neural Network for Multivariate Time Series Classification." Mathematics 9, no. 23 (2021): 3137. https://doi.org/10.3390/math9233137.
Lee, Gin Chong, and Chu Kiong Loo. "On the Post Hoc Explainability of Optimized Self-Organizing Reservoir Network for Action Recognition." Sensors 22, no. 5 (2022): 1905. https://doi.org/10.3390/s22051905.
Hildt, Elisabeth. "What Is the Role of Explainability in Medical Artificial Intelligence? A Case-Based Approach." Bioengineering 12, no. 4 (2025): 375. https://doi.org/10.3390/bioengineering12040375.
Boya Marqas, Ridwan, Saman M. Almufti, and Rezhna Azad Yusif. "Unveiling Explainability in Artificial Intelligence: A Step towards Transparent AI." International Journal of Scientific World 11, no. 1 (2025): 13–20. https://doi.org/10.14419/f2agrs86.
Maddala, Suresh Kumar. "Understanding Explainability in Enterprise AI Models." International Journal of Management Technology 12, no. 1 (2025): 58–68. https://doi.org/10.37745/ijmt.2013/vol12n25868.
Kabir, Sami, Mohammad Shahadat Hossain, and Karl Andersson. "An Advanced Explainable Belief Rule-Based Framework to Predict the Energy Consumption of Buildings." Energies 17, no. 8 (2024): 1797. https://doi.org/10.3390/en17081797.
Maree, Charl, and Christian Omlin. "Reinforcement Learning Your Way: Agent Characterization through Policy Regularization." AI 3, no. 2 (2022): 250–59. https://doi.org/10.3390/ai3020015.
Yan, Fei, Yunqing Chen, Yiwen Xia, Zhiliang Wang, and Ruoxiu Xiao. "An Explainable Brain Tumor Detection Framework for MRI Analysis." Applied Sciences 13, no. 6 (2023): 3438. https://doi.org/10.3390/app13063438.
Maarten Schraagen, Jan, Sabin Kerwien Lopez, Carolin Schneider, Vivien Schneider, Stephanie Tönjes, and Emma Wiechmann. "The Role of Transparency and Explainability in Automated Systems." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 65, no. 1 (2021): 27–31. https://doi.org/10.1177/1071181321651063.
Acun, Cagla, and Olfa Nasraoui. "Pre Hoc and Co Hoc Explainability: Frameworks for Integrating Interpretability into Machine Learning Training for Enhanced Transparency and Performance." Applied Sciences 15, no. 13 (2025): 7544. https://doi.org/10.3390/app15137544.
Ali, Ali Mohammed Omar. "Explainability in AI: Interpretable Models for Data Science." International Journal for Research in Applied Science and Engineering Technology 13, no. 2 (2025): 766–71. https://doi.org/10.22214/ijraset.2025.66968.
Gunasekara, Sachini, and Mirka Saarela. "Explainable AI in Education: Techniques and Qualitative Assessment." Applied Sciences 15, no. 3 (2025): 1239. https://doi.org/10.3390/app15031239.
Srinivasu, Parvathaneni Naga, N. Sandhya, Rutvij H. Jhaveri, and Roshani Raut. "From Blackbox to Explainable AI in Healthcare: Existing Tools and Case Studies." Mobile Information Systems 2022 (June 13, 2022): 1–20. https://doi.org/10.1155/2022/8167821.
Methuku, Vijayalaxmi, Sharath Chandra Kondaparthy, and Direesh Reddy Aunugu. "Explainability and Transparency in Artificial Intelligence: Ethical Imperatives and Practical Challenges." International Journal of Electrical, Electronics and Computers 8, no. 3 (2023): 7–12. https://doi.org/10.22161/eec.84.2.
Abdelaal, Yasmin, Michaël Aupetit, Abdelkader Baggag, and Dena Al-Thani. "Exploring the Applications of Explainability in Wearable Data Analytics: Systematic Literature Review." Journal of Medical Internet Research 26 (December 24, 2024): e53863. https://doi.org/10.2196/53863.
Zhang, Xiaoming, Xilin Hu, and Huiyong Wang. "Post Hoc Multi-Granularity Explanation for Multimodal Knowledge Graph Link Prediction." Electronics 14, no. 7 (2025): 1390. https://doi.org/10.3390/electronics14071390.
Larriva-Novo, Xavier, Luis Pérez Miguel, Victor A. Villagra, Manuel Álvarez-Campana, Carmen Sanchez-Zas, and Óscar Jover. "Post-Hoc Categorization Based on Explainable AI and Reinforcement Learning for Improved Intrusion Detection." Applied Sciences 14, no. 24 (2024): 11511. https://doi.org/10.3390/app142411511.
Kong, Weihao, Jianping Chen, and Pengfei Zhu. "Machine Learning-Based Uranium Prospectivity Mapping and Model Explainability Research." Minerals 14, no. 2 (2024): 128. https://doi.org/10.3390/min14020128.
Biryukov, D. N., and A. S. Dudkin. "Explainability and Interpretability Are Important Aspects in Ensuring the Security of Decisions Made by Intelligent Systems (Review Article)." Scientific and Technical Journal of Information Technologies, Mechanics and Optics 25, no. 3 (2025): 373–86. https://doi.org/10.17586/2226-1494-2025-25-3-373-386.
Cho, Hyeoncheol, Youngrock Oh, and Eunjoo Jeon. "SEEN: Sharpening Explanations for Graph Neural Networks Using Explanations from Neighborhoods." Advances in Artificial Intelligence and Machine Learning 3, no. 2 (2023): 1165–79. https://doi.org/10.54364/aaiml.2023.1168.
Wyatt, Lucie S., Lennard M. van Karnenbeek, Mark Wijkhuizen, Freija Geldof, and Behdad Dashtbozorg. "Explainable Artificial Intelligence (XAI) for Oncological Ultrasound Image Analysis: A Systematic Review." Applied Sciences 14, no. 18 (2024): 8108. https://doi.org/10.3390/app14188108.
Ganguly, Rita, Dharmpal Singh, and Rajesh Bose. "The Next Frontier of Explainable Artificial Intelligence (XAI) in Healthcare Services: A Study on PIMA Diabetes Dataset." Scientific Temper 16, no. 5 (2025): 4165–70. https://doi.org/10.58414/scientifictemper.2025.16.5.01.
Nogueira, Caio, Luís Fernandes, João N. D. Fernandes, and Jaime S. Cardoso. "Explaining Bounding Boxes in Deep Object Detectors Using Post Hoc Methods for Autonomous Driving Systems." Sensors 24, no. 2 (2024): 516. https://doi.org/10.3390/s24020516.
Pillai, Vinayak. "Enhancing the Transparency of Data and ML Models Using Explainable AI (XAI)." World Journal of Advanced Engineering Technology and Sciences 13, no. 1 (2024): 397–406. https://doi.org/10.30574/wjaets.2024.13.1.0428.
Gopalan, Ranjith, Dileesh Onniyil, Ganesh Viswanathan, and Gaurav Samdani. "Hybrid Models Combining Explainable AI and Traditional Machine Learning: A Review of Methods and Applications." World Journal of Advanced Engineering Technology and Sciences 15, no. 2 (2025): 1388–402. https://doi.org/10.30574/wjaets.2025.15.2.0635.
Grozdanovski, Ljupcho. "The Explanations One Needs for the Explanations One Gives—The Necessity of Explainable AI (XAI) for Causal Explanations of AI-Related Harm: Deconstructing the ‘Refuge of Ignorance’ in the EU’s AI Liability Regulation." International Journal of Law, Ethics, and Technology 2024, no. 2 (2024): 155–262. https://doi.org/10.55574/tqcg5204.
Boppiniti, Sai Teja. "A Survey on Explainable AI: Techniques and Challenges." International Journal of Innovations in Engineering Research and Technology 7, no. 3 (2020): 57–66. https://doi.org/10.26662/ijiert.v7i3.pp57-66.
Pariza, Valentinos, Avik Pal, Madhura Pawar, and Quim Serra Faber. "[Re] Reproducibility Study of 'Label-Free Explainability for Unsupervised Models'." ReScience C 9, no. 2 (2023): #11. https://doi.org/10.5281/zenodo.8173674.
Trejo-Moncada, Denise M. "The eXplainable Artificial Intelligence Paradox in Law: Technological Limits and Legal Transparency." Journal of Artificial Intelligence and Computing Applications 2, no. 1 (2024): 19–27. https://doi.org/10.5281/zenodo.14692066.
Roscher, R., B. Bohn, M. F. Duarte, and J. Garcke. "Explain It to Me – Facing Remote Sensing Challenges in the Bio- and Geosciences with Explainable Machine Learning." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences V-3-2020 (August 3, 2020): 817–24. https://doi.org/10.5194/isprs-annals-v-3-2020-817-2020.
Kulaklıoğlu, Duru. "Explainable AI: Enhancing Interpretability of Machine Learning Models." Human Computer Interaction 8, no. 1 (2024): 91. https://doi.org/10.62802/z3pde490.
Apostolopoulos, Ioannis D., Ifigeneia Athanasoula, Mpesi Tzani, and Peter P. Groumpos. "An Explainable Deep Learning Framework for Detecting and Localising Smoke and Fire Incidents: Evaluation of Grad-CAM++ and LIME." Machine Learning and Knowledge Extraction 4, no. 4 (2022): 1124–35. https://doi.org/10.3390/make4040057.
Chatterjee, Soumick, Arnab Das, Chirag Mandal, et al. "TorchEsegeta: Framework for Interpretability and Explainability of Image-Based Deep Learning Models." Applied Sciences 12, no. 4 (2022): 1834. https://doi.org/10.3390/app12041834.
Li, Lu, Jiale Liu, Xingyu Ji, Maojun Wang, and Zeyu Zhang. "Self-Explainable Graph Transformer for Link Sign Prediction." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 11 (2025): 12084–92. https://doi.org/10.1609/aaai.v39i11.33316.
Rohr, Maurice, Benedikt Müller, Sebastian Dill, Gökhan Güney, and Christoph Hoog Antink. "Multiple Instance Learning Framework Can Facilitate Explainability in Murmur Detection." PLOS Digital Health 3, no. 3 (2024): e0000461. https://doi.org/10.1371/journal.pdig.0000461.
Antoniadi, Anna Markella, Miriam Galvin, Mark Heverin, Lan Wei, Orla Hardiman, and Catherine Mooney. "A Clinical Decision Support System for the Prediction of Quality of Life in ALS." Journal of Personalized Medicine 12, no. 3 (2022): 435. https://doi.org/10.3390/jpm12030435.
Jishnu, Setia. "Explainable AI: Methods and Applications." Explainable AI: Methods and Applications 8, no. 10 (2023): 5. https://doi.org/10.5281/zenodo.10021461.
Sudars, Kaspars, Ivars Namatēvs, and Kaspars Ozols. "Improving Performance of the PRYSTINE Traffic Sign Classification by Using a Perturbation-Based Explainability Approach." Journal of Imaging 8, no. 2 (2022): 30. https://doi.org/10.3390/jimaging8020030.
Moustakidis, Serafeim, Christos Kokkotis, Dimitrios Tsaopoulos, et al. "Identifying Country-Level Risk Factors for the Spread of COVID-19 in Europe Using Machine Learning." Viruses 14, no. 3 (2022): 625. https://doi.org/10.3390/v14030625.
Djoumessi, Kerol, Ziwei Huang, Laura Kühlewein, et al. "An Inherently Interpretable AI Model Improves Screening Speed and Accuracy for Early Diabetic Retinopathy." PLOS Digital Health 4, no. 5 (2025): e0000831. https://doi.org/10.1371/journal.pdig.0000831.
Hong, Jung-Ho, Woo-Jeoung Nam, Kyu-Sung Jeon, and Seong-Whan Lee. "Towards Better Visualizing the Decision Basis of Networks via Unfold and Conquer Attribution Guidance." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 7 (2023): 7884–92. https://doi.org/10.1609/aaai.v37i7.25954.
Huang, Rundong, Farhad Shirani, and Dongsheng Luo. "Factorized Explainer for Graph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 11 (2024): 12626–34. https://doi.org/10.1609/aaai.v38i11.29157.
Vurukonda, Naresh. "A Novel Framework for Inherently Interpretable Deep Neural Networks Using Attention-Based Feature Attribution in High-Dimensional Tabular Data." Journal of Information Systems Engineering and Management 10, no. 50s (2025): 599–604. https://doi.org/10.52783/jisem.v10i50s.10290.
Vurukonda, Naresh. "A Novel Framework for Inherently Interpretable Deep Neural Networks Using Attention-Based Feature Attribution in High-Dimensional Tabular Data." Journal of Information Systems Engineering and Management 10, no. 51s (2025): 1076–81. https://doi.org/10.52783/jisem.v10i51s.10626.
Singh, Rajeev Kumar, Rohan Gorantla, Sai Giridhar Rao Allada, and Pratap Narra. "SkiNet: A Deep Learning Framework for Skin Lesion Diagnosis with Uncertainty Estimation and Explainability." PLOS ONE 17, no. 10 (2022): e0276836. https://doi.org/10.1371/journal.pone.0276836.
Noriega, Jomark, Luis Rivera, Jorge Castañeda, and José Herrera. "From Crisis to Algorithm: Credit Delinquency Prediction in Peru Under Critical External Factors Using Machine Learning." Data 10, no. 5 (2025): 63. https://doi.org/10.3390/data10050063.