Journal articles on the topic 'Post-hoc interpretability'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 journal articles for your research on the topic 'Post-hoc interpretability.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.
Feng, Jiangfan, Yukun Liang, and Lin Li. "Anomaly Detection in Videos Using Two-Stream Autoencoder with Post Hoc Interpretability." Computational Intelligence and Neuroscience 2021 (July 26, 2021): 1–15. http://dx.doi.org/10.1155/2021/7367870.
Sinhamahapatra, Poulami, Suprosanna Shit, Anjany Sekuboyina, et al. "Enhancing Interpretability of Vertebrae Fracture Grading using Human-interpretable Prototypes." Machine Learning for Biomedical Imaging 2, July 2024 (2024): 977–1002. http://dx.doi.org/10.59275/j.melba.2024-258b.
Sarma Borah, Proyash Paban, Devraj Kashyap, Ruhini Aktar Laskar, and Ankur Jyoti Sarmah. "A Comprehensive Study on Explainable AI Using YOLO and Post Hoc Method on Medical Diagnosis." Journal of Physics: Conference Series 2919, no. 1 (2024): 012045. https://doi.org/10.1088/1742-6596/2919/1/012045.
Zhang, Zaixi, Qi Liu, Hao Wang, Chengqiang Lu, and Cheekong Lee. "ProtGNN: Towards Self-Explaining Graph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (2022): 9127–35. http://dx.doi.org/10.1609/aaai.v36i8.20898.
Alfano, Gianvincenzo, Sergio Greco, Domenico Mandaglio, Francesco Parisi, Reza Shahbazian, and Irina Trubitsyna. "Even-if Explanations: Formal Foundations, Priorities and Complexity." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 15 (2025): 15347–55. https://doi.org/10.1609/aaai.v39i15.33684.
Xu, Qian, Wenzhao Xie, Bolin Liao, et al. "Interpretability of Clinical Decision Support Systems Based on Artificial Intelligence from Technological and Medical Perspective: A Systematic Review." Journal of Healthcare Engineering 2023 (February 3, 2023): 1–13. http://dx.doi.org/10.1155/2023/9919269.
Gill, Navdeep, Patrick Hall, Kim Montgomery, and Nicholas Schmidt. "A Responsible Machine Learning Workflow with Focus on Interpretable Models, Post-hoc Explanation, and Discrimination Testing." Information 11, no. 3 (2020): 137. http://dx.doi.org/10.3390/info11030137.
Kulaklıoğlu, Duru. "Explainable AI: Enhancing Interpretability of Machine Learning Models." Human Computer Interaction 8, no. 1 (2024): 91. https://doi.org/10.62802/z3pde490.
Acun, Cagla, and Olfa Nasraoui. "Pre Hoc and Co Hoc Explainability: Frameworks for Integrating Interpretability into Machine Learning Training for Enhanced Transparency and Performance." Applied Sciences 15, no. 13 (2025): 7544. https://doi.org/10.3390/app15137544.
Yousufi Aqmal, Shahid, and Fermle Erdely S. "Enhancing Nonparametric Tests: Insights for Computational Intelligence and Data Mining." Researcher Academy Innovation Data Analysis 1, no. 3 (2024): 214–26. https://doi.org/10.69725/raida.v1i3.168.
Arjunan, Gopalakrishnan. "Implementing Explainable AI in Healthcare: Techniques for Interpretable Machine Learning Models in Clinical Decision-Making." International Journal of Scientific Research and Management (IJSRM) 9, no. 05 (2021): 597–603. http://dx.doi.org/10.18535/ijsrm/v9i05.ec03.
Khiem, Phan Xuan, Zurida B. Batchaeva, and Liana K. Katchieva. "Hybridization of Machine Learning and Statistics Methods to Improve Model Interpretability." Ekonomika i Upravlenie: Problemy, Resheniya 12/15, no. 153 (2024): 214–20. https://doi.org/10.36871/ek.up.p.r.2024.12.15.025.
Marconato, Emanuele, Andrea Passerini, and Stefano Teso. "Interpretability Is in the Mind of the Beholder: A Causal Framework for Human-Interpretable Representation Learning." Entropy 25, no. 12 (2023): 1574. http://dx.doi.org/10.3390/e25121574.
Biryukov, D. N., and A. S. Dudkin. "Explainability and interpretability are important aspects in ensuring the security of decisions made by intelligent systems (review article)." Scientific and Technical Journal of Information Technologies, Mechanics and Optics 25, no. 3 (2025): 373–86. https://doi.org/10.17586/2226-1494-2025-25-3-373-386.
Gaurav, Kashyap. "Explainable AI (XAI): Methods and Techniques to Make Deep Learning Models More Interpretable and Their Real-World Implications." International Journal of Innovative Research in Engineering & Multidisciplinary Physical Sciences 11, no. 4 (2023): 1–7. https://doi.org/10.5281/zenodo.14382747.
Larriva-Novo, Xavier, Luis Pérez Miguel, Victor A. Villagra, Manuel Álvarez-Campana, Carmen Sanchez-Zas, and Óscar Jover. "Post-Hoc Categorization Based on Explainable AI and Reinforcement Learning for Improved Intrusion Detection." Applied Sciences 14, no. 24 (2024): 11511. https://doi.org/10.3390/app142411511.
Degtiarova, Ganna, Fran Mikulicic, Jan Vontobel, et al. "Post-hoc motion correction for coronary computed tomography angiography without additional radiation dose - Improved image quality and interpretability for “free”." Imaging 14, no. 2 (2022): 82–88. http://dx.doi.org/10.1556/1647.2022.00060.
Mohorčič, Domen, and David Ocepek. "[Re] Hierarchical Shrinkage: Improving the Accuracy and Interpretability of Tree-Based Methods." ReScience C 9, no. 2 (2023): #19. https://doi.org/10.5281/zenodo.8173696.
Jishnu, Setia. "Explainable AI: Methods and Applications." Explainable AI: Methods and Applications 8, no. 10 (2023): 5. https://doi.org/10.5281/zenodo.10021461.
Lao, Danning, Qi Liu, Jiazi Bu, Junchi Yan, and Wei Shen. "ViTree: Single-Path Neural Tree for Step-Wise Interpretable Fine-Grained Visual Categorization." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (2024): 2866–73. http://dx.doi.org/10.1609/aaai.v38i3.28067.
Zhou, Pei-Yuan, Amane Takeuchi, Fernando Martinez-Lopez, Malikeh Ehghaghi, Andrew K. C. Wong, and En-Shiun Annie Lee. "Benchmarking Interpretability in Healthcare Using Pattern Discovery and Disentanglement." Bioengineering 12, no. 3 (2025): 308. https://doi.org/10.3390/bioengineering12030308.
Ganguly, Rita, Dharmpal Singh, and Rajesh Bose. "The next frontier of explainable artificial intelligence (XAI) in healthcare services: A study on PIMA diabetes dataset." Scientific Temper 16, no. 05 (2025): 4165–70. https://doi.org/10.58414/scientifictemper.2025.16.5.01.
Vurukonda, Naresh. "A Novel Framework for Inherently Interpretable Deep Neural Networks Using Attention-Based Feature Attribution in High-Dimensional Tabular Data." Journal of Information Systems Engineering and Management 10, no. 50s (2025): 599–604. https://doi.org/10.52783/jisem.v10i50s.10290.
Vurukonda, Naresh. "A Novel Framework for Inherently Interpretable Deep Neural Networks Using Attention-Based Feature Attribution in High-Dimensional Tabular Data." Journal of Information Systems Engineering and Management 10, no. 51s (2025): 1076–81. https://doi.org/10.52783/jisem.v10i51s.10626.
Ozdemir, Olcar. "Explainable AI (XAI) in Healthcare: Bridging the Gap between Accuracy and Interpretability." Journal of Science, Technology and Engineering Research 1, no. 1 (2024): 32–44. https://doi.org/10.64206/0z78ev10.
García-Vicente, Clara, David Chushig-Muzo, Inmaculada Mora-Jiménez, et al. "Evaluation of Synthetic Categorical Data Generation Techniques for Predicting Cardiovascular Diseases and Post-Hoc Interpretability of the Risk Factors." Applied Sciences 13, no. 7 (2023): 4119. http://dx.doi.org/10.3390/app13074119.
Jalali, Anahid, Alexander Schindler, Bernhard Haslhofer, and Andreas Rauber. "Machine Learning Interpretability Techniques for Outage Prediction: A Comparative Study." PHM Society European Conference 5, no. 1 (2020): 10. http://dx.doi.org/10.36001/phme.2020.v5i1.1244.
Le, Khanh Giang. "Improving Road Safety: Supervised Machine Learning Analysis of Factors Influencing Crash Severity." Scientific Journal of Silesian University of Technology. Series Transport 127 (June 1, 2025): 129–53. https://doi.org/10.20858/sjsutst.2025.127.8.
Zdravkovic, Milan. "On the global feature importance for interpretable and trustworthy heat demand forecasting." Thermal Science, no. 00 (2025): 48. https://doi.org/10.2298/tsci241223048z.
Gunasekara, Sachini, and Mirka Saarela. "Explainable AI in Education: Techniques and Qualitative Assessment." Applied Sciences 15, no. 3 (2025): 1239. https://doi.org/10.3390/app15031239.
Maddala, Suresh Kumar. "Understanding Explainability in Enterprise AI Models." International Journal of Management Technology 12, no. 1 (2025): 58–68. https://doi.org/10.37745/ijmt.2013/vol12n25868.
Ali, Ali Mohammed Omar. "Explainability in AI: Interpretable Models for Data Science." International Journal for Research in Applied Science and Engineering Technology 13, no. 2 (2025): 766–71. https://doi.org/10.22214/ijraset.2025.66968.
Pillai, Vinayak. "Enhancing the transparency of data and ML models using explainable AI (XAI)." World Journal of Advanced Engineering Technology and Sciences 13, no. 1 (2024): 397–406. http://dx.doi.org/10.30574/wjaets.2024.13.1.0428.
Wang, Zhengguang. "Validation, Robustness, and Accuracy of Perturbation-Based Sensitivity Analysis Methods for Time-Series Deep Learning Models." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 21 (2024): 23768–70. http://dx.doi.org/10.1609/aaai.v38i21.30559.
Chatterjee, Soumick, Arnab Das, Chirag Mandal, et al. "TorchEsegeta: Framework for Interpretability and Explainability of Image-Based Deep Learning Models." Applied Sciences 12, no. 4 (2022): 1834. http://dx.doi.org/10.3390/app12041834.
Boppiniti, Sai Teja. "A Survey on Explainable AI: Techniques and Challenges." International Journal of Innovations in Engineering Research and Technology 7, no. 3 (2020): 57–66. http://dx.doi.org/10.26662/ijiert.v7i3.pp57-66.
Murdoch, W. James, Chandan Singh, Karl Kumbier, Reza Abbasi-Asl, and Bin Yu. "Definitions, methods, and applications in interpretable machine learning." Proceedings of the National Academy of Sciences 116, no. 44 (2019): 22071–80. http://dx.doi.org/10.1073/pnas.1900654116.
Aslam, Nida, Irfan Ullah Khan, Samiha Mirza, et al. "Interpretable Machine Learning Models for Malicious Domains Detection Using Explainable Artificial Intelligence (XAI)." Sustainability 14, no. 12 (2022): 7375. http://dx.doi.org/10.3390/su14127375.
Roscher, R., B. Bohn, M. F. Duarte, and J. Garcke. "Explain It to Me – Facing Remote Sensing Challenges in the Bio- and Geosciences with Explainable Machine Learning." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences V-3-2020 (August 3, 2020): 817–24. http://dx.doi.org/10.5194/isprs-annals-v-3-2020-817-2020.
Tiamiyu, Damilare, Seun Oluwaremilekun Aremu, Igba Emmanuel, Chidimma Judith Ihejirika, Michael Babatunde Adewoye, and Adeshina Akin Ajayi. "Interpretable Data Analytics in Blockchain Networks Using Variational Autoencoders and Model-Agnostic Explanation Techniques for Enhanced Anomaly Detection." International Journal of Scientific Research in Science and Technology 11, no. 6 (2024): 152–83. http://dx.doi.org/10.32628/ijsrst24116170.
Guo, Jiaxing, Zhiyi Tang, Changxing Zhang, Wei Xu, and Yonghong Wu. "An Interpretable Deep Learning Method for Identifying Extreme Events under Faulty Data Interference." Applied Sciences 13, no. 9 (2023): 5659. http://dx.doi.org/10.3390/app13095659.
Methuku, Vijayalaxmi, Sharath Chandra Kondaparthy, and Direesh Reddy Aunugu. "Explainability and Transparency in Artificial Intelligence: Ethical Imperatives and Practical Challenges." International Journal of Electrical, Electronics and Computers 8, no. 3 (2023): 7–12. https://doi.org/10.22161/eec.84.2.
Qian, Wei, Chenxu Zhao, Yangyi Li, Fenglong Ma, Chao Zhang, and Mengdi Huai. "Towards Modeling Uncertainties of Self-Explaining Neural Networks via Conformal Prediction." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 13 (2024): 14651–59. http://dx.doi.org/10.1609/aaai.v38i13.29382.
Okajima, Yuzuru, and Kunihiko Sadamasa. "Deep Neural Networks Constrained by Decision Rules." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 2496–505. http://dx.doi.org/10.1609/aaai.v33i01.33012496.
Rongali, Sateesh Kumar. "Enhancing machine learning models: addressing challenges and future directions." World Journal of Advanced Research and Reviews 25, no. 1 (2025): 1749–53. https://doi.org/10.30574/wjarr.2025.25.1.0190.
Huai, Mengdi, Jinduo Liu, Chenglin Miao, Liuyi Yao, and Aidong Zhang. "Towards Automating Model Explanations with Certified Robustness Guarantees." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (2022): 6935–43. http://dx.doi.org/10.1609/aaai.v36i6.20651.
Xue, Mufan, Xinyu Wu, Jinlong Li, Xuesong Li, and Guoyuan Yang. "A Convolutional Neural Network Interpretable Framework for Human Ventral Visual Pathway Representation." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 6 (2024): 6413–21. http://dx.doi.org/10.1609/aaai.v38i6.28461.
Kumar, Akshi, Shubham Dikshit, and Victor Hugo C. Albuquerque. "Explainable Artificial Intelligence for Sarcasm Detection in Dialogues." Wireless Communications and Mobile Computing 2021 (July 2, 2021): 1–13. http://dx.doi.org/10.1155/2021/2939334.
Fan, Yongxian, Meng Liu, and Guicong Sun. "An interpretable machine learning framework for diagnosis and prognosis of COVID-19." PLOS ONE 18, no. 9 (2023): e0291961. http://dx.doi.org/10.1371/journal.pone.0291961.
Mujo, Ajkuna. "Explainable AI in Credit Scoring: Improving Transparency in Loan Decisions." Journal of Information Systems Engineering and Management 10, no. 27s (2025): 506–15. https://doi.org/10.52783/jisem.v10i27s.4437.