Journal articles on the topic "Post-hoc interpretability"
Consult the top 50 journal articles for your research on the topic "Post-hoc interpretability".
Explore journal articles across a wide variety of disciplines and organize your bibliography correctly.
Feng, Jiangfan, Yukun Liang, and Lin Li. "Anomaly Detection in Videos Using Two-Stream Autoencoder with Post Hoc Interpretability." Computational Intelligence and Neuroscience 2021 (July 26, 2021): 1–15. http://dx.doi.org/10.1155/2021/7367870.
Sinhamahapatra, Poulami, Suprosanna Shit, Anjany Sekuboyina, et al. "Enhancing Interpretability of Vertebrae Fracture Grading using Human-interpretable Prototypes." Machine Learning for Biomedical Imaging 2, July 2024 (2024): 977–1002. http://dx.doi.org/10.59275/j.melba.2024-258b.
Sarma Borah, Proyash Paban, Devraj Kashyap, Ruhini Aktar Laskar, and Ankur Jyoti Sarmah. "A Comprehensive Study on Explainable AI Using YOLO and Post Hoc Method on Medical Diagnosis." Journal of Physics: Conference Series 2919, no. 1 (2024): 012045. https://doi.org/10.1088/1742-6596/2919/1/012045.
Zhang, Zaixi, Qi Liu, Hao Wang, Chengqiang Lu, and Cheekong Lee. "ProtGNN: Towards Self-Explaining Graph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (2022): 9127–35. http://dx.doi.org/10.1609/aaai.v36i8.20898.
Alfano, Gianvincenzo, Sergio Greco, Domenico Mandaglio, Francesco Parisi, Reza Shahbazian, and Irina Trubitsyna. "Even-if Explanations: Formal Foundations, Priorities and Complexity." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 15 (2025): 15347–55. https://doi.org/10.1609/aaai.v39i15.33684.
Xu, Qian, Wenzhao Xie, Bolin Liao, et al. "Interpretability of Clinical Decision Support Systems Based on Artificial Intelligence from Technological and Medical Perspective: A Systematic Review." Journal of Healthcare Engineering 2023 (February 3, 2023): 1–13. http://dx.doi.org/10.1155/2023/9919269.
Gill, Navdeep, Patrick Hall, Kim Montgomery, and Nicholas Schmidt. "A Responsible Machine Learning Workflow with Focus on Interpretable Models, Post-hoc Explanation, and Discrimination Testing." Information 11, no. 3 (2020): 137. http://dx.doi.org/10.3390/info11030137.
Kulaklıoğlu, Duru. "Explainable AI: Enhancing Interpretability of Machine Learning Models." Human Computer Interaction 8, no. 1 (2024): 91. https://doi.org/10.62802/z3pde490.
Acun, Cagla, and Olfa Nasraoui. "Pre Hoc and Co Hoc Explainability: Frameworks for Integrating Interpretability into Machine Learning Training for Enhanced Transparency and Performance." Applied Sciences 15, no. 13 (2025): 7544. https://doi.org/10.3390/app15137544.
Yousufi Aqmal, Shahid, and Fermle Erdely S. "Enhancing Nonparametric Tests: Insights for Computational Intelligence and Data Mining." Researcher Academy Innovation Data Analysis 1, no. 3 (2024): 214–26. https://doi.org/10.69725/raida.v1i3.168.
Arjunan, Gopalakrishnan. "Implementing Explainable AI in Healthcare: Techniques for Interpretable Machine Learning Models in Clinical Decision-Making." International Journal of Scientific Research and Management (IJSRM) 9, no. 05 (2021): 597–603. http://dx.doi.org/10.18535/ijsrm/v9i05.ec03.
Khiem, Phan Xuan, Zurida B. Batchaeva, and Liana K. Katchieva. "Hybridization of Machine Learning and Statistics Methods to Improve Model Interpretability." Ekonomika i upravlenie: problemy, resheniya 12/15, no. 153 (2024): 214–20. https://doi.org/10.36871/ek.up.p.r.2024.12.15.025.
Marconato, Emanuele, Andrea Passerini, and Stefano Teso. "Interpretability Is in the Mind of the Beholder: A Causal Framework for Human-Interpretable Representation Learning." Entropy 25, no. 12 (2023): 1574. http://dx.doi.org/10.3390/e25121574.
Zhang, Xiaopu, Wubing Miao, and Guodong Liu. "Explainable Data Mining Framework of Identifying Root Causes of Rocket Engine Anomalies Based on Knowledge and Physics-Informed Feature Selection." Machines 13, no. 8 (2025): 640. https://doi.org/10.3390/machines13080640.
Biryukov, D. N., and A. S. Dudkin. "Explainability and interpretability are important aspects in ensuring the security of decisions made by intelligent systems (review article)." Scientific and Technical Journal of Information Technologies, Mechanics and Optics 25, no. 3 (2025): 373–86. https://doi.org/10.17586/2226-1494-2025-25-3-373-386.
Gaurav, Kashyap. "Explainable AI (XAI): Methods and Techniques to Make Deep Learning Models More Interpretable and Their Real-World Implications." International Journal of Innovative Research in Engineering & Multidisciplinary Physical Sciences 11, no. 4 (2023): 1–7. https://doi.org/10.5281/zenodo.14382747.
Larriva-Novo, Xavier, Luis Pérez Miguel, Victor A. Villagra, Manuel Álvarez-Campana, Carmen Sanchez-Zas, and Óscar Jover. "Post-Hoc Categorization Based on Explainable AI and Reinforcement Learning for Improved Intrusion Detection." Applied Sciences 14, no. 24 (2024): 11511. https://doi.org/10.3390/app142411511.
Degtiarova, Ganna, Fran Mikulicic, Jan Vontobel, et al. "Post-hoc motion correction for coronary computed tomography angiography without additional radiation dose - Improved image quality and interpretability for “free”." Imaging 14, no. 2 (2022): 82–88. http://dx.doi.org/10.1556/1647.2022.00060.
Mohorčič, Domen, and David Ocepek. "[Re] Hierarchical Shrinkage: Improving the Accuracy and Interpretability of Tree-Based Methods." ReScience C 9, no. 2 (2023): #19. https://doi.org/10.5281/zenodo.8173696.
Jishnu, Setia. "Explainable AI: Methods and Applications." Explainable AI: Methods and Applications 8, no. 10 (2023): 5. https://doi.org/10.5281/zenodo.10021461.
Lao, Danning, Qi Liu, Jiazi Bu, Junchi Yan, and Wei Shen. "ViTree: Single-Path Neural Tree for Step-Wise Interpretable Fine-Grained Visual Categorization." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (2024): 2866–73. http://dx.doi.org/10.1609/aaai.v38i3.28067.
Zhou, Pei-Yuan, Amane Takeuchi, Fernando Martinez-Lopez, Malikeh Ehghaghi, Andrew K. C. Wong, and En-Shiun Annie Lee. "Benchmarking Interpretability in Healthcare Using Pattern Discovery and Disentanglement." Bioengineering 12, no. 3 (2025): 308. https://doi.org/10.3390/bioengineering12030308.
Ganguly, Rita, Dharmpal Singh, and Rajesh Bose. "The next frontier of explainable artificial intelligence (XAI) in healthcare services: A study on PIMA diabetes dataset." Scientific Temper 16, no. 05 (2025): 4165–70. https://doi.org/10.58414/scientifictemper.2025.16.5.01.
Naresh Vurukonda. "A Novel Framework for Inherently Interpretable Deep Neural Networks Using Attention-Based Feature Attribution in High-Dimensional Tabular Data." Journal of Information Systems Engineering and Management 10, no. 50s (2025): 599–604. https://doi.org/10.52783/jisem.v10i50s.10290.
Naresh Vurukonda. "A Novel Framework for Inherently Interpretable Deep Neural Networks Using Attention-Based Feature Attribution in High-Dimensional Tabular Data." Journal of Information Systems Engineering and Management 10, no. 51s (2025): 1076–81. https://doi.org/10.52783/jisem.v10i51s.10626.
Ozdemir, Olcar. "Explainable AI (XAI) in Healthcare: Bridging the Gap between Accuracy and Interpretability." Journal of Science, Technology and Engineering Research 1, no. 1 (2024): 32–44. https://doi.org/10.64206/0z78ev10.
García-Vicente, Clara, David Chushig-Muzo, Inmaculada Mora-Jiménez, et al. "Evaluation of Synthetic Categorical Data Generation Techniques for Predicting Cardiovascular Diseases and Post-Hoc Interpretability of the Risk Factors." Applied Sciences 13, no. 7 (2023): 4119. http://dx.doi.org/10.3390/app13074119.
Jalali, Anahid, Alexander Schindler, Bernhard Haslhofer, and Andreas Rauber. "Machine Learning Interpretability Techniques for Outage Prediction: A Comparative Study." PHM Society European Conference 5, no. 1 (2020): 10. http://dx.doi.org/10.36001/phme.2020.v5i1.1244.
Le, Khanh Giang. "Improving Road Safety: Supervised Machine Learning Analysis of Factors Influencing Crash Severity." Scientific Journal of Silesian University of Technology. Series Transport 127 (June 1, 2025): 129–53. https://doi.org/10.20858/sjsutst.2025.127.8.
Zdravkovic, Milan. "On the global feature importance for interpretable and trustworthy heat demand forecasting." Thermal Science, no. 00 (2025): 48. https://doi.org/10.2298/tsci241223048z.
Gunasekara, Sachini, and Mirka Saarela. "Explainable AI in Education: Techniques and Qualitative Assessment." Applied Sciences 15, no. 3 (2025): 1239. https://doi.org/10.3390/app15031239.
Maddala, Suresh Kumar. "Understanding Explainability in Enterprise AI Models." International Journal of Management Technology 12, no. 1 (2025): 58–68. https://doi.org/10.37745/ijmt.2013/vol12n25868.
Ali, Ali Mohammed Omar. "Explainability in AI: Interpretable Models for Data Science." International Journal for Research in Applied Science and Engineering Technology 13, no. 2 (2025): 766–71. https://doi.org/10.22214/ijraset.2025.66968.
Vinayak Pillai. "Enhancing the transparency of data and ml models using explainable AI (XAI)." World Journal of Advanced Engineering Technology and Sciences 13, no. 1 (2024): 397–406. http://dx.doi.org/10.30574/wjaets.2024.13.1.0428.
Wang, Zhengguang. "Validation, Robustness, and Accuracy of Perturbation-Based Sensitivity Analysis Methods for Time-Series Deep Learning Models." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 21 (2024): 23768–70. http://dx.doi.org/10.1609/aaai.v38i21.30559.
Chatterjee, Soumick, Arnab Das, Chirag Mandal, et al. "TorchEsegeta: Framework for Interpretability and Explainability of Image-Based Deep Learning Models." Applied Sciences 12, no. 4 (2022): 1834. http://dx.doi.org/10.3390/app12041834.
Sai Teja Boppiniti. "A Survey on Explainable AI: Techniques and Challenges." International Journal of Innovations in Engineering Research and Technology 7, no. 3 (2020): 57–66. http://dx.doi.org/10.26662/ijiert.v7i3.pp57-66.
Murdoch, W. James, Chandan Singh, Karl Kumbier, Reza Abbasi-Asl, and Bin Yu. "Definitions, methods, and applications in interpretable machine learning." Proceedings of the National Academy of Sciences 116, no. 44 (2019): 22071–80. http://dx.doi.org/10.1073/pnas.1900654116.
Aslam, Nida, Irfan Ullah Khan, Samiha Mirza, et al. "Interpretable Machine Learning Models for Malicious Domains Detection Using Explainable Artificial Intelligence (XAI)." Sustainability 14, no. 12 (2022): 7375. http://dx.doi.org/10.3390/su14127375.
Roscher, R., B. Bohn, M. F. Duarte, and J. Garcke. "Explain It to Me – Facing Remote Sensing Challenges in the Bio- and Geosciences with Explainable Machine Learning." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences V-3-2020 (August 3, 2020): 817–24. http://dx.doi.org/10.5194/isprs-annals-v-3-2020-817-2020.
Damilare Tiamiyu, Seun Oluwaremilekun Aremu, Igba Emmanuel, Chidimma Judith Ihejirika, Michael Babatunde Adewoye, and Adeshina Akin Ajayi. "Interpretable Data Analytics in Blockchain Networks Using Variational Autoencoders and Model-Agnostic Explanation Techniques for Enhanced Anomaly Detection." International Journal of Scientific Research in Science and Technology 11, no. 6 (2024): 152–83. http://dx.doi.org/10.32628/ijsrst24116170.
Guo, Jiaxing, Zhiyi Tang, Changxing Zhang, Wei Xu, and Yonghong Wu. "An Interpretable Deep Learning Method for Identifying Extreme Events under Faulty Data Interference." Applied Sciences 13, no. 9 (2023): 5659. http://dx.doi.org/10.3390/app13095659.
Methuku, Vijayalaxmi, Sharath Chandra Kondaparthy, and Direesh Reddy Aunugu. "Explainability and Transparency in Artificial Intelligence: Ethical Imperatives and Practical Challenges." International Journal of Electrical, Electronics and Computers 8, no. 3 (2023): 7–12. https://doi.org/10.22161/eec.84.2.
Qian, Wei, Chenxu Zhao, Yangyi Li, Fenglong Ma, Chao Zhang, and Mengdi Huai. "Towards Modeling Uncertainties of Self-Explaining Neural Networks via Conformal Prediction." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 13 (2024): 14651–59. http://dx.doi.org/10.1609/aaai.v38i13.29382.
Okajima, Yuzuru, and Kunihiko Sadamasa. "Deep Neural Networks Constrained by Decision Rules." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 2496–505. http://dx.doi.org/10.1609/aaai.v33i01.33012496.
Sateesh Kumar Rongali. "Enhancing machine learning models: addressing challenges and future directions." World Journal of Advanced Research and Reviews 25, no. 1 (2025): 1749–53. https://doi.org/10.30574/wjarr.2025.25.1.0190.
Huai, Mengdi, Jinduo Liu, Chenglin Miao, Liuyi Yao, and Aidong Zhang. "Towards Automating Model Explanations with Certified Robustness Guarantees." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (2022): 6935–43. http://dx.doi.org/10.1609/aaai.v36i6.20651.
Xue, Mufan, Xinyu Wu, Jinlong Li, Xuesong Li, and Guoyuan Yang. "A Convolutional Neural Network Interpretable Framework for Human Ventral Visual Pathway Representation." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 6 (2024): 6413–21. http://dx.doi.org/10.1609/aaai.v38i6.28461.
Kumar, Akshi, Shubham Dikshit, and Victor Hugo C. Albuquerque. "Explainable Artificial Intelligence for Sarcasm Detection in Dialogues." Wireless Communications and Mobile Computing 2021 (July 2, 2021): 1–13. http://dx.doi.org/10.1155/2021/2939334.
Fan, Yongxian, Meng Liu, and Guicong Sun. "An interpretable machine learning framework for diagnosis and prognosis of COVID-19." PLOS ONE 18, no. 9 (2023): e0291961. http://dx.doi.org/10.1371/journal.pone.0291961.