Journal articles on the topic "Model-agnostic Explainability"
Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles
Consult the top 50 journal articles on the topic "Model-agnostic Explainability".
Next to every work in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the publication as a ".pdf" file and read its abstract online, where these are available in the metadata.
Browse journal articles from a wide range of disciplines and compile your bibliography correctly.
Diprose, William K., Nicholas Buist, Ning Hua, Quentin Thurier, George Shand, and Reece Robinson. "Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator." Journal of the American Medical Informatics Association 27, no. 4 (2020): 592–600. http://dx.doi.org/10.1093/jamia/ocz229.
Zafar, Muhammad Rehman, and Naimul Khan. "Deterministic Local Interpretable Model-Agnostic Explanations for Stable Explainability." Machine Learning and Knowledge Extraction 3, no. 3 (2021): 525–41. http://dx.doi.org/10.3390/make3030027.
Topcu, Deniz. "How to explain a machine learning model: HbA1c classification example." Journal of Medicine and Palliative Care 4, no. 2 (2023): 117–25. http://dx.doi.org/10.47582/jompac.1259507.
Ullah, Ihsan, Andre Rios, Vaibhav Gala, and Susan Mckeever. "Explaining Deep Learning Models for Tabular Data Using Layer-Wise Relevance Propagation." Applied Sciences 12, no. 1 (2021): 136. http://dx.doi.org/10.3390/app12010136.
Srinivasu, Parvathaneni Naga, N. Sandhya, Rutvij H. Jhaveri, and Roshani Raut. "From Blackbox to Explainable AI in Healthcare: Existing Tools and Case Studies." Mobile Information Systems 2022 (June 13, 2022): 1–20. http://dx.doi.org/10.1155/2022/8167821.
Lv, Ge, Chen Jason Zhang, and Lei Chen. "HENCE-X: Toward Heterogeneity-Agnostic Multi-Level Explainability for Deep Graph Networks." Proceedings of the VLDB Endowment 16, no. 11 (2023): 2990–3003. http://dx.doi.org/10.14778/3611479.3611503.
Fauvel, Kevin, Tao Lin, Véronique Masson, Élisa Fromont, and Alexandre Termier. "XCM: An Explainable Convolutional Neural Network for Multivariate Time Series Classification." Mathematics 9, no. 23 (2021): 3137. http://dx.doi.org/10.3390/math9233137.
Hassan, Fayaz, Jianguo Yu, Zafi Sherhan Syed, Nadeem Ahmed, Mana Saleh Al Reshan, and Asadullah Shaikh. "Achieving model explainability for intrusion detection in VANETs with LIME." PeerJ Computer Science 9 (June 22, 2023): e1440. http://dx.doi.org/10.7717/peerj-cs.1440.
Vieira, Carla Piazzon Ramos, and Luciano Antonio Digiampietri. "A study about Explainable Artificial Intelligence: using decision tree to explain SVM." Revista Brasileira de Computação Aplicada 12, no. 1 (2020): 113–21. http://dx.doi.org/10.5335/rbca.v12i1.10247.
Nguyen, Hung Viet, and Haewon Byeon. "Prediction of Out-of-Hospital Cardiac Arrest Survival Outcomes Using a Hybrid Agnostic Explanation TabNet Model." Mathematics 11, no. 9 (2023): 2030. http://dx.doi.org/10.3390/math11092030.
Szepannek, Gero, and Karsten Lübke. "How much do we see? On the explainability of partial dependence plots for credit risk scoring." Argumenta Oeconomica 2023, no. 2 (2023): 137–50. http://dx.doi.org/10.15611/aoe.2023.1.07.
Sovrano, Francesco, Salvatore Sapienza, Monica Palmirani, and Fabio Vitali. "Metrics, Explainability and the European AI Act Proposal." J 5, no. 1 (2022): 126–38. http://dx.doi.org/10.3390/j5010010.
Kaplun, Dmitry, Alexander Krasichkov, Petr Chetyrbok, Nikolay Oleinikov, Anupam Garg, and Husanbir Singh Pannu. "Cancer Cell Profiling Using Image Moments and Neural Networks with Model Agnostic Explainability: A Case Study of Breast Cancer Histopathological (BreakHis) Database." Mathematics 9, no. 20 (2021): 2616. http://dx.doi.org/10.3390/math9202616.
Ibrahim, Muhammad Amien, Samsul Arifin, I. Gusti Agung Anom Yudistira, et al. "An Explainable AI Model for Hate Speech Detection on Indonesian Twitter." CommIT (Communication and Information Technology) Journal 16, no. 2 (2022): 175–82. http://dx.doi.org/10.21512/commit.v16i2.8343.
Manikis, Georgios C., Georgios S. Ioannidis, Loizos Siakallis, et al. "Multicenter DSC–MRI-Based Radiomics Predict IDH Mutation in Gliomas." Cancers 13, no. 16 (2021): 3965. http://dx.doi.org/10.3390/cancers13163965.
Oubelaid, Adel, Abdelhameed Ibrahim, and Ahmed M. Elshewey. "Bridging the Gap: An Explainable Methodology for Customer Churn Prediction in Supply Chain Management." Journal of Artificial Intelligence and Metaheuristics 4, no. 1 (2023): 16–23. http://dx.doi.org/10.54216/jaim.040102.
Sathyan, Anoop, Abraham Itzhak Weinberg, and Kelly Cohen. "Interpretable AI for bio-medical applications." Complex Engineering Systems 2, no. 4 (2022): 18. http://dx.doi.org/10.20517/ces.2022.41.
Ahmed, Md Sabbir, Md Tasin Tazwar, Haseen Khan, et al. "Yield Response of Different Rice Ecotypes to Meteorological, Agro-Chemical, and Soil Physiographic Factors for Interpretable Precision Agriculture Using Extreme Gradient Boosting and Support Vector Regression." Complexity 2022 (September 19, 2022): 1–20. http://dx.doi.org/10.1155/2022/5305353.
Başağaoğlu, Hakan, Debaditya Chakraborty, Cesar Do Lago, et al. "A Review on Interpretable and Explainable Artificial Intelligence in Hydroclimatic Applications." Water 14, no. 8 (2022): 1230. http://dx.doi.org/10.3390/w14081230.
Mehta, Harshkumar, and Kalpdrum Passi. "Social Media Hate Speech Detection Using Explainable Artificial Intelligence (XAI)." Algorithms 15, no. 8 (2022): 291. http://dx.doi.org/10.3390/a15080291.
Lu, Haohui, and Shahadat Uddin. "Explainable Stacking-Based Model for Predicting Hospital Readmission for Diabetic Patients." Information 13, no. 9 (2022): 436. http://dx.doi.org/10.3390/info13090436.
Abdullah, Talal A. A., Mohd Soperi Mohd Zahid, Waleed Ali, and Shahab Ul Hassan. "B-LIME: An Improvement of LIME for Interpretable Deep Learning Classification of Cardiac Arrhythmia from ECG Signals." Processes 11, no. 2 (2023): 595. http://dx.doi.org/10.3390/pr11020595.
Merone, Mario, Alessandro Graziosi, Valerio Lapadula, Lorenzo Petrosino, Onorato d’Angelis, and Luca Vollero. "A Practical Approach to the Analysis and Optimization of Neural Networks on Embedded Systems." Sensors 22, no. 20 (2022): 7807. http://dx.doi.org/10.3390/s22207807.
Kim, Jaehun. "Increasing trust in complex machine learning systems." ACM SIGIR Forum 55, no. 1 (2021): 1–3. http://dx.doi.org/10.1145/3476415.3476435.
Du, Yuhan, Anthony R. Rafferty, Fionnuala M. McAuliffe, John Mehegan, and Catherine Mooney. "Towards an explainable clinical decision support system for large-for-gestational-age births." PLOS ONE 18, no. 2 (2023): e0281821. http://dx.doi.org/10.1371/journal.pone.0281821.
Antoniadi, Anna Markella, Yuhan Du, Yasmine Guendouz, et al. "Current Challenges and Future Opportunities for XAI in Machine Learning-Based Clinical Decision Support Systems: A Systematic Review." Applied Sciences 11, no. 11 (2021): 5088. http://dx.doi.org/10.3390/app11115088.
Kim, Kipyo, Hyeonsik Yang, Jinyeong Yi, et al. "Real-Time Clinical Decision Support Based on Recurrent Neural Networks for In-Hospital Acute Kidney Injury: External Validation and Model Interpretation." Journal of Medical Internet Research 23, no. 4 (2021): e24120. http://dx.doi.org/10.2196/24120.
Abir, Wahidul Hasan, Md Fahim Uddin, Faria Rahman Khanam, et al. "Explainable AI in Diagnosing and Anticipating Leukemia Using Transfer Learning Method." Computational Intelligence and Neuroscience 2022 (April 27, 2022): 1–14. http://dx.doi.org/10.1155/2022/5140148.
Wikle, Christopher K., Abhirup Datta, Bhava Vyasa Hari, et al. "An illustration of model agnostic explainability methods applied to environmental data." Environmetrics, October 25, 2022. http://dx.doi.org/10.1002/env.2772.
Xu, Zhichao, Hansi Zeng, Juntao Tan, Zuohui Fu, Yongfeng Zhang, and Qingyao Ai. "A Reusable Model-agnostic Framework for Faithfully Explainable Recommendation and System Scrutability." ACM Transactions on Information Systems, June 18, 2023. http://dx.doi.org/10.1145/3605357.
Joyce, Dan W., Andrey Kormilitzin, Katharine A. Smith, and Andrea Cipriani. "Explainable artificial intelligence for mental health through transparency and interpretability for understandability." npj Digital Medicine 6, no. 1 (2023). http://dx.doi.org/10.1038/s41746-023-00751-9.
Nakashima, Heitor Hoffman, Daielly Mantovani, and Celso Machado Junior. "Users’ trust in black-box machine learning algorithms." Revista de Gestão, October 25, 2022. http://dx.doi.org/10.1108/rege-06-2022-0100.
Szepannek, Gero, and Karsten Lübke. "Explaining Artificial Intelligence with Care." KI - Künstliche Intelligenz, May 16, 2022. http://dx.doi.org/10.1007/s13218-022-00764-8.
Sharma, Jeetesh, Murari Lal Mittal, Gunjan Soni, and Arvind Keprate. "Explainable Artificial Intelligence (XAI) Approaches in Predictive Maintenance: A Review." Recent Patents on Engineering 18 (April 17, 2023). http://dx.doi.org/10.2174/1872212118666230417084231.
Szczepański, Mateusz, Marek Pawlicki, Rafał Kozik, and Michał Choraś. "New explainability method for BERT-based model in fake news detection." Scientific Reports 11, no. 1 (2021). http://dx.doi.org/10.1038/s41598-021-03100-6.
Öztoprak, Samet, and Zeynep Orman. "A New Model-Agnostic Method and Implementation for Explaining the Prediction on Finance Data." European Journal of Science and Technology, June 29, 2022. http://dx.doi.org/10.31590/ejosat.1079145.
Bachoc, François, Fabrice Gamboa, Max Halford, Jean-Michel Loubes, and Laurent Risser. "Explaining machine learning models using entropic variable projection." Information and Inference: A Journal of the IMA 12, no. 3 (2023). http://dx.doi.org/10.1093/imaiai/iaad010.
Loveleen, Gaur, Bhandari Mohan, Bhadwal Singh Shikhar, Jhanjhi Nz, Mohammad Shorfuzzaman, and Mehedi Masud. "Explanation-driven HCI Model to Examine the Mini-Mental State for Alzheimer’s Disease." ACM Transactions on Multimedia Computing, Communications, and Applications, April 2022. http://dx.doi.org/10.1145/3527174.
Vilone, Giulia, and Luca Longo. "A Quantitative Evaluation of Global, Rule-Based Explanations of Post-Hoc, Model Agnostic Methods." Frontiers in Artificial Intelligence 4 (November 3, 2021). http://dx.doi.org/10.3389/frai.2021.717899.
Alabi, Rasheed Omobolaji, Mohammed Elmusrati, Ilmo Leivo, Alhadi Almangush, and Antti A. Mäkitie. "Machine learning explainability in nasopharyngeal cancer survival using LIME and SHAP." Scientific Reports 13, no. 1 (2023). http://dx.doi.org/10.1038/s41598-023-35795-0.
Bogdanova, Anna, Akira Imakura, and Tetsuya Sakurai. "DC-SHAP Method for Consistent Explainability in Privacy-Preserving Distributed Machine Learning." Human-Centric Intelligent Systems, July 6, 2023. http://dx.doi.org/10.1007/s44230-023-00032-4.
Zini, Julia El, and Mariette Awad. "On the Explainability of Natural Language Processing Deep Models." ACM Computing Surveys, July 19, 2022. http://dx.doi.org/10.1145/3529755.
Esam Noori, Worood, and A. S. Albahri. "Towards Trustworthy Myopia Detection: Integration Methodology of Deep Learning Approach, XAI Visualization, and User Interface System." Applied Data Science and Analysis, February 23, 2023, 1–15. http://dx.doi.org/10.58496/adsa/2023/001.
Filho, Renato Miranda, Anísio M. Lacerda, and Gisele L. Pappa. "Explainable regression via prototypes." ACM Transactions on Evolutionary Learning and Optimization, December 15, 2022. http://dx.doi.org/10.1145/3576903.
Ahmed, Zia U., Kang Sun, Michael Shelly, and Lina Mu. "Explainable artificial intelligence (XAI) for exploring spatial variability of lung and bronchus cancer (LBC) mortality rates in the contiguous USA." Scientific Reports 11, no. 1 (2021). http://dx.doi.org/10.1038/s41598-021-03198-8.
Chen, Tao, Meng Song, Hongxun Hui, and Huan Long. "Battery Electrode Mass Loading Prognostics and Analysis for Lithium-Ion Battery–Based Energy Storage Systems." Frontiers in Energy Research 9 (October 5, 2021). http://dx.doi.org/10.3389/fenrg.2021.754317.
Javed, Abdul Rehman, Habib Ullah Khan, Mohammad Kamel Bader Alomari, et al. "Toward explainable AI-empowered cognitive health assessment." Frontiers in Public Health 11 (March 9, 2023). http://dx.doi.org/10.3389/fpubh.2023.1024195.
Mustafa, Ahmad, Klaas Koster, and Ghassan AlRegib. "Explainable Machine Learning for Hydrocarbon Risk Assessment." GEOPHYSICS, July 13, 2023, 1–52. http://dx.doi.org/10.1190/geo2022-0594.1.
Yang, Darrion Bo-Yun, Alexander Smith, Emily J. Smith, et al. "The State of Machine Learning in Outcomes Prediction of Transsphenoidal Surgery: A Systematic Review." Journal of Neurological Surgery Part B: Skull Base, September 12, 2022. http://dx.doi.org/10.1055/a-1941-3618.