A selection of scholarly literature on the topic "Interpretable methods"

Format your source according to APA, MLA, Chicago, Harvard, and other citation styles

Consult the lists of relevant articles, books, dissertations, theses, and other scholarly sources on the topic "Interpretable methods".

Next to every work in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its online abstract, if these details are available in the source's metadata.

Journal articles on the topic "Interpretable methods"

1. Topin, Nicholay, Stephanie Milani, Fei Fang, and Manuela Veloso. "Iterative Bounding MDPs: Learning Interpretable Policies via Non-Interpretable Methods." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 11 (2021): 9923–31. http://dx.doi.org/10.1609/aaai.v35i11.17192.

Abstract:
Current work in explainable reinforcement learning generally produces policies in the form of a decision tree over the state space. Such policies can be used for formal safety verification, agent behavior prediction, and manual inspection of important features. However, existing approaches fit a decision tree after training or use a custom learning procedure which is not compatible with new learning techniques, such as those which use neural networks. To address this limitation, we propose a novel Markov Decision Process (MDP) type for learning decision tree policies: Iterative Bounding MDPs (IBMDPs). …
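
The post-hoc baseline this abstract contrasts with, fitting a decision tree to an already-trained policy, is easy to sketch. Below is a minimal, illustrative distillation loop in Python; the interfaces `policy` and `sample_state` are hypothetical stand-ins, and this is not the IBMDP method the paper proposes:

# Distill a trained, non-interpretable policy into a shallow decision tree
# by imitating its actions on sampled states (post-hoc surrogate baseline;
# `policy` and `sample_state` are assumed, hypothetical interfaces).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def distill_policy(policy, sample_state, n_samples=10_000, max_depth=4):
    # Label states from the agent's visitation distribution with the
    # actions the trained policy takes there.
    states = np.array([sample_state() for _ in range(n_samples)])
    actions = np.array([policy(s) for s in states])
    # A shallow tree serves as the interpretable surrogate policy.
    return DecisionTreeClassifier(max_depth=max_depth).fit(states, actions)

The surrogate's agreement with the original policy on held-out states bounds how faithful the "interpretable" policy actually is, which is the gap that learning the tree directly, as the paper sets out to do, aims to close.
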
2. Kataoka, Makoto. "Computer-Interpretable Description of Construction Methods." AIJ Journal of Technology and Design 13, no. 25 (2007): 277–80. http://dx.doi.org/10.3130/aijt.13.277.

3. Murdoch, W. James, Chandan Singh, Karl Kumbier, Reza Abbasi-Asl, and Bin Yu. "Definitions, methods, and applications in interpretable machine learning." Proceedings of the National Academy of Sciences 116, no. 44 (2019): 22071–80. http://dx.doi.org/10.1073/pnas.1900654116.

Abstract:
Machine-learning models have demonstrated great success in learning complex patterns that enable them to make predictions about unobserved data. In addition to using models for prediction, the ability to interpret what a model has learned is receiving an increasing amount of attention. However, this increased focus has led to considerable confusion about the notion of interpretability. In particular, it is unclear how the wide array of proposed interpretation methods are related and what common concepts can be used to evaluate them. We aim to address these concerns by defining interpretability…
4. Alangari, Nourah, Mohamed El Bachir Menai, Hassan Mathkour, and Ibrahim Almosallam. "Exploring Evaluation Methods for Interpretable Machine Learning: A Survey." Information 14, no. 8 (2023): 469. http://dx.doi.org/10.3390/info14080469.

Abstract:
In recent times, the progress of machine learning has facilitated the development of decision support systems that exhibit predictive accuracy, surpassing human capabilities in certain scenarios. However, this improvement has come at the cost of increased model complexity, rendering them black-box models that obscure their internal logic from users. These black boxes are primarily designed to optimize predictive accuracy, limiting their applicability in critical domains such as medicine, law, and finance, where both accuracy and interpretability are crucial factors for model acceptance. …
5. Kenesei, Tamás, and János Abonyi. "Interpretable support vector regression." Artificial Intelligence Research 1, no. 2 (2012): 11. http://dx.doi.org/10.5430/air.v1n2p11.

Abstract:
This paper deals with transforming support vector regression (SVR) models into fuzzy systems (FIS). It is highlighted that trained support-vector-based models can be used to construct fuzzy rule-based regression models. However, the transformed support vector model does not automatically result in an interpretable fuzzy model: training a support vector model yields a complex rule base, where the number of rules is approximately 40–60% of the number of training samples, so reducing the fuzzy model initialized from the support vector model is an essential task. …
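
The quoted rule-count figure reflects that, in a kernel SVR, each support vector typically induces one fuzzy rule after the transformation. A toy Python experiment (illustrative only; exact fractions vary with C, epsilon, and the kernel) shows how large the raw rule base gets:

# Count support vectors on a toy regression problem; each one would
# become a rule in the transformed fuzzy system.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(500, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(500)

model = SVR(kernel="rbf", C=1.0, epsilon=0.1).fit(X, y)
frac = len(model.support_) / len(X)
print(f"support vectors: {len(model.support_)} ({frac:.0%} of the training set)")

Since every support vector carries over as a rule, rule-base reduction is the essential step the authors focus on.
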
6. Ye, Zhuyifan, Wenmian Yang, Yilong Yang, and Defang Ouyang. "Interpretable machine learning methods for in vitro pharmaceutical formulation development." Food Frontiers 2, no. 2 (2021): 195–207. http://dx.doi.org/10.1002/fft2.78.

7. Mi, Jian-Xun, An-Di Li, and Li-Fang Zhou. "Review Study of Interpretation Methods for Future Interpretable Machine Learning." IEEE Access 8 (2020): 191969–85. http://dx.doi.org/10.1109/access.2020.3032756.

8. Obermann, Lennart, and Stephan Waack. "Demonstrating non-inferiority of easy interpretable methods for insolvency prediction." Expert Systems with Applications 42, no. 23 (2015): 9117–28. http://dx.doi.org/10.1016/j.eswa.2015.08.009.

9. Assegie, Tsehay Admassu. "Evaluation of the Shapley Additive Explanation Technique for Ensemble Learning Methods." Proceedings of Engineering and Technology Innovation 21 (April 22, 2022): 20–26. http://dx.doi.org/10.46604/peti.2022.9025.

Abstract:
This study aims to explore the effectiveness of the Shapley additive explanation (SHAP) technique in developing a transparent, interpretable, and explainable ensemble method for heart disease diagnosis using random forest algorithms. Firstly, the features with a high impact on heart disease prediction are selected by SHAP from a heart disease dataset of 1,025 records obtained from a publicly available Kaggle data repository. After that, the features which have the greatest influence on heart disease prediction are used to develop an interpretable ensemble learning model to automate the heart disease diagnosis…
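
The two-stage pipeline described here, ranking features by SHAP attributions on a random forest and then retraining on the most influential ones, can be sketched as follows. The function name, the choice of k, and the class-indexing convention are assumptions for illustration; the study itself uses the 1,025-record Kaggle heart disease dataset:

# Rank features by mean |SHAP| attribution on a random forest and return
# the top k (hypothetical helper; names and defaults are ours).
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

def top_k_shap_features(X, y, feature_names, k=5):
    forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    sv = shap.TreeExplainer(forest).shap_values(X)
    # Depending on the shap version, binary-classifier attributions come
    # back as a per-class list or with a trailing class axis.
    sv = sv[1] if isinstance(sv, list) else sv[..., 1]
    ranked = np.argsort(np.abs(sv).mean(axis=0))[::-1][:k]
    return [feature_names[i] for i in ranked]

A second, smaller random forest trained on only the returned features then plays the role of the interpretable ensemble the abstract refers to.
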
10. Bang, Seojin, Pengtao Xie, Heewook Lee, Wei Wu, and Eric Xing. "Explaining A Black-box By Using A Deep Variational Information Bottleneck Approach." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (2021): 11396–404. http://dx.doi.org/10.1609/aaai.v35i13.17358.

Abstract:
Interpretable machine learning has gained much attention recently. Briefness and comprehensiveness are necessary in order to provide a large amount of information concisely when explaining a black-box decision system. However, existing interpretable machine learning methods fail to consider briefness and comprehensiveness simultaneously, leading to redundant explanations. We propose the variational information bottleneck for interpretation, VIBI, a system-agnostic interpretable method that provides a brief but comprehensive explanation. VIBI adopts an information-theoretic principle, the information bottleneck…
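
The principle the abstract breaks off on is the information bottleneck. In its generic form (notation ours, not necessarily the paper's), one seeks a stochastic explanation t of the input x that stays predictive of the black-box output y while remaining compressed:

\max_{p(t \mid x)} \; I(t; y) - \beta \, I(x; t)

where I denotes mutual information and the multiplier \beta trades comprehensiveness (the first term) against briefness (the second), exactly the pair of desiderata the authors name.
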