To view other types of publications on this topic, follow the link: Interpretable methods.

Journal articles on the topic "Interpretable methods"

Format your source in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 journal articles for your research on the topic "Interpretable methods."

Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read its abstract online, when these are available in the metadata.

Browse journal articles across many disciplines and compile your bibliography correctly.

1

Topin, Nicholay, Stephanie Milani, Fei Fang, and Manuela Veloso. "Iterative Bounding MDPs: Learning Interpretable Policies via Non-Interpretable Methods." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 11 (2021): 9923–31. http://dx.doi.org/10.1609/aaai.v35i11.17192.

Abstract:
Current work in explainable reinforcement learning generally produces policies in the form of a decision tree over the state space. Such policies can be used for formal safety verification, agent behavior prediction, and manual inspection of important features. However, existing approaches fit a decision tree after training or use a custom learning procedure which is not compatible with new learning techniques, such as those which use neural networks. To address this limitation, we propose a novel Markov Decision Process (MDP) type for learning decision tree policies: Iterative Bounding MDPs (
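As a concrete illustration of the post-hoc baseline this abstract contrasts against (fitting a decision tree after training), here is a minimal sketch that distills an already-trained policy into a tree by behavioral cloning on sampled trajectories. It is not the paper's Iterative Bounding MDP method; `env` (an old-style Gym environment) and `trained_policy` are assumed placeholders.

```python
# Sketch of the post-hoc baseline only (not the paper's IBMDP method):
# clone a trained black-box policy into a shallow, human-readable decision tree.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

def distill_policy_to_tree(env, trained_policy, n_episodes=50, max_depth=4):
    states, actions = [], []
    for _ in range(n_episodes):
        obs, done = env.reset(), False          # old Gym API assumed (reset returns obs)
        while not done:
            act = trained_policy(obs)           # query the black-box policy
            states.append(obs)
            actions.append(act)
            obs, reward, done, info = env.step(act)
    tree = DecisionTreeClassifier(max_depth=max_depth)
    tree.fit(np.array(states), np.array(actions))
    print(export_text(tree))                    # human-readable if/else policy rules
    return tree
```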
2

KATAOKA, Makoto. "COMPUTER-INTERPRETABLE DESCRIPTION OF CONSTRUCTION METHODS." AIJ Journal of Technology and Design 13, no. 25 (2007): 277–80. http://dx.doi.org/10.3130/aijt.13.277.

3

Murdoch, W. James, Chandan Singh, Karl Kumbier, Reza Abbasi-Asl, and Bin Yu. "Definitions, methods, and applications in interpretable machine learning." Proceedings of the National Academy of Sciences 116, no. 44 (2019): 22071–80. http://dx.doi.org/10.1073/pnas.1900654116.

Abstract:
Machine-learning models have demonstrated great success in learning complex patterns that enable them to make predictions about unobserved data. In addition to using models for prediction, the ability to interpret what a model has learned is receiving an increasing amount of attention. However, this increased focus has led to considerable confusion about the notion of interpretability. In particular, it is unclear how the wide array of proposed interpretation methods are related and what common concepts can be used to evaluate them. We aim to address these concerns by defining interpretability
4

Alangari, Nourah, Mohamed El Bachir Menai, Hassan Mathkour, and Ibrahim Almosallam. "Exploring Evaluation Methods for Interpretable Machine Learning: A Survey." Information 14, no. 8 (2023): 469. http://dx.doi.org/10.3390/info14080469.

Abstract:
In recent times, the progress of machine learning has facilitated the development of decision support systems that exhibit predictive accuracy, surpassing human capabilities in certain scenarios. However, this improvement has come at the cost of increased model complexity, rendering them black-box models that obscure their internal logic from users. These black boxes are primarily designed to optimize predictive accuracy, limiting their applicability in critical domains such as medicine, law, and finance, where both accuracy and interpretability are crucial factors for model acceptance. Despit
5

Kenesei, Tamás, and János Abonyi. "Interpretable support vector regression." Artificial Intelligence Research 1, no. 2 (2012): 11. http://dx.doi.org/10.5430/air.v1n2p11.

Abstract:
This paper deals with transforming support vector regression (SVR) models into fuzzy systems (FIS). It is highlighted that trained support-vector-based models can be used for the construction of fuzzy rule-based regression models. However, the transformed support vector model does not automatically result in an interpretable fuzzy model. Training a support vector model results in a complex rule base, where the number of rules is approximately 40–60% of the number of training data; therefore, reduction of the fuzzy model initialized from the support vector model is an essential task. For this purpos
6

Ye, Zhuyifan, Wenmian Yang, Yilong Yang, and Defang Ouyang. "Interpretable machine learning methods for in vitro pharmaceutical formulation development." Food Frontiers 2, no. 2 (2021): 195–207. http://dx.doi.org/10.1002/fft2.78.

7

Mi, Jian-Xun, An-Di Li, and Li-Fang Zhou. "Review Study of Interpretation Methods for Future Interpretable Machine Learning." IEEE Access 8 (2020): 191969–85. http://dx.doi.org/10.1109/access.2020.3032756.

8

Obermann, Lennart, and Stephan Waack. "Demonstrating non-inferiority of easy interpretable methods for insolvency prediction." Expert Systems with Applications 42, no. 23 (2015): 9117–28. http://dx.doi.org/10.1016/j.eswa.2015.08.009.

9

Assegie, Tsehay Admassu. "Evaluation of the Shapley Additive Explanation Technique for Ensemble Learning Methods." Proceedings of Engineering and Technology Innovation 21 (April 22, 2022): 20–26. http://dx.doi.org/10.46604/peti.2022.9025.

Abstract:
This study aims to explore the effectiveness of the Shapley additive explanation (SHAP) technique in developing a transparent, interpretable, and explainable ensemble method for heart disease diagnosis using random forest algorithms. Firstly, the features with high impact on the heart disease prediction are selected by SHAP using 1025 heart disease datasets, obtained from a publicly available Kaggle data repository. After that, the features which have the greatest influence on the heart disease prediction are used to develop an interpretable ensemble learning model to automate the heart diseas
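The workflow described in this abstract (random forest plus SHAP feature ranking on tabular heart-disease data) follows a common pattern; a minimal sketch is given below, where `heart.csv` and its `target` column are assumed placeholders rather than the authors' exact dataset or code.

```python
# Minimal sketch of the random-forest + SHAP workflow described above.
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("heart.csv")                              # placeholder tabular dataset
X, y = df.drop(columns=["target"]), df["target"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)

explainer = shap.TreeExplainer(model)          # exact, fast SHAP values for tree ensembles
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test)         # global ranking of feature impact
```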
10

Bang, Seojin, Pengtao Xie, Heewook Lee, Wei Wu, and Eric Xing. "Explaining A Black-box By Using A Deep Variational Information Bottleneck Approach." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (2021): 11396–404. http://dx.doi.org/10.1609/aaai.v35i13.17358.

Abstract:
Interpretable machine learning has gained much attention recently. Briefness and comprehensiveness are necessary in order to provide a large amount of information concisely when explaining a black-box decision system. However, existing interpretable machine learning methods fail to consider briefness and comprehensiveness simultaneously, leading to redundant explanations. We propose the variational information bottleneck for interpretation, VIBI, a system-agnostic interpretable method that provides a brief but comprehensive explanation. VIBI adopts an information theoretic principle, informati
11

Li, Qiaomei, Rachel Cummings, and Yonatan Mintz. "Optimal Local Explainer Aggregation for Interpretable Prediction." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (2022): 12000–12007. http://dx.doi.org/10.1609/aaai.v36i11.21458.

Abstract:
A key challenge for decision makers when incorporating black box machine learned models into practice is being able to understand the predictions provided by these models. One set of methods proposed to address this challenge is that of training surrogate explainer models which approximate how the more complex model is computing its predictions. Explainer methods are generally classified as either local or global explainers depending on what portion of the data space they are purported to explain. The improved coverage of global explainers usually comes at the expense of explainer fidelity (i.
12

Mahya, Parisa, and Johannes Fürnkranz. "An Empirical Comparison of Interpretable Models to Post-Hoc Explanations." AI 4, no. 2 (2023): 426–36. http://dx.doi.org/10.3390/ai4020023.

Abstract:
Recently, some effort went into explaining intransparent and black-box models, such as deep neural networks or random forests. So-called model-agnostic methods typically approximate the prediction of the intransparent black-box model with an interpretable model, without considering any specifics of the black-box model itself. It is a valid question whether direct learning of interpretable white-box models should not be preferred over post-hoc approximations of intransparent and black-box models. In this paper, we report the results of an empirical study, which compares post-hoc explanations an
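The comparison this abstract reports can be illustrated with a small sketch: train an interpretable white-box model directly, train a black box, then fit a post-hoc surrogate to the black box's predictions and measure its fidelity. The dataset and model choices below are assumptions for illustration, not the study's experimental setup.

```python
# Sketch: direct white-box model vs. post-hoc surrogate of a black-box model.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

white_box = DecisionTreeClassifier(max_depth=4).fit(X_tr, y_tr)       # directly interpretable
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Model-agnostic surrogate: approximate the black box's behaviour, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=4).fit(X_tr, black_box.predict(X_tr))

print("white-box accuracy :", accuracy_score(y_te, white_box.predict(X_te)))
print("black-box accuracy :", accuracy_score(y_te, black_box.predict(X_te)))
print("surrogate fidelity :", accuracy_score(black_box.predict(X_te), surrogate.predict(X_te)))
```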
13

Lee, Franklin Langlang, Jaehong Park, Sushmit Goyal, et al. "Comparison of Machine Learning Methods towards Developing Interpretable Polyamide Property Prediction." Polymers 13, no. 21 (2021): 3653. http://dx.doi.org/10.3390/polym13213653.

Abstract:
Polyamides are often used for their superior thermal, mechanical, and chemical properties. They form a diverse set of materials that have a large variation in properties between linear to aromatic compounds, which renders the traditional quantitative structure–property relationship (QSPR) challenging. We use extended connectivity fingerprints (ECFP) and traditional QSPR fingerprints to develop machine learning models to perform high fidelity prediction of glass transition temperature (Tg), melting temperature (Tm), density (ρ), and tensile modulus (E). The non-linear model using random forest
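A rough sketch of the fingerprint-plus-random-forest pipeline mentioned in this abstract is shown below, assuming RDKit for ECFP (Morgan) fingerprints; the SMILES strings and Tg values are invented placeholders, not the authors' polyamide data.

```python
# Rough sketch (not the authors' pipeline): ECFP/Morgan fingerprints + random forest.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestRegressor

def ecfp(smiles, radius=2, n_bits=2048):
    mol = Chem.MolFromSmiles(smiles)
    return np.array(AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits))

smiles_list = ["CC(=O)Nc1ccccc1", "c1ccccc1O"]   # placeholder monomer SMILES
tg_values = [400.0, 350.0]                       # placeholder glass-transition temperatures (K)

X = np.vstack([ecfp(s) for s in smiles_list])
model = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, tg_values)
print(model.feature_importances_.argsort()[-10:])  # most influential fingerprint bits
```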
14

Li, Xiao, Zachary Serlin, Guang Yang, and Calin Belta. "A formal methods approach to interpretable reinforcement learning for robotic planning." Science Robotics 4, no. 37 (2019): eaay6276. http://dx.doi.org/10.1126/scirobotics.aay6276.

Abstract:
Growing interest in reinforcement learning approaches to robotic planning and control raises concerns of predictability and safety of robot behaviors realized solely through learned control policies. In addition, formally defining reward functions for complex tasks is challenging, and faulty rewards are prone to exploitation by the learning agent. Here, we propose a formal methods approach to reinforcement learning that (i) provides a formal specification language that integrates high-level, rich, task specifications with a priori, domain-specific knowledge; (ii) makes the reward generation pr
15

Skirzyński, Julian, Frederic Becker, and Falk Lieder. "Automatic discovery of interpretable planning strategies." Machine Learning 110, no. 9 (2021): 2641–83. http://dx.doi.org/10.1007/s10994-021-05963-2.

Abstract:
When making decisions, people often overlook critical information or are overly swayed by irrelevant information. A common approach to mitigate these biases is to provide decision-makers, especially professionals such as medical doctors, with decision aids, such as decision trees and flowcharts. Designing effective decision aids is a difficult problem. We propose that recently developed reinforcement learning methods for discovering clever heuristics for good decision-making can be partially leveraged to assist human experts in this design process. One of the biggest remaining obstacle
16

Xu, Yixiao, Xiaolei Liu, Kangyi Ding, and Bangzhou Xin. "IBD: An Interpretable Backdoor-Detection Method via Multivariate Interactions." Sensors 22, no. 22 (2022): 8697. http://dx.doi.org/10.3390/s22228697.

Abstract:
Recent work has shown that deep neural networks are vulnerable to backdoor attacks. In comparison with the success of backdoor-attack methods, existing backdoor-defense methods face a lack of theoretical foundations and interpretable solutions. Most defense methods are based on experience with the characteristics of previous attacks, but fail to defend against new attacks. In this paper, we propose IBD, an interpretable backdoor-detection method via multivariate interactions. Using information theory techniques, IBD reveals how the backdoor works from the perspective of multivariate interactio
17

Gu, Jindong. "Interpretable Graph Capsule Networks for Object Recognition." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (2021): 1469–77. http://dx.doi.org/10.1609/aaai.v35i2.16237.

Abstract:
Capsule Networks, as alternatives to Convolutional Neural Networks, have been proposed to recognize objects from images. The current literature demonstrates many advantages of CapsNets over CNNs. However, how to create explanations for individual classifications of CapsNets has not been well explored. The widely used saliency methods are mainly proposed for explaining CNN-based classifications; they create saliency map explanations by combining activation values and the corresponding gradients, e.g., Grad-CAM. These saliency methods require a specific architecture of the underlying classifiers
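For reference, the saliency baseline this abstract mentions (Grad-CAM for CNN classifiers) can be sketched in a few lines of PyTorch using forward and backward hooks; the snippet below uses a torchvision ResNet and a random input tensor as placeholders and is not the paper's capsule-network method.

```python
# Minimal Grad-CAM sketch for a CNN classifier (the saliency baseline mentioned above).
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
feats, grads = {}, {}

def save_activation(module, inp, out):
    feats["a"] = out                               # feature maps of the last conv block

def save_gradient(module, grad_in, grad_out):
    grads["g"] = grad_out[0]                       # gradient of the score w.r.t. those maps

model.layer4.register_forward_hook(save_activation)
model.layer4.register_full_backward_hook(save_gradient)

x = torch.randn(1, 3, 224, 224)                    # placeholder input image tensor
logits = model(x)
logits[0, logits.argmax()].backward()              # backprop the predicted class score

weights = grads["g"].mean(dim=(2, 3), keepdim=True)          # global-average-pooled gradients
cam = F.relu((weights * feats["a"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
print(cam.shape)                                   # (1, 1, 224, 224) saliency map
```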
18

Hagerty, C. G., and F. A. Sonnenberg. "Computer-Interpretable Clinical Practice Guidelines." Yearbook of Medical Informatics 15, no. 01 (2006): 145–58. http://dx.doi.org/10.1055/s-0038-1638486.

Abstract:
Summary: To provide a comprehensive overview of computer-interpretable guideline (CIG) systems aimed at non-experts. The overview includes the history of efforts to develop CIGs, features of and relationships among current major CIG systems, current status of standards developments pertinent to CIGs and identification of unsolved problems and needs for future research. Literature review based on PubMed, AMIA conference proceedings and key references from publications identified. Search terms included practice guidelines, decision support, controlled vocabulary and medical record systems. Papers we
19

Ge, Xiaoyi, Mingshu Zhang, Xu An Wang, Jia Liu, and Bin Wei. "Emotion-Drive Interpretable Fake News Detection." International Journal of Data Warehousing and Mining 18, no. 1 (2022): 1–17. http://dx.doi.org/10.4018/ijdwm.314585.

Abstract:
Fake news has brought significant challenges to the healthy development of social media. Although current fake news detection methods are advanced, many models directly utilize unselected user comments and do not consider the emotional connection between news content and user comments. The authors propose an emotion-driven explainable fake news detection model (EDI) to solve this problem. The model can select valuable user comments by using sentiment value, obtain the emotional correlation representation between news content and user comments by using collaborative annotation, and obtain the w
20

Merritt, Sean H., and Alexander P. Christensen. "An Experimental Study of Dimension Reduction Methods on Machine Learning Algorithms with Applications to Psychometrics." Advances in Artificial Intelligence and Machine Learning 03, no. 01 (2023): 760–77. http://dx.doi.org/10.54364/aaiml.2023.1149.

Abstract:
Developing interpretable machine learning models has become an increasingly important issue. One way in which data scientists have been able to develop interpretable models has been to use dimension reduction techniques. In this paper, we examine several dimension reduction techniques including two recent approaches developed in the network psychometrics literature called exploratory graph analysis (EGA) and unique variable analysis (UVA). We compared EGA and UVA with two other dimension reduction techniques common in the machine learning literature (principal component analysis and independen
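The two machine-learning-literature baselines named in this abstract (PCA and ICA) plug into a standard reduce-then-classify pipeline, as in the sketch below; EGA and UVA are R-based network-psychometrics methods and are not reproduced here, and the data matrix is an invented placeholder.

```python
# Sketch of the PCA/ICA side of the comparison: reduce dimensions, then classify.
import numpy as np
from sklearn.decomposition import PCA, FastICA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 40))                   # placeholder item-response matrix
y = (X[:, :5].sum(axis=1) > 0).astype(int)       # placeholder outcome

for name, reducer in [("PCA", PCA(n_components=5)),
                      ("ICA", FastICA(n_components=5, random_state=0))]:
    pipe = make_pipeline(reducer, LogisticRegression(max_iter=1000))
    print(name, "mean CV accuracy:", cross_val_score(pipe, X, y, cv=5).mean().round(3))
```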
21

Luan, Tao, Guoqing Liang, and Pengfei Peng. "Interpretable DeepFake Detection Based on Frequency Spatial Transformer." International Journal of Emerging Technologies and Advanced Applications 1, no. 2 (2024): 19–25. http://dx.doi.org/10.62677/ijetaa.2402108.

Abstract:
In recent years, the rapid development of DeepFake has garnered significant attention. Traditional DeepFake detection methods have achieved 100% accuracy on certain corresponding datasets; however, these methods lack interpretability. Existing methods for learning forgery traces often rely on pre-annotated data based on supervised learning, which limits their abilities in non-corresponding detection scenarios. To address this issue, we propose an interpretable DeepFake detection approach based on unsupervised learning called Find-X. The Find-X network consists of two components: forgery trace
22

Verma, Abhinav. "Verifiable and Interpretable Reinforcement Learning through Program Synthesis." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 9902–3. http://dx.doi.org/10.1609/aaai.v33i01.33019902.

Abstract:
We study the problem of generating interpretable and verifiable policies for Reinforcement Learning (RL). Unlike the popular Deep Reinforcement Learning (DRL) paradigm, in which the policy is represented by a neural network, the aim of this work is to find policies that can be represented in highlevel programming languages. Such programmatic policies have several benefits, including being more easily interpreted than neural networks, and being amenable to verification by scalable symbolic methods. The generation methods for programmatic policies also provide a mechanism for systematically usin
23

Tulsani, Vijya, Prashant Sahatiya, Jignasha Parmar, and Jayshree Parmar. "XAI Applications in Medical Imaging: A Survey of Methods and Challenges." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 9 (2023): 181–86. http://dx.doi.org/10.17762/ijritcc.v11i9.8332.

Abstract:
Medical imaging plays a pivotal role in modern healthcare, aiding in the diagnosis, monitoring, and treatment of various medical conditions. With the advent of Artificial Intelligence (AI), medical imaging has witnessed remarkable advancements, promising more accurate and efficient analysis. However, the black-box nature of many AI models used in medical imaging has raised concerns regarding their interpretability and trustworthiness. In response to these challenges, Explainable AI (XAI) has emerged as a critical field, aiming to provide transparent and interpretable solutions for medical imag
24

Hayes, Sean M. S., Jeffrey R. Sachs, and Carolyn R. Cho. "From complex data to biological insight: ‘DEKER’ feature selection and network inference." Journal of Pharmacokinetics and Pharmacodynamics 49, no. 1 (2021): 81–99. http://dx.doi.org/10.1007/s10928-021-09792-7.

Abstract:
Network inference is a valuable approach for gaining mechanistic insight from high-dimensional biological data. Existing methods for network inference focus on ranking all possible relations (edges) among all measured quantities such as genes, proteins, metabolites (features) observed, which yields a dense network that is challenging to interpret. Identifying a sparse, interpretable network using these methods thus requires an error-prone thresholding step which compromises their performance. In this article we propose a new method, DEKER-NET, that addresses this limitation by directly
25

Khakabimamaghani, Sahand, Yogeshwar D. Kelkar, Bruno M. Grande, Ryan D. Morin, Martin Ester, and Daniel Ziemek. "SUBSTRA: Supervised Bayesian Patient Stratification." Bioinformatics 35, no. 18 (2019): 3263–72. http://dx.doi.org/10.1093/bioinformatics/btz112.

Abstract:
Motivation: Patient stratification methods are key to the vision of precision medicine. Here, we consider transcriptional data to segment the patient population into subsets relevant to a given phenotype. Whereas most existing patient stratification methods focus either on predictive performance or interpretable features, we developed a method striking a balance between these two important goals. Results: We introduce a Bayesian method called SUBSTRA that uses regularized biclustering to identify patient subtypes and interpretable subtype-specific transcript clusters. The method iterati
26

Sun, Lili, Xueyan Liu, Min Zhao, and Bo Yang. "Interpretable Variational Graph Autoencoder with Noninformative Prior." Future Internet 13, no. 2 (2021): 51. http://dx.doi.org/10.3390/fi13020051.

Abstract:
Variational graph autoencoder, which can encode structural information and attribute information in the graph into low-dimensional representations, has become a powerful method for studying graph-structured data. However, most existing methods based on variational (graph) autoencoder assume that the prior of latent variables obeys the standard normal distribution which encourages all nodes to gather around 0. That leads to the inability to fully utilize the latent space. Therefore, it becomes a challenge on how to choose a suitable prior without incorporating additional expert knowledge. Given
27

Cansel, Neslihan, Fatma Hilal Yagin, Mustafa Akan, and Bedriye Ilkay Aygul. "INTERPRETABLE ESTIMATION OF SUICIDE RISK AND SEVERITY FROM COMPLETE BLOOD COUNT PARAMETERS WITH EXPLAINABLE ARTIFICIAL INTELLIGENCE METHODS." PSYCHIATRIA DANUBINA 35, no. 1 (2023): 62–72. http://dx.doi.org/10.24869/psyd.2023.62.

28

Shi, Liushuai, Le Wang, Chengjiang Long, et al. "Social Interpretable Tree for Pedestrian Trajectory Prediction." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 2 (2022): 2235–43. http://dx.doi.org/10.1609/aaai.v36i2.20121.

Abstract:
Understanding the multiple socially-acceptable future behaviors is an essential task for many vision applications. In this paper, we propose a tree-based method, termed as Social Interpretable Tree (SIT), to address this multi-modal prediction task, where a hand-crafted tree is built depending on the prior information of observed trajectory to model multiple future trajectories. Specifically, a path in the tree from the root to leaf represents an individual possible future trajectory. SIT employs a coarse-to-fine optimization strategy, in which the tree is first built by high-order velocity to
29

Krumb, Henry, Dhritimaan Das, Romol Chadda, and Anirban Mukhopadhyay. "CycleGAN for interpretable online EMT compensation." International Journal of Computer Assisted Radiology and Surgery 16, no. 5 (2021): 757–65. http://dx.doi.org/10.1007/s11548-021-02324-1.

Abstract:
Purpose: Electromagnetic tracking (EMT) can partially replace X-ray guidance in minimally invasive procedures, reducing radiation in the OR. However, in this hybrid setting, EMT is disturbed by metallic distortion caused by the X-ray device. We plan to make hybrid navigation clinical reality to reduce radiation exposure for patients and surgeons, by compensating EMT error. Methods: Our online compensation strategy exploits cycle-consistent generative adversarial neural networks (CycleGAN). Positions are translated from various bedside environments to their bench equivalents, by adjustin
30

Bhambhoria, Rohan, Hui Liu, Samuel Dahan, and Xiaodan Zhu. "Interpretable Low-Resource Legal Decision Making." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (2022): 11819–27. http://dx.doi.org/10.1609/aaai.v36i11.21438.

Abstract:
Over the past several years, legal applications of deep learning have been on the rise. However, as with other high-stakes decision making areas, the requirement for interpretability is of crucial importance. Current models utilized by legal practitioners are more of the conventional machine learning type, wherein they are inherently interpretable, yet unable to harness the performance capabilities of data-driven deep learning models. In this work, we utilize deep learning models in the area of trademark law to shed light on the issue of likelihood of confusion between trademarks. Specifically
31

Whiteway, Matthew R., Dan Biderman, Yoni Friedman, et al. "Partitioning variability in animal behavioral videos using semi-supervised variational autoencoders." PLOS Computational Biology 17, no. 9 (2021): e1009439. http://dx.doi.org/10.1371/journal.pcbi.1009439.

Abstract:
Recent neuroscience studies demonstrate that a deeper understanding of brain function requires a deeper understanding of behavior. Detailed behavioral measurements are now often collected using video cameras, resulting in an increased need for computer vision algorithms that extract useful information from video data. Here we introduce a new video analysis tool that combines the output of supervised pose estimation algorithms (e.g. DeepLabCut) with unsupervised dimensionality reduction methods to produce interpretable, low-dimensional representations of behavioral videos that extract more info
32

Walter, Nils Philipp, Jonas Fischer, and Jilles Vreeken. "Finding Interpretable Class-Specific Patterns through Efficient Neural Search." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 8 (2024): 9062–70. http://dx.doi.org/10.1609/aaai.v38i8.28756.

Abstract:
Discovering patterns in data that best describe the differences between classes allows to hypothesize and reason about class-specific mechanisms. In molecular biology, for example, these bear the promise of advancing the understanding of cellular processes differing between tissues or diseases, which could lead to novel treatments. To be useful in practice, methods that tackle the problem of finding such differential patterns have to be readily interpretable by domain experts, and scalable to the extremely high-dimensional data. In this work, we propose a novel, inherently interpretable binary
33

Meng, Fan. "Creating Interpretable Data-Driven Approaches for Tropical Cyclones Forecasting." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (2022): 12892–93. http://dx.doi.org/10.1609/aaai.v36i11.21583.

Abstract:
Tropical cyclones (TC) are extreme weather phenomena that bring heavy disasters to humans. Existing forecasting techniques contain computationally intensive dynamical models and statistical methods with complex inputs, both of which have bottlenecks in intensity forecasting, and we aim to create data-driven methods to break this forecasting bottleneck. The research goal of my PhD topic is to introduce novel methods to provide accurate and trustworthy forecasting of TC by developing interpretable machine learning models to analyze the characteristics of TC from multiple sources of data such as
34

Wang, Min, Steven M. Kornblau, and Kevin R. Coombes. "Decomposing the Apoptosis Pathway Into Biologically Interpretable Principal Components." Cancer Informatics 17 (January 1, 2018): 117693511877108. http://dx.doi.org/10.1177/1176935118771082.

Abstract:
Principal component analysis (PCA) is one of the most common techniques in the analysis of biological data sets, but applying PCA raises 2 challenges. First, one must determine the number of significant principal components (PCs). Second, because each PC is a linear combination of genes, it rarely has a biological interpretation. Existing methods to determine the number of PCs are either subjective or computationally extensive. We review several methods and describe a new R package, PCDimension, that implements additional methods, the most important being an algorithm that extends and automate
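Two common heuristics for the first challenge raised in this abstract (choosing the number of significant PCs) are sketched below in Python: cumulative explained variance and the broken-stick rule. This is illustrative only; it is not the PCDimension R package described in the paper, and the data matrix is a random placeholder.

```python
# Sketch of two simple heuristics for choosing the number of principal components.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))                   # placeholder expression-like matrix

evr = PCA().fit(X).explained_variance_ratio_
p = len(evr)

# Heuristic 1: smallest k reaching 90% cumulative explained variance.
k_cumvar = int(np.searchsorted(np.cumsum(evr), 0.90)) + 1

# Heuristic 2: broken-stick rule -- keep components whose variance share exceeds
# the expected share of a randomly broken stick.
broken_stick = np.array([np.sum(1.0 / np.arange(i + 1, p + 1)) / p for i in range(p)])
k_broken = int(np.argmax(evr < broken_stick)) if np.any(evr < broken_stick) else p

print("components kept:", k_cumvar, "(cumulative variance),", k_broken, "(broken stick)")
```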
35

Weiss, S. M., and N. Indurkhya. "Rule-based Machine Learning Methods for Functional Prediction." Journal of Artificial Intelligence Research 3 (December 1, 1995): 383–403. http://dx.doi.org/10.1613/jair.199.

Abstract:
We describe a machine learning method for predicting the value of a real-valued function, given the values of multiple input variables. The method induces solutions from samples in the form of ordered disjunctive normal form (DNF) decision rules. A central objective of the method and representation is the induction of compact, easily interpretable solutions. This rule-based decision model can be extended to search efficiently for similar cases prior to approximating function values. Experimental results on real-world data demonstrate that the new techniques are competitive with existing machin
36

Feng, Aosong, Chenyu You, Shiqiang Wang, and Leandros Tassiulas. "KerGNNs: Interpretable Graph Neural Networks with Graph Kernels." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (2022): 6614–22. http://dx.doi.org/10.1609/aaai.v36i6.20615.

Abstract:
Graph kernels are historically the most widely-used technique for graph classification tasks. However, these methods suffer from limited performance because of the hand-crafted combinatorial features of graphs. In recent years, graph neural networks (GNNs) have become the state-of-the-art method in downstream graph-related tasks due to their superior performance. Most GNNs are based on Message Passing Neural Network (MPNN) frameworks. However, recent studies show that MPNNs can not exceed the power of the Weisfeiler-Lehman (WL) algorithm in graph isomorphism test. To address the limitations of
37

Wang, Yulong, Xiaolu Zhang, Xiaolin Hu, Bo Zhang, and Hang Su. "Dynamic Network Pruning with Interpretable Layerwise Channel Selection." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 6299–306. http://dx.doi.org/10.1609/aaai.v34i04.6098.

Abstract:
Dynamic network pruning achieves runtime acceleration by dynamically determining the inference paths based on different inputs. However, previous methods directly generate continuous decision values for each weight channel, which cannot reflect a clear and interpretable pruning process. In this paper, we propose to explicitly model the discrete weight channel selections, which encourages more diverse weights utilization, and achieves more sparse runtime inference paths. Meanwhile, with the help of interpretable layerwise channel selections in the dynamic network, we can visualize the network d
38

Hase, Peter, Chaofan Chen, Oscar Li, and Cynthia Rudin. "Interpretable Image Recognition with Hierarchical Prototypes." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 7 (October 28, 2019): 32–40. http://dx.doi.org/10.1609/hcomp.v7i1.5265.

Abstract:
Vision models are interpretable when they classify objects on the basis of features that a person can directly understand. Recently, methods relying on visual feature prototypes have been developed for this purpose. However, in contrast to how humans categorize objects, these approaches have not yet made use of any taxonomical organization of class labels. With such an approach, for instance, we may see why a chimpanzee is classified as a chimpanzee, but not why it was considered to be a primate or even an animal. In this work we introduce a model that uses hierarchically organized prototypes
39

Ghanem, Souhila, Raphaël Couturier, and Pablo Gregori. "An Accurate and Easy to Interpret Binary Classifier Based on Association Rules Using Implication Intensity and Majority Vote." Mathematics 9, no. 12 (2021): 1315. http://dx.doi.org/10.3390/math9121315.

Abstract:
In supervised learning, classifiers range from simpler, more interpretable and generally less accurate ones (e.g., CART, C4.5, J48) to more complex, less interpretable and more accurate ones (e.g., neural networks, SVM). In this tradeoff between interpretability and accuracy, we propose a new classifier based on association rules, that is to say, both easy to interpret and leading to relevant accuracy. To illustrate this proposal, its performance is compared to other widely used methods on six open access datasets.
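The idea in this abstract, classifying by a vote over association rules, can be roughed out as below with mlxtend; confidence stands in for the paper's implication-intensity measure, and the tiny one-hot transaction table is an invented placeholder.

```python
# Rough sketch: mine class-labelled association rules, classify by majority vote.
# Confidence is used here in place of the paper's implication-intensity measure.
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

df = pd.DataFrame({                               # placeholder one-hot transactions
    "f1": [1, 1, 0, 1, 0, 1], "f2": [0, 1, 1, 1, 0, 0], "f3": [1, 0, 1, 0, 1, 1],
    "class=0": [1, 0, 1, 0, 1, 0], "class=1": [0, 1, 0, 1, 0, 1],
}).astype(bool)

freq = apriori(df, min_support=0.3, use_colnames=True)
rules = association_rules(freq, metric="confidence", min_threshold=0.6)

is_class = lambda item: item.startswith("class=")
rules = rules[rules.apply(lambda r: len(r["consequents"]) == 1
                          and is_class(next(iter(r["consequents"])))
                          and not any(is_class(a) for a in r["antecedents"]), axis=1)]

def predict(instance_items):
    votes = {"class=0": 0, "class=1": 0}
    for _, r in rules.iterrows():                 # every matching rule casts one vote
        if r["antecedents"].issubset(instance_items):
            votes[next(iter(r["consequents"]))] += 1
    return max(votes, key=votes.get)

print(predict({"f1", "f3"}))
```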
40

Eiras-Franco, Carlos, Bertha Guijarro-Berdiñas, Amparo Alonso-Betanzos, and Antonio Bahamonde. "Interpretable Market Segmentation on High Dimension Data." Proceedings 2, no. 18 (2018): 1171. http://dx.doi.org/10.3390/proceedings2181171.

Abstract:
Obtaining relevant information from the vast amount of data generated by interactions in a market or, in general, from a dyadic dataset, is a broad problem of great interest both for industry and academia. Also, the interpretability of machine learning algorithms is becoming increasingly relevant and even becoming a legal requirement, all of which increases the demand for such algorithms. In this work we propose a quality measure that factors in the interpretability of results. Additionally, we present a grouping algorithm on dyadic data that returns results with a level of interpretability se
41

Pulkkinen, Pietari, and Hannu Koivisto. "Identification of interpretable and accurate fuzzy classifiers and function estimators with hybrid methods." Applied Soft Computing 7, no. 2 (2007): 520–33. http://dx.doi.org/10.1016/j.asoc.2006.11.001.

42

Liu, Yuekai, Tianyang Wang, and Fulei Chu. "Hybrid machine condition monitoring based on interpretable dual tree methods using Wasserstein metrics." Expert Systems with Applications 235 (January 2024): 121104. http://dx.doi.org/10.1016/j.eswa.2023.121104.

43

Munir, Nimra, Ross McMorrow, Konrad Mulrennan, et al. "Interpretable Machine Learning Methods for Monitoring Polymer Degradation in Extrusion of Polylactic Acid." Polymers 15, no. 17 (2023): 3566. http://dx.doi.org/10.3390/polym15173566.

Abstract:
This work investigates real-time monitoring of extrusion-induced degradation in different grades of PLA across a range of process conditions and machine set-ups. Data on machine settings together with in-process sensor data, including temperature, pressure, and near-infrared (NIR) spectra, are used as inputs to predict the molecular weight and mechanical properties of the product. Many soft sensor approaches based on complex spectral data are essentially ‘black-box’ in nature, which can limit industrial acceptability. Hence, the focus here is on identifying an optimal approach to developing in
44

Qiao, Zuqiang, Shengzhi Dong, Qing Li, et al. "Performance prediction models for sintered NdFeB using machine learning methods and interpretable studies." Journal of Alloys and Compounds 963 (November 2023): 171250. http://dx.doi.org/10.1016/j.jallcom.2023.171250.

45

Ragazzo, Michele, Stefano Melchiorri, Laura Manzo, et al. "Comparative Analysis of ANDE 6C Rapid DNA Analysis System and Traditional Methods." Genes 11, no. 5 (2020): 582. http://dx.doi.org/10.3390/genes11050582.

Abstract:
Rapid DNA analysis is an ultrafast and fully automated DNA-typing system, which can produce interpretable genetic profiles from biological samples within 90 minutes. This “swab in—profile out” method comprises DNA extraction, amplification by PCR multiplex, separation and detection of DNA fragments by capillary electrophoresis. The aim of the study was the validation of the Accelerated Nuclear DNA Equipment (ANDE) 6C system as a typing method for reference samples according to the ISO/IEC 17025 standard. Here, we report the evaluation of the validity and reproducibility of results by the compariso
46

Wu, Bozhi, Sen Chen, Cuiyun Gao, et al. "Why an Android App Is Classified as Malware." ACM Transactions on Software Engineering and Methodology 30, no. 2 (2021): 1–29. http://dx.doi.org/10.1145/3423096.

Abstract:
The machine learning (ML)–based approach is considered one of the most promising techniques for Android malware detection and has achieved high accuracy by leveraging commonly used features. In practice, most of the ML classifications only provide a binary label to mobile users and app security analysts. However, stakeholders are more interested in the reason why apps are classified as malicious in both academia and industry. This belongs to the research area of interpretable ML but in a specific research domain (i.e., mobile malware detection). Although several interpretable ML methods have be
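To illustrate the kind of per-app explanation this abstract asks for, the sketch below trains a plain logistic regression over binary app features and reads off its signed per-feature contributions for one app; the feature names and labels are invented placeholders, and this is not the paper's interpretation method.

```python
# Illustrative sketch only: a linear model whose weights show why one app
# is pushed toward the "malware" label.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["SEND_SMS", "READ_CONTACTS", "INTERNET", "uses_reflection", "requests_admin"]
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, len(features)))            # placeholder app feature matrix
y = ((X[:, 0] & X[:, 4]) | (X[:, 1] & X[:, 3])).astype(int)  # placeholder labels

clf = LogisticRegression(max_iter=1000).fit(X, y)
contrib = clf.coef_[0] * X[0]                                # per-feature contribution, app 0
for name, c in sorted(zip(features, contrib), key=lambda t: -abs(t[1])):
    print(f"{name:>16}: {c:+.2f}")
```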
47

Xiang, Ziyu, Mingzhou Fan, Guillermo Vázquez Tovar, et al. "Physics-constrained Automatic Feature Engineering for Predictive Modeling in Materials Science." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (2021): 10414–21. http://dx.doi.org/10.1609/aaai.v35i12.17247.

Abstract:
Automatic Feature Engineering (AFE) aims to extract useful knowledge for interpretable predictions given data for the machine learning tasks. Here, we develop AFE to extract dependency relationships that can be interpreted with functional formulas to discover physics meaning or new hypotheses for the problems of interest. We focus on materials science applications, where interpretable predictive modeling may provide principled understanding of materials systems and guide new materials discovery. It is often computationally prohibitive to exhaust all the potential relationships to construct and
48

Abafogi, Abdo Ababor. "Survey on Interpretable Semantic Textual Similarity, and its Applications." International Journal of Innovative Technology and Exploring Engineering 10, no. 3 (2021): 14–18. http://dx.doi.org/10.35940/ijitee.b8294.0110321.

Abstract:
Both semantic representation and related natural language processing (NLP) tasks have become more popular due to the introduction of distributional semantics. Semantic textual similarity (STS) is one such task in NLP; it determines the similarity based on the meanings of two short texts (sentences). Interpretable STS is the way of giving an explanation of the semantic similarity between short texts. Giving an interpretation is indeed possible for a human, but constructing computational models that explain at a human level is challenging. The interpretable STS task gives output in a natural way with a continuous value o
49

Nan, Tianlong, Yuan Gao, and Christian Kroer. "Fast and Interpretable Dynamics for Fisher Markets via Block-Coordinate Updates." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 5 (2023): 5832–40. http://dx.doi.org/10.1609/aaai.v37i5.25723.

Abstract:
We consider the problem of large-scale Fisher market equilibrium computation through scalable first-order optimization methods. It is well-known that market equilibria can be captured using structured convex programs such as the Eisenberg-Gale and Shmyrev convex programs. Highly performant deterministic full-gradient first-order methods have been developed for these programs. In this paper, we develop new block-coordinate first-order methods for computing Fisher market equilibria, and show that these methods have interpretations as tâtonnement-style or proportional response-style dynamics wher
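The proportional-response-style dynamics this abstract refers to have a well-known classical form for linear Fisher markets, sketched below in NumPy with invented utilities and unit budgets; it is not the paper's block-coordinate algorithm.

```python
# Sketch of classical proportional-response dynamics for a linear Fisher market.
import numpy as np

rng = np.random.default_rng(0)
n_buyers, n_goods = 5, 4
U = rng.uniform(0.1, 1.0, size=(n_buyers, n_goods))   # linear utilities u_ij (placeholders)
B = np.ones(n_buyers)                                  # unit budgets

bids = np.full((n_buyers, n_goods), 1.0 / n_goods) * B[:, None]
for _ in range(500):
    prices = bids.sum(axis=0)                          # p_j: total spending on good j
    X = bids / prices                                  # allocation x_ij = b_ij / p_j
    gains = U * X                                      # utility buyer i derives from good j
    bids = B[:, None] * gains / gains.sum(axis=1, keepdims=True)   # respond proportionally

print("approximate equilibrium prices:", prices.round(3))
```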
50

Zhao, Mingyang, Junchang Xin, Zhongyang Wang, Xinlei Wang, and Zhiqiong Wang. "Interpretable Model Based on Pyramid Scene Parsing Features for Brain Tumor MRI Image Segmentation." Computational and Mathematical Methods in Medicine 2022 (January 31, 2022): 1–10. http://dx.doi.org/10.1155/2022/8000781.

Abstract:
Due to the black box model nature of convolutional neural networks, computer-aided diagnosis methods based on depth learning are usually poorly interpretable. Therefore, the diagnosis results obtained by these unexplained methods are difficult to gain the trust of patients and doctors, which limits their application in the medical field. To solve this problem, an interpretable depth learning image segmentation framework is proposed in this paper for processing brain tumor magnetic resonance images. A gradient-based class activation mapping method is introduced into the segmentation model based