
Journal articles on the topic 'Interpretable deep learning'



Consult the top journal articles for your research on the topic 'Interpretable deep learning.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Gangopadhyay, Tryambak, Sin Yong Tan, Anthony LoCurto, James B. Michael, and Soumik Sarkar. "Interpretable Deep Learning for Monitoring Combustion Instability." IFAC-PapersOnLine 53, no. 2 (2020): 832–37. http://dx.doi.org/10.1016/j.ifacol.2020.12.839.

2

Zheng, Hong, Yinglong Dai, Fumin Yu, and Yuezhen Hu. "Interpretable Saliency Map for Deep Reinforcement Learning." Journal of Physics: Conference Series 1757, no. 1 (2021): 012075. http://dx.doi.org/10.1088/1742-6596/1757/1/012075.

3

Ruffolo, Jeffrey A., Jeremias Sulam, and Jeffrey J. Gray. "Antibody structure prediction using interpretable deep learning." Patterns 3, no. 2 (2022): 100406. http://dx.doi.org/10.1016/j.patter.2021.100406.

4

Bhambhoria, Rohan, Hui Liu, Samuel Dahan, and Xiaodan Zhu. "Interpretable Low-Resource Legal Decision Making." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (2022): 11819–27. http://dx.doi.org/10.1609/aaai.v36i11.21438.

Abstract:
Over the past several years, legal applications of deep learning have been on the rise. However, as with other high-stakes decision making areas, the requirement for interpretability is of crucial importance. Current models utilized by legal practitioners are more of the conventional machine learning type, wherein they are inherently interpretable, yet unable to harness the performance capabilities of data-driven deep learning models. In this work, we utilize deep learning models in the area of trademark law to shed light on the issue of likelihood of confusion between trademarks. Specifically
5

Arik, Sercan Ö., and Tomas Pfister. "TabNet: Attentive Interpretable Tabular Learning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (2021): 6679–87. http://dx.doi.org/10.1609/aaai.v35i8.16826.

Abstract:
We propose a novel high-performance and interpretable canonical deep tabular data learning architecture, TabNet. TabNet uses sequential attention to choose which features to reason from at each decision step, enabling interpretability and more efficient learning as the learning capacity is used for the most salient features. We demonstrate that TabNet outperforms other variants on a wide range of non-performance-saturated tabular datasets and yields interpretable feature attributions plus insights into its global behavior. Finally, we demonstrate self-supervised learning for tabular data, sign
6

Lin, Chih-Hsu, and Olivier Lichtarge. "Using interpretable deep learning to model cancer dependencies." Bioinformatics 37, no. 17 (2021): 2675–81. http://dx.doi.org/10.1093/bioinformatics/btab137.

Abstract:
Motivation: Cancer dependencies provide potential drug targets. Unfortunately, dependencies differ among cancers and even individuals. To this end, visible neural networks (VNNs) are promising due to robust performance and the interpretability required for the biomedical field. Results: We design Biological visible neural network (BioVNN) using pathway knowledge to predict cancer dependencies. Despite having fewer parameters, BioVNN marginally outperforms traditional neural networks (NNs) and converges faster. BioVNN also outperforms an NN based on randomized pathways. More importantly,
7

Liao, WangMin, BeiJi Zou, RongChang Zhao, YuanQiong Chen, ZhiYou He, and MengJie Zhou. "Clinical Interpretable Deep Learning Model for Glaucoma Diagnosis." IEEE Journal of Biomedical and Health Informatics 24, no. 5 (2020): 1405–12. http://dx.doi.org/10.1109/jbhi.2019.2949075.

8

Matsubara, Takashi. "Bayesian deep learning: A model-based interpretable approach." Nonlinear Theory and Its Applications, IEICE 11, no. 1 (2020): 16–35. http://dx.doi.org/10.1587/nolta.11.16.

9

Liu, Yi, Kenneth Barr, and John Reinitz. "Fully interpretable deep learning model of transcriptional control." Bioinformatics 36, Supplement_1 (2020): i499–i507. http://dx.doi.org/10.1093/bioinformatics/btaa506.

Abstract:
Motivation: The universal expressibility assumption of Deep Neural Networks (DNNs) is the key motivation behind recent works in the systems biology community to employ DNNs to solve important problems in functional genomics and molecular genetics. Typically, such investigations have taken a ‘black box’ approach in which the internal structure of the model used is set purely by machine learning considerations with little consideration of representing the internal structure of the biological system by the mathematical structure of the DNN. DNNs have not yet been applied to the detailed modelin
10

Yamuna, Vadada. "Interpretable Deep Learning Models for Improved Diabetes Diagnosis." International Journal of Scientific Research in Engineering and Management 09, no. 06 (2025): 1–9. https://doi.org/10.55041/ijsrem50834.

Abstract:
Diabetes, a chronic condition marked by persistent high blood sugar, poses major global health challenges due to complications like cardiovascular disease and neuropathy. Traditional diagnostic methods, though common, are invasive, time-consuming, and prone to interpretation errors. To overcome these issues, this project proposes a novel machine learning framework that integrates structured data (e.g., demographics, test results) and unstructured data (e.g., retinal images, clinical notes) using deep learning models like CNNs, RNNs, and transformers. Explainable AI techniques, such as SHAP and
11

Bang, Seojin, Pengtao Xie, Heewook Lee, Wei Wu, and Eric Xing. "Explaining A Black-box By Using A Deep Variational Information Bottleneck Approach." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (2021): 11396–404. http://dx.doi.org/10.1609/aaai.v35i13.17358.

Abstract:
Interpretable machine learning has gained much attention recently. Briefness and comprehensiveness are necessary in order to provide a large amount of information concisely when explaining a black-box decision system. However, existing interpretable machine learning methods fail to consider briefness and comprehensiveness simultaneously, leading to redundant explanations. We propose the variational information bottleneck for interpretation, VIBI, a system-agnostic interpretable method that provides a brief but comprehensive explanation. VIBI adopts an information theoretic principle, informati
12

Nisha, M. P. "Interpretable Deep Neural Networks using SHAP and LIME for Decision Making in Smart Home Automation." International Scientific Journal of Engineering and Management 04, no. 05 (2025): 1–7. https://doi.org/10.55041/isjem03409.

Abstract:
Deep Neural Networks (DNNs) are increasingly being used in smart home automation for intelligent decision-making based on IoT sensor data. This project aims to develop an interpretable deep neural network model for decision-making in smart home automation using SHAP (Shapley Additive Explanations) and LIME (Local Interpretable Model-agnostic Explanations). The focus is on enhancing transparency in AI-driven automation systems by providing clear explanations for model predictions. The approach involves collecting IoT sensor data from smart home environments, training a deep learning
13

Brinkrolf, Johannes, and Barbara Hammer. "Interpretable machine learning with reject option." at - Automatisierungstechnik 66, no. 4 (2018): 283–90. http://dx.doi.org/10.1515/auto-2017-0123.

Abstract:
Classification by means of machine learning models constitutes one relevant technology in process automation and predictive maintenance. However, common techniques such as deep networks or random forests suffer from their black box characteristics and possible adversarial examples. In this contribution, we give an overview about a popular alternative technology from machine learning, namely modern variants of learning vector quantization, which, due to their combined discriminative and generative nature, incorporate interpretability and the possibility of explicit reject options for i
14

An, Junkang, Yiwan Zhang, and Inwhee Joe. "Specific-Input LIME Explanations for Tabular Data Based on Deep Learning Models." Applied Sciences 13, no. 15 (2023): 8782. http://dx.doi.org/10.3390/app13158782.

Abstract:
Deep learning researchers believe that as deep learning models evolve, they can perform well on many tasks. However, the complex parameters of deep learning models make it difficult for users to understand how deep learning models make predictions. In this paper, we propose the specific-input local interpretable model-agnostic explanations (LIME) model, a novel interpretable artificial intelligence (XAI) method that interprets deep learning models of tabular data. The specific-input process uses feature importance and partial dependency plots (PDPs) to select the “what” and “how”. In our exper
15

Zinemanas, Pablo, Martín Rocamora, Marius Miron, Frederic Font, and Xavier Serra. "An Interpretable Deep Learning Model for Automatic Sound Classification." Electronics 10, no. 7 (2021): 850. http://dx.doi.org/10.3390/electronics10070850.

Abstract:
Deep learning models have improved cutting-edge technologies in many research areas, but their black-box structure makes it difficult to understand their inner workings and the rationale behind their predictions. This may lead to unintended effects, such as being susceptible to adversarial attacks or the reinforcement of biases. There is still a lack of research in the audio domain, despite the increasing interest in developing deep learning models that provide explanations of their decisions. To reduce this gap, we propose a novel interpretable deep learning model for automatic sound classifi
16

Mu, Xuechen, Zhenyu Huang, Qiufen Chen, et al. "DeepEnhancerPPO: An Interpretable Deep Learning Approach for Enhancer Classification." International Journal of Molecular Sciences 25, no. 23 (2024): 12942. https://doi.org/10.3390/ijms252312942.

Abstract:
Enhancers are short genomic segments located in non-coding regions of the genome that play a critical role in regulating the expression of target genes. Despite their importance in transcriptional regulation, effective methods for classifying enhancer categories and regulatory strengths remain limited. To address this challenge, we propose a novel end-to-end deep learning architecture named DeepEnhancerPPO. The model integrates ResNet and Transformer modules to extract local, hierarchical, and long-range contextual features. Following feature fusion, we employ Proximal Policy Optimization (PPO
17

Ma, Shuang, Haifeng Wang, Wei Zhao, et al. "An interpretable deep learning model for hallux valgus prediction." Computers in Biology and Medicine 185 (February 2025): 109468. https://doi.org/10.1016/j.compbiomed.2024.109468.

18

Gagne II, David John, Sue Ellen Haupt, Douglas W. Nychka, and Gregory Thompson. "Interpretable Deep Learning for Spatial Analysis of Severe Hailstorms." Monthly Weather Review 147, no. 8 (2019): 2827–45. http://dx.doi.org/10.1175/mwr-d-18-0316.1.

Abstract:
Deep learning models, such as convolutional neural networks, utilize multiple specialized layers to encode spatial patterns at different scales. In this study, deep learning models are compared with standard machine learning approaches on the task of predicting the probability of severe hail based on upper-air dynamic and thermodynamic fields from a convection-allowing numerical weather prediction model. The data for this study come from patches surrounding storms identified in NCAR convection-allowing ensemble runs from 3 May to 3 June 2016. The machine learning models are trained to
19

Abdel-Basset, Mohamed, Hossam Hawash, Khalid Abdulaziz Alnowibet, Ali Wagdy Mohamed, and Karam M. Sallam. "Interpretable Deep Learning for Discriminating Pneumonia from Lung Ultrasounds." Mathematics 10, no. 21 (2022): 4153. http://dx.doi.org/10.3390/math10214153.

Abstract:
Lung ultrasound images have shown great promise to be an operative point-of-care test for the diagnosis of COVID-19 because of the ease of procedure with negligible individual protection equipment, together with relaxed disinfection. Deep learning (DL) is a robust tool for modeling infection patterns from medical images; however, the existing COVID-19 detection models are complex and thereby are hard to deploy in frequently used mobile platforms in point-of-care testing. Moreover, most of the COVID-19 detection models in the existing literature on DL are implemented as a black box, hence, they
20

Chen, Xingguo, Yang Li, Xiaoyan Xu, and Min Shao. "A Novel Interpretable Deep Learning Model for Ozone Prediction." Applied Sciences 13, no. 21 (2023): 11799. http://dx.doi.org/10.3390/app132111799.

Abstract:
Due to the limited understanding of the physical and chemical processes involved in ozone formation, as well as the large uncertainties surrounding its precursors, commonly used methods often result in biased predictions. Deep learning, as a powerful tool for fitting data, offers an alternative approach. However, most deep learning-based ozone-prediction models only take into account temporality and have limited capacity. Existing spatiotemporal deep learning models generally suffer from model complexity and inadequate spatiality learning. Thus, we propose a novel spatiotemporal model, namely
21

Zhang, Rongquan, Siqi Bu, Min Zhou, Gangqiang Li, Baishao Zhan, and Zhe Zhang. "Deep reinforcement learning based interpretable photovoltaic power prediction framework." Sustainable Energy Technologies and Assessments 67 (July 2024): 103830. http://dx.doi.org/10.1016/j.seta.2024.103830.

22

Xu, Lingfeng, Julie Liss, and Visar Berisha. "Dysarthria detection based on a deep learning model with a clinically-interpretable layer." JASA Express Letters 3, no. 1 (2023): 015201. http://dx.doi.org/10.1121/10.0016833.

Abstract:
Studies have shown deep neural networks (DNN) as a potential tool for classifying dysarthric speakers and controls. However, representations used to train DNNs are largely not clinically interpretable, which limits clinical value. Here, a model with a bottleneck layer is trained to jointly learn a classification label and four clinically-interpretable features. Evaluation of two dysarthria subtypes shows that the proposed method can flexibly trade-off between improved classification accuracy and discovery of clinically-interpretable deficit patterns. The analysis using Shapley additive explana
23

Vengatesh, T. "Transparent Decision-Making with Explainable AI (XAI): Advances in Interpretable Deep Learning." Journal of Information Systems Engineering and Management 10, no. 4 (2025): 1295–303. https://doi.org/10.52783/jisem.v10i4.10584.

Abstract:
As artificial intelligence (AI) systems, particularly deep learning models, become increasingly integrated into critical decision-making processes, the demand for transparency and interpretability grows. Explainable AI (XAI) addresses the "black-box" nature of deep learning by developing methods that make AI decisions understandable to humans. This paper explores recent advances in interpretable deep learning models, focusing on techniques such as attention mechanisms, SHAP (SHapley Additive exPlanations), LIME (Local Interpretable Model-agnostic Explanations), and self-explaining neural netwo
24

Koriakina, Nadezhda, Nataša Sladoje, Vladimir Bašić, and Joakim Lindblad. "Deep multiple instance learning versus conventional deep single instance learning for interpretable oral cancer detection." PLOS ONE 19, no. 4 (2024): e0302169. http://dx.doi.org/10.1371/journal.pone.0302169.

Abstract:
The current medical standard for setting an oral cancer (OC) diagnosis is histological examination of a tissue sample taken from the oral cavity. This process is time-consuming and more invasive than an alternative approach of acquiring a brush sample followed by cytological analysis. Using a microscope, skilled cytotechnologists are able to detect changes due to malignancy; however, introducing this approach into clinical routine is associated with challenges such as a lack of resources and experts. To design a trustworthy OC detection system that can assist cytotechnologists, we are interest
25

Reddy, Kudumula Tejeswar, B. Dilip Chakravarthy, M. Subbarao, and Asadi Srinivasulu. "Enhancing Plant Disease Detection through Deep Learning." International Journal of Scientific Methods in Engineering and Management 01, no. 10 (2023): 01–13. http://dx.doi.org/10.58599/ijsmem.2023.11001.

Abstract:
Progress in the field of deep learning has displayed significant potential in transforming the detection of plant diseases, presenting automated and efficient approaches to support agricultural processes. This investigation explores the major obstacles associated with the detection of plant diseases, emphasizing constraints related to data availability, generalization across different plant species, variability in environmental conditions, and the need for model interpretability in the realm of deep learning. The deficiency and imbalanced distribution of labeled data present considerable chall
26

Cheng, Xueyi, and Chang Che. "Interpretable Machine Learning: Explainability in Algorithm Design." Journal of Industrial Engineering and Applied Science 2, no. 6 (2024): 65–70. https://doi.org/10.70393/6a69656173.323337.

Abstract:
In recent years, there is a high demand for transparency and accountability in machine learning models, especially in domains such as healthcare, finance and etc. In this paper, we delve into deep how to make machine learning models more interpretable, with focus on the importance of the explainability of the algorithm design. The main objective of this paper is to fill this gap and provide a comprehensive survey and analytical study towards AutoML. To that end, we first introduce the AutoML technology and review its various tools and techniques.
27

Schmid, Ute, and Bettina Finzel. "Mutual Explanations for Cooperative Decision Making in Medicine." KI - Künstliche Intelligenz 34, no. 2 (2020): 227–33. http://dx.doi.org/10.1007/s13218-020-00633-2.

Abstract:
Exploiting mutual explanations for interactive learning is presented as part of an interdisciplinary research project on transparent machine learning for medical decision support. Focus of the project is to combine deep learning black box approaches with interpretable machine learning for classification of different types of medical images to combine the predictive accuracy of deep learning and the transparency and comprehensibility of interpretable models. Specifically, we present an extension of the Inductive Logic Programming system Aleph to allow for interactive learning. Medical e
28

Wei, Kaihua, Bojian Chen, Jingcheng Zhang, et al. "Explainable Deep Learning Study for Leaf Disease Classification." Agronomy 12, no. 5 (2022): 1035. http://dx.doi.org/10.3390/agronomy12051035.

Abstract:
Explainable artificial intelligence has been extensively studied recently. However, the research of interpretable methods in the agricultural field has not been systematically studied. We studied the interpretability of deep learning models in different agricultural classification tasks based on the fruit leaves dataset. The purpose is to explore whether the classification model is more inclined to extract the appearance characteristics of leaves or the texture characteristics of leaf lesions during the feature extraction process. The dataset was arranged into three experiments with different
29

Weng, Tsui-Wei (Lily). "Towards Trustworthy Deep Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 20 (2024): 22682. http://dx.doi.org/10.1609/aaai.v38i20.30298.

Abstract:
Deep neural networks (DNNs) have achieved unprecedented success across many scientific and engineering fields in the last decades. Despite its empirical success, unfortunately, recent studies have shown that there are various failure modes and blindspots in DNN models which may result in unexpected serious failures and potential harms, e.g. the existence of adversarial examples and small perturbations. This is not acceptable especially for safety critical and high stakes applications in the real-world, including healthcare, self-driving cars, aircraft control systems, hiring and malware detect
30

Monje, Leticia, Ramón A. Carrasco, Carlos Rosado, and Manuel Sánchez-Montañés. "Deep Learning XAI for Bus Passenger Forecasting: A Use Case in Spain." Mathematics 10, no. 9 (2022): 1428. http://dx.doi.org/10.3390/math10091428.

Abstract:
Time series forecasting of passenger demand is crucial for optimal planning of limited resources. For smart cities, passenger transport in urban areas is an increasingly important problem, because the construction of infrastructure is not the solution and the use of public transport should be encouraged. One of the most sophisticated techniques for time series forecasting is Long Short Term Memory (LSTM) neural networks. These deep learning models are very powerful for time series forecasting but are not interpretable by humans (black-box models). Our goal was to develop a predictive and lingu
31

Sieusahai, Alexander, and Matthew Guzdial. "Explaining Deep Reinforcement Learning Agents in the Atari Domain through a Surrogate Model." Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment 17, no. 1 (2021): 82–90. http://dx.doi.org/10.1609/aiide.v17i1.18894.

Abstract:
One major barrier to applications of deep Reinforcement Learning (RL) both inside and outside of games is the lack of explainability. In this paper, we describe a lightweight and effective method to derive explanations for deep RL agents, which we evaluate in the Atari domain. Our method relies on a transformation of the pixel-based input of the RL agent to a symbolic, interpretable input representation. We then train a surrogate model, which is itself interpretable, to replicate the behavior of the target, deep RL agent. Our experiments demonstrate that we can learn an effective surrogate tha
32

Ajioka, Takehiro, Nobuhiro Nakai, Okito Yamashita, and Toru Takumi. "End-to-end deep learning approach to mouse behavior classification from cortex-wide calcium imaging." PLOS Computational Biology 20, no. 3 (2024): e1011074. http://dx.doi.org/10.1371/journal.pcbi.1011074.

Abstract:
Deep learning is a powerful tool for neural decoding, broadly applied to systems neuroscience and clinical studies. Interpretable and transparent models that can explain neural decoding for intended behaviors are crucial to identifying essential features of deep learning decoders in brain activity. In this study, we examine the performance of deep learning to classify mouse behavioral states from mesoscopic cortex-wide calcium imaging data. Our convolutional neural network (CNN)-based end-to-end decoder combined with recurrent neural network (RNN) classifies the behavioral states with high acc
33

Zhu, Xiyue, Yu Cheng, Jiafeng He, and Juan Guo. "Adaptive Mask-Based Interpretable Convolutional Neural Network (AMI-CNN) for Modulation Format Identification." Applied Sciences 14, no. 14 (2024): 6302. http://dx.doi.org/10.3390/app14146302.

Abstract:
Recently, various deep learning methods have been applied to Modulation Format Identification (MFI). The interpretability of deep learning models is important. However, this interpretability is challenged due to the black-box nature of deep learning. To deal with this difficulty, we propose an Adaptive Mask-Based Interpretable Convolutional Neural Network (AMI-CNN) that utilizes a mask structure for feature selection during neural network training and feeds the selected features into the classifier for decision making. During training, the masks are updated dynamically with parameters to optim
34

Xu, Mouyi, and Lijun Chang. "Graph Edit Distance Estimation: A New Heuristic and A Holistic Evaluation of Learning-based Methods." Proceedings of the ACM on Management of Data 3, no. 3 (2025): 1–24. https://doi.org/10.1145/3725304.

Abstract:
Graph edit distance (GED) is an important metric for measuring the distance or similarity between two graphs. It is defined as the minimum number of edit operations required to transform one graph into another. Computing the exact GED between two graphs is an NP-hard problem. With the success of deep learning across various application domains, graph neural networks have also been recently utilized to predict the GED between graphs. However, the existing studies on learning-based methods have two significant limitations. (1) The development of deep learning models for GED prediction has been e
35

Joseph, Aaron Tsapa. "Interpretable Deep Learning for Fintech: Enabling Ethical and Explainable AI-Driven Financial Solutions." Journal of Scientific and Engineering Research 11, no. 3 (2024): 271–77. https://doi.org/10.5281/zenodo.11220841.

Abstract:
Deep learning technologies in financial technology (FinTech) have helped develop the sector, and the FinTech winter brought innovation into the field; thus, the automation of transactions, insights, and risk prediction were all enhanced. On the one hand, artificial intelligence techniques allow enhanced performance; however, such progress also raises the ethical issues associated with using hidden algorithms as a driving force. This article concentrates on topics based on moral and explainable AI in FinTech and presents the principles of the pillars of transparency and responsibility. The path
36

Deshpande, R. S., and P. V. Ambatkar. "Interpretable Deep Learning Models: Enhancing Transparency and Trustworthiness in Explainable AI." Proceeding International Conference on Science and Engineering 11, no. 1 (2023): 1352–63. http://dx.doi.org/10.52783/cienceng.v11i1.286.

Abstract:
Explainable AI (XAI) aims to address the opacity of deep learning models, which can limit their adoption in critical decision-making applications. This paper presents a novel framework that integrates interpretable components and visualization techniques to enhance the transparency and trustworthiness of deep learning models. We propose a hybrid explanation method combining saliency maps, feature attribution, and local interpretable model-agnostic explanations (LIME) to provide comprehensive insights into the model's decision-making process.
 Our experiments with convolutional neural netw
37

Shamsuzzaman, Md. "Explainable and Interpretable Deep Learning Models." Global Journal of Engineering Sciences 5, no. 5 (2020). http://dx.doi.org/10.33552/gjes.2020.05.000621.

38

Ahsan, Md Manjurul, Md Shahin Ali, Md Mehedi Hassan, et al. "Monkeypox Diagnosis with Interpretable Deep Learning." IEEE Access, 2023, 1. http://dx.doi.org/10.1109/access.2023.3300793.

39

Delaunay, Antoine, and Hannah M. Christensen. "Interpretable Deep Learning for Probabilistic MJO Prediction." Geophysical Research Letters, August 24, 2022. http://dx.doi.org/10.1029/2022gl098566.

40

Ahn, Daehwan, Dokyun Lee, and Kartik Hosanagar. "Interpretable Deep Learning Approach to Churn Management." SSRN Electronic Journal, 2020. http://dx.doi.org/10.2139/ssrn.3981160.

41

Richman, Ronald, and Mario V. Wuthrich. "LocalGLMnet: interpretable deep learning for tabular data." SSRN Electronic Journal, 2021. http://dx.doi.org/10.2139/ssrn.3892015.

42

Kim, Dohyun, Jungtae Lee, Jangsup Moon, and Taesup Moon. "Interpretable Deep Learning‐based Hippocampal Sclerosis Classification." Epilepsia Open, September 29, 2022. http://dx.doi.org/10.1002/epi4.12655.

43

Zografopoulos, Lazaros, Maria Chiara Iannino, Ioannis Psaradellis, and Georgios Sermpinis. "Industry return prediction via interpretable deep learning." European Journal of Operational Research, August 2024. http://dx.doi.org/10.1016/j.ejor.2024.08.032.

44

Wagle, Manoj M., Siqu Long, Carissa Chen, Chunlei Liu, and Pengyi Yang. "Interpretable deep learning in single-cell omics." Bioinformatics, June 18, 2024. http://dx.doi.org/10.1093/bioinformatics/btae374.

Abstract:
Motivation: Single-cell omics technologies have enabled the quantification of molecular profiles in individual cells at an unparalleled resolution. Deep learning, a rapidly evolving sub-field of machine learning, has instilled a significant interest in single-cell omics research due to its remarkable success in analysing heterogeneous high-dimensional single-cell omics data. Nevertheless, the inherent multi-layer nonlinear architecture of deep learning models often makes them ‘black boxes’ as the reasoning behind predictions is often unknown and not transparent to the user. This has st
45

Oyedeji, Mojeed Opeyemi, Emmanuel Okafor, Hussein Samma, and Motaz Alfarraj. "Interpretable Deep Learning for Classifying Skin Lesions." International Journal of Intelligent Systems 2025, no. 1 (2025). https://doi.org/10.1155/int/2751767.

Abstract:
The global prevalence of skin cancer necessitates the development of AI‐assisted technologies for accurate and interpretable diagnosis of skin lesions. This study presents a novel deep learning framework for enhancing the interpretability and reliability of skin lesion predictions from clinical images, which are more inclusive, accessible, and representative of real‐world conditions than dermoscopic images. We comprehensively analyzed 13 deep learning models from four main convolutional neural network architecture classes: DenseNet, ResNet, MobileNet, and EfficientNet. Different data augmentat
46

Li, Xuhong, Haoyi Xiong, Xingjian Li, et al. "Interpretable deep learning: interpretation, interpretability, trustworthiness, and beyond." Knowledge and Information Systems, September 14, 2022. http://dx.doi.org/10.1007/s10115-022-01756-8.

47

Huang, Liyao, Weimin Zheng, and Zuohua Deng. "Tourism Demand Forecasting: An Interpretable Deep Learning Model." Tourism Analysis, 2024. http://dx.doi.org/10.3727/108354224x17180286995735.

Abstract:
With emerging learning techniques and large datasets, the advantages of applying deep learning models in the field of tourism demand forecasting have been increasingly recognized. However, the lack of sufficient interpretability has led to questioning the credibility of most existing deep learning models. This study attempts to meet these challenges by proposing an interpretable deep learning framework, which combines the long short-term memory model with Shapley Additive interpretation. Results of two case studies conducted in China confirm that our model can perfectly reconcile interpretabil
48

Jiang, Kai, Zheli Xiong, Qichong Yang, Jianpeng Chen, and Gang Chen. "An interpretable ensemble method for deep representation learning." Engineering Reports, July 4, 2023. http://dx.doi.org/10.1002/eng2.12725.
