Academic literature on the topic 'Layer-wise Relevance Propagation (LRP)'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Layer-wise Relevance Propagation (LRP).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Layer-wise Relevance Propagation (LRP)"

1

Ullah, Ihsan, Andre Rios, Vaibhav Gala, and Susan McKeever. "Explaining Deep Learning Models for Tabular Data Using Layer-Wise Relevance Propagation." Applied Sciences 12, no. 1 (2021): 136. http://dx.doi.org/10.3390/app12010136.

Abstract:
Trust and credibility in machine learning models are bolstered by the ability of a model to explain its decisions. While explainability of deep learning models is a well-known challenge, a further challenge is clarity of the explanation itself for relevant stakeholders of the model. Layer-wise Relevance Propagation (LRP), an established explainability technique developed for deep models in computer vision, provides intuitive human-readable heat maps of input images. We present the novel application of LRP to tabular datasets containing mixed data (categorical and numerical) using a deep neural network (1D-CNN), for Credit Card Fraud detection and Telecom Customer Churn prediction use cases. We show that LRP is more effective for explainability than the traditional techniques of Local Interpretable Model-agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP). This effectiveness is both local at the sample level and holistic over the whole testing set. We also discuss the significant computational time advantage of LRP (1–2 s) over LIME (22 s) and SHAP (108 s) on the same laptop, and thus its potential for real-time application scenarios. In addition, our validation of LRP has highlighted features for enhancing model performance, thus opening up a new area of research: using XAI as an approach for feature subset selection.
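
To make the mechanics behind such per-feature relevance scores concrete, the following is a minimal NumPy sketch of the LRP-ε rule on a toy two-layer ReLU network for one tabular sample. The network, weights, and feature values are illustrative placeholders, not the 1D-CNN or datasets used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy fully connected network: 4 tabular features -> 3 hidden ReLU units -> 1 output score.
# Weights are random placeholders; in practice they come from the trained model.
W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)

x = np.array([0.7, -1.2, 0.3, 2.0])        # one (standardized) input sample
a1 = np.maximum(0.0, x @ W1 + b1)          # hidden activations
out = a1 @ W2 + b2                         # model output, e.g. a fraud score

def lrp_eps(a_in, W, R_out, eps=1e-6):
    """Redistribute relevance R_out from a layer's outputs to its inputs (LRP-epsilon)."""
    z = a_in @ W                                # pre-activations (biases omitted for simplicity)
    z = z + eps * np.where(z >= 0, 1.0, -1.0)   # stabilizer against division by ~0
    s = R_out / z                               # per-output scaling factor
    return a_in * (W @ s)                       # relevance attributed to each input unit

R_hidden = lrp_eps(a1, W2, out)            # relevance of hidden units (start from the output)
R_input = lrp_eps(x, W1, R_hidden)         # relevance of the 4 input features

# Relevance is (approximately) conserved: the per-feature scores sum to the output.
print("feature relevances:", np.round(R_input, 4), "sum:", round(float(R_input.sum()), 4))
```
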
2

Hsu, Sheng-Yi, Mau-Hsiang Shih, Wu-Hsiung Wu, Hao-Ren Yao, and Feng-Sheng Tsai. "Gene reduction for cancer detection using layer-wise relevance propagation." Journal of Decision Making and Healthcare 1, no. 1 (2024): 30–44. http://dx.doi.org/10.69829/jdmh-024-0101-ta03.

Abstract:
Precise detection of cancer types and normal tissues is crucial for cancer diagnosis. Specifically, cancer classification using gene expression data is key to identify genes whose expression patterns are tumor-specific. Here we aim to search for a minimal set of genes that may reduce the expression complexity and retain a qualified classification accuracy accordingly. We applied neural network models with layer-wise relevance propagation (LRP) to find genes that significantly contribute to classification. Two algorithms for the LRP-candidate gene selection and the cycle of gene reduction were proposed. By implementing the two algorithms for gene reduction, our model retained 95.32% validation accuracy to make classification of six cancer types and normal with a minimal set of seven genes. Furthermore, a cross-evaluation process was performed on the minimal set of seven genes, indicating that the selected marker genes in five out of six cancer types are biologically relevant to cancer annotated by the COSMIC Cancer Gene Census.
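
The relevance-guided reduction cycle described in this abstract can be approximated, in spirit, by repeatedly dropping the least relevant feature and retraining until accuracy degrades. The sketch below is a generic illustration of that idea, not the authors' two algorithms; `train_and_score` and `relevance` are hypothetical callbacks standing in for the user's own training and LRP routines.

```python
import numpy as np

def reduce_features(X, y, train_and_score, relevance, min_features=7, tol=0.01):
    """Iteratively prune the least LRP-relevant feature while accuracy stays within `tol`.

    train_and_score(X, y) -> (model, accuracy) and relevance(model, X) -> array of shape
    (n_samples, n_features) are placeholders for the user's own training and LRP code.
    """
    keep = np.arange(X.shape[1])                        # indices of retained features (genes)
    model, best_acc = train_and_score(X[:, keep], y)
    while len(keep) > min_features:
        scores = np.abs(relevance(model, X[:, keep])).mean(axis=0)   # mean |relevance| per feature
        trial = np.sort(keep[np.argsort(scores)[1:]])   # drop the single least relevant feature
        model_trial, acc = train_and_score(X[:, trial], y)
        if acc < best_acc - tol:                        # stop once pruning costs too much accuracy
            break
        keep, model, best_acc = trial, model_trial, acc
    return keep, best_acc
```
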
3

Li, Y. Y., S. Y. Huang, S. B. Xu, et al. "Selection of the Main Control Parameters for the Dst Index Prediction Model Based on a Layer-wise Relevance Propagation Method." Astrophysical Journal Supplement Series 260, no. 1 (2022): 6. http://dx.doi.org/10.3847/1538-4365/ac616c.

Abstract:
The prediction of the Dst index is an important subject in space weather, and it has progressed significantly with the widespread application of neural networks. The selection of input parameters is critical for the prediction model of the Dst index or other space-weather models. In this study, we perform a layer-wise relevance propagation (LRP) method to select the main parameters for the prediction of the Dst index and understand the physical interpretability of neural networks for the first time. Taking an hourly Dst index and 10 types of solar wind parameters as the inputs, we utilize a long short-term memory network to predict the Dst index and present the LRP method to analyze the dependence of the Dst index on these parameters. LRP defines a relevance score for each input, and a higher relevance score indicates that the corresponding input parameter contributes more to the output. The results show that Dst, Ey, Bz, and V are the main control parameters for Dst index prediction. To verify the LRP method, we design two more supplementary experiments for further confirmation. These results confirm that the LRP method can reduce the initial dimension of neural network input at the cost of minimum information loss and contribute to the understanding of physical processes in space weather.
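
Ranking input channels by relevance, as done here for the solar-wind parameters, typically amounts to aggregating per-timestep relevance over time and samples. A hedged sketch with stand-in data follows; the array shapes and channel names are assumptions, not the paper's setup.

```python
import numpy as np

# R holds per-timestep LRP relevance for each input channel of the sequence model:
# shape (n_samples, n_timesteps, n_channels). Here it is random stand-in data.
rng = np.random.default_rng(1)
R = rng.normal(size=(500, 24, 10))
channel_names = [f"param_{i}" for i in range(10)]      # hypothetical channel labels

# Aggregate |relevance| over time, then average over samples, to score each input channel.
channel_score = np.abs(R).sum(axis=1).mean(axis=0)
ranking = np.argsort(channel_score)[::-1]
for i in ranking[:4]:                                  # the four highest-relevance inputs
    print(channel_names[i], round(float(channel_score[i]), 3))
```
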
4

Ahlawat, Neha. "Multimodal Deep Belief Network with Layer-Wise Relevance Propagation: A Solution for Heterogeneous Image Challenges in Big Data." Journal of Information Systems Engineering and Management 10, no. 22s (2025): 736–41. https://doi.org/10.52783/jisem.v10i22s.3616.

Abstract:
Cancer is a complex and heterogeneous disease, with diverse molecular profiles and clinical outcomes. Accurate cancer classification is crucial for personalized treatment strategies and improved patient survival. The advent of high-throughput technologies has generated vast amounts of multi-dimensional data, including genomic, proteomic, and clinical information. Analyzing this "big data" requires sophisticated computational methods. This paper presents an improvised approach for Layer-wise Relevance Propagation (LRP) in Multimodal Deep Belief Networks (MDBNs) for cancer classification. By integrating Clipped Activation and Contrastive Divergence (CD), we enhance model interpretability and performance, addressing challenges like vanishing gradients and slow convergence. Our approach improves the efficiency of LRP while ensuring stable training and faster model convergence. Experiments on multimodal medical data, including brain, breast, and bone scans, demonstrate significant gains in classification accuracy and interpretability compared to traditional methods, offering a scalable solution for deep learning in healthcare.
5

Ado, Abubakar, Olalekan J. Awujoola, Sabiu Danlami Abdullahi, and Sulaiman Hashim Ibrahim. "Integration of Layer-Wise Relevance Propagation, Recursive Data Pruning, and Convolutional Neural Networks for Improved Text Classification." FUDMA Journal of Sciences 9, no. 2 (2025): 35–41. https://doi.org/10.33003/fjs-2025-0902-3058.

Abstract:
This research presents a significant advancement in text classification by integrating Layer-wise Relevance Propagation (LRP), recursive data pruning, and Convolutional Neural Networks (CNNs) with cross-validation. The study addresses the critical limitations of existing text classification methods, particularly issues of information loss and overfitting, which often hinder the efficiency and interpretability of models in natural language processing (NLP). To overcome these challenges, the proposed model employs LRP to enhance the interpretability of the classification process, allowing for precise identification of relevant features that contribute to decision-making. Additionally, the implementation of recursive data pruning optimizes model efficiency by dynamically eliminating irrelevant or redundant data, thereby reducing computational complexity without compromising performance. The effectiveness of the approach is further bolstered by utilizing cross-validation techniques to ensure robust evaluation across diverse datasets. The empirical evaluation of the integrated model revealed remarkable improvements in classification performance, achieving an accuracy of 94%, surpassing the benchmark of 92.88% established by the ReDP-CNN model proposed by Li et al. (2020). The comprehensive assessment included detailed metrics such as precision, recall, and F1-score, confirming the model's robust capability in accurately classifying text data across various categories.
6

Lee, Jae-Eung, and Ji-Hyeong Han. "Layer-wise Relevance Propagation (LRP) Based Technical and Macroeconomic Indicator Impact Analysis for an Explainable Deep Learning Model to Predict an Increase and Decrease in KOSPI." Journal of KIISE 48, no. 12 (2021): 1289–97. http://dx.doi.org/10.5626/jok.2021.48.12.1289.

7

Du, Meng, Daping Bi, Mingyang Du, Xinsong Xu, and Zilong Wu. "ULAN: A Universal Local Adversarial Network for SAR Target Recognition Based on Layer-Wise Relevance Propagation." Remote Sensing 15, no. 1 (2022): 21. http://dx.doi.org/10.3390/rs15010021.

Abstract:
Recent studies have proven that synthetic aperture radar (SAR) automatic target recognition (ATR) models based on deep neural networks (DNN) are vulnerable to adversarial examples. However, existing attacks easily fail in the case where adversarial perturbations cannot be fully fed to victim models. We call this situation perturbation offset. Moreover, since background clutter takes up most of the area in SAR images and has low relevance to recognition results, fooling models with global perturbations is quite inefficient. This paper proposes a semi-white-box attack network called Universal Local Adversarial Network (ULAN) to generate universal adversarial perturbations (UAP) for the target regions of SAR images. In the proposed method, we calculate the model’s attention heatmaps through layer-wise relevance propagation (LRP), which is used to locate the target regions of SAR images that have high relevance to recognition results. In particular, we utilize a generator based on U-Net to learn the mapping from noise to UAPs and craft adversarial examples by adding the generated local perturbations to target regions. Experiments indicate that the proposed method effectively prevents perturbation offset and achieves comparable attack performance to conventional global UAPs by perturbing only a quarter or less of SAR image areas.
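
Turning an LRP attention heat map into a local perturbation region can be illustrated with simple top-quantile thresholding. The sketch below shows that idea only and is not the ULAN pipeline; the image, heat map, and perturbation are random stand-ins.

```python
import numpy as np

def relevance_mask(heatmap, keep_fraction=0.25):
    """Binary mask covering the `keep_fraction` most relevant pixels of an LRP heat map."""
    threshold = np.quantile(heatmap, 1.0 - keep_fraction)
    return heatmap >= threshold

rng = np.random.default_rng(2)
image = rng.normal(size=(64, 64))                 # stand-in SAR chip
heatmap = rng.random(size=(64, 64))               # stand-in LRP relevance map
perturbation = 0.1 * rng.normal(size=(64, 64))    # stand-in (universal) perturbation

mask = relevance_mask(heatmap)                    # ~25% of pixels: the most relevant region
adversarial = image + mask * perturbation         # perturb only inside the high-relevance region
print("perturbed pixels:", int(mask.sum()), "of", mask.size)
```
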
8

Nazari, Mahmood, Andreas Kluge, Ivayla Apostolova, et al. "Explainable AI to improve acceptance of convolutional neural networks for automatic classification of dopamine transporter SPECT in the diagnosis of clinically uncertain parkinsonian syndromes." European Journal of Nuclear Medicine and Molecular Imaging 49, no. 4 (2021): 1176–86. http://dx.doi.org/10.1007/s00259-021-05569-9.

Abstract:
Purpose: Deep convolutional neural networks (CNN) provide high accuracy for automatic classification of dopamine transporter (DAT) SPECT images. However, CNN are inherently black-box in nature, lacking any kind of explanation for their decisions. This limits their acceptance for clinical use. This study tested layer-wise relevance propagation (LRP) to explain CNN-based classification of DAT-SPECT in patients with clinically uncertain parkinsonian syndromes. Methods: The study retrospectively included 1296 clinical DAT-SPECT with visual binary interpretation as “normal” or “reduced” by two experienced readers as standard-of-truth. A custom-made CNN was trained with 1008 randomly selected DAT-SPECT. The remaining 288 DAT-SPECT were used to assess classification performance of the CNN and to test LRP for explanation of the CNN-based classification. Results: Overall accuracy, sensitivity, and specificity of the CNN were 95.8%, 92.8%, and 98.7%, respectively. LRP provided relevance maps that were easy to interpret in each individual DAT-SPECT. In particular, the putamen in the hemisphere most affected by nigrostriatal degeneration was the most relevant brain region for CNN-based classification in all reduced DAT-SPECT. Some misclassified DAT-SPECT showed an “inconsistent” relevance map more typical for the true class label. Conclusion: LRP is useful to provide explanation of CNN-based decisions in individual DAT-SPECT and, therefore, can be recommended to support CNN-based classification of DAT-SPECT in clinical routine. Total computation time of 3 s is compatible with busy clinical workflow. The utility of “inconsistent” relevance maps to identify misclassified cases requires further investigation.
9

Zang, Bo, Linlin Ding, Zhenpeng Feng, et al. "CNN-LRP: Understanding Convolutional Neural Networks Performance for Target Recognition in SAR Images." Sensors 21, no. 13 (2021): 4536. http://dx.doi.org/10.3390/s21134536.

Abstract:
Target recognition is one of the most challenging tasks in synthetic aperture radar (SAR) image processing since it is highly affected by a series of pre-processing techniques which usually require sophisticated manipulation for different data and consume huge calculation resources. To alleviate this limitation, numerous deep-learning based target recognition methods are proposed, particularly combined with convolutional neural network (CNN) due to its strong capability of data abstraction and end-to-end structure. In this case, although complex pre-processing can be avoided, the inner mechanism of CNN is still unclear. Such a “black box” only tells a result but not what CNN learned from the input data, thus it is difficult for researchers to further analyze the causes of errors. Layer-wise relevance propagation (LRP) is a prevalent pixel-level rearrangement algorithm to visualize neural networks’ inner mechanism. LRP is usually applied in sparse auto-encoder with only fully-connected layers rather than CNN, but such network structure usually obtains much lower recognition accuracy than CNN. In this paper, we propose a novel LRP algorithm particularly designed for understanding CNN’s performance on SAR image target recognition. We provide a concise form of the correlation between output of a layer and weights of the next layer in CNNs. The proposed method can provide positive and negative contributions in input SAR images for CNN’s classification, viewed as a clear visual understanding of CNN’s recognition mechanism. Numerous experimental results demonstrate the proposed method outperforms common LRP.
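
The split into positive and negative contributions is commonly expressed with an LRP-αβ rule that treats excitatory and inhibitory pre-activations separately. The following single-layer NumPy sketch shows the generic αβ rule, not the paper's specific derivation for CNNs; all inputs are illustrative.

```python
import numpy as np

def lrp_alpha_beta(a_in, W, R_out, alpha=2.0, beta=1.0, eps=1e-6):
    """One LRP-alpha-beta step: split relevance into positive (z+) and negative (z-) parts."""
    z = a_in[:, None] * W                     # contribution of each input to each output
    z_pos = np.clip(z, 0.0, None)             # excitatory contributions
    z_neg = np.clip(z, None, 0.0)             # inhibitory contributions
    s_pos = R_out / (z_pos.sum(axis=0) + eps)
    s_neg = R_out / (z_neg.sum(axis=0) - eps)
    # alpha weights the positive share, beta the negative share (alpha - beta = 1 conserves relevance).
    return (alpha * z_pos * s_pos - beta * z_neg * s_neg).sum(axis=1)

a_in = np.array([0.5, 1.0, 0.0, 2.0])          # activations entering a linear layer
W = np.random.default_rng(3).normal(size=(4, 2))
R_out = np.array([1.0, 0.5])                   # relevance arriving at the two outputs
print("input relevances:", np.round(lrp_alpha_beta(a_in, W, R_out), 4))
```
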
10

Wang, He-Sheng, Dah-Jing Jwo, and Zhi-Hang Gao. "Towards Explainable Artificial Intelligence for GNSS Multipath LSTM Training Models." Sensors 25, no. 3 (2025): 978. https://doi.org/10.3390/s25030978.

Abstract:
This paper addresses the critical challenge of understanding and interpreting deep learning models in Global Navigation Satellite System (GNSS) applications, specifically focusing on multipath effect detection and analysis. As GNSS systems become increasingly reliant on deep learning for signal processing, the lack of model interpretability poses significant risks for safety-critical applications. We propose a novel approach combining Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM) cells with Layer-wise Relevance Propagation (LRP) to create an explainable framework for multipath detection. Our key contributions include: (1) the development of an interpretable LSTM architecture for processing GNSS observables, including multipath variables, carrier-to-noise ratios, and satellite elevation angles; (2) the adaptation of the LRP technique for GNSS signal analysis, enabling attribution of model decisions to specific input features; and (3) the discovery of a correlation between LRP relevance scores and signal anomalies, leading to a new method for anomaly detection. Through systematic experimental validation, we demonstrate that our LSTM model achieves high prediction accuracy across all GNSS parameters while maintaining interpretability. A significant finding emerges from our controlled experiments: LRP relevance scores consistently increase during anomalous signal conditions, with growth rates varying from 7.34% to 32.48% depending on the feature type. In our validation experiments, we systematically introduced signal anomalies in specific time segments of the data sequence and observed corresponding increases in LRP scores: multipath parameters showed increases of 7.34–8.81%, carrier-to-noise ratios exhibited changes of 12.50–32.48%, and elevation angle parameters increased by 16.10%. These results demonstrate the potential of LRP-based analysis for enhancing GNSS signal quality monitoring and integrity assessment. Our approach not only improves the interpretability of deep learning models in GNSS applications but also provides a practical framework for detecting and analyzing signal anomalies, contributing to the development of more reliable and trustworthy navigation systems.

Dissertations / Theses on the topic "Layer-wise Relevance Propagation (LRP)"

1

Rosenlew, Matilda, and Timas Ljungdahl. "Using Layer-wise Relevance Propagation and Sensitivity Analysis Heatmaps to understand the Classification of an Image produced by a Neural Network." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-252702.

Abstract:
Neural networks are regarded as state of the art within many areas of machine learning; however, due to their growing complexity and size, questions regarding their trustworthiness and understandability have been raised. Thus, neural networks are often considered a "black box". This has led to the emergence of evaluation methods that try to decipher these complex networks. Two of these methods, layer-wise relevance propagation (LRP) and sensitivity analysis (SA), are used to generate heatmaps, which present pixels in the input image that have an impact on the classification. In this report, the aim is to do a usability analysis by evaluating and comparing these methods to see how they can be used to understand a particular classification. The method used in this report is to iteratively distort image regions that were highlighted as important by the two heatmapping methods. This led to the findings that distorting essential features of an image according to the LRP heatmaps led to a decrease in classification score, while distorting inessential features of an image according to the combination of SA and LRP heatmaps led to an increase in classification score. The results corresponded well with the theory of the heatmapping methods and led to the conclusion that a combination of the two evaluation methods is advocated for, to fully understand a particular classification.
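
The distortion procedure used in the thesis resembles the standard "pixel-flipping" style of heat-map evaluation: occlude the most relevant regions first and track how the class score changes. A generic sketch follows; `predict` is a placeholder classifier and the image and heat map are random stand-ins.

```python
import numpy as np

def region_flipping(image, heatmap, predict, patch=8, steps=10):
    """Occlude the most relevant patch-sized regions first and record the class score."""
    h, w = heatmap.shape
    # Relevance of each patch = sum of the pixel relevances inside it.
    patch_scores = heatmap.reshape(h // patch, patch, w // patch, patch).sum(axis=(1, 3))
    order = np.argsort(patch_scores, axis=None)[::-1]     # patches, most relevant first
    x = image.copy()
    scores = [predict(x)]
    for idx in order[:steps]:
        pi, pj = divmod(int(idx), patch_scores.shape[1])
        x[pi * patch:(pi + 1) * patch, pj * patch:(pj + 1) * patch] = 0.0   # occlude the patch
        scores.append(predict(x))            # a faithful heat map should make this drop quickly
    return scores

# Stand-in data and a dummy "classifier" that just averages pixel values.
rng = np.random.default_rng(4)
img, hm = rng.random((64, 64)), rng.random((64, 64))
print(np.round(region_flipping(img, hm, predict=lambda z: float(z.mean())), 3))
```
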
2

Lapuschkin, Sebastian. "Opening the machine learning black box with Layer-wise Relevance Propagation." Doctoral thesis, Technische Universität Berlin, 2019. Supervisor: Klaus-Robert Müller; reviewers: Klaus-Robert Müller, Thomas Wiegand, and Jose C. Principe. http://d-nb.info/1177139251/34.


Book chapters on the topic "Layer-wise Relevance Propagation (LRP)"

1

Montavon, Grégoire, Jacob Kauffmann, Wojciech Samek, and Klaus-Robert Müller. "Explaining the Predictions of Unsupervised Learning Models." In xxAI - Beyond Explainable AI. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_7.

Abstract:
Unsupervised learning is a subfield of machine learning that focuses on learning the structure of data without making use of labels. This implies a different set of learning algorithms than those used for supervised learning, and consequently, also prevents a direct transposition of Explainable AI (XAI) methods from the supervised to the less studied unsupervised setting. In this chapter, we review our recently proposed ‘neuralization-propagation’ (NEON) approach for bringing XAI to workhorses of unsupervised learning such as kernel density estimation and k-means clustering. NEON first converts (without retraining) the unsupervised model into a functionally equivalent neural network so that, in a second step, supervised XAI techniques such as layer-wise relevance propagation (LRP) can be used. The approach is showcased on two application examples: (1) analysis of spending behavior in wholesale customer data and (2) analysis of visual features in industrial and scene images.
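
The "neuralization" step can be illustrated for k-means: hard cluster assignments are rewritten as per-cluster evidence scores computed from distances to the centroids, turning the clustering into a small piecewise-linear network to which LRP rules can subsequently be applied. The sketch below follows this general idea only and is not the authors' exact construction.

```python
import numpy as np

def kmeans_cluster_evidence(x, centroids):
    """Rewrite a hard k-means assignment as per-cluster evidence scores f_c(x).

    f_c(x) = min over k != c of 0.5 * (||x - mu_k||^2 - ||x - mu_c||^2), so f_c(x) > 0
    exactly when x is assigned to cluster c. The assignment thereby becomes the output of
    a small piecewise-linear "network" to which relevance propagation can then be applied.
    """
    d2 = ((x[None, :] - centroids) ** 2).sum(axis=1)    # squared distance to every centroid
    f = np.empty(len(centroids))
    for c in range(len(centroids)):
        f[c] = 0.5 * (np.delete(d2, c).min() - d2[c])
    return f

rng = np.random.default_rng(5)
mu = rng.normal(size=(3, 2))                 # three 2-D cluster centroids
x = mu[1] + 0.1 * rng.normal(size=2)         # a point lying near centroid 1
print(np.round(kmeans_cluster_evidence(x, mu), 3))   # evidence expected positive only for cluster 1
```
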
2

Nikolakis, Nikolaos, Paolo Catti, and Kosmas Alexopoulos. "An Explainable Active Learning Approach for Enhanced Defect Detection in Manufacturing." In Lecture Notes in Mechanical Engineering. Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-86489-6_5.

Abstract:
Artificial Intelligence (AI) can significantly support manufacturing companies in their pursuit of operational excellence, by maintaining efficiency while minimizing defects. However, the complexity of AI solutions often creates a barrier to their practical application. Transparency and user-friendliness should be prioritized to ensure that the insights generated by AI can be effectively applied in real-time decision-making. To bridge this gap and foster a collaborative environment where AI and human expertise collectively drive operational excellence, this paper suggests an AI approach that targets identifying defects in production while providing understandable insights. A semi-supervised convolutional neural network (CNN) with attention mechanisms and Layer-wise Relevance Propagation (LRP) for explainable active learning is discussed. Both predictions and feedback from human experts are used to dynamically adjust the learning focus, ensuring a continuous improvement cycle in defect detection capabilities. The proposed approach has been tested in a use case related to the manufacturing of batteries. Preliminary results demonstrate substantial improvements in prediction accuracy and operational efficiency, offering a scalable solution for industrial applications aiming at zero defects.
3

Becking, Daniel, Maximilian Dreyer, Wojciech Samek, Karsten Müller, and Sebastian Lapuschkin. "ECQˣ: Explainability-Driven Quantization for Low-Bit and Sparse DNNs." In xxAI - Beyond Explainable AI. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_14.

Abstract:
The remarkable success of deep neural networks (DNNs) in various applications is accompanied by a significant increase in network parameters and arithmetic operations. Such increases in memory and computational demands make deep learning prohibitive for resource-constrained hardware platforms such as mobile devices. Recent efforts aim to reduce these overheads, while preserving model performance as much as possible, and include parameter reduction techniques, parameter quantization, and lossless compression techniques. In this chapter, we develop and describe a novel quantization paradigm for DNNs: Our method leverages concepts of explainable AI (XAI) and concepts of information theory: Instead of assigning weight values based on their distances to the quantization clusters, the assignment function additionally considers weight relevances obtained from Layer-wise Relevance Propagation (LRP) and the information content of the clusters (entropy optimization). The ultimate goal is to preserve the most relevant weights in quantization clusters of highest information content. Experimental results show that this novel Entropy-Constrained and XAI-adjusted Quantization (ECQˣ) method generates ultra low-precision (2–5 bit) and simultaneously sparse neural networks while maintaining or even improving model performance. Due to reduced parameter precision and high number of zero-elements, the rendered networks are highly compressible in terms of file size, up to 103× compared to the full-precision unquantized DNN model. Our approach was evaluated on different types of models and datasets (including Google Speech Commands, CIFAR-10 and Pascal VOC) and compared with previous work.
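
A toy version of relevance-adjusted cluster assignment is sketched below: instead of assigning each weight purely to its nearest quantization centroid, the assignment cost is modulated by an LRP-derived relevance term so that relevant weights avoid the zero cluster. The weighting is a simplified stand-in and does not reproduce the actual ECQˣ criterion or its entropy term.

```python
import numpy as np

def relevance_adjusted_assignment(weights, relevances, centroids, lam=0.5):
    """Assign each weight to a quantization centroid using distance plus a relevance bonus.

    Distance-only assignment would send small weights to the zero centroid; the (toy)
    relevance term lowers the cost of non-zero centroids for weights that LRP marks as
    important, so they survive sparsification.
    """
    dist = (weights[:, None] - centroids[None, :]) ** 2              # (n_weights, n_centroids)
    bonus = relevances[:, None] * (centroids[None, :] != 0.0)        # reward non-zero clusters
    cost = dist - lam * bonus
    return centroids[np.argmin(cost, axis=1)]

rng = np.random.default_rng(6)
w = rng.normal(scale=0.05, size=8)                        # small weights, candidates for pruning
r = np.array([0.9, 0.0, 0.8, 0.1, 0.0, 0.7, 0.0, 0.0])    # normalized LRP relevance per weight
q = np.array([-0.1, 0.0, 0.1])                            # a tiny 3-level quantization grid
print(relevance_adjusted_assignment(w, r, q))             # relevant weights avoid the 0 centroid
```
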
4

Montavon, Grégoire, Alexander Binder, Sebastian Lapuschkin, Wojciech Samek, and Klaus-Robert Müller. "Layer-Wise Relevance Propagation: An Overview." In Explainable AI: Interpreting, Explaining and Visualizing Deep Learning. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28954-6_10.

5

Binder, Alexander, Sebastian Bach, Grégoire Montavon, Klaus-Robert Müller, and Wojciech Samek. "Layer-Wise Relevance Propagation for Deep Neural Network Architectures." In Lecture Notes in Electrical Engineering. Springer Singapore, 2016. http://dx.doi.org/10.1007/978-981-10-0557-2_87.

6

Weinberger, Patrick, Bernhard Fröhler, Anja Heim, Alexander Gall, Ulrich Bodenhofer, and Sascha Senck. "Applying Layer-Wise Relevance Propagation on U-Net Architectures." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2024. https://doi.org/10.1007/978-3-031-78198-8_8.

7

Jia, Wohuan, Shaoshuai Zhang, Yue Jiang, and Li Xu. "Interpreting Convolutional Neural Networks via Layer-Wise Relevance Propagation." In Lecture Notes in Computer Science. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-06794-5_37.

8

Erdem, Türkücan, and Süleyman Eken. "Layer-Wise Relevance Propagation for Smart-Grid Stability Prediction." In Pattern Recognition and Artificial Intelligence. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04112-9_24.

9

Otsuki, Seitaro, Tsumugi Iida, Félix Doublet, et al. "Layer-Wise Relevance Propagation with Conservation Property for ResNet." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-72775-7_20.

10

Winter, Daniel, Ang Bian, and Xiaoyi Jiang. "Layer-Wise Relevance Propagation Based Sample Condensation for Kernel Machines." In Computer Analysis of Images and Patterns. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-89128-2_47.


Conference papers on the topic "Layer-wise Relevance Propagation (LRP)"

1

Rathod, Prajakta, Shefali Naik, and Jayendra Bhalodiya. "Epilepsy Detection with CNN and Explanation with Layer-wise Relevance Propagation." In 2024 15th International Conference on Computing Communication and Networking Technologies (ICCCNT). IEEE, 2024. http://dx.doi.org/10.1109/icccnt61001.2024.10726082.

2

David, Muhumuza, Mawejje Mark William, Nabagereka Bridget, Nakayiza Hellen Raudha, Rose Nakibuule, and Ggaliwango Marvin. "Layer-wise Relevance Backward Propagation for Fake Face Detection and Reconstruction." In 2024 5th International Conference on Image Processing and Capsule Networks (ICIPCN). IEEE, 2024. http://dx.doi.org/10.1109/icipcn63822.2024.00102.

3

Manuela, Ukech Melissa, Rose Nakasi, Daudi Jjingo, Nakayiza Hellen, Martin Ngobye, and Ggaliwango Marvin. "Machine Vision Intelligence Using Layer-Wise Relevance Backward Propagation For Breast Cancer Diagnosis." In 2024 5th International Conference on Image Processing and Capsule Networks (ICIPCN). IEEE, 2024. http://dx.doi.org/10.1109/icipcn63822.2024.00031.

4

Tanveer, Hira, and Seemab Latif. "Decoding Stock Market Predictions: Insights from Explainable AI Using Layer-wise Relevance Propagation." In 2024 26th International Multitopic Conference (INMIC). IEEE, 2024. https://doi.org/10.1109/inmic64792.2024.11004356.

5

Bhati, Deepshikha, Fnu Neha, Md Amiruzzaman, Angela Guercio, Deepak Kumar Shukla, and Ben Ward. "Neural Network Interpretability with Layer-Wise Relevance Propagation: Novel Techniques for Neuron Selection and Visualization." In 2025 IEEE 15th Annual Computing and Communication Workshop and Conference (CCWC). IEEE, 2025. https://doi.org/10.1109/ccwc62904.2025.10903721.

6

Comanducci, Luca, Fabio Antonacci, and Augusto Sarti. "Interpreting End-to-End Deep Learning Models for Speech Source Localization Using Layer-Wise Relevance Propagation." In 2024 32nd European Signal Processing Conference (EUSIPCO). IEEE, 2024. http://dx.doi.org/10.23919/eusipco63174.2024.10715394.

7

Landt-Hayen, Marco, Willi Rath, Martin Claus, and Peer Kröger. "Fact or Artifact? Revise Layer-Wise Relevance Propagation on Various ANN Architectures." In 9th International Conference on Computer Science, Engineering and Applications. Academy & Industry Research Collaboration Center, 2023. http://dx.doi.org/10.5121/csit.2023.132305.

Abstract:
Layer-wise relevance propagation (LRP) is a widely used and powerful technique to reveal insights into various artificial neural network (ANN) architectures. LRP is often used in the context of image classification. The aim is to understand which parts of the input sample have the highest relevance and hence the most influence on the model prediction. Relevance can be traced back through the network to attribute a certain score to each input pixel. Relevance scores are then combined and displayed as heat maps that give humans an intuitive visual understanding of classification models. Opening the black box to understand the classification engine in great detail is essential for domain experts to gain trust in ANN models. However, there are pitfalls in terms of model-inherent artifacts included in the obtained relevance maps that can easily be missed. For a valid interpretation, these artifacts must not be ignored. Here, we apply and revise LRP on various ANN architectures trained as classifiers on geospatial and synthetic data. Depending on the network architecture, we show techniques to control model focus and give guidance to improve the quality of obtained relevance maps to separate facts from artifacts.
8

Landt-Hayen, Marco, Peer Kröger, Martin Claus, and Willi Rath. "Layer-Wise Relevance Propagation for Echo State Networks Applied to Earth System Variability." In 8th International Conference on Signal, Image Processing and Embedded Systems (SIGEM 2022). Academy and Industry Research Collaboration Center (AIRCC), 2022. http://dx.doi.org/10.5121/csit.2022.122008.

Abstract:
Artificial neural networks (ANNs) are powerful methods for many hard problems (e.g. image classification or time series prediction). However, these models are often difficult to interpret. Layer-wise relevance propagation (LRP) is a widely used technique to understand how ANN models come to their conclusion and to understand what a model has learned. Here, we focus on Echo State Networks (ESNs) as a particular type of recurrent neural network. ESNs are easy to train and only require a small number of trainable parameters. We show how LRP can be applied to ESNs to open the black box. We also show an efficient way in which ESNs can be used for image classification: Our ESN model serves as a detector for El Niño Southern Oscillation (ENSO) from sea surface temperature anomalies. ENSO is a well-known problem. Here, we use this problem to demonstrate how LRP can significantly enhance the explainability of ESNs.
9

Sato, Matthew M., Vivian Wen Hui Wong, Kincho H. Law, Ho Yeung, and Paul Witherell. "Explainability of Laser Powder Bed Fusion Melt Pool Classification Using Deep Learning." In ASME 2023 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2023. http://dx.doi.org/10.1115/detc2023-109137.

Abstract:
Laser powder bed fusion (LPBF) has shown enormous potential for metal additive manufacturing in recent years. However, the relationship between the LPBF process parameters and part quality is not yet fully understood. Some LPBF machines now use cameras to monitor melt pools during manufacturing. Machine learning techniques have been proposed to analyze the melt pool data and to evaluate the quality of the manufacturing process. However, these machine learning techniques often appear as a black box and the underlying decisions made by the machine learning models are unknown. This paper proposes a neural network to classify the melt pool shapes using melt pool images and process parameters as model inputs. With both process parameters and the melt pool image being included, an explainable artificial intelligence (XAI) approach is developed to interpret the neural network and understand the relationships between the melt pool shape and the process parameters. Specifically, layer-wise relevance propagation (LRP) is used to reveal the relevance of process parameters in the neural network’s decision-making. Using LRP, relationships between the process parameters and melt pool shapes are revealed without explicit knowledge of the underlying physics. These relationships can potentially be used to adjust the process parameters and improve the quality of LPBF manufactured parts. This paper demonstrates how neural networks and XAI can effectively identify relationships between process parameters and LPBF melt pools.
10

Utsumi, Akira. "Refining Pretrained Word Embeddings Using Layer-wise Relevance Propagation." In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 2018. http://dx.doi.org/10.18653/v1/d18-1520.
