Academic literature on the topic 'Layer-wise relevance propagation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Layer-wise relevance propagation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Layer-wise relevance propagation"

1

Huang, Xinyi, Suphanut Jamonnak, Ye Zhao, Tsung Heng Wu, and Wei Xu. "A Visual Designer of Layer‐wise Relevance Propagation Models." Computer Graphics Forum 40, no. 3 (2021): 227–38. http://dx.doi.org/10.1111/cgf.14302.

2

Jung, Yeon-Jee, Seung-Ho Han, and Ho-Jin Choi. "Explaining CNN and RNN Using Selective Layer-Wise Relevance Propagation." IEEE Access 9 (2021): 18670–81. http://dx.doi.org/10.1109/access.2021.3051171.

3

Jung, Yeon‐Jee, Seung‐Ho Han, and Ho‐Jin Choi. "SLRP: Improved heatmap generation via selective layer‐wise relevance propagation." Electronics Letters 57, no. 10 (2021): 393–96. http://dx.doi.org/10.1049/ell2.12061.

4

Bach, Sebastian, Alexander Binder, Grégoire Montavon, Frederick Klauschen, Klaus-Robert Müller, and Wojciech Samek. "On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation." PLOS ONE 10, no. 7 (2015): e0130140. http://dx.doi.org/10.1371/journal.pone.0130140.

5

Xu, Jincheng, and Qingfeng Du. "Adversarial attacks on text classification models using layer‐wise relevance propagation." International Journal of Intelligent Systems 35, no. 9 (2020): 1397–415. http://dx.doi.org/10.1002/int.22260.

6

Grezmak, John, Jianjing Zhang, Peng Wang, Kenneth A. Loparo, and Robert X. Gao. "Interpretable Convolutional Neural Network Through Layer-wise Relevance Propagation for Machine Fault Diagnosis." IEEE Sensors Journal 20, no. 6 (2020): 3172–81. http://dx.doi.org/10.1109/jsen.2019.2958787.

7

Kim, Juhwan, Geun Ho Gu, Juhwan Noh, et al. "Predicting potentially hazardous chemical reactions using an explainable neural network." Chemical Science 12, no. 33 (2021): 11028–37. http://dx.doi.org/10.1039/d1sc01049b.

Abstract:
An explainable neural network model is developed to predict the formation of hazardous products for chemical reactions. An input attribution method, layer-wise relevance propagation, is used to explain the decision-making process.
8

Li, Heyi, Yunke Tian, Klaus Mueller, and Xin Chen. "Beyond saliency: Understanding convolutional neural networks from saliency prediction on layer-wise relevance propagation." Image and Vision Computing 83-84 (March 2019): 70–86. http://dx.doi.org/10.1016/j.imavis.2019.02.005.

9

Eitel, Fabian, Emily Soehler, Judith Bellmann-Strobl, et al. "Uncovering convolutional neural network decisions for diagnosing multiple sclerosis on conventional MRI using layer-wise relevance propagation." NeuroImage: Clinical 24 (2019): 102003. http://dx.doi.org/10.1016/j.nicl.2019.102003.

10

Korda, A. I., A. Ruef, S. Neufang, et al. "Identification of voxel-based texture abnormalities as new biomarkers for schizophrenia and major depressive patients using layer-wise relevance propagation on deep learning decisions." Psychiatry Research: Neuroimaging 313 (July 2021): 111303. http://dx.doi.org/10.1016/j.pscychresns.2021.111303.


Dissertations / Theses on the topic "Layer-wise relevance propagation"

1

Rosenlew, Matilda, and Timas Ljungdahl. "Using Layer-wise Relevance Propagation and Sensitivity Analysis Heatmaps to understand the Classification of an Image produced by a Neural Network." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-252702.

Abstract:
Neural networks are regarded as state of the art within many areas of machine learning; however, due to their growing complexity and size, questions about their trustworthiness and understandability have been raised, and neural networks are often considered a "black box". This has led to the emergence of evaluation methods that try to decipher these complex networks. Two of these methods, layer-wise relevance propagation (LRP) and sensitivity analysis (SA), are used to generate heatmaps that highlight the pixels in the input image that influence the classification. The aim of this report is to carry out a usability analysis by evaluating and comparing these methods to see how they can be used to understand a particular classification. The approach is to iteratively distort image regions that were highlighted as important by the two heatmapping methods. Distorting essential features of an image according to the LRP heatmaps led to a decrease in classification score, while distorting inessential features according to the combination of SA and LRP heatmaps led to an increase in classification score. The results corresponded well with the theory behind the heatmapping methods and led to the conclusion that a combination of the two evaluation methods is advocated for fully understanding a particular classification.
2

Lapuschkin, Sebastian [Verfasser], Klaus-Robert [Akademischer Betreuer] [Gutachter] Müller, Thomas [Gutachter] Wiegand, and Jose C. [Gutachter] Principe. "Opening the machine learning black box with Layer-wise Relevance Propagation / Sebastian Lapuschkin ; Gutachter: Klaus-Robert Müller, Thomas Wiegand, Jose C. Principe ; Betreuer: Klaus-Robert Müller." Berlin : Technische Universität Berlin, 2019. http://d-nb.info/1177139251/34.


Book chapters on the topic "Layer-wise relevance propagation"

1

Montavon, Grégoire, Alexander Binder, Sebastian Lapuschkin, Wojciech Samek, and Klaus-Robert Müller. "Layer-Wise Relevance Propagation: An Overview." In Explainable AI: Interpreting, Explaining and Visualizing Deep Learning. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28954-6_10.

2

Binder, Alexander, Sebastian Bach, Gregoire Montavon, Klaus-Robert Müller, and Wojciech Samek. "Layer-Wise Relevance Propagation for Deep Neural Network Architectures." In Lecture Notes in Electrical Engineering. Springer Singapore, 2016. http://dx.doi.org/10.1007/978-981-10-0557-2_87.

3

Binder, Alexander, Grégoire Montavon, Sebastian Lapuschkin, Klaus-Robert Müller, and Wojciech Samek. "Layer-Wise Relevance Propagation for Neural Networks with Local Renormalization Layers." In Artificial Neural Networks and Machine Learning – ICANN 2016. Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-44781-0_8.

4

Huber, Tobias, Dominik Schiller, and Elisabeth André. "Enhancing Explainability of Deep Reinforcement Learning Through Selective Layer-Wise Relevance Propagation." In KI 2019: Advances in Artificial Intelligence. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30179-8_16.


Conference papers on the topic "Layer-wise relevance propagation"

1

Utsumi, Akira. "Refining Pretrained Word Embeddings Using Layer-wise Relevance Propagation." In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 2018. http://dx.doi.org/10.18653/v1/d18-1520.

2

Wang, Zifei, Xiaolin Huang, Jie Yang, and Nikola Kasabov. "Universal Adversarial Perturbation Generated by Attacking Layer-wise Relevance Propagation." In 2020 IEEE 10th International Conference on Intelligent Systems (IS). IEEE, 2020. http://dx.doi.org/10.1109/is48319.2020.9199956.

3

Iwana, Brian Kenji, Ryohei Kuroki, and Seiichi Uchida. "Explaining Convolutional Neural Networks using Softmax Gradient Layer-wise Relevance Propagation." In 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW). IEEE, 2019. http://dx.doi.org/10.1109/iccvw.2019.00513.

4

Bharadhwaj, Homanga. "Layer-Wise Relevance Propagation for Explainable Deep Learning Based Speech Recognition." In 2018 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT). IEEE, 2018. http://dx.doi.org/10.1109/isspit.2018.8642691.

5

Ranguelova, Elena, Eric J. Pauwels, and Joost Berkhout. "Evaluating Layer-Wise Relevance Propagation Explainability Maps for Artificial Neural Networks." In 2018 IEEE 14th International Conference on e-Science (e-Science). IEEE, 2018. http://dx.doi.org/10.1109/escience.2018.00107.

6

Yang, Yinchong, Volker Tresp, Marius Wunderle, and Peter A. Fasching. "Explaining Therapy Predictions with Layer-Wise Relevance Propagation in Neural Networks." In 2018 IEEE International Conference on Healthcare Informatics (ICHI). IEEE, 2018. http://dx.doi.org/10.1109/ichi.2018.00025.

7

Cik, Ivan, Andrindrasana David Rasamoelina, Marian Mach, and Peter Sincak. "Explaining Deep Neural Network using Layer-wise Relevance Propagation and Integrated Gradients." In 2021 IEEE 19th World Symposium on Applied Machine Intelligence and Informatics (SAMI). IEEE, 2021. http://dx.doi.org/10.1109/sami50585.2021.9378686.

8

Ellis, Charles A., Mohammad S. E. Sendi, Jon T. Willie, and Babak Mahmoudi. "Hierarchical Neural Network with Layer-wise Relevance Propagation for Interpretable Multiclass Neural State Classification." In 2021 10th International IEEE/EMBS Conference on Neural Engineering (NER). IEEE, 2021. http://dx.doi.org/10.1109/ner49283.2021.9441217.

9

Yan, Weizheng, Sergey Plis, Vince D. Calhoun, et al. "Discriminating schizophrenia from normal controls using resting state functional network connectivity: A deep neural network and layer-wise relevance propagation method." In 2017 IEEE 27th International Workshop on Machine Learning for Signal Processing (MLSP). IEEE, 2017. http://dx.doi.org/10.1109/mlsp.2017.8168179.
