Academic literature on the topic "Post-hoc interpretability"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles


Browse the topical lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Post-hoc interpretability".

Next to each source in the reference list there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Post-hoc interpretability"

1

Feng, Jiangfan, Yukun Liang, and Lin Li. "Anomaly Detection in Videos Using Two-Stream Autoencoder with Post Hoc Interpretability." Computational Intelligence and Neuroscience 2021 (July 26, 2021): 1–15. http://dx.doi.org/10.1155/2021/7367870.

Abstract:
The growing interest in deep learning approaches to video surveillance raises concerns about the accuracy and efficiency of neural networks. However, fast and reliable detection of abnormal events is still a challenging task. Here, we introduce a two-stream approach that offers an autoencoder-based structure for fast and efficient detection to facilitate anomaly detection from surveillance video without labeled abnormal events. Furthermore, we present post hoc interpretability of feature map visualization to show the process of feature learning, revealing uncertain and ambiguous decision bound
2

Sinhamahapatra, Poulami, Suprosanna Shit, Anjany Sekuboyina, et al. "Enhancing Interpretability of Vertebrae Fracture Grading using Human-interpretable Prototypes." Machine Learning for Biomedical Imaging 2, July 2024 (2024): 977–1002. http://dx.doi.org/10.59275/j.melba.2024-258b.

Abstract:
Vertebral fracture grading classifies the severity of vertebral fractures, which is a challenging task in medical imaging and has recently attracted Deep Learning (DL) models. Only a few works attempted to make such models human-interpretable despite the need for transparency and trustworthiness in critical use cases like DL-assisted medical diagnosis. Moreover, such models either rely on post-hoc methods or additional annotations. In this work, we propose a novel interpretable-by-design method, ProtoVerse, to find relevant sub-parts of vertebral fractures (prototypes) that reliably explain th
3

Sarma Borah, Proyash Paban, Devraj Kashyap, Ruhini Aktar Laskar, and Ankur Jyoti Sarmah. "A Comprehensive Study on Explainable AI Using YOLO and Post Hoc Method on Medical Diagnosis." Journal of Physics: Conference Series 2919, no. 1 (2024): 012045. https://doi.org/10.1088/1742-6596/2919/1/012045.

Abstract:
Abstract Medical imaging plays a pivotal role in disease detection and intervention. The black-box nature of deep learning models, such as YOLOv8, creates challenges in interpreting their decisions. This paper presents a toolset to enhance interpretability in AI-based diagnostics by integrating Explainable AI (XAI) techniques with YOLOv8. This paper explores implementation of post hoc methods, including Grad-CAM and Eigen CAM, to assist end users in understanding the decision-making of the model. This comprehensive evaluation utilises CT-Datasets, demonstrating the efficacy of YOLOv8 for objec
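
As an editorial aside to this entry: the Grad-CAM technique it mentions can be sketched in a few lines of PyTorch. The snippet below is a minimal, hedged illustration on a generic torchvision resnet18 (a stand-in assumption; the paper applies CAM methods to YOLOv8, which needs its own model wrapper), not the authors' toolset.

import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None).eval()     # untrained stand-in CNN for illustration
activations, gradients = {}, {}

def save_activation(module, inputs, output):
    activations["maps"] = output.detach()        # feature maps of the last conv stage

def save_gradient(module, grad_input, grad_output):
    gradients["maps"] = grad_output[0].detach()  # gradients w.r.t. those feature maps

model.layer4.register_forward_hook(save_activation)
model.layer4.register_full_backward_hook(save_gradient)

x = torch.randn(1, 3, 224, 224)                  # placeholder input image
logits = model(x)
logits[0, logits.argmax()].backward()            # backprop the top-class score

weights = gradients["maps"].mean(dim=(2, 3), keepdim=True)        # channel weights = spatially averaged gradients
cam = F.relu((weights * activations["maps"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)          # heatmap in [0, 1], ready to overlay
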
4

Zhang, Zaixi, Qi Liu, Hao Wang, Chengqiang Lu, and Cheekong Lee. "ProtGNN: Towards Self-Explaining Graph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (2022): 9127–35. http://dx.doi.org/10.1609/aaai.v36i8.20898.

Abstract:
Despite the recent progress in Graph Neural Networks (GNNs), it remains challenging to explain the predictions made by GNNs. Existing explanation methods mainly focus on post-hoc explanations where another explanatory model is employed to provide explanations for a trained GNN. The fact that post-hoc methods fail to reveal the original reasoning process of GNNs raises the need of building GNNs with built-in interpretability. In this work, we propose Prototype Graph Neural Network (ProtGNN), which combines prototype learning with GNNs and provides a new perspective on the explanations of GNNs.
5

Alfano, Gianvincenzo, Sergio Greco, Domenico Mandaglio, Francesco Parisi, Reza Shahbazian, and Irina Trubitsyna. "Even-if Explanations: Formal Foundations, Priorities and Complexity." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 15 (2025): 15347–55. https://doi.org/10.1609/aaai.v39i15.33684.

Abstract:
Explainable AI has received significant attention in recent years. Machine learning models often operate as black boxes, lacking explainability and transparency while supporting decision-making processes. Local post-hoc explainability queries attempt to answer why individual inputs are classified in a certain way by a given model. While there has been important work on counterfactual explanations, less attention has been devoted to semifactual ones. In this paper, we focus on local post-hoc explainability queries within the semifactual `even-if' thinking and their computational complexity amon
6

Xu, Qian, Wenzhao Xie, Bolin Liao, et al. "Interpretability of Clinical Decision Support Systems Based on Artificial Intelligence from Technological and Medical Perspective: A Systematic Review." Journal of Healthcare Engineering 2023 (February 3, 2023): 1–13. http://dx.doi.org/10.1155/2023/9919269.

Abstract:
Background. Artificial intelligence (AI) has developed rapidly, and its application extends to clinical decision support system (CDSS) for improving healthcare quality. However, the interpretability of AI-driven CDSS poses significant challenges to widespread application. Objective. This study is a review of the knowledge-based and data-based CDSS literature regarding interpretability in health care. It highlights the relevance of interpretability for CDSS and the area for improvement from technological and medical perspectives. Methods. A systematic search was conducted on the interpretabilit
7

Gill, Navdeep, Patrick Hall, Kim Montgomery, and Nicholas Schmidt. "A Responsible Machine Learning Workflow with Focus on Interpretable Models, Post-hoc Explanation, and Discrimination Testing." Information 11, no. 3 (2020): 137. http://dx.doi.org/10.3390/info11030137.

Abstract:
This manuscript outlines a viable approach for training and evaluating machine learning systems for high-stakes, human-centered, or regulated applications using common Python programming tools. The accuracy and intrinsic interpretability of two types of constrained models, monotonic gradient boosting machines and explainable neural networks, a deep learning architecture well-suited for structured data, are assessed on simulated data and publicly available mortgage data. For maximum transparency and the potential generation of personalized adverse action notices, the constrained models are anal
8

Kulaklıoğlu, Duru. "Explainable AI: Enhancing Interpretability of Machine Learning Models." Human Computer Interaction 8, no. 1 (2024): 91. https://doi.org/10.62802/z3pde490.

Abstract:
Explainable Artificial Intelligence (XAI) is emerging as a critical field to address the “black box” nature of many machine learning (ML) models. While these models achieve high predictive accuracy, their opacity undermines trust, adoption, and ethical compliance in critical domains such as healthcare, finance, and autonomous systems. This research explores methodologies and frameworks to enhance the interpretability of ML models, focusing on techniques like feature attribution, surrogate models, and counterfactual explanations. By balancing model complexity and transparency, this study highli
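
A hedged sketch of one technique named in this abstract, the global surrogate model: a shallow decision tree is fit to the black-box model's own predictions and read as an approximate explanation. The dataset, models, and hyperparameters below are illustrative assumptions, not taken from the paper.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

data = load_breast_cancer()
X, y = data.data, data.target
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The surrogate imitates the black box's predictions, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"surrogate fidelity to the black box: {fidelity:.3f}")
print(export_text(surrogate, feature_names=list(data.feature_names)))

The fidelity score quantifies how faithfully the readable tree mimics the black box, which is the usual caveat attached to surrogate-based explanations.
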
9

Acun, Cagla, and Olfa Nasraoui. "Pre Hoc and Co Hoc Explainability: Frameworks for Integrating Interpretability into Machine Learning Training for Enhanced Transparency and Performance." Applied Sciences 15, no. 13 (2025): 7544. https://doi.org/10.3390/app15137544.

Abstract:
Post hoc explanations for black-box machine learning models have been criticized for potentially inaccurate surrogate models and computational burden at prediction time. We propose pre hoc and co hoc explainability frameworks that integrate interpretability directly into the training process through an inherently interpretable white-box model. Pre hoc uses the white-box model to regularize the black-box model, while co hoc jointly optimizes both models with a shared loss function. We extend these frameworks to generate instance-specific explanations using Jensen–Shannon divergence as a regular
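
The following is a speculative, minimal sketch of the general idea described in this abstract: a black-box network and an interpretable linear model trained jointly, with a Jensen-Shannon divergence term encouraging their predictive distributions to agree. The architecture, toy data, and the weight lam are illustrative assumptions and do not reproduce the authors' pre hoc/co hoc frameworks.

import torch
import torch.nn.functional as F

def js_divergence(p, q, eps=1e-8):
    # Jensen-Shannon divergence between two batches of categorical distributions.
    m = 0.5 * (p + q)
    kl = lambda a, b: (a * ((a + eps).log() - (b + eps).log())).sum(dim=1)
    return 0.5 * (kl(p, m) + kl(q, m)).mean()

torch.manual_seed(0)
X = torch.randn(256, 20)                       # toy features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).long()       # toy binary labels

black_box = torch.nn.Sequential(
    torch.nn.Linear(20, 64), torch.nn.ReLU(), torch.nn.Linear(64, 2))
white_box = torch.nn.Linear(20, 2)             # inherently interpretable linear model
optimizer = torch.optim.Adam(
    list(black_box.parameters()) + list(white_box.parameters()), lr=1e-2)
lam = 0.5                                      # assumed strength of the agreement regularizer

for _ in range(200):
    logits_b, logits_w = black_box(X), white_box(X)
    p_b, p_w = F.softmax(logits_b, dim=1), F.softmax(logits_w, dim=1)
    loss = (F.cross_entropy(logits_b, y) + F.cross_entropy(logits_w, y)
            + lam * js_divergence(p_b, p_w))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print("white-box weights (readable explanation):", white_box.weight.data)
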
10

Yousufi Aqmal, Shahid, and Fermle Erdely S. "Enhancing Nonparametric Tests: Insights for Computational Intelligence and Data Mining." Researcher Academy Innovation Data Analysis 1, no. 3 (2024): 214–26. https://doi.org/10.69725/raida.v1i3.168.

Abstract:
Objective: With the aim of improving the reliability and interpretability of statistical tests in computational intelligence (CI) and data mining (DM) experiments, we evaluate the performance of cutting-edge nonparametric tests and post hoc procedures. Methods: The Friedman Aligned Ranks test, the Quade test, and multiple post hoc corrections (Bonferroni-Dunn and Holm) were used to comparatively analyze the data. These approaches were applied to algorithm performance metrics across varied datasets to evaluate their capability to detect meaningful differences and control Type I errors. Results: Advanced nonparametric methods consistently outperfor
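
For readers unfamiliar with the test families this abstract refers to, here is a minimal sketch using SciPy and statsmodels: a plain Friedman test over per-dataset scores, followed by pairwise Wilcoxon signed-rank tests with a Holm correction. The Friedman Aligned Ranks and Quade variants used in the paper are not available in SciPy, so the standard Friedman test stands in, and the score values are made up.

from itertools import combinations
from scipy.stats import friedmanchisquare, wilcoxon
from statsmodels.stats.multitest import multipletests

# Made-up accuracy of three algorithms on six datasets.
scores = {
    "algo_a": [0.81, 0.84, 0.79, 0.90, 0.73, 0.88],
    "algo_b": [0.78, 0.80, 0.77, 0.86, 0.70, 0.85],
    "algo_c": [0.80, 0.83, 0.80, 0.88, 0.74, 0.86],
}
stat, p = friedmanchisquare(*scores.values())
print(f"Friedman chi2 = {stat:.2f}, p = {p:.4f}")

# Post hoc: pairwise Wilcoxon signed-rank tests with a Holm correction.
pairs = list(combinations(scores, 2))
raw_p = [wilcoxon(scores[a], scores[b]).pvalue for a, b in pairs]
reject, p_holm, _, _ = multipletests(raw_p, alpha=0.05, method="holm")
for (a, b), adj, rej in zip(pairs, p_holm, reject):
    print(f"{a} vs {b}: Holm-adjusted p = {adj:.4f}, significant: {rej}")
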
More sources

Theses on the topic "Post-hoc interpretability"

1

Jeyasothy, Adulam. "Génération d'explications post-hoc personnalisées." Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS027.

Abstract:
This thesis is situated in the field of explainable AI (XAI, eXplainable AI). We focus on post-hoc interpretability methods that aim to explain to a user the prediction made by a trained decision model for a specific data point of interest. To increase the interpretability of explanations, this thesis studies the integration of user knowledge into these methods, and thus aims to improve the comprehensibility of the explanation by generating personalized explanations tailored to each user. To this end, we propose a general formalism
2

Seveso, Andrea. "Symbolic Reasoning for Contrastive Explanations." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2023. https://hdl.handle.net/10281/404830.

Abstract:
The need for explanations of Machine Learning (ML) systems is growing as new models outperform their predecessors, becoming more complex and less comprehensible to end users. An essential step in eXplainable Artificial Intelligence (XAI) research is the creation of interpretable models that aim to approximate the decision function of a black-box algorithm. Although several XAI methods have been proposed in recent years, insufficient attention has been paid to explaining how models change their
3

Laugel, Thibault. "Interprétabilité locale post-hoc des modèles de classification "boites noires"." Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS215.

Abstract:
This thesis addresses the field of XAI (explainable AI), and more specifically the paradigm of local post-hoc interpretability, that is, the generation of explanations for a single prediction of a trained classifier. In particular, we study a fully agnostic setting, meaning that the explanation is generated without using any knowledge about the classification model (treated as a black box) or the data used to train it. In this thesis, we identify several problems that can arise in this setting and that can
4

Radulovic, Nedeljko. "Post-hoc Explainable AI for Black Box Models on Tabular Data." Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAT028.

Abstract:
Current artificial intelligence (AI) models have proven themselves in solving various tasks, such as classification, regression, natural language processing (NLP), and image processing. The resources available to us today allow us to train very complex AI models to solve different problems in almost every field: medicine, finance, justice, transportation, forecasting, etc. With the popularity and widespread use of AI models, the need to ensure trust in these models has also grown.
5

Bhattacharya, Debarpan. "A Learnable Distillation Approach For Model-agnostic Explainability With Multimodal Applications." Thesis, 2023. https://etd.iisc.ac.in/handle/2005/6108.

Abstract:
Deep neural networks are the most widely used examples of sophisticated mapping functions from feature space to class labels. In the recent years, several high impact decisions in domains such as finance, healthcare, law and autonomous driving, are made with deep models. In these tasks, the model decisions lack interpretability, and pose difficulties in making the models accountable. Hence, there is a strong demand for developing explainable approaches which can elicit how the deep neural architecture, despite the astounding performance improvements observed in all fields, including computer v

Book chapters on the topic "Post-hoc interpretability"

1

Kamath, Uday, and John Liu. "Post-Hoc Interpretability and Explanations." In Explainable Artificial Intelligence: An Introduction to Interpretable Machine Learning. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-83356-5_5.

2

Greenwell, Brandon M. "Peeking inside the “black box”: post-hoc interpretability." In Tree-Based Methods for Statistical Learning in R. Chapman and Hall/CRC, 2022. http://dx.doi.org/10.1201/9781003089032-6.

3

Ann Jo, Ashly, and Ebin Deni Raj. "Post hoc Interpretability: Review on New Frontiers of Interpretable AI." In Lecture Notes in Networks and Systems. Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1203-2_23.

4

Santos, Flávio Arthur Oliveira, Cleber Zanchettin, José Vitor Santos Silva, Leonardo Nogueira Matos, and Paulo Novais. "A Hybrid Post Hoc Interpretability Approach for Deep Neural Networks." In Lecture Notes in Computer Science. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-86271-8_50.

5

Molnar, Christoph, Giuseppe Casalicchio, and Bernd Bischl. "Quantifying Model Complexity via Functional Decomposition for Better Post-hoc Interpretability." In Machine Learning and Knowledge Discovery in Databases. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-43823-4_17.

6

Waqas, Muhammad, Tomas Maul, Amr Ahmed, and Iman Yi Liao. "Evaluation of Post-hoc Interpretability Methods in Breast Cancer Histopathological Image Classification." In Advances in Brain Inspired Cognitive Systems. Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-97-1417-9_9.

7

Stevens, Alexander, Johannes De Smedt, and Jari Peeperkorn. "Quantifying Explainability in Outcome-Oriented Predictive Process Monitoring." In Lecture Notes in Business Information Processing. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-98581-3_15.

Abstract:
The growing interest in applying machine and deep learning algorithms in an Outcome-Oriented Predictive Process Monitoring (OOPPM) context has recently fuelled a shift to use models from the explainable artificial intelligence (XAI) paradigm, a field of study focused on creating explainability techniques on top of AI models in order to legitimize the predictions made. Nonetheless, most classification models are evaluated primarily on a performance level, where XAI requires striking a balance between either simple models (e.g. linear regression) or models using complex inference structures (e.g. neural networks) with post-processing to calculate feature importance. In this paper, a comprehensive overview of predictive models with varying intrinsic complexity are measured based on explainability with model-agnostic quantitative evaluation metrics. To this end, explainability is designed as a symbiosis between interpretability and faithfulness and thereby allowing to compare inherently created explanations (e.g. decision tree rules) with post-hoc explainability techniques (e.g. Shapley values) on top of AI models. Moreover, two improved versions of the logistic regression model capable of capturing non-linear interactions and both inherently generating their own explanations are proposed in the OOPPM context. These models are benchmarked with two common state-of-the-art models with post-hoc explanation techniques in the explainability-performance space.
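
As a hedged illustration of the kind of inherently interpretable model this chapter discusses (not the authors' OOPPM models): a logistic regression expanded with pairwise interaction features, whose coefficients are read directly as the explanation. Dataset and hyperparameters are arbitrary assumptions.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = make_pipeline(
    StandardScaler(),
    PolynomialFeatures(degree=2, interaction_only=True, include_bias=False),
    LogisticRegression(max_iter=2000, C=0.5),
)
model.fit(X, y)

# Coefficients over the original features and their pairwise interactions are the explanation.
names = model[:-1].get_feature_names_out()
coefs = model[-1].coef_.ravel()
top = sorted(zip(names, coefs), key=lambda t: abs(t[1]), reverse=True)[:5]
print(top)                                      # the five most influential terms
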
8

Waqas, Muhammad, Tomas Maul, Iman Yi Liao, and Amr Ahmed. "Post Hoc Interpretability of Deep Learning Models for Breast Cancer Histopathological Images with Variational Autoencoders." In Communications in Computer and Information Science. Springer Nature Singapore, 2025. https://doi.org/10.1007/978-981-96-6948-6_7.

9

Turbé, Hugues, Mina Bjelogrlic, Mehdi Namdar, et al. "A Lightweight and Interpretable Model to Classify Bundle Branch Blocks from ECG Signals." In Studies in Health Technology and Informatics. IOS Press, 2022. http://dx.doi.org/10.3233/shti220393.

Abstract:
Automatic classification of ECG signals has been a longtime research area with large progress having been made recently. However, these advances have been achieved with increasingly complex models at the expense of the model's interpretability. In this research, a new model based on multivariate autoregressive model (MAR) coefficients combined with a tree-based model to classify bundle branch blocks is proposed. The advantage of the presented approach is to build a lightweight model which, combined with post-hoc interpretability, can bring new insights into important cross-lead dependencies which are indicative of the diseases of interest.
10

Dumka, Ankur, Vaibhav Chaudhari, Anil Kumar Bisht, Ruchira Rawat, and Arnav Pandey. "Methods, Techniques, and Application of Explainable Artificial Intelligence." In Advances in Environmental Engineering and Green Technologies. IGI Global, 2024. http://dx.doi.org/10.4018/979-8-3693-2351-9.ch017.

Abstract:
With the advancement of machine learning, its use has increased, and explainable artificial intelligence (XAI) has emerged as an area of research and development for addressing the opacity and complexity of machine learning models. This chapter proposes an overview of the current state of explainable artificial intelligence, highlighting its significance, disadvantages, and potential applications in different fields. This chapter explores several explainable artificial intelligence techniques, ranging from post-hoc methods like SHAP and LIME to decision trees and rule-based systems. This chapter also focusses on the complexity and interpretability of a model.
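
A brief, hedged sketch of the two post-hoc methods this chapter names, SHAP and LIME, applied to an ordinary scikit-learn classifier. It assumes the third-party shap and lime packages are installed; the dataset and model are arbitrary stand-ins, not the chapter's examples.

import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier

data = load_wine()
X, y = data.data, data.target
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# SHAP: Shapley-value attributions for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:50])     # per-class, per-feature contributions

# LIME: a local surrogate explanation for a single instance.
lime_explainer = LimeTabularExplainer(
    X, feature_names=list(data.feature_names), class_names=list(data.target_names))
explanation = lime_explainer.explain_instance(X[0], model.predict_proba, num_features=5)
print(explanation.as_list())                    # top local feature contributions for one sample
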

Conference proceedings on the topic "Post-hoc interpretability"

1

Cohen, Benjamin G., Burcu Beykal, and George M. Bollas. "Selection of Fitness Criteria for Learning Interpretable PDE Solutions via Symbolic Regression." In The 35th European Symposium on Computer Aided Process Engineering. PSE Press, 2025. https://doi.org/10.69997/sct.199083.

Abstract:
Physics-Informed Symbolic Regression (PISR) offers a pathway to discover human-interpretable solutions to partial differential equations (PDEs). This work investigates three fitness metrics within a PISR framework: PDE fitness, Bayesian Information Criterion (BIC), and a fitness metric proportional to the probability of a model given the data. Through experiments with Laplace's equation, Burgers' equation, and a nonlinear wave equation, we demonstrate that incorporating information theoretic criteria like BIC can yield higher fidelity models while maintaining interpretability. Our results show
2

Laugel, Thibault, Marie-Jeanne Lesot, Christophe Marsala, Xavier Renard, and Marcin Detyniecki. "The Dangers of Post-hoc Interpretability: Unjustified Counterfactual Explanations." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/388.

Abstract:
Post-hoc interpretability approaches have been proven to be powerful tools to generate explanations for the predictions made by a trained black-box model. However, they create the risk of having explanations that are a result of some artifacts learned by the model instead of actual knowledge from the data. This paper focuses on the case of counterfactual explanations and asks whether the generated instances can be justified, i.e. continuously connected to some ground-truth data. We evaluate the risk of generating unjustified counterfactual examples by investigating the local neighborhoods of i
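
To make the notion of a "justified" counterfactual concrete, here is a deliberately naive, hedged sketch (not the authors' procedure): the closest training instance that the model assigns to a different class is returned as the counterfactual, so it is connected to ground-truth data by construction. Dataset and model are illustrative assumptions.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

x0 = X[0]                                           # instance whose prediction we want to flip
pred0 = model.predict(x0.reshape(1, -1))[0]

# Candidate counterfactuals: real training points the model puts in another class.
candidates = X[model.predict(X) != pred0]
counterfactual = candidates[np.linalg.norm(candidates - x0, axis=1).argmin()]
print("original prediction:", pred0,
      "| counterfactual prediction:", model.predict(counterfactual.reshape(1, -1))[0])

Perturbation-based generators can instead produce instances far from any real data, which is exactly the risk the paper investigates.
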
3

Vieira, Carla Piazzon, and Luciano Antonio Digiampietri. "Machine Learning post-hoc interpretability: a systematic mapping study." In SBSI: XVIII Brazilian Symposium on Information Systems. ACM, 2022. http://dx.doi.org/10.1145/3535511.3535512.

4

Attanasio, Giuseppe, Debora Nozza, Eliana Pastor, and Dirk Hovy. "Benchmarking Post-Hoc Interpretability Approaches for Transformer-based Misogyny Detection." In Proceedings of NLP Power! The First Workshop on Efficient Benchmarking in NLP. Association for Computational Linguistics, 2022. http://dx.doi.org/10.18653/v1/2022.nlppower-1.11.

5

Sujana, D. Swainson, and D. Peter Augustine. "Explaining Autism Diagnosis Model Through Local Interpretability Techniques – A Post-hoc Approach." In 2023 International Conference on Data Science, Agents & Artificial Intelligence (ICDSAAI). IEEE, 2023. http://dx.doi.org/10.1109/icdsaai59313.2023.10452575.

6

Morais, Lucas Rabelo de Araujo, Gabriel Arnaud de Melo Fragoso, Teresa Bernarda Ludermir, and Claudio Luis Alves Monteiro. "Explainable AI For the Brazilian Stock Market Index: A Post-Hoc Approach to Deep Learning Models in Time-Series Forecasting." In Encontro Nacional de Inteligência Artificial e Computacional. Sociedade Brasileira de Computação - SBC, 2024. https://doi.org/10.5753/eniac.2024.244444.

Abstract:
Time-series forecasting is challenging when data lacks clear trends or seasonality, making traditional statistical models less effective. Deep Learning models, like Neural Networks, excel at capturing non-linear patterns and offer a promising alternative. The Bovespa Index (Ibovespa), a key indicator of Brazil’s stock market, is volatile, leading to potential investor losses due to inaccurate forecasts and limited market insight. Neural Networks can enhance forecast accuracy, but reduce model explainability. This study aims to use Deep Learning to forecast the Ibovespa, striving to balance hig
7

Gkoumas, Dimitris, Qiuchi Li, Yijun Yu, and Dawei Song. "An Entanglement-driven Fusion Neural Network for Video Sentiment Analysis." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/239.

Abstract:
Video data is multimodal in its nature, where an utterance can involve linguistic, visual and acoustic information. Therefore, a key challenge for video sentiment analysis is how to combine different modalities for sentiment recognition effectively. The latest neural network approaches achieve state-of-the-art performance, but they neglect to a large degree of how humans understand and reason about sentiment states. By contrast, recent advances in quantum probabilistic neural models have achieved comparable performance to the state-of-the-art, yet with better transparency and increased level o