Journal articles on the topic 'Feature explanation'

Consult the top 50 journal articles for your research on the topic 'Feature explanation.'

1

Brdnik, Saša, Vili Podgorelec, and Boštjan Šumak. "Assessing Perceived Trust and Satisfaction with Multiple Explanation Techniques in XAI-Enhanced Learning Analytics." Electronics 12, no. 12 (2023): 2594. http://dx.doi.org/10.3390/electronics12122594.

Abstract:
This study aimed to observe the impact of eight explainable AI (XAI) explanation techniques on user trust and satisfaction in the context of XAI-enhanced learning analytics while comparing two groups of STEM college students based on their Bologna study level, using various established feature relevance techniques, certainty, and comparison explanations. Overall, the students reported the highest trust in local feature explanation in the form of a bar graph. Additionally, master’s students presented with global feature explanations also reported high trust in this form of explanation. …
2

Chapman-Rounds, Matt, Umang Bhatt, Erik Pazos, Marc-Andre Schulz, and Konstantinos Georgatzis. "FIMAP: Feature Importance by Minimal Adversarial Perturbation." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (2021): 11433–41. http://dx.doi.org/10.1609/aaai.v35i13.17362.

Abstract:
Instance-based model-agnostic feature importance explanations (LIME, SHAP, L2X) are a popular form of algorithmic transparency. These methods generally return either a weighting or subset of input features as an explanation for the classification of an instance. An alternative literature argues instead that counterfactual instances, which alter the black-box model's classification, provide a more actionable form of explanation. We present Feature Importance by Minimal Adversarial Perturbation (FIMAP), a neural network based approach that unifies feature importance and counterfactual…
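The core idea is easy to prototype. Below is a minimal sketch (not the authors' FIMAP network): gradient descent on a perturbation delta of a differentiable stand-in classifier, with an L1 penalty encouraging a small, sparse perturbation. All names and parameters are illustrative.

```python
import numpy as np

# Illustrative stand-in for a differentiable black-box classifier:
# a fixed logistic model p(y=1|x) = sigmoid(w.x + b).
rng = np.random.default_rng(0)
w, b = rng.normal(size=5), 0.1

def predict_proba(x):
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def minimal_perturbation(x, steps=500, lr=0.05, l1=0.01):
    """Gradient search for a small delta that flips the decision.
    Loss = (p(x+delta) - target)^2 + l1 * |delta|_1  (sketch, not FIMAP's NN)."""
    target = 0.0 if predict_proba(x) >= 0.5 else 1.0  # opposite class
    delta = np.zeros_like(x)
    for _ in range(steps):
        p = predict_proba(x + delta)
        # Analytic gradient of the squared loss through the sigmoid,
        # plus the L1 subgradient.
        grad = 2 * (p - target) * p * (1 - p) * w + l1 * np.sign(delta)
        delta -= lr * grad
    return delta

x = rng.normal(size=5)
d = minimal_perturbation(x)
print("flip:", predict_proba(x).round(3), "->", predict_proba(x + d).round(3))
print("per-feature importance |delta|:", np.abs(d).round(3))
```

The magnitudes |delta_i| then double as feature-importance scores, which is the unification of importance and counterfactuals the abstract describes.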
3

Olatunji, Iyiola E., Mandeep Rathee, Thorben Funke, and Megha Khosla. "Private Graph Extraction via Feature Explanations." Proceedings on Privacy Enhancing Technologies 2023, no. 2 (2023): 59–78. http://dx.doi.org/10.56553/popets-2023-0041.

Abstract:
Privacy and interpretability are two important ingredients for achieving trustworthy machine learning. We study the interplay of these two aspects in graph machine learning through graph reconstruction attacks. The goal of the adversary here is to reconstruct the graph structure of the training data given access to model explanations. Based on the different kinds of auxiliary information available to the adversary, we propose several graph reconstruction attacks. We show that additional knowledge of post-hoc feature explanations substantially increases the success rate of these attacks. …
4

Izza, Yacine, Alexey Ignatiev, Peter J. Stuckey, and Joao Marques-Silva. "Delivering Inflated Explanations." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 11 (2024): 12744–53. http://dx.doi.org/10.1609/aaai.v38i11.29170.

Abstract:
In the quest for Explainable Artificial Intelligence (XAI), one of the questions that frequently arises given a decision made by an AI system is, "why was the decision made in this way?" Formal approaches to explainability build a formal model of the AI system and use this to reason about the properties of the system. Given a set of feature values for an instance to be explained, and a resulting decision, a formal abductive explanation is a set of features such that, if they take the given values, the decision will always be the same. This explanation is useful: it shows that only some feature…
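For intuition, a formal abductive explanation can be computed for a toy discrete model by deletion-based minimization: start from all features and drop any feature whose value can be freed without ever changing the decision. The sketch below checks sufficiency exhaustively, which only scales to a handful of binary features; the model and names are illustrative.

```python
from itertools import product

FEATURES = ["f0", "f1", "f2", "f3"]  # toy binary features

def model(v):  # toy black-box decision function
    f0, f1, f2, f3 = v
    return int(f0 and (f1 or f2))

def is_sufficient(fixed, instance):
    """True iff fixing 'fixed' features to the instance's values forces
    the same decision for every assignment of the free features."""
    free = [i for i in range(len(instance)) if i not in fixed]
    want = model(instance)
    for assign in product([0, 1], repeat=len(free)):
        v = list(instance)
        for i, a in zip(free, assign):
            v[i] = a
        if model(v) != want:
            return False
    return True

def abductive_explanation(instance):
    """Deletion-based minimization: a subset-minimal sufficient feature set."""
    fixed = set(range(len(instance)))
    for i in range(len(instance)):
        if is_sufficient(fixed - {i}, instance):
            fixed.discard(i)
    return sorted(fixed)

inst = (1, 1, 0, 1)
print([FEATURES[i] for i in abductive_explanation(inst)])  # -> ['f0', 'f1']
```

The paper's "inflated" explanations generalize this by reporting, per feature, a set of values (not just the observed one) that still guarantees the decision.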
5

An, Shuai, and Yang Cao. "Relative Keys: Putting Feature Explanation into Context." Proceedings of the ACM on Management of Data 2, no. 1 (2024): 1–28. http://dx.doi.org/10.1145/3639263.

Abstract:
Formal feature explanations strictly maintain perfect conformity but are intractable to compute, while heuristic methods are much faster but can lead to problematic explanations due to lack of conformity guarantees. We propose relative keys that have the best of both worlds. Relative keys associate feature explanations with a set of instances as context, and warrant perfect conformity over the context as formal explanations do, whilst being orders of magnitude faster and working for complex black-box models. Based on it, we develop CCE, a prototype that computes explanations with provably…
6

AlJalaud, Ebtisam, and Manar Hosny. "Enhancing Explainable Artificial Intelligence: Using Adaptive Feature Weight Genetic Explanation (AFWGE) with Pearson Correlation to Identify Crucial Feature Groups." Mathematics 12, no. 23 (2024): 3727. http://dx.doi.org/10.3390/math12233727.

Abstract:
The ‘black box’ nature of machine learning (ML) approaches makes it challenging to understand how most artificial intelligence (AI) models make decisions. Explainable AI (XAI) aims to provide analytical techniques to understand the behavior of ML models. XAI utilizes counterfactual explanations that indicate how variations in input features lead to different outputs. However, existing methods must also highlight the importance of features to provide more actionable explanations that would aid in the identification of key drivers behind model decisions—and, hence, more reliable interpretations…
7

Utkin, Lev, and Andrei Konstantinov. "Ensembles of Random SHAPs." Algorithms 15, no. 11 (2022): 431. http://dx.doi.org/10.3390/a15110431.

Abstract:
Ensemble-based modifications of the well-known SHapley Additive exPlanations (SHAP) method for the local explanation of a black-box model are proposed. The modifications aim to simplify SHAP, which is computationally expensive when there is a large number of features. The main idea behind the proposed modifications is to approximate SHAP by an ensemble of SHAPs with a smaller number of features. According to the first modification, called ER-SHAP, several features are randomly selected many times from the feature set, and the Shapley values for the features are computed by means…
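A rough sketch of the ER-SHAP idea as described in the abstract (not the authors' exact scheme): repeatedly sample a small feature subset, compute exact Shapley values restricted to that subset with the remaining features held at a baseline, and average per feature. All functions and constants are illustrative.

```python
import numpy as np
from itertools import combinations
from math import factorial

rng = np.random.default_rng(1)
n = 8
beta = rng.normal(size=n)

def f(x):  # illustrative black box
    return float(x @ beta + 0.5 * x[0] * x[1])

def value(S, x, baseline):
    """Model output with features in S taken from x, others from the baseline."""
    z = baseline.copy()
    z[list(S)] = x[list(S)]
    return f(z)

def exact_shapley_on_subset(T, x, baseline):
    """Exact Shapley values restricted to the sampled feature subset T."""
    m = len(T)
    phi = dict.fromkeys(T, 0.0)
    for i in T:
        rest = [j for j in T if j != i]
        for k in range(m):
            for S in combinations(rest, k):
                wgt = factorial(k) * factorial(m - k - 1) / factorial(m)
                phi[i] += wgt * (value(S + (i,), x, baseline) - value(S, x, baseline))
    return phi

def er_shap(x, baseline, subset_size=4, n_rounds=200):
    sums, counts = np.zeros(n), np.zeros(n)
    for _ in range(n_rounds):
        T = tuple(rng.choice(n, size=subset_size, replace=False))
        for i, p in exact_shapley_on_subset(T, x, baseline).items():
            sums[i] += p
            counts[i] += 1
    return sums / np.maximum(counts, 1)

x, baseline = rng.normal(size=n), np.zeros(n)
print(er_shap(x, baseline).round(3))
```

Each inner computation is exponential only in the small subset size, which is the source of the speedup over full SHAP.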
8

Lin, Ming-Yen, I.-Chen Hsieh, and Sue-Chen Hsueh. "Enhancing Personalized Explainable Recommendations with Transformer Architecture and Feature Handling." Electronics 14, no. 5 (2025): 998. https://doi.org/10.3390/electronics14050998.

Abstract:
The advancement of explainable recommendations aims to improve the quality of textual explanations for recommendations. Traditional methods primarily used Recurrent Neural Networks (RNNs) or their variants to generate personalized explanations. However, recent research has focused on leveraging Transformer architectures to enhance explanations by extracting user reviews and incorporating features from interacted items. Nevertheless, previous studies have failed to fully exploit the relationship between reviews and user ratings to generate more personalized explanations. …
9

Beckh, Katharina, Joann Rachel Jacob, Adrian Seeliger, Stefan Rüping, and Najmeh Mousavi Nejad. "Limitations of Feature Attribution in Long Text Classification of Standards." Proceedings of the AAAI Symposium Series 4, no. 1 (2024): 10–17. http://dx.doi.org/10.1609/aaaiss.v4i1.31765.

Abstract:
Managing complex AI systems requires insight into a model's decision-making processes. Understanding how these systems arrive at their conclusions is essential for ensuring reliability. In the field of explainable natural language processing, many approaches have been developed and evaluated. However, experimental analysis of explainability for text classification has been largely constrained to short text and binary classification. In this applied work, we study explainability for a real-world task where the goal is to assess the technological suitability of standards. This prototypical use…
10

Long, Marilee. "Scientific explanation in US newspaper science stories." Public Understanding of Science 4, no. 2 (1995): 119–30. http://dx.doi.org/10.1088/0963-6625/4/2/002.

Abstract:
Mass media are important sources of science information for many adults. However, this study, which reports a content analysis of science stories in 100 US newspapers, found that while 70 newspapers carried science stories, the majority of these stories contained little scientific explanation. Ten percent or less of content was comprised of elucidating (definitions of terms) and/or quasi-scientific explanations (explications of relationships among scientific concepts). The study also investigated the effect of production-based variables on scientific explanation. Stories in feature and science…
11

Botting, David. "The Logic of Intending and Predicting." KRITERION – Journal of Philosophy 31, no. 3 (2017): 1–24. http://dx.doi.org/10.1515/krt-2017-310302.

Abstract:
Can human acts be causally explained in the same way as the rest of nature? If so, causal explanation in the manner of the Hempelian model should apply equally to the human sciences and the natural sciences. This is not so much a question of whether the Hempelian model is a completely adequate account of causal explanation, but about whether it is adequate or inadequate in the same way for each: if there is some unique feature of human acts that dictates that they are to be explained differently from natural events, then it is reasonable to suppose that this feature will be revealed by…
12

Venkatsubramaniam, Bhaskaran, and Pallav Kumar Baruah. "COMPARATIVE STUDY OF XAI USING FORMAL CONCEPT LATTICE AND LIME." ICTACT Journal on Soft Computing 13, no. 1 (2022): 2782–91. http://dx.doi.org/10.21917/ijsc.2022.0396.

Abstract:
Local Interpretable Model Agnostic Explanation (LIME) is a technique to explain a black box machine learning model using a surrogate model approach. While this technique is very popular, inherent to its approach, explanations are generated from the surrogate model and not directly from the black box model. In sensitive domains like healthcare, this may not be acceptable as trustworthy. These techniques also assume that features are independent and provide feature weights of the surrogate linear model as feature importance. In real life datasets, features may be dependent, and a combination of…
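For reference, the LIME recipe the paper critiques fits in a few lines: perturb the instance, query the black box, weight samples by proximity, and fit a weighted linear surrogate whose coefficients are reported as feature importances. A minimal sketch with an illustrative black box:

```python
import numpy as np

rng = np.random.default_rng(2)

def black_box(X):  # illustrative nonlinear model
    return np.tanh(X[:, 0] * X[:, 1]) + 0.3 * X[:, 2]

def lime_weights(x, n_samples=2000, sigma=0.75):
    """Fit a local weighted linear surrogate around instance x."""
    X = x + rng.normal(scale=0.5, size=(n_samples, x.size))  # perturbations
    y = black_box(X)
    d2 = ((X - x) ** 2).sum(axis=1)
    w = np.exp(-d2 / sigma ** 2)                             # proximity kernel
    A = np.hstack([X, np.ones((n_samples, 1))])              # add intercept
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * sw, y * np.sqrt(w), rcond=None)
    return coef[:-1]                                         # per-feature weights

x = np.array([0.5, -1.0, 0.2])
print(lime_weights(x).round(3))  # local feature importances for x
```

The linear-and-independent form of the surrogate is exactly what the paper's lattice-based alternative is designed to avoid.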
13

Lai, Chengen, Shengli Song, Shiqi Meng, Jingyang Li, Sitong Yan, and Guangneng Hu. "Towards More Faithful Natural Language Explanation Using Multi-Level Contrastive Learning in VQA." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (2024): 2849–57. http://dx.doi.org/10.1609/aaai.v38i3.28065.

Abstract:
Natural language explanation in visual question answering (VQA-NLE) aims to explain the decision-making process of models by generating natural language sentences to increase users' trust in black-box systems. Existing post-hoc methods have achieved significant progress in obtaining a plausible explanation. However, such post-hoc explanations are not always aligned with human logical inference, suffering from: 1) deductive unsatisfiability, where the generated explanations do not logically lead to the answer; 2) factual inconsistency, where the model falsifies its counterfactual explanation…
14

Nakamoto, Ryosuke, Brendan Flanagan, Yiling Dai, Taisei Yamauchi, Kyosuke Takami, and Hiroaki Ogata. "Integrating self-explanation and operational data for impasse detection in mathematical learning." Research and Practice in Technology Enhanced Learning 20 (July 23, 2024): 019. http://dx.doi.org/10.58459/rptel.2025.20019.

Abstract:
Self-explanation is increasingly recognized as a key factor in learning. Identifying learning impasses, which are significant educational challenges, is also crucial as they can lead to deeper learning experiences. This paper argues that integrating self-explanation with relevant datasets is essential for detecting learning impasses in online mathematics education. To test this idea, we created an evaluative framework using a rubric-based approach tailored for mathematical problem-solving. Our analysis combines various data types, including handwritten responses and digital self-explanations…
15

Pintelas, Emmanuel, Meletis Liaskos, Ioannis E. Livieris, Sotiris Kotsiantis, and Panagiotis Pintelas. "Explainable Machine Learning Framework for Image Classification Problems: Case Study on Glioma Cancer Prediction." Journal of Imaging 6, no. 6 (2020): 37. http://dx.doi.org/10.3390/jimaging6060037.

Abstract:
Image classification is a very popular machine learning domain in which deep convolutional neural networks have mainly emerged on such applications. These networks manage to achieve remarkable performance in terms of prediction accuracy but they are considered as black box models since they lack the ability to interpret their inner working mechanism and explain the main reasoning of their predictions. There is a variety of real world tasks, such as medical applications, in which interpretability and explainability play a significant role. Making decisions on critical issues such as cancer…
16

Chen, Valerie, Q. Vera Liao, Jennifer Wortman Vaughan, and Gagan Bansal. "Understanding the Role of Human Intuition on Reliance in Human-AI Decision-Making with Explanations." Proceedings of the ACM on Human-Computer Interaction 7, CSCW2 (2023): 1–32. http://dx.doi.org/10.1145/3610219.

Abstract:
AI explanations are often mentioned as a way to improve human-AI decision-making, but empirical studies have not found consistent evidence of explanations' effectiveness and, on the contrary, suggest that they can increase overreliance when the AI system is wrong. While many factors may affect reliance on AI support, one important factor is how decision-makers reconcile their own intuition (beliefs or heuristics, based on prior knowledge, experience, or pattern recognition, used to make judgments) with the information provided by the AI system to determine when to override AI predictions. …
17

VanNostrand, Peter M., Huayi Zhang, Dennis M. Hofmann, and Elke A. Rundensteiner. "FACET: Robust Counterfactual Explanation Analytics." Proceedings of the ACM on Management of Data 1, no. 4 (2023): 1–27. http://dx.doi.org/10.1145/3626729.

Abstract:
Machine learning systems are deployed in domains such as hiring and healthcare, where undesired classifications can have serious ramifications for the user. Thus, there is a rising demand for explainable AI systems which provide actionable steps for lay users to obtain their desired outcome. To meet this need, we propose FACET, the first explanation analytics system which supports a user in interactively refining counterfactual explanations for decisions made by tree ensembles. As FACET's foundation, we design a novel type of counterfactual explanation called the counterfactual region. …
18

Lin, Ming-Yen, Yuan-Ming Chang, Chi-Chun Li, and Wen-Cheng Chao. "Explainable Machine Learning to Predict Successful Weaning of Mechanical Ventilation in Critically Ill Patients Requiring Hemodialysis." Healthcare 11, no. 6 (2023): 910. http://dx.doi.org/10.3390/healthcare11060910.

Abstract:
Lungs and kidneys are two vital and frequently injured organs among critically ill patients. In this study, we attempt to develop a weaning prediction model for patients with both respiratory and renal failure using an explainable machine learning (XML) approach. We used the eICU collaborative research database, which contained data from 335 ICUs across the United States. Four ML models, including XGBoost, GBM, AdaBoost, and RF, were used, with weaning prediction and feature windows, both at 48 h. The model’s explanations were presented at the domain, feature, and individual levels…
19

Lei, Xia, Jia-Jiang Lin, Xiong-Lin Luo, and Yongkai Fan. "Explaining deep residual networks predictions with symplectic adjoint method." Computer Science and Information Systems, no. 00 (2023): 47. http://dx.doi.org/10.2298/csis230310047l.

Abstract:
Understanding the decisions of deep residual networks (ResNets) is receiving much attention as a way to ensure their security and reliability. Recent research, however, lacks theoretical analysis to guarantee the faithfulness of explanations and could produce an unreliable explanation. In order to explain ResNet predictions, we suggest a provably faithful explanation for ResNets using a surrogate explainable model, a neural ordinary differential equation network (Neural ODE). First, ResNets are proved to converge to a Neural ODE, and the Neural ODE is regarded as a surrogate model to explain the…
20

Biradar, Gagan, Yacine Izza, Elita Lobo, Vignesh Viswanathan, and Yair Zick. "Axiomatic Aggregations of Abductive Explanations." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 10 (2024): 11096–104. http://dx.doi.org/10.1609/aaai.v38i10.28986.

Abstract:
The recent criticisms of the robustness of post hoc model approximation explanation methods (like LIME and SHAP) have led to the rise of model-precise abductive explanations. For each data point, abductive explanations provide a minimal subset of features that are sufficient to generate the outcome. While theoretically sound and rigorous, abductive explanations suffer from a major issue: there can be several valid abductive explanations for the same data point. In such cases, providing a single abductive explanation can be insufficient; on the other hand, providing all valid abductive…
21

EL Shawi, Radwa, and Mouaz H. Al-Mallah. "Interpretable Local Concept-based Explanation with Human Feedback to Predict All-cause Mortality." Journal of Artificial Intelligence Research 75 (November 18, 2022): 833–55. http://dx.doi.org/10.1613/jair.1.14019.

Abstract:
Machine learning models are incorporated in different fields and disciplines in which some of them require a high level of accountability and transparency, for example, the healthcare sector. With the General Data Protection Regulation (GDPR), the importance for plausibility and verifiability of the predictions made by machine learning models has become essential. A widely used category of explanation techniques attempts to explain models’ predictions by quantifying the importance score of each input feature. However, summarizing such scores to provide human-interpretable explanations…
22

Nguyen, Truc, Phung Lai, Hai Phan, and My T. Thai. "XRand: Differentially Private Defense against Explanation-Guided Attacks." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 10 (2023): 11873–81. http://dx.doi.org/10.1609/aaai.v37i10.26401.

Abstract:
Recent development in the field of explainable artificial intelligence (XAI) has helped improve trust in Machine-Learning-as-a-Service (MLaaS) systems, in which an explanation is provided together with the model prediction in response to each query. However, XAI also opens a door for adversaries to gain insights into the black-box models in MLaaS, thereby making the models more vulnerable to several attacks. For example, feature-based explanations (e.g., SHAP) could expose the top important features that a black-box model focuses on. Such disclosure has been exploited to craft effective…
23

Lehrer, Keith. "Ultimate Preference and Explanation." Grazer Philosophische Studien 97, no. 4 (2020): 600–615. http://dx.doi.org/10.1163/18756735-00000125.

Abstract:
The articles by Corlett, McKenna and Waller in the present issue call for some further enlightenment on Lehrer’s defense of classical compatibilism. Ultimate explanation in terms of a power preference, which is the primary explanation for choice, is now the central feature of his defense. This includes the premise that scientific determinism may fail to explain our choices. Sylvain Bromberger (1965) showed that nomological deduction is not sufficient for explanation. A power preference, which is by definition a preference over alternatives, is the primary explanation when the power…
24

O'Brien, Michael J., and Thomas D. Holland. "The Role of Adaptation in Archaeological Explanation." American Antiquity 57, no. 1 (1992): 36–59. http://dx.doi.org/10.2307/2694834.

Abstract:
Adaptation, a venerable icon in archaeology, often is afforded the vacuous role of being an ex-post-facto argument used to "explain" the appearance and persistence of traits among prehistoric groups—a position that has seriously impeded development of a selectionist perspective in archaeology. Biological and philosophical definitions of adaptation—and by extension, definitions of adaptedness—vary considerably, but all are far removed from those usually employed in archaeology. The prevailing view in biology is that adaptations are features that were shaped by natural selection and that…
25

Xie, Yuting, Fulvio Zaccagna, Leonardo Rundo, et al. "IMPA-Net: Interpretable Multi-Part Attention Network for Trustworthy Brain Tumor Classification from MRI." Diagnostics 14, no. 10 (2024): 997. http://dx.doi.org/10.3390/diagnostics14100997.

Abstract:
Deep learning (DL) networks have shown attractive performance in medical image processing tasks such as brain tumor classification. However, they are often criticized as mysterious “black boxes”. The opaqueness of the model and the reasoning process make it difficult for health workers to decide whether to trust the prediction outcomes. In this study, we develop an interpretable multi-part attention network (IMPA-Net) for brain tumor classification to enhance the interpretability and trustworthiness of classification outcomes. The proposed model not only predicts the tumor grade but also…
26

Van den Broeck, Guy, Anton Lykov, Maximilian Schleich, and Dan Suciu. "On the Tractability of SHAP Explanations." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 7 (2021): 6505–13. http://dx.doi.org/10.1609/aaai.v35i7.16806.

Abstract:
SHAP explanations are a popular feature-attribution mechanism for explainable AI. They use game-theoretic notions to measure the influence of individual features on the prediction of a machine learning model. Despite a lot of recent interest from both academia and industry, it is not known whether SHAP explanations of common machine learning models can be computed efficiently. In this paper, we establish the complexity of computing the SHAP explanation in three important settings. First, we consider fully-factorized data distributions, and show that the complexity of computing the SHAP…
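For orientation, the object whose complexity is analyzed is the standard SHAP score; in the conditional-expectation setting it reads (standard definition, not quoted from the paper):

```latex
\[
\phi_i(f,x) \;=\; \sum_{S \subseteq F \setminus \{i\}}
\frac{|S|!\,\bigl(|F|-|S|-1\bigr)!}{|F|!}\,
\bigl(v(S \cup \{i\}) - v(S)\bigr),
\qquad
v(S) \;=\; \mathbb{E}\bigl[f(X) \mid X_S = x_S\bigr].
\]
```

The outer sum ranges over all 2^(|F|-1) subsets, so naive evaluation is exponential; this is why tractability results for specific model and distribution classes matter.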
27

Sattarzadeh, Sam, Mahesh Sudhakar, Anthony Lem, et al. "Explaining Convolutional Neural Networks through Attribution-Based Input Sampling and Block-Wise Feature Aggregation." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (2021): 11639–47. http://dx.doi.org/10.1609/aaai.v35i13.17384.

Abstract:
As an emerging field in Machine Learning, Explainable AI (XAI) has been offering remarkable performance in interpreting the decisions made by Convolutional Neural Networks (CNNs). To achieve visual explanations for CNNs, methods based on class activation mapping and randomized input sampling have gained great popularity. However, the attribution methods based on these techniques provide lower-resolution and blurry explanation maps that limit their explanation power. To circumvent this issue, visualization based on various layers is sought. In this work, we collect visualization maps…
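The class-activation-mapping building block these methods refine is itself tiny: a weighted sum of the last convolutional feature maps, rectified and normalized. A minimal numpy sketch with random stand-in activations (no real CNN; all shapes illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-ins for a CNN's last conv block: K feature maps of size H x W,
# and the classifier weights tying each map to the target class.
K, H, W = 16, 7, 7
feature_maps = rng.random((K, H, W))
class_weights = rng.normal(size=K)

def cam(feature_maps, class_weights):
    """Class Activation Map: weighted sum of feature maps, ReLU, normalize."""
    m = np.tensordot(class_weights, feature_maps, axes=1)  # (H, W)
    m = np.maximum(m, 0.0)
    return m / m.max() if m.max() > 0 else m

saliency = cam(feature_maps, class_weights)
print(saliency.shape)  # (7, 7): a low-resolution explanation map
```

The 7x7 output illustrates the low-resolution maps the abstract criticizes; the paper's contribution is aggregating maps from multiple layers to sharpen them.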
28

Cowan, Robert. "The Puzzle of Moral Memory." Journal of Moral Philosophy 17, no. 2 (2020): 202–28. http://dx.doi.org/10.1163/17455243-20192914.

Abstract:
A largely overlooked and puzzling feature of morality is Moral Memory: apparent cases of directly memorising, remembering, and forgetting first-order moral propositions seem odd. To illustrate: consider someone apparently memorising that capital punishment is wrong, or acting as if they are remembering that euthanasia is permissible, or reporting that they have forgotten that torture is wrong. I here clarify Moral Memory and identify desiderata of good explanations. I then proceed to amend the only extant account, Bugeja’s (2016) Non-Cognitivist explanation, but show that it isn’t superior to…
29

Pujari, Tejaskumar Dattatray. "Robust Explainable AI via Adversarial Latent Diffusion Models: Mitigating Gradient Obfuscation with Interpretable Feature Attribution." Journal of Information Systems Engineering and Management 10, no. 36s (2025): 488–503. https://doi.org/10.52783/jisem.v10i36s.6522.

Abstract:
This study introduces the Adversarial Latent Diffusion Explanations (ALDE) framework, a novel approach aimed at improving the robustness and interpretability of explainable AI (XAI) methods under adversarial conditions. An experimental research design was used to integrate diffusion models with adversarial training, focusing on deep image classification tasks. The framework was tested using two popular datasets—ImageNet and CIFAR-10—and two pre-trained deep learning models, ResNet-50 and WideResNet-28-10. The ALDE framework combines a Denoising Diffusion Probabilistic Model (DDPM) for input…
30

Yin, Yiqiao, and Yash Bingi. "Using Machine Learning to Classify Human Fetal Health and Analyze Feature Importance." BioMedInformatics 3, no. 2 (2023): 280–98. http://dx.doi.org/10.3390/biomedinformatics3020019.

Abstract:
The reduction of childhood mortality is an ongoing struggle and a commonly used factor in determining progress in the medical field. The under-5 mortality number is around 5 million around the world, with many of the deaths being preventable. In light of this issue, cardiotocograms (CTGs) have emerged as a leading tool to determine fetal health. By using ultrasound pulses and reading the responses, CTGs help healthcare professionals assess the overall health of the fetus to determine the risk of child mortality. However, interpreting the results of the CTGs is time consuming and inefficient…
31

Delaunay, Julien, Luis Galárraga, Christine Largouet, and Niels van Berkel. "Impact of Explanation Techniques and Representations on Users' Comprehension and Confidence in Explainable AI." Proceedings of the ACM on Human-Computer Interaction 9, no. 2 (2025): 1–28. https://doi.org/10.1145/3711011.

Abstract:
Local explainability, an important sub-field of eXplainable AI, focuses on describing the decisions of AI models for individual use cases by providing the underlying relationships between a model's inputs and outputs. While the machine learning community has made substantial progress in improving explanation accuracy and completeness, these explanations are rarely evaluated by the final users. In this paper, we evaluate the impact of various explanation and representation techniques on users' comprehension and confidence. Through a user study on two different domains, we assessed three…
32

Van den Broeck, Guy, Anton Lykov, Maximilian Schleich, and Dan Suciu. "On the Tractability of SHAP Explanations." Journal of Artificial Intelligence Research 74 (June 23, 2022): 851–86. http://dx.doi.org/10.1613/jair.1.13283.

Abstract:
SHAP explanations are a popular feature-attribution mechanism for explainable AI. They use game-theoretic notions to measure the influence of individual features on the prediction of a machine learning model. Despite a lot of recent interest from both academia and industry, it is not known whether SHAP explanations of common machine learning models can be computed efficiently. In this paper, we establish the complexity of computing the SHAP explanation in three important settings. First, we consider fully-factorized data distributions, and show that the complexity…
33

Qiu, Changqing, Fusheng Jin, and Yining Zhang. "Empowering CAM-Based Methods with Capability to Generate Fine-Grained and High-Faithfulness Explanations." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 5 (2024): 4587–95. http://dx.doi.org/10.1609/aaai.v38i5.28258.

Abstract:
Recently, the explanation of neural network models has garnered considerable research attention. In computer vision, CAM (Class Activation Map)-based methods and the LRP (Layer-wise Relevance Propagation) method are two common explanation methods. However, since most CAM-based methods can only generate global weights, they can only generate coarse-grained explanations at a deep layer. LRP and its variants, on the other hand, can generate fine-grained explanations. But the faithfulness of the explanations is too low. To address these challenges, in this paper, we propose FG-CAM (Fine-Grained CAM)…
34

Henry, Richard B. C., Angela Speck, Amanda I. Karakas, and Gary J. Ferland. "The curious conundrum regarding sulfur and oxygen abundances in planetary nebulae." Proceedings of the International Astronomical Union 7, S283 (2011): 384–85. http://dx.doi.org/10.1017/s1743921312011544.

Abstract:
We carefully consider numerous explanations for the sulfur abundance anomaly in planetary nebulae. No one rationale appears to be satisfactory, and we suggest that the ultimate explanation is likely to be a heretofore unidentified feature of the nebular gas which significantly impacts the sulfur ionization correction factor.
35

Chen, Tingyang, Dazhuo Qiu, Yinghui Wu, Arijit Khan, Xiangyu Ke, and Yunjun Gao. "View-based Explanations for Graph Neural Networks." Proceedings of the ACM on Management of Data 2, no. 1 (2024): 1–27. http://dx.doi.org/10.1145/3639295.

Abstract:
Generating explanations for graph neural networks (GNNs) has been studied to understand their behaviors in analytical tasks such as graph classification. Existing approaches aim to understand the overall results of GNNs rather than providing explanations for specific class labels of interest, and may return explanation structures that are hard to access and not directly queryable. We propose GVEX, a novel paradigm that generates Graph Views for GNN EXplanation. (1) We design a two-tier explanation structure called explanation views. An explanation view consists of a set of graph patterns and a…
36

Gjærum, Vilde B., Inga Strümke, Ole Andreas Alsos, and Anastasios M. Lekkas. "Explaining a Deep Reinforcement Learning Docking Agent Using Linear Model Trees with User Adapted Visualization." Journal of Marine Science and Engineering 9, no. 11 (2021): 1178. http://dx.doi.org/10.3390/jmse9111178.

Abstract:
Deep neural networks (DNNs) can be useful within the marine robotics field, but their utility value is restricted by their black-box nature. Explainable artificial intelligence methods attempt to understand how such black-boxes make their decisions. In this work, linear model trees (LMTs) are used to approximate the DNN controlling an autonomous surface vessel (ASV) in a simulated environment and then run in parallel with the DNN to give explanations in the form of feature attributions in real-time. How well a model can be understood depends not only on the explanation itself, but also on how…
37

Stahovich, Thomas F., and Anand Raghavan. "Computing Design Rationales by Interpreting Simulations." Journal of Mechanical Design 122, no. 1 (2000): 77–82. http://dx.doi.org/10.1115/1.533547.

Abstract:
We describe an approach for automatically computing a class of design rationales. Our focus is computing the purposes of the geometric features on the parts of a device. This is accomplished by first simulating the device with the feature in question removed and comparing this to a simulation of the nominal device. The differences between the simulations are indicative of the behaviors that the feature ultimately causes. Fundamental principles of mechanics are then used to construct a causal explanation that describes how the feature causes these behaviors. This explanation constitutes one of…
38

Bulatović, Vesna. "Non-Grammaticality of Aorist in Reporting Dependent Clauses in Montenegrin Language." Lingua Montenegrina 22, no. 2 (2018): 3–13. https://doi.org/10.46584/lm.v22i2.646.

Abstract:
The analysis that this paper reports on starts from an interesting explanation of a non-grammatical use of aorist in Bulgarian and checks whether the same explanation applies to aorist in Montenegrin. The analysis covers a number of key features of aorist addressed in grammars and other linguistic literature. The results show that the key feature of aorist is witnessing and that it largely determines whether aorist is allowed or disallowed in different syntactic structures. The feature of witnessing is also the reason why the use of aorist in reporting dependent clauses is non-grammatical.
39

Mohammadi, Majid, Ilaria Tiddi, and Annette Ten Teije. "Unlocking the Game: Estimating Games in Möbius Representation for Explanation and High-Order Interaction Detection." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 18 (2025): 19512–19. https://doi.org/10.1609/aaai.v39i18.34148.

Abstract:
Shapley value-based explanations are widely utilized to demystify predictions made by opaque models. Approaches to estimating Shapley values often approximate explanation games as inessential and estimate the Shapley value directly as feature attribution with a limited capacity to quantify feature interactions. This paper introduces a new approach for calculating Shapley values that relaxes the assumption of inessential games and is proven to provide additive feature attribution. The initial formulation of the proposed approach includes the estimation of game values in their Möbius…
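The classical connection the paper builds on is that, once a game is written in its Möbius (Harsanyi-dividend) representation m, the Shapley value of player i is simply the sum of m(S)/|S| over all coalitions S containing i, and the dividends m(S) for |S| > 1 directly quantify interactions. A brute-force sketch for toy games (exponential in the number of players; illustrative only):

```python
from itertools import combinations

def subsets(S):
    return [T for k in range(len(S) + 1) for T in combinations(S, k)]

def mobius(v, players):
    """Möbius transform: m(S) = sum over T subset of S of (-1)^(|S|-|T|) v(T)."""
    return {S: sum((-1) ** (len(S) - len(T)) * v(T) for T in subsets(S))
            for S in subsets(players)}

def shapley_from_mobius(m, players):
    """Harsanyi-dividend identity: phi_i = sum over S containing i of m(S)/|S|."""
    return {i: sum(val / len(S) for S, val in m.items() if i in S)
            for i in players}

v = lambda S: len(S) ** 2  # toy symmetric game
players = (0, 1, 2)
print(shapley_from_mobius(mobius(v, players), players))  # -> 3.0 for each player
```

The paper's contribution is estimating such game representations without assuming the interaction terms away.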
40

Miao, Shangbo, Chenxi Zhang, Yushun Piao, and Yalin Miao. "Classification and Model Explanation of Traditional Dwellings Based on Improved Swin Transformer." Buildings 14, no. 6 (2024): 1540. http://dx.doi.org/10.3390/buildings14061540.

Abstract:
The extraction of features and classification of traditional dwellings plays significant roles in preserving and ensuring the sustainable development of these structures. Currently, challenges persist in subjective classification and the accuracy of feature extraction. This study focuses on traditional dwellings in Gansu Province, China, employing a novel model named Improved Swin Transformer. This model, based on the Swin Transformer and parallel grouped Convolutional Neural Networks (CNN) branches, aims to enhance the accuracy of feature extraction and classification precision. …
41

Xia, Bohui, Xueting Wang, and Toshihiko Yamasaki. "Semantic Explanation for Deep Neural Networks Using Feature Interactions." ACM Transactions on Multimedia Computing, Communications, and Applications 17, no. 3s (2021): 1–19. http://dx.doi.org/10.1145/3474557.

Abstract:
Given the promising results obtained by deep-learning techniques in multimedia analysis, the explainability of predictions made by networks has become important in practical applications. We present a method to generate semantic and quantitative explanations that are easily interpretable by humans. The previous work to obtain such explanations has focused on the contributions of each feature, taking their sum to be the prediction result for a target variable; the lack of discriminative power due to this simple additive formulation led to low explanatory performance. …
42

Terra, Ahmad, Rafia Inam, and Elena Fersman. "BEERL: Both Ends Explanations for Reinforcement Learning." Applied Sciences 12, no. 21 (2022): 10947. http://dx.doi.org/10.3390/app122110947.

Abstract:
Deep Reinforcement Learning (RL) is a black-box method and is hard to understand because the agent employs a neural network (NN). To explain the behavior and decisions made by the agent, different eXplainable RL (XRL) methods are developed; for example, feature importance methods are applied to analyze the contribution of the input side of the model, and reward decomposition methods are applied to explain the components of the output end of the RL model. In this study, we present a novel method to connect explanations from both input and output ends of a black-box model, which results in…
43

Zhang, Ruihan, Prashan Madumal, Tim Miller, Krista A. Ehinger, and Benjamin I. P. Rubinstein. "Invertible Concept-based Explanations for CNN Models with Non-negative Concept Activation Vectors." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (2021): 11682–90. http://dx.doi.org/10.1609/aaai.v35i13.17389.

Abstract:
Convolutional neural network (CNN) models for computer vision are powerful but lack explainability in their most basic form. This deficiency remains a key challenge when applying CNNs in important domains. Recent work on explanations through feature importance of approximate linear models has moved from input-level features (pixels or segments) to features from mid-layer feature maps in the form of concept activation vectors (CAVs). CAVs contain concept-level information and could be learned via clustering. In this work, we rethink the ACE algorithm of Ghorbani et al., proposing an alternative…
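The mechanic is straightforward to sketch: because ReLU feature maps are non-negative, a matrix of mid-layer activations can be factored with non-negative matrix factorization into per-patch concept scores and non-negative concept activation vectors, and the product of the two factors approximately reconstructs (inverts back to) the original activations. Random data stands in for real feature maps below; all parameters are illustrative.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(4)

# Rows: image segments/patches; columns: channels of a mid-layer feature map.
# ReLU activations are non-negative, which is what makes NMF applicable.
activations = rng.random((500, 64))

nmf = NMF(n_components=8, init="nndsvda", max_iter=500, random_state=0)
concept_scores = nmf.fit_transform(activations)  # (500, 8) concept presence
concept_vectors = nmf.components_                # (8, 64) non-negative CAVs

# Approximate invertibility: scores @ vectors ~ activations, the property
# that lets explanations in concept space map back to activation space.
recon = concept_scores @ concept_vectors
print("reconstruction error:", float(np.linalg.norm(activations - recon)))
```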
44

Nakano, Shou, and Yang Liu. "Interpreting Temporal Shifts in Global Annual Data Using Local Surrogate Models." Mathematics 13, no. 4 (2025): 626. https://doi.org/10.3390/math13040626.

Abstract:
This paper focuses on explaining changes over time in globally sourced annual temporal data with the specific objective of identifying features in black-box models that contribute to these temporal shifts. Leveraging local explanations, a part of explainable machine learning/XAI, can yield explanations behind a country’s growth or downfall after making economic or social decisions. We employ a Local Interpretable Model-Agnostic Explanation (LIME) to shed light on national happiness indices, economic freedom, and population metrics, spanning variable time frames. Acknowledging the presence of…
45

Jin, Weina, Xiaoxiao Li, and Ghassan Hamarneh. "Evaluating Explainable AI on a Multi-Modal Medical Imaging Task: Can Existing Algorithms Fulfill Clinical Requirements?" Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (2022): 11945–53. http://dx.doi.org/10.1609/aaai.v36i11.21452.

Abstract:
Being able to explain the prediction to clinical end-users is a necessity to leverage the power of artificial intelligence (AI) models for clinical decision support. For medical images, a feature attribution map, or heatmap, is the most common form of explanation that highlights important features for AI models' prediction. However, it is unknown how well heatmaps perform on explaining decisions on multi-modal medical images, where each image modality or channel visualizes distinct clinical information of the same underlying biomedical phenomenon. Understanding such modality-dependent features…
46

Admassu, Tsehay. "Evaluation of Local Interpretable Model-Agnostic Explanation and Shapley Additive Explanation for Chronic Heart Disease Detection." Proceedings of Engineering and Technology Innovation 23 (January 1, 2023): 48–59. http://dx.doi.org/10.46604/peti.2023.10101.

Abstract:
This study aims to investigate the effectiveness of local interpretable model-agnostic explanation (LIME) and Shapley additive explanation (SHAP) approaches for chronic heart disease detection. The efficiency of LIME and SHAP are evaluated by analyzing the diagnostic results of the XGBoost model and the stability and quality of counterfactual explanations. Firstly, 1025 heart disease samples are collected from the University of California Irvine. Then, the performance of LIME and SHAP is compared by using the XGBoost model with various measures, such as consistency and proximity. …
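Such a comparison is typically set up with the shap library and an XGBoost model roughly as follows. This is a sketch: synthetic data stands in for the UCI heart-disease table, and exact API details vary across library versions.

```python
import numpy as np
import shap
import xgboost

# Synthetic stand-in for the 1025-sample, 13-feature UCI heart-disease table.
rng = np.random.default_rng(5)
X = rng.normal(size=(1025, 13))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1025) > 0).astype(int)

model = xgboost.XGBClassifier(n_estimators=100, max_depth=3).fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles efficiently.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # per-sample, per-feature attributions

print("global importance:", np.abs(shap_values).mean(axis=0).round(3))
```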
47

Lee, Eun-Hun, and Hyeoncheol Kim. "Feature-Based Interpretation of the Deep Neural Network." Electronics 10, no. 21 (2021): 2687. http://dx.doi.org/10.3390/electronics10212687.

Abstract:
The significant advantage of deep neural networks is that the upper layer can capture the high-level features of data based on the information acquired from the lower layer by stacking layers deeply. Since it is challenging to interpret what knowledge the neural network has learned, various studies for explaining neural networks have emerged to overcome this problem. However, these studies generate the local explanation of a single instance rather than providing a generalized global interpretation of the neural network model itself. To overcome such drawbacks of the previous approaches, we…
48

Li, Qiong. "Variations in developmental patterns across pragmatic features." Studies in Second Language Learning and Teaching 6, no. 4 (2016): 587–617. http://dx.doi.org/10.14746/ssllt.2016.6.4.3.

Abstract:
Drawing on the findings of longitudinal studies in uninstructed contexts over the last two decades, this synthesis explores variations in developmental patterns across second language (L2) pragmatic features. Two synthesis questions were addressed: (a) What are the variations in developmental patterns across pragmatic features?, and (b) What are the potential explanations for the variations? In response to the first question, previous studies showed that L2 pragmatic development is a non-linear, dynamic process, with developmental paces varying across pragmatic features (Ortactepe, 2013; …)
49

Arnold, Nina R., Daniel W. Heck, Arndt Bröder, Thorsten Meiser, and C. Dennis Boywitt. "Testing Hypotheses About Binding in Context Memory With a Hierarchical Multinomial Modeling Approach." Experimental Psychology 66, no. 3 (2019): 239–51. http://dx.doi.org/10.1027/1618-3169/a000442.

Abstract:
In experiments on multidimensional source memory, a stochastic dependency of source memory for different facets of an episode has been repeatedly demonstrated. This may suggest an integrated representation leading to mutual cuing in context retrieval. However, experiments involving a manipulated reinstatement of one source feature have often failed to affect retrieval of the other feature, suggesting unbound features or rather item-feature binding. The stochastic dependency found in former studies might be a spurious correlation due to aggregation across participants varying in…
50

Gebreyesus, Yibrah, Damian Dalton, Sebastian Nixon, Davide De Chiara, and Marta Chinnici. "Machine Learning for Data Center Optimizations: Feature Selection Using Shapley Additive exPlanation (SHAP)." Future Internet 15, no. 3 (2023): 88. http://dx.doi.org/10.3390/fi15030088.

Abstract:
The need for artificial intelligence (AI) and machine learning (ML) models to optimize data center (DC) operations increases as the volume of operations management data upsurges tremendously. These strategies can assist operators in better understanding their DC operations and help them make informed decisions upfront to maintain service reliability and availability. The strategies include developing models that optimize energy efficiency, identifying inefficient resource utilization and scheduling policies, and predicting outages. In addition to model hyperparameter tuning, feature subset…
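SHAP-based feature subset selection, as described here, usually reduces to ranking features by their mean absolute SHAP value and keeping the top k. A minimal sketch, with a random matrix standing in for real SHAP values and hypothetical feature names:

```python
import numpy as np

def select_features_by_shap(shap_values, feature_names, k=5):
    """Keep the k features with the largest mean |SHAP| across the dataset."""
    importance = np.abs(shap_values).mean(axis=0)
    top = np.argsort(importance)[::-1][:k]
    return [feature_names[i] for i in top], importance[top]

# Illustrative: an (n_samples, n_features) SHAP matrix from any explainer.
rng = np.random.default_rng(6)
sv = rng.normal(size=(200, 8)) * np.array([3, 0.1, 2, 0.2, 1, 0.1, 0.1, 0.5])
names = [f"sensor_{i}" for i in range(8)]  # hypothetical DC telemetry features
print(select_features_by_shap(sv, names, k=3))
```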