Journal articles on the topic 'Explanation'

Consult the top 50 journal articles for your research on the topic 'Explanation.'


1

Atanasova, Pepa, Jakob Grue Simonsen, Christina Lioma, and Isabelle Augenstein. "Diagnostics-Guided Explanation Generation." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (2022): 10445–53. http://dx.doi.org/10.1609/aaai.v36i10.21287.

Abstract:
Explanations shed light on a machine learning model's rationales and can aid in identifying deficiencies in its reasoning process. Explanation generation models are typically trained in a supervised way given human explanations. When such annotations are not available, explanations are often selected as those portions of the input that maximise a downstream task's performance, which corresponds to optimising an explanation's Faithfulness to a given model. Faithfulness is one of several so-called diagnostic properties, which prior work has identified as useful for gauging the quality of an explanation…
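Faithfulness, as used here, is commonly measured by erasure: remove the input portions an explanation marks as important and observe how much the model's confidence drops. A minimal sketch of that idea, not the paper's method; the predict function, token list, and mask token below are illustrative stand-ins:

```python
def comprehensiveness(predict_proba, tokens, important_idx, mask_token="[MASK]"):
    """Drop in prediction confidence when the rationale's tokens are masked.

    A large drop suggests the explanation is faithful: the model really
    relied on the tokens the explanation highlights.
    """
    full = predict_proba(tokens)                      # confidence on the intact input
    masked = [mask_token if i in important_idx else t
              for i, t in enumerate(tokens)]
    reduced = predict_proba(masked)                   # confidence without the rationale
    return full - reduced

# Toy usage with a stand-in "model" that scores the share of positive words.
POSITIVE = {"great", "good"}
def toy_model(tokens):
    return sum(t in POSITIVE for t in tokens) / max(len(tokens), 1)

tokens = ["the", "film", "was", "great"]
print(comprehensiveness(toy_model, tokens, important_idx={3}))  # 0.25: masking "great" hurts
```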
2

Clark, Stephen R. L. "The Limits of Explanation: Limited Explanations." Royal Institute of Philosophy Supplement 27 (March 1990): 195–210. http://dx.doi.org/10.1017/s1358246100005117.

Abstract:
When I was first approached to read a paper at the conference from which this volume takes its beginning I expected that Flint Schier, with whom I had taught a course on the Philosophy of Biology in my years at Glasgow, would be with us to comment and to criticize. I cannot let this occasion pass without expressing once again my own sense of loss. I am sure that we would all have gained by his presence, and hope that he would find things both to approve, and disapprove, in the following venture.
3

Kleih, Björn-Christian. "Die mündliche Erklärung zur Abstimmung gemäß § 31 Absatz 1 GOBT – eine parlamentarische Wundertüte mit Potenzial?" [The Oral Explanation of Vote under § 31(1) GOBT: A Parliamentary Grab Bag with Potential?] Zeitschrift für Parlamentsfragen 51, no. 4 (2020): 865–87. http://dx.doi.org/10.5771/0340-1758-2020-4-865.

Abstract:
According to the Rules of Procedure of the German Bundestag ("GOBT"), every Member of Parliament is granted a five-minute verbal explanation of vote. It is granted for nearly every kind of vote in the House. The verbal explanation is often considered a privilege for MPs going against the position taken by their group. Yet, it is also used to confirm the party position, and it is abused to continue already closed debates. In either case, it can be a grab bag for both parliament's plenum and its president; the verbal explanation's content is only revealed when the explanation is given.
4

Fogelin, Lars. "Inference to the Best Explanation: A Common and Effective Form of Archaeological Reasoning." American Antiquity 72, no. 4 (2007): 603–26. http://dx.doi.org/10.2307/25470436.

Abstract:
Processual and postprocessual archaeologists implicitly employ the same epistemological system to evaluate the worth of different explanations: inference to the best explanation. This is good since inference to the best explanation is the most effective epistemological approach to archaeological reasoning available. Underlying the logic of inference to the best explanation is the assumption that the explanation that accounts for the most evidence is also most likely to be true. This view of explanation often reflects the practice of archaeological reasoning better than either the hypothetico-deductive…
5

Brdnik, Saša, Vili Podgorelec, and Boštjan Šumak. "Assessing Perceived Trust and Satisfaction with Multiple Explanation Techniques in XAI-Enhanced Learning Analytics." Electronics 12, no. 12 (2023): 2594. http://dx.doi.org/10.3390/electronics12122594.

Abstract:
This study aimed to observe the impact of eight explainable AI (XAI) explanation techniques on user trust and satisfaction in the context of XAI-enhanced learning analytics while comparing two groups of STEM college students based on their Bologna study level, using various established feature relevance techniques, certainty, and comparison explanations. Overall, the students reported the highest trust in local feature explanation in the form of a bar graph. Additionally, master's students presented with global feature explanations also reported high trust in this form of explanation.
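A local feature-relevance explanation of the kind the participants trusted most is essentially a signed bar chart per prediction. A small sketch with made-up feature names and contribution values, not data from the study:

```python
import matplotlib.pyplot as plt

# Hypothetical local feature relevances for one learner's predicted outcome;
# names and values are illustrative only.
features = ["forum posts", "video time", "quiz attempts", "late submissions"]
relevance = [0.42, 0.31, 0.15, -0.22]          # signed contribution to the prediction

colors = ["tab:blue" if r >= 0 else "tab:red" for r in relevance]
plt.barh(features, relevance, color=colors)
plt.axvline(0, color="black", linewidth=0.8)   # separates helping vs. hurting features
plt.xlabel("contribution to predicted performance")
plt.title("Local feature relevance for one learner")
plt.tight_layout()
plt.show()
```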
6

Weisberg, Deena Skolnick, Frank C. Keil, Joshua Goodstein, Elizabeth Rawson, and Jeremy R. Gray. "The Seductive Allure of Neuroscience Explanations." Journal of Cognitive Neuroscience 20, no. 3 (2008): 470–77. http://dx.doi.org/10.1162/jocn.2008.20040.

Abstract:
Explanations of psychological phenomena seem to generate more public interest when they contain neuroscientific information. Even irrelevant neuroscience information in an explanation of a psychological phenomenon may interfere with people's abilities to critically consider the underlying logic of this explanation. We tested this hypothesis by giving naïve adults, students in a neuroscience course, and neuroscience experts brief descriptions of psychological phenomena followed by one of four types of explanation, according to a 2 (good explanation vs. bad explanation) × 2 (without neuroscience…
7

Skorupski, John. "Explanation in the Social Sciences: Explanation and Understanding in Social Science." Royal Institute of Philosophy Supplement 27 (March 1990): 119–34. http://dx.doi.org/10.1017/s1358246100005075.

Abstract:
Hempelian orthodoxy on the nature of explanation in general, and on explanation in the social sciences in particular, holds that (a) full explanations are arguments, (b) full explanations must include at least one law, and (c) reason explanations are causal. David Ruben disputes (a) and (b) but he does not dispute (c). Nor does he dispute that 'explanations in both natural and social science need laws in other ways, even when not as part of the explanation itself' (p. 97 above). The distance between his view and the covering law theory, he points out, 'is not as great as it may first appear to be' (p. 97)…
8

Yamaguchi, Shin'ya, and Kosuke Nishida. "Explanation Bottleneck Models." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 20 (2025): 21886–94. https://doi.org/10.1609/aaai.v39i20.35495.

Abstract:
Recent concept-based interpretable models have succeeded in providing meaningful explanations by pre-defined concept sets. However, the dependency on the pre-defined concepts restricts the application because of the limited number of concepts for explanations. This paper proposes a novel interpretable deep neural network called explanation bottleneck models (XBMs). XBMs generate a text explanation from the input without pre-defined concepts and then produce the final task prediction based on the generated explanation by leveraging pre-trained vision-language encoder-decoder models. To achieve both…
9

Swinburne, Richard. "The Limits of Explanation: The Limits of Explanation." Royal Institute of Philosophy Supplement 27 (March 1990): 177–93. http://dx.doi.org/10.1017/s1358246100005105.

Abstract:
In purporting to explain the occurrence of some event or process we cite the causal factors which, we assert, brought it about or keep it in being. The explanation is a true one if those factors did indeed bring it about or keep it in being. In discussing explanation I shall henceforward (unless I state otherwise) concern myself only with true explanations. I believe that there are two distinct kinds of way in which causal factors operate in the world, two distinct kinds of causality, and so two distinct kinds of explanation. For historical reasons, I shall call these kinds of causality and explanation…
10

Gillett, Carl. "Why Constitutive Mechanistic Explanation Cannot Be Causal." American Philosophical Quarterly 57, no. 1 (2020): 31–50. http://dx.doi.org/10.2307/48570644.

Abstract:
In his "New Consensus" on explanation, Wesley Salmon (1989) famously argued that there are two kinds of scientific explanation: global, derivational, and unifying explanations, and then local, ontic explanations backed by causal relations. Following Salmon's New Consensus, the dominant view in philosophy of science is what I term "neo-Causalism", which assumes that all ontic explanations of singular fact/event are causal explanations backed by causal relations, and that scientists only search for causal patterns or relations and only offer causal explanations of singular facts/events.
11

Chalyi, Serhii, and Volodymyr Leshchynskyi. "Possible Evaluation of the Correctness of Explanations to the End User in an Artificial Intelligence System." Advanced Information Systems 7, no. 4 (2023): 75–79. http://dx.doi.org/10.20998/2522-9052.2023.4.10.

Abstract:
The subject of this paper is the process of evaluation of explanations in an artificial intelligence system. The aim is to develop a method for forming a possible evaluation of the correctness of explanations for the end user in an artificial intelligence system. The evaluation of the correctness of explanations makes it possible to increase the user's confidence in the solution of an artificial intelligence system and, as a result, to create conditions for the effective use of this solution. Aims: to structure explanations according to the user's needs; to develop an indicator of the correctness…
12

Kostić, Daniel, and Kareem Khalifa. "The directionality of topological explanations." Synthese 199, no. 5-6 (2021): 14143–65. http://dx.doi.org/10.1007/s11229-021-03414-y.

Abstract:
Proponents of ontic conceptions of explanation require all explanations to be backed by causal, constitutive, or similar relations. Among their justifications is that only ontic conceptions can do justice to the 'directionality' of explanation, i.e., the requirement that if X explains Y, then not-Y does not explain not-X. Using topological explanations as an illustration, we argue that non-ontic conceptions of explanation have ample resources for securing the directionality of explanations. The different ways in which neuroscientists rely on multiplexes involving both functional and anatomical…
13

Chen, Yuhao, Shi-Jun Luo, Hyoil Han, Jun Miyazaki, and Alfrin Letus Saldanha. "Generating Personalized Explanations for Recommender Systems Using a Knowledge Base." International Journal of Multimedia Data Engineering and Management 12, no. 4 (2021): 20–37. http://dx.doi.org/10.4018/ijmdem.2021100102.

Abstract:
In the last decade, we have seen an increase in the need for interpretable recommendations. Explaining why a product is recommended to a user increases user trust and makes the recommendations more acceptable. The authors propose a personalized explanation generation system, PEREXGEN (personalized explanation generation), that generates personalized explanations for recommender systems using a model-agnostic approach. The proposed model consists of a recommender and an explanation module. Since they implement a model-agnostic approach to generate personalized explanations, they focus more on the…
14

Morton, Adam. "Mathematical Modelling and Contrastive Explanation." Canadian Journal of Philosophy Supplementary Volume 16 (1990): 251–70. http://dx.doi.org/10.1080/00455091.1990.10717228.

Abstract:
This is an enquiry into flawed explanations. Most of the effort in studies of the concept of explanation, scientific or otherwise, has gone into the contrast between clear cases of explanation and clear non-explanations.
15

Schurz, Gerhard. "Causality and Unification: How Causality Unifies Statistical Regularities." THEORIA. An International Journal for Theory, History and Foundations of Science 30, no. 1 (2015): 73. http://dx.doi.org/10.1387/theoria.11913.

Abstract:
Two key ideas of scientific explanation - explanations as causal information and explanation as unification - have frequently been set into mutual opposition. This paper proposes a "dialectical solution" to this conflict, by arguing that causal explanations are preferable to non-causal explanations because they lead to a higher degree of unification at the level of the explanation of statistical regularities. The core axioms of the theory of causal nets (TC) are justified because they give the best if not the only unifying explanation of two statistical phenomena: screening off and linking up.
16

Janssen, Annelli. "Het web-model." Algemeen Nederlands Tijdschrift voor Wijsbegeerte 111, no. 3 (2019): 419–32. http://dx.doi.org/10.5117/antw2019.3.007.jans.

Abstract:
The web-model: A new model of explanation for neuroimaging studies. What can neuroimaging tell us about the relation between our brain and our mind? A lot, or so I argue. But neuroscientists should update their model of explanation. Currently, many explanations are (implicitly) based on what I call the 'mapping model': a model of explanation which centers on mapping relations between cognition and the brain. I argue that these mappings give us very little information, and that instead, we should focus on finding causal relations. If we take a difference-making approach to causation, we…
17

Belkoniene, Miloud. "Explanationism, Circularity and Non-Evaluative Grounding." Grazer Philosophische Studien 101, no. 1 (2024): 28–46. https://doi.org/10.1163/18756735-00000212.

Abstract:
The present article examines two important challenges raised by Steup for explanationist accounts of evidential fit. The first challenge targets the notion of available explanation, which is key to any explanationist account of evidential fit. According to Steup, any plausible construal of the notion of available explanation already presupposes the notion of evidential fit. In response to that challenge, an alternative conception of what it takes for an explanation to be available to a subject is offered and shown to be able to shed better light on the specific role played by that notion…
18

Jansson, Lina. "Network explanations and explanatory directionality." Philosophical Transactions of the Royal Society B: Biological Sciences 375, no. 1796 (2020): 20190318. http://dx.doi.org/10.1098/rstb.2019.0318.

Abstract:
Network explanations raise foundational questions about the nature of scientific explanation. The challenge discussed in this article comes from the fact that network explanations are often thought to be non-causal, i.e. they do not describe the dynamical or mechanistic interactions responsible for some behaviour; instead they appeal to topological properties of network models describing the system. These non-causal features are often thought to be valuable precisely because they do not invoke mechanistic or dynamical interactions and provide insights that are not available through causal explanations…
19

Chalyi, Sergiy, and Volodymyr Leshchynskyi. "Construction of explanations in intelligent systems based on the formation of causal dependencies." Management Information System and Devises, no. 180 (May 22, 2024): 4–15. http://dx.doi.org/10.30837/0135-1710.2024.180.004.

Abstract:
The article considers the process of building explanations in intelligent information systems. A causal approach to building explanations in such systems is developed, which creates conditions for automated refinement of explanations in order to make them understandable for users, taking into account their goals and needs. The explanation is built using the indicators of possibility and necessity, which makes it possible to take into account the uncertainty of the intermediate data of the intelligent system presented in the form of a "black box". Within the framework of the proposed approach…
20

Chalyi, Serhii, Volodymyr Leshchynskyi, and Iryna Leshchynska. "Detailing Explanations in the Recommender System Based on Matching Temporal Knowledge." Eastern-European Journal of Enterprise Technologies 4, no. 2 (106) (2020): 6–13. https://doi.org/10.15587/1729-4061.2020.210013.

Abstract:
The problem of matching knowledge in the temporal aspect when constructing explanations for recommendations is considered. Matching allows reducing the influence of conflicting knowledge on the explanation in a recommender system. A model of knowledge representation in the form of a temporal rule with an explanation constraint is proposed. The temporal rule sets the order for two sets of events of the same type that occurred at two different time intervals. An explanation constraint establishes a correspondence between the temporal order represented by the rule for a pair of intervals…
21

Preisendörfer, Peter, Ansgar Bitz, and Frans J. Bezuidenhout. "In Search of Black Entrepreneurship: Why Is There a Lack of Entrepreneurial Activity among the Black Population in South Africa?" Journal of Developmental Entrepreneurship 17, no. 01 (2012): 1250006. http://dx.doi.org/10.1142/s1084946712500069.

Abstract:
Compared to other ethnic groups, the black population of South Africa has a low participation rate in entrepreneurship activities. The aim of this article is to explain this empirical fact. Based on twenty-four expert interviews, five patterns of explanation are presented and elaborated: a historical apartheid explanation, a financial resources explanation, a human capital explanation, a traits and mindset explanation, and a social capital and network explanation. The historical apartheid explanation cannot be qualified independently of the other explanations as a distinctive explanation…
22

Hardcastle, Valerie Gray. "[Explanation] Is Explanation Better." Philosophy of Science 64, no. 1 (1997): 154–60. http://dx.doi.org/10.1086/392540.

23

Redhead, Michael. "Explanation in Physics: Explanation." Royal Institute of Philosophy Supplement 27 (March 1990): 135–54. http://dx.doi.org/10.1017/s1358246100005087.

Abstract:
In what sense do the sciences explain? Or do they merely describe what is going on without answering why-questions at all? But cannot description at an appropriate 'level' provide all that we can reasonably ask of an explanation? Well, what do we mean by explanation anyway? What, if anything, gets left out when we provide a so-called scientific explanation? Are there limits of explanation in general, and scientific explanation, in particular? What are the criteria for a good explanation? Is it possible to satisfy all the desiderata simultaneously? If not, which should we regard as paramount?
24

Ceylan, İsmail İlkan, Thomas Lukasiewicz, Enrico Malizia, Cristian Molinaro, and Andrius Vaicenavičius. "Preferred Explanations for Ontology-Mediated Queries under Existential Rules." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 7 (2021): 6262–70. http://dx.doi.org/10.1609/aaai.v35i7.16778.

Abstract:
Recently, explanations for query answers under existential rules have been investigated, where an explanation is an inclusion-minimal subset of a given database that, together with the ontology, entails the query. In this paper, we take a step further and study explanations under different minimality criteria. In particular, we first study cardinality-minimal explanations and hence focus on deriving explanations of minimum size. We then study a more general preference order induced by a weight distribution. We assume that every database fact is annotated with a (penalization) weight, and we are…
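For intuition, a cardinality-minimal explanation can be found by brute force: try subsets of the database in increasing size and keep the first ones that entail the query. A toy sketch under strong simplifications (propositional facts and a naive forward chainer standing in for existential-rule reasoning; all names are made up):

```python
from itertools import combinations

def entails(facts, rules, query):
    """Naive forward chaining over simple Horn rules (body tuple -> head)."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if set(body) <= known and head not in known:
                known.add(head)
                changed = True
    return query in known

def minimum_explanations(database, rules, query):
    """All cardinality-minimal subsets of the database entailing the query."""
    for k in range(len(database) + 1):
        found = [set(s) for s in combinations(database, k)
                 if entails(s, rules, query)]
        if found:
            return found          # smallest k wins; ties are all returned
    return []

# Toy ontology: anyone who teaches is staff; staff with an office are reachable.
rules = [(("teaches_ann",), "staff_ann"),
         (("staff_ann", "office_ann"), "reachable_ann")]
database = ["teaches_ann", "office_ann", "likes_coffee_ann"]
print(minimum_explanations(database, rules, "reachable_ann"))
# -> [{'teaches_ann', 'office_ann'}] (set order may vary); the irrelevant fact never appears
```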
25

Meng, Fanyu, Xin Liu, Zhaodan Kong, and Xin Chen. "CohEx: A Generalized Framework for Cohort Explanation." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 18 (2025): 19440–48. https://doi.org/10.1609/aaai.v39i18.34140.

Abstract:
eXplainable Artificial Intelligence (XAI) has garnered significant attention for enhancing transparency and trust in machine learning models. However, most existing explanation techniques focus either on offering a holistic view of the explainee model (global explanation) or on individual instances (local explanation), while the middle ground, i.e., cohort-based explanation, is less explored. Cohort explanations offer insights into the explainee's behavior on a specific group or cohort of instances, enabling a deeper understanding of model decisions within a defined context.
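As a rough illustration of that middle ground (a generic sketch, not the CohEx algorithm itself), one can cluster instances into cohorts and compute a separate importance profile per cohort:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Form cohorts by clustering in input space, then explain each cohort separately.
cohorts = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
for c in np.unique(cohorts):
    idx = cohorts == c
    imp = permutation_importance(model, X[idx], y[idx], n_repeats=10, random_state=0)
    print(f"cohort {c}: feature importances {np.round(imp.importances_mean, 3)}")
```

Each cohort's profile sits between a single-instance attribution and a dataset-wide average, which is the scope the abstract describes.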
26

Gilbert, Nigel. "Explanation and dialogue." Knowledge Engineering Review 4, no. 3 (1989): 235–47. http://dx.doi.org/10.1017/s026988890000504x.

Abstract:
Recent approaches to providing advisory knowledge-based systems with explanation capabilities are reviewed. The importance of explaining a system's behaviour and conclusions was recognized early in the development of expert systems. Initial approaches were based on the presentation of an edited proof trace to the user, but while helpful for debugging knowledge bases, these explanations are of limited value to most users. Current work aims to expand the kinds of explanation which can be offered and to embed explanations into a dialogue so that the topic of the explanation can be negotiated…
27

Swamy, Vinitra, Davide Romano, Bhargav Srinivasa Desikan, Oana-Maria Camburu, and Tanja Käser. "iLLuMinaTE: An LLM-XAI Framework Leveraging Social Science Explanation Theories Towards Actionable Student Performance Feedback." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 27 (2025): 28431–39. https://doi.org/10.1609/aaai.v39i27.35065.

Abstract:
Recent advances in eXplainable AI (XAI) for education have highlighted a critical challenge: ensuring that explanations for state-of-the-art models are understandable for non-technical users such as educators and students. In response, we introduce iLLuMinaTE, a zero-shot, chain-of-prompts LLM-XAI pipeline inspired by Miller's (2019) cognitive model of explanation. iLLuMinaTE is designed to deliver theory-driven, actionable feedback to students in online courses. iLLuMinaTE navigates three main stages — causal connection, explanation selection, and explanation presentation — with variations drawn…
28

de Jong, Sander, Ville Paananen, Benjamin Tag, and Niels van Berkel. "Cognitive Forcing for Better Decision-Making: Reducing Overreliance on AI Systems Through Partial Explanations." Proceedings of the ACM on Human-Computer Interaction 9, no. 2 (2025): 1–30. https://doi.org/10.1145/3710946.

Abstract:
In AI-assisted decision-making, explanations aim to enhance transparency and user trust but can also lead to negligence. In two separate studies, we explore the use of partial explanations to activate cognitive forcing and increase user engagement. In Study I (N = 264), we present participants with weighted graphs and ask them to identify the shortest paths. In Study II (N = 210), participants correct spelling and grammar mistakes in short text segments. In both studies, we provide a solution suggestion accompanied by either no explanation, a full explanation, or a partial explanation.
29

Caro-Martínez, Marta, Guillermo Jiménez-Díaz, and Juan A. Recio-García. "Conceptual Modeling of Explainable Recommender Systems: An Ontological Formalization to Guide Their Design and Development." Journal of Artificial Intelligence Research 71 (July 24, 2021): 557–89. http://dx.doi.org/10.1613/jair.1.12789.

Abstract:
With the increasing importance of e-commerce and the immense variety of products, users need help to decide which ones are the most interesting to them. This is one of the main goals of recommender systems. However, users' trust may be compromised if they do not understand how or why the recommendation was achieved. Here, explanations are essential to improve user confidence in recommender systems and to make the recommendation useful. Providing explanation capabilities in recommender systems is not an easy task, as their success depends on several aspects such as the explanation's goal…
30

Chalyi, Serhii, and Irina Leshchynska. "The Conceptual Mental Model of Explanation in an Artificial Intelligence System." Bulletin of National Technical University "KhPI". Series: System Analysis, Control and Information Technologies, no. 1 (9) (July 15, 2023): 70–75. http://dx.doi.org/10.20998/2079-0023.2023.01.11.

Abstract:
The subject of research is the process of formation of explanations in artificial intelligence systems. To solve the problem of the opacity of decision-making in artificial intelligence systems, users should receive an explanation of the decisions made. The explanation allows users to trust these decisions and ensures their use in practice. The purpose of the work is to develop a conceptual mental model of explanation that captures the basic dependencies between the input data, the actions taken to obtain a result in an intelligent system, and its final solution.
31

Chalyi, Sergiy F., and Volodymyr O. Leshchynskyi. "Temporal-causal methods for constructing explanations in artificial intelligence systems." Management Information System and Devises, no. 181 (September 16, 2024): 91–99. http://dx.doi.org/10.30837/0135-1710.2024.181.091.

Abstract:
The subject of the research is the process of constructing explanations in artificial intelligence systems. The goal is to develop a temporal-causal approach to constructing explanations in artificial intelligence systems to present explanations both for the decision-making process and the obtained decision, and to make them transparent and understandable for solving practical user tasks. Tasks: structuring the levels of explanation representation considering temporal and causal aspects; developing a generalized method for constructing explanations using temporal and causal dependencies; developing…
32

Richards, Graham. "The Psychology of Explanation." History & Philosophy of Psychology 7, no. 1 (2005): 53–61. http://dx.doi.org/10.53841/bpshpp.2005.7.1.53.

Abstract:
While there are extensive literatures on the nature of scientific explanation, 'psychological explanation' and the logical character of 'good' explanations, relatively little has been written that considers the seeking and offering of explanations as psychological phenomena in their own right. In this paper it is suggested that, fundamentally, explanations are responses to specific puzzles and that a 'good' explanation is, operationally, simply one that leaves the person requiring it no longer feeling puzzled. (To achieve this it may of course have to meet all kinds of criteria set by the individual…
33

Rittle-Johnson, Bethany, and Abbey M. Loehr. "Eliciting explanations: Constraints on when self-explanation aids learning." Psychonomic Bulletin & Review 24, no. 5 (2016): 1501–10. http://dx.doi.org/10.3758/s13423-016-1079-5.

34

Mamun, Tauseef Ibne, Kenzie Baker, Hunter Malinowski, Robert R. Hoffman, and Shane T. Mueller. "Assessing Collaborative Explanations of AI using Explanation Goodness Criteria." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 65, no. 1 (2021): 988–93. http://dx.doi.org/10.1177/1071181321651307.

Abstract:
Explainable AI represents an increasingly important category of systems that attempt to support human understanding and trust in machine intelligence and automation. Typical systems rely on algorithms to help understand underlying information about decisions and establish justified trust and reliance. Researchers have proposed using goodness criteria to measure the quality of explanations as a formative evaluation of an XAI system, but these criteria have not been systematically investigated in the literature. To explore this, we present a novel collaborative explanation system (CXAI) and propose…
35

Páez, Andrés. "Artificial explanations: the epistemological interpretation of explanation in AI." Synthese 170, no. 1 (2008): 131–46. http://dx.doi.org/10.1007/s11229-008-9361-3.

36

Madumal, Prashan. "Explainable Agency in Reinforcement Learning Agents." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 10 (2020): 13724–25. http://dx.doi.org/10.1609/aaai.v34i10.7134.

Abstract:
This thesis explores how reinforcement learning (RL) agents can provide explanations for their actions and behaviours. As humans, we build causal models to encode cause-effect relations of events and use these to explain why events happen. Taking inspiration from cognitive psychology and social science literature, I build causal explanation models and explanation dialogue models for RL agents. By mimicking human-like explanation models, these agents can provide explanations that are natural and intuitive to humans.
37

Faul, Bogdan V. "Externalism about Moral Responsibility: Modification of A. Mele’s Thought Experiment." Ethical Thought 21, no. 1 (2021): 40–49. http://dx.doi.org/10.21146/2074-4870-2021-21-1-40-49.

Abstract:
The author modifies A. Mele's thought experiment for externalism about moral responsibility, which suggests that the agent's history partially determines whether the agent is morally responsible for particular actions, or the consequences of actions. The original thought experiment constructs a situation in which the individual is not morally responsible for the killing because of manipulation, that is, for a reason external to the agent. A. Mele's theory was criticized by A.V. Mertsalov, D.B. Volkov, and V.V. Vasiliev at the seminar organized by the Moscow Center for Consciousness.
38

Lei, Xia, Jia-Jiang Lin, Xiong-Lin Luo, and Yongkai Fan. "Explaining deep residual networks predictions with symplectic adjoint method." Computer Science and Information Systems, no. 00 (2023): 47. http://dx.doi.org/10.2298/csis230310047l.

Abstract:
Understanding the decisions of deep residual networks (ResNets) is receiving much attention as a way to ensure their security and reliability. Recent research, however, lacks theoretical analysis to guarantee the faithfulness of explanations and could produce an unreliable explanation. In order to explain ResNets predictions, we suggest a provably faithful explanation for ResNet using a surrogate explainable model, a neural ordinary differential equation network (Neural ODE). First, ResNets are proved to converge to a Neural ODE and the Neural ODE is regarded as a surrogate model to explain the decisions…
39

Hiller, Sara, Stefan Rumann, Kirsten Berthold, and Julian Roelle. "Example-based learning: should learners receive closed-book or open-book self-explanation prompts?" Instructional Science 48, no. 6 (2020): 623–49. http://dx.doi.org/10.1007/s11251-020-09523-4.

Abstract:
In learning from examples, students are often first provided with basic instructional explanations of new principles and concepts and second with examples thereof. In this sequence, it is important that learners self-explain by generating links between the basic instructional explanations' content and the examples. Therefore, it is well established that learners receive self-explanation prompts. However, there is hardly any research on whether these prompts should be provided in a closed-book format—in which learners cannot access the basic instructional explanations during self-explaining…
40

Izza, Yacine, Alexey Ignatiev, and Joao Marques-Silva. "On Tackling Explanation Redundancy in Decision Trees." Journal of Artificial Intelligence Research 75 (September 29, 2022): 261–321. http://dx.doi.org/10.1613/jair.1.13575.

Abstract:
Decision trees (DTs) epitomize the ideal of interpretability of machine learning (ML) models. The interpretability of decision trees motivates explainability approaches by so-called intrinsic interpretability, and it is at the core of recent proposals for applying interpretable ML models in high-risk applications. The belief in DT interpretability is justified by the fact that explanations for DT predictions are generally expected to be succinct. Indeed, in the case of DTs, explanations correspond to DT paths. Since decision trees are ideally shallow, and so paths contain far fewer features than…
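The path explanations discussed here are easy to materialise with a standard library; a short sketch using scikit-learn's decision_path (the dataset, sample, and depth are arbitrary choices for illustration):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

data = load_iris()
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

# The explanation for one prediction is the root-to-leaf path of threshold tests.
x = data.data[100:101]
tree = clf.tree_
for node in clf.decision_path(x).indices:        # nodes visited by this sample
    if tree.children_left[node] == tree.children_right[node]:
        print("leaf -> predicted class:", data.target_names[clf.predict(x)[0]])
    else:
        feat, thr = tree.feature[node], tree.threshold[node]
        op = "<=" if x[0, feat] <= thr else ">"
        print(f"{data.feature_names[feat]} {op} {thr:.2f}")
```

The paper's point is that such paths can still contain redundant tests; the printed conditions are the raw path, before any redundancy removal.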
41

Maroney, James J., Timothy J. Rupert, and Martha L. Wartick. "The Perceived Fairness of Taxing Social Security Benefits: The Effect of Explanations Based on Different Dimensions of Tax Equity." Journal of the American Taxation Association 24, no. 2 (2002): 79–92. http://dx.doi.org/10.2308/jata.2002.24.2.79.

Abstract:
In this study, we construct explanations for the taxation of social security benefits based on previously identified dimensions of fairness (exchange, horizontal, and vertical equity). We then conduct an experiment to examine whether providing senior citizen taxpayers with explanations increases the perceived fairness of taxing social security. The results indicate that for those subjects with the greatest self-interest (subjects currently taxed on a portion of their social security benefits), the exchange equity explanation had the most consistent positive effects on both acceptance of the ex…
42

Kwan, Kai-man. "An Atheistic Argument from Naturalistic Explanations of Religious Belief: A Preliminary Reply to Robert Nola." Religions 13, no. 11 (2022): 1084. http://dx.doi.org/10.3390/rel13111084.

Abstract:
Robert Nola has recently defended an argument against the existence of God on the basis of naturalistic explanations of religious belief. I will critically evaluate his argument in this paper. Nola's argument takes the form of an inference to the best explanation: since the naturalistic stance offers a better explanation of religious belief relative to the theistic explanation, the ontology of God(s) is eliminated. I rebut Nola's major assumption that naturalistic explanations and theistic explanations of religion are incompatible. I go on to criticize Nola's proposed naturalistic explanations…
43

Núñez Michea, Felipe. "Explicación e inferencialismo." Culturas Científicas 5, no. 1 (2024): 32–39. https://doi.org/10.35588/cc.v5i1.6588.

Abstract:
In this paper, I propose a possible anti-realist solution to the paradox of explanation. This proposal is in line with the instrumentalist position of Reiss (2012b, 2013) but goes further: a normative inferentialist position on explanation is proposed. First, I will show some fundamental aspects of inferentialism in the context of explanation, and then, I will show how an inferentialist position regarding the paradox of explanation reduces to accepting the following three propositions: 1. models are true in the sense of true explanations, 2. models explain, and 3. only true explanations explain.
44

Atauchi, Paul Dany Flores, André Levi Zanon, Leonardo Chaves Dutra da Rocha, and Marcelo Garcia Manzato. "Do Calibrated Recommendations Affect Explanations? A Study on Post-Hoc Adjustments." Journal on Interactive Systems 16, no. 1 (2025): 441–60. https://doi.org/10.5753/jis.2025.5563.

Abstract:
Recommender systems generate suggestions by identifying relationships among past interactions, user similarities, and item metadata. Recently, there has been an increased focus on evaluating recommendations based not only on accuracy but also on aspects like transparency and calibration. Transparency is important, as explanations can enhance user trust and persuasion, while calibration aligns users' interests with recommendation lists, improving fairness and reducing popularity bias. Traditionally, calibration and explanation are applied in post-processing. Our study investigates two key research…
45

Eiras-Franco, Carlos, Anna Hedström, and Marina M. C. Höhne. "Evaluate with the Inverse: Efficient Approximation of Latent Explanation Quality Distribution." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 26 (2025): 27258–67. https://doi.org/10.1609/aaai.v39i26.34935.

Abstract:
Obtaining high-quality explanations of a model's output enables developers to identify and correct biases, align the system's behavior with human values, and ensure ethical compliance. Explainable Artificial Intelligence (XAI) practitioners rely on specific measures to gauge the quality of such explanations. These measures assess key attributes, such as how closely an explanation aligns with a model's decision process (faithfulness), how accurately it pinpoints the relevant input features (localization), and its consistency across different cases (robustness). Despite providing valuable information…
46

Castro, Eduardo. "A deductive-nomological model for mathematical scientific explanation." Principia: an international journal of epistemology 24, no. 1 (2020): 1–27. http://dx.doi.org/10.5007/1808-1711.2020v24n1p1.

Abstract:
I propose a deductive-nomological model for mathematical scientific explanation. In this regard, I modify Hempel's deductive-nomological model and test it against some of the following recent paradigmatic examples of the mathematical explanation of empirical facts: the seven bridges of Königsberg, the North American synchronized cicadas, and Hénon-Heiles Hamiltonian systems. I argue that mathematical scientific explanations that invoke laws of nature are qualitative explanations, and ordinary scientific explanations that employ mathematics are quantitative explanations. I analyse the repercussions…
47

Chalyi, Serhii, and Volodymyr Leshchynskyi. "A Method for Evaluating Explanations in an Artificial Intelligence System Using Possibility Theory." Bulletin of National Technical University "KhPI". Series: System Analysis, Control and Information Technologies, no. 2 (10) (December 19, 2023): 95–101. http://dx.doi.org/10.20998/2079-0023.2023.02.14.

Abstract:
The subject of the research is the process of generating explanations for the decision of an artificial intelligence system. Explanations are used to help the user understand the process of reaching the result and to use an intelligent information system more effectively to make practical decisions. The purpose of this paper is to develop a method for evaluating explanations taking into account differences in input data and the corresponding decision of an artificial intelligence system. The solution of this problem makes it possible to evaluate the relevance of the explanation…
48

Ray, Arijit, Yi Yao, Rakesh Kumar, Ajay Divakaran, and Giedrius Burachas. "Can You Explain That? Lucid Explanations Help Human-AI Collaborative Image Retrieval." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 7 (October 28, 2019): 153–61. http://dx.doi.org/10.1609/hcomp.v7i1.5275.

Abstract:
While there have been many proposals on making AI algorithms explainable, few have attempted to evaluate the impact of AI-generated explanations on human performance in conducting human-AI collaborative tasks. To bridge the gap, we propose a Twenty-Questions-style collaborative image retrieval game, Explanation-assisted Guess Which (ExAG), as a method of evaluating the efficacy of explanations (visual evidence or textual justification) in the context of Visual Question Answering (VQA). In our proposed ExAG, a human user needs to guess a secret image picked by the VQA agent by asking natural language…
49

Lai, Chengen, Shengli Song, Shiqi Meng, Jingyang Li, Sitong Yan, and Guangneng Hu. "Towards More Faithful Natural Language Explanation Using Multi-Level Contrastive Learning in VQA." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (2024): 2849–57. http://dx.doi.org/10.1609/aaai.v38i3.28065.

Abstract:
Natural language explanation in visual question answering (VQA-NLE) aims to explain the decision-making process of models by generating natural language sentences to increase users' trust in the black-box systems. Existing post-hoc methods have achieved significant progress in obtaining a plausible explanation. However, such post-hoc explanations are not always aligned with human logical inference, suffering from the issues of: 1) deductive unsatisfiability, the generated explanations do not logically lead to the answer; 2) factual inconsistency, the model falsifies its counterfactual explanation…
50

Halliwell, Nicholas. "Evaluating Explanations of Relational Graph Convolutional Network Link Predictions on Knowledge Graphs." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (2022): 12880–81. http://dx.doi.org/10.1609/aaai.v36i11.21577.

Abstract:
Recently, explanation methods have been proposed to evaluate the predictions of Graph Neural Networks on the task of link prediction. Evaluating explanation quality is difficult without ground truth explanations. This thesis is focused on providing a method, including datasets and scoring metrics, to quantitatively evaluate explanation methods on link prediction on Knowledge Graphs.