Academic literature on the topic 'Feature explanation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Feature explanation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Feature explanation"

1

Brdnik, Saša, Vili Podgorelec, and Boštjan Šumak. "Assessing Perceived Trust and Satisfaction with Multiple Explanation Techniques in XAI-Enhanced Learning Analytics." Electronics 12, no. 12 (2023): 2594. http://dx.doi.org/10.3390/electronics12122594.

Full text
Abstract:
This study aimed to observe the impact of eight explainable AI (XAI) explanation techniques on user trust and satisfaction in the context of XAI-enhanced learning analytics while comparing two groups of STEM college students based on their Bologna study level, using various established feature relevance techniques, certainty, and comparison explanations. Overall, the students reported the highest trust in local feature explanation in the form of a bar graph. Additionally, master’s students presented with global feature explanations also reported high trust in this form of explanation. The high…
APA, Harvard, Vancouver, ISO, and other styles
2

Chapman-Rounds, Matt, Umang Bhatt, Erik Pazos, Marc-Andre Schulz, and Konstantinos Georgatzis. "FIMAP: Feature Importance by Minimal Adversarial Perturbation." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (2021): 11433–41. http://dx.doi.org/10.1609/aaai.v35i13.17362.

Full text
Abstract:
Instance-based model-agnostic feature importance explanations (LIME, SHAP, L2X) are a popular form of algorithmic transparency. These methods generally return either a weighting or subset of input features as an explanation for the classification of an instance. An alternative literature argues instead that counterfactual instances, which alter the black-box model's classification, provide a more actionable form of explanation. We present Feature Importance by Minimal Adversarial Perturbation (FIMAP), a neural network based approach that unifies feature importance and counterfactual explanations…
APA, Harvard, Vancouver, ISO, and other styles
3

Olatunji, Iyiola E., Mandeep Rathee, Thorben Funke, and Megha Khosla. "Private Graph Extraction via Feature Explanations." Proceedings on Privacy Enhancing Technologies 2023, no. 2 (2023): 59–78. http://dx.doi.org/10.56553/popets-2023-0041.

Full text
Abstract:
Privacy and interpretability are two important ingredients for achieving trustworthy machine learning. We study the interplay of these two aspects in graph machine learning through graph reconstruction attacks. The goal of the adversary here is to reconstruct the graph structure of the training data given access to model explanations. Based on the different kinds of auxiliary information available to the adversary, we propose several graph reconstruction attacks. We show that additional knowledge of post-hoc feature explanations substantially increases the success rate of these attacks. Further…
APA, Harvard, Vancouver, ISO, and other styles
4

Izza, Yacine, Alexey Ignatiev, Peter J. Stuckey, and Joao Marques-Silva. "Delivering Inflated Explanations." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 11 (2024): 12744–53. http://dx.doi.org/10.1609/aaai.v38i11.29170.

Full text
Abstract:
In the quest for Explainable Artificial Intelligence (XAI), one of the questions that frequently arises given a decision made by an AI system is, 'Why was the decision made in this way?' Formal approaches to explainability build a formal model of the AI system and use this to reason about the properties of the system. Given a set of feature values for an instance to be explained, and a resulting decision, a formal abductive explanation is a set of features such that, if they take the given values, the decision will always be the same. This explanation is useful: it shows that only some feature…
APA, Harvard, Vancouver, ISO, and other styles
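The definition in the abstract above, a set of features that, once fixed to their given values, forces the same decision, can be sketched by exhaustive search on a tiny boolean classifier. This is an illustration of the definition only, not the paper's algorithm; all names are invented:

```python
import itertools

def is_sufficient(f, x, S):
    """True if fixing the features in S to their values in x forces f's
    decision, no matter how the remaining boolean features are set."""
    target = f(x)
    free = [i for i in range(len(x)) if i not in S]
    for bits in itertools.product([0, 1], repeat=len(free)):
        z = list(x)
        for i, b in zip(free, bits):
            z[i] = b
        if f(z) != target:
            return False
    return True

def abductive_explanation(f, x):
    """Smallest sufficient feature set, found by brute force
    (exponential in the number of features; tiny models only)."""
    n = len(x)
    for r in range(n + 1):
        for S in itertools.combinations(range(n), r):
            if is_sufficient(f, x, set(S)):
                return set(S)

# Example: a 2-of-3 "majority" classifier; fixing the two 1-valued
# features already determines the decision.
majority = lambda z: int(z[0] + z[1] + z[2] >= 2)
print(abductive_explanation(majority, [1, 1, 0]))  # {0, 1}
```

For real classifiers this enumeration is intractable, which is why formal XAI work like the entry above relies on logic-based reasoning rather than brute force.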
5

An, Shuai, and Yang Cao. "Relative Keys: Putting Feature Explanation into Context." Proceedings of the ACM on Management of Data 2, no. 1 (2024): 1–28. http://dx.doi.org/10.1145/3639263.

Full text
Abstract:
Formal feature explanations strictly maintain perfect conformity but are intractable to compute, while heuristic methods are much faster but can lead to problematic explanations due to a lack of conformity guarantees. We propose relative keys, which have the best of both worlds. Relative keys associate feature explanations with a set of instances as context and warrant perfect conformity over the context, as formal explanations do, whilst being orders of magnitude faster and working for complex black-box models. Based on this, we develop CCE, a prototype that computes explanations with provably bounded…
APA, Harvard, Vancouver, ISO, and other styles
6

AlJalaud, Ebtisam, and Manar Hosny. "Enhancing Explainable Artificial Intelligence: Using Adaptive Feature Weight Genetic Explanation (AFWGE) with Pearson Correlation to Identify Crucial Feature Groups." Mathematics 12, no. 23 (2024): 3727. http://dx.doi.org/10.3390/math12233727.

Full text
Abstract:
The ‘black box’ nature of machine learning (ML) approaches makes it challenging to understand how most artificial intelligence (AI) models make decisions. Explainable AI (XAI) aims to provide analytical techniques to understand the behavior of ML models. XAI utilizes counterfactual explanations that indicate how variations in input features lead to different outputs. However, existing methods must also highlight the importance of features to provide more actionable explanations that would aid in the identification of key drivers behind model decisions—and, hence, more reliable interpretations…
APA, Harvard, Vancouver, ISO, and other styles
7

Utkin, Lev, and Andrei Konstantinov. "Ensembles of Random SHAPs." Algorithms 15, no. 11 (2022): 431. http://dx.doi.org/10.3390/a15110431.

Full text
Abstract:
The ensemble-based modifications of the well-known SHapley Additive exPlanations (SHAP) method for the local explanation of a black-box model are proposed. The modifications aim to simplify the SHAP, which is computationally expensive when there is a large number of features. The main idea behind the proposed modifications is to approximate the SHAP by an ensemble of SHAPs with a smaller number of features. According to the first modification, called the ER-SHAP, several features are randomly selected many times from the feature set, and the Shapley values for the features are computed by means…
APA, Harvard, Vancouver, ISO, and other styles
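The ER-SHAP idea described above, averaging Shapley values computed on small random feature subsets, can be sketched as follows. This is a toy illustration under assumed details (exact Shapley values on each subset, off-subset features left at the instance's values, a single baseline); the function names are invented, not the paper's code:

```python
import itertools
import random
from math import factorial

def shapley_exact(f, x, baseline, features):
    """Exact Shapley values over the given feature subset; features
    outside the subset stay at x and act as fixed context."""
    n = len(features)
    phi = {j: 0.0 for j in features}

    def value(S):
        z = list(x)
        for j in features:
            if j not in S:
                z[j] = baseline[j]
        return f(z)

    for j in features:
        others = [k for k in features if k != j]
        for r in range(n):
            for S in itertools.combinations(others, r):
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[j] += w * (value(set(S) | {j}) - value(S))
    return phi

def er_shap(f, x, baseline, n_features, k=3, m=20, seed=0):
    """ER-SHAP-style estimate: average exact Shapley values computed
    on m random subsets of k features each."""
    rng = random.Random(seed)
    totals = [0.0] * n_features
    counts = [0] * n_features
    for _ in range(m):
        subset = rng.sample(range(n_features), k)
        phi = shapley_exact(f, x, baseline, subset)
        for j, v in phi.items():
            totals[j] += v
            counts[j] += 1
    return [t / c if c else 0.0 for t, c in zip(totals, counts)]

f = lambda z: 2 * z[0] + z[1] - z[2]  # feature 3 is irrelevant
phi = er_shap(f, [1, 1, 1, 1], [0, 0, 0, 0], n_features=4)
# For an additive model the estimates recover the coefficients, ~[2, 1, -1, 0]
```

The cost saving comes from replacing one exact computation over all features (exponential in their number) with many cheap computations over k features each.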
8

Lin, Ming-Yen, I.-Chen Hsieh, and Sue-Chen Hsush. "Enhancing Personalized Explainable Recommendations with Transformer Architecture and Feature Handling." Electronics 14, no. 5 (2025): 998. https://doi.org/10.3390/electronics14050998.

Full text
Abstract:
The advancement of explainable recommendations aims to improve the quality of textual explanations for recommendations. Traditional methods primarily used Recurrent Neural Networks (RNNs) or their variants to generate personalized explanations. However, recent research has focused on leveraging Transformer architectures to enhance explanations by extracting user reviews and incorporating features from interacted items. Nevertheless, previous studies have failed to fully exploit the relationship between reviews and user ratings to generate more personalized explanations. In this paper, we propose…
APA, Harvard, Vancouver, ISO, and other styles
9

Beckh, Katharina, Joann Rachel Jacob, Adrian Seeliger, Stefan Rüping, and Najmeh Mousavi Nejad. "Limitations of Feature Attribution in Long Text Classification of Standards." Proceedings of the AAAI Symposium Series 4, no. 1 (2024): 10–17. http://dx.doi.org/10.1609/aaaiss.v4i1.31765.

Full text
Abstract:
Managing complex AI systems requires insight into a model's decision-making processes. Understanding how these systems arrive at their conclusions is essential for ensuring reliability. In the field of explainable natural language processing, many approaches have been developed and evaluated. However, experimental analysis of explainability for text classification has been largely constrained to short text and binary classification. In this applied work, we study explainability for a real-world task where the goal is to assess the technological suitability of standards. This prototypical use case…
APA, Harvard, Vancouver, ISO, and other styles
10

Long, Marilee. "Scientific explanation in US newspaper science stories." Public Understanding of Science 4, no. 2 (1995): 119–30. http://dx.doi.org/10.1088/0963-6625/4/2/002.

Full text
Abstract:
Mass media are important sources of science information for many adults. However, this study, which reports a content analysis of science stories in 100 US newspapers, found that while 70 newspapers carried science stories, the majority of these stories contained little scientific explanation. Ten percent or less of content consisted of elucidating explanations (definitions of terms) and/or quasi-scientific explanations (explications of relationships among scientific concepts). The study also investigated the effect of production-based variables on scientific explanation. Stories in feature and science…
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Feature explanation"

1

Kane, Mark Vinton. "Transient subsurface features in Mars Express radar data: an explanation based on ionospheric holes." Thesis, University of Iowa, 2012. https://ir.uiowa.edu/etd/3477.

Full text
Abstract:
This study was motivated by the discovery of semi-circular subsurface craters, or basins, at multiple locations on Mars by the MARSIS (Mars Advanced Radar for Subsurface and Ionospheric Sounding) radar sounder on board the Mars Express spacecraft. The nature of these subsurface structures was called into question when it was realized that some of the radar observations were not repeatable on subsequent passes over the same region. If they were true geological structures, such as ancient craters buried by dust, one would expect to always see them when the spacecraft passes over these regions…
APA, Harvard, Vancouver, ISO, and other styles
2

Oates, Martin J. "Observations and explanations of characteristic features in the performance profiles of evolutionary algorithms." Thesis, University of Reading, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.409051.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Southard, Katelyn M. "Exploring Features of Expertise and Knowledge Building among Undergraduate Students in Molecular and Cellular Biology." Diss., The University of Arizona, 2016. http://hdl.handle.net/10150/612137.

Full text
Abstract:
Experts in the field of molecular and cellular biology (MCB) use domain-specific reasoning strategies to navigate the unique complexities of the phenomena they study and creatively explore problems in their fields. One primary goal of instruction in undergraduate MCB is to foster the development of these domain-specific reasoning strategies among students. However, decades of evidence-based research and many national calls for undergraduate instructional reform have demonstrated that teaching and learning complex fields like MCB is difficult for instructors and learners alike. Therefore, how…
APA, Harvard, Vancouver, ISO, and other styles
4

Bai, Yun. "Textual Data for Advanced Modelling of Electricity Demand and Price Dynamics." Electronic Thesis or Diss., Université Paris sciences et lettres, 2025. http://www.theses.fr/2025UPSLM004.

Full text
Abstract:
Despite the growing maturity of today's electricity systems and markets, they still face many uncertain risks, such as sudden changes in electricity demand due to special events. In this context, the use of machine learning and natural language processing (NLP) technologies for integrated modelling to address the uncertainties of electricity systems holds significant potential. This thesis revisits traditional electricity forecasting models, exploring the potential of using…
APA, Harvard, Vancouver, ISO, and other styles
5

Lim, Shiau Hong. "Explanation-based feature construction /." 2009. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3363019.

Full text
Abstract:
Thesis (Ph.D.)--University of Illinois at Urbana-Champaign, 2009. Source: Dissertation Abstracts International, Volume: 70-06, Section: B, page: 3605. Adviser: Gerald DeJong. Includes bibliographical references (leaves 107-112). Available on microfilm from ProQuest Information and Learning.
APA, Harvard, Vancouver, ISO, and other styles
6

Kuo, Chia-Yu, and 郭家諭. "Explainable Risk Prediction System for Child Abuse Event by Individual Feature Attribution and Counterfactual Explanation." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/yp2nr3.

Full text
Abstract:
Master's thesis, Institute of Statistics, National Chiao Tung University (academic year 107). There is always a trade-off between performance and interpretability. Complex models, such as ensembles, can achieve outstanding prediction accuracy; however, they are not easy to interpret. Understanding why a model made a prediction helps us trust the black-box model and helps users make decisions. This work uses techniques from explainable machine learning to develop an appropriate model for empirical data with high predictive accuracy and good interpretability. In this study, we use data provided by the Taipei City Center for…
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Feature explanation"

1

Market, Caribbean Common. Rules of origin of the Caribbean Common Market: An explanation of its scope and operational features. Caribbean Community Secretariat, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Secretariat, Caribbean Community, ed. Common external tariff of the Caribbean Common Market: An explanation of its scope and operational features. CARICOM, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Nimersheim, Jack. Microsoft's Word 6.0: Features Microsoft's Windows 95 : explanations you can understand and use! Tips that save you time and aggravation! Expert help in plain English! WorldComm, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Barkovich, Aleksandr, and Taisa Filimonova. Web design. INFRA-M Academic Publishing LLC., 2025. https://doi.org/10.12737/2116156.

Full text
Abstract:
The tutorial covers important aspects of web design and allows you to acquire professional competence in the field of creating and developing web resources. The features of this manual are its versatility and adaptability. All the necessary basic information is presented concisely, and detailed practice-oriented material with the necessary explanations is presented systematically. It will be useful both for specialists who already have some experience in the field of information technology and for beginners. It allows you to master a problem area both under the guidance of a teacher and independently…
APA, Harvard, Vancouver, ISO, and other styles
5

Ebrey, David. Identity and Explanation in the Euthyphro. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198805762.003.0003.

Full text
Abstract:
According to many interpreters, Socrates in the Euthyphro thinks that an answer to ‘what is the holy?’ should pick out some feature that is prior to being holy. While this is a powerful way to think of answers to the ‘what is it?’ question, one that Aristotle develops, I argue that the Euthyphro provides an important alternative to this Aristotelian account. Instead, an answer to ‘what is the holy?’ should pick out precisely being holy, not some feature prior to it. I begin by showing how this interpretation allows for a straightforward reading of a key argument: Socrates’ refutation of Euthyphro…
APA, Harvard, Vancouver, ISO, and other styles
6

Colyvan, Mark, John Cusbert, and Kelvin McQueen. Two Flavours of Mathematical Explanation. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198777946.003.0012.

Full text
Abstract:
A proof of a mathematical theorem tells us that the theorem is true (or should be accepted), but some proofs go further and tell us why the theorem is true (or should be accepted). That is, some, but not all, proofs are explanatory. Call this intra-mathematical explanation; it is to be contrasted with extra-mathematical explanation, where mathematics explains things external to mathematics. This chapter focuses on the intra-mathematical case. The authors consider a couple of examples of explanatory proofs from contemporary mathematics. They determine whether these proofs share some common…
APA, Harvard, Vancouver, ISO, and other styles
7

Alvarez, Maria. Desires, Dispositions and the Explanation of Action. Oxford University Press, 2017. http://dx.doi.org/10.1093/acprof:oso/9780199370962.003.0005.

Full text
Abstract:
We often explain human actions by reference to the desires of the person whose actions we are explaining: “Jane is studying law because she wants to become a judge.” But how do desires explain actions? A widely accepted view is that desires are dispositional states that are manifested in behavior. Accordingly, desires explain actions as ordinary physical dispositions, such as fragility or conductivity, explain their manifestations, namely causally. This paper argues that desires, unlike ordinary physical dispositions, are “manifestation-dependent dispositions”: dispositions whose attribution…
APA, Harvard, Vancouver, ISO, and other styles
8

Glennan, Stuart. Explanation. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198779711.003.0008.

Full text
Abstract:
This concluding chapter offers an abstract account of explanation as such, arguing that explanations involve the construction of models that always show what the targets of explanation depend upon (dependence), and sometimes show how multiple targets depend upon similar things (unification). It then suggests, in light of this account, how Salmon’s three conceptions of scientific explanation are not alternative conceptions, but are in fact complementary aspects of successful explanation. Explanations of natural phenomena are then divided into three kinds—bare causal, mechanistic, and non-causal…
APA, Harvard, Vancouver, ISO, and other styles
9

Robins, Sarah K., and Carl F. Craver. Biological Clocks: Explaining with Models of Mechanisms. Edited by John Bickle. Oxford University Press, 2009. http://dx.doi.org/10.1093/oxfordhb/9780195304787.003.0003.

Full text
Abstract:
This article examines the concept of mechanistic explanation by considering the mechanism of circadian rhythm or biological clocks. It provides an account of mechanistic explanation and some common failures of mechanistic explanation and discusses the sense in which mechanistic explanations typically span multiple levels. The article suggests that models that describe mechanisms are more useful for the purposes of manipulation and control than are scientific models that do not describe mechanisms. It comments on the criticism that the mechanistic explanation is far too simple to fully express…
APA, Harvard, Vancouver, ISO, and other styles
10

Montgomery, Derek E. Situational features influencing mentalistic explanations of action. 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Feature explanation"

1

Rakesh, Deepak Kumar, and Prasanta K. Jana. "Feature Explanation Algorithms for Outliers." In Artificial Intelligence and Technologies. Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-6448-9_25.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Huang, Xuanxiang, Martin C. Cooper, Antonio Morgado, Jordi Planes, and Joao Marques-Silva. "Feature Necessity & Relevancy in ML Classifier Explanations." In Tools and Algorithms for the Construction and Analysis of Systems. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-30823-9_9.

Full text
Abstract:
Given a machine learning (ML) model and a prediction, explanations can be defined as sets of features which are sufficient for the prediction. In some applications, and besides asking for an explanation, it is also critical to understand whether sensitive features can occur in some explanation, or whether a non-interesting feature must occur in all explanations. This paper starts by relating such queries respectively with the problems of relevancy and necessity in logic-based abduction. The paper then proves membership and hardness results for several families of ML classifiers. Afterwards…
APA, Harvard, Vancouver, ISO, and other styles
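The relevancy and necessity queries described above can be made concrete by brute force on a tiny boolean classifier: enumerate all subset-minimal sufficient feature sets, then check whether a feature occurs in some of them (relevant) or in every one (necessary). A sketch under those assumed definitions, not the paper's algorithm; all names are invented:

```python
import itertools

def minimal_sufficient_sets(f, x):
    """All subset-minimal feature sets whose fixed values force f's decision
    for x, with the remaining boolean features free (exhaustive; tiny models)."""
    n, target = len(x), f(x)

    def sufficient(S):
        free = [i for i in range(n) if i not in S]
        for bits in itertools.product([0, 1], repeat=len(free)):
            z = list(x)
            for i, b in zip(free, bits):
                z[i] = b
            if f(z) != target:
                return False
        return True

    suff = [set(S) for r in range(n + 1)
            for S in itertools.combinations(range(n), r) if sufficient(set(S))]
    return [S for S in suff if not any(T < S for T in suff)]

def relevant_and_necessary(f, x):
    """A feature is relevant if it occurs in some minimal explanation,
    necessary if it occurs in all of them."""
    expls = minimal_sufficient_sets(f, x)
    relevant = set().union(*expls)
    necessary = set.intersection(*expls)
    return relevant, necessary

# Example: 2-of-3 majority on an all-ones instance; every feature is
# relevant, but no single feature is necessary.
majority = lambda z: int(z[0] + z[1] + z[2] >= 2)
print(relevant_and_necessary(majority, [1, 1, 1]))  # ({0, 1, 2}, set())
```

The paper's point is precisely that such queries cannot be answered this way at scale; it characterizes their complexity for different classifier families instead.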
3

Langer, Markus, and Isabel Valera. "Leveraging Actionable Explanations to Improve People’s Reactions to AI-Based Decisions." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-73741-1_18.

Full text
Abstract:
This paper explores the role of explanations in mitigating negative reactions among people affected by AI-based decisions. While existing research focuses primarily on user perspectives, this study addresses the unique needs of people affected by AI-based decisions. Drawing on justice theory and the algorithmic recourse literature, we propose that actionability is a primary need of people affected by AI-based decisions. Thus, we expected that more actionable explanations – that is, explanations that guide people on how to address negative outcomes – would elicit more favorable reactions…
APA, Harvard, Vancouver, ISO, and other styles
4

Hurtado, Remigio, and Eduardo Ayora. "Intelligent System for Predicting Bank Policy Acceptance by Ensemble Machine Learning and Model Explanation." In Lecture Notes in Networks and Systems. Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-87065-1_41.

Full text
Abstract:
Efficient management of financial resources is crucial for the sustainability and competitiveness of banks, particularly in optimizing term deposit subscriptions to maintain liquidity. This paper introduces an advanced intelligent system for predicting term deposit acceptance using ensemble machine learning techniques. Our approach combines Random Forest and K-Nearest Neighbors (KNN) models to enhance prediction accuracy while providing clear explanations. The system follows the CRISP-DM methodology, which includes detailed phases of data preparation, modeling, fine-tuning, and model…
APA, Harvard, Vancouver, ISO, and other styles
5

Bourroux, Luca, Jenny Benois-Pineau, Romain Bourqui, and Romain Giot. "Multi Layered Feature Explanation Method for Convolutional Neural Networks." In Pattern Recognition and Artificial Intelligence. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-09037-0_49.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Fan, Yongdong, Qiong Li, Haokun Mao, and Xingyuan Song. "Feature Attribution-Based Explanation Comparison of Magnetoencephalography Decoding Models." In Lecture Notes in Computer Science. Springer Nature Singapore, 2025. https://doi.org/10.1007/978-981-95-0030-7_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Charachon, Martin, Paul-Henry Cournède, Céline Hudelot, and Roberto Ardon. "Visual Explanation by Unifying Adversarial Generation and Feature Importance Attributions." In Interpretability of Machine Intelligence in Medical Image Computing, and Topological Data Analysis and Its Applications for Medical Data. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-87444-5_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Cappuccio, Eleonora, Daniele Fadda, Rosa Lanzilotti, and Salvatore Rinzivillo. "FIPER: A Visual-Based Explanation Combining Rules and Feature Importance." In Communications in Computer and Information Science. Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-74633-8_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Pi, Yulu. "INFEATURE: An Interactive Feature-Based-Explanation Framework for Non-technical Users." In Artificial Intelligence in HCI. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-35891-3_16.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Yang, Zhao, Yuanzhe Zhang, Zhongtao Jiang, Yiming Ju, Jun Zhao, and Kang Liu. "Can We Really Trust Explanations? Evaluating the Stability of Feature Attribution Explanation Methods via Adversarial Attack." In Lecture Notes in Computer Science. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-18315-7_18.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Feature explanation"

1

Zhou, Yan, Xiaodong Li, Kedong Zhu, Feng Li, Huibiao Yang, and Yong Ren. "Temporal Feature Impact Explanation via Dynamic Sliding Window Sampling." In 2025 5th International Conference on Consumer Electronics and Computer Engineering (ICCECE). IEEE, 2025. https://doi.org/10.1109/iccece65250.2025.10985638.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Vu, Kiana, Phung Lai, and Truc Nguyen. "XSub: Explanation-Driven Adversarial Attack against Blackbox Classifiers via Feature Substitution." In 2024 IEEE International Conference on Big Data (BigData). IEEE, 2024. https://doi.org/10.1109/bigdata62323.2024.10825935.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Prajapati, Yogendra Narayan, and Dev Baloni. "Detailed Explanation: Optimizing COVID-19 CT-Scan Classification with Feature Engineering and Genetic Algorithm." In 2024 1st International Conference on Advanced Computing and Emerging Technologies (ACET). IEEE, 2024. http://dx.doi.org/10.1109/acet61898.2024.10730057.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

VS, Balaji, Anirudh Ganapathy PS, and Sangeetha K. "Forecasting and Analysing Cyber Threats with Graph Neural Networks and Gradient Based Explanation for Feature Impacts." In 2024 Global Conference on Communications and Information Technologies (GCCIT). IEEE, 2024. https://doi.org/10.1109/gccit63234.2024.10861933.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Sun, Jingyi, Pepa Atanasova, and Isabelle Augenstein. "Evaluating Input Feature Explanations through a Unified Diagnostic Evaluation Framework." In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers). Association for Computational Linguistics, 2025. https://doi.org/10.18653/v1/2025.naacl-long.530.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Bhatt, Umang, Adrian Weller, and José M. F. Moura. "Evaluating and Aggregating Feature-based Model Explanations." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/417.

Full text
Abstract:
A feature-based model explanation denotes how much each input feature contributes to a model's output for a given data point. As the number of proposed explanation functions grows, we lack quantitative evaluation criteria to help practitioners know when to use which explanation function. This paper proposes quantitative evaluation criteria for feature-based explanations: low sensitivity, high faithfulness, and low complexity. We devise a framework for aggregating explanation functions. We develop a procedure for learning an aggregate explanation function with lower complexity and then derive a…
APA, Harvard, Vancouver, ISO, and other styles
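One of the criteria named above, faithfulness, is commonly operationalized as the correlation between attributions and the output change when each feature is ablated. A toy version of that idea follows; this is one common formulation under assumed details (single-feature ablation to a baseline), not necessarily the paper's exact metric:

```python
def pearson(a, b):
    """Plain Pearson correlation, stdlib-free."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    sa = sum((u - ma) ** 2 for u in a) ** 0.5
    sb = sum((v - mb) ** 2 for v in b) ** 0.5
    return cov / (sa * sb)

def ablation_faithfulness(f, x, baseline, attributions):
    """Correlate each feature's attribution with the drop in f(x) when
    that feature alone is reset to its baseline value."""
    fx = f(x)
    drops = []
    for j in range(len(x)):
        z = list(x)
        z[j] = baseline[j]
        drops.append(fx - f(z))
    return pearson(attributions, drops)

# Example: for a linear model, attributions equal to the coefficients
# are perfectly faithful under this metric.
f = lambda z: 2 * z[0] + z[1] - z[2]
print(ablation_faithfulness(f, [1, 1, 1], [0, 0, 0], [2, 1, -1]))  # ~1.0
```

A higher score means the attribution ranking agrees with what actually moves the model's output, which is the intuition the paper's faithfulness criterion formalizes.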
7

Stergiou, Alexandros, Georgios Kapidis, Grigorios Kalliatakis, Christos Chrysoulas, Ronald Poppe, and Remco Veltkamp. "Class Feature Pyramids for Video Explanation." In 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW). IEEE, 2019. http://dx.doi.org/10.1109/iccvw.2019.00524.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Zhai, Xukai, Renze Hu, and Zhishuai Yin. "Feature Explanation for Robust Trajectory Prediction." In 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2023. http://dx.doi.org/10.1109/iros55552.2023.10341825.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Raghavan, Anand, and Thomas F. Stahovich. "Computing Design Rationales by Interpreting Simulations." In ASME 1998 Design Engineering Technical Conferences. American Society of Mechanical Engineers, 1998. http://dx.doi.org/10.1115/detc98/dtm-5652.

Full text
Abstract:
We describe an approach for automatically computing a class of design rationales. Our focus is computing the purposes of the geometric features on the parts of a device. We first simulate the device with the feature in question removed and compare this to a simulation of the nominal device. The differences in the simulations are indicative of the behaviors that the feature ultimately causes. We then use fundamental principles of mechanics to construct a causal explanation that links the feature to these behaviors. This explanation constitutes one of the rationales for the feature. We…
APA, Harvard, Vancouver, ISO, and other styles
10

El Shawi, Radwa, and Mouaz Al-Mallah. "Interpretable Local Concept-based Explanation with Human Feedback to Predict All-cause Mortality (Extended Abstract)." In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/774.

Full text
Abstract:
Machine learning models are incorporated in different fields and disciplines, some of which require high accountability and transparency, for example, the healthcare sector. A widely used category of explanation techniques attempts to explain models' predictions by quantifying the importance score of each input feature. However, summarizing such scores to provide human-interpretable explanations is challenging. Another category of explanation techniques focuses on learning a domain representation in terms of high-level human-understandable concepts and then utilizing them to explain predictions…
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Feature explanation"

1

Lalisse, Matthias. Measuring the Impact of Campaign Finance on Congressional Voting: A Machine Learning Approach. Institute for New Economic Thinking Working Paper Series, 2022. http://dx.doi.org/10.36687/inetwp178.

Full text
Abstract:
How much does money drive legislative outcomes in the United States? In this article, we use aggregated campaign finance data as well as a Transformer based text embedding model to predict roll call votes for legislation in the US Congress with more than 90% accuracy. In a series of model comparisons in which the input feature sets are varied, we investigate the extent to which campaign finance is predictive of voting behavior in comparison with variables like partisan affiliation. We find that the financial interests backing a legislator’s campaigns are independently predictive in both chambers…
2

Schulz, Jan, Daniel Mayerhoffer, and Anna Gebhard. A Network-Based Explanation of Perceived Inequality. Otto-Friedrich-Universität, 2021. http://dx.doi.org/10.20378/irb-49393.

Full text
Abstract:
Across income groups and countries, the public perception of economic inequality and many other macroeconomic variables such as inflation or unemployment rates is spectacularly wrong. These misperceptions have far-reaching consequences, as it is perceived inequality, not actual inequality, that informs redistributive preferences. The prevalence of this phenomenon is independent of social class and welfare regime, which suggests the existence of a common mechanism behind public perceptions. We propose a network-based explanation of perceived inequality building on recent advances in random geometri
3

Blundell, S. Tutorial : the DEM Breakline and Differencing Analysis Tool—step-by-step workflows and procedures for effective gridded DEM analysis. Engineer Research and Development Center (U.S.), 2022. http://dx.doi.org/10.21079/11681/46085.

Full text
Abstract:
The DEM Breakline and Differencing Analysis Tool is the result of a multi-year research effort in the analysis of digital elevation models (DEMs) and the extraction of features associated with breaklines identified on the DEM by numerical analysis. Developed in the ENVI/IDL image processing application, the tool is designed to serve as an aid to research in the investigation of DEMs by taking advantage of local variation in the height. A set of specific workflow exercises is described as applied to a diverse set of four sample DEMs. These workflows instruct the user in applying the tool to ext
4

Mayerhoffer, Daniel, Moritz Schulz, Simon Scheller, and Jan Schulz-Gebhard. Networks of Polarisation : A Generative Mechanism. Otto-Friedrich-Universität Bamberg, 2025. https://doi.org/10.20378/irb-108470.

Full text
Abstract:
This paper presents a generative algorithm for simulating network polarisation based on attitudinal homophily, i.e., the tendency to connect to others with similar attitudes as oneself. To do so, it applies the notion of preferential attachment to node properties other than degree, aiding intuitive communication within and beyond the network science community. The algorithm works with one or more flexibly weighted attitude dimensions and heterogeneous populations. The generated networks commonly share features of real-world social networks such as (weak) small-worldness. They can contribute to h
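The mechanism described here (preferential attachment driven by attitude similarity rather than degree) can be sketched in a few lines. The similarity kernel, parameter values, and single attitude dimension below are illustrative assumptions, not the paper's specification.

```python
# Hedged sketch: grow a network where each new node attaches to m existing
# nodes with probability proportional to attitudinal similarity (homophily),
# in place of degree-based preferential attachment. All parameters are toy.
import random

random.seed(1)

def attitude_network(n=50, m=3):
    attitudes = [random.uniform(-1, 1) for _ in range(n)]  # one attitude axis
    edges = set()
    for new in range(m, n):
        # Closer attitudes -> larger attachment weight.
        weights = [1.0 / (1e-6 + abs(attitudes[new] - attitudes[old]))
                   for old in range(new)]
        targets = set()
        while len(targets) < m:
            targets.add(random.choices(range(new), weights=weights, k=1)[0])
        edges |= {(old, new) for old in targets}
    return attitudes, edges

attitudes, edges = attitude_network()
# Homophily check: linked pairs should be closer in attitude than the
# uniform-random baseline gap of 2/3.
linked_gap = sum(abs(attitudes[a] - attitudes[b]) for a, b in edges) / len(edges)
print(f"mean attitude gap on edges: {linked_gap:.2f}")
```

Swapping the `1/distance` kernel or adding further weighted attitude dimensions changes the strength of polarisation without altering the growth rule, which is the flexibility the abstract highlights.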
5

Ilgenfritz, Pedro. Guide Me Without Touching My Hand: Reflections on the Dramaturgical Development of the Devised-theatre Show One by One. Unitec ePress, 2016. http://dx.doi.org/10.34074/ocds.038.

Full text
Abstract:
This essay is a reflection on some aspects of dramaturgy observed during the creation and development of One by One, a silent tragicomedy designed by the Auckland company, LAB Theatre, in 2011 and restaged in 2013. The emphasis of the essay is on pedagogical aspects at the core of the company’s work, as they inform the creative process and lead to the blending of the actor’s function into that of the dramaturg. The following discussion makes apparent the fact that this process of hybridisation, made possible by implementing features of devised theatre, emancipates the actor and brings improvis
6

Yatsymirska, Mariya. MODERN MEDIA TEXT: POLITICAL NARRATIVES, MEANINGS AND SENSES, EMOTIONAL MARKERS. Ivan Franko National University of Lviv, 2022. http://dx.doi.org/10.30970/vjo.2022.51.11411.

Full text
Abstract:
The article examines modern media texts in the field of political journalism; the role of information narratives and emotional markers in media doctrine is clarified; verbal expression of rational meanings in the articles of famous Ukrainian analysts is shown. Popular theories of emotions in the process of cognition are considered, their relationship with the author’s personality, reader psychology and gonzo journalism is shown. Since the media text, in contrast to the text, is a product of social communication, the main narrative is information with the intention of influencing public opinion