Journal articles on the topic 'Explainable AI'

Consult the top 50 journal articles for your research on the topic 'Explainable AI.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1. Storey, Veda C., Roman Lukyanenko, Wolfgang Maass, and Jeffrey Parsons. "Explainable AI." Communications of the ACM 65, no. 4 (2022): 27–29. http://dx.doi.org/10.1145/3490699.

2. Holzinger, Andreas. "Explainable AI (ex-AI)." Informatik-Spektrum 41, no. 2 (2018): 138–43. http://dx.doi.org/10.1007/s00287-018-1102-5.

3. Matsuo, Tatsuru, Masaru Todoriki, and Shin-ichiro Tago. "2. Explainable AI." Journal of The Institute of Image Information and Television Engineers 74, no. 1 (2020): 30–34. http://dx.doi.org/10.3169/itej.74.30.

4. Hind, Michael. "Explaining explainable AI." XRDS: Crossroads, The ACM Magazine for Students 25, no. 3 (2019): 16–19. http://dx.doi.org/10.1145/3313096.

5. Pingel, Johanna. "Making AI Explainable." New Electronics 55, no. 10 (2022): 30–31. http://dx.doi.org/10.12968/s0047-9624(23)60440-7.

6. Qiming, Xu, Feng Zheng, Gong Chenwei, et al. "Applications of Explainable AI in Natural Language Processing." Global Academic Frontiers 2, no. 3 (2024): 51–64. https://doi.org/10.5281/zenodo.12684705.
Abstract: This paper investigates and discusses the applications of explainable AI in natural language processing. It first analyzes the importance and current state of AI in natural language processing, then focuses on the role and advantages of explainable AI technology in this field. It compares explainable AI with traditional AI from various angles and elucidates the unique value of explainable AI in natural language processing. On this basis, suggestions for further improvements and applications of explainable AI are proposed to advance the field of natural language processing. Finally, the potenti…

7. Hafermalz, Ella, and Marleen Huysman. "Please Explain: Key Questions for Explainable AI Research from an Organizational Perspective." Morals & Machines 1, no. 2 (2021): 10–23. http://dx.doi.org/10.5771/2747-5174-2021-2-10.
Abstract: There is growing interest in explanations as an ethical and technical solution to the problem of 'opaque' AI systems. In this essay we point out that technical and ethical approaches to Explainable AI (XAI) have different assumptions and aims. Further, the organizational perspective is missing from this discourse. In response we formulate key questions for explainable AI research from an organizational perspective: 1) Who is the 'user' in Explainable AI? 2) What is the 'purpose' of an explanation in Explainable AI? and 3) Where does an explanation 'reside' in Explainable AI? Our aim is to prom…

8. Shah, Jyoti Kunal. "Explainable AI in Software Engineering: Enhancing Developer-AI Collaboration." American Journal of Engineering and Technology 6, no. 7 (2024): 99–108. https://doi.org/10.37547/tajet/volume06issue07-11.
Abstract: Artificial Intelligence (AI) tools are increasingly integrated into software engineering tasks such as code generation, defect prediction, and project planning. However, widespread adoption is hindered by developers’ skepticism toward opaque AI models that lack transparency. This paper explores the integration of Explainable AI (XAI) into software engineering to foster a “developer-in-the-loop” paradigm that enhances trust, understanding, and collaboration between developers and AI agents. We review existing research on XAI techniques applied to feature planning, debugging, and refactoring, an…

9. Zhang, Jiachi, Wenchao Zhou, and Benjamin E. Ujcich. "Provenance-Enabled Explainable AI." Proceedings of the ACM on Management of Data 2, no. 6 (2024): 1–27. https://doi.org/10.1145/3698826.
Abstract: Machine learning (ML) algorithms have advanced significantly in recent years, progressively evolving into artificial intelligence (AI) agents capable of solving complex, human-like intellectual challenges. Despite the advancements, the interpretability of these sophisticated models lags behind, with many ML architectures remaining "black boxes" that are too intricate and expansive for human interpretation. Recognizing this issue, there has been a revived interest in the field of explainable AI (XAI) aimed at explaining these opaque ML models. However, XAI tools often suffer from being tightly…

10. Sano, Takanori. "Explainable AI in Art." International Symposium on Affective Science and Engineering ISASE2024 (2024): 1–3. http://dx.doi.org/10.5057/isase.2024-c000043.

11. Pazzani, Michael, Severine Soltani, Robert Kaufman, Samson Qian, and Albert Hsiao. "Expert-Informed, User-Centric Explanations for Machine Learning." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (2022): 12280–86. http://dx.doi.org/10.1609/aaai.v36i11.21491.
Abstract: We argue that the dominant approach to explainable AI for explaining image classification, annotating images with heatmaps, provides little value for users unfamiliar with deep learning. We argue that explainable AI for images should produce output like experts produce when communicating with one another, with apprentices, and with novices. We provide an expanded set of goals of explainable AI systems and propose a Turing Test for explainable AI.

12. Alufaisan, Yasmeen, Laura R. Marusich, Jonathan Z. Bakdash, Yan Zhou, and Murat Kantarcioglu. "Does Explainable Artificial Intelligence Improve Human Decision-Making?" Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (2021): 6618–26. http://dx.doi.org/10.1609/aaai.v35i8.16819.
Abstract: Explainable AI provides insights to users into the why for model predictions, offering potential for users to better understand and trust a model, and to recognize and correct AI predictions that are incorrect. Prior research on human and explainable AI interactions has focused on measures such as interpretability, trust, and usability of the explanation. There are mixed findings on whether explainable AI can improve actual human decision-making and the ability to identify the problems with the underlying model. Using real datasets, we compare objective human decision accuracy without AI (control)…

13. Jishnu, Setia. "Explainable AI: Methods and Applications." Explainable AI: Methods and Applications 8, no. 10 (2023): 5. https://doi.org/10.5281/zenodo.10021461.
Abstract: Explainable Artificial Intelligence (XAI) has emerged as a critical area of research, ensuring that AI systems are transparent, interpretable, and accountable. This paper provides a comprehensive overview of various methods and applications of Explainable AI. We delve into the importance of interpretability in AI models, explore different techniques for making complex AI models understandable, and discuss real-world applications where explainability is crucial. Through this paper, I aim to shed light on the advancements in the field of XAI and its potential to bridge the gap between AI's predic…

14. Prentzas, Jim, and Ariadni Binopoulou. "Explainable Artificial Intelligence Approaches in Primary Education: A Review." Electronics 14, no. 11 (2025): 2279. https://doi.org/10.3390/electronics14112279.
Abstract: Artificial intelligence (AI) methods have been integrated in education during the last few decades. Interest in this integration has increased in recent years due to the popularity of AI. The use of explainable AI in educational settings is becoming a research trend. Explainable AI provides insight into the decisions made by AI, increases trust in AI, and enhances the effectiveness of the AI-supported processes. In this context, there is an increasing interest in the integration of AI, and specifically explainable AI, in the education of young children. This paper reviews research regarding ex…

15. Chauhan, Tavishee, and Sheetal Sonawane. "Contemplation of Explainable Artificial Intelligence Techniques." International Journal on Recent and Innovation Trends in Computing and Communication 10, no. 4 (2022): 65–71. http://dx.doi.org/10.17762/ijritcc.v10i4.5538.
Abstract: Machine intelligence and data science are two disciplines that are attempting to develop Artificial Intelligence. Explainable AI is one of the disciplines being investigated, with the goal of improving the transparency of black-box systems. This article aims to help people comprehend the necessity for Explainable AI, as well as the various methodologies used in various areas, all in one place. This study clarified how model interpretability and Explainable AI work together. This paper aims to investigate Explainable artificial intelligence approaches and their applications in multiple domains.

16. Chalamayya, Batchu Veera Venkata Satya. "Unlocking the Potential of Explainable AI in the Paper Manufacturing." Journal of Scientific and Engineering Research 7, no. 5 (2020): 403–6. https://doi.org/10.5281/zenodo.13759268.
Abstract: Explainable AI (XAI) plays a crucial role in manufacturing firms by enhancing transparency, trustworthiness, and decision-making capabilities in various aspects of operations. Explainable AI (XAI) in business refers to the capability of AI systems to provide understandable explanations of their decisions and recommendations. This transparency is crucial in various business applications to build trust, ensure compliance, and facilitate effective decision-making. AI is transforming the paper manufacturing industry by driving operational efficiency, enhancing product quality, ensuring environment…

17. Dikshit, Abhirup, and Biswajeet Pradhan. "Explainable AI in drought forecasting." Machine Learning with Applications 6 (December 2021): 100192. http://dx.doi.org/10.1016/j.mlwa.2021.100192.

18. Bertossi, Leopoldo, and Floris Geerts. "Data Quality and Explainable AI." Journal of Data and Information Quality 12, no. 2 (2020): 1–9. http://dx.doi.org/10.1145/3386687.

19. Chavan, Devang, and Shrihari Padatare. "Explainable AI for News Classification." International Journal for Research in Applied Science and Engineering Technology 12, no. 11 (2024): 2400–2408. https://doi.org/10.22214/ijraset.2024.65670.
Abstract: The proliferation of news content across digital platforms necessitates robust and interpretable machine learning models to classify news into predefined categories effectively. This study investigates the integration of Explainable AI (XAI) techniques within the context of traditional machine learning models, including Naive Bayes, Logistic Regression, and Support Vector Machines (SVM), to achieve interpretable and accurate news classification. Utilizing the News Category Dataset, we preprocess the data to focus on the top 15 categories while addressing class imbalance challenges. M…

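The workflow this entry describes pairs a classical text classifier with a post-hoc explainer. As a rough illustration only (not the authors' code; the documents, categories, and example text below are invented placeholders), a TF-IDF pipeline can be explained with LIME like this:

```python
# Illustrative sketch, not the study's code: LIME explaining a TF-IDF +
# Logistic Regression news classifier. All data below are placeholders.
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["stocks rally as markets rebound", "team wins title in overtime",
         "new vaccine trial shows promise", "parliament passes budget bill"]
labels = ["BUSINESS", "SPORTS", "HEALTH", "POLITICS"]

# Train a simple, inherently interpretable baseline classifier.
pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
pipeline.fit(texts, labels)

# LIME perturbs the input text and fits a local surrogate model to estimate
# which words pushed the prediction toward each category.
explainer = LimeTextExplainer(class_names=sorted(set(labels)))
explanation = explainer.explain_instance(
    "markets slide on rate fears", pipeline.predict_proba, num_features=4)
print(explanation.as_list())  # (word, weight) pairs
```
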
20. Alan Varghese, Jefin Varghese, Jubin Biju, Roshan Thomas, and Merlin Thomas. "Explainable AI in Healthcare Applications." International Research Journal on Advanced Engineering and Management (IRJAEM) 2, no. 12 (2024): 3671–79. https://doi.org/10.47392/irjaem.2024.0545.
Abstract: The entry of artificial intelligence into health care systems brings unprecedented advances in diagnosing, personalized treatment, and predictive analytics. Many of these AI models, especially the deep-learning algorithms, have been referred to as "black boxes" and raise gigantic questions about trust, transparency, and reliability in clinical settings. Therefore, explainable AI answers the challenges by drawing to the fore methodologies that make AI models more interpretable, thereby making them more accepted and usable in the fraternity of health. It engages with XAI in healthcare by scrutin…

21. Hoffman, Robert R., Gary Klein, and Shane T. Mueller. "Explaining Explanation for “Explainable AI”." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 62, no. 1 (2018): 197–201. http://dx.doi.org/10.1177/1541931218621047.
Abstract: What makes for an explanation of “black box” AI systems such as Deep Nets? We reviewed the pertinent literatures on explanation and derived key ideas. This set the stage for our empirical inquiries, which include conceptual cognitive modeling, the analysis of a corpus of cases of "naturalistic explanation" of computational systems, computational cognitive modeling, and the development of measures for performance evaluation. The purpose of our work is to contribute to the program of research on “Explainable AI.” In this report we focus on our initial synthetic modeling activities and the develo…

22. Hagras, Hani. "Toward Human-Understandable, Explainable AI." Computer 51, no. 9 (2018): 28–36. http://dx.doi.org/10.1109/mc.2018.3620965.

23. Andriole, Stephen J., Saeid Abolfazli, and Michalis Feidakis. "Responsible, Explainable, and Emotional AI." IT Professional 24, no. 5 (2022): 16–17. http://dx.doi.org/10.1109/mitp.2022.3211900.

24. Mansoor, Nazneen, and Alexander I. Iliev. "Explainable AI for DeepFake Detection." Applied Sciences 15, no. 2 (2025): 725. https://doi.org/10.3390/app15020725.
Abstract: The surge in technological advancements has resulted in concerns over its misuse in politics and entertainment, making reliable detection methods essential. This study introduces a deepfake detection technique that enhances interpretability using the network dissection algorithm. This research consists of two stages: (1) detection of forged images using advanced convolutional neural networks such as ResNet-50, Inception V3, and VGG-16, and (2) applying the network dissection algorithm to understand the models’ internal decision-making processes. The CNNs’ performance is evaluated through F1-sc…

25. Mutkule, Prasad R., Nilesh P. Sable, Parikshit N. Mahalle, and Gitanjali R. Shinde. "Histopathological parameter and brain tumor mapping using distributed optimizer tuned explainable AI classifier." Journal of Autonomous Intelligence 7, no. 5 (2024): 1617. http://dx.doi.org/10.32629/jai.v7i5.1617.
Abstract: Brain tumors represent a critical and severe challenge worldwide; early and accurate diagnosis is necessary to improve predictions for individuals with brain tumors. Several studies on brain tumor mapping have been conducted recently; however, the methods have some drawbacks, including poor image quality, a lack of data, and limited generalization ability. To tackle these drawbacks, this research presents a distributed optimizer tuned explainable AI classifier model for brain tumor mapping from histopathological images. The foraging gyps africanus optimization enable…

26. de Brito Duarte, Regina, Filipa Correia, Patrícia Arriaga, and Ana Paiva. "AI Trust: Can Explainable AI Enhance Warranted Trust?" Human Behavior and Emerging Technologies 2023 (October 31, 2023): 1–12. http://dx.doi.org/10.1155/2023/4637678.
Abstract: Explainable artificial intelligence (XAI), known to produce explanations so that predictions from AI models can be understood, is commonly used to mitigate possible AI mistrust. The underlying premise is that the explanations of the XAI models enhance AI trust. However, such an increase may depend on many factors. This article examined how trust in an AI recommendation system is affected by the presence of explanations, the performance of the system, and the level of risk. Our experimental study, conducted with 215 participants, has shown that the presence of explanations increases AI trust, b…

27. Raikov, Alexander N. "Subjectivity of Explainable Artificial Intelligence." Russian Journal of Philosophical Sciences 65, no. 1 (2022): 72–90. http://dx.doi.org/10.30727/0235-1188-2022-65-1-72-90.
Abstract: The article addresses the problem of identifying methods to develop the ability of artificial intelligence (AI) systems to provide explanations for their findings. This issue is not new, but, nowadays, the increasing complexity of AI systems is forcing scientists to intensify research in this direction. Modern neural networks contain hundreds of layers of neurons. The number of parameters of these networks reaches trillions, genetic algorithms generate thousands of generations of solutions, and the semantics of AI models become more complicated, going to the quantum and non-local levels. The w…

28. Zednik, Carlos, and Hannes Boelsen. "Scientific Exploration and Explainable Artificial Intelligence." Minds and Machines 32, no. 1 (2022): 219–39. http://dx.doi.org/10.1007/s11023-021-09583-6.
Abstract: Models developed using machine learning are increasingly prevalent in scientific research. At the same time, these models are notoriously opaque. Explainable AI aims to mitigate the impact of opacity by rendering opaque models transparent. More than being just the solution to a problem, however, Explainable AI can also play an invaluable role in scientific exploration. This paper describes how post-hoc analytic techniques from Explainable AI can be used to refine target phenomena in medical science, to identify starting points for future investigations of (potentially) causal relations…

29. Sachin, Samrat Medavarapu. "Demystifying AI: A Comprehensive Review of Explainable AI Techniques and Applications." European Journal of Advances in Engineering and Technology 10, no. 6 (2023): 49–52. https://doi.org/10.5281/zenodo.13627267.
Abstract: Explainable Artificial Intelligence (XAI) seeks to make AI systems more transparent and understandable to users. This review examines the various techniques developed to achieve explainability in AI models and their applications across different domains. We discuss methods such as feature attribution, model simplification, and example-based explanations, highlighting their strengths and limitations. Additionally, we explore the importance of XAI in critical fields like healthcare, finance, and law. The findings underscore the necessity of explainability for trust, accountability, and ethical A…

30. Liu, Yijun. "Explainable artificial intelligence and its practical applications." Applied and Computational Engineering 4, no. 1 (2023): 755–59. http://dx.doi.org/10.54254/2755-2721/4/2023419.
Abstract: With the continuous development of the times, the artificial intelligence industry is also booming, and its presence in various fields has a huge role in promoting social progress and advancing industrial development. Research on it is also in full swing. People are eager to understand the cause-and-effect relationship between the actions performed or the strategies decided based on the black-box model, so that they can learn or judge from another perspective. Thus, Explainable AI is proposed: it is a new generation of AI that allows humans to understand the cause and give them a decision s…

31. Colantonio, Lorenzo, Lucas Equeter, Pierre Dehombreux, and François Ducobu. "Explainable AI for tool condition monitoring using Explainable Boosting Machine." Procedia CIRP 133 (2025): 138–43. https://doi.org/10.1016/j.procir.2025.02.025.

32

soni,, Rajat. "Enhancing Transparency and Accountability in Predictive Maintenance with Explainable AI." INTERANTIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 04 (2024): 1–5. http://dx.doi.org/10.55041/ijsrem32027.

Full text
Abstract:
Predictive maintenance is a critical aspect of industrial operations, enabling proactive identification and mitigation of potential failures in machinery and equipment. However, the widespread adoption of AI-driven predictive maintenance solutions has been hindered by the opaque nature of many machines learning models, raising concerns about transparency, accountability, and trust. This research aims to address these challenges by developing explainable AI techniques for predictive maintenance in industrial systems. By integrating interpretability methods with advanced predictive models, we se
APA, Harvard, Vancouver, ISO, and other styles
33. Kalyanathaya, Krishna P., and Krishna Prasad K. "Novel method for developing explainable machine learning framework using feature neutralization technique." Scientific Temper 15, no. 2 (2024): 2225–30. http://dx.doi.org/10.58414/scientifictemper.2024.15.2.35.
Abstract: The rapid advancement of artificial intelligence (AI) has led to its widespread adoption across various domains. One of the most important challenges faced by AI adoption is to justify the outcome of the AI model. In response, explainable AI (XAI) has emerged as a critical area of research, aiming to enhance transparency and interpretability in AI systems. However, existing XAI methods face several challenges, such as complexity, difficulty in interpretation, limited applicability, and lack of transparency. In this paper, we discuss current challenges using SHAP and LIME metrics being popula…

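For context, the baseline SHAP and LIME usage that this paper critiques looks roughly like the sketch below: generic library calls on synthetic data, not the paper's proposed feature-neutralization method.

```python
# Generic SHAP/LIME baseline on synthetic tabular data; this illustrates the
# methods entry 33 critiques, not its proposed feature-neutralization technique.
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# SHAP: additive, game-theoretic attributions for one prediction.
shap_values = shap.TreeExplainer(model).shap_values(X[:1])

# LIME: a local linear surrogate fitted around the same instance.
lime_explanation = LimeTabularExplainer(X, mode="classification").explain_instance(
    X[0], model.predict_proba, num_features=5)
print(lime_explanation.as_list())
```
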
34. Mishra, Sarbaree. "The Age of Explainable AI: Improving Trust and Transparency in AI Models." International Journal of Science and Research (IJSR) 9, no. 8 (2020): 1603–11. https://doi.org/10.21275/sr20087120519.

35. Burkart, Nadia, Danilo Brajovic, and Marco F. Huber. "Explainable AI: introducing trust and comprehensibility to AI engineering." at - Automatisierungstechnik 70, no. 9 (2022): 787–92. http://dx.doi.org/10.1515/auto-2022-0013.
Abstract: Machine learning (ML) rapidly gains increasing interest due to the continuous improvements in performance. ML is used in many different applications to support human users. The representational power of ML models allows solving difficult tasks, while making them impossible for humans to understand. This leaves room for possible errors and limits the full potential of ML, as it cannot be applied in critical environments. In this paper, we propose employing Explainable AI (xAI) for both model and data set refinement, in order to introduce trust and comprehensibility. Model refinemen…

36. Luthra, Vihaan. "Explainable AI – The Errors, Insights, and Lessons of AI." International Journal of Computer Trends and Technology 70, no. 4 (2022): 19–24. http://dx.doi.org/10.14445/22312803/ijctt-v70i4p103.

37. Kumar, Naveen. "Enhancing Transparency and Trust in Cybersecurity: Developing Explainable AI Models for Threat Detection." International Journal of Scientific Research in Engineering and Management 9, no. 6 (2025): 1–9. https://doi.org/10.55041/ijsrem49406.
Abstract: The increasing reliance on artificial intelligence (AI) in cybersecurity has significantly improved threat detection and response. However, many AI-driven defense mechanisms function as "black boxes," making it difficult for security professionals to interpret their decisions. This lack of transparency reduces trust in AI systems and limits their adoption in critical security operations. Despite advancements in explainable AI (XAI), there is a significant research gap in applying XAI techniques specifically to cybersecurity. This study aims to bridge this gap by developing and evaluat…

38. Tejashwini, Deepa, Maitri Gaonkar, Lakshmi H D, Rosline Mary, and Madhuri J M. "An Explainable AI Model for Diabetic Retinopathy Detection." International Journal of Innovative Research in Advanced Engineering 9, no. 8 (2022): 306–11. http://dx.doi.org/10.26562/ijirae.2022.v0908.28.
Abstract: Diabetes is the most common long-term condition that affects people of all ages due to inadequate insulin production. The appearance of black spots, interpreting eye pictures, and detecting diabetic retinopathy in its early stages have long been a big challenge. The Explainable AI method explains the deep learning model in a way that is understandable by humans, so that its results can be trusted. This is especially important in safety-critical domains like healthcare or security, where it replaces manual processes and supports understanding of the model's function by non-technical domain experts. Explainable…

39. Park, Woosuk. "How to Make AlphaGo’s Children Explainable." Philosophies 7, no. 3 (2022): 55. http://dx.doi.org/10.3390/philosophies7030055.
Abstract: Under the rubric of understanding the problem of explainability of AI in terms of abductive cognition, I propose to review the lessons from AlphaGo and her more powerful successors. As AI players in Baduk (Go, Weiqi) have arrived at superhuman level, there seems to be no hope for understanding the secret of their breathtakingly brilliant moves. Without making AI players explainable in some ways, both human and AI players would be less-than omniscient, if not ignorant, epistemic agents. Are we bound to have less explainable AI Baduk players as they make further progress? I shall show that the r…

40. Sakshi, Sakshi, Sunil Kumar Khatri, and Neeraj Kumar Sharma. "Neutrosophic Meta SHAP and Neutrosophic Meta LIME: An Efficient Framework for Explainable AI in Oral Cancer Detection." International Journal of Neutrosophic Science 23, no. 3 (2024): 373–245. http://dx.doi.org/10.54216/ijns.230328.
Abstract: Among the current generation of researchers, artificial intelligence has played a vital role in various fields, including healthcare. One of the key areas where it has shown enormous potential is in cancer detection and treatment. AI and machine learning algorithms have been applied to analyze large datasets, such as genomic, transcriptomic, and imaging data, to identify patterns and relationships that can help in cancer diagnosis and therapy. However, due to the inherent complexity and heterogeneity of tumors in individual patients, building a diagnostic and therapeutic platform that ca…

41. Kundu, Shinjini. "AI in medicine must be explainable." Nature Medicine 27, no. 8 (2021): 1328. http://dx.doi.org/10.1038/s41591-021-01461-z.

42. Medianovskyi, Kyrylo, and Ahti-Veikko Pietarinen. "On Explainable AI and Abductive Inference." Philosophies 7, no. 2 (2022): 35. http://dx.doi.org/10.3390/philosophies7020035.
Abstract: Modern explainable AI (XAI) methods remain far from providing human-like answers to ‘why’ questions, let alone those that satisfactorily agree with human-level understanding. Instead, the results that such methods provide boil down to sets of causal attributions. Currently, the choice of accepted attributions rests largely, if not solely, on the explainee’s understanding of the quality of explanations. The paper argues that such decisions may be transferred from a human to an XAI agent, provided that its machine-learning (ML) algorithms perform genuinely abductive inferences. The paper outline…

43. Kaur, Jagreet, Suryakant, and Kuldeep Kaur. "Explainable AI in Diabetes Prediction System." Acta Scientific Medical Sciences 5, no. 10 (2021): 131–36. http://dx.doi.org/10.31080/asms.2021.05.1046.

44. Xiao, Guohua. "Explainable AI: Linking Human and Machine." Knowledge Organization 46, no. 5 (2019): 398–99. http://dx.doi.org/10.5771/0943-7444-2019-5-398.

45. Tosun, Akif B., Filippo Pullara, Michael J. Becich, D. Lansing Taylor, Jeffrey L. Fine, and S. Chakra Chennubhotla. "Explainable AI (xAI) for Anatomic Pathology." Advances in Anatomic Pathology 27, no. 4 (2020): 241–50. http://dx.doi.org/10.1097/pap.0000000000000264.

46. Junior, Kamese Jordan, Kouayep Sonia Carole, Tagne Poupi Theodore Armand, Hee-Cheol Kim, and The Alzheimer’s Disease Neuroimaging Initiative. "Alzheimer’s Multiclassification Using Explainable AI Techniques." Applied Sciences 14, no. 18 (2024): 8287. http://dx.doi.org/10.3390/app14188287.
Abstract: In this study, we address the early detection challenges of Alzheimer’s disease (AD) using explainable artificial intelligence (XAI) techniques. AD, characterized by amyloid plaques and tau tangles, leads to cognitive decline and remains hard to diagnose due to genetic and environmental factors. Utilizing deep learning models, we analyzed brain MRI scans from the ADNI database, categorizing them into normal cognition (NC), mild cognitive impairment (MCI), and AD. The ResNet-50 architecture was employed, enhanced by a channel-wise attention mechanism to improve feature extraction. To ensure mod…

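The channel-wise attention mentioned in this entry is commonly realized as a squeeze-and-excitation style block. The minimal PyTorch sketch below uses illustrative module and size choices, not the authors' exact architecture:

```python
# Hedged sketch of a channel-wise attention (squeeze-and-excitation style)
# block like the one entry 46 describes adding to ResNet-50; the class name
# and layer sizes here are illustrative assumptions.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global spatial context
        self.fc = nn.Sequential(             # excite: per-channel gate weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        weights = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * weights                   # reweight the feature maps

feat = torch.randn(2, 256, 14, 14)           # a ResNet-style feature map
print(ChannelAttention(256)(feat).shape)     # torch.Size([2, 256, 14, 14])
```
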
47. Sheh, Raymond, and Isaac Monteath. "Defining Explainable AI for Requirements Analysis." KI - Künstliche Intelligenz 32, no. 4 (2018): 261–66. http://dx.doi.org/10.1007/s13218-018-0559-3.

48. Wolf, Christine T., and Kathryn E. Ringland. "Designing accessible, explainable AI (XAI) experiences." ACM SIGACCESS Accessibility and Computing, no. 125 (March 2, 2020): 1. http://dx.doi.org/10.1145/3386296.3386302.

49. Sarder Abdulla Al Shiam, Md Mahdi Hasan, Md Jubair Pantho, et al. "Credit Risk Prediction Using Explainable AI." Journal of Business and Management Studies 6, no. 2 (2024): 61–66. http://dx.doi.org/10.32996/jbms.2024.6.2.6.
Abstract: Despite advancements in machine-learning prediction techniques, the majority of lenders continue to rely on conventional methods for predicting credit defaults, largely due to their lack of transparency and explainability. This reluctance to embrace newer approaches persists as there is a compelling need for credit default prediction models to be explainable. This study introduces credit default prediction models employing several tree-based ensemble methods, with the most effective model, XGBoost, being further utilized to enhance explainability. We implement SHapley Additive exPlanations (SHAP)…

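The XGBoost-plus-SHAP recipe this entry describes is a common pattern. A minimal sketch on synthetic stand-in data (not the study's credit dataset) might look like:

```python
# Minimal sketch of the XGBoost + SHAP pattern in entry 49; the data are a
# synthetic stand-in for credit records, not the study's dataset.
import shap
import xgboost
from sklearn.datasets import make_classification

# Synthetic "default / no default" data standing in for credit features.
X, y = make_classification(n_samples=500, n_features=8, random_state=42)
model = xgboost.XGBClassifier(n_estimators=100, random_state=42).fit(X, y)

# TreeExplainer computes exact SHAP values efficiently for tree ensembles,
# attributing each prediction to individual features.
shap_values = shap.TreeExplainer(model).shap_values(X)
print(shap_values.shape)  # (n_samples, n_features) attribution matrix
```
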
50. Cavallaro, Massimo, Ed Moran, Benjamin Collyer, Noel D. McCarthy, Christopher Green, and Matt J. Keeling. "Informing antimicrobial stewardship with explainable AI." PLOS Digital Health 2, no. 1 (2023): e0000162. http://dx.doi.org/10.1371/journal.pdig.0000162.
Abstract: The accuracy and flexibility of artificial intelligence (AI) systems often come at the cost of a decreased ability to offer an intuitive explanation of their predictions. This hinders trust and discourages adoption of AI in healthcare, exacerbated by concerns over liabilities and risks to patients’ health in case of misdiagnosis. Providing an explanation for a model’s prediction is possible due to recent advances in the field of interpretable machine learning. We considered a data set of hospital admissions linked to records of antibiotic prescriptions and susceptibilities of bacterial isolate…
