To view other types of publications on this topic, follow this link: Explainable artificial intelligence.

Journal articles on the topic "Explainable artificial intelligence"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the top 50 journal articles for research on the topic "Explainable artificial intelligence."

Next to every work in the list of references you will find an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, when these details are available in the metadata.

Browse journal articles across many disciplines and organize your bibliography correctly.

1

Amitha, T., P. Shobana, M. Jayashree, and R. Rajalakshmi. "Explainable Artificial Intelligence for Safely Health Care." International Journal of Science and Research (IJSR) 14, no. 1 (2025): 1155–60. https://doi.org/10.21275/sr25124103755.

2

Ridley, Michael. "Explainable Artificial Intelligence." Ethics of Artificial Intelligence, no. 299 (September 19, 2019): 28–46. http://dx.doi.org/10.29242/rli.299.3.

3

Rousseau, Axel-Jan, Melvin Geubbelmans, Dirk Valkenborg, and Tomasz Burzykowski. "Explainable artificial intelligence." American Journal of Orthodontics and Dentofacial Orthopedics 165, no. 4 (2024): 491–94. http://dx.doi.org/10.1016/j.ajodo.2024.01.006.

4

Gunning, David, Mark Stefik, Jaesik Choi, Timothy Miller, Simone Stumpf, and Guang-Zhong Yang. "XAI—Explainable artificial intelligence." Science Robotics 4, no. 37 (2019): eaay7120. http://dx.doi.org/10.1126/scirobotics.aay7120.

5

Sewada, Ranu, Ashwani Jangid, Piyush Kumar, and Neha Mishra. "Explainable Artificial Intelligence (XAI)." Journal of Nonlinear Analysis and Optimization 13, no. 01 (2023): 41–47. http://dx.doi.org/10.36893/jnao.2022.v13i02.041-047.

Abstract:
Explainable Artificial Intelligence (XAI) has emerged as a critical facet in the realm of machine learning and artificial intelligence, responding to the increasing complexity of models, particularly deep neural networks, and the subsequent need for transparent decision making processes. This research paper delves into the essence of XAI, unraveling its significance across diverse domains such as healthcare, finance, and criminal justice. As a countermeasure to the opacity of intricate models, the paper explores various XAI methods and techniques, including LIME and SHAP, weighing their interp
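The abstract above names LIME among the surveyed XAI techniques. As a minimal sketch of the underlying idea (a hypothetical black-box model and plain NumPy, not code from the paper): LIME samples perturbations around one instance, weights them by proximity, and fits a weighted linear surrogate whose coefficients serve as the local explanation.

```python
import numpy as np

# Hypothetical black-box model standing in for a trained model's score.
def black_box(X):
    return X[:, 0] ** 2 + 3.0 * X[:, 1]

rng = np.random.default_rng(0)
x0 = np.array([1.0, 2.0])  # the instance whose prediction we explain

# LIME idea: perturb around x0, query the black box, weight samples by
# proximity to x0, and fit a weighted linear surrogate model.
Z = x0 + 0.1 * rng.normal(size=(500, 2))
y = black_box(Z)
w = np.exp(-np.sum((Z - x0) ** 2, axis=1) / 0.01)  # RBF proximity kernel

# Weighted least squares for [intercept, coef_0, coef_1].
A = np.hstack([np.ones((len(Z), 1)), Z])
sw = np.sqrt(w)
coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)

# The local slopes approximate the true gradient at x0, i.e. roughly
# (2 * x0[0], 3.0) for this toy model.
print(coef[1], coef[2])
```

The surrogate's coefficients, not the black box itself, are what a LIME-style explanation reports to the user.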
6

Suresh, S. E., and K. Venkateswara Reddy. "Explainable Artificial Intelligence Model for Predictive Maintenance in Smart Agricultural Facilities." International Journal of Research Publication and Reviews 6, no. 5 (2025): 11806–8. https://doi.org/10.55248/gengpi.6.0525.18125.

7

Chauhan, Tavishee, and Sheetal Sonawane. "Contemplation of Explainable Artificial Intelligence Techniques." International Journal on Recent and Innovation Trends in Computing and Communication 10, no. 4 (2022): 65–71. http://dx.doi.org/10.17762/ijritcc.v10i4.5538.

Abstract:
Machine intelligence and data science are two disciplines that are attempting to develop Artificial Intelligence. Explainable AI is one of the disciplines being investigated, with the goal of improving the transparency of black-box systems. This article aims to help people comprehend the necessity for Explainable AI, as well as the various methodologies used in various areas, all in one place. This study clarified how model interpretability and Explainable AI work together. This paper aims to investigate Explainable artificial intelligence approaches and their applications in multiple domains.
8

Moosavi, Sajad, Maryam Farajzadeh-Zanjani, Roozbeh Razavi-Far, Vasile Palade, and Mehrdad Saif. "Explainable AI in Manufacturing and Industrial Cyber–Physical Systems: A Survey." Electronics 13, no. 17 (2024): 3497. http://dx.doi.org/10.3390/electronics13173497.

Abstract:
This survey explores applications of explainable artificial intelligence in manufacturing and industrial cyber–physical systems. As technological advancements continue to integrate artificial intelligence into critical infrastructure and industrial processes, the necessity for clear and understandable intelligent models becomes crucial. Explainable artificial intelligence techniques play a pivotal role in enhancing the trustworthiness and reliability of intelligent systems applied to industrial systems, ensuring human operators can comprehend and validate the decisions made by these intelligen
9

Abdelmonem, Ahmed, and Nehal N. Mostafa. "Interpretable Machine Learning Fusion and Data Analytics Models for Anomaly Detection." Fusion: Practice and Applications 3, no. 1 (2021): 54–69. http://dx.doi.org/10.54216/fpa.030104.

Abstract:
Explainable artificial intelligence received great research attention in the past few years during the widespread of Black-Box techniques in sensitive fields such as medical care, self-driving cars, etc. Artificial intelligence needs explainable methods to discover model biases. Explainable artificial intelligence will lead to obtaining fairness and Transparency in the model. Making artificial intelligence models explainable and interpretable is challenging when implementing black-box models. Because of the inherent limitations of collecting data in its raw form, data fusion has become a popul
10

Darwish, Ashraf. "Explainable Artificial Intelligence: A New Era of Artificial Intelligence." Digital Technologies Research and Applications 1, no. 1 (2022): 1. http://dx.doi.org/10.54963/dtra.v1i1.29.

Abstract:
Recently, Artificial Intelligence (AI) has emerged as an emerging field with advanced methodologies and innovative applications. With the rapid advancement of AI concepts and technologies, there has been a recent trend to add interpretability and explainability to the paradigm. With the increasing complexity of AI applications, their relationship with data analytics, and the ubiquity of demanding applications in a variety of critical applications such as medicine, defense, justice and autonomous vehicles, there is an increasing need to associate the results with sound explanations to domain exper
11

Sharma, Deepak Kumar, Jahanavi Mishra, Aeshit Singh, Raghav Govil, Gautam Srivastava, and Jerry Chun-Wei Lin. "Explainable Artificial Intelligence for Cybersecurity." Computers and Electrical Engineering 103 (October 2022): 108356. http://dx.doi.org/10.1016/j.compeleceng.2022.108356.

12

Karpov, O. E., D. A. Andrikov, V. A. Maksimenko, and A. E. Hramov. "EXPLAINABLE ARTIFICIAL INTELLIGENCE FOR MEDICINE." Vrach i informacionnye tehnologii, no. 2 (2022): 4–11. http://dx.doi.org/10.25881/18110193_2022_2_4.

13

Raikov, Alexander N. "Subjectivity of Explainable Artificial Intelligence." Russian Journal of Philosophical Sciences 65, no. 1 (2022): 72–90. http://dx.doi.org/10.30727/0235-1188-2022-65-1-72-90.

Abstract:
The article addresses the problem of identifying methods to develop the ability of artificial intelligence (AI) systems to provide explanations for their findings. This issue is not new, but, nowadays, the increasing complexity of AI systems is forcing scientists to intensify research in this direction. Modern neural networks contain hundreds of layers of neurons. The number of parameters of these networks reaches trillions, genetic algorithms generate thousands of generations of solutions, and the semantics of AI models become more complicated, going to the quantum and non-local levels. The w
14

Kalmykov, Vyacheslav L., and Lev V. Kalmykov. "Towards eXplicitly eXplainable Artificial Intelligence." Information Fusion 123 (November 2025): 103352. https://doi.org/10.1016/j.inffus.2025.103352.

15

Zednik, Carlos, and Hannes Boelsen. "Scientific Exploration and Explainable Artificial Intelligence." Minds and Machines 32, no. 1 (2022): 219–39. http://dx.doi.org/10.1007/s11023-021-09583-6.

Abstract:
Models developed using machine learning are increasingly prevalent in scientific research. At the same time, these models are notoriously opaque. Explainable AI aims to mitigate the impact of opacity by rendering opaque models transparent. More than being just the solution to a problem, however, Explainable AI can also play an invaluable role in scientific exploration. This paper describes how post-hoc analytic techniques from Explainable AI can be used to refine target phenomena in medical science, to identify starting points for future investigations of (potentially) causal relations
16

Shevchenko, Alexey V., and Alexey N. Averkin. "PROTECTING ARTIFICIAL INTELLIGENCE AND EXPLAINABLE ARTIFICIAL INTELLIGENCE FROM ADVERSARIAL ATTACKS." SOFT MEASUREMENTS AND COMPUTING 12, no. 85 (2024): 103–13. https://doi.org/10.36871/2618-9976.2024.12.009.

Abstract:
The paper examines attacks on the input of a neural network (AI and XAI) that lead to loss of functionality or the state of security of the neural network. Modern approaches and methods for protecting neural networks from competitive attacks, as private attacks on the input, are presented.
17

Liu, Yijun. "Explainable artificial intelligence and its practical applications." Applied and Computational Engineering 4, no. 1 (2023): 755–59. http://dx.doi.org/10.54254/2755-2721/4/2023419.

Abstract:
With the continuous development of the times, the artificial intelligence industry is also booming, and its presence in various fields has a huge role in promoting social progress and advancing industrial development. Research on it is also in full swing. People are eager to understand the cause-and-effect relationship between the actions performed or the strategies decided based on the black-box model, so that they can learn or judge from another perspective. Thus the Explainable AI is proposed, it is a new generation of AI that allows humans to understand the cause and give them a decision s
18

Allen, Ben. "Discovering Themes in Deep Brain Stimulation Research Using Explainable Artificial Intelligence." Biomedicines 11, no. 3 (2023): 771. http://dx.doi.org/10.3390/biomedicines11030771.

Abstract:
Deep brain stimulation is a treatment that controls symptoms by changing brain activity. The complexity of how to best treat brain dysfunction with deep brain stimulation has spawned research into artificial intelligence approaches. Machine learning is a subset of artificial intelligence that uses computers to learn patterns in data and has many healthcare applications, such as an aid in diagnosis, personalized medicine, and clinical decision support. Yet, how machine learning models make decisions is often opaque. The spirit of explainable artificial intelligence is to use machine learning mo
19

Dikmen, Murat, and Catherine Burns. "Abstraction Hierarchy Based Explainable Artificial Intelligence." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 64, no. 1 (2020): 319–23. http://dx.doi.org/10.1177/1071181320641073.

Abstract:
This work explores the application of Cognitive Work Analysis (CWA) in the context of Explainable Artificial Intelligence (XAI). We built an AI system using a loan evaluation data set and applied an XAI technique to obtain data-driven explanations for predictions. Using an Abstraction Hierarchy (AH), we generated domain knowledge-based explanations to accompany data-driven explanations. An online experiment was conducted to test the usefulness of AH-based explanations. Participants read financial profiles of loan applicants, the AI system’s loan approval/rejection decisions, and explanations t
20

Gunning, David, and David Aha. "DARPA’s Explainable Artificial Intelligence (XAI) Program." AI Magazine 40, no. 2 (2019): 44–58. http://dx.doi.org/10.1609/aimag.v40i2.2850.

Abstract:
Dramatic success in machine learning has led to a new wave of AI applications (for example, transportation, security, medicine, finance, defense) that offer tremendous benefits but cannot explain their decisions and actions to human users. DARPA’s explainable artificial intelligence (XAI) program endeavors to create AI systems whose learned models and decisions can be understood and appropriately trusted by end users. Realizing this goal requires methods for learning more explainable models, designing effective explanation interfaces, and understanding the psychologic requirements for effectiv
21

Jiménez-Luna, José, Francesca Grisoni, and Gisbert Schneider. "Drug discovery with explainable artificial intelligence." Nature Machine Intelligence 2, no. 10 (2020): 573–84. http://dx.doi.org/10.1038/s42256-020-00236-4.

22

Miller, Tim. ""But why?" Understanding explainable artificial intelligence." XRDS: Crossroads, The ACM Magazine for Students 25, no. 3 (2019): 20–25. http://dx.doi.org/10.1145/3313107.

23

Owens, Emer, Barry Sheehan, Martin Mullins, Martin Cunneen, Juliane Ressel, and German Castignani. "Explainable Artificial Intelligence (XAI) in Insurance." Risks 10, no. 12 (2022): 230. http://dx.doi.org/10.3390/risks10120230.

Abstract:
Explainable Artificial Intelligence (XAI) models allow for a more transparent and understandable relationship between humans and machines. The insurance industry represents a fundamental opportunity to demonstrate the potential of XAI, with the industry’s vast stores of sensitive data on policyholders and centrality in societal progress and innovation. This paper analyses current Artificial Intelligence (AI) applications in insurance industry practices and insurance research to assess their degree of explainability. Using search terms representative of (X)AI applications in insurance, 419 orig
24

Knox, Andrew T., Yasmin Khakoo, and Grace Gombolay. "Explainable Artificial Intelligence: Point and Counterpoint." Pediatric Neurology 148 (November 2023): 54–55. http://dx.doi.org/10.1016/j.pediatrneurol.2023.08.010.

25

Aysel, Halil Ibrahim, Xiaohao Cai, and Adam Prugel-Bennett. "Explainable Artificial Intelligence: Advancements and Limitations." Applied Sciences 15, no. 13 (2025): 7261. https://doi.org/10.3390/app15137261.

Abstract:
Explainable artificial intelligence (XAI) has emerged as a crucial field for understanding and interpreting the decisions of complex machine learning models, particularly deep neural networks. This review presents a structured overview of XAI methodologies, encompassing a diverse range of techniques designed to provide explainability at different levels of abstraction. We cover pixel-level explanation strategies such as saliency maps, perturbation-based methods and gradient-based visualisations, as well as concept-based approaches that align model behaviour with human-understandable semantics.
26

Alufaisan, Yasmeen, Laura R. Marusich, Jonathan Z. Bakdash, Yan Zhou, and Murat Kantarcioglu. "Does Explainable Artificial Intelligence Improve Human Decision-Making?" Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (2021): 6618–26. http://dx.doi.org/10.1609/aaai.v35i8.16819.

Abstract:
Explainable AI provides insights to users into the why for model predictions, offering potential for users to better understand and trust a model, and to recognize and correct AI predictions that are incorrect. Prior research on human and explainable AI interactions has focused on measures such as interpretability, trust, and usability of the explanation. There are mixed findings whether explainable AI can improve actual human decision-making and the ability to identify the problems with the underlying model. Using real datasets, we compare objective human decision accuracy without AI (control
27

Kim, Raehwan, Woohyun Kwon, Jiyun Kim, and Teyoun Kang. "Development of a Deep Learning-Based Grad-Shafranov Equation Solver and Verification with Explainable Artificial Intelligence." Korean Science Education Society for the Gifted 17, no. 1 (2025): 96–110. https://doi.org/10.29306/jseg.2025.17.1.96.

Abstract:
The purpose of this study is to develop a deep learning-based Grad-Shafranov equation solver applicable to various tokamak systems. After generating a large training dataset using the FreeGS library, a preprocessing step was performed to exclude data of unstable plasma states to improve the ease of training. As a result, the developed Keras-based deep learning model was able to predict the equilibrium state distribution from variables such as X-point position, plasma current, and pressure more than 600 times faster than the numerical method. By modifying the loss function of the deep learnin
28

Miller, Tim, Rosina Weber, and Daniele Magazenni. "Report on the 2019 IJCAI Explainable Artificial Intelligence Workshop." AI Magazine 41, no. 1 (2020): 103–5. http://dx.doi.org/10.1609/aimag.v41i1.5302.

Abstract:
This article reports on the Explainable Artificial Intelligence Workshop, held within the International Joint Conferences on Artificial Intelligence 2019 Workshop Program in Macau, August 11, 2019. With over 160 registered attendees, the workshop was the largest workshop at the conference. It featured an invited talk and 23 oral presentations, and closed with an audience discussion about where explainable artificial intelligence research stands.
29

Zahoor, Kanwal, Narmeen Zakaria Bawany, and Tehreem Qamar. "Evaluating text classification with explainable artificial intelligence." IAES International Journal of Artificial Intelligence (IJ-AI) 13, no. 1 (2024): 278. http://dx.doi.org/10.11591/ijai.v13.i1.pp278-286.

Abstract:
Nowadays, artificial intelligence (AI) in general and machine learning techniques in particular has been widely employed in automated systems. Increasing complexity of these machine learning based systems have consequently given rise to blackbox models that are typically not understandable or explainable by humans. There is a need to understand the logic and reason behind these automated decision-making black box models as they are involved in our day-to-day activities such as driving, facial recognition identity systems, online recruitment. Explainable artificial inte
30

Zahoor, Kanwal, Narmeen Zakaria Bawany, and Tehreem Qamar. "Evaluating text classification with explainable artificial intelligence." IAES International Journal of Artificial Intelligence (IJ-AI) 13, no. 1 (2024): 278–86. https://doi.org/10.11591/ijai.v13.i1.pp278-286.

Abstract:
Nowadays, artificial intelligence (AI) in general and machine learning techniques in particular has been widely employed in automated systems. Increasing complexity of these machine learning based systems have consequently given rise to blackbox models that are typically not understandable or explainable by humans. There is a need to understand the logic and reason behind these automated decision-making black box models as they are involved in our day-to-day activities such as driving, facial recognition identity systems, online recruitment. Explainable artificial intelligence (XAI) is an evol
31

Abbass, Hussein, Keeley Crockett, Jonathan Garibaldi, Alexander Gegov, Uzay Kaymak, and Joao Miguel C. Sousa. "Editorial: From Explainable Artificial Intelligence (xAI) to Understandable Artificial Intelligence (uAI)." IEEE Transactions on Artificial Intelligence 5, no. 9 (2024): 4310–14. http://dx.doi.org/10.1109/tai.2024.3439048.

32

Prentzas, Jim, and Ariadni Binopoulou. "Explainable Artificial Intelligence Approaches in Primary Education: A Review." Electronics 14, no. 11 (2025): 2279. https://doi.org/10.3390/electronics14112279.

Abstract:
Artificial intelligence (AI) methods have been integrated in education during the last few decades. Interest in this integration has increased in recent years due to the popularity of AI. The use of explainable AI in educational settings is becoming a research trend. Explainable AI provides insight into the decisions made by AI, increases trust in AI, and enhances the effectiveness of the AI-supported processes. In this context, there is an increasing interest in the integration of AI, and specifically explainable AI, in the education of young children. This paper reviews research regarding ex
33

Ihejirika, Chidimma Judith. "Explainable Artificial Intelligence in Deep Learning Models for Transparent Decision-Making in High-Stakes Applications." International Journal of Research Publication and Reviews 6, no. 2 (2025): 1924–40. https://doi.org/10.55248/gengpi.6.0225.0905.

34

Liu, Peng, Lizhe Wang, and Jun Li. "Unlocking the Potential of Explainable Artificial Intelligence in Remote Sensing Big Data." Remote Sensing 15, no. 23 (2023): 5448. http://dx.doi.org/10.3390/rs15235448.

35

Webb-Robertson, Bobbie-Jo M. "Explainable Artificial Intelligence in Endocrinological Medical Research." Journal of Clinical Endocrinology & Metabolism 106, no. 7 (2021): e2809–e2810. http://dx.doi.org/10.1210/clinem/dgab237.

36

Patil, Shruti, Vijayakumar Varadarajan, Siddiqui Mohd Mazhar, et al. "Explainable Artificial Intelligence for Intrusion Detection System." Electronics 11, no. 19 (2022): 3079. http://dx.doi.org/10.3390/electronics11193079.

Abstract:
Intrusion detection systems are widely utilized in the cyber security field, to prevent and mitigate threats. Intrusion detection systems (IDS) help to keep threats and vulnerabilities out of computer networks. To develop effective intrusion detection systems, a range of machine learning methods are available. Machine learning ensemble methods have a well-proven track record when it comes to learning. Using ensemble methods of machine learning, this paper proposes an innovative intrusion detection system. To improve classification accuracy and eliminate false positives, features from the CICID
37

Babaei, Golnoosh, Paolo Giudici, and Emanuela Raffinetti. "Explainable artificial intelligence for crypto asset allocation." Finance Research Letters 47 (June 2022): 102941. http://dx.doi.org/10.1016/j.frl.2022.102941.

38

Miller, Tim, Robert Hoffman, Ofra Amir, and Andreas Holzinger. "Special issue on Explainable Artificial Intelligence (XAI)." Artificial Intelligence 307 (June 2022): 103705. http://dx.doi.org/10.1016/j.artint.2022.103705.

39

Javaid, Kumail, Ayesha Siddiqa, Syed Abbas Zilqurnain Naqvi, et al. "Explainable Artificial Intelligence Solution for Online Retail." Computers, Materials & Continua 71, no. 3 (2022): 4425–42. http://dx.doi.org/10.32604/cmc.2022.022984.

40

Alonso-Moral, Jose Maria, Corrado Mencar, and Hisao Ishibuchi. "Explainable and Trustworthy Artificial Intelligence [Guest Editorial]." IEEE Computational Intelligence Magazine 17, no. 1 (2022): 14–15. http://dx.doi.org/10.1109/mci.2021.3129953.

41

Han, Juhee, and Younghoon Lee. "Explainable Artificial Intelligence-Based Competitive Factor Identification." ACM Transactions on Knowledge Discovery from Data 16, no. 1 (2021): 1–11. http://dx.doi.org/10.1145/3451529.

Abstract:
Competitor analysis is an essential component of corporate strategy, providing both offensive and defensive strategic contexts to identify opportunities and threats. The rapid development of social media has recently led to several methodologies and frameworks facilitating competitor analysis through online reviews. Existing studies only focused on detecting comparative sentences in review comments or utilized low-performance models. However, this study proposes a novel approach to identifying the competitive factors using a recent explainable artificial intelligence approach at the comprehens
42

Sandu, Marian Gabriel, and Stefan Trausan-Matu. "Explainable Artificial Intelligence in Natural Language Processing." International Journal of User-System Interaction 14, no. 2 (2021): 68–84. http://dx.doi.org/10.37789/ijusi.2021.14.2.2.

43

Do, Synho. "Explainable & Safe Artificial Intelligence in Radiology." Journal of the Korean Society of Radiology 85, no. 5 (2024): 834. http://dx.doi.org/10.3348/jksr.2024.0118.

44

M N., Sowmiya, Jaya Sri S., Deepshika S., and Hanushya Devi G. "Credit Risk Analysis using Explainable Artificial Intelligence." Journal of Soft Computing Paradigm 6, no. 3 (2024): 272–83. http://dx.doi.org/10.36548/jscp.2024.3.004.

Abstract:
The proposed research focuses on enhancing the interpretability of risk evaluation in credit approvals within the banking sector. This work employs LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations) to provide explanations for individual predictions: LIME approximates the model locally with an interpretable model, while SHAP offers insights into the contribution of each feature to the prediction through both global and local explanations. The research integrates gradient boosting algorithms (XGBoost, LightGBM) and Random Forest with these Explainabl
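The SHAP attributions described in this abstract rest on Shapley values: each feature's average marginal contribution to a prediction, taken over all orders in which features can be revealed. A minimal brute-force sketch, using a hypothetical two-feature credit model (names, baseline, and coefficients invented for illustration, not taken from the paper):

```python
from itertools import permutations
from math import factorial

# Hypothetical linear credit-scoring model (invented for illustration).
def model(income, debt):
    return 0.5 * income - 0.8 * debt

baseline = {"income": 40.0, "debt": 10.0}   # background/reference applicant
instance = {"income": 60.0, "debt": 30.0}   # applicant being explained

def value(present):
    """Model output with only `present` features set to the applicant's
    values; absent features are held at the baseline."""
    z = {f: instance[f] if f in present else baseline[f] for f in baseline}
    return model(z["income"], z["debt"])

# Exact Shapley values: average each feature's marginal contribution
# over every ordering in which features can be revealed.
features = list(baseline)
phi = {f: 0.0 for f in features}
for order in permutations(features):
    present = set()
    for f in order:
        phi[f] += value(present | {f}) - value(present)
        present.add(f)
phi = {f: v / factorial(len(features)) for f, v in phi.items()}

# For a linear model this recovers coefficient * (instance - baseline).
print(phi)  # {'income': 10.0, 'debt': -16.0}
```

The attributions sum to the difference between the explained prediction and the baseline prediction, which is the "efficiency" property SHAP relies on; production SHAP libraries approximate this average rather than enumerating all orderings.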
45

Gregoriades, Andreas, and Christos Themistocleous. "Improving Crowdfunding Decisions Using Explainable Artificial Intelligence." Sustainability 17, no. 4 (2025): 1361. https://doi.org/10.3390/su17041361.

Abstract:
This paper investigates points of vulnerability in the decisions made by backers and campaigners in crowdfund pledges in an attempt to facilitate a sustainable entrepreneurial ecosystem by increasing the rate of good projects being funded. In doing so, this research examines factors that contribute to the success or failure of crowdfunding campaign pledges using eXplainable AI methods (SHapley Additive exPlanations and Counterfactual Explanations). A dataset of completed Kickstarter campaigns was used to train two binary classifiers. The first model used textual features from the campaigns’ de
46

Rosenberg, Gili, John Kyle Brubaker, Martin J. A. Schuetz, et al. "Explainable Artificial Intelligence Using Expressive Boolean Formulas." Machine Learning and Knowledge Extraction 5, no. 4 (2023): 1760–95. http://dx.doi.org/10.3390/make5040086.

Abstract:
We propose and implement an interpretable machine learning classification model for Explainable AI (XAI) based on expressive Boolean formulas. Potential applications include credit scoring and diagnosis of medical conditions. The Boolean formula defines a rule with tunable complexity (or interpretability) according to which input data are classified. Such a formula can include any operator that can be applied to one or more Boolean variables, thus providing higher expressivity compared to more rigid rule- and tree-based approaches. The classifier is trained using native local optimization tech
47

Son, Yeongyeong, Yewon Shin, and Sunyoung Kwon. "Explainable Artificial Intelligence in Molecular Graph Classification." Journal of KIISE 51, no. 2 (2024): 157–64. http://dx.doi.org/10.5626/jok.2024.51.2.157.

48

Kumar R S, Vinod, Bushara A R, Abubeker K M, et al. "Explainable artificial intelligence for detecting lung cancer." Informatyka, Automatyka, Pomiary w Gospodarce i Ochronie Środowiska 15, no. 1 (2025): 125–30. https://doi.org/10.35784/iapgos.6626.

Abstract:
Early and reliable diagnosis of lung cancer is a major medical objective. This study makes a groundbreaking contribution to the field of smart healthcare by employing the capabilities of Explainable Artificial Intelligence (AI) and the Grad-CAM (Gradient-weighted Class Activation Mapping) visualization technique to improve lung cancer detection. The LIDC-IDRI dataset is used in the study to create a deep-learning model that can distinguish between benign and malignant lung diseases based on image features. This study demonstrates the importance of the Grad-CAM technique by highlighting the par
49

Kauffmann, Jacob, Jonas Dippel, Lukas Ruff, Wojciech Samek, Klaus-Robert Müller, and Grégoire Montavon. "Explainable AI reveals Clever Hans effects in unsupervised learning models." Nature Machine Intelligence, March 17, 2025. https://doi.org/10.1038/s42256-025-01000-2.

Abstract:
Unsupervised learning has become an essential building block of artificial intelligence systems. The representations it produces, for example, in foundation models, are critical to a wide variety of downstream applications. It is therefore important to carefully examine unsupervised models to ensure not only that they produce accurate predictions on the available data but also that these accurate predictions do not arise from a Clever Hans (CH) effect. Here, using specially developed explainable artificial intelligence techniques and applying them to popular representation learning and
50

Su, Shimiao, Taekyu Ahn, and Yun Yang. "Enhancing Durability of Organic–Inorganic Hybrid Perovskite Solar Cells in High‐Temperature Environments: Exploring Thermal Stability, Molecular Structures, and AI Applications." Advanced Functional Materials, November 19, 2024. http://dx.doi.org/10.1002/adfm.202408480.

Abstract:
The commercialization of perovskite solar cells (PSCs), as an emerging industry, still faces competition from other renewable energy technologies in the market. It is essential to ensure that PSCs are durable and stable in high‐temperature environments in order to meet the varied market demands of hot regions or seasons. The influence of high temperatures on the PSCs is complex, encompassing factors such as lattice strain, crystal phase changes, the creation of defects, and ion movement. Furthermore, it intensifies lattice vibrations and phonon scattering, which in turn impacts the mig