To view other types of publications on this topic, follow the link: Explainable intelligence models.

Journal articles on the topic "Explainable intelligence models"

Format your sources in APA, MLA, Chicago, Harvard, and other citation styles

Check out the top 50 journal articles for research on the topic "Explainable intelligence models".

Next to every work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, provided the relevant details are available in the metadata.

Browse journal articles across a wide range of disciplines and compile your bibliography correctly.

1

Abdelmonem, Ahmed, and Nehal N. Mostafa. "Interpretable Machine Learning Fusion and Data Analytics Models for Anomaly Detection." Fusion: Practice and Applications 3, no. 1 (2021): 54–69. http://dx.doi.org/10.54216/fpa.030104.

Abstract:
Explainable artificial intelligence has received great research attention in the past few years amid the widespread use of black-box techniques in sensitive fields such as medical care and self-driving cars. Artificial intelligence needs explainable methods to discover model biases, and explainability leads to fairness and transparency in the model. Making artificial intelligence models explainable and interpretable is challenging when black-box models are involved. Because of the inherent limitations of collecting data in its raw form, data fusion has become a popular method for dealing with such data and acquiring more trustworthy, helpful, and precise insights. Compared to more traditional data fusion methods, machine learning's capacity to learn automatically from experience, without explicit programming, significantly improves fusion's computational and predictive power. This paper comprehensively reviews explainable artificial intelligence methods for anomaly detection. We propose the criteria a transparent model should satisfy in order to assess data fusion analytics techniques, and we define the evaluation metrics used in explainable artificial intelligence. We describe several applications of explainable artificial intelligence and provide a case study of anomaly detection with machine learning fusion. Finally, we discuss the key challenges and future directions in explainable artificial intelligence.
2

Zednik, Carlos, and Hannes Boelsen. "Scientific Exploration and Explainable Artificial Intelligence." Minds and Machines 32, no. 1 (March 2022): 219–39. http://dx.doi.org/10.1007/s11023-021-09583-6.

Abstract:
Models developed using machine learning are increasingly prevalent in scientific research. At the same time, these models are notoriously opaque. Explainable AI aims to mitigate the impact of opacity by rendering opaque models transparent. More than being just the solution to a problem, however, Explainable AI can also play an invaluable role in scientific exploration. This paper describes how post-hoc analytic techniques from Explainable AI can be used to refine target phenomena in medical science, to identify starting points for future investigations of (potentially) causal relationships, and to generate possible explanations of target phenomena in cognitive science. In this way, this paper describes how Explainable AI—over and above machine learning itself—contributes to the efficiency and scope of data-driven scientific research.
3

Raikov, Alexander N. "Subjectivity of Explainable Artificial Intelligence." Russian Journal of Philosophical Sciences 65, no. 1 (June 25, 2022): 72–90. http://dx.doi.org/10.30727/0235-1188-2022-65-1-72-90.

Abstract:
The article addresses the problem of identifying methods to develop the ability of artificial intelligence (AI) systems to provide explanations for their findings. This issue is not new, but, nowadays, the increasing complexity of AI systems is forcing scientists to intensify research in this direction. Modern neural networks contain hundreds of layers of neurons. The number of parameters of these networks reaches trillions, genetic algorithms generate thousands of generations of solutions, and the semantics of AI models become more complicated, going to the quantum and non-local levels. The world’s leading companies are investing heavily in creating explainable AI (XAI). However, the result is still unsatisfactory: a person often cannot understand the “explanations” of AI because the latter makes decisions differently than a person, and perhaps because a good explanation is impossible within the framework of the classical AI paradigm. AI faced a similar problem 40 years ago when expert systems contained only a few hundred logical production rules. The problem was then solved by complicating the logic and building added knowledge bases to explain the conclusions given by AI. At present, other approaches are needed, primarily those that consider the external environment and the subjectivity of AI systems. This work focuses on solving this problem by immersing AI models in the social and economic environment, building ontologies of this environment, taking into account a user profile and creating conditions for purposeful convergence of AI solutions and conclusions to user-friendly goals.
4

Althoff, Daniel, Helizani Couto Bazame, and Jessica Garcia Nascimento. "Untangling hybrid hydrological models with explainable artificial intelligence." H2Open Journal 4, no. 1 (January 1, 2021): 13–28. http://dx.doi.org/10.2166/h2oj.2021.066.

Abstract:
Hydrological models are valuable tools for developing streamflow predictions in unmonitored catchments to increase our understanding of hydrological processes. A recent effort has been made in the development of hybrid (conceptual/machine learning) models that can preserve some of the hydrological processes represented by conceptual models and can improve streamflow predictions. However, these studies have not explored how the data-driven component of hybrid models resolved runoff routing. In this study, explainable artificial intelligence (XAI) techniques are used to turn a ‘black-box’ model into a ‘glass box’ model. The hybrid models reduced the root-mean-square error of the simulated streamflow values by approximately 27, 50, and 24% for stations 17120000, 27380000, and 33680000, respectively, relative to the traditional method. XAI techniques helped unveil the importance of accounting for soil moisture in hydrological models. Differing from purely data-driven hydrological models, the inclusion of the production storage in the proposed hybrid model, which is responsible for estimating the water balance, reduced the short- and long-term dependencies of input variables for streamflow prediction. In addition, soil moisture controlled water percolation, which was the main predictor of streamflow. This finding is because soil moisture controls the underlying mechanisms of groundwater flow into river streams.
5

Gunning, David, and David Aha. "DARPA’s Explainable Artificial Intelligence (XAI) Program." AI Magazine 40, no. 2 (June 24, 2019): 44–58. http://dx.doi.org/10.1609/aimag.v40i2.2850.

Abstract:
Dramatic success in machine learning has led to a new wave of AI applications (for example, transportation, security, medicine, finance, defense) that offer tremendous benefits but cannot explain their decisions and actions to human users. DARPA’s explainable artificial intelligence (XAI) program endeavors to create AI systems whose learned models and decisions can be understood and appropriately trusted by end users. Realizing this goal requires methods for learning more explainable models, designing effective explanation interfaces, and understanding the psychologic requirements for effective explanations. The XAI developer teams are addressing the first two challenges by creating ML techniques and developing principles, strategies, and human-computer interaction techniques for generating effective explanations. Another XAI team is addressing the third challenge by summarizing, extending, and applying psychologic theories of explanation to help the XAI evaluator define a suitable evaluation framework, which the developer teams will use to test their systems. The XAI teams completed the first of this 4-year program in May 2018. In a series of ongoing evaluations, the developer teams are assessing how well their XAI systems’ explanations improve user understanding, user trust, and user task performance.
6

Owens, Emer, Barry Sheehan, Martin Mullins, Martin Cunneen, Juliane Ressel, and German Castignani. "Explainable Artificial Intelligence (XAI) in Insurance." Risks 10, no. 12 (December 1, 2022): 230. http://dx.doi.org/10.3390/risks10120230.

Abstract:
Explainable Artificial Intelligence (XAI) models allow for a more transparent and understandable relationship between humans and machines. The insurance industry represents a fundamental opportunity to demonstrate the potential of XAI, with the industry’s vast stores of sensitive data on policyholders and centrality in societal progress and innovation. This paper analyses current Artificial Intelligence (AI) applications in insurance industry practices and insurance research to assess their degree of explainability. Using search terms representative of (X)AI applications in insurance, 419 original research articles were screened from IEEE Xplore, ACM Digital Library, Scopus, Web of Science and Business Source Complete and EconLit. The resulting 103 articles (between the years 2000–2021) representing the current state-of-the-art of XAI in insurance literature are analysed and classified, highlighting the prevalence of XAI methods at the various stages of the insurance value chain. The study finds that XAI methods are particularly prevalent in claims management, underwriting and actuarial pricing practices. Simplification methods, called knowledge distillation and rule extraction, are identified as the primary XAI technique used within the insurance value chain. This is important as the combination of large models to create a smaller, more manageable model with distinct association rules aids in building XAI models which are regularly understandable. XAI is an important evolution of AI to ensure trust, transparency and moral values are embedded within the system’s ecosystem. The assessment of these XAI foci in the context of the insurance industry proves a worthwhile exploration into the unique advantages of XAI, highlighting to industry professionals, regulators and XAI developers where particular focus should be directed in the further development of XAI. This is the first study to analyse XAI’s current applications within the insurance industry, while simultaneously contributing to the interdisciplinary understanding of applied XAI. Advancing the literature on adequate XAI definitions, the authors propose an adapted definition of XAI informed by the systematic review of XAI literature in insurance.
7

Lorente, Maria Paz Sesmero, Elena Magán Lopez, Laura Alvarez Florez, Agapito Ledezma Espino, José Antonio Iglesias Martínez, and Araceli Sanchis de Miguel. "Explaining Deep Learning-Based Driver Models." Applied Sciences 11, no. 8 (April 7, 2021): 3321. http://dx.doi.org/10.3390/app11083321.

Abstract:
Different systems based on Artificial Intelligence (AI) techniques are currently used in relevant areas such as healthcare, cybersecurity, natural language processing, and self-driving cars. However, many of these systems are developed with “black box” AI, which makes it difficult to explain how they work. For this reason, explainability and interpretability are key factors that need to be taken into consideration in the development of AI systems in critical areas. In addition, different contexts produce different explainability needs which must be met. Against this background, Explainable Artificial Intelligence (XAI) appears to be able to address and solve this situation. In the field of automated driving, XAI is particularly needed because the level of automation is constantly increasing according to the development of AI techniques. For this reason, the field of XAI in the context of automated driving is of particular interest. In this paper, we propose the use of an explainable intelligence technique in the understanding of some of the tasks involved in the development of advanced driver-assistance systems (ADAS). Since ADAS assist drivers in driving functions, it is essential to know the reason for the decisions taken. In addition, trusted AI is the cornerstone of the confidence needed in this research area. Thus, due to the complexity and the different variables that are part of the decision-making process, this paper focuses on two specific tasks in this area: the detection of emotions and the distractions of drivers. The results obtained are promising and show the capacity of the explainable artificial techniques in the different tasks of the proposed environments.
8

Letzgus, Simon, Patrick Wagner, Jonas Lederer, Wojciech Samek, Klaus-Robert Muller, and Gregoire Montavon. "Toward Explainable Artificial Intelligence for Regression Models: A methodological perspective." IEEE Signal Processing Magazine 39, no. 4 (July 2022): 40–58. http://dx.doi.org/10.1109/msp.2022.3153277.

9

Han, Juhee, and Younghoon Lee. "Explainable Artificial Intelligence-Based Competitive Factor Identification." ACM Transactions on Knowledge Discovery from Data 16, no. 1 (July 3, 2021): 1–11. http://dx.doi.org/10.1145/3451529.

Abstract:
Competitor analysis is an essential component of corporate strategy, providing both offensive and defensive strategic contexts to identify opportunities and threats. The rapid development of social media has recently led to several methodologies and frameworks facilitating competitor analysis through online reviews. Existing studies only focused on detecting comparative sentences in review comments or utilized low-performance models. However, this study proposes a novel approach to identifying the competitive factors using a recent explainable artificial intelligence approach at the comprehensive product feature level. We establish a model to classify the review comments for each corresponding product and evaluate the relevance of each keyword in such comments during the classification process. We then extract and prioritize the keywords and determine their competitiveness based on relevance. Our experiment results show that the proposed method can effectively extract the competitive factors both qualitatively and quantitatively.
10

Patil, Shruti, Vijayakumar Varadarajan, Siddiqui Mohd Mazhar, Abdulwodood Sahibzada, Nihal Ahmed, Onkar Sinha, Satish Kumar, Kailash Shaw, and Ketan Kotecha. "Explainable Artificial Intelligence for Intrusion Detection System." Electronics 11, no. 19 (September 27, 2022): 3079. http://dx.doi.org/10.3390/electronics11193079.

Abstract:
Intrusion detection systems are widely utilized in the cyber security field, to prevent and mitigate threats. Intrusion detection systems (IDS) help to keep threats and vulnerabilities out of computer networks. To develop effective intrusion detection systems, a range of machine learning methods are available. Machine learning ensemble methods have a well-proven track record when it comes to learning. Using ensemble methods of machine learning, this paper proposes an innovative intrusion detection system. To improve classification accuracy and eliminate false positives, features from the CICIDS-2017 dataset were chosen. This paper proposes an intrusion detection system using machine learning algorithms such as decision trees, random forests, and SVM (IDS). After training these models, an ensemble technique voting classifier was added and achieved an accuracy of 96.25%. Furthermore, the proposed model also incorporates the XAI algorithm LIME for better explainability and understanding of the black-box approach to reliable intrusion detection. Our experimental results confirmed that XAI LIME is more explanation-friendly and more responsive.
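For readers who want to see what such a pipeline looks like in practice, here is a minimal sketch (not the authors' code): a soft-voting ensemble of a decision tree, a random forest, and an SVM, explained locally with LIME. The synthetic data and feature names are placeholders standing in for the selected CICIDS-2017 features.

```python
# Illustrative sketch only: a voting ensemble explained with LIME,
# loosely following the pipeline described in the abstract above.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from lime.lime_tabular import LimeTabularExplainer

# Placeholder data standing in for selected CICIDS-2017 features.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
feature_names = [f"feat_{i}" for i in range(X.shape[1])]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("dt", DecisionTreeClassifier(max_depth=8, random_state=0)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
    ],
    voting="soft",  # soft voting so predict_proba is available for LIME
)
ensemble.fit(X_tr, y_tr)
print("accuracy:", ensemble.score(X_te, y_te))

# Explain one test flow locally with LIME.
explainer = LimeTabularExplainer(
    X_tr, feature_names=feature_names,
    class_names=["benign", "attack"], mode="classification",
)
explanation = explainer.explain_instance(X_te[0], ensemble.predict_proba, num_features=5)
print(explanation.as_list())  # top features pushing this flow toward benign/attack
```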
11

Hassan, Ali, Riza Sulaiman, Mansoor Abdullateef Abdulgabber, and Hasan Kahtan. "TOWARDS USER-CENTRIC EXPLANATIONS FOR EXPLAINABLE MODELS: A REVIEW." Journal of Information System and Technology Management 6, no. 22 (September 1, 2021): 36–50. http://dx.doi.org/10.35631/jistm.622004.

Abstract:
Recent advances in artificial intelligence, particularly in the field of machine learning (ML), have shown that these models can be incredibly successful, producing encouraging results and leading to diverse applications. Despite the promise of artificial intelligence, without transparency of machine learning models, it is difficult for stakeholders to trust the results of such models, which can hinder successful adoption. This concern has sparked scientific interest and led to the development of transparency-supporting algorithms. Although studies have raised awareness of the need for explainable AI, the question of how to meet real users' needs for understanding AI remains unresolved. This study provides a review of the literature on human-centric Machine Learning and new approaches to user-centric explanations for deep learning models. We highlight the challenges and opportunities facing this area of research. The goal is for this review to serve as a resource for both researchers and practitioners. The study found that one of the most difficult aspects of implementing machine learning models is gaining the trust of end-users.
12

Islam, Mohammed Saidul, Iqram Hussain, Md Mezbaur Rahman, Se Jin Park, and Md Azam Hossain. "Explainable Artificial Intelligence Model for Stroke Prediction Using EEG Signal." Sensors 22, no. 24 (December 15, 2022): 9859. http://dx.doi.org/10.3390/s22249859.

Abstract:
State-of-the-art healthcare technologies are incorporating advanced Artificial Intelligence (AI) models, allowing for rapid and easy disease diagnosis. However, most AI models are considered “black boxes,” because there is no explanation for the decisions made by these models. Users may find it challenging to comprehend and interpret the results. Explainable AI (XAI) can explain the machine learning (ML) outputs and contribution of features in disease prediction models. Electroencephalography (EEG) is a potential predictive tool for understanding cortical impairment caused by an ischemic stroke and can be utilized for acute stroke prediction, neurologic prognosis, and post-stroke treatment. This study aims to utilize ML models to classify the ischemic stroke group and the healthy control group for acute stroke prediction in active states. Moreover, XAI tools (Eli5 and LIME) were utilized to explain the behavior of the model and determine the significant features that contribute to stroke prediction models. In this work, we studied 48 patients admitted to a hospital with acute ischemic stroke and 75 healthy adults who had no history of identified other neurological illnesses. EEG was obtained within three months following the onset of ischemic stroke symptoms using frontal, central, temporal, and occipital cortical electrodes (Fz, C1, T7, Oz). EEG data were collected in an active state (walking, working, and reading tasks). In the results of the ML approach, the Adaptive Gradient Boosting models showed around 80% accuracy for the classification of the control group and the stroke group. Eli5 and LIME were utilized to explain the behavior of the stroke prediction model and interpret the model locally around the prediction. The Eli5 and LIME interpretable models emphasized the spectral delta and theta features as local contributors to stroke prediction. From the findings of this explainable AI research, it is expected that the stroke-prediction XAI model will help with post-stroke treatment and recovery, as well as help healthcare professionals, make their diagnostic decisions more explainable.
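The Eli5/LIME workflow itself is not reproduced here; as a rough stand-in, the sketch below ranks placeholder spectral-band features for a boosting classifier with scikit-learn's permutation importance, which plays a similar global-explanation role. The feature names and data are invented.

```python
# Rough stand-in for the global-explanation step described above:
# rank spectral EEG features for a boosting model via permutation importance.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Placeholder band-power features per channel (names are assumptions).
feature_names = [f"{band}_{ch}" for ch in ("Fz", "C1", "T7", "Oz")
                 for band in ("delta", "theta", "alpha", "beta")]
X, y = make_classification(n_samples=300, n_features=len(feature_names), random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

clf = GradientBoostingClassifier(random_state=1).fit(X_tr, y_tr)
result = permutation_importance(clf, X_te, y_te, n_repeats=20, random_state=1)

# Print the five features whose shuffling hurts accuracy the most.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{feature_names[idx]:>10}: {result.importances_mean[idx]:.4f}")
```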
13

Kumar, Akshi, Shubham Dikshit, and Victor Hugo C. Albuquerque. "Explainable Artificial Intelligence for Sarcasm Detection in Dialogues." Wireless Communications and Mobile Computing 2021 (July 2, 2021): 1–13. http://dx.doi.org/10.1155/2021/2939334.

Abstract:
Sarcasm detection in dialogues has been gaining popularity among natural language processing (NLP) researchers with the increased use of conversational threads on social media. Capturing the knowledge of the domain of discourse, context propagation during the course of dialogue, and situational context and tone of the speaker are some important features to train the machine learning models for detecting sarcasm in real time. As situational comedies vibrantly represent human mannerism and behaviour in everyday real-life situations, this research demonstrates the use of an ensemble supervised learning algorithm to detect sarcasm in the benchmark dialogue dataset, MUStARD. The punch-line utterance and its associated context are taken as features to train the eXtreme Gradient Boosting (XGBoost) method. The primary goal is to predict sarcasm in each utterance of the speaker using the chronological nature of a scene. Further, it is vital to prevent model bias and help decision makers understand how to use the models in the right way. Therefore, as a twin goal of this research, we make the learning model used for conversational sarcasm detection interpretable. This is done using two post hoc interpretability approaches, Local Interpretable Model-agnostic Explanations (LIME) and Shapley Additive exPlanations (SHAP), to generate explanations for the output of a trained classifier. The classification results clearly depict the importance of capturing the intersentence context to detect sarcasm in conversational threads. The interpretability methods show the words (features) that influence the decision of the model the most and help the user understand how the model is making the decision for detecting sarcasm in dialogues.
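As a hedged illustration of the post hoc tools named above (synthetic features, not the MUStARD pipeline), the sketch below trains an XGBoost classifier and inspects the SHAP contributions behind a single prediction.

```python
# Minimal sketch: XGBoost plus SHAP for a single local explanation.
# Synthetic utterance/context features stand in for the MUStARD pipeline.
import shap
import xgboost
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=500, n_features=8, random_state=42)
feature_names = [f"context_or_utterance_feat_{i}" for i in range(X.shape[1])]

model = xgboost.XGBClassifier(n_estimators=200, max_depth=4)
model.fit(X, y)

# TreeExplainer gives exact SHAP values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Contribution of each feature to the sarcasm score of the first utterance.
for name, value in sorted(zip(feature_names, shap_values[0]),
                          key=lambda t: abs(t[1]), reverse=True)[:5]:
    print(f"{name}: {value:+.3f}")
```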
14

Esmaeili, Morteza, Riyas Vettukattil, Hasan Banitalebi, Nina R. Krogh, and Jonn Terje Geitung. "Explainable Artificial Intelligence for Human-Machine Interaction in Brain Tumor Localization." Journal of Personalized Medicine 11, no. 11 (November 16, 2021): 1213. http://dx.doi.org/10.3390/jpm11111213.

Abstract:
Primary malignancies in adult brains are globally fatal. Computer vision, especially recent developments in artificial intelligence (AI), have created opportunities to automatically characterize and diagnose tumor lesions in the brain. AI approaches have provided scores of unprecedented accuracy in different image analysis tasks, including differentiating tumor-containing brains from healthy brains. AI models, however, perform as a black box, concealing the rational interpretations that are an essential step towards translating AI imaging tools into clinical routine. An explainable AI approach aims to visualize the high-level features of trained models or integrate into the training process. This study aims to evaluate the performance of selected deep-learning algorithms on localizing tumor lesions and distinguishing the lesion from healthy regions in magnetic resonance imaging contrasts. Despite a significant correlation between classification and lesion localization accuracy (R = 0.46, p = 0.005), the known AI algorithms, examined in this study, classify some tumor brains based on other non-relevant features. The results suggest that explainable AI approaches can develop an intuition for model interpretability and may play an important role in the performance evaluation of deep learning models. Developing explainable AI approaches will be an essential tool to improve human–machine interactions and assist in the selection of optimal training methods.
15

Adarsh, V., and G. R. Gangadharan. "Applying Explainable Artificial Intelligence Models for Understanding Depression Among IT Workers." IT Professional 24, no. 5 (September 1, 2022): 25–29. http://dx.doi.org/10.1109/mitp.2022.3209803.

16

Başağaoğlu, Hakan, Debaditya Chakraborty, Cesar Do Lago, Lilianna Gutierrez, Mehmet Arif Şahinli, Marcio Giacomoni, Chad Furl, Ali Mirchi, Daniel Moriasi, and Sema Sevinç Şengör. "A Review on Interpretable and Explainable Artificial Intelligence in Hydroclimatic Applications." Water 14, no. 8 (April 11, 2022): 1230. http://dx.doi.org/10.3390/w14081230.

Abstract:
This review focuses on the use of Interpretable Artificial Intelligence (IAI) and eXplainable Artificial Intelligence (XAI) models for data imputations and numerical or categorical hydroclimatic predictions from nonlinearly combined multidimensional predictors. The AI models considered in this paper involve Extreme Gradient Boosting, Light Gradient Boosting, Categorical Boosting, Extremely Randomized Trees, and Random Forest. These AI models can transform into XAI models when they are coupled with the explanatory methods such as the Shapley additive explanations and local interpretable model-agnostic explanations. The review highlights that the IAI models are capable of unveiling the rationale behind the predictions while XAI models are capable of discovering new knowledge and justifying AI-based results, which are critical for enhanced accountability of AI-driven predictions. The review also elaborates the importance of domain knowledge and interventional IAI modeling, potential advantages and disadvantages of hybrid IAI and non-IAI predictive modeling, unequivocal importance of balanced data in categorical decisions, and the choice and performance of IAI versus physics-based modeling. The review concludes with a proposed XAI framework to enhance the interpretability and explainability of AI models for hydroclimatic applications.
17

Rostami, Mehrdad, and Mourad Oussalah. "Cancer prediction using graph-based gene selection and explainable classifier." Finnish Journal of eHealth and eWelfare 14, no. 1 (April 14, 2022): 61–78. http://dx.doi.org/10.23996/fjhw.111772.

Abstract:
Several Artificial Intelligence-based models have been developed for cancer prediction. In spite of the promise of artificial intelligence, there are very few models which bridge the gap between traditional human-centered prediction and the potential future of machine-centered cancer prediction. In this study, an efficient and effective model is developed for gene selection and cancer prediction. Moreover, this study proposes an artificial intelligence decision system to provide physicians with a simple and human-interpretable set of rules for cancer prediction. In contrast to previous deep learning-based cancer prediction models, which are difficult to explain to physicians due to their black-box nature, the proposed prediction model is based on a transparent and explainable decision forest model. The performance of the developed approach is compared to three state-of-the-art cancer prediction methods, including TAGA, HPSO and LL. The reported results on five cancer datasets indicate that the developed model can improve the accuracy of cancer prediction and reduce the execution time.
18

Kobylińska, Katarzyna, Tadeusz Orłowski, Mariusz Adamek, and Przemysław Biecek. "Explainable Machine Learning for Lung Cancer Screening Models." Applied Sciences 12, no. 4 (February 12, 2022): 1926. http://dx.doi.org/10.3390/app12041926.

Abstract:
Modern medicine is supported by increasingly sophisticated algorithms. In diagnostics or screening, statistical models are commonly used to assess the risk of disease development, the severity of its course, and expected treatment outcome. The growing availability of very detailed data and increased interest in personalized medicine are leading to the development of effective but complex machine learning models. For these models to be trusted, their predictions must be understandable to both the physician and the patient, hence the growing interest in the area of Explainable Artificial Intelligence (XAI). In this paper, we present selected methods from the XAI field in the example of models applied to assess lung cancer risk in lung cancer screening through low-dose computed tomography. The use of these techniques provides a better understanding of the similarities and differences between three commonly used models in lung cancer screening, i.e., BACH, PLCOm2012, and LCART. For the presentation of the results, we used data from the Domestic Lung Cancer Database. The XAI techniques help to better understand (1) which variables are most important in which model, (2) how they are transformed into model predictions, and facilitate (3) the explanation of model predictions for a particular screenee.
19

Feng, Jinchao, Joshua L. Lansford, Markos A. Katsoulakis, and Dionisios G. Vlachos. "Explainable and trustworthy artificial intelligence for correctable modeling in chemical sciences." Science Advances 6, no. 42 (October 2020): eabc3204. http://dx.doi.org/10.1126/sciadv.abc3204.

Abstract:
Data science has primarily focused on big data, but for many physics, chemistry, and engineering applications, data are often small, correlated and, thus, low dimensional, and sourced from both computations and experiments with various levels of noise. Typical statistics and machine learning methods do not work for these cases. Expert knowledge is essential, but a systematic framework for incorporating it into physics-based models under uncertainty is lacking. Here, we develop a mathematical and computational framework for probabilistic artificial intelligence (AI)–based predictive modeling combining data, expert knowledge, multiscale models, and information theory through uncertainty quantification and probabilistic graphical models (PGMs). We apply PGMs to chemistry specifically and develop predictive guarantees for PGMs generally. Our proposed framework, combining AI and uncertainty quantification, provides explainable results leading to correctable and, eventually, trustworthy models. The proposed framework is demonstrated on a microkinetic model of the oxygen reduction reaction.
20

Mehta, Harshkumar, and Kalpdrum Passi. "Social Media Hate Speech Detection Using Explainable Artificial Intelligence (XAI)." Algorithms 15, no. 8 (August 17, 2022): 291. http://dx.doi.org/10.3390/a15080291.

Abstract:
Explainable artificial intelligence (XAI) characteristics have flexible and multifaceted potential in hate speech detection by deep learning models. Interpreting and explaining decisions made by complex artificial intelligence (AI) models to understand the decision-making process of these model were the aims of this research. As a part of this research study, two datasets were taken to demonstrate hate speech detection using XAI. Data preprocessing was performed to clean data of any inconsistencies, clean the text of the tweets, tokenize and lemmatize the text, etc. Categorical variables were also simplified in order to generate a clean dataset for training purposes. Exploratory data analysis was performed on the datasets to uncover various patterns and insights. Various pre-existing models were applied to the Google Jigsaw dataset such as decision trees, k-nearest neighbors, multinomial naïve Bayes, random forest, logistic regression, and long short-term memory (LSTM), among which LSTM achieved an accuracy of 97.6%. Explainable methods such as LIME (local interpretable model—agnostic explanations) were applied to the HateXplain dataset. Variants of BERT (bidirectional encoder representations from transformers) model such as BERT + ANN (artificial neural network) with an accuracy of 93.55% and BERT + MLP (multilayer perceptron) with an accuracy of 93.67% were created to achieve a good performance in terms of explainability using the ERASER (evaluating rationales and simple English reasoning) benchmark.
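To illustrate the LIME step mentioned above without the BERT variants or the Jigsaw/HateXplain preprocessing, the following sketch explains one prediction of a simple TF-IDF plus logistic-regression classifier; the training texts and labels are toy placeholders.

```python
# Illustrative only: LIME applied to a simple text classifier, not the
# paper's BERT + ANN/MLP models. Training texts below are toy placeholders.
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["I respect everyone in this thread",
         "you people are worthless and should leave",
         "great discussion, thanks all",
         "get out, nobody wants your kind here"]
labels = [0, 1, 0, 1]  # 0 = normal, 1 = hateful (toy labels)

pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(texts, labels)

explainer = LimeTextExplainer(class_names=["normal", "hateful"])
explanation = explainer.explain_instance(
    "nobody wants your worthless opinions here",
    pipeline.predict_proba,   # LIME perturbs the text and queries this function
    num_features=4,
)
print(explanation.as_list())  # word-level weights behind the prediction
```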
21

Yu, Jinqiang, Alexey Ignatiev, Peter J. Stuckey, and Pierre Le Bodic. "Learning Optimal Decision Sets and Lists with SAT." Journal of Artificial Intelligence Research 72 (December 10, 2021): 1251–79. http://dx.doi.org/10.1613/jair.1.12719.

Abstract:
Decision sets and decision lists are two of the most easily explainable machine learning models. Given the renewed emphasis on explainable machine learning decisions, both of these machine learning models are becoming increasingly attractive, as they combine small size and clear explainability. In this paper, we define size as the total number of literals in the SAT encoding of these rule-based models as opposed to earlier work that concentrates on the number of rules. In this paper, we develop approaches to computing minimum-size “perfect” decision sets and decision lists, which are perfectly accurate on the training data, and minimal in size, making use of modern SAT solving technology. We also provide a new method for determining optimal sparse alternatives, which trade off size and accuracy. The experiments in this paper demonstrate that the optimal decision sets computed by the SAT-based approach are comparable with the best heuristic methods, but much more succinct, and thus, more explainable. We contrast the size and test accuracy of optimal decisions lists versus optimal decision sets, as well as other state-of-the-art methods for determining optimal decision lists. Finally, we examine the size of average explanations generated by decision sets and decision lists.
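To make the size measure concrete (a toy sketch, not the paper's SAT encoding): a decision list is an ordered sequence of rules, each rule a conjunction of literals, and size here is counted in literals rather than rules.

```python
# Toy decision list; size is counted as total literals, as in the abstract.
# The rules themselves are invented for illustration, not learned by SAT.
decision_list = [
    # (list of literals, predicted class); a literal is (feature, operator, value)
    ([("petal_length", "<=", 2.5)], "setosa"),
    ([("petal_length", ">", 2.5), ("petal_width", "<=", 1.7)], "versicolor"),
    ([], "virginica"),  # default rule, no literals
]

def predict(sample, rules):
    ops = {"<=": lambda a, b: a <= b, ">": lambda a, b: a > b}
    for literals, label in rules:
        if all(ops[op](sample[feat], val) for feat, op, val in literals):
            return label
    return None

size_in_literals = sum(len(literals) for literals, _ in decision_list)
print("rules:", len(decision_list), "| size in literals:", size_in_literals)  # 3 rules, 3 literals
print(predict({"petal_length": 4.0, "petal_width": 1.2}, decision_list))      # versicolor
```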
22

Rajabi, Enayat, and Somayeh Kafaie. "Knowledge Graphs and Explainable AI in Healthcare." Information 13, no. 10 (September 28, 2022): 459. http://dx.doi.org/10.3390/info13100459.

Abstract:
Building trust and transparency in healthcare can be achieved using eXplainable Artificial Intelligence (XAI), as it facilitates the decision-making process for healthcare professionals. Knowledge graphs can be used in XAI for explainability by structuring information, extracting features and relations, and performing reasoning. This paper highlights the role of knowledge graphs in XAI models in healthcare, considering a state-of-the-art review. Based on our review, knowledge graphs have been used for explainability to detect healthcare misinformation, adverse drug reactions, drug-drug interactions and to reduce the knowledge gap between healthcare experts and AI-based models. We also discuss how to leverage knowledge graphs in pre-model, in-model, and post-model XAI models in healthcare to make them more explainable.
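A minimal sketch of the kind of graph-based reasoning surveyed here; the entities, relations, and interaction below are invented for illustration and are not taken from any real ontology.

```python
# Toy healthcare knowledge graph used to produce a human-readable explanation
# path; the entities and relations are illustrative, not from a real ontology.
import networkx as nx

kg = nx.DiGraph()
kg.add_edge("warfarin", "CYP2C9", relation="metabolized_by")
kg.add_edge("fluconazole", "CYP2C9", relation="inhibits")
kg.add_edge("CYP2C9", "bleeding_risk", relation="modulates")

def explain_interaction(graph, drug_a, drug_b, outcome):
    """Return the edge-level paths linking two drugs to a shared outcome."""
    def render(path):
        return "; ".join(f"{u} --{graph[u][v]['relation']}--> {v}"
                         for u, v in zip(path, path[1:]))
    return (render(nx.shortest_path(graph, drug_a, outcome)),
            render(nx.shortest_path(graph, drug_b, outcome)))

for line in explain_interaction(kg, "warfarin", "fluconazole", "bleeding_risk"):
    print(line)
```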
23

Khrais, Laith T. "Role of Artificial Intelligence in Shaping Consumer Demand in E-Commerce." Future Internet 12, no. 12 (December 8, 2020): 226. http://dx.doi.org/10.3390/fi12120226.

Abstract:
The advent and incorporation of technology in businesses have reformed operations across industries. Notably, major technical shifts in e-commerce aim to influence customer behavior in favor of some products and brands. Artificial intelligence (AI) comes on board as an essential innovative tool for personalization and customizing products to meet specific demands. This research finds that, despite the contribution of AI systems in e-commerce, its ethical soundness is a contentious issue, especially regarding the concept of explainability. The study adopted the use of word cloud analysis, voyance analysis, and concordance analysis to gain a detailed understanding of the idea of explainability as has been utilized by researchers in the context of AI. Motivated by a corpus analysis, this research lays the groundwork for a uniform front, thus contributing to a scientific breakthrough that seeks to formulate Explainable Artificial Intelligence (XAI) models. XAI is a machine learning field that inspects and tries to understand the models and steps involved in how the black box decisions of AI systems are made; it provides insights into the decision points, variables, and data used to make a recommendation. This study suggested that, to deploy explainable XAI systems, ML models should be improved, making them interpretable and comprehensible.
24

Hacker, Philipp, Ralf Krestel, Stefan Grundmann, and Felix Naumann. "Explainable AI under contract and tort law: legal incentives and technical challenges." Artificial Intelligence and Law 28, no. 4 (January 19, 2020): 415–39. http://dx.doi.org/10.1007/s10506-020-09260-6.

Abstract:
Abstract This paper shows that the law, in subtle ways, may set hitherto unrecognized incentives for the adoption of explainable machine learning applications. In doing so, we make two novel contributions. First, on the legal side, we show that to avoid liability, professional actors, such as doctors and managers, may soon be legally compelled to use explainable ML models. We argue that the importance of explainability reaches far beyond data protection law, and crucially influences questions of contractual and tort liability for the use of ML models. To this effect, we conduct two legal case studies, in medical and corporate merger applications of ML. As a second contribution, we discuss the (legally required) trade-off between accuracy and explainability and demonstrate the effect in a technical case study in the context of spam classification.
25

Mankodiya, Harsh, Dhairya Jadav, Rajesh Gupta, Sudeep Tanwar, Wei-Chiang Hong, and Ravi Sharma. "OD-XAI: Explainable AI-Based Semantic Object Detection for Autonomous Vehicles." Applied Sciences 12, no. 11 (May 24, 2022): 5310. http://dx.doi.org/10.3390/app12115310.

Abstract:
In recent years, artificial intelligence (AI) has become one of the most prominent fields in autonomous vehicles (AVs). With the help of AI, the stress levels of drivers have been reduced, as most of the work is executed by the AV itself. With the increasing complexity of models, explainable artificial intelligence (XAI) techniques work as handy tools that allow naive people and developers to understand the intricate workings of deep learning models. These techniques can be paralleled to AI to increase their interpretability. One essential task of AVs is to be able to follow the road. This paper attempts to justify how AVs can detect and segment the road on which they are moving using deep learning (DL) models. We trained and compared three semantic segmentation architectures for the task of pixel-wise road detection. Max IoU scores of 0.9459 and 0.9621 were obtained on the train and test set. Such DL algorithms are called “black box models” as they are hard to interpret due to their highly complex structures. Integrating XAI enables us to interpret and comprehend the predictions of these abstract models. We applied various XAI methods and generated explanations for the proposed segmentation model for road detection in AVs.
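Since IoU is the headline metric quoted above, here is its definition in code for binary masks (the segmentation architectures themselves are not shown):

```python
# Intersection over Union for binary segmentation masks,
# the metric quoted above (0.9459 train / 0.9621 test in the abstract).
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """pred and target are boolean masks of identical shape (H, W)."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float(intersection / union) if union else 1.0  # two empty masks: define IoU = 1

pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:4] = True      # predicted road pixels
target = np.zeros((4, 4), dtype=bool); target[1:3, 0:3] = True  # ground-truth road pixels
print(round(iou(pred, target), 3))  # intersection 4, union 8 -> 0.5
```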
26

Hussain, Sardar Mehboob, Domenico Buongiorno, Nicola Altini, Francesco Berloco, Berardino Prencipe, Marco Moschetta, Vitoantonio Bevilacqua, and Antonio Brunetti. "Shape-Based Breast Lesion Classification Using Digital Tomosynthesis Images: The Role of Explainable Artificial Intelligence." Applied Sciences 12, no. 12 (June 19, 2022): 6230. http://dx.doi.org/10.3390/app12126230.

Abstract:
Computer-aided diagnosis (CAD) systems can help radiologists in numerous medical tasks including classification and staging of the various diseases. The 3D tomosynthesis imaging technique adds value to the CAD systems in diagnosis and classification of the breast lesions. Several convolutional neural network (CNN) architectures have been proposed to classify the lesion shapes to the respective classes using a similar imaging method. However, not only is the black box nature of these CNN models questionable in the healthcare domain, but so is the morphological-based cancer classification, concerning the clinicians. As a result, this study proposes both a mathematically and visually explainable deep-learning-driven multiclass shape-based classification framework for the tomosynthesis breast lesion images. In this study, authors exploit eight pretrained CNN architectures for the classification task on the previously extracted regions of interests images containing the lesions. Additionally, the study also unleashes the black box nature of the deep learning models using two well-known perceptive explainable artificial intelligence (XAI) algorithms including Grad-CAM and LIME. Moreover, two mathematical-structure-based interpretability techniques, i.e., t-SNE and UMAP, are employed to investigate the pretrained models’ behavior towards multiclass feature clustering. The experimental results of the classification task validate the applicability of the proposed framework by yielding the mean area under the curve of 98.2%. The explanability study validates the applicability of all employed methods, mainly emphasizing the pros and cons of both Grad-CAM and LIME methods that can provide useful insights towards explainable CAD systems.
27

Aslam, Nida, Irfan Ullah Khan, Samiha Mirza, Alanoud AlOwayed, Fatima M. Anis, Reef M. Aljuaid, and Reham Baageel. "Interpretable Machine Learning Models for Malicious Domains Detection Using Explainable Artificial Intelligence (XAI)." Sustainability 14, no. 12 (June 16, 2022): 7375. http://dx.doi.org/10.3390/su14127375.

Abstract:
With the expansion of the internet, a major threat has emerged involving the spread of malicious domains intended by attackers to perform illegal activities aiming to target governments, violating privacy of organizations, and even manipulating everyday users. Therefore, detecting these harmful domains is necessary to combat the growing network attacks. Machine Learning (ML) models have shown significant outcomes towards the detection of malicious domains. However, the “black box” nature of the complex ML models obstructs their wide-ranging acceptance in some of the fields. The emergence of Explainable Artificial Intelligence (XAI) has successfully incorporated the interpretability and explicability in the complex models. Furthermore, the post hoc XAI model has enabled the interpretability without affecting the performance of the models. This study aimed to propose an Explainable Artificial Intelligence (XAI) model to detect malicious domains on a recent dataset containing 45,000 samples of malicious and non-malicious domains. In the current study, initially several interpretable ML models, such as Decision Tree (DT) and Naïve Bayes (NB), and black box ensemble models, such as Random Forest (RF), Extreme Gradient Boosting (XGB), AdaBoost (AB), and Cat Boost (CB) algorithms, were implemented and found that XGB outperformed the other classifiers. Furthermore, the post hoc XAI global surrogate model (Shapley additive explanations) and local surrogate LIME were used to generate the explanation of the XGB prediction. Two sets of experiments were performed; initially the model was executed using a preprocessed dataset and later with selected features using the Sequential Forward Feature selection algorithm. The results demonstrate that ML algorithms were able to distinguish benign and malicious domains with overall accuracy ranging from 0.8479 to 0.9856. The ensemble classifier XGB achieved the highest result, with an AUC and accuracy of 0.9991 and 0.9856, respectively, before the feature selection algorithm, while there was an AUC of 0.999 and accuracy of 0.9818 after the feature selection algorithm. The proposed model outperformed the benchmark study.
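A hedged sketch of the selection-then-boosting step described above, with synthetic data standing in for the domain features and scikit-learn's SequentialFeatureSelector standing in for the authors' Sequential Forward Feature selection:

```python
# Sketch of sequential forward feature selection feeding an XGBoost classifier;
# synthetic data stands in for the lexical/host-based domain features.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=3000, n_features=20, n_informative=8, random_state=7)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=7)

selector = SequentialFeatureSelector(
    XGBClassifier(n_estimators=50, max_depth=4),
    n_features_to_select=8, direction="forward", cv=3,
)
selector.fit(X_tr, y_tr)

model = XGBClassifier(n_estimators=300, max_depth=4)
model.fit(selector.transform(X_tr), y_tr)
proba = model.predict_proba(selector.transform(X_te))[:, 1]
print("AUC on held-out data:", round(roc_auc_score(y_te, proba), 4))
```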
28

Fior, Jacopo, Luca Cagliero, and Paolo Garza. "Leveraging Explainable AI to Support Cryptocurrency Investors." Future Internet 14, no. 9 (August 24, 2022): 251. http://dx.doi.org/10.3390/fi14090251.

Abstract:
In the last decade, cryptocurrency trading has attracted the attention of private and professional traders and investors. To forecast the financial markets, algorithmic trading systems based on Artificial Intelligence (AI) models are becoming more and more established. However, they suffer from the lack of transparency, thus hindering domain experts from directly monitoring the fundamentals behind market movements. This is particularly critical for cryptocurrency investors, because the study of the main factors influencing cryptocurrency prices, including the characteristics of the blockchain infrastructure, is crucial for driving experts’ decisions. This paper proposes a new visual analytics tool to support domain experts in the explanation of AI-based cryptocurrency trading systems. To describe the rationale behind AI models, it exploits an established method, namely SHapley Additive exPlanations, which allows experts to identify the most discriminating features and provides them with an interactive and easy-to-use graphical interface. The simulations carried out on 21 cryptocurrencies over a 8-year period demonstrate the usability of the proposed tool.
29

de Lange, Petter Eilif, Borger Melsom, Christian Bakke Vennerød, and Sjur Westgaard. "Explainable AI for Credit Assessment in Banks." Journal of Risk and Financial Management 15, no. 12 (November 28, 2022): 556. http://dx.doi.org/10.3390/jrfm15120556.

Abstract:
Banks’ credit scoring models are required by financial authorities to be explainable. This paper proposes an explainable artificial intelligence (XAI) model for predicting credit default on a unique dataset of unsecured consumer loans provided by a Norwegian bank. We combined a LightGBM model with SHAP, which enables the interpretation of explanatory variables affecting the predictions. The LightGBM model clearly outperforms the bank’s actual credit scoring model (Logistic Regression). We found that the most important explanatory variables for predicting default in the LightGBM model are the volatility of utilized credit balance, remaining credit in percentage of total credit and the duration of the customer relationship. Our main contribution is the implementation of XAI methods in banking, exploring how these methods can be applied to improve the interpretability and reliability of state-of-the-art AI models. We also suggest a method for analyzing the potential economic value of an improved credit scoring model.
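To illustrate the LightGBM plus SHAP pairing (the bank's loan data is not available, so the features below are invented placeholders), a global ranking by mean absolute SHAP value might look like this:

```python
# Sketch of the LightGBM + SHAP pairing: global feature ranking by mean |SHAP|.
# Feature names are invented placeholders, not the bank's actual variables.
import numpy as np
import shap
from lightgbm import LGBMClassifier
from sklearn.datasets import make_classification

feature_names = ["credit_balance_volatility", "remaining_credit_pct",
                 "relationship_duration", "age", "income", "num_products"]
X, y = make_classification(n_samples=2000, n_features=len(feature_names),
                           weights=[0.95, 0.05], random_state=3)  # defaults are rare

model = LGBMClassifier(n_estimators=300).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
# Some shap versions return one array per class for binary LightGBM; take class 1.
contrib = shap_values[1] if isinstance(shap_values, list) else shap_values
ranking = np.abs(contrib).mean(axis=0)

for idx in ranking.argsort()[::-1]:
    print(f"{feature_names[idx]:>28}: {ranking[idx]:.4f}")
```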
30

Baniecki, Hubert, and Przemyslaw Biecek. "Responsible Prediction Making of COVID-19 Mortality (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 18 (May 18, 2021): 15755–56. http://dx.doi.org/10.1609/aaai.v35i18.17874.

Abstract:
For high-stakes prediction making, the Responsible Artificial Intelligence (RAI) is more important than ever. It builds upon Explainable Artificial Intelligence (XAI) to advance the efforts in providing fairness, model explainability, and accountability to the AI systems. During the literature review of COVID-19 related prognosis and diagnosis, we found out that most of the predictive models are not faithful to the RAI principles, which can lead to biassed results and wrong reasoning. To solve this problem, we show how novel XAI techniques boost transparency, reproducibility and quality of models.
31

Vieira, Carla Piazzon Ramos, and Luciano Antonio Digiampietri. "A study about Explainable Artificial Intelligence: using decision tree to explain SVM." Revista Brasileira de Computação Aplicada 12, no. 1 (January 8, 2020): 113–21. http://dx.doi.org/10.5335/rbca.v12i1.10247.

Abstract:
The technologies supporting Artificial Intelligence (AI) have advanced rapidly over the past few years and AI is becoming a commonplace in every aspect of life like the future of self-driving cars or earlier health diagnosis. For this to occur shortly, the entire community stands in front of the barrier of explainability, an inherent problem of latest models (e.g. Deep Neural Networks) that were not present in the previous hype of AI (linear and rule-based models). Most of these recent models are used as black boxes without understanding partially or even completely how different features influence the model prediction avoiding algorithmic transparency. In this paper, we focus on how much we can understand the decisions made by an SVM Classifier in a post-hoc model agnostic approach. Furthermore, we train a tree-based model (inherently interpretable) using labels from the SVM, called secondary training data to provide explanations and compare permutation importance method to the more commonly used measures such as accuracy and show that our methods are both more reliable and meaningful techniques to use. We also outline the main challenges for such methods and conclude that model-agnostic interpretability is a key component in making machine learning more trustworthy.
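The surrogate idea described here fits in a few lines; the sketch below uses toy data rather than the authors' setup: an SVM's predictions become "secondary training data" for an interpretable decision tree, and fidelity measures how faithfully the tree mimics the SVM.

```python
# Sketch of the post hoc surrogate approach: fit an interpretable tree
# on the SVM's own predictions ("secondary training data") and inspect it.
from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
black_box = SVC(kernel="rbf", gamma="scale").fit(data.data, data.target)

# Secondary training data: original inputs labeled by the black box.
surrogate_labels = black_box.predict(data.data)
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(data.data, surrogate_labels)

# Fidelity: how often the surrogate agrees with the SVM it explains.
print("fidelity:", accuracy_score(surrogate_labels, surrogate.predict(data.data)))
print(export_text(surrogate, feature_names=list(data.feature_names)))
```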
32

Burkart, Nadia, and Marco F. Huber. "A Survey on the Explainability of Supervised Machine Learning." Journal of Artificial Intelligence Research 70 (January 19, 2021): 245–317. http://dx.doi.org/10.1613/jair.1.12228.

Abstract:
Predictions obtained by, e.g., artificial neural networks have a high accuracy but humans often perceive the models as black boxes. Insights about the decision making are mostly opaque for humans. Particularly understanding the decision making in highly sensitive areas such as healthcare or finance, is of paramount importance. The decision-making behind the black boxes requires it to be more transparent, accountable, and understandable for humans. This survey paper provides essential definitions, an overview of the different principles and methodologies of explainable Supervised Machine Learning (SML). We conduct a state-of-the-art survey that reviews past and recent explainable SML approaches and classifies them according to the introduced definitions. Finally, we illustrate principles by means of an explanatory case study and discuss important future directions.
33

Combs, Kara, Mary Fendley, and Trevor Bihl. "A Preliminary Look at Heuristic Analysis for Assessing Artificial Intelligence Explainability." WSEAS TRANSACTIONS ON COMPUTER RESEARCH 8 (June 1, 2020): 61–72. http://dx.doi.org/10.37394/232018.2020.8.9.

Abstract:
Artificial Intelligence and Machine Learning (AI/ML) models are increasingly criticized for their “black-box” nature. Therefore, eXplainable AI (XAI) approaches to extract human-interpretable decision processes from algorithms have been explored. However, XAI research lacks understanding of algorithmic explainability from a human factors’ perspective. This paper presents a repeatable human factors heuristic analysis for XAI with a demonstration on four decision tree classifier algorithms.
34

Sabol, Patrik, Peter Sinčák, Jan Magyar, and Pitoyo Hartono. "Semantically Explainable Fuzzy Classifier." International Journal of Pattern Recognition and Artificial Intelligence 33, no. 12 (November 2019): 2051006. http://dx.doi.org/10.1142/s0218001420510064.

Abstract:
In machine learning, there are many high-performance classifiers. However, because of lack of transparency, they are not able to explain the data in a human-friendly form. In this paper, Cumulative Fuzzy Class Membership Criterion (CFCMC), a recently proposed fuzzy modeling classifier, is modified and utilized for a novel approach of information extraction from the labeled data. This approach is able to explain the classifiability of the data in the form of semantics. Extracted semantics give information about the structure of the data and the similarities between classes. To get a relevant image of its classification performance, it is compared to three well-known and frequently used classifiers, which are considered as black boxes, namely, SVM, MLP, and kNN, and to a similar transparent approach, MF ARTMAP. To validate extracted semantics, they are compared to visualization of classified data and to confusion matrices generated during the evaluation of the created CFCMC models. The experimental result shows that CFCMC is not necessarily the best classifier, although, in most cases, it is not too far from the best performing methods. However, the semantical explanation potentially allows the classifier to be applied as a support for human decision processes in real-world problems.
35

Wei, Kaihua, Bojian Chen, Jingcheng Zhang, Shanhui Fan, Kaihua Wu, Guangyu Liu, and Dongmei Chen. "Explainable Deep Learning Study for Leaf Disease Classification." Agronomy 12, no. 5 (April 26, 2022): 1035. http://dx.doi.org/10.3390/agronomy12051035.

Abstract:
Explainable artificial intelligence has been extensively studied recently. However, the research of interpretable methods in the agricultural field has not been systematically studied. We studied the interpretability of deep learning models in different agricultural classification tasks based on the fruit leaves dataset. The purpose is to explore whether the classification model is more inclined to extract the appearance characteristics of leaves or the texture characteristics of leaf lesions during the feature extraction process. The dataset was arranged into three experiments with different categories. In each experiment, the VGG, GoogLeNet, and ResNet models were used and the ResNet-attention model was applied with three interpretable methods. The results show that the ResNet model has the highest accuracy rate in the three experiments, which are 99.11%, 99.4%, and 99.89%, respectively. It is also found that the attention module could improve the feature extraction of the model, and clarify the focus of the model in different experiments when extracting features. These results will help agricultural practitioners better apply deep learning models to solve more practical problems.
38

Lu, Haohui, and Shahadat Uddin. "Explainable Stacking-Based Model for Predicting Hospital Readmission for Diabetic Patients." Information 13, no. 9 (September 15, 2022): 436. http://dx.doi.org/10.3390/info13090436.

Abstract:
Artificial intelligence is changing the practice of healthcare. While it is essential to employ such solutions, making them transparent to medical experts is more critical. Most of the previous work presented disease prediction models but did not explain them. Many healthcare stakeholders do not have a solid foundation in these models. Treating these models as a ‘black box’ diminishes confidence in their predictions. The development of explainable artificial intelligence (XAI) methods has enabled us to change the models into a ‘white box’. XAI allows human users to comprehend the results from machine learning algorithms by making them easy to interpret. For instance, the expenditures of healthcare services associated with unplanned readmissions are enormous. This study proposed a stacking-based model to predict 30-day hospital readmission for diabetic patients. We employed Random Under-Sampling to solve the imbalanced class issue, then utilised SelectFromModel for feature selection and constructed a stacking model with base and meta learners. Performance analysis showed that our model predicts readmission better than other existing machine learning models. This proposed model is also explainable and interpretable. Based on permutation feature importance, the strong predictors were the number of inpatients, the primary diagnosis, discharge to home with home service, and the number of emergencies. The local interpretable model-agnostic explanations method was also employed to demonstrate explainability at the individual level. The findings for the readmission of diabetic patients could be helpful in medical practice and provide valuable recommendations to stakeholders for minimising readmission and reducing public healthcare costs.
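As a rough illustration of the pipeline described above, the following hedged scikit-learn/imbalanced-learn sketch combines random under-sampling, model-based feature selection, a stacking classifier, and permutation feature importance; the synthetic data, base learners, and hyperparameters are assumptions rather than the authors' configuration.

```python
# Hedged sketch of the described pipeline: under-sampling, model-based feature
# selection, a stacking classifier, and permutation feature importance.
# Synthetic data and estimator choices are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.inspection import permutation_importance
from imblearn.under_sampling import RandomUnderSampler

X, y = make_classification(n_samples=5000, n_features=20, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# 1. Balance the classes with random under-sampling
X_bal, y_bal = RandomUnderSampler(random_state=0).fit_resample(X_tr, y_tr)

# 2. Model-based feature selection
selector = SelectFromModel(RandomForestClassifier(n_estimators=100, random_state=0)).fit(X_bal, y_bal)
X_bal_sel, X_te_sel = selector.transform(X_bal), selector.transform(X_te)

# 3. Stacking: tree-based base learners, logistic-regression meta learner
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                ("dt", DecisionTreeClassifier(random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000),
)
stack.fit(X_bal_sel, y_bal)
print("test accuracy:", stack.score(X_te_sel, y_te))

# 4. Global explanation: permutation feature importance on the held-out set
imp = permutation_importance(stack, X_te_sel, y_te, n_repeats=10, random_state=0)
print("top features:", np.argsort(imp.importances_mean)[::-1][:4])
```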
39

Srinivasu, Parvathaneni Naga, N. Sandhya, Rutvij H. Jhaveri, and Roshani Raut. "From Blackbox to Explainable AI in Healthcare: Existing Tools and Case Studies." Mobile Information Systems 2022 (June 13, 2022): 1–20. http://dx.doi.org/10.1155/2022/8167821.

Abstract:
Introduction. Artificial intelligence (AI) models have been employed to automate decision-making, from commerce to more critical fields directly affecting human lives, including healthcare. Although the vast majority of these proposed AI systems are considered black box models that lack explainability, there is an increasing trend of attempting to create medical explainable Artificial Intelligence (XAI) systems using approaches such as attention mechanisms and surrogate models. An AI system is said to be explainable if humans can tell how the system reached its decision. Various XAI-driven healthcare approaches and their performances are discussed in the current study. The toolkits used for local and global post hoc explainability, and the multiple techniques pertaining to Rational, Data, and Performance explainability, are also discussed. Methods. The explainability of the artificial intelligence model in the healthcare domain is implemented through Local Interpretable Model-Agnostic Explanations and Shapley Additive Explanations for better comprehensibility of the internal working mechanism of the original AI models and the correlations among the features that influence the decision of the model. Results. The current state-of-the-art XAI-based and future technologies through XAI are reported on research findings in various implementation aspects, including research challenges and limitations of existing models. The role of XAI in the healthcare domain, ranging from the earlier prediction of future illness to the smart diagnosis of disease, is discussed. The metrics considered in evaluating the model’s explainability are presented, along with various explainability tools. Three case studies about the role of XAI in the healthcare domain, with their performances, are incorporated for better comprehensibility. Conclusion. The future perspective of XAI in healthcare will assist in obtaining research insight in the healthcare domain.
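The sketch below shows, in hedged form, how LIME and SHAP are typically applied to a tabular clinical classifier; the dataset and model are stand-ins, not the case studies from the paper.

```python
# Hedged sketch: local explanations with LIME and SHAP for a tabular clinical model.
# Data, model and feature names are placeholders, not the paper's case studies.
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_tr, X_te, y_tr, y_te = train_test_split(data.data, data.target, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# SHAP: additive feature attributions for one prediction
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te[:1])   # per-feature contributions for the first patient

# LIME: fit an interpretable surrogate around the same instance
lime_explainer = LimeTabularExplainer(
    X_tr, feature_names=list(data.feature_names),
    class_names=list(data.target_names), mode="classification")
explanation = lime_explainer.explain_instance(X_te[0], model.predict_proba, num_features=5)
print(explanation.as_list())   # top features pushing this prediction up or down
```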
40

Ammar, Nariman, and Arash Shaban-Nejad. "Explainable Artificial Intelligence Recommendation System by Leveraging the Semantics of Adverse Childhood Experiences: Proof-of-Concept Prototype Development." JMIR Medical Informatics 8, no. 11 (November 4, 2020): e18752. http://dx.doi.org/10.2196/18752.

Abstract:
Background The study of adverse childhood experiences and their consequences has emerged over the past 20 years. Although the conclusions from these studies are available, the same is not true of the data. Accordingly, it is a complex problem to build a training set and develop machine-learning models from these studies. Classic machine learning and artificial intelligence techniques cannot provide a full scientific understanding of the inner workings of the underlying models. This raises credibility issues due to the lack of transparency and generalizability. Explainable artificial intelligence is an emerging approach for promoting credibility, accountability, and trust in mission-critical areas such as medicine by combining machine-learning approaches with explanatory techniques that explicitly show what the decision criteria are and why (or how) they have been established. Hence, thinking about how machine learning could benefit from knowledge graphs that combine “common sense” knowledge as well as semantic reasoning and causality models is a potential solution to this problem. Objective In this study, we aimed to leverage explainable artificial intelligence, and propose a proof-of-concept prototype for a knowledge-driven evidence-based recommendation system to improve mental health surveillance. Methods We used concepts from an ontology that we have developed to build and train a question-answering agent using the Google DialogFlow engine. In addition to the question-answering agent, the initial prototype includes knowledge graph generation and recommendation components that leverage third-party graph technology. Results To showcase the framework functionalities, we here present a prototype design and demonstrate the main features through four use case scenarios motivated by an initiative currently implemented at a children’s hospital in Memphis, Tennessee. Ongoing development of the prototype requires implementing an optimization algorithm of the recommendations, incorporating a privacy layer through a personal health library, and conducting a clinical trial to assess both usability and usefulness of the implementation. Conclusions This semantic-driven explainable artificial intelligence prototype can enhance health care practitioners’ ability to provide explanations for the decisions they make.
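As a hedged illustration of the knowledge-graph idea behind such a recommendation system, the toy rdflib sketch below stores a few invented triples and answers a recommendation query; the ontology terms are made up and bear no relation to the actual ACEs ontology or the DialogFlow prototype.

```python
# Hedged sketch: a toy knowledge graph queried for a recommendation.
# The ontology, triples and query are invented for illustration; the actual
# prototype uses a dedicated ACEs ontology and third-party graph technology.
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/aces#")
g = Graph()
g.bind("ex", EX)

# A child exposed to an adverse experience, and a service that addresses it
g.add((EX.child1, RDF.type, EX.Child))
g.add((EX.child1, EX.exposedTo, EX.FoodInsecurity))
g.add((EX.foodBank, RDF.type, EX.CommunityService))
g.add((EX.foodBank, EX.addresses, EX.FoodInsecurity))

# Recommend every service that addresses an experience the child was exposed to
query = """
SELECT ?service WHERE {
  ex:child1 ex:exposedTo ?exposure .
  ?service  ex:addresses ?exposure .
}
"""
for row in g.query(query, initNs={"ex": EX}):
    print("recommended:", row.service)
```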
41

Škrlj, Blaž, Matej Martinc, Nada Lavrač, and Senja Pollak. "autoBOT: evolving neuro-symbolic representations for explainable low resource text classification." Machine Learning 110, no. 5 (April 14, 2021): 989–1028. http://dx.doi.org/10.1007/s10994-021-05968-x.

Abstract:
Learning from texts has been widely adopted throughout industry and science. While state-of-the-art neural language models have shown very promising results for text classification, they are expensive to (pre-)train, require large amounts of data and tuning of hundreds of millions or more parameters. This paper explores how automatically evolved text representations can serve as a basis for an explainable, low-resource branch of models with competitive performance that are subject to automated hyperparameter tuning. We present autoBOT (automatic Bags-Of-Tokens), an autoML approach suitable for low resource learning scenarios, where both the hardware and the amount of data required for training are limited. The proposed approach consists of an evolutionary algorithm that jointly optimizes various sparse representations of a given text (including word, subword, POS tag, keyword-based, knowledge graph-based and relational features) and two types of document embeddings (non-sparse representations). The key idea of autoBOT is that, instead of evolving at the learner level, evolution is conducted at the representation level. The proposed method offers competitive classification performance on fourteen real-world classification tasks when compared against a competitive autoML approach that evolves ensemble models, as well as state-of-the-art neural language models such as BERT and RoBERTa. Moreover, the approach is explainable, as the importance of the parts of the input space is part of the final solution yielded by the proposed optimization procedure, offering potential for meta-transfer learning.
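A hedged sketch of the representation-level idea follows: several sparse text representations are combined with explicit weights and fed to a linear model whose coefficients remain readable. The weights are hand-set here, not evolved as in autoBOT, and the tiny corpus is invented.

```python
# Hedged sketch of the representation-level idea: combine several sparse text
# representations with explicit weights and keep the model linear so feature
# importances stay readable. Weights are hand-set, not evolved as in autoBOT.
import numpy as np
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["the plot was wonderful", "terrible acting and a dull story",
         "a wonderful, moving film", "dull, predictable and terrible"]
labels = [1, 0, 1, 0]

representation = FeatureUnion(
    [("word", TfidfVectorizer(analyzer="word", ngram_range=(1, 2))),
     ("char", TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)))],
    transformer_weights={"word": 1.0, "char": 0.5},   # stand-in for evolved weights
)
clf = Pipeline([("features", representation),
                ("model", LogisticRegression(max_iter=1000))]).fit(texts, labels)

# Explainability: the highest-weighted features of the linear model
feature_names = clf.named_steps["features"].get_feature_names_out()
coefs = clf.named_steps["model"].coef_[0]
top = np.argsort(np.abs(coefs))[::-1][:5]
print([(feature_names[i], round(coefs[i], 3)) for i in top])
```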
42

Chaddad, Ahmad, Jihao Peng, Jian Xu, and Ahmed Bouridane. "Survey of Explainable AI Techniques in Healthcare." Sensors 23, no. 2 (January 5, 2023): 634. http://dx.doi.org/10.3390/s23020634.

Abstract:
Artificial intelligence (AI) with deep learning models has been widely applied in numerous domains, including medical imaging and healthcare tasks. In the medical field, any judgment or decision is fraught with risk. A doctor will carefully judge whether a patient is sick before forming a reasonable explanation based on the patient’s symptoms and/or an examination. Therefore, to be a viable and accepted tool, AI needs to mimic human judgment and interpretation skills. Specifically, explainable AI (XAI) aims to explain the information behind the black-box model of deep learning that reveals how the decisions are made. This paper provides a survey of the most recent XAI techniques used in healthcare and related medical imaging applications. We summarize and categorize the XAI types, and highlight the algorithms used to increase interpretability in medical imaging topics. In addition, we focus on the challenging XAI problems in medical applications and provide guidelines to develop better interpretations of deep learning models using XAI concepts in medical image and text analysis. Furthermore, this survey provides future directions to guide developers and researchers for future prospective investigations on clinical topics, particularly on applications with medical imaging.
43

Lim, Suk-Young, Dong-Kyu Chae, and Sang-Chul Lee. "Detecting Deepfake Voice Using Explainable Deep Learning Techniques." Applied Sciences 12, no. 8 (April 13, 2022): 3926. http://dx.doi.org/10.3390/app12083926.

Abstract:
Fake media, generated by methods such as deepfakes, have become indistinguishable from real media, but their detection has not improved at the same pace. Furthermore, the absence of interpretability on deepfake detection models makes their reliability questionable. In this paper, we present a human perception level of interpretability for deepfake audio detection. Based on their characteristics, we implement several explainable artificial intelligence (XAI) methods used for image classification on an audio-related task. In addition, by examining the human cognitive process of XAI on image classification, we suggest the use of a corresponding data format for providing interpretability. Using this novel concept, a fresh interpretation using attribution scores can be provided.
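To illustrate the attribution-score idea in code, the hedged sketch below computes plain gradient saliency over a spectrogram-like input of a toy CNN; the network and input are placeholders, not the detector studied in the paper.

```python
# Hedged sketch: gradient saliency on a spectrogram-like input of a small CNN,
# illustrating the attribution-score idea; the network and input are placeholders,
# not the paper's deepfake-voice detector.
import torch
import torch.nn as nn

detector = nn.Sequential(                      # toy stand-in for a deepfake detector
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2))
detector.eval()

spectrogram = torch.randn(1, 1, 128, 256, requires_grad=True)  # mel-spectrogram placeholder
logits = detector(spectrogram)
fake_score = logits[0, 1]                      # assume index 1 = "fake"
fake_score.backward()

saliency = spectrogram.grad.abs().squeeze()    # (128, 256) attribution map
print(saliency.shape, "high values = time-frequency bins that drove the decision")
```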
44

Mishra, Sunny, Amit K. Shukla, and Pranab K. Muhuri. "Explainable Fuzzy AI Challenge 2022: Winner’s Approach to a Computationally Efficient and Explainable Solution." Axioms 11, no. 10 (September 20, 2022): 489. http://dx.doi.org/10.3390/axioms11100489.

Abstract:
An explainable artificial intelligence (XAI) agent is an autonomous agent that uses a fundamental XAI model at its core to perceive its environment and suggests actions to be performed. One of the significant challenges for these XAI agents is performing their operation efficiently, which is governed by the underlying inference and optimization system. Along similar lines, an Explainable Fuzzy AI Challenge (XFC 2022) competition was launched, whose principal objective was to develop a fully autonomous and optimized XAI algorithm that could play the Python arcade game “Asteroid Smasher”. This research first investigates inference models to implement an efficient (XAI) agent using rule-based fuzzy systems. We also discuss the proposed approach (which won the competition) to attain efficiency in the XAI algorithm. We have explored the potential of the widely used Mamdani- and TSK-based fuzzy inference systems and investigated which model might have a more optimized implementation. Even though the TSK-based model outperforms Mamdani in several applications, no empirical evidence suggests this will also be applicable in implementing an XAI agent. The experimentations are then performed to find a better-performing inference system in a fast-paced environment. The thorough analysis recommends more robust and efficient TSK-based XAI agents than Mamdani-based fuzzy inference systems.
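The following minimal sketch shows one zero-order TSK inference step of the kind compared in the paper: Gaussian memberships give rule firing strengths, and the crisp output is their weighted average of constant consequents. The rules, membership parameters, and consequents are invented for illustration.

```python
# Hedged sketch: one zero-order TSK inference step. Rule antecedents use Gaussian
# memberships and consequents are constants, so the crisp output is simply the
# firing-strength-weighted average -- the cheapness that favours TSK in a fast game loop.
# Membership parameters, rules and consequents are invented for illustration.
import numpy as np

def gauss(x, mean, sigma):
    return np.exp(-0.5 * ((x - mean) / sigma) ** 2)

def tsk_thrust(distance, closing_speed):
    # Rule 1: IF distance is NEAR AND speed is HIGH -> thrust = 1.0 (evade hard)
    # Rule 2: IF distance is FAR  AND speed is LOW  -> thrust = 0.1 (cruise)
    w1 = gauss(distance, mean=50.0, sigma=30.0) * gauss(closing_speed, mean=8.0, sigma=3.0)
    w2 = gauss(distance, mean=300.0, sigma=100.0) * gauss(closing_speed, mean=1.0, sigma=2.0)
    weights = np.array([w1, w2])
    consequents = np.array([1.0, 0.1])
    return float(np.dot(weights, consequents) / (weights.sum() + 1e-9))

print(tsk_thrust(distance=60.0, closing_speed=7.0))   # close, fast asteroid -> near-full thrust
```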
45

Ding, Tao, Fatema Hasan, Warren K. Bickel, and Shimei Pan. "Building High Performance Explainable Machine Learning Models for Social Media-based Substance Use Prediction." International Journal on Artificial Intelligence Tools 29, no. 03n04 (June 2020): 2060009. http://dx.doi.org/10.1142/s021821302060009x.

Abstract:
Social media contain rich information that can be used to help understand human mind and behavior. Social media data, however, are mostly unstructured (e.g., text and image) and a large number of features may be needed to represent them (e.g., we may need millions of unigrams to represent social media texts). Moreover, accurately assessing human behavior is often difficult (e.g., assessing addiction may require medical diagnosis). As a result, the ground truth data needed to train a supervised human behavior model are often difficult to obtain at a large scale. To avoid overfitting, many state-of-the-art behavior models employ sophisticated unsupervised or self-supervised machine learning methods to leverage a large amount of unsupervised data for both feature learning and dimension reduction. Unfortunately, despite their high performance, these advanced machine learning models often rely on latent features that are hard to explain. Since understanding the knowledge captured in these models is important to behavior scientists and public health providers, we explore new methods to build machine learning models that are not only accurate but also interpretable. We evaluate the effectiveness of the proposed methods in predicting Substance Use Disorders (SUD). We believe the methods we proposed are general and applicable to a wide range of data-driven human trait and behavior analysis applications.
46

Linardatos, Pantelis, Vasilis Papastefanopoulos, and Sotiris Kotsiantis. "Explainable AI: A Review of Machine Learning Interpretability Methods." Entropy 23, no. 1 (December 25, 2020): 18. http://dx.doi.org/10.3390/e23010018.

Abstract:
Recent advances in artificial intelligence (AI) have led to its widespread industrial adoption, with machine learning systems demonstrating superhuman performance in a significant number of tasks. However, this surge in performance has often been achieved through increased model complexity, turning such systems into “black box” approaches and causing uncertainty regarding the way they operate and, ultimately, the way that they come to decisions. This ambiguity has made it problematic for machine learning systems to be adopted in sensitive yet critical domains, where their value could be immense, such as healthcare. As a result, scientific interest in the field of Explainable Artificial Intelligence (XAI), a field that is concerned with the development of new methods that explain and interpret machine learning models, has been tremendously reignited over recent years. This study focuses on machine learning interpretability methods; more specifically, a literature review and taxonomy of these methods are presented, as well as links to their programming implementations, in the hope that this survey would serve as a reference point for both theorists and practitioners.
47

Vilone, Giulia, and Luca Longo. "Classification of Explainable Artificial Intelligence Methods through Their Output Formats." Machine Learning and Knowledge Extraction 3, no. 3 (August 4, 2021): 615–61. http://dx.doi.org/10.3390/make3030032.

Abstract:
Machine and deep learning have proven their utility to generate data-driven models with high accuracy and precision. However, their non-linear, complex structures are often difficult to interpret. Consequently, many scholars have developed a plethora of methods to explain their functioning and the logic of their inferences. This systematic review aimed to organise these methods into a hierarchical classification system that builds upon and extends existing taxonomies by adding a significant dimension—the output formats. The reviewed scientific papers were retrieved by conducting an initial search on Google Scholar with the keywords “explainable artificial intelligence”; “explainable machine learning”; and “interpretable machine learning”. A subsequent iterative search was carried out by checking the bibliography of these articles. The addition of the dimension of the explanation format makes the proposed classification system a practical tool for scholars, supporting them to select the most suitable type of explanation format for the problem at hand. Given the wide variety of challenges faced by researchers, the existing XAI methods provide several solutions to meet the requirements that differ considerably between the users, problems and application fields of artificial intelligence (AI). The task of identifying the most appropriate explanation can be daunting, thus the need for a classification system that helps with the selection of methods. This work concludes by critically identifying the limitations of the formats of explanations and by providing recommendations and possible future research directions on how to build a more generally applicable XAI method. Future work should be flexible enough to meet the many requirements posed by the widespread use of AI in several fields, and the new regulations.
48

Sudmanns, Martin, Hannah Augustin, Lucas van der Meer, Andrea Baraldi, and Dirk Tiede. "The Austrian Semantic EO Data Cube Infrastructure." Remote Sensing 13, no. 23 (November 26, 2021): 4807. http://dx.doi.org/10.3390/rs13234807.

Abstract:
Big optical Earth observation (EO) data analytics usually start from numerical, sub-symbolic reflectance values that lack inherent semantic information (meaning) and require interpretation. However, interpretation is an ill-posed problem that is difficult for many users to solve. Our semantic EO data cube architecture aims to implement computer vision in EO data cubes as an explainable artificial intelligence approach. Automatic semantic enrichment provides semi-symbolic spectral categories for all observations as an initial interpretation of color information. Users graphically create knowledge-based semantic models in a convergence-of-evidence approach, where color information is modelled a-priori as one property of semantic concepts, such as land cover entities. This differs from other approaches that do not use a-priori knowledge and assume a direct 1:1 relationship between reflectance values and land cover. The semantic models are explainable, transferable, reusable, and users can share them in a knowledgebase. We provide insights into our web-based architecture, called Sen2Cube.at, including semantic enrichment, data models, knowledge engineering, semantic querying, and the graphical user interface. Our implemented prototype uses all Sentinel-2 MSI images covering Austria; however, the approach is transferable to other geographical regions and sensors. We demonstrate that explainable, knowledge-based big EO data analysis is possible via graphical semantic querying in EO data cubes.
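A toy sketch of the rule-based, knowledge-driven enrichment idea is given below: reflectance bands are mapped to semi-symbolic categories through explicit, human-readable rules. The thresholds are illustrative assumptions, not the Sen2Cube.at rule set.

```python
# Hedged sketch: rule-based semantic enrichment of reflectance values.
# Explicit, human-readable rules assign semi-symbolic categories per pixel;
# the thresholds are illustrative only, not the Sen2Cube.at rule set.
import numpy as np

red = np.array([[0.05, 0.30], [0.02, 0.08]])    # toy red-band reflectance
nir = np.array([[0.45, 0.35], [0.01, 0.40]])    # toy near-infrared reflectance

ndvi = (nir - red) / (nir + red + 1e-9)

categories = np.full(red.shape, "other", dtype=object)
categories[ndvi > 0.4] = "vegetation"                   # strong NDVI -> vegetation
categories[(ndvi <= 0.0) & (nir < 0.1)] = "water"       # low NIR and non-positive NDVI -> water
print(categories)
# Because each category is the outcome of a named rule, the classification can be
# explained by pointing at the rule that fired -- not at a learned weight.
```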
49

Prédhumeau, Manon, Lyuba Mancheva, Julie Dugdale, and Anne Spalanzani. "Agent-Based Modeling for Predicting Pedestrian Trajectories Around an Autonomous Vehicle." Journal of Artificial Intelligence Research 73 (April 19, 2022): 1385–433. http://dx.doi.org/10.1613/jair.1.13425.

Abstract:
This paper addresses modeling and simulating pedestrian trajectories when interacting with an autonomous vehicle in a shared space. Most pedestrian–vehicle interaction models are not suitable for predicting individual trajectories. Data-driven models yield accurate predictions but lack generalizability to new scenarios, usually do not run in real time and produce results that are poorly explainable. Current expert models do not deal with the diversity of possible pedestrian interactions with the vehicle in a shared space and lack microscopic validation. We propose an expert pedestrian model that combines the social force model and a new decision model for anticipating pedestrian–vehicle interactions. The proposed model integrates different observed pedestrian behaviors, as well as the behaviors of the social groups of pedestrians, in diverse interaction scenarios with a car. We calibrate the model by fitting the parameters values on a training set. We validate the model and evaluate its predictive potential through qualitative and quantitative comparisons with ground truth trajectories. The proposed model reproduces observed behaviors that have not been replicated by the social force model and outperforms the social force model at predicting pedestrian behavior around the vehicle on the used dataset. The model generates explainable and real-time trajectory predictions. Additional evaluation on a new dataset shows that the model generalizes well to new scenarios and can be applied to an autonomous vehicle embedded prediction.
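As a hedged illustration of the social force component the expert model builds on, the numpy sketch below updates a pedestrian's position with a driving force toward the goal and an exponentially decaying repulsion from the vehicle; the parameter values are illustrative, not the calibrated ones from the paper.

```python
# Hedged sketch of the classic social force update the expert model extends:
# a driving force toward the goal plus exponential repulsion from the vehicle.
# Parameter values are illustrative, not the calibrated values from the paper.
import numpy as np

def social_force_step(pos, vel, goal, car_pos, dt=0.1,
                      desired_speed=1.3, tau=0.5, A=3.0, B=1.5):
    # Driving force: relax toward the desired velocity pointing at the goal
    direction = (goal - pos) / (np.linalg.norm(goal - pos) + 1e-9)
    f_drive = (desired_speed * direction - vel) / tau

    # Repulsive force from the vehicle, decaying exponentially with distance
    away = pos - car_pos
    dist = np.linalg.norm(away) + 1e-9
    f_car = A * np.exp(-dist / B) * (away / dist)

    vel = vel + (f_drive + f_car) * dt
    pos = pos + vel * dt
    return pos, vel

pos, vel = np.array([0.0, 0.0]), np.array([0.0, 0.0])
goal, car = np.array([10.0, 0.0]), np.array([5.0, 1.0])
for _ in range(5):
    pos, vel = social_force_step(pos, vel, goal, car)
print(pos)   # the pedestrian drifts toward the goal while curving away from the car
```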
50

Melo, Elvis, Ivanovitch Silva, Daniel G. Costa, Carlos M. D. Viegas, and Thiago M. Barros. "On the Use of eXplainable Artificial Intelligence to Evaluate School Dropout." Education Sciences 12, no. 12 (November 22, 2022): 845. http://dx.doi.org/10.3390/educsci12120845.

Abstract:
The school dropout problem is recurrent across different educational areas and poses important challenges to achieving educational objectives. In this scenario, technical schools have also suffered from considerable dropout levels, even as the need for professionals in areas associated with computing and engineering keeps increasing. Since the dropout phenomenon may not be uniform, identifying the profile of these students has become urgent, highlighting techniques such as eXplainable Artificial Intelligence (XAI) that can ensure a more ethical, transparent, and auditable use of educational data. Therefore, this article applies and evaluates XAI methods to predict students at risk of dropping out, considering a database of students from the Federal Institute of Rio Grande do Norte (IFRN), a Brazilian technical school. For that, a checklist was created comprising explanatory evaluation metrics drawn from a broad literature review, resulting in the proposal of a new explainability index to evaluate XAI frameworks. In doing so, we expect to support the adoption of XAI models for a better understanding of school-related data, reinforcing important research efforts in this area.