A selection of scholarly literature on the topic "Local Interpretable Model-Agnostic Explanations (LIME)"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles


Browse the lists of current journal articles, books, dissertations, conference papers, and other scholarly sources on the topic "Local Interpretable Model-Agnostic Explanations (LIME)".

Next to every work in the list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication in .pdf format and read its abstract online, provided the corresponding details are available in the record's metadata.

Journal articles on the topic "Local Interpretable Model-Agnostic Explanations (LIME)":

1

Zafar, Muhammad Rehman, and Naimul Khan. "Deterministic Local Interpretable Model-Agnostic Explanations for Stable Explainability." Machine Learning and Knowledge Extraction 3, no. 3 (June 30, 2021): 525–41. http://dx.doi.org/10.3390/make3030027.

Abstract:
Local Interpretable Model-Agnostic Explanations (LIME) is a popular technique used to increase the interpretability and explainability of black box Machine Learning (ML) algorithms. LIME typically creates an explanation for a single prediction of any ML model by learning a simpler interpretable model (e.g., a linear classifier) around the prediction: simulated data are generated around the instance by random perturbation, and feature importance is obtained by applying some form of feature selection. While LIME and similar local algorithms have gained popularity due to their simplicity, the random perturbation methods result in shifts in data and instability in the generated explanations, where different explanations can be generated for the same prediction. These are critical issues that can prevent deployment of LIME in sensitive domains. We propose a deterministic version of LIME. Instead of random perturbation, we utilize Agglomerative Hierarchical Clustering (AHC) to group the training data together and K-Nearest Neighbour (KNN) to select the relevant cluster for the new instance that is being explained. After finding the relevant cluster, a simple model (i.e., a linear model or decision tree) is trained over the selected cluster to generate the explanations. Experimental results on six public (three binary and three multi-class) and six synthetic datasets show the superiority of Deterministic Local Interpretable Model-Agnostic Explanations (DLIME), where we quantitatively determine the stability and faithfulness of DLIME compared to LIME.
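As a rough Python sketch of the deterministic pipeline summarized above (AHC to group the training data, KNN to select the cluster of the instance being explained, then a simple surrogate fitted on that cluster), one might write the following. It is an illustrative approximation using scikit-learn, not the authors' released implementation; the function name dlime_style_explanation, the black_box model, and the data arrays are assumptions.

from sklearn.cluster import AgglomerativeClustering
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import Ridge

def dlime_style_explanation(black_box, X_train, x, n_clusters=10):
    # 1. Group the training data with Agglomerative Hierarchical Clustering.
    cluster_labels = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(X_train)

    # 2. Use KNN to decide which cluster the instance being explained belongs to.
    knn = KNeighborsClassifier(n_neighbors=1).fit(X_train, cluster_labels)
    cluster_id = knn.predict(x.reshape(1, -1))[0]
    neighbourhood = X_train[cluster_labels == cluster_id]

    # 3. Fit a simple, interpretable surrogate on the selected cluster, using the
    #    black box's predicted probability of the positive class as the target
    #    (binary case shown for simplicity).
    targets = black_box.predict_proba(neighbourhood)[:, 1]
    surrogate = Ridge(alpha=1.0).fit(neighbourhood, targets)

    # The surrogate's coefficients serve as deterministic feature importances.
    return surrogate.coef_

Because no random sampling is involved, repeated calls on the same instance return identical weights, which is the stability property the paper targets.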
2

Knapič, Samanta, Avleen Malhi, Rohit Saluja, and Kary Främling. "Explainable Artificial Intelligence for Human Decision Support System in the Medical Domain." Machine Learning and Knowledge Extraction 3, no. 3 (September 19, 2021): 740–70. http://dx.doi.org/10.3390/make3030037.

Abstract:
In this paper, we present the potential of Explainable Artificial Intelligence methods for decision support in medical image analysis scenarios. Using three types of explainable methods applied to the same medical image data set, we aimed to improve the comprehensibility of the decisions provided by the Convolutional Neural Network (CNN). In vivo gastral images obtained by a video capsule endoscopy (VCE) were the subject of visual explanations, with the goal of increasing health professionals’ trust in black-box predictions. We implemented two post hoc interpretable machine learning methods, called Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), and an alternative explanation approach, the Contextual Importance and Utility (CIU) method. The produced explanations were assessed by human evaluation. We conducted three user studies based on explanations provided by LIME, SHAP and CIU. Users from different non-medical backgrounds carried out a series of tests in a web-based survey setting and stated their experience and understanding of the given explanations. Three user groups (n = 20, 20, 20) with three distinct forms of explanations were quantitatively analyzed. We found that, as hypothesized, the CIU-explainable method performed better than both LIME and SHAP methods in terms of improving support for human decision-making and being more transparent and thus understandable to users. Additionally, CIU outperformed LIME and SHAP by generating explanations more rapidly. Our findings suggest that there are notable differences in human decision-making between various explanation support settings. In line with that, we present three potential explainable methods that, with future improvements in implementation, can be generalized to different medical data sets and can provide effective decision support to medical experts.
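For the LIME component of such a study, a visual explanation for a single CNN prediction can be produced with the lime package along the following lines; the cnn model object and the image array are placeholders, and this is only a sketch of one common way to present the result, not the authors' exact setup.

from lime import lime_image
from skimage.segmentation import mark_boundaries

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image,                      # H x W x 3 array, e.g. one endoscopy frame
    classifier_fn=cnn.predict,  # maps a batch of images to class probabilities
    top_labels=2,
    hide_color=0,
    num_samples=1000,           # number of perturbed images LIME evaluates
)

# Highlight the superpixels that most support the top predicted class.
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0],
    positive_only=True,
    num_features=5,
    hide_rest=False,
)
overlay = mark_boundaries(img / 255.0, mask)  # assumes 0-255 pixel values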
3

Singh, Devesh. "Interpretable Machine-Learning Approach in Estimating FDI Inflow: Visualization of ML Models with LIME and H2O." TalTech Journal of European Studies 11, no. 1 (May 1, 2021): 133–52. http://dx.doi.org/10.2478/bjes-2021-0009.

Abstract:
In advancement of interpretable machine learning (IML), this research proposes local interpretable model-agnostic explanations (LIME) as a new visualization technique, applied in a novel, informative way to analyze foreign direct investment (FDI) inflow. The article examines the determinants of FDI inflow through IML with a supervised learning method, analyzing the foreign investment determinants in Hungary using the open-source artificial intelligence H2O platform. The author used three ML algorithms—general linear model (GLM), gradient boosting machine (GBM), and random forest (RF) classifier—to analyze the FDI inflow from 2001 to 2018. The results show that, of the three classifiers, GBM performs best at analyzing the FDI inflow determinants. The value of production in a region is the most influential determinant of FDI inflow in Hungarian regions. Explanatory visualizations of the analyzed dataset are presented, which supports their use in decision-making.
4

Weitz, Katharina, Teena Hassan, Ute Schmid, and Jens-Uwe Garbas. "Deep-learned faces of pain and emotions: Elucidating the differences of facial expressions with the help of explainable AI methods." tm - Technisches Messen 86, no. 7-8 (July 26, 2019): 404–12. http://dx.doi.org/10.1515/teme-2019-0024.

Abstract:
Deep neural networks are successfully used for object and face recognition in images and videos. However, for applying such networks in practice, for example in hospitals as a pain recognition tool, the current procedures are only suitable to a limited extent. The advantage of deep neural methods is that they can learn complex non-linear relationships between raw data and target classes without being limited to a set of hand-crafted features provided by humans. The disadvantage, however, is that due to the complexity of these networks it is not possible to interpret the knowledge stored inside the network: it is a black-box learning procedure. Explainable Artificial Intelligence (AI) approaches mitigate this problem by extracting explanations for decisions and representing them in a human-interpretable form. The aim of this paper is to investigate the explainable AI methods Layer-wise Relevance Propagation (LRP) and Local Interpretable Model-agnostic Explanations (LIME). These approaches are applied to explain how a deep neural network distinguishes facial expressions of pain from facial expressions of emotions such as happiness and disgust.
5

Kumar, Akshi, Shubham Dikshit, and Victor Hugo C. Albuquerque. "Explainable Artificial Intelligence for Sarcasm Detection in Dialogues." Wireless Communications and Mobile Computing 2021 (July 2, 2021): 1–13. http://dx.doi.org/10.1155/2021/2939334.

Abstract:
Sarcasm detection in dialogues has been gaining popularity among natural language processing (NLP) researchers with the increased use of conversational threads on social media. Capturing the knowledge of the domain of discourse, context propagation during the course of dialogue, and situational context and tone of the speaker are some important features to train the machine learning models for detecting sarcasm in real time. As situational comedies vibrantly represent human mannerism and behaviour in everyday real-life situations, this research demonstrates the use of an ensemble supervised learning algorithm to detect sarcasm in the benchmark dialogue dataset, MUStARD. The punch-line utterance and its associated context are taken as features to train the eXtreme Gradient Boosting (XGBoost) method. The primary goal is to predict sarcasm in each utterance of the speaker using the chronological nature of a scene. Further, it is vital to prevent model bias and help decision makers understand how to use the models in the right way. Therefore, as a twin goal of this research, we make the learning model used for conversational sarcasm detection interpretable. This is done using two post hoc interpretability approaches, Local Interpretable Model-agnostic Explanations (LIME) and Shapley Additive exPlanations (SHAP), to generate explanations for the output of a trained classifier. The classification results clearly depict the importance of capturing the intersentence context to detect sarcasm in conversational threads. The interpretability methods show the words (features) that influence the decision of the model the most and help the user understand how the model is making the decision for detecting sarcasm in dialogues.
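A post hoc LIME explanation of the kind used here can be generated for a text classifier with the lime package roughly as follows; the TF-IDF + XGBoost pipeline, class names, training variables, and example utterance are stand-ins included only to make the sketch self-contained, not the paper's actual model.

from lime.lime_text import LimeTextExplainer
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from xgboost import XGBClassifier

# Hypothetical stand-in for the trained sarcasm classifier.
pipeline = make_pipeline(TfidfVectorizer(), XGBClassifier())
pipeline.fit(train_utterances, train_labels)

explainer = LimeTextExplainer(class_names=["not_sarcastic", "sarcastic"])
exp = explainer.explain_instance(
    "Oh great, another meeting that could have been an email.",
    pipeline.predict_proba,   # LIME perturbs the text and queries this function
    num_features=6,           # number of words reported as evidence
)
print(exp.as_list())          # (word, weight) pairs that drove the prediction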
6

Hung, Sheng-Chieh, Hui-Ching Wu, and Ming-Hseng Tseng. "Remote Sensing Scene Classification and Explanation Using RSSCNet and LIME." Applied Sciences 10, no. 18 (September 4, 2020): 6151. http://dx.doi.org/10.3390/app10186151.

Abstract:
Remote sensing scene classification is needed in disaster investigation, traffic control, and land-use resource management, and how to quickly and accurately classify such remote sensing imagery has become a popular research topic. However, applying large, deep neural network models to train classifiers in the hope of obtaining good classification results is often very time-consuming. In this study, a new CNN (convolutional neural network) architecture, i.e., RSSCNet (remote sensing scene classification network), with high generalization capability was designed. Moreover, a two-stage cyclical learning rate policy and a no-freezing transfer learning method were developed to speed up model training and enhance accuracy. In addition, the manifold learning t-SNE (t-distributed stochastic neighbor embedding) algorithm was used to verify the effectiveness of the proposed model, and the LIME (local interpretable model-agnostic explanations) algorithm was applied to improve the results in cases where the model made wrong predictions. Comparing the results on three publicly available datasets with those obtained in previous studies, the experimental results show that the model and method proposed in this paper achieve better scene classification more quickly and more efficiently.
7

Manikis, Georgios C., Georgios S. Ioannidis, Loizos Siakallis, Katerina Nikiforaki, Michael Iv, Diana Vozlic, Katarina Surlan-Popovic, Max Wintermark, Sotirios Bisdas, and Kostas Marias. "Multicenter DSC–MRI-Based Radiomics Predict IDH Mutation in Gliomas." Cancers 13, no. 16 (August 5, 2021): 3965. http://dx.doi.org/10.3390/cancers13163965.

Abstract:
To address the current lack of dynamic susceptibility contrast magnetic resonance imaging (DSC–MRI)-based radiomics to predict isocitrate dehydrogenase (IDH) mutations in gliomas, we present a multicenter study that featured an independent exploratory set for radiomics model development and external validation using two independent cohorts. The maximum performance of the IDH mutation status prediction on the validation set had an accuracy of 0.544 (Cohen’s kappa: 0.145, F1-score: 0.415, area under the curve-AUC: 0.639, sensitivity: 0.733, specificity: 0.491), which significantly improved to an accuracy of 0.706 (Cohen’s kappa: 0.282, F1-score: 0.474, AUC: 0.667, sensitivity: 0.6, specificity: 0.736) when dynamic-based standardization of the images was performed prior to the radiomics. Model explainability using local interpretable model-agnostic explanations (LIME) and Shapley additive explanations (SHAP) revealed potential intuitive correlations between the IDH–wildtype increased heterogeneity and the texture complexity. These results strengthened our hypothesis that DSC–MRI radiogenomics in gliomas hold the potential to provide increased predictive performance from models that generalize well and provide understandable patterns between IDH mutation status and the extracted features toward enabling the clinical translation of radiogenomics in neuro-oncology.
8

Modhukur, Vijayachitra, Shakshi Sharma, Mainak Mondal, Ankita Lawarde, Keiu Kask, Rajesh Sharma, and Andres Salumets. "Machine Learning Approaches to Classify Primary and Metastatic Cancers Using Tissue of Origin-Based DNA Methylation Profiles." Cancers 13, no. 15 (July 27, 2021): 3768. http://dx.doi.org/10.3390/cancers13153768.

Abstract:
Metastatic cancers account for up to 90% of cancer-related deaths. The clear differentiation of metastatic cancers from primary cancers is crucial for cancer type identification and developing targeted treatment for each cancer type. DNA methylation patterns are suggested to be an intriguing target for cancer prediction and are also considered to be an important mediator for the transition to metastatic cancer. In the present study, we used 24 cancer types and 9303 methylome samples downloaded from publicly available data repositories, including The Cancer Genome Atlas (TCGA) and the Gene Expression Omnibus (GEO). We constructed machine learning classifiers to discriminate metastatic, primary, and non-cancerous methylome samples. We applied support vector machines (SVM), Naive Bayes (NB), extreme gradient boosting (XGBoost), and random forest (RF) machine learning models to classify the cancer types based on their tissue of origin. RF outperformed the other classifiers, with an average accuracy of 99%. Moreover, we applied local interpretable model-agnostic explanations (LIME) to explain important methylation biomarkers to classify cancer types.
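A tabular LIME explanation of a single random forest prediction, in the spirit of the analysis above, can be sketched as follows; the feature matrices, probe identifiers, and class names are placeholders rather than the authors' data or code.

from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(X_train, y_train)             # methylation values, tissue-of-origin labels

explainer = LimeTabularExplainer(
    X_train,
    feature_names=cpg_probe_ids,     # e.g. CpG probe identifiers
    class_names=cancer_type_names,
    discretize_continuous=True,
)

exp = explainer.explain_instance(
    X_test[0], rf.predict_proba, num_features=10, top_labels=1
)
# Methylation features that most influenced the predicted class for this sample.
print(exp.as_list(label=exp.available_labels()[0]))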
9

Steed, Ryan, and Aylin Caliskan. "A set of distinct facial traits learned by machines is not predictive of appearance bias in the wild." AI and Ethics 1, no. 3 (January 12, 2021): 249–60. http://dx.doi.org/10.1007/s43681-020-00035-y.

Abstract:
Research in social psychology has shown that people’s biased, subjective judgments about another’s personality based solely on their appearance are not predictive of their actual personality traits. But researchers and companies often utilize computer vision models to predict similarly subjective personality attributes such as “employability”. We seek to determine whether state-of-the-art, black box face processing technology can learn human-like appearance biases. With features extracted with FaceNet, a widely used face recognition framework, we train a transfer learning model on human subjects’ first impressions of personality traits in other faces as measured by social psychologists. We find that features extracted with FaceNet can be used to predict human appearance bias scores for deliberately manipulated faces but not for randomly generated faces scored by humans. Additionally, in contrast to work with human biases in social psychology, the model does not find a significant signal correlating politicians’ vote shares with perceived competence bias. With Local Interpretable Model-Agnostic Explanations (LIME), we provide several explanations for this discrepancy. Our results suggest that some signals of appearance bias documented in social psychology are not embedded by the machine learning techniques we investigate. We shed light on the ways in which appearance bias could be embedded in face processing technology and cast further doubt on the practice of predicting subjective traits based on appearances.
10

Udo Sass, A., E. Esatbeyoglu, and T. Iwwerks. "Signal Pre-Selection for Monitoring and Prediction of Vehicle Powertrain Component Aging." Science & Technique 18, no. 6 (December 5, 2019): 519–24. http://dx.doi.org/10.21122/2227-1031-2019-18-6-519-524.

Abstract:
Predictive maintenance has become important for avoiding unplanned downtime of modern vehicles. With increasing functionality, the data exchanged between Electronic Control Units (ECU) grows rapidly. A large number of in-vehicle signals are provided for monitoring an aging process. Various components of a vehicle age due to their usage, and this component aging is only visible in a certain number of in-vehicle signals. In this work, we present a signal selection method for in-vehicle signals in order to determine relevant signals to monitor and predict powertrain component aging of vehicles. Our application considers the aging of powertrain components with respect to clogging of structural components. We measure the component aging process in certain time intervals. Owing to this, unevenly spaced time series data are preprocessed to generate comparable in-vehicle data. First, we aggregate the data in certain intervals; thus, the dynamic in-vehicle database is reduced, which enables us to analyze the signals more efficiently. Secondly, we implement machine learning algorithms to generate a digital model of the measured aging process. With the help of Local Interpretable Model-Agnostic Explanations (LIME), the model becomes interpretable. This allows us to extract the most relevant signals and to reduce the amount of processed data. Our results show that a certain number of in-vehicle signals are sufficient for predicting the aging process of the considered structural component. Consequently, our approach allows us to reduce the data transmission of in-vehicle signals with the goal of predictive maintenance.
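One way to turn per-prediction LIME output into a signal pre-selection, loosely following the idea above, is to aggregate absolute feature weights over many explained instances. The sketch below is speculative: the regression model aging_model, the arrays X_train and X_sample, and signal_names are assumptions, and the ranking is over LIME's discretized feature descriptions.

from collections import defaultdict
from lime.lime_tabular import LimeTabularExplainer

explainer = LimeTabularExplainer(
    X_train, feature_names=signal_names, mode="regression"
)

importance = defaultdict(float)
for row in X_sample:                 # a subset of the aggregated intervals
    exp = explainer.explain_instance(
        row, aging_model.predict, num_features=len(signal_names)
    )
    for description, weight in exp.as_list():   # e.g. ("oil_temp <= 92.1", 0.31)
        importance[description] += abs(weight)

# Each description names an in-vehicle signal (plus a value range); the
# highest-ranked entries indicate which signals are worth transmitting.
ranked = sorted(importance, key=importance.get, reverse=True)
print(ranked[:10])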

Dissertations on the topic "Local Interpretable Model-Agnostic Explanations (LIME)":

1

Fjellström, Lisa. "The Contribution of Visual Explanations in Forensic Investigations of Deepfake Video : An Evaluation." Thesis, Umeå universitet, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-184671.

Abstract:
Videos manipulated by machine learning have increased rapidly online in the past years. So-called deepfakes can depict people who never participated in a video recording by transposing their faces onto others in it. This raises concerns about the authenticity of media, which demands higher-performing detection methods in forensics. The introduction of AI detectors has been of interest, but is held back today by their lack of interpretability. The objective of this thesis was therefore to examine what the explainable AI method local interpretable model-agnostic explanations (LIME) could contribute to forensic investigations of deepfake video. An evaluation was conducted in which three multimedia forensics specialists assessed the contribution of visual explanations of classifications when investigating deepfake video frames. The estimated contribution was not significant, yet the answers showed that LIME may be used to indicate areas in which to start an examination. LIME was, however, not considered to provide sufficient proof of why a frame was classified as 'fake', and, if introduced, would be used as one of several methods in the process. Issues were apparent regarding the interpretability of the explanations, as well as LIME's ability to indicate features of manipulation with superpixels.
2

Norrie, Christian. "Explainable AI techniques for sepsis diagnosis : Evaluating LIME and SHAP through a user study." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-19845.

Abstract:
Artificial intelligence has had a large impact on many industries and transformed some domains quite radically. There is tremendous potential in applying AI to the field of medical diagnostics. A major issue with applying these techniques to some domains is an inability for AI models to provide an explanation or justification for their predictions. This creates a problem wherein a user may not trust an AI prediction, or there are legal requirements for justifying decisions that are not met. This thesis overviews how two explainable AI techniques (Shapley Additive Explanations and Local Interpretable Model-Agnostic Explanations) can establish a degree of trust for the user in the medical diagnostics field. These techniques are evaluated through a user study. User study results suggest that supplementing classifications or predictions with a post-hoc visualization increases interpretability by a small margin. Further investigation and research utilizing a user study survey or interview is suggested to increase interpretability and explainability of machine learning results.
3

Malmberg, Jacob, Öhman Marcus Nystad, and Alexandra Hotti. "Implementing Machine Learning in the Credit Process of a Learning Organization While Maintaining Transparency Using LIME." Thesis, KTH, Industriell ekonomi och organisation (Inst.), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-232579.

Abstract:
To determine whether a credit limit for a corporate client should be changed, a financial institution writes a PM containing text and financial data that is then assessed by a credit committee, which decides whether to increase the limit or not. To make this process more efficient, machine learning algorithms were used to classify the credit PMs instead of a committee. Since most machine learning algorithms are black boxes, the LIME framework was used to find the most important features driving the classification. The results of this study show that credit memos can be classified with high accuracy and that LIME can be used to indicate which parts of the memo had the biggest impact. This implies that the credit process could be improved by utilizing machine learning while maintaining transparency. However, machine learning may disrupt learning processes within the organization.
4

Saluja, Rohit. "Interpreting Multivariate Time Series for an Organization Health Platform." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-289465.

Abstract:
Machine learning-based systems are rapidly becoming popular because it has been realized that machines are more efficient and effective than humans at performing certain tasks. Although machine learning algorithms are extremely popular, they are also very literal and undeviating. This has led to a huge research surge in the field of interpretability in machine learning to ensure that machine learning models are reliable, fair, and can be held liable for their decision-making process. Moreover, in most real-world problems, simply making predictions with machine learning algorithms only partially solves the problem. Time series is one of the most popular and important data types because of its dominant presence in the fields of business, economics, and engineering. Despite this, interpretability in time series is still relatively unexplored compared to tabular, text, and image data. With the growing research in the field of interpretability in machine learning, there is also a pressing need to be able to quantify the quality of explanations produced after interpreting machine learning models. For this reason, evaluation of interpretability is extremely important, yet the evaluation of interpretability for models built on time series seems largely unexplored in research circles. This thesis focused on achieving and evaluating model-agnostic interpretability in a time series forecasting problem. The use case discussed in this thesis comes from a digital consultancy company that wants to take a data-driven approach to understanding the effect of various sales-related activities on the sales deals the company closes. The solution involved framing the problem as a time series forecasting problem to predict the sales deals and interpreting the underlying forecasting model. Interpretability was achieved using two novel model-agnostic interpretability techniques, Local Interpretable Model-Agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP). The explanations produced were assessed through human evaluation of interpretability. The results of the human evaluation studies clearly indicate that the explanations produced by LIME and SHAP greatly helped laypeople understand the predictions made by the machine learning model. The results also indicated that LIME and SHAP explanations were almost equally understandable, with LIME performing better by a very small margin. The work done during this project can easily be extended to any time series forecasting or classification scenario for achieving and evaluating interpretability. Furthermore, it can offer a good framework for achieving and evaluating interpretability in any machine learning-based regression or classification problem.
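For the forecasting setting described above, one common way to make LIME applicable is to frame lagged values as tabular features and explain the regressor in regression mode, roughly as in the sketch below; the forecaster object, the lag count, and the weekly sales series are illustrative assumptions rather than the thesis setup.

import numpy as np
from lime.lime_tabular import LimeTabularExplainer

N_LAGS = 12

def make_supervised(series, n_lags=N_LAGS):
    # Turn a 1-D series into a lag matrix and a target vector.
    X, y = [], []
    for t in range(n_lags, len(series)):
        X.append(series[t - n_lags:t])
        y.append(series[t])
    return np.array(X), np.array(y)

X, y = make_supervised(sales_deals_per_week)   # hypothetical weekly series
forecaster.fit(X, y)                           # any sklearn-style regressor

explainer = LimeTabularExplainer(
    X,
    feature_names=[f"lag_{i}" for i in range(N_LAGS, 0, -1)],
    mode="regression",
)
exp = explainer.explain_instance(X[-1], forecaster.predict, num_features=5)
print(exp.as_list())   # which recent weeks pushed the forecast up or down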

Book chapters on the topic "Local Interpretable Model-Agnostic Explanations (LIME)":

1

Recio-García, Juan A., Belén Díaz-Agudo, and Victor Pino-Castilla. "CBR-LIME: A Case-Based Reasoning Approach to Provide Specific Local Interpretable Model-Agnostic Explanations." In Case-Based Reasoning Research and Development, 179–94. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58342-2_12.

2

Davagdorj, Khishigsuren, Meijing Li, and Keun Ho Ryu. "Local Interpretable Model-Agnostic Explanations of Predictive Models for Hypertension." In Advances in Intelligent Information Hiding and Multimedia Signal Processing, 426–33. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-33-6757-9_53.

3

Thanh-Hai, Nguyen, Toan Bao Tran, An Cong Tran, and Nguyen Thai-Nghe. "Feature Selection Using Local Interpretable Model-Agnostic Explanations on Metagenomic Data." In Future Data and Security Engineering. Big Data, Security and Privacy, Smart City and Industry 4.0 Applications, 340–57. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-33-4370-2_24.

4

Graziani, Mara, Iam Palatnik de Sousa, Marley M. B. R. Vellasco, Eduardo Costa da Silva, Henning Müller, and Vincent Andrearczyk. "Sharpening Local Interpretable Model-Agnostic Explanations for Histopathology: Improved Understandability and Reliability." In Medical Image Computing and Computer Assisted Intervention – MICCAI 2021, 540–49. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-87199-4_51.

5

Lampridis, Orestis, Riccardo Guidotti, and Salvatore Ruggieri. "Explaining Sentiment Classification with Synthetic Exemplars and Counter-Exemplars." In Discovery Science, 357–73. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-61527-7_24.

Abstract:
We present xspells, a model-agnostic local approach for explaining the decisions of a black box model for sentiment classification of short texts. The explanations provided consist of a set of exemplar sentences and a set of counter-exemplar sentences. The former are examples classified by the black box with the same label as the text to explain. The latter are examples classified with a different label (a form of counter-factuals). Both are close in meaning to the text to explain, and both are meaningful sentences, albeit synthetically generated. xspells generates neighbors of the text to explain in a latent space, using variational autoencoders for encoding text and decoding latent instances. A decision tree is learned from randomly generated neighbors and used to drive the selection of the exemplars and counter-exemplars. We report experiments on two datasets showing that xspells outperforms the well-known LIME method in terms of quality of explanations, fidelity, and usefulness, and that it is comparable to it in terms of stability.
6

Biecek, Przemyslaw, and Tomasz Burzykowski. "Local Interpretable Model-agnostic Explanations (LIME)." In Explanatory Model Analysis, 107–23. Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9780429027192-11.


Conference papers on the topic "Local Interpretable Model-Agnostic Explanations (LIME)":

1

Sousa, Iam, Marley Vellasco, and Eduardo Silva. "Classificações Explicáveis para Imagens de Células Infectadas por Malária." In Encontro Nacional de Inteligência Artificial e Computacional. Sociedade Brasileira de Computação - SBC, 2020. http://dx.doi.org/10.5753/eniac.2020.12116.

Abstract:
This work presents the development of an explainable image classifier trained to determine whether a cell has been infected by malaria. The classifier is a residual neural network with a classification accuracy of 96%, trained on the National Health Institute malaria dataset. Explainable Artificial Intelligence techniques were applied to make the classifications more interpretable. The explanations are generated using two methodologies: Local Interpretable Model-Agnostic Explanations (LIME) and SquareGrid. The explanations provide new and important insights into the decision patterns of high-performing models such as this one for medical tasks.
2

Yao, Chen, Xi Yueyun, Chen Jinwei, and Zhang Huisheng. "A Novel Gas Path Fault Diagnostic Model for Gas Turbine Based on Explainable Convolutional Neural Network With LIME Method." In ASME Turbo Expo 2021: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2021. http://dx.doi.org/10.1115/gt2021-59289.

Abstract:
Gas turbines are widely used in the aviation and energy industries, and gas path fault diagnosis is an important task for gas turbine operation and maintenance. With the development of information technology, especially deep learning methods, data-driven approaches for gas path diagnosis have developed rapidly in recent years. However, the mechanisms of most data-driven models are difficult to explain, resulting in a lack of credibility for data-driven methods. In this paper, a novel explainable data-driven model for gas path fault diagnosis based on a Convolutional Neural Network (CNN) using the Local Interpretable Model-agnostic Explanations (LIME) method is proposed. The input matrix of the CNN model is established by considering mechanism information about gas turbine fault modes and their effects: the relationships between the measurement parameters and fault modes are used to arrange their relative positions in the input matrix. The key parameters that contribute to fault recognition can be identified with the LIME method, and the mechanism information is used to verify the fault diagnosis procedure and improve the sensor matrix arrangement. A double-shaft gas turbine model is used to generate healthy and faulty data, including 12 typical faults, to test the model. The accuracy and interpretability of the CNN diagnosis model built with prior mechanism knowledge and the one built from a parameter correlation matrix are compared; their accuracies are 96.34% and 89.46%, respectively. The results indicate that the CNN diagnosis model built with prior mechanism knowledge shows better accuracy and interpretability. This method can express the relevance of a failure mode and its highly correlated measurement parameters in the model, which greatly improves interpretability and application value.
3

Barr Kumarakulasinghe, Nesaretnam, Tobias Blomberg, Jintai Liu, Alexandra Saraiva Leao, and Panagiotis Papapetrou. "Evaluating Local Interpretable Model-Agnostic Explanations on Clinical Machine Learning Classification Models." In 2020 IEEE 33rd International Symposium on Computer-Based Medical Systems (CBMS). IEEE, 2020. http://dx.doi.org/10.1109/cbms49503.2020.00009.

4

Wang, Bin, Wenbin Pei, Bing Xue, and Mengjie Zhang. "Evolving local interpretable model-agnostic explanations for deep neural networks in image classification." In GECCO '21: Genetic and Evolutionary Computation Conference. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3449726.3459452.

5

Freitas da Cruz, Harry, Frederic Schneider, and Matthieu-P. Schapranow. "Prediction of Acute Kidney Injury in Cardiac Surgery Patients: Interpretation using Local Interpretable Model-agnostic Explanations." In 12th International Conference on Health Informatics. SCITEPRESS - Science and Technology Publications, 2019. http://dx.doi.org/10.5220/0007399203800387.

6

Demajo, Lara Marie, Vince Vella, and Alexiei Dingli. "Explainable AI for Interpretable Credit Scoring." In 10th International Conference on Advances in Computing and Information Technology (ACITY 2020). AIRCC Publishing Corporation, 2020. http://dx.doi.org/10.5121/csit.2020.101516.

Abstract:
With the ever-growing achievements in Artificial Intelligence (AI) and the recent boosted enthusiasm in Financial Technology (FinTech), applications such as credit scoring have gained substantial academic interest. Credit scoring helps financial experts make better decisions regarding whether or not to accept a loan application, such that loans with a high probability of default are not accepted. Apart from the noisy and highly imbalanced data challenges faced by such credit scoring models, recent regulations such as the 'right to explanation' introduced by the General Data Protection Regulation (GDPR) and the Equal Credit Opportunity Act (ECOA) have added the need for model interpretability to ensure that algorithmic decisions are understandable and coherent. An interesting concept that has been recently introduced is eXplainable AI (XAI), which focuses on making black-box models more interpretable. In this work, we present a credit scoring model that is both accurate and interpretable. For classification, state-of-the-art performance on the Home Equity Line of Credit (HELOC) and Lending Club (LC) datasets is achieved using the Extreme Gradient Boosting (XGBoost) model. The model is then further enhanced with a 360-degree explanation framework, which provides the different explanations (i.e., global, local feature-based, and local instance-based) that are required by different people in different situations. Evaluation through functionally-grounded, application-grounded, and human-grounded analysis shows that the explanations provided are simple and consistent, and satisfy the six predetermined hypotheses testing for correctness, effectiveness, easy understanding, detail sufficiency, and trustworthiness.
