A selection of scholarly literature on the topic "Explainable intelligence models"

Format a source in APA, MLA, Chicago, Harvard, and other citation styles


Browse the lists of current articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Explainable intelligence models".

Next to every work in the reference list you will find an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication as a .pdf file and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Explainable intelligence models":

1

Abdelmonem, Ahmed, and Nehal N. Mostafa. "Interpretable Machine Learning Fusion and Data Analytics Models for Anomaly Detection." Fusion: Practice and Applications 3, no. 1 (2021): 54–69. http://dx.doi.org/10.54216/fpa.030104.

Abstract:
Explainable artificial intelligence received great research attention in the past few years during the widespread of Black-Box techniques in sensitive fields such as medical care, self-driving cars, etc. Artificial intelligence needs explainable methods to discover model biases. Explainable artificial intelligence will lead to obtaining fairness and Transparency in the model. Making artificial intelligence models explainable and interpretable is challenging when implementing black-box models. Because of the inherent limitations of collecting data in its raw form, data fusion has become a popular method for dealing with such data and acquiring more trustworthy, helpful, and precise insights. Compared to other, more traditional-based data fusion methods, machine learning's capacity to automatically learn from experience with nonexplicit programming significantly improves fusion's computational and predictive power. This paper comprehensively studies the most explainable artificial intelligent methods based on anomaly detection. We proposed the required criteria of the transparency model to measure the data fusion analytics techniques. Also, define the different used evaluation metrics in explainable artificial intelligence. We provide some applications for explainable artificial intelligence. We provide a case study of anomaly detection with the fusion of machine learning. Finally, we discuss the key challenges and future directions in explainable artificial intelligence.
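To give a concrete flavour of the anomaly-detection-with-explanation setting this survey covers, here is a minimal sketch (not code from the paper): an Isolation Forest flags anomalies in tabular sensor data, and a crude model-agnostic attribution reports which features of a flagged point deviate most from the training medians. All data and variable names are invented for illustration.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(500, 4))            # normal operating data (hypothetical)
    X_new = np.vstack([rng.normal(size=(5, 4)),    # a few normal points
                       [[6.0, 0.1, -0.2, 0.3]]])   # one injected anomaly in feature 0

    model = IsolationForest(random_state=0).fit(X_train)
    flags = model.predict(X_new)                   # -1 marks an anomaly

    # Crude, model-agnostic attribution: how far each feature of a flagged point
    # lies from the training median, in robust (MAD-scaled) units.
    med = np.median(X_train, axis=0)
    mad = np.median(np.abs(X_train - med), axis=0) + 1e-9
    for x, f in zip(X_new, flags):
        if f == -1:
            deviation = np.abs(x - med) / mad
            top = np.argsort(deviation)[::-1][:2]
            print("anomaly; most deviating features:", top, deviation[top].round(1))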
2

Zednik, Carlos, and Hannes Boelsen. "Scientific Exploration and Explainable Artificial Intelligence." Minds and Machines 32, no. 1 (March 2022): 219–39. http://dx.doi.org/10.1007/s11023-021-09583-6.

Abstract:
Models developed using machine learning are increasingly prevalent in scientific research. At the same time, these models are notoriously opaque. Explainable AI aims to mitigate the impact of opacity by rendering opaque models transparent. More than being just the solution to a problem, however, Explainable AI can also play an invaluable role in scientific exploration. This paper describes how post-hoc analytic techniques from Explainable AI can be used to refine target phenomena in medical science, to identify starting points for future investigations of (potentially) causal relationships, and to generate possible explanations of target phenomena in cognitive science. In this way, this paper describes how Explainable AI—over and above machine learning itself—contributes to the efficiency and scope of data-driven scientific research.
3

Raikov, Alexander N. "Subjectivity of Explainable Artificial Intelligence." Russian Journal of Philosophical Sciences 65, no. 1 (June 25, 2022): 72–90. http://dx.doi.org/10.30727/0235-1188-2022-65-1-72-90.

Abstract:
The article addresses the problem of identifying methods to develop the ability of artificial intelligence (AI) systems to provide explanations for their findings. This issue is not new, but, nowadays, the increasing complexity of AI systems is forcing scientists to intensify research in this direction. Modern neural networks contain hundreds of layers of neurons. The number of parameters of these networks reaches trillions, genetic algorithms generate thousands of generations of solutions, and the semantics of AI models become more complicated, going to the quantum and non-local levels. The world’s leading companies are investing heavily in creating explainable AI (XAI). However, the result is still unsatisfactory: a person often cannot understand the “explanations” of AI because the latter makes decisions differently than a person, and perhaps because a good explanation is impossible within the framework of the classical AI paradigm. AI faced a similar problem 40 years ago when expert systems contained only a few hundred logical production rules. The problem was then solved by complicating the logic and building added knowledge bases to explain the conclusions given by AI. At present, other approaches are needed, primarily those that consider the external environment and the subjectivity of AI systems. This work focuses on solving this problem by immersing AI models in the social and economic environment, building ontologies of this environment, taking into account a user profile and creating conditions for purposeful convergence of AI solutions and conclusions to user-friendly goals.
4

Althoff, Daniel, Helizani Couto Bazame, and Jessica Garcia Nascimento. "Untangling hybrid hydrological models with explainable artificial intelligence." H2Open Journal 4, no. 1 (January 1, 2021): 13–28. http://dx.doi.org/10.2166/h2oj.2021.066.

Abstract:
Hydrological models are valuable tools for developing streamflow predictions in unmonitored catchments to increase our understanding of hydrological processes. A recent effort has been made in the development of hybrid (conceptual/machine learning) models that can preserve some of the hydrological processes represented by conceptual models and can improve streamflow predictions. However, these studies have not explored how the data-driven component of hybrid models resolved runoff routing. In this study, explainable artificial intelligence (XAI) techniques are used to turn a ‘black-box’ model into a ‘glass box’ model. The hybrid models reduced the root-mean-square error of the simulated streamflow values by approximately 27, 50, and 24% for stations 17120000, 27380000, and 33680000, respectively, relative to the traditional method. XAI techniques helped unveil the importance of accounting for soil moisture in hydrological models. Differing from purely data-driven hydrological models, the inclusion of the production storage in the proposed hybrid model, which is responsible for estimating the water balance, reduced the short- and long-term dependencies of input variables for streamflow prediction. In addition, soil moisture controlled water percolation, which was the main predictor of streamflow. This finding is because soil moisture controls the underlying mechanisms of groundwater flow into river streams.
5

Gunning, David, and David Aha. "DARPA’s Explainable Artificial Intelligence (XAI) Program." AI Magazine 40, no. 2 (June 24, 2019): 44–58. http://dx.doi.org/10.1609/aimag.v40i2.2850.

Abstract:
Dramatic success in machine learning has led to a new wave of AI applications (for example, transportation, security, medicine, finance, defense) that offer tremendous benefits but cannot explain their decisions and actions to human users. DARPA’s explainable artificial intelligence (XAI) program endeavors to create AI systems whose learned models and decisions can be understood and appropriately trusted by end users. Realizing this goal requires methods for learning more explainable models, designing effective explanation interfaces, and understanding the psychologic requirements for effective explanations. The XAI developer teams are addressing the first two challenges by creating ML techniques and developing principles, strategies, and human-computer interaction techniques for generating effective explanations. Another XAI team is addressing the third challenge by summarizing, extending, and applying psychologic theories of explanation to help the XAI evaluator define a suitable evaluation framework, which the developer teams will use to test their systems. The XAI teams completed the first phase of this 4-year program in May 2018. In a series of ongoing evaluations, the developer teams are assessing how well their XAI systems’ explanations improve user understanding, user trust, and user task performance.
6

Owens, Emer, Barry Sheehan, Martin Mullins, Martin Cunneen, Juliane Ressel, and German Castignani. "Explainable Artificial Intelligence (XAI) in Insurance." Risks 10, no. 12 (December 1, 2022): 230. http://dx.doi.org/10.3390/risks10120230.

Abstract:
Explainable Artificial Intelligence (XAI) models allow for a more transparent and understandable relationship between humans and machines. The insurance industry represents a fundamental opportunity to demonstrate the potential of XAI, with the industry’s vast stores of sensitive data on policyholders and centrality in societal progress and innovation. This paper analyses current Artificial Intelligence (AI) applications in insurance industry practices and insurance research to assess their degree of explainability. Using search terms representative of (X)AI applications in insurance, 419 original research articles were screened from IEEE Xplore, ACM Digital Library, Scopus, Web of Science and Business Source Complete and EconLit. The resulting 103 articles (between the years 2000–2021) representing the current state-of-the-art of XAI in insurance literature are analysed and classified, highlighting the prevalence of XAI methods at the various stages of the insurance value chain. The study finds that XAI methods are particularly prevalent in claims management, underwriting and actuarial pricing practices. Simplification methods, called knowledge distillation and rule extraction, are identified as the primary XAI technique used within the insurance value chain. This is important as the combination of large models to create a smaller, more manageable model with distinct association rules aids in building XAI models which are regularly understandable. XAI is an important evolution of AI to ensure trust, transparency and moral values are embedded within the system’s ecosystem. The assessment of these XAI foci in the context of the insurance industry proves a worthwhile exploration into the unique advantages of XAI, highlighting to industry professionals, regulators and XAI developers where particular focus should be directed in the further development of XAI. This is the first study to analyse XAI’s current applications within the insurance industry, while simultaneously contributing to the interdisciplinary understanding of applied XAI. Advancing the literature on adequate XAI definitions, the authors propose an adapted definition of XAI informed by the systematic review of XAI literature in insurance.
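The simplification methods the study identifies as dominant, knowledge distillation and rule extraction, boil down to fitting a small readable model to a black box's predictions. A minimal sketch of that idea, with synthetic data standing in for claims records and no connection to the surveyed systems:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Hypothetical tabular data standing in for, e.g., claims records.
    X, y = make_classification(n_samples=2000, n_features=6, random_state=0)

    black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

    # Knowledge distillation: train a small, readable surrogate on the
    # black-box predictions rather than on the original labels.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X, black_box.predict(X))

    print("surrogate fidelity:", surrogate.score(X, black_box.predict(X)))
    print(export_text(surrogate, feature_names=[f"x{i}" for i in range(6)]))

The fidelity score printed at the end measures how closely the surrogate's rules reproduce the black-box decisions, the usual sanity check before extracted rules are shown to an underwriter or regulator.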
7

Lorente, Maria Paz Sesmero, Elena Magán Lopez, Laura Alvarez Florez, Agapito Ledezma Espino, José Antonio Iglesias Martínez, and Araceli Sanchis de Miguel. "Explaining Deep Learning-Based Driver Models." Applied Sciences 11, no. 8 (April 7, 2021): 3321. http://dx.doi.org/10.3390/app11083321.

Abstract:
Different systems based on Artificial Intelligence (AI) techniques are currently used in relevant areas such as healthcare, cybersecurity, natural language processing, and self-driving cars. However, many of these systems are developed with “black box” AI, which makes it difficult to explain how they work. For this reason, explainability and interpretability are key factors that need to be taken into consideration in the development of AI systems in critical areas. In addition, different contexts produce different explainability needs which must be met. Against this background, Explainable Artificial Intelligence (XAI) appears to be able to address and solve this situation. In the field of automated driving, XAI is particularly needed because the level of automation is constantly increasing according to the development of AI techniques. For this reason, the field of XAI in the context of automated driving is of particular interest. In this paper, we propose the use of an explainable intelligence technique in the understanding of some of the tasks involved in the development of advanced driver-assistance systems (ADAS). Since ADAS assist drivers in driving functions, it is essential to know the reason for the decisions taken. In addition, trusted AI is the cornerstone of the confidence needed in this research area. Thus, due to the complexity and the different variables that are part of the decision-making process, this paper focuses on two specific tasks in this area: the detection of emotions and the distractions of drivers. The results obtained are promising and show the capacity of the explainable artificial techniques in the different tasks of the proposed environments.
8

Letzgus, Simon, Patrick Wagner, Jonas Lederer, Wojciech Samek, Klaus-Robert Müller, and Grégoire Montavon. "Toward Explainable Artificial Intelligence for Regression Models: A methodological perspective." IEEE Signal Processing Magazine 39, no. 4 (July 2022): 40–58. http://dx.doi.org/10.1109/msp.2022.3153277.

9

Han, Juhee, and Younghoon Lee. "Explainable Artificial Intelligence-Based Competitive Factor Identification." ACM Transactions on Knowledge Discovery from Data 16, no. 1 (July 3, 2021): 1–11. http://dx.doi.org/10.1145/3451529.

Abstract:
Competitor analysis is an essential component of corporate strategy, providing both offensive and defensive strategic contexts to identify opportunities and threats. The rapid development of social media has recently led to several methodologies and frameworks facilitating competitor analysis through online reviews. Existing studies only focused on detecting comparative sentences in review comments or utilized low-performance models. However, this study proposes a novel approach to identifying the competitive factors using a recent explainable artificial intelligence approach at the comprehensive product feature level. We establish a model to classify the review comments for each corresponding product and evaluate the relevance of each keyword in such comments during the classification process. We then extract and prioritize the keywords and determine their competitiveness based on relevance. Our experiment results show that the proposed method can effectively extract the competitive factors both qualitatively and quantitatively.
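A toy sketch of the general recipe described above: classify review texts by product and rank the vocabulary that drives the classification as candidate competitive factors. It is not the authors' pipeline; the four-review corpus and labels are invented.

    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    reviews = ["battery lasts long but camera is blurry",
               "great camera, screen is sharp",
               "battery drains fast, charging is slow",
               "sharp screen and fast charging"]
    product = [0, 1, 0, 1]       # which product each review discusses (hypothetical)

    vec = TfidfVectorizer()
    X = vec.fit_transform(reviews)
    clf = LogisticRegression().fit(X, product)

    # Rank keywords by how strongly they pull a review toward product 1 vs product 0;
    # strongly signed terms are candidate competitive factors.
    terms = np.array(vec.get_feature_names_out())
    order = np.argsort(clf.coef_[0])
    print("speaks for product 0:", terms[order[:3]])
    print("speaks for product 1:", terms[order[-3:]])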
10

Patil, Shruti, Vijayakumar Varadarajan, Siddiqui Mohd Mazhar, Abdulwodood Sahibzada, Nihal Ahmed, Onkar Sinha, Satish Kumar, Kailash Shaw, and Ketan Kotecha. "Explainable Artificial Intelligence for Intrusion Detection System." Electronics 11, no. 19 (September 27, 2022): 3079. http://dx.doi.org/10.3390/electronics11193079.

Abstract:
Intrusion detection systems are widely utilized in the cyber security field, to prevent and mitigate threats. Intrusion detection systems (IDS) help to keep threats and vulnerabilities out of computer networks. To develop effective intrusion detection systems, a range of machine learning methods are available. Machine learning ensemble methods have a well-proven track record when it comes to learning. Using ensemble methods of machine learning, this paper proposes an innovative intrusion detection system. To improve classification accuracy and eliminate false positives, features from the CICIDS-2017 dataset were chosen. This paper proposes an intrusion detection system using machine learning algorithms such as decision trees, random forests, and SVM (IDS). After training these models, an ensemble technique voting classifier was added and achieved an accuracy of 96.25%. Furthermore, the proposed model also incorporates the XAI algorithm LIME for better explainability and understanding of the black-box approach to reliable intrusion detection. Our experimental results confirmed that XAI LIME is more explanation-friendly and more responsive.
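A condensed sketch of the kind of ensemble-plus-LIME setup this abstract describes, with synthetic flows standing in for CICIDS-2017 features; it assumes the lime package is installed, and all identifiers are illustrative.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier, VotingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier
    from lime.lime_tabular import LimeTabularExplainer

    # Synthetic stand-in for a flow-level intrusion dataset (benign=0, attack=1).
    X, y = make_classification(n_samples=3000, n_features=10, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    ensemble = VotingClassifier(
        estimators=[("dt", DecisionTreeClassifier(random_state=0)),
                    ("rf", RandomForestClassifier(random_state=0)),
                    ("svm", SVC(probability=True, random_state=0))],
        voting="soft").fit(X_tr, y_tr)
    print("accuracy:", ensemble.score(X_te, y_te))

    # LIME explanation for a single flagged flow.
    explainer = LimeTabularExplainer(X_tr,
                                     feature_names=[f"f{i}" for i in range(10)],
                                     class_names=["benign", "attack"],
                                     mode="classification")
    exp = explainer.explain_instance(X_te[0], ensemble.predict_proba, num_features=5)
    print(exp.as_list())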

Dissertations on the topic "Explainable intelligence models":

1

Palmisano, Enzo Pio. "A First Study of Transferable and Explainable Deep Learning Models for HPC Systems." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20099/.

Abstract:
The work described in this thesis is based on the study of Deep Learning models applied to anomaly detection, for identifying anomalous states in HPC (High-Performance Computing) systems. In particular, the goal is to study the transferability of a model and to explain the results it produces through Explainable Artificial Intelligence techniques. HPC systems are equipped with numerous sensors capable of monitoring their correct operation in real time. However, because of the high degree of complexity of these systems, innovative techniques are needed that can predict failures, errors, and any kind of anomaly, in order to reduce maintenance costs and keep the service always available. Over the years there have been numerous studies on building Deep Learning models for this purpose, but what these models lack is the ability to generalize to conditions different from those encountered during training. In the first part of this work we therefore analyse an already developed model to study its transferability and its generalization to an application broader than the domain on which it was built. A further problem lies in how a model responds to a given input: very often the produced answer is incomprehensible even to those who built the model. Therefore, using Explainable Artificial Intelligence techniques, the various outputs of the model are studied and analysed in order to understand how it works and to account for both correct and incorrect results. The HPC system analysed is called MARCONI and is owned by the non-profit consortium Cineca in Bologna. Cineca comprises 70 Italian universities, four national research centres, and the Ministry of University and Research (MIUR). Cineca is one of the most powerful supercomputing centres for scientific research in Italy.
2

Costa, Bueno Vicente. "Fuzzy Horn clauses in artificial intelligence: a study of free models, and applications in art painting style categorization." Doctoral thesis, Universitat Autònoma de Barcelona, 2021. http://hdl.handle.net/10803/673374.

Abstract:
This PhD thesis contributes to the systematic study of Horn clauses of predicate fuzzy logics and their use in knowledge representation for the design of an art painting style classification algorithm. We first focus the study on relevant notions in logic programming, such as free models and Herbrand structures in mathematical fuzzy logic. We show the existence of free models in fuzzy universal Horn classes, and we prove that every equality-free consistent universal Horn fuzzy theory has a Herbrand model. Two notions of minimality of free models are introduced, and we show that these notions are equivalent in the case of fully named structures. Then, we use Horn clauses combined with qualitative modeling as a fuzzy knowledge representation framework for art painting style categorization. Finally, we design a style painting classifier based on evaluated Horn clauses, qualitative color descriptors, and explanations. This algorithm, called l-SHE, provides reasons for the obtained results and obtains percentages of accuracy in the experimentation that are competitive.
Universitat Autònoma de Barcelona. Programa de Doctorat en Ciència Cognitiva i Llenguatge
3

Rouget, Thierry. "Learning explainable concepts in the presence of a qualitative model." Thesis, University of Ottawa (Canada), 1995. http://hdl.handle.net/10393/9762.

Abstract:
This thesis addresses the problem of learning concept descriptions that are interpretable, or explainable. Explainability is understood as the ability to justify the learned concept in terms of the existing background knowledge. The starting point for the work was an existing system that would induce only fully explainable rules. The system performed well when the model used during induction was complete and correct. In practice, however, models are likely to be imperfect, i.e. incomplete and incorrect. We report here a new approach that achieves explainability with imperfect models. The basis of the system is the standard inductive search driven by an accuracy-oriented heuristic, biased towards rule explainability. The bias is abandoned when there is heuristic evidence that a significant loss of accuracy results from constraining the search to explainable rules only. The users can express their relative preference for accuracy vs. explainability. Experiments with the system indicate that, even with a partially incomplete and/or incorrect model, insisting on explainability results in only a small loss of accuracy. We also show how the new approach described can repair a faulty model using evidence derived from data during induction.
4

Giuliani, Luca. "Extending the Moving Targets Method for Injecting Constraints in Machine Learning." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23885/.

Abstract:
Informed Machine Learning is an umbrella term that comprises a set of methodologies in which domain knowledge is injected into a data-driven system in order to improve its level of accuracy, satisfy some external constraint, and in general serve the purposes of explainability and reliability. The said topic has been widely explored in the literature by means of many different techniques. Moving Targets is one such technique particularly focused on constraint satisfaction: it is based on decomposition and bi-level optimization and proceeds by iteratively refining the target labels through a master step which is in charge of enforcing the constraints, while the training phase is delegated to a learner. In this work, we extend the algorithm in order to deal with semi-supervised learning and soft constraints. In particular, we focus our empirical evaluation on both regression and classification tasks involving monotonicity shape constraints. We demonstrate that our method is robust with respect to its hyperparameters, as well as being able to generalize very well while reducing the number of violations on the enforced constraints. Additionally, the method can even outperform, both in terms of accuracy and constraint satisfaction, other state-of-the-art techniques such as Lattice Models and Semantic-based Regularization with a Lagrangian Dual approach for automatic hyperparameter tuning.
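The alternation described above (a learner fits the current targets, and a master step pushes the targets toward constraint satisfaction) can be caricatured for a monotonicity constraint as follows. This is a schematic reconstruction rather than the thesis' algorithm: the master step is approximated by an isotonic projection of the current predictions, and every name and constant is made up.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.isotonic import IsotonicRegression

    rng = np.random.default_rng(0)
    x = np.sort(rng.uniform(0, 1, 300))
    y = x + 0.3 * np.sin(8 * x) + rng.normal(0, 0.1, 300)   # noisy, locally non-monotone
    X = x.reshape(-1, 1)

    targets = y.copy()
    learner = GradientBoostingRegressor(random_state=0)
    for _ in range(5):
        # Learner step: fit the current (adjusted) targets.
        learner.fit(X, targets)
        pred = learner.predict(X)
        # Master step: project predictions onto the feasible (monotone) set
        # and move the targets toward that projection.
        feasible = IsotonicRegression().fit_transform(x, pred)
        targets = 0.5 * (y + feasible)

    violations = int(np.sum(np.diff(learner.predict(X)) < -1e-6))
    print("remaining monotonicity violations on the training grid:", violations)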
5

Saluja, Rohit. "Interpreting Multivariate Time Series for an Organization Health Platform." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-289465.

Abstract:
Machine learning-based systems are rapidly becoming popular because it has been realized that machines are more efficient and effective than humans at performing certain tasks. Although machine learning algorithms are extremely popular, they are also very literal and undeviating. This has led to a huge research surge in the field of interpretability in machine learning to ensure that machine learning models are reliable, fair, and can be held liable for their decision-making process. Moreover, in most real-world problems just making predictions using machine learning algorithms only solves the problem partially. Time series is one of the most popular and important data types because of its dominant presence in the fields of business, economics, and engineering. Despite this, interpretability in time series is still relatively unexplored as compared to tabular, text, and image data. With the growing research in the field of interpretability in machine learning, there is also a pressing need to be able to quantify the quality of explanations produced after interpreting machine learning models. Due to this reason, evaluation of interpretability is extremely important. The evaluation of interpretability for models built on time series seems completely unexplored in research circles. This thesis work focused on achieving and evaluating model agnostic interpretability in a time series forecasting problem.  The use case discussed in this thesis work focused on finding a solution to a problem faced by a digital consultancy company. The digital consultancy wants to take a data-driven approach to understand the effect of various sales related activities in the company on the sales deals closed by the company. The solution involved framing the problem as a time series forecasting problem to predict the sales deals and interpreting the underlying forecasting model. The interpretability was achieved using two novel model agnostic interpretability techniques, Local interpretable model- agnostic explanations (LIME) and Shapley additive explanations (SHAP). The explanations produced after achieving interpretability were evaluated using human evaluation of interpretability. The results of the human evaluation studies clearly indicate that the explanations produced by LIME and SHAP greatly helped lay humans in understanding the predictions made by the machine learning model. The human evaluation study results also indicated that LIME and SHAP explanations were almost equally understandable with LIME performing better but with a very small margin. The work done during this project can easily be extended to any time series forecasting or classification scenario for achieving and evaluating interpretability. Furthermore, this work can offer a very good framework for achieving and evaluating interpretability in any machine learning-based regression or classification problem.
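A minimal illustration of the post-hoc setup evaluated in the thesis, explaining a forecasting model through attributions over lagged inputs, assuming the shap package; the series and the lag construction below are invented for the example rather than taken from the consultancy's data.

    import numpy as np
    import shap
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    series = np.cumsum(rng.normal(size=400))          # synthetic "sales" series

    # Build a supervised frame from the last 5 observations (lag features).
    lags = 5
    X = np.array([series[i - lags:i] for i in range(lags, len(series))])
    y = series[lags:]

    model = RandomForestRegressor(random_state=0).fit(X, y)

    # TreeExplainer gives per-lag attributions for each forecast.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X[:50])
    print("mean |SHAP| per lag:", np.abs(shap_values).mean(axis=0).round(3))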
6

AfzaliSeresht, Neda. "Explainable Intelligence for Comprehensive Interpretation of Cybersecurity Data in Incident Management." Thesis, 2022. https://vuir.vu.edu.au/44414/.

Abstract:
On a regular basis, a variety of events take place in computer systems: program launches, firewall updates, user logins, and so on. To secure information resources, modern organisations have established security management systems. In cyber incident management, reporting and awareness-raising are a critical to identify and respond to potential threats in organisations. Security equipment operation systems record ’all’ events or actions, and major abnormalities are signaling via alerts based on rules or patterns. Investigation of these alerts is handled by specialists in the incident response team. Security professionals rely on the information in alert messages to respond appropriately. Incident response teams do not audit or trace the log files until an incident happens. Insufficient information in alert messages, and machine-friendly rather than human-friendly format cause cognitive overload on already limited cybersecurity human resources. As a result, only a smaller number of threat alerts are investigated by specialist staff and security holes may be left open for potential attacks. Furthermore, incident response teams have to derive the context of incidents by applying prior knowledge, communicate with the right people to understand what has happened, and initiate the appropriate actions. Insufficient information in alert messages and stakeholders’ participation raise challenges for the incident management process, which may result in late responses. In other words, cybersecurity resources are overburdened due to a lack of information in alert messages that provide an incomplete picture of a subject (incident) to assist with necessary decision making. The need to identify and track local and global sources in order to process and understand the critical elements of threat information causes cognitive overload on the company’s currently limited cybersecurity professionals. This problem can be overcome with a fully integrated report that clarifies the subject (incident) in order to reduce overall cognitive burden. Instead of spending additional time to investigating each subject of incident, which is dependent on the person’s expertise and the amount of time he has, a detailed report of incident can be utilised as an input of human-analyst. If cyber experts’ cognitive loads can be reduced, their response time efficiency may improves. The relationship between achieving incident management agility through contextual analytical with a comprehensive report and reducing human cognition overload is still being studied. There is currently a research gap in determining the key relationships between explainable Artificial Intelligence (AI) models and other technologies used in security management to gain insight into how explainable contextual analytics can provide distinct response capabilities. When using an explainable AI model for event modelling, research is necessary on how to improve self and shared insight about cyber data by gathering and interpreting security knowledge to reduce cognitive burden on analysts. Due to the fact that the level of cyber security expertise depends on prior knowledge or the results of a thorough report as an input, explainable intelligent models for understanding the inputs have been proposed. By enriching and interpreting security data in a comprehensive humanreadable report, analysts can get a better understanding of the situation and make better decisions. 
Explainable intelligent models are proposed in cyber incident management by interpreting security logs and cybersecurity alerts, and include a model which can be used in fraud detection where a large number of financial transactions necessitates the involvement of a human in the analysis process. In cyber incident management application, a wide and diverse amount of data are digested, and a report in natural language is developed to assist cyber analysts’ understanding of the situation. The proposed model produced easy-to-read reports/stories by presenting supplementary information in a novel narrative framework to communicate the context and root cause of the alert. It has been confirmed that, when compared to baseline reports, a more comprehensive report that answers core questions about the actor (who), riskiness (what), evidence (why), mechanism (how), time (when), and location (where) that support making real-time decisions by providing incident awareness. Furthermore, a common understanding of an incident and its consequences was established through a graph, resulting in Shared Situation Awareness (SSA) capability (the acquisition of cognition through collaboration with others). A knowledge graph, also known as a graph to semantic knowledge, is a data structure that represents various properties and relationships between objects. It has been widely researched and utilised in information processing and organisation. The knowledge graph depicts the various connections between the alert and relevant information from local and global knowledge bases. It interpreted knowledge in a human-readable format to enable more engagement in the cyber incident management. The proposed models are also known as explainable intelligence because they can reduce the cognitive effort required to process a large amount of security data. As a result, self-awareness and shared awareness of what is happening in cybersecurity incidents have been accomplished. The analyses and survey evaluation empirically demonstrated the models’ success in reducing significant overload on expert cognition, bringing more comprehensive information about the incident, and interpreting knowledge in a human-readable format to enable greater participation in cyber incident management. Finally, the intelligent model of knowledge graph is provided for transaction visualisation for fraud detection, an important challenge in security research. As with the same incident management challenges, fraud detection methods need to be more transparent by explaining their results in more detail. Despite the fact that fraudulent practices are always evolving, investigating money laundering based on an explainable AI that uses graph analysis, assist in the comprehension of schemes. A visual representation of the complex interactions that occur in transactions between money sender and money receiver, with explanations of human-readable aspects for easier digestion is provided. The proposed model, which was used in transaction visualisation and fraud detection, was highly regarded by domain experts. The Digital Defense Hackathon in December 2020 demonstrated that the model is adaptable and widely applicable (received first place in the Hackathon competition).
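The who/what/why/how/when/where view of an alert that the thesis builds can be pictured with a few lines of networkx; the alert fields, identifiers, and relation names below are placeholders rather than the thesis' schema.

    import networkx as nx

    alert = {"who": "user: j.doe", "what": "multiple failed logins",
             "why": "possible brute-force attempt", "how": "SSH, port 22",
             "when": "2022-03-01 02:14 UTC", "where": "host: web-01"}

    G = nx.DiGraph()
    G.add_node("alert-4711", kind="alert")
    for relation, value in alert.items():
        G.add_node(value)
        G.add_edge("alert-4711", value, relation=relation)

    # Turn the graph into a short human-readable summary for the analyst.
    parts = [f"{d['relation']}: {v}" for _, v, d in G.out_edges("alert-4711", data=True)]
    print("Alert alert-4711 -> " + "; ".join(parts))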
7

"Foundations of Human-Aware Planning -- A Tale of Three Models." Doctoral diss., 2018. http://hdl.handle.net/2286/R.I.51791.

Abstract:
A critical challenge in the design of AI systems that operate with humans in the loop is to be able to model the intentions and capabilities of the humans, as well as their beliefs and expectations of the AI system itself. This allows the AI system to be "human-aware" -- i.e. the human task model enables it to envisage desired roles of the human in joint action, while the human mental model allows it to anticipate how its own actions are perceived from the point of view of the human. In my research, I explore how these concepts of human-awareness manifest themselves in the scope of planning or sequential decision making with humans in the loop. To this end, I will show (1) how the AI agent can leverage the human task model to generate symbiotic behavior; and (2) how the introduction of the human mental model in the deliberative process of the AI agent allows it to generate explanations for a plan or resort to explicable plans when explanations are not desired. The latter is in addition to traditional notions of human-aware planning which typically use the human task model alone and thus enables a new suite of capabilities of a human-aware AI agent. Finally, I will explore how the AI agent can leverage emerging mixed-reality interfaces to realize effective channels of communication with the human in the loop.
Dissertation/Thesis
Doctoral Dissertation Computer Science 2018
8

Pereira, Filipe Inácio da Costa. "Explainable artificial intelligence - learning decision sets with sat." Master's thesis, 2018. http://hdl.handle.net/10451/34903.

Abstract:
Master's thesis in Informatics Engineering (Interaction and Knowledge), Universidade de Lisboa, Faculdade de Ciências, 2018
Artificial Intelligence is a core research topic with key significance in technological growth. With the increase of data, we have more efficient models that in a few seconds will inform us of their prediction on a given input set. The more complex techniques nowadays with better results are Black Box Models. Unfortunately, these can’t provide an explanation behind their prediction, which is a major drawback for us humans. Explainable Artificial Intelligence, whose objective is to associate explanations with decisions made by autonomous agents, breaks this lack of transparency. This can be done by two approaches, either by creating models that are interpretable by themselves or by creating frameworks that justify and interpret any prediction made by any given model. This thesis describes the implementation of two interpretable models (Decision Sets and Decision Trees) based on Logic Reasoners, either SAT (Satisfiability) or SMT (Satisfiability Modulo Theories) solvers. This work was motivated by an in-depth analysis of past work in the area of Explainable Artificial Intelligence, with the purpose of seeking applications of logic in this domain. The Decision Sets approach focuses on the training data, as does any other model, and encoding the variables and constraints as a CNF (Conjuctive Normal Form) formula which can then be solved by a SAT/SMT oracle. This approach focuses on minimizing the number of rules (or Disjunctive Normal Forms) for each binary class representation and avoiding overlap, whether it is training sample or feature-space overlap, while maintaining interpretable explanations and perfect accuracy. The Decision Tree model studied in this work consists in computing a minimum size decision tree, which would represent a 100% accurate classifier given a set of training samples. The model is based on encoding the problem as a CNF formula, which can be tackled with the efficient use of a SAT oracle.
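To give a flavour of the SAT-based formulation, here is a toy encoding written for this page (far simpler than the thesis' model, and assuming the python-sat package): select a subset of candidate rules so that every positive sample is covered while no rule that fires on a negative sample is chosen.

    from pysat.solvers import Glucose3

    # Toy data: which candidate rules fire on which samples (hypothetical).
    rules_on_pos = {1: [1, 2], 2: [2, 3], 3: [1, 3]}   # rule id -> positive samples it covers
    rules_on_neg = {2: [1]}                            # rule id -> negative samples it covers
    positives = [1, 2, 3]

    with Glucose3() as solver:
        # Every positive sample must be covered by at least one selected rule.
        for s in positives:
            solver.add_clause([r for r, covered in rules_on_pos.items() if s in covered])
        # A rule that fires on any negative sample must not be selected.
        for r in rules_on_neg:
            solver.add_clause([-r])
        if solver.solve():
            model = solver.get_model()
            print("selected rules:", [r for r in rules_on_pos if r in model])
        else:
            print("no consistent rule subset exists")

Minimizing the number of selected rules, as the thesis does, would add a cardinality constraint on top of this encoding.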
9

Neves, Maria Inês Lourenço das. "Opening the black-box of artificial intelligence predictions on clinical decision support systems." Master's thesis, 2021. http://hdl.handle.net/10362/126699.

Abstract:
Cardiovascular diseases are the leading global death cause. Their treatment and prevention rely on electrocardiogram interpretation, which is dependent on the physician’s variability. Subjectiveness is intrinsic to electrocardiogram interpretation and hence, prone to errors. To assist physicians in making precise and thoughtful decisions, artificial intelligence is being deployed to develop models that can interpret extent datasets and provide accurate decisions. However, the lack of interpretability of most machine learning models stands as one of the drawbacks of their deployment, particularly in the medical domain. Furthermore, most of the currently deployed explainable artificial intelligence methods assume independence between features, which means temporal independence when dealing with time series. The inherent characteristic of time series cannot be ignored as it carries importance for the human decision making process. This dissertation focuses on the explanation of heartbeat classification using several adaptations of state-of-the-art model-agnostic methods, to locally explain time series classification. To address the explanation of time series classifiers, a preliminary conceptual framework is proposed, and the use of the derivative is suggested as a complement to add temporal dependency between samples. The results were validated on an extent public dataset, through the 1-D Jaccard’s index, which consists of the comparison of the subsequences extracted from an interpretable model and the explanation methods used. Secondly, through the performance’s decrease, to evaluate whether the explanation fits the model’s behaviour. To assess models with distinct internal logic, the validation was conducted on a more transparent model and more opaque one in both binary and multiclass situation. The results show the promising use of including the signal’s derivative to introduce temporal dependency between samples in the explanations, for models with simpler internal logic.
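The evaluation idea described above, comparing the time steps highlighted by an explanation against a reference subsequence through a 1-D Jaccard index after complementing the signal with its derivative, can be sketched as follows. The implementation details are guesses for illustration, not the dissertation's code.

    import numpy as np

    def jaccard_1d(idx_a, idx_b):
        """Jaccard index between two sets of highlighted time-step indices."""
        a, b = set(idx_a), set(idx_b)
        return len(a & b) / len(a | b) if a | b else 1.0

    rng = np.random.default_rng(0)
    beat = np.sin(np.linspace(0, 2 * np.pi, 180)) + 0.05 * rng.normal(size=180)

    # Complement the raw signal with its derivative so that explanations can
    # reflect temporal dependency between neighbouring samples.
    features = np.stack([beat, np.gradient(beat)], axis=1)

    # Hypothetical outputs: indices an interpretable model deems relevant vs.
    # indices a post-hoc explainer highlighted.
    reference_idx = np.arange(40, 80)
    explainer_idx = np.arange(50, 95)
    print("1-D Jaccard:", round(jaccard_1d(reference_idx, explainer_idx), 3))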

Books on the topic "Explainable intelligence models":

1

Escalera, Sergio, Isabelle Guyon, Xavier Baró, Umut Güçlü, Hugo Jair Escalante, Yağmur Güçlütürk, and Marcel van Gerven. Explainable and Interpretable Models in Computer Vision and Machine Learning. Springer, 2019.

2

Mishra, Pradeepta. Practical Explainable AI Using Python: Artificial Intelligence Model Explanations Using Python-Based Libraries, Extensions, and Frameworks. Apress L. P., 2021.


Book chapters on the topic "Explainable intelligence models":

1

Holzinger, Andreas, Randy Goebel, Ruth Fong, Taesup Moon, Klaus-Robert Müller, and Wojciech Samek. "xxAI - Beyond Explainable Artificial Intelligence." In xxAI - Beyond Explainable AI, 3–10. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_1.

Abstract:
The success of statistical machine learning from big data, especially of deep learning, has made artificial intelligence (AI) very popular. Unfortunately, especially with the most successful methods, the results are very difficult to comprehend by human experts. The application of AI in areas that impact human life (e.g., agriculture, climate, forestry, health, etc.) has therefore led to a demand for trust, which can be fostered if the methods can be interpreted and thus explained to humans. The research field of explainable artificial intelligence (XAI) provides the necessary foundations and methods. Historically, XAI has focused on the development of methods to explain the decisions and internal mechanisms of complex AI systems, with much initial research concentrating on explaining how convolutional neural networks produce image classification predictions by producing visualizations which highlight what input patterns are most influential in activating hidden units, or are most responsible for a model’s decision. In this volume, we summarize research that outlines and takes next steps towards a broader vision for explainable AI in moving beyond explaining classifiers via such methods, to include explaining other kinds of models (e.g., unsupervised and reinforcement learning models) via a diverse array of XAI techniques (e.g., question-and-answering systems, structured explanations). In addition, we also intend to move beyond simply providing model explanations to directly improving the transparency, efficiency and generalization ability of models. We hope this volume presents not only exciting research developments in explainable AI but also a guide for what next areas to focus on within this fascinating and highly relevant research field as we enter the second decade of the deep learning revolution. This volume is an outcome of the ICML 2020 workshop on “XXAI: Extending Explainable AI Beyond Deep Models and Classifiers.”
2

Gaur, Loveleen, and Biswa Mohan Sahoo. "Intelligent Transportation System: Modern Business Models." In Explainable Artificial Intelligence for Intelligent Transportation Systems, 67–77. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-09644-0_4.

3

Chennam, Krishna Keerthi, Swapna Mudrakola, V. Uma Maheswari, Rajanikanth Aluvalu, and K. Gangadhara Rao. "Black Box Models for eXplainable Artificial Intelligence." In Explainable AI: Foundations, Methodologies and Applications, 1–24. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-12807-3_1.

4

Banerjee, Puja, and Rajesh P. Barnwal. "Methods and Metrics for Explaining Artificial Intelligence Models: A Review." In Explainable AI: Foundations, Methodologies and Applications, 61–88. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-12807-3_4.

5

Hutchison, Jack, Duc-Son Pham, Sie-Teng Soh, and Huo-Chong Ling. "Explainable Network Intrusion Detection Using External Memory Models." In AI 2022: Advances in Artificial Intelligence, 220–33. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-22695-3_16.

6

Adadi, Amina, and Mohammed Berrada. "Explainable AI for Healthcare: From Black Box to Interpretable Models." In Embedded Systems and Artificial Intelligence, 327–37. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-0947-6_31.

7

Holzinger, Andreas, Anna Saranti, Christoph Molnar, Przemyslaw Biecek, and Wojciech Samek. "Explainable AI Methods - A Brief Overview." In xxAI - Beyond Explainable AI, 13–38. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_2.

Abstract:
Explainable Artificial Intelligence (xAI) is an established field with a vibrant community that has developed a variety of very successful approaches to explain and interpret predictions of complex machine learning models such as deep neural networks. In this article, we briefly introduce a few selected methods and discuss them in a short, clear and concise way. The goal of this article is to give beginners, especially application engineers and data scientists, a quick overview of the state of the art in this current topic. The following 17 methods are covered in this chapter: LIME, Anchors, GraphLIME, LRP, DTD, PDA, TCAV, XGNN, SHAP, ASV, Break-Down, Shapley Flow, Textual Explanations of Visual Models, Integrated Gradients, Causal Models, Meaningful Perturbations, and X-NeSyL.
8

Udenwagu, Nnaemeka E., Ambrose A. Azeta, Sanjay Misra, Vivian O. Nwaocha, Daniel L. Enosegbe, and Mayank Mohan Sharma. "ExplainEx: An Explainable Artificial Intelligence Framework for Interpreting Predictive Models." In Hybrid Intelligent Systems, 505–15. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-73050-5_51.

9

Brini, Iheb, Maroua Mehri, Rolf Ingold, and Najoua Essoukri Ben Amara. "An End-to-End Framework for Evaluating Explainable Deep Models: Application to Historical Document Image Segmentation." In Computational Collective Intelligence, 106–19. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-16014-1_10.

10

Boutorh, Aicha, Hala Rahim, and Yassmine Bendoumia. "Explainable AI Models for COVID-19 Diagnosis Using CT-Scan Images and Clinical Data." In Computational Intelligence Methods for Bioinformatics and Biostatistics, 185–99. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-20837-9_15.


Conference papers on the topic "Explainable intelligence models":

1

Ignatiev, Alexey. "Towards Trustable Explainable AI." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/726.

Abstract:
Explainable artificial intelligence (XAI) represents arguably one of the most crucial challenges being faced by the area of AI these days. Although the majority of approaches to XAI are of heuristic nature, recent work proposed the use of abductive reasoning to computing provably correct explanations for machine learning (ML) predictions. The proposed rigorous approach was shown to be useful not only for computing trustable explanations but also for validating explanations computed heuristically. It was also applied to uncover a close relationship between XAI and verification of ML models. This paper overviews the advances of the rigorous logic-based approach to XAI and argues that it is indispensable if trustable XAI is of concern.
2

Sampat, Shailaja. "Technical, Hard and Explainable Question Answering (THE-QA)." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/916.

Abstract:
The ability of an agent to rationally answer questions about a given task is the key measure of its intelligence. While we have obtained phenomenal performance over various language and vision tasks separately, 'Technical, Hard and Explainable Question Answering' (THE-QA) is a new challenging corpus which addresses them jointly. THE-QA is a question answering task involving diagram understanding and reading comprehension. We plan to establish benchmarks over this new corpus using deep learning models guided by knowledge representation methods. The proposed approach will envisage detailed semantic parsing of technical figures and text, which is robust against diverse formats. It will be aided by knowledge acquisition and reasoning module that categorizes different knowledge types, identify sources to acquire that knowledge and perform reasoning to answer the questions correctly. THE-QA data will present a strong challenge to the community for future research and will bridge the gap between state-of-the-art Artificial Intelligence (AI) and 'Human-level' AI.
3

Daniels, Zachary A., Logan D. Frank, Christopher Menart, Michael Raymer, and Pascal Hitzler. "A framework for explainable deep neural models using external knowledge graphs." In Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, edited by Tien Pham, Latasha Solomon, and Katie Rainey. SPIE, 2020. http://dx.doi.org/10.1117/12.2558083.

4

Song, Haekang, and Sungho Kim. "Explainable artificial intelligence (XAI): How to make image analysis deep learning models transparent." In 2022 22nd International Conference on Control, Automation and Systems (ICCAS). IEEE, 2022. http://dx.doi.org/10.23919/iccas55662.2022.10003813.

5

Nagaraj Rao, Varun, Xingjian Zhen, Karen Hovsepian, and Mingwei Shen. "A First Look: Towards Explainable TextVQA Models via Visual and Textual Explanations." In Proceedings of the Third Workshop on Multimodal Artificial Intelligence. Stroudsburg, PA, USA: Association for Computational Linguistics, 2021. http://dx.doi.org/10.18653/v1/2021.maiworkshop-1.4.

6

Lisboa, Paulo, Sascha Saralajew, Alfredo Vellido, and Thomas Villmann. "The Coming of Age of Interpretable and Explainable Machine Learning Models." In ESANN 2021 - European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning. Louvain-la-Neuve (Belgium): Ciaco - i6doc.com, 2021. http://dx.doi.org/10.14428/esann/2021.es2021-2.

7

Byrne, Ruth M. J. "Counterfactuals in Explainable Artificial Intelligence (XAI): Evidence from Human Reasoning." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/876.

Abstract:
Counterfactuals about what could have happened are increasingly used in an array of Artificial Intelligence (AI) applications, and especially in explainable AI (XAI). Counterfactuals can aid the provision of interpretable models to make the decisions of inscrutable systems intelligible to developers and users. However, not all counterfactuals are equally helpful in assisting human comprehension. Discoveries about the nature of the counterfactuals that humans create are a helpful guide to maximize the effectiveness of counterfactual use in AI.
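In XAI tooling, such counterfactuals are usually produced by searching for a small input change that flips the model's decision. The sketch below is a minimal illustration of that idea on purely hypothetical data with a generic scikit-learn classifier: it perturbs one feature at a time and keeps the smallest change that flips the prediction. Real counterfactual generators add plausibility, sparsity, and actionability constraints on top of this basic search.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def single_feature_counterfactual(model, x, feature_ranges, steps=50):
        """Toy counterfactual search: for each feature, try values closest to the
        original first and report the smallest change that flips the prediction."""
        original = model.predict(x.reshape(1, -1))[0]
        best = None
        for j, (lo, hi) in enumerate(feature_ranges):
            grid = np.linspace(lo, hi, steps)
            grid = grid[np.argsort(np.abs(grid - x[j]))]   # nearest values first
            for value in grid:
                candidate = x.copy()
                candidate[j] = value
                if model.predict(candidate.reshape(1, -1))[0] != original:
                    change = abs(value - x[j])
                    if best is None or change < best[2]:
                        best = (j, float(value), float(change))
                    break   # nearest flip for this feature found
        return best   # (feature index, new value, change magnitude) or None

    # Hypothetical toy data: the decision depends on the sum of two features.
    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, size=(200, 2))
    y = (X[:, 0] + X[:, 1] > 1).astype(int)
    clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
    print(single_feature_counterfactual(clf, X[0], [(0.0, 1.0), (0.0, 1.0)]))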
8

Alibekov, M. R. "Diagnosis of Plant Biotic Stress by Methods of Explainable Artificial Intelligence." In 32nd International Conference on Computer Graphics and Vision. Keldysh Institute of Applied Mathematics, 2022. http://dx.doi.org/10.20948/graphicon-2022-728-739.

Abstract:
Methods for digital image preprocessing that significantly increase the efficiency of ML methods have been studied, along with a number of ML methods and models that serve as a basis for constructing simple and efficient XAI networks for diagnosing plant biotic stresses. An end-to-end solution has been built comprising the following stages: automatic segmentation, feature extraction, and classification by ML models. The best classifiers and feature vectors are selected. The study was carried out on the open PlantVillage Dataset. The single-layer perceptron (SLP) trained on the full vector of 92 features (20 statistical, 72 textural) performed best by the F1-score=93% criterion; training on a PC with an Intel Core i5-8300H CPU took 189 minutes. By the criterion "F1-score / number of features", the SLP trained on 7 principal components, with F1-score=85%, was best; its training time was 29 minutes. The criterion "F1-score / number + interpretability of features" favors the 9 selected features and the random forest model, with F1-score=83%. The research software package is implemented in a modern version of Python using OpenCV and deep learning model libraries, and is suitable for use in precision farming.
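The pipeline described in this abstract (segmentation, hand-crafted statistical and textural features, optional dimensionality reduction, then a shallow classifier) maps naturally onto scikit-learn. The sketch below is a schematic approximation under explicit assumptions: a random 92-dimensional matrix stands in for the OpenCV-extracted PlantVillage features, and the reduced 7-component configuration is paired with a random forest rather than reproducing the authors' exact SLP and feature-selection setup.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import f1_score
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Placeholder for 92-dimensional statistical + textural feature vectors
    # extracted from segmented leaf images (the real ones come from OpenCV).
    rng = np.random.default_rng(42)
    X = rng.normal(size=(1000, 92))
    y = rng.integers(0, 4, size=1000)          # hypothetical disease classes

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

    # Compact, more interpretable variant: 7 principal components + a forest.
    pipeline = make_pipeline(
        StandardScaler(),
        PCA(n_components=7),
        RandomForestClassifier(n_estimators=200, random_state=0),
    )
    pipeline.fit(X_tr, y_tr)
    print("macro F1:", f1_score(y_te, pipeline.predict(X_te), average="macro"))

The abstract's own numbers (F1 = 93% on all 92 features versus 85% on 7 principal components and 83% on 9 interpretable features) illustrate the accuracy-versus-interpretability trade-off that such reduced configurations are meant to expose.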
9

Wang, Yifan, and Guangmo Tong. "Learnability of Competitive Threshold Models." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/553.

Abstract:
Modeling the spread of social contagions is central to various applications in social computing. In this paper, we study the learnability of the competitive threshold model from a theoretical perspective. We demonstrate how competitive threshold models can be seamlessly simulated by artificial neural networks with finite VC dimensions, which enables analytical sample complexity and generalization bounds. Based on the proposed hypothesis space, we design efficient algorithms under the empirical risk minimization scheme. The theoretical insights are finally translated into practical and explainable modeling methods, the effectiveness of which is verified through a sanity check over a few synthetic and real datasets. The experimental results show that our method achieves decent performance without requiring excessive data, outperforming off-the-shelf methods.
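For readers unfamiliar with the model class, a competitive threshold model lets two or more cascades spread over a network, with each inactive node adopting the cascade whose accumulated incoming influence first exceeds the node's threshold. The simulation below is a simplified, hypothetical formulation (synchronous updates, fixed edge weights, arbitrary tie-breaking) intended only to convey the dynamics, not the learnability construction studied in the paper.

    import numpy as np

    def simulate_competitive_threshold(weights, thresholds, seeds, steps=10):
        """weights[i, j]: influence of node i on node j; thresholds[j]: adoption
        threshold of node j; seeds: {node: cascade_id} of initial adopters.
        Returns the final cascade label of each node (-1 = never activated)."""
        n = weights.shape[0]
        state = np.full(n, -1)
        for node, cascade in seeds.items():
            state[node] = cascade
        for _ in range(steps):
            new_state = state.copy()
            for j in np.where(state == -1)[0]:
                # Influence accumulated from active neighbours, per cascade.
                influence = {c: weights[state == c, j].sum()
                             for c in set(seeds.values())}
                winner, total = max(influence.items(), key=lambda kv: kv[1])
                if total >= thresholds[j]:
                    new_state[j] = winner   # tie-breaking by max() is arbitrary
            if np.array_equal(new_state, state):
                break
            state = new_state
        return state

    # Hypothetical 5-node line graph with two competing seeds at the ends.
    W = np.zeros((5, 5))
    for i in range(4):
        W[i, i + 1] = W[i + 1, i] = 0.6
    theta = np.full(5, 0.5)
    print(simulate_competitive_threshold(W, theta, {0: 0, 4: 1}))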
10

Demajo, Lara Marie, Vince Vella, and Alexiei Dingli. "Explainable AI for Interpretable Credit Scoring." In 10th International Conference on Advances in Computing and Information Technology (ACITY 2020). AIRCC Publishing Corporation, 2020. http://dx.doi.org/10.5121/csit.2020.101516.

Abstract:
With the ever-growing achievements in Artificial Intelligence (AI) and the recent boosted enthusiasm in Financial Technology (FinTech), applications such as credit scoring have gained substantial academic interest. Credit scoring helps financial experts make better decisions about whether or not to accept a loan application, so that loans with a high probability of default are not accepted. Apart from the noisy and highly imbalanced data challenges faced by such credit scoring models, recent regulations such as the 'right to explanation' introduced by the General Data Protection Regulation (GDPR) and the Equal Credit Opportunity Act (ECOA) have added the need for model interpretability to ensure that algorithmic decisions are understandable and coherent. An interesting concept that has recently been introduced is eXplainable AI (XAI), which focuses on making black-box models more interpretable. In this work, we present a credit scoring model that is both accurate and interpretable. For classification, state-of-the-art performance on the Home Equity Line of Credit (HELOC) and Lending Club (LC) datasets is achieved using the Extreme Gradient Boosting (XGBoost) model. The model is then further enhanced with a 360-degree explanation framework, which provides the different explanations (i.e., global, local feature-based, and local instance-based) required by different people in different situations. Evaluation through functionally-grounded, application-grounded, and human-grounded analyses shows that the explanations provided are simple and consistent, and satisfy the six predetermined hypotheses testing for correctness, effectiveness, easy understanding, detail sufficiency, and trustworthiness.
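The combination described here, a gradient-boosted tree classifier wrapped with global and local explanations, is commonly approximated with the xgboost and shap libraries. The snippet below is a hedged sketch on synthetic data: the random features are a stand-in for HELOC/Lending Club attributes, and TreeSHAP attributions stand in for the paper's 360-degree explanation framework rather than reproducing it.

    import numpy as np
    import shap
    import xgboost as xgb
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for HELOC-style tabular credit data.
    rng = np.random.default_rng(7)
    X = rng.normal(size=(2000, 10))
    y = (X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.5, size=2000) > 0).astype(int)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

    model = xgb.XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05)
    model.fit(X_tr, y_tr)

    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X_te)

    # Global view: mean absolute SHAP value per feature (feature importance).
    print(np.abs(shap_values).mean(axis=0))
    # Local view: contribution of each feature to one applicant's score.
    print(shap_values[0])

Global importance here aggregates per-feature attributions across applicants, while the per-row SHAP vector plays the role of a local feature-based explanation for a single credit decision.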
