A selection of scholarly literature on the topic "Explainable intelligence models"
Format your citation in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, dissertations, conference papers, and other scholarly sources on the topic "Explainable intelligence models".
Next to every item in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, whenever such details are available in the source's metadata.
Journal articles on the topic "Explainable intelligence models"
Abdelmonem, Ahmed, and Nehal N. Mostafa. "Interpretable Machine Learning Fusion and Data Analytics Models for Anomaly Detection." Fusion: Practice and Applications 3, no. 1 (2021): 54–69. http://dx.doi.org/10.54216/fpa.030104.
Zednik, Carlos, and Hannes Boelsen. "Scientific Exploration and Explainable Artificial Intelligence." Minds and Machines 32, no. 1 (March 2022): 219–39. http://dx.doi.org/10.1007/s11023-021-09583-6.
Raikov, Alexander N. "Subjectivity of Explainable Artificial Intelligence." Russian Journal of Philosophical Sciences 65, no. 1 (June 25, 2022): 72–90. http://dx.doi.org/10.30727/0235-1188-2022-65-1-72-90.
Althoff, Daniel, Helizani Couto Bazame, and Jessica Garcia Nascimento. "Untangling hybrid hydrological models with explainable artificial intelligence." H2Open Journal 4, no. 1 (January 1, 2021): 13–28. http://dx.doi.org/10.2166/h2oj.2021.066.
Gunning, David, and David Aha. "DARPA’s Explainable Artificial Intelligence (XAI) Program." AI Magazine 40, no. 2 (June 24, 2019): 44–58. http://dx.doi.org/10.1609/aimag.v40i2.2850.
Owens, Emer, Barry Sheehan, Martin Mullins, Martin Cunneen, Juliane Ressel, and German Castignani. "Explainable Artificial Intelligence (XAI) in Insurance." Risks 10, no. 12 (December 1, 2022): 230. http://dx.doi.org/10.3390/risks10120230.
Lorente, Maria Paz Sesmero, Elena Magán Lopez, Laura Alvarez Florez, Agapito Ledezma Espino, José Antonio Iglesias Martínez, and Araceli Sanchis de Miguel. "Explaining Deep Learning-Based Driver Models." Applied Sciences 11, no. 8 (April 7, 2021): 3321. http://dx.doi.org/10.3390/app11083321.
Letzgus, Simon, Patrick Wagner, Jonas Lederer, Wojciech Samek, Klaus-Robert Müller, and Grégoire Montavon. "Toward Explainable Artificial Intelligence for Regression Models: A methodological perspective." IEEE Signal Processing Magazine 39, no. 4 (July 2022): 40–58. http://dx.doi.org/10.1109/msp.2022.3153277.
Han, Juhee, and Younghoon Lee. "Explainable Artificial Intelligence-Based Competitive Factor Identification." ACM Transactions on Knowledge Discovery from Data 16, no. 1 (July 3, 2021): 1–11. http://dx.doi.org/10.1145/3451529.
Patil, Shruti, Vijayakumar Varadarajan, Siddiqui Mohd Mazhar, Abdulwodood Sahibzada, Nihal Ahmed, Onkar Sinha, Satish Kumar, Kailash Shaw, and Ketan Kotecha. "Explainable Artificial Intelligence for Intrusion Detection System." Electronics 11, no. 19 (September 27, 2022): 3079. http://dx.doi.org/10.3390/electronics11193079.
Повний текст джерелаДисертації з теми "Explainable intelligence models"
Palmisano, Enzo Pio. "A First Study of Transferable and Explainable Deep Learning Models for HPC Systems." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20099/.
Costa, Bueno Vicente. "Fuzzy Horn clauses in artificial intelligence: a study of free models, and applications in art painting style categorization." Doctoral thesis, Universitat Autònoma de Barcelona, 2021. http://hdl.handle.net/10803/673374.
This PhD thesis contributes to the systematic study of Horn clauses of predicate fuzzy logics and their use in knowledge representation for the design of an art painting style classification algorithm. We first focus the study on relevant notions in logic programming, such as free models and Herbrand structures in mathematical fuzzy logic. We show the existence of free models in fuzzy universal Horn classes, and we prove that every equality-free consistent universal Horn fuzzy theory has a Herbrand model. Two notions of minimality of free models are introduced, and we show that these notions are equivalent in the case of fully named structures. Then, we use Horn clauses combined with qualitative modeling as a fuzzy knowledge representation framework for art painting style categorization. Finally, we design a painting style classifier based on evaluated Horn clauses, qualitative color descriptors, and explanations. This algorithm, called l-SHE, provides reasons for the obtained results and achieves competitive accuracy in the experiments.
Universitat Autònoma de Barcelona. Programa de Doctorat en Ciència Cognitiva i Llenguatge
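To make the notion concrete for readers browsing this list, a purely schematic example of an evaluated fuzzy Horn clause of the kind the abstract above mentions can be written as follows; the predicate names and the degree are invented for illustration and are not taken from the thesis:

\[
\big\langle\; \forall x \,\big( \mathrm{Warm}(x) \,\&\, \mathrm{Dark}(x) \rightarrow \mathrm{Baroque}(x) \big),\ 0.8 \;\big\rangle
\]

Read: the Horn clause (a conjunction of atoms implying a single atom) is asserted to hold to degree at least 0.8 on the unit interval [0, 1], rather than being simply true or false.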
Rouget, Thierry. "Learning explainable concepts in the presence of a qualitative model." Thesis, University of Ottawa (Canada), 1995. http://hdl.handle.net/10393/9762.
Giuliani, Luca. "Extending the Moving Targets Method for Injecting Constraints in Machine Learning." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23885/.
Saluja, Rohit. "Interpreting Multivariate Time Series for an Organization Health Platform." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-289465.
Machine learning-based systems are rapidly gaining popularity because it has been realized that machines are more efficient than humans at performing certain tasks. Although machine learning algorithms are extremely popular, they are also very literal. This has led to an enormous increase in research on interpretability in machine learning, to ensure that machine learning models are reliable, fair, and can be held accountable for their decision-making process. Moreover, in most real-world problems, merely making predictions with machine learning algorithms only partially solves the problem. Time series are one of the most popular and important data types because of their dominant presence in business, economics, and engineering. Despite this, interpretability for time series is still relatively unexplored compared to tabular, text, and image data. With the growing research on interpretability in machine learning, there is also a great need to be able to quantify the quality of the explanations produced when interpreting machine learning models. For this reason, the evaluation of interpretability is extremely important. The evaluation of interpretability for models based on time series appears to be almost unexplored in research circles. This thesis focuses on achieving and evaluating model-agnostic interpretability in a time series forecasting problem. The focus lies on solving, as a use case, a problem faced by a digital consultancy. The digital consultancy wants to use a data-driven approach to understand the effect of its various sales-related activities on the sales deals the company closes. The solution involved framing the problem as a time series forecasting problem to predict the sales deals and interpreting the underlying forecasting model. Interpretability was achieved using two model-agnostic interpretability techniques, Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP). The explanations produced were assessed by means of human evaluation of interpretability. The results of the human evaluation studies clearly show that the explanations produced by LIME and SHAP greatly helped humans understand the predictions made by the machine learning model. The results also showed that LIME and SHAP explanations were almost equally comprehensible, with LIME performing better, but by a very small margin. The work carried out in this project can easily be extended to any time series forecasting or classification scenario for achieving and evaluating interpretability. Furthermore, this work can offer a very good framework for achieving and evaluating interpretability in any machine learning-based regression or classification problem.
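As a rough, minimal sketch of the model-agnostic workflow the abstract describes (explaining a regression model with LIME and SHAP), the following Python fragment uses synthetic data and invented feature names; it is an illustration under those assumptions, not the thesis implementation:

# Minimal sketch: explaining a tabular regression model with LIME and SHAP.
# The feature names and data below are synthetic placeholders, not the thesis data.
import numpy as np
import shap
from lime import lime_tabular
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
feature_names = ["meetings", "calls", "demos", "emails"]  # hypothetical sales activities
X = rng.normal(size=(500, len(feature_names)))
y = 3 * X[:, 0] + 2 * X[:, 2] + rng.normal(scale=0.1, size=500)  # synthetic "closed deals"

model = GradientBoostingRegressor().fit(X, y)

# SHAP: additive per-feature attributions for the tree model's predictions.
shap_values = shap.TreeExplainer(model).shap_values(X[:10])

# LIME: a local surrogate model fitted around a single instance to be explained.
lime_explainer = lime_tabular.LimeTabularExplainer(
    X, feature_names=feature_names, mode="regression")
explanation = lime_explainer.explain_instance(X[0], model.predict, num_features=4)

print(explanation.as_list())  # (feature condition, local weight) pairs from LIME
print(shap_values[0])         # per-feature SHAP contributions for the first row

Both libraries attribute each prediction to individual input features: SHAP via additive Shapley values, LIME via the coefficients of a local surrogate model.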
AfzaliSeresht, Neda. "Explainable Intelligence for Comprehensive Interpretation of Cybersecurity Data in Incident Management." Thesis, 2022. https://vuir.vu.edu.au/44414/.
"Foundations of Human-Aware Planning -- A Tale of Three Models." Doctoral dissertation, Computer Science, 2018. http://hdl.handle.net/2286/R.I.51791.
Pereira, Filipe Inácio da Costa. "Explainable artificial intelligence - learning decision sets with sat." Master's thesis, 2018. http://hdl.handle.net/10451/34903.
Artificial Intelligence is a core research topic with key significance in technological growth. With the increase of data, we have more efficient models that in a few seconds will inform us of their prediction on a given input set. The more complex techniques nowadays, which achieve better results, are black-box models. Unfortunately, these cannot provide an explanation behind their predictions, which is a major drawback for us humans. Explainable Artificial Intelligence, whose objective is to associate explanations with decisions made by autonomous agents, addresses this lack of transparency. This can be done by two approaches: either by creating models that are interpretable by themselves or by creating frameworks that justify and interpret any prediction made by any given model. This thesis describes the implementation of two interpretable models (Decision Sets and Decision Trees) based on logic reasoners, either SAT (Satisfiability) or SMT (Satisfiability Modulo Theories) solvers. This work was motivated by an in-depth analysis of past work in the area of Explainable Artificial Intelligence, with the purpose of seeking applications of logic in this domain. The Decision Sets approach starts, as does any other model, from the training data and encodes the variables and constraints as a CNF (Conjunctive Normal Form) formula which can then be solved by a SAT/SMT oracle. This approach focuses on minimizing the number of rules (or Disjunctive Normal Forms) for each binary class representation and avoiding overlap, whether training-sample or feature-space overlap, while maintaining interpretable explanations and perfect accuracy. The Decision Tree model studied in this work consists in computing a minimum-size decision tree, which represents a 100% accurate classifier for a given set of training samples. The model is based on encoding the problem as a CNF formula, which can be tackled with the efficient use of a SAT oracle.
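As a rough sketch of the general workflow the abstract mentions (posing a propositional encoding to a SAT oracle and reading back a model), the following Python fragment uses the PySAT library on a toy set of clauses; it is not the thesis's actual decision-set encoding:

# Minimal sketch: hand a CNF encoding to a SAT oracle via PySAT.
# The clauses below are a toy example, invented for illustration only.
from pysat.solvers import Glucose3

# Boolean variables 1..3 could stand for "literal j is used in the rule being learned".
clauses = [
    [1, 2, 3],   # the rule must use at least one literal
    [-1, -2],    # toy mutual-exclusion constraint between literals 1 and 2
]

with Glucose3(bootstrap_with=clauses) as solver:
    if solver.solve():
        # A satisfying assignment, e.g. [1, -2, -3], decodes into one candidate rule.
        print("satisfying assignment:", solver.get_model())
    else:
        print("UNSAT: no rule satisfies the constraints")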
Neves, Maria Inês Lourenço das. "Opening the black-box of artificial intelligence predictions on clinical decision support systems." Master's thesis, 2021. http://hdl.handle.net/10362/126699.
Cardiovascular diseases are the leading cause of death worldwide, and their treatment and prevention rely on the interpretation of the electrocardiogram. The interpretation of the electrocardiogram by physicians is inherently subjective and therefore prone to error. To support physicians' decisions, artificial intelligence is being used to develop models capable of interpreting large datasets and providing accurate decisions. However, the lack of interpretability of most machine learning models is one of the drawbacks of relying on them, especially in a clinical context. Additionally, most explainable artificial intelligence methods assume independence between samples, which implies assuming temporal independence when dealing with time series. This inherent characteristic of time series cannot be ignored, as it matters for the human decision-making process. This dissertation draws on explainable artificial intelligence to make heartbeat classification intelligible, using several adaptations of state-of-the-art model-agnostic methods. To address the explanation of time series classifiers, a preliminary taxonomy is proposed, together with the use of the derivative as a complement to add temporal dependence between samples. The results were validated on a large public dataset by means of the 1-D Jaccard index, comparing the subsequences extracted from an interpretable model with those from the explainable artificial intelligence methods used, and by a quality analysis to assess whether the explanation fits the model's behaviour. To evaluate models with different internal logics, the validation was carried out using, on the one hand, a more transparent model and, on the other, a more opaque one, in both a binary and a multiclass classification setting. The results show the promising use of including the signal's derivative to introduce temporal dependence between samples in the explanations provided, for models with simpler internal logic.
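A minimal sketch of two ingredients mentioned in the abstract, adding the signal's derivative to introduce temporal dependence and scoring the agreement between explanations with a 1-D Jaccard index, is given below; the waveform and index sets are invented placeholders, not the thesis data:

# Minimal sketch: derivative channel plus a 1-D Jaccard index over highlighted indices.
import numpy as np

def with_derivative(signal: np.ndarray) -> np.ndarray:
    """Stack the raw signal with its first-order derivative estimate."""
    derivative = np.gradient(signal)
    return np.stack([signal, derivative], axis=0)

def jaccard_1d(indices_a: set, indices_b: set) -> float:
    """Jaccard index between two sets of highlighted time-step indices."""
    if not indices_a and not indices_b:
        return 1.0
    return len(indices_a & indices_b) / len(indices_a | indices_b)

heartbeat = np.sin(np.linspace(0, 2 * np.pi, 180))   # placeholder waveform, not ECG data
features = with_derivative(heartbeat)                 # shape (2, 180): signal + derivative

# Hypothetical index sets highlighted by an interpretable model vs. an XAI method:
print(jaccard_1d(set(range(40, 80)), set(range(50, 90))))  # 30 shared / 50 total = 0.6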
Books on the topic "Explainable intelligence models"
Escalera, Sergio, Isabelle Guyon, Xavier Baró, Umut Güçlü, Hugo Jair Escalante, Yağmur Güçlütürk, and Marcel van Gerven. Explainable and Interpretable Models in Computer Vision and Machine Learning. Springer, 2019.
Mishra, Pradeepta. Practical Explainable AI Using Python: Artificial Intelligence Model Explanations Using Python-Based Libraries, Extensions, and Frameworks. Apress L. P., 2021.
Book chapters on the topic "Explainable intelligence models"
Holzinger, Andreas, Randy Goebel, Ruth Fong, Taesup Moon, Klaus-Robert Müller, and Wojciech Samek. "xxAI - Beyond Explainable Artificial Intelligence." In xxAI - Beyond Explainable AI, 3–10. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_1.
Gaur, Loveleen, and Biswa Mohan Sahoo. "Intelligent Transportation System: Modern Business Models." In Explainable Artificial Intelligence for Intelligent Transportation Systems, 67–77. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-09644-0_4.
Chennam, Krishna Keerthi, Swapna Mudrakola, V. Uma Maheswari, Rajanikanth Aluvalu, and K. Gangadhara Rao. "Black Box Models for eXplainable Artificial Intelligence." In Explainable AI: Foundations, Methodologies and Applications, 1–24. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-12807-3_1.
Banerjee, Puja, and Rajesh P. Barnwal. "Methods and Metrics for Explaining Artificial Intelligence Models: A Review." In Explainable AI: Foundations, Methodologies and Applications, 61–88. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-12807-3_4.
Hutchison, Jack, Duc-Son Pham, Sie-Teng Soh, and Huo-Chong Ling. "Explainable Network Intrusion Detection Using External Memory Models." In AI 2022: Advances in Artificial Intelligence, 220–33. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-22695-3_16.
Adadi, Amina, and Mohammed Berrada. "Explainable AI for Healthcare: From Black Box to Interpretable Models." In Embedded Systems and Artificial Intelligence, 327–37. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-0947-6_31.
Holzinger, Andreas, Anna Saranti, Christoph Molnar, Przemyslaw Biecek, and Wojciech Samek. "Explainable AI Methods - A Brief Overview." In xxAI - Beyond Explainable AI, 13–38. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_2.
Udenwagu, Nnaemeka E., Ambrose A. Azeta, Sanjay Misra, Vivian O. Nwaocha, Daniel L. Enosegbe, and Mayank Mohan Sharma. "ExplainEx: An Explainable Artificial Intelligence Framework for Interpreting Predictive Models." In Hybrid Intelligent Systems, 505–15. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-73050-5_51.
Brini, Iheb, Maroua Mehri, Rolf Ingold, and Najoua Essoukri Ben Amara. "An End-to-End Framework for Evaluating Explainable Deep Models: Application to Historical Document Image Segmentation." In Computational Collective Intelligence, 106–19. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-16014-1_10.
Boutorh, Aicha, Hala Rahim, and Yassmine Bendoumia. "Explainable AI Models for COVID-19 Diagnosis Using CT-Scan Images and Clinical Data." In Computational Intelligence Methods for Bioinformatics and Biostatistics, 185–99. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-20837-9_15.
Conference papers on the topic "Explainable intelligence models"
Ignatiev, Alexey. "Towards Trustable Explainable AI." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/726.
Sampat, Shailaja. "Technical, Hard and Explainable Question Answering (THE-QA)." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/916.
Daniels, Zachary A., Logan D. Frank, Christopher Menart, Michael Raymer, and Pascal Hitzler. "A framework for explainable deep neural models using external knowledge graphs." In Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, edited by Tien Pham, Latasha Solomon, and Katie Rainey. SPIE, 2020. http://dx.doi.org/10.1117/12.2558083.
Song, Haekang, and Sungho Kim. "Explainable artificial intelligence (XAI): How to make image analysis deep learning models transparent." In 2022 22nd International Conference on Control, Automation and Systems (ICCAS). IEEE, 2022. http://dx.doi.org/10.23919/iccas55662.2022.10003813.
Nagaraj Rao, Varun, Xingjian Zhen, Karen Hovsepian, and Mingwei Shen. "A First Look: Towards Explainable TextVQA Models via Visual and Textual Explanations." In Proceedings of the Third Workshop on Multimodal Artificial Intelligence. Stroudsburg, PA, USA: Association for Computational Linguistics, 2021. http://dx.doi.org/10.18653/v1/2021.maiworkshop-1.4.
Lisboa, Paulo, Sascha Saralajew, Alfredo Vellido, and Thomas Villmann. "The Coming of Age of Interpretable and Explainable Machine Learning Models." In ESANN 2021 - European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning. Louvain-la-Neuve (Belgium): Ciaco - i6doc.com, 2021. http://dx.doi.org/10.14428/esann/2021.es2021-2.
Byrne, Ruth M. J. "Counterfactuals in Explainable Artificial Intelligence (XAI): Evidence from Human Reasoning." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/876.
Alibekov, M. R. "Diagnosis of Plant Biotic Stress by Methods of Explainable Artificial Intelligence." In 32nd International Conference on Computer Graphics and Vision. Keldysh Institute of Applied Mathematics, 2022. http://dx.doi.org/10.20948/graphicon-2022-728-739.
Wang, Yifan, and Guangmo Tong. "Learnability of Competitive Threshold Models." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/553.
Demajo, Lara Marie, Vince Vella, and Alexiei Dingli. "Explainable AI for Interpretable Credit Scoring." In 10th International Conference on Advances in Computing and Information Technology (ACITY 2020). AIRCC Publishing Corporation, 2020. http://dx.doi.org/10.5121/csit.2020.101516.