Academic literature on the topic 'Uncertainty in AI'

Create an accurate reference in APA, MLA, Chicago, Harvard, and other citation styles

Consult the lists of relevant articles, books, theses, conference papers, and other scholarly sources on the topic 'Uncertainty in AI.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Uncertainty in AI"

1

Martinho, Andreia, Maarten Kroesen, and Caspar Chorus. "Computer Says I Don’t Know: An Empirical Approach to Capture Moral Uncertainty in Artificial Intelligence." Minds and Machines 31, no. 2 (February 23, 2021): 215–37. http://dx.doi.org/10.1007/s11023-021-09556-9.

Full text
Abstract:
As AI Systems become increasingly autonomous, they are expected to engage in decision-making processes that have moral implications. In this research we integrate theoretical and empirical lines of thought to address the matters of moral reasoning and moral uncertainty in AI Systems. We reconceptualize the metanormative framework for decision-making under moral uncertainty and we operationalize it through a latent class choice model. The core idea is that moral heterogeneity in society can be codified in terms of a small number of classes with distinct moral preferences and that this codification can be used to express the moral uncertainty of an AI. Choice analysis allows for the identification of classes and their moral preferences based on observed choice data. Our reformulation of the metanormative framework is theory-rooted and practical in the sense that it avoids runtime issues in real-time applications. To illustrate our approach we conceptualize a society in which AI Systems are in charge of making policy choices. While one of the systems uses a baseline morally certain model, the other uses a morally uncertain model. We highlight cases in which the AI Systems disagree about the policy to be chosen, thus illustrating the need to capture moral uncertainty in AI systems.
APA, Harvard, Vancouver, ISO, and other styles
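The core idea in the abstract above — codifying moral heterogeneity as a small number of latent classes and letting a morally uncertain AI aggregate over them — can be illustrated with a toy calculation. The sketch below uses invented class shares and utilities and is not the authors' latent class choice model:

```python
# Toy illustration (invented numbers): a morally certain AI follows the largest
# moral class, while a morally uncertain AI weighs every class by its share.
import numpy as np

policies = ["policy_A", "policy_B", "policy_C"]
class_shares = np.array([0.50, 0.30, 0.20])   # estimated share of each moral class
class_utils = np.array([                      # class-specific moral preferences
    [0.60, 0.50, 0.10],
    [0.10, 0.90, 0.20],
    [0.00, 0.95, 0.30],
])

# Morally certain baseline: adopt the preferences of the largest class only.
certain_choice = policies[int(np.argmax(class_utils[np.argmax(class_shares)]))]

# Morally uncertain model: expected choiceworthiness across all classes.
expected_worth = class_shares @ class_utils
uncertain_choice = policies[int(np.argmax(expected_worth))]

print("certain:", certain_choice, "| uncertain:", uncertain_choice)
```

In this toy case the morally certain baseline picks a different policy than the expectation over all classes, mirroring the disagreement cases the paper highlights.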
2

Wu, Junyi, and Shari Shang. "Managing Uncertainty in AI-Enabled Decision Making and Achieving Sustainability." Sustainability 12, no. 21 (October 22, 2020): 8758. http://dx.doi.org/10.3390/su12218758.

Full text
Abstract:
Artificial intelligence (AI) has been applied to various decision-making tasks. However, scholars have yet to comprehend how computers can integrate decision making with uncertainty management. Obtaining such comprehension would enable scholars to deliver sustainable AI decision-making applications that adapt to the changing world. This research examines uncertainties in AI-enabled decision-making applications and some approaches for managing various types of uncertainty. By referring to studies on uncertainty in decision making, this research describes three dimensions of uncertainty, namely informational, environmental and intentional. To understand how to manage uncertainty in AI-enabled decision-making applications, the authors conduct a literature review using content analysis with practical approaches. According to the analysis results, a mechanism related to those practical approaches is proposed for managing diverse types of uncertainty in AI-enabled decision making.
APA, Harvard, Vancouver, ISO, and other styles
3

Catton, David. "AI tools for DP — programming under uncertainty." Data Processing 27, no. 4 (May 1985): 24–27. http://dx.doi.org/10.1016/0011-684x(85)90051-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Yager, Ronald R. "Ordinal scale based uncertainty models for AI." Information Fusion 64 (December 2020): 92–98. http://dx.doi.org/10.1016/j.inffus.2020.06.010.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Cohen, Paul R. "The control of reasoning under uncertainty: A discussion of some programs." Knowledge Engineering Review 2, no. 1 (March 1987): 5–25. http://dx.doi.org/10.1017/s0269888900000680.

Full text
Abstract:
This paper proposes that managing uncertainty is a control problem, a task for the control component of AI systems that decides what to do next. This view emphasizes the process of planning and executing sequences of actions that simultaneously satisfy domain goals and minimize uncertainty. The paper reviews AI systems that manage uncertainty by control. It is not an exhaustive survey, but rather illustrates issues in managing uncertainty with selected AI programs.
APA, Harvard, Vancouver, ISO, and other styles
6

Saffiotti, Alessandro. "An AI view of the treatment of uncertainty." Knowledge Engineering Review 2, no. 2 (June 1987): 75–97. http://dx.doi.org/10.1017/s0269888900000795.

Full text
Abstract:
This paper reviews many of the very varied concepts of uncertainty used in AI. Because of their great popularity and generality, so-called "parallel certainty inference" techniques are prominently in the foreground. We illustrate and comment in detail on three of these techniques: Bayes' theory (section 2), Dempster-Shafer theory (section 3), and Cohen's model of endorsements (section 4), and give an account of the debate that has arisen around each of them. Techniques of a different kind (such as Zadeh's fuzzy-sets and fuzzy-logic theory, and the use of non-standard logics and methods that manage uncertainty without explicitly dealing with it) may be seen in the background (section 5). The discussion of technicalities is accompanied by a historical and philosophical excursion on the nature and the use of uncertainty (section 1), and by a brief discussion of the problem of choosing an adequate AI approach to the treatment of uncertainty (section 6). The aim of the paper is to highlight the complex nature of uncertainty and to argue for an open-minded attitude towards its representation and use. In this spirit the pros and cons of uncertainty treatment techniques are presented in order to reflect the various uncertainty types. A guide to the literature in the field and an extensive bibliography are appended.
APA, Harvard, Vancouver, ISO, and other styles
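Of the "parallel certainty inference" techniques surveyed above, Dempster-Shafer theory is the easiest to show in a few lines. The sketch below implements Dempster's rule of combination for two sources; the frame of discernment and the mass assignments are invented for illustration:

```python
# Minimal sketch of Dempster's rule of combination. The two "sensors" and their
# mass assignments over the frame {flu, cold, allergy} are invented.
from itertools import product

def combine(m1, m2):
    """Combine two mass functions (dicts mapping frozensets to masses)."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb          # mass assigned to contradictory intersections
    if conflict >= 1.0:
        raise ValueError("Sources are completely conflicting")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

frame = frozenset({"flu", "cold", "allergy"})
sensor_1 = {frozenset({"flu"}): 0.6, frozenset({"flu", "cold"}): 0.3, frame: 0.1}
sensor_2 = {frozenset({"cold"}): 0.5, frozenset({"flu", "cold"}): 0.3, frame: 0.2}

for hypothesis, mass in sorted(combine(sensor_1, sensor_2).items(), key=lambda kv: -kv[1]):
    print(set(hypothesis), round(mass, 3))
```

The normalization by 1 − K, where K is the conflicting mass, is the step at the centre of much of the debate the paper recounts.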
7

Lebovitz, Sarah, Natalia Levina, and Hila Lifshitz-Assaf. "Is AI Ground Truth Really True? The Dangers of Training and Evaluating AI Tools Based on Experts’ Know-What." MIS Quarterly 45, no. 3 (September 1, 2021): 1501–26. http://dx.doi.org/10.25300/misq/2021/16564.

Full text
Abstract:
Organizational decision-makers need to evaluate AI tools in light of increasing claims that such tools outperform human experts. Yet, measuring the quality of knowledge work is challenging, raising the question of how to evaluate AI performance in such contexts. We investigate this question through a field study of a major U.S. hospital, observing how managers evaluated five different machine-learning (ML) based AI tools. Each tool reported high performance according to standard AI accuracy measures, which were based on ground truth labels provided by qualified experts. Trying these tools out in practice, however, revealed that none of them met expectations. Searching for explanations, managers began confronting the high uncertainty of experts’ know-what knowledge captured in ground truth labels used to train and validate ML models. In practice, experts address this uncertainty by drawing on rich know-how practices, which were not incorporated into these ML-based tools. Discovering the disconnect between AI’s know-what and experts’ know-how enabled managers to better understand the risks and benefits of each tool. This study shows the dangers of treating ground truth labels used in ML models as objective when the underlying knowledge is uncertain. We outline implications of our study for developing, training, and evaluating AI for knowledge work.
APA, Harvard, Vancouver, ISO, and other styles
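The "know-what" uncertainty discussed above can be made concrete by measuring how much the experts who supply ground truth labels disagree with one another. The sketch below is a generic illustration with invented labels, not the study's method: it scores each case by the entropy of its expert labels, flagging cases whose "ground truth" is itself uncertain.

```python
# Hypothetical sketch: quantify disagreement among expert-provided "ground truth"
# labels before trusting them to train or evaluate an AI tool. Labels are invented.
from collections import Counter
from math import log2

expert_labels = {                 # case id -> labels from several experts
    "case_01": ["malignant", "malignant", "malignant"],
    "case_02": ["malignant", "benign", "malignant"],
    "case_03": ["benign", "malignant", "indeterminate"],
}

def label_entropy(labels):
    """Shannon entropy of the empirical label distribution (0 = full agreement)."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * log2(c / total) for c in counts.values())

for case, labels in expert_labels.items():
    print(case, round(label_entropy(labels), 2))
```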
8

Asan, Onur, Alparslan Emrah Bayrak, and Avishek Choudhury. "Artificial Intelligence and Human Trust in Healthcare: Focus on Clinicians." Journal of Medical Internet Research 22, no. 6 (June 19, 2020): e15154. http://dx.doi.org/10.2196/15154.

Full text
Abstract:
Artificial intelligence (AI) can transform health care practices with its increasing ability to translate the uncertainty and complexity in data into actionable—though imperfect—clinical decisions or suggestions. In the evolving relationship between humans and AI, trust is the one mechanism that shapes clinicians’ use and adoption of AI. Trust is a psychological mechanism to deal with the uncertainty between what is known and unknown. Several research studies have highlighted the need for improving AI-based systems and enhancing their capabilities to help clinicians. However, assessing the magnitude and impact of human trust on AI technology demands substantial attention. Will a clinician trust an AI-based system? What are the factors that influence human trust in AI? Can trust in AI be optimized to improve decision-making processes? In this paper, we focus on clinicians as the primary users of AI systems in health care and present factors shaping trust between clinicians and AI. We highlight critical challenges related to trust that should be considered during the development of any AI system for clinical use.
APA, Harvard, Vancouver, ISO, and other styles
9

D'Avanzo, Ernesto. "AI and Neuroeconomics." International Journal of Smart Education and Urban Society 9, no. 2 (April 2018): 39–48. http://dx.doi.org/10.4018/ijseus.2018040104.

Full text
Abstract:
This article describes how Plato proposed the dualistic solution to the mind-body problem, providing an explanation along the lines of his epistemology. Francis Bacon, in 1600, formulated his vision of the scientific method, which would remain valid until the 1960s, when Karl Popper proposed his own version, entering into controversy with the Lord Chancellor. Thanks to recent developments in artificial intelligence and computational neuroscience, these problems now have empirical tools with which to be analyzed. An interesting aspect of this research program, better known as neuroeconomics, is the use it makes of probability calculus for dealing with so-called decisions under uncertainty. The paper is an attempt to recount the birth and development of these toolboxes, with some examples, for all those who want to apply them to improve knowledge-inspired organizations.
APA, Harvard, Vancouver, ISO, and other styles
10

Ageev, Alexander I. "Artificial Intelligence: The Opacity of Concepts in the Uncertainty of Realities." Russian Journal of Philosophical Sciences 65, no. 1 (June 25, 2022): 27–43. http://dx.doi.org/10.30727/0235-1188-2022-65-1-27-43.

Full text
Abstract:
The development of artificial intelligence (AI) systems and digital transformation in general lead to the formation of a multitude of autonomous agents of artificial and mixed genealogy, as well as to complex structures in the information and regulatory environment with many opportunities and pathologies and a growing level of uncertainty in managerial decision making. The situation is complicated by the continuing plurality of understandings of the essence of AI systems. The modern expanded understanding of AI goes back to ideas formulated more than 100 years ago. In official national policy documents on the development of AI, working definitions of AI are preferred. The current stage of the AI systems' life cycle can be assessed as the completion of the initial period in the development of systems associated with weak AI. The ability of artificial systems to be aware of themselves as separate persons is becoming one of the serious scientific and practical challenges. Attention to the ethics of AI systems indicates the expansion of the diversity of their forms and the beginning of work in the field of goal-setting. New moral and ethical problems also arise in connection with the possible creation of genuinely conscious subjects in the foreseeable future. The phenomenon of degradation of natural intelligence is also growing. It is necessary to take into account the heterogeneity of data generated by humans, electronic sensors, and network devices in the dynamic, complex environments of the digital economy, as well as the complexity of the co-evolution of AI systems and collective and individual natural consciousness. A special area of opportunities and risks is the development of neurotechnologies. Digital twins become an object of control through which the real attitudes, preferences, and behavior of individuals can be manipulated. The result is the development of technological capabilities that provoke destructive phenomena, as well as the formation of a new class of mass addictions.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Uncertainty in AI"

1

Karlsson, Fredrik. "User-centered Visualizations of Transcription Uncertainty in AI-generated Subtitles of News Broadcast." Thesis, Uppsala universitet, Människa-datorinteraktion, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-415658.

Full text
Abstract:
AI-generated subtitles have recently started to automate the process of subtitling with automatic speech recognition. However, viewers may not perceive that the transcription is based on probabilities and may contain errors. For news that is broadcast live, this may be controversial and cause misinterpretation. A user-centered design approach was taken, investigating three possible solutions for visualizing transcription uncertainties in real-time presentation. Based on the user needs, one proposed solution was used in a qualitative comparison with AI-generated subtitles without visualizations. The results suggest that visualization of uncertainties supports users' interpretation of AI-generated subtitles and helps to identify possible errors. However, it does not improve transcription intelligibility. The results also suggest that unnoticed transcription errors during news broadcasts are perceived as critical and decrease trust in the news. Uncertainty visualizations may increase trust and prevent the risk of misinterpreting important information.
APA, Harvard, Vancouver, ISO, and other styles
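As a rough illustration of what such a visualization layer consumes, the sketch below marks low-confidence words in an ASR transcript. The words, confidence scores, threshold, and markup are invented and stand in for whatever styling the actual prototype used:

```python
# Hypothetical sketch: surface word-level transcription confidence in a subtitle
# line so viewers can spot probable errors. All values below are invented.

def render_subtitle(words, threshold=0.80):
    """Wrap words whose ASR confidence falls below the threshold in markers."""
    rendered = []
    for word, confidence in words:
        rendered.append(word if confidence >= threshold else f"[{word}?]")
    return " ".join(rendered)

asr_output = [("The", 0.99), ("minister", 0.97), ("denied", 0.95),
              ("the", 0.99), ("allegations", 0.62)]
print(render_subtitle(asr_output))   # -> The minister denied the [allegations?]
```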
2

Rukanskaitė, Julija. "Tuning into uncertainty : A material exploration of object detection through play." Thesis, Malmö universitet, Institutionen för konst, kultur och kommunikation (K3), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-44239.

Full text
Abstract:
The ubiquitous yet opaque logic of machine learning complicates both the design process and end-use. Because of this, much of Interaction Design and HCI now focuses on making this logic transparent through human-like explanations and tight control while disregarding other, non-normative human-AI interactions as technical failures. In this thesis I re-frame such interactions as generative for both material exploration and user experience in non-purpose-driven applications. By expanding on the notion of machine learning uncertainty with play, queering, and more-than-human design, I try to understand them in a designerly way. This re-framing is followed by a material-centred Research through Design process that concludes with Object Detection Radio: a ludic device that sonifies Tensorflow.js Object Detection API’s prediction probabilities. The design process suggests ways of making machine learning uncertainty explicit in human-AI interaction. In addition, I propose play as an alternative way of relating to and understanding the agency of machine learning technology.
APA, Harvard, Vancouver, ISO, and other styles
3

Sozak, Ahmet. "Uncertainty Analysis of Coordinate Measuring Machine (CMM) Measurements." Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/2/12608887/index.pdf.

Full text
Abstract:
In this thesis, the measurement uncertainty of a Coordinate Measuring Machine (CMM) is analysed and software is designed to simulate it. The analysis begins with an inspection of the measurement process and the structure of CMMs. After that, error sources are defined with respect to their effects on the measurement, and an error model is constructed to compensate for these effects. In other words, the systematic part of geometric, kinematic and thermal errors is compensated for through error modelling. The kinematic and geometric error model is specific to the structure of the CMM under inspection. Also, a common orthogonal kinematic model is formed, and using the laser error data of the CMM, error maps of the machine volume are obtained. Afterwards, the models are compared with each other by taking their difference and ratio. The definition and compensation of the systematic errors leaves the uncertainty of measurements to be analysed. Measurement uncertainty consists of the uncompensated systematic errors and random errors. The other aim of the thesis is to quantify these uncertainties using different methods and to inspect the success of these methods. Uncertainty budgeting, comparison, statistical evaluation by design of experiments, and simulation methods are examined and applied to the CMM under inspection. In addition, Virtual CMM software is designed to simulate the task-specific measurement uncertainty of circle, sphere and plane measurements without using repeated measurements. Finally, the performance of the software, which depends strongly on the mathematical modelling of the machine volume, is tested using actual measurements.
APA, Harvard, Vancouver, ISO, and other styles
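The simulation method mentioned above can be illustrated generically: propagate assumed probing noise through a measurement model with Monte Carlo trials and read the task-specific uncertainty off the spread of the result. The sketch below fits a circle to noisy points; the noise level and geometry are invented, and this is only an illustration of the idea, not the thesis's Virtual CMM software:

```python
# Generic Monte Carlo sketch of task-specific measurement uncertainty: propagate
# probing noise through a least-squares circle fit and report the spread of the
# fitted diameter. Nominal circle and noise level are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
true_diameter, n_points, probe_sigma = 50.0, 12, 0.002   # mm

def fit_circle_diameter(x, y):
    """Algebraic (Kasa) least-squares circle fit; returns the fitted diameter."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x**2 + y**2
    cx, cy, c = np.linalg.lstsq(A, b, rcond=None)[0]
    return 2 * np.sqrt(c + cx**2 + cy**2)

angles = np.linspace(0, 2 * np.pi, n_points, endpoint=False)
diameters = []
for _ in range(10_000):                                   # Monte Carlo trials
    noise = rng.normal(0, probe_sigma, size=(2, n_points))
    x = true_diameter / 2 * np.cos(angles) + noise[0]
    y = true_diameter / 2 * np.sin(angles) + noise[1]
    diameters.append(fit_circle_diameter(x, y))

diameters = np.array(diameters)
print(f"mean = {diameters.mean():.4f} mm, u = {diameters.std(ddof=1):.4f} mm")
```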
4

Morresi, Nicole. "Sviluppo di un metodo innovativo per la misura del comfort termico attraverso il monitoraggio di parametri fisiologici e ambientali in ambienti indoor." Doctoral thesis, Università Politecnica delle Marche, 2022. http://hdl.handle.net/11566/295518.

Full text
Abstract:
Measuring human thermal comfort in indoor environments is a topic of interest in the scientific community, since thermal comfort deeply affects the well-being of occupants and, furthermore, buildings must face high energy costs to guarantee optimal comfort conditions. Even if there are standards in the field of the ergonomics of the thermal environment that provide guidelines for thermal comfort assessment, in real-world settings it can be very difficult to obtain an accurate measurement. Therefore, to improve the measurement of occupants' thermal comfort in buildings, research is focusing on the assessment of personal and physiological parameters related to thermal comfort, to create environments carefully tailored to the occupants who live in them. This thesis presents several contributions to this topic. A set of studies was implemented to develop and test measurement procedures capable of quantitatively assessing human thermal comfort by means of environmental and physiological parameters, in order to capture peculiarities among different occupants. First, a study was conducted in a controlled climatic chamber with an invasive set of sensors used for measuring physiological parameters. The outcome of this research was a first accuracy in the measurement of thermal comfort of 82%, obtained by training machine learning (ML) algorithms that provide the thermal sensation vote (TSV) by means of environmental quantities and heart rate variability (HRV), a parameter that the literature has often reported to be related to both users' thermal comfort and environmental quantities. This research gave rise to a subsequent study in which thermal comfort assessment was carried out using a minimally invasive smartwatch for collecting HRV. This second study consisted of varying the environmental conditions of a semi-controlled test room while participants carried out light office activities in a limited way, i.e., avoiding as much as possible movements of the hand on which the smartwatch was worn. With this experimental setup, it was possible to establish that the use of artificial intelligence (AI) algorithms (such as random forests or convolutional neural networks) and a heterogeneous dataset created by aggregating environmental and physiological parameters can provide a measure of TSV with a mean absolute error (MAE) of 1.2 and a mean absolute percentage error (MAPE) of 20%. In addition, by using the Monte Carlo Method (MCM), it was possible to compute the impact of the uncertainty of the input quantities on the computation of the TSV. The highest uncertainty was due to the uncertainty of the air temperature (U = 14%) and the relative humidity (U = 10.5%). The last relevant contribution of this research work concerns the measurement of thermal comfort in a real-life, semi-controlled environment in which the participants were not forced to limit their movements. Skin temperature was included in the experimental set-up to improve the measurement of TSV. The results showed that the inclusion of skin temperature for the creation of personalized models, built using data from the single participant, brings satisfactory results (MAE = 0.001±0.0003 and MAPE = 0.02%±0.09%). On the other hand, the more generalized approach, which consists of training the algorithms on all participants except one and using the one left out for testing, provides slightly lower performance (MAE = 1±0.2 and MAPE = 25%±6%).
This result highlights that in semi-controlled conditions, the prediction of TSV using skin temperature and HRV can be performed with acceptable accuracy.
APA, Harvard, Vancouver, ISO, and other styles
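The Monte Carlo step described above — propagating the uncertainty of the input quantities (air temperature, relative humidity) through a trained model to the TSV output — can be sketched generically. Everything below (training data, model choice, sensor uncertainties) is invented for illustration and does not reproduce the thesis's models or numbers:

```python
# Hypothetical sketch: propagate sensor uncertainty of air temperature and relative
# humidity through a trained thermal-sensation regressor with the Monte Carlo Method.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Invented training set: [air temperature degC, relative humidity %, mean HRV ms] -> TSV
X = rng.uniform([18, 30, 20], [32, 70, 120], size=(500, 3))
y = 0.35 * (X[:, 0] - 24) + 0.01 * (X[:, 1] - 50) + 0.005 * (X[:, 2] - 60)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Measured operating point and assumed standard uncertainties of the inputs.
point = np.array([26.0, 55.0, 80.0])
u_inputs = np.array([0.5, 3.0, 5.0])

samples = rng.normal(point, u_inputs, size=(5000, 3))    # MCM draws of the inputs
tsv = model.predict(samples)
print(f"TSV = {tsv.mean():.2f} ± {tsv.std(ddof=1):.2f}")
```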
5

Richards, Whitman. "Collective Choice with Uncertain Domain Models." 2005. http://hdl.handle.net/1721.1/30565.

Full text
Abstract:
When groups of individuals make choices among several alternatives, the most compelling social outcome is the Condorcet winner, namely the alternative beating all others in a pair-wise contest. Obviously the Condorcet winner cannot be overturned if one sub-group proposes another alternative it happens to favor. However, in some cases, and especially with haphazard voting, there will be no clear unique winner, with the outcome consisting of a triple of pair-wise winners that each beat different subsets of the alternatives (i.e., a “top-cycle”). We explore the sensitivity of Condorcet winners to various perturbations in the voting process that lead to top-cycles. Surprisingly, variation in the number of votes for each alternative is much less important than consistency in a voter’s view of how alternatives are related. As more and more voters’ preference orderings on alternatives depart from a shared model of the domain, unique Condorcet outcomes become increasingly unlikely.
APA, Harvard, Vancouver, ISO, and other styles
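The two central notions in the abstract above — a Condorcet winner and a top-cycle — are easy to compute from preference orderings. A minimal sketch with invented ballots (this particular profile has no Condorcet winner, so it reports a cycle):

```python
# Minimal sketch: find the Condorcet winner from voters' preference orderings,
# or report a top-cycle if no alternative beats all others pairwise.
from itertools import combinations

ballots = [                      # each ballot ranks alternatives best-to-worst (invented)
    ["A", "B", "C"], ["A", "B", "C"],
    ["B", "C", "A"], ["B", "C", "A"],
    ["C", "A", "B"],
]
alternatives = {"A", "B", "C"}

def beats(x, y):
    """True if a strict majority of ballots rank x above y."""
    wins = sum(ballot.index(x) < ballot.index(y) for ballot in ballots)
    return wins > len(ballots) / 2

condorcet = [x for x in alternatives if all(beats(x, y) for y in alternatives - {x})]
if condorcet:
    print("Condorcet winner:", condorcet[0])
else:
    pairwise = {(x, y): beats(x, y) for x, y in combinations(sorted(alternatives), 2)}
    print("No Condorcet winner (top-cycle); pairwise 'x beats y':", pairwise)
```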
6

"Cost-Sensitive Selective Classification and its Applications to Online Fraud Management." Doctoral diss., 2019. http://hdl.handle.net/2286/R.I.53598.

Full text
Abstract:
Fraud is defined as the utilization of deception for illegal gain by hiding the true nature of the activity. Organizations lose around $3.7 trillion in revenue to financial crimes and fraud worldwide, and these crimes significantly affect all levels of society. In this dissertation, I focus on credit card fraud in online transactions. Every online transaction comes with a fraud risk, and it is the merchant's liability to detect and stop fraudulent transactions. Merchants utilize various mechanisms to prevent and manage fraud, such as automated fraud detection systems and manual transaction reviews by expert fraud analysts. Many proposed solutions focus mostly on fraud detection accuracy and ignore financial considerations. Also, the highly effective manual review process is overlooked. First, I propose the Profit Optimizing Neural Risk Manager (PONRM), a selective classifier that (a) constitutes optimal collaboration between machine learning models and human expertise under industrial constraints and (b) is cost and profit sensitive. I suggest directions on how to characterize fraudulent behavior and assess the risk of a transaction. I show that my framework outperforms cost-sensitive and cost-insensitive baselines on three real-world merchant datasets. While PONRM is able to work with many supervised learners and obtain convincing results, utilizing probability outputs directly from the trained model itself can pose problems, especially in deep learning, as softmax output is not a true uncertainty measure. This phenomenon, and the wide and rapid adoption of deep learning by practitioners, brought unintended consequences in many situations, such as the infamous case of Google Photos' racist image recognition algorithm, and thus necessitated the utilization of quantified uncertainty for each prediction. There have been recent efforts towards quantifying uncertainty in conventional deep learning methods (e.g., dropout as Bayesian approximation); however, their optimal use in decision making is often overlooked and understudied. Thus, I present a mixed-integer programming framework for selective classification called MIPSC that investigates and combines model uncertainty and predictive mean to identify optimal classification and rejection regions. I also extend this framework to cost-sensitive settings (MIPCSC), focus on the critical real-world problem of online fraud management, and show that my approach significantly outperforms industry-standard methods for online fraud management in real-world settings.
APA, Harvard, Vancouver, ISO, and other styles
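The selective-classification idea at the core of this dissertation — classify when the model is confident, route the rest to expert review, and account for costs — can be sketched with a simple threshold-plus-expected-cost rule. The actual work formulates this as a mixed-integer program; the rule, costs, and transactions below are invented stand-ins:

```python
# Hypothetical sketch of cost-sensitive selective classification: the model abstains
# (routes a transaction to a human analyst) when its predictive uncertainty is high,
# otherwise picks the action with the lowest expected cost. All numbers are invented.
import numpy as np

# (fraud probability, epistemic uncertainty estimate) per transaction
predictions = np.array([[0.01, 0.01], [0.55, 0.30], [0.97, 0.02], [0.40, 0.05]])
COST_REVIEW, COST_FALSE_DECLINE, COST_MISSED_FRAUD = 2.0, 10.0, 100.0

def decide(p_fraud, uncertainty, max_uncertainty=0.2):
    expected_cost = {
        "approve": p_fraud * COST_MISSED_FRAUD,
        "decline": (1.0 - p_fraud) * COST_FALSE_DECLINE,
        "manual_review": COST_REVIEW,
    }
    # Abstain when the model is too uncertain, otherwise pick the cheapest action.
    if uncertainty > max_uncertainty:
        return "manual_review"
    return min(expected_cost, key=expected_cost.get)

for p, u in predictions:
    print(f"p_fraud={p:.2f}, uncertainty={u:.2f} -> {decide(p, u)}")
```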

Books on the topic "Uncertainty in AI"

1

Liu, Sifeng, and Yi Lin, eds. Hybrid Rough Sets and Applications in Uncertain Decision-Making. Boca Raton: Auerbach Publications, 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Pearl, Judea. Uncertainty management in AI systems (Tutorial). American Association for Artificial Intelligence, 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Durbin, Gary. Nano-Uncertainty: An AI that programs itself, a twisted killer, an uncertain outcome. Gary Durbin, 2018.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Du, Yi, and Deyi Li. Artificial Intelligence with Uncertainty. Taylor & Francis Group, 2017.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Artificial Intelligence with Uncertainty. Chapman & Hall/CRC, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Du, Yi, and Deyi Li. Artificial Intelligence with Uncertainty. Taylor & Francis Group, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Du, Yi, and Deyi Li. Artificial Intelligence with Uncertainty. Taylor & Francis Group, 2020.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Ehsani, Sepehr, Florian M. Thieringer, Philipp Plugmann, and Patrick Glauner. Future Circle of Healthcare: AI, 3D Printing, Longevity, Ethics, and Uncertainty Mitigation. Springer International Publishing AG, 2022.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Uncertainty in AI"

1

Morrissey, J. M. "Incomplete Information and Uncertainty." In AI and Cognitive Science ’90, 355–66. London: Springer London, 1991. http://dx.doi.org/10.1007/978-1-4471-3542-5_22.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Voorbraak, Frans. "Reasoning with uncertainty in AI." In Reasoning with Uncertainty in Robotics, 52–90. Berlin, Heidelberg: Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/bfb0013954.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Bobek, Szymon, and Grzegorz J. Nalepa. "Introducing Uncertainty into Explainable AI Methods." In Computational Science – ICCS 2021, 444–57. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-77980-1_34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Xia, Tong, Jing Han, and Cecilia Mascolo. "Benchmarking Uncertainty Quantification on Biosignal Classification Tasks Under Dataset Shift." In Multimodal AI in Healthcare, 347–59. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-14771-5_25.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Marcos, Diego, Jana Kierdorf, Ted Cheeseman, Devis Tuia, and Ribana Roscher. "A Whale’s Tail - Finding the Right Whale in an Uncertain World." In xxAI - Beyond Explainable AI, 297–313. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_15.

Full text
Abstract:
Explainable machine learning and uncertainty quantification have emerged as promising approaches to check the suitability and understand the decision process of a data-driven model, to learn new insights from data, but also to get more information about the quality of a specific observation. In particular, heatmapping techniques that indicate the sensitivity of image regions are routinely used in image analysis and interpretation. In this paper, we consider a landmark-based approach to generate heatmaps that help derive sensitivity and uncertainty information for an application in marine science to support the monitoring of whales. Single whale identification is important to monitor the migration of whales, to avoid double counting of individuals and to reach more accurate population estimates. Here, we specifically explore the use of fluke landmarks learned as attention maps for local feature extraction and without other supervision than the whale IDs. These individual fluke landmarks are then used jointly to predict the whale ID. With this model, we use several techniques to estimate the sensitivity and uncertainty as a function of the consensus level and stability of localisation among the landmarks. For our experiments, we use images of humpback whale flukes provided by the Kaggle Challenge “Humpback Whale Identification” and compare our results to those of a whale expert.
APA, Harvard, Vancouver, ISO, and other styles
6

Mayer, Marta Cialdea, Carla Limongelli, Andrea Orlandini, and Valentina Poggioni. "Planning under Uncertainty in Linear Time Logic." In AI*IA 2003: Advances in Artificial Intelligence, 324–35. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-39853-0_27.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Piscopo, Carlotta. "Uncertainty in AI and the Debate on Probability." In The Metaphysical Nature of the Non-adequacy Claim, 39–68. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-35359-8_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Traverso, Paolo. "Planning Under Uncertainty and Its Applications." In Reasoning, Action and Interaction in AI Theories and Systems, 213–28. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11829263_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Guo, Yang, Zhengyuan Liu, Savitha Ramasamy, and Pavitra Krishnaswamy. "Uncertainty Characterization for Predictive Analytics with Clinical Time Series Data." In Explainable AI in Healthcare and Medicine, 69–78. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-53352-6_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Hu, Chenyi, Victor S. Sheng, Ningning Wu, and Xintao Wu. "Managing Uncertainty in Crowdsourcing with Interval-Valued Labels." In Explainable AI and Other Applications of Fuzzy Techniques, 166–78. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-82099-2_15.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Uncertainty in AI"

1

Bhatt, Umang, Javier Antorán, Yunfeng Zhang, Q. Vera Liao, Prasanna Sattigeri, Riccardo Fogliato, Gabrielle Melançon, et al. "Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty." In AIES '21: AAAI/ACM Conference on AI, Ethics, and Society. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3461702.3462571.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Cassenti, Daniel, and Lance M. Kaplan. "Robust uncertainty representation in human-AI collaboration." In Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications III, edited by Tien Pham, Latasha Solomon, and Myron E. Hohil. SPIE, 2021. http://dx.doi.org/10.1117/12.2584818.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Khayut, Ben, Lina Fabri, and Maya Avikhana. "Toward General AI: Consciousness Computational Modeling Under Uncertainty." In 2020 International Conference on Mathematics and Computers in Science and Engineering (MACISE). IEEE, 2020. http://dx.doi.org/10.1109/macise49704.2020.00022.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Sarathy, Vasanth. "Learning Context-Sensitive Norms under Uncertainty." In AIES '19: AAAI/ACM Conference on AI, Ethics, and Society. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3306618.3314315.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Wu, Yongchun, Jianfeng Xun, and Zhenjian Jiang. "An Electronic Bidding System based on AI and J2EE." In 2011 International Conference on Uncertainty Reasoning and Knowledge Engineering (URKE). IEEE, 2011. http://dx.doi.org/10.1109/urke.2011.6007911.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Ali, Junaid, Preethi Lahoti, and Krishna P. Gummadi. "Accounting for Model Uncertainty in Algorithmic Discrimination." In AIES '21: AAAI/ACM Conference on AI, Ethics, and Society. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3461702.3462630.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Malinetskii, Georgii Gennadyevich, Vladimir Sergeevich Smolin, Olga Yurievna Kolesnichenko, and Tatiana Nikolaevna Zhilina. "The sociological trajectory in AI drafting: Challenges of uncertainty." In 3rd International Conference “Futurity designing. Digital reality problems”. Keldysh Institute of Applied Mathematics, 2020. http://dx.doi.org/10.20948/future-2020-22.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Martinho, Andreia, Maarten Kroesen, and Caspar Chorus. "An Empirical Approach to Capture Moral Uncertainty in AI." In AIES '20: AAAI/ACM Conference on AI, Ethics, and Society. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3375627.3375805.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Xu, Lily. "Learning and Planning Under Uncertainty for Green Security." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/695.

Full text
Abstract:
Green security concerns the protection of the world's wildlife, forests, and fisheries from poaching, illegal logging, and illegal fishing. Unfortunately, conservation efforts in green security domains are constrained by the limited availability of defenders, who must patrol vast areas to protect from attackers. Artificial intelligence (AI) techniques have been developed for green security and other security settings, such as US Coast Guard patrols and airport screenings, but effective deployment of AI in these settings requires learning adversarial behavior and planning in complex environments where the true dynamics may be unknown. My research develops novel techniques in machine learning and game theory to enable the effective development and deployment of AI in these resource-constrained settings. Notably, my work has spanned the pipeline from learning in a supervised setting and planning in stochastic environments to sequential planning in uncertain environments and deployment in the real world. The overarching goal is to optimally allocate scarce resources under uncertainty for environmental conservation.
APA, Harvard, Vancouver, ISO, and other styles
10

Valdenegro-Toro, Matias, and Daniel Saromo. "A Deeper Look into Aleatoric and Epistemic Uncertainty Disentanglement." In LatinX in AI at Computer Vision and Pattern Recognition Conference 2022. Journal of LatinX in AI Research, 2022. http://dx.doi.org/10.52591/lxai202206244.

Full text
Abstract:
Neural networks are ubiquitous in many tasks, but trusting their predictions is an open issue. Uncertainty quantification is required for many applications, and disentangled aleatoric and epistemic uncertainties are best. In this paper, we generalize methods to produce disentangled uncertainties to work with different uncertainty quantification methods, and evaluate their capability to produce disentangled uncertainties. Our results show that there is an interaction between learning aleatoric and epistemic uncertainty, which is unexpected and violates assumptions on aleatoric uncertainty; that some methods, such as Flipout, produce zero epistemic uncertainty; that aleatoric uncertainty is unreliable in the out-of-distribution setting; and that Ensembles provide the best overall disentangling quality. We also explore the error produced by the number-of-samples hyper-parameter in the sampling softmax function, recommending N > 100 samples. We expect that our formulation and results will help practitioners and researchers choose uncertainty methods and expand the use of disentangled uncertainties, as well as motivate additional research into this topic.
APA, Harvard, Vancouver, ISO, and other styles
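The aleatoric/epistemic decomposition that the paper evaluates is commonly computed from repeated stochastic forward passes of a network that predicts a mean and a variance: the spread of the predicted means approximates epistemic uncertainty, and the average predicted variance approximates aleatoric uncertainty. The numpy sketch below fakes such passes with a toy generator instead of a real network, purely to show the bookkeeping:

```python
# Minimal numpy sketch of the standard disentangling recipe: T stochastic passes
# (e.g., MC dropout or ensemble members), each returning a predicted mean and a
# predicted variance. The toy "passes" below stand in for a real network.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-2, 2, 5)

T = 200                                  # use many samples (the paper suggests > 100)
means = np.stack([np.sin(x) + rng.normal(0, 0.05, x.shape) for _ in range(T)])
variances = np.stack([np.full(x.shape, 0.04) + rng.normal(0, 0.005, x.shape) ** 2
                      for _ in range(T)])

epistemic = means.var(axis=0)            # spread of the means across passes
aleatoric = variances.mean(axis=0)       # average predicted data noise
total = epistemic + aleatoric

for xi, e, a in zip(x, epistemic, aleatoric):
    print(f"x={xi:+.1f}  epistemic={e:.4f}  aleatoric={a:.4f}")
```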

Reports on the topic "Uncertainty in AI"

1

Chen, Thomas, Biprateep Dey, Aishik Ghosh, Michael Kagan, Brian Nord, and Nesar Ramachandra. Interpretable Uncertainty Quantification in AI for HEP. Office of Scientific and Technical Information (OSTI), August 2022. http://dx.doi.org/10.2172/1886020.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Caldwell, Peter, Chris Golaz, Peter Bogenschutz, Marcus Lier-Walqui, Aaron Donahue, Chris Vogl, Barry Rountree, Aniruddha Marathe, and Tapasya Patki. AI-Assisted Parameter Tuning Will Speed Development and Clarify Uncertainty in E3SM. Office of Scientific and Technical Information (OSTI), April 2021. http://dx.doi.org/10.2172/1769663.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Wang, Dali, Shih-Chieh Kao, and Daniel Ricciuto. Development of Explainable, Knowledge-Guided AI Models to Enhance the E3SM Land Model Development and Uncertainty Quantification. Office of Scientific and Technical Information (OSTI), April 2021. http://dx.doi.org/10.2172/1769696.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Fridlind, Ann, Marcus van Lier-Walqui, Gregory Elsaesser, Maxwell Kelley, Andrew Ackerman, Gregory Cesana, and Gavin Schmidt. A Grand Challenge "Uncertainty Project" to Accelerate Advances in Earth System Predictability: AI-Enabled Concepts and Applications. Office of Scientific and Technical Information (OSTI), February 2021. http://dx.doi.org/10.2172/1769643.

Full text
APA, Harvard, Vancouver, ISO, and other styles