Journal articles on the topic 'Predictive uncertainty quantification'

Consult the top 50 journal articles for your research on the topic 'Predictive uncertainty quantification.'

1

Cacuci, Dan Gabriel. "Sensitivity Analysis, Uncertainty Quantification and Predictive Modeling of Nuclear Energy Systems." Energies 15, no. 17 (2022): 6379. http://dx.doi.org/10.3390/en15176379.

Abstract:
The Special Issue “Sensitivity Analysis, Uncertainty Quantification and Predictive Modeling of Nuclear Energy Systems” comprises nine articles that present important applications of concepts for performing sensitivity analyses and uncertainty quantifications of models of nuclear energy systems [...]
2

Csillag, Daniel, Lucas Monteiro Paes, Thiago Ramos, et al. "AmnioML: Amniotic Fluid Segmentation and Volume Prediction with Uncertainty Quantification." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 13 (2023): 15494–502. http://dx.doi.org/10.1609/aaai.v37i13.26837.

Abstract:
Accurately predicting the volume of amniotic fluid is fundamental to assessing pregnancy risks, though the task usually requires many hours of laborious work by medical experts. In this paper, we present AmnioML, a machine learning solution that leverages deep learning and conformal prediction to output fast and accurate volume estimates and segmentation masks from fetal MRIs with Dice coefficient over 0.9. Also, we make available a novel, curated dataset for fetal MRIs with 853 exams and benchmark the performance of many recent deep learning architectures. In addition, we introduce a conformal prediction tool that yields narrow predictive intervals with theoretically guaranteed coverage, thus aiding doctors in detecting pregnancy risks and saving lives. A successful case study of AmnioML deployed in a medical setting is also reported. Real-world clinical benefits include up to 20x segmentation time reduction, with most segmentations deemed by doctors as not needing any further manual refinement. Furthermore, AmnioML's volume predictions were found to be highly accurate in practice, with mean absolute error below 56mL and tight predictive intervals, showcasing its impact in reducing pregnancy complications.
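The guaranteed-coverage intervals mentioned here come from conformal prediction. As a generic sketch of the split conformal recipe for regression (a toy predictor and synthetic data stand in for the authors' AmnioML pipeline, which is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: any trained point predictor would do; np.sin stands in for it.
x_cal = rng.uniform(-3, 3, 500)                 # held-out calibration inputs
y_cal = np.sin(x_cal) + 0.2 * rng.standard_normal(500)
f = np.sin                                      # stand-in for a trained model

# Split conformal: nonconformity scores are absolute residuals on the
# calibration set; the interval half-width is their finite-sample-corrected
# (1 - alpha) quantile, giving P(y in interval) >= 1 - alpha under
# exchangeability, regardless of the underlying model.
alpha = 0.1
scores = np.abs(y_cal - f(x_cal))
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

x_new = np.linspace(-3, 3, 5)
print(np.c_[x_new, f(x_new) - q, f(x_new) + q])  # 90% prediction intervals
```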
3

Lew, Jiann-Shiun, and Jer-Nan Juang. "Robust Generalized Predictive Control with Uncertainty Quantification." Journal of Guidance, Control, and Dynamics 35, no. 3 (2012): 930–37. http://dx.doi.org/10.2514/1.54510.

4

Karimi, Hamed, and Reza Samavi. "Quantifying Deep Learning Model Uncertainty in Conformal Prediction." Proceedings of the AAAI Symposium Series 1, no. 1 (2023): 142–48. http://dx.doi.org/10.1609/aaaiss.v1i1.27492.

Abstract:
Precise estimation of predictive uncertainty in deep neural networks is a critical requirement for reliable decision-making in machine learning and statistical modeling, particularly in the context of medical AI. Conformal Prediction (CP) has emerged as a promising framework for representing the model uncertainty by providing well-calibrated confidence levels for individual predictions. However, the quantification of model uncertainty in conformal prediction remains an active research area, yet to be fully addressed. In this paper, we explore state-of-the-art CP methodologies and their theoretical foundations. We propose a probabilistic approach to quantifying the model uncertainty derived from the produced prediction sets in conformal prediction and provide certified boundaries for the computed uncertainty. By doing so, we allow model uncertainty measured by CP to be compared with other uncertainty quantification methods such as Bayesian (e.g., MC-Dropout and DeepEnsemble) and Evidential approaches.
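For context on the prediction sets this paper builds its uncertainty measure on, a minimal split conformal classification sketch is below (simulated softmax outputs stand in for a real network; the paper's certified bounds are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated calibration data: softmax outputs plus true labels.
n, k = 1000, 5
probs_cal = rng.dirichlet(np.ones(k), size=n)   # stand-in for model outputs
y_cal = np.array([rng.choice(k, p=p) for p in probs_cal])

# Score = 1 - probability of the true class; threshold at the corrected
# (1 - alpha) quantile, then include every class scoring below it.
alpha = 0.1
scores = 1.0 - probs_cal[np.arange(n), y_cal]
qhat = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Larger prediction sets signal higher model uncertainty; set-size signals
# of this kind are the raw material for CP-based uncertainty measures.
for p in rng.dirichlet(np.ones(k), size=3):
    pred_set = np.where(1.0 - p <= qhat)[0]
    print("classes:", pred_set, "set size:", len(pred_set))
```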
5

Serenko, I. A., Y. V. Dorn, S. R. Singh, and A. V. Kornaev. "Room for Uncertainty in Remaining Useful Life Estimation for Turbofan Jet Engines." Nelineinaya Dinamika 20, no. 5 (2024): 933–43. https://doi.org/10.20537/nd241218.

Abstract:
This work addresses uncertainty quantification in machine learning, treating it as a hidden parameter of the model that estimates variance in training data, thereby enhancing the interpretability of predictive models. By predicting both the target value and the certainty of the prediction, combined with deep ensembling to study model uncertainty, the proposed method aims to increase model accuracy. The approach was applied to the well-known problem of Remaining Useful Life (RUL) estimation for turbofan jet engines using NASA’s dataset. The method demonstrated competitive results compared to other commonly used tabular data processing methods, including k-nearest neighbors, support vector machines, decision trees, and [...]
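A rough PyTorch sketch of the general recipe the abstract outlines, with each network predicting a mean and a variance and deep ensembling separating the two sources of spread (architecture, data, and training details are illustrative assumptions, not the authors' setup):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

class MeanVarNet(nn.Module):
    """Predicts both the target value and the variance of the prediction."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(1, 32), nn.ReLU())
        self.mean_head = nn.Linear(32, 1)
        self.logvar_head = nn.Linear(32, 1)      # log-variance for positivity

    def forward(self, x):
        h = self.body(x)
        return self.mean_head(h), self.logvar_head(h).exp()

x = torch.linspace(-3, 3, 256).unsqueeze(1)
y = torch.sin(x) + 0.2 * torch.randn_like(x)     # synthetic training data

nll = nn.GaussianNLLLoss()                       # heteroscedastic Gaussian NLL
ensemble = [MeanVarNet() for _ in range(5)]      # deep ensembling
for net in ensemble:
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(200):
        mean, var = net(x)
        opt.zero_grad()
        nll(mean, y, var).backward()
        opt.step()

with torch.no_grad():
    means, variances = zip(*(net(x) for net in ensemble))
    means, variances = torch.stack(means), torch.stack(variances)
    mu = means.mean(0)                           # ensemble point prediction
    # Standard deep-ensemble combination (Lakshminarayanan et al.):
    # total variance = mean of member variances (aleatoric, the "certainty
    # of the prediction") + variance of member means (epistemic).
    total_var = variances.mean(0) + means.var(0)
print(mu[:3].squeeze(), total_var[:3].squeeze())
```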
6

Akitaya, Kento, and Masaatsu Aichi. "Land Subsidence Model Inversion with the Estimation of Both Model Parameter Uncertainty and Predictive Uncertainty Using an Evolutionary-Based Data Assimilation (EDA) and Ensemble Model Output Statistics (EMOS)." Water 16, no. 3 (2024): 423. http://dx.doi.org/10.3390/w16030423.

Abstract:
The nonlinear nature of land subsidence and limited observations cause premature convergence in typical data assimilation methods, leading to both underestimation and miscalculation of uncertainty in model parameters and prediction. This study focuses on a promising approach, the combination of evolutionary-based data assimilation (EDA) and ensemble model output statistics (EMOS), to investigate its performance in land subsidence modeling using EDA with a smoothing approach for parameter uncertainty quantification and EMOS for predictive uncertainty quantification. The methodology was tested on a one-dimensional subsidence model in Kawajima (Japan). The results confirmed the EDA’s robust capability: Model diversity was maintained even after 1000 assimilation cycles on the same dataset, and the obtained parameter distributions were consistent with the soil types. The ensemble predictions were converted to Gaussian predictions with EMOS using past observations statistically. The Gaussian predictions outperformed the ensemble predictions in predictive performance because EMOS compensated for the over/under-dispersive prediction spread and the short-term bias, a potential weakness for the smoothing approach. This case study demonstrates that combining EDA and EMOS contributes to groundwater management for land subsidence control, considering both the model parameter uncertainty and the predictive uncertainty.
7

Sriprasert, Soraida, and Patchanok Srisuradetchai. "Multi-K KNN regression with bootstrap aggregation: Accurate predictions and alternative prediction intervals." Edelweiss Applied Science and Technology 9, no. 5 (2025): 2750–64. https://doi.org/10.55214/25768484.v9i5.7589.

Abstract:
The k-nearest neighbors (KNN) algorithm is widely recognized for its simplicity and flexibility in modeling complex, non-linear relationships; however, standard KNN regression does not inherently provide prediction intervals (PIs), presenting a persistent challenge for uncertainty quantification. This study introduces a bootstrap-based multi-K approach specifically designed to construct robust prediction intervals in KNN regression. By systematically aggregating predictions across multiple neighborhood sizes through ensemble techniques and bootstrap resampling, the method effectively quantifies prediction uncertainty, particularly in challenging high-dimensional scenarios. Evaluations conducted on 15 diverse datasets spanning education, healthcare, chemistry, economics, and social sciences reveal that the proposed approach consistently achieves competitive predictive accuracy compared to traditional regression methods. Although traditional regression produces wider intervals with higher coverage probabilities, the proposed bootstrap-based KNN method generates notably tighter intervals, enhancing interpretability and practical utility. Despite occasionally reduced coverage probabilities, especially in high-dimensional contexts, the proposed methodology effectively balances precision and predictive coverage. Practically, this multi-K bootstrap approach provides researchers and practitioners with an effective and interpretable method for robust uncertainty quantification in complex predictive modeling tasks.
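One plausible reading of the multi-K bootstrap construction, sketched with scikit-learn (neighborhood sizes, resampling scheme, and percentile intervals here are assumptions, not the authors' exact algorithm):

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (300, 1))                 # synthetic training data
y = np.sin(X[:, 0]) + 0.2 * rng.standard_normal(300)
X_new = np.array([[0.5], [2.0]])

# For each bootstrap resample, fit KNN at several neighborhood sizes and
# pool the predictions; percentile bounds over the pooled ensemble give
# the prediction interval, and the pooled mean gives the point estimate.
ks, B, alpha = (3, 5, 7, 9, 11), 200, 0.1
preds = []
for _ in range(B):
    idx = rng.integers(0, len(X), len(X))        # bootstrap resample
    for k in ks:
        model = KNeighborsRegressor(n_neighbors=k).fit(X[idx], y[idx])
        preds.append(model.predict(X_new))
preds = np.array(preds)

lower = np.quantile(preds, alpha / 2, axis=0)
upper = np.quantile(preds, 1 - alpha / 2, axis=0)
print(np.c_[preds.mean(axis=0), lower, upper])   # estimate, lower, upper
```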
8

Chala, Ayele Tesema, and Richard Ray. "Uncertainty Quantification in Shear Wave Velocity Predictions: Integrating Explainable Machine Learning and Bayesian Inference." Applied Sciences 15, no. 3 (2025): 1409. https://doi.org/10.3390/app15031409.

Abstract:
The accurate prediction of shear wave velocity (Vs) is critical for earthquake engineering applications. However, the prediction is inevitably influenced by geotechnical variability and various sources of uncertainty. This paper investigates the effectiveness of integrating an explainable machine learning (ML) model and a Bayesian generalized linear model (GLM) to enhance both predictive accuracy and uncertainty quantification in Vs prediction. The study utilizes an Extreme Gradient Boosting (XGBoost) algorithm coupled with Shapley Additive Explanations (SHAPs) and partial dependency analysis to identify key geotechnical parameters influencing Vs predictions. Additionally, a Bayesian GLM is developed to explicitly account for uncertainties arising from geotechnical variability. The effectiveness and predictive performance of the proposed models were validated through comparison with real case scenarios. The results highlight the unique advantages of each model. The XGBoost model demonstrates good predictive performance, achieving high coefficient of determination (R2), index of agreement (IA), and Kling–Gupta efficiency (KGE) values, and low error values, while effectively explaining the impact of input parameters on Vs. In contrast, the Bayesian GLM provides probabilistic predictions with 95% credible intervals, capturing the uncertainty associated with the predictions. The integration of these two approaches creates a comprehensive framework that combines the strengths of high-accuracy ML predictions with the uncertainty quantification of Bayesian inference. This hybrid methodology offers a powerful and interpretable tool for Vs prediction, providing engineers with the confidence to make informed decisions.
9

Ayed, Safa Ben, Roozbeh Sadeghian Broujeny, and Rachid Tahar Hamza. "Remaining Useful Life Prediction with Uncertainty Quantification Using Evidential Deep Learning." Journal of Artificial Intelligence and Soft Computing Research 15, no. 1 (2024): 37–55. https://doi.org/10.2478/jaiscr-2025-0003.

Abstract:
Predictive Maintenance presents an important and challenging task in Industry 4.0. It aims to prevent premature failures and reduce costs by avoiding unnecessary maintenance tasks. This involves estimating the Remaining Useful Life (RUL), which provides critical information for decision makers and planners of future maintenance activities. However, RUL prediction is not simple due to the imperfections in monitoring data, making effective Predictive Maintenance challenging. To address this issue, this article proposes an Evidential Deep Learning (EDL) based method to predict the RUL and to quantify both data uncertainties and prediction model uncertainties. An experimental analysis conducted on the C-MAPSS dataset of aero-engine degradation affirms that the EDL-based method outperforms alternative machine learning approaches. Moreover, the accompanying uncertainty quantification analysis demonstrates sound methodology and reliable results.
10

Plesner, Andreas, Allan P. Engsig-Karup, and Hans True. "Detecting Railway Track Irregularities with Data-driven Uncertainty Quantification." Highlights of Vehicles 3, no. 1 (2025): 1–14. https://doi.org/10.54175/hveh3010001.

Abstract:
This study addresses the critical challenge of assessing railway track irregularities using advanced machine learning techniques, specifically convolutional neural networks (CNNs) and conformal prediction. Leveraging high-fidelity sensor data from high-speed trains, we propose a novel CNN model that significantly outperforms state-of-the-art results in predicting track irregularities. Our CNN architecture, optimized through extensive hyperparameter tuning, comprises multiple convolutional layers with batch normalization, Exponential Linear Unit (ELU) activation functions, and dropout regularization. This design enables the model to capture complex spatial and temporal dependencies in the train’s dynamic responses, translating them into accurate predictions of track irregularities. The model achieves a mean unsigned error of 0.31 mm on the test set, surpassing the previous state-of-the-art performance and approaching industry-standard benchmarks for track measurement accuracy. This level of precision is crucial for the early detection of track defects that could compromise safety and ride quality. To quantify uncertainty in the model’s predictions, we implement conformal prediction techniques, specifically the CV+ and CV-minmax methods. These approaches provide prediction intervals with high reliability, achieving a 97.18% coverage rate for the CV-minmax method. The resulting prediction intervals have an average width of 2.33 mm, offering a balance between precision and confidence in the model’s outputs. Notably, our model exhibits impressive computational efficiency, capable of processing over 2000 kilometers of track data per hour. This speed makes it suitable for real-time applications in continuous monitoring systems, potentially revolutionizing the approach to railway maintenance. The integration of CNNs with conformal prediction represents a significant advancement in the field of predictive maintenance for railway infrastructure. By providing both accurate predictions and well-calibrated uncertainty estimates, our approach enables more informed decision-making in track maintenance planning and safety assessments.
11

Singh, Rishabh, and Jose C. Principe. "Toward a Kernel-Based Uncertainty Decomposition Framework for Data and Models." Neural Computation 33, no. 5 (2021): 1164–98. http://dx.doi.org/10.1162/neco_a_01372.

Abstract:
This letter introduces a new framework for quantifying predictive uncertainty for both data and models that relies on projecting the data into a gaussian reproducing kernel Hilbert space (RKHS) and transforming the data probability density function (PDF) in a way that quantifies the flow of its gradient as a topological potential field (quantified at all points in the sample space). This enables the decomposition of the PDF gradient flow by formulating it as a moment decomposition problem using operators from quantum physics, specifically Schrödinger's formulation. We experimentally show that the higher-order moments systematically cluster the different tail regions of the PDF, thereby providing unprecedented discriminative resolution of data regions having high epistemic uncertainty. In essence, this approach decomposes local realizations of the data PDF in terms of uncertainty moments. We apply this framework as a surrogate tool for predictive uncertainty quantification of point-prediction neural network models, overcoming various limitations of conventional Bayesian-based uncertainty quantification methods. Experimental comparisons with some established methods illustrate performance advantages that our framework exhibits.
12

Doherty, Conor T., Weile Wang, Hirofumi Hashimoto, and Ian G. Brosnan. "A method for quantifying uncertainty in spatially interpolated meteorological data with application to daily maximum air temperature." Geoscientific Model Development 18, no. 10 (2025): 3003–16. https://doi.org/10.5194/gmd-18-3003-2025.

Abstract:
Uncertainty is inherent in gridded meteorological data, but this fact is often overlooked when data products do not provide a quantitative description of prediction uncertainty. This paper describes, applies, and evaluates a method for quantifying prediction uncertainty in spatially interpolated estimates of meteorological variables. The approach presented here, which we will refer to as DNK for “detrend, normal score, krige”, uses established methods from geostatistics to produce not only point estimates (i.e., a single number) but also predictive distributions for each location. Predictive distributions quantitatively describe uncertainty in a manner suitable for propagation into physical models that take meteorological variables as inputs. We apply the method to interpolate daily maximum near-surface air temperature (Tmax) and then validate the uncertainty quantification by comparing theoretical versus actual coverage of prediction intervals computed at locations where measurement data were held out from the estimation procedure. We find that, for most days, the predictive distributions accurately quantify uncertainty and that theoretical versus actual coverage levels of prediction intervals closely match one another. Even for days with the worst agreement, the predictive distributions meaningfully convey the relative certainty of predictions for different locations in space. After validating the methodology, we demonstrate how the magnitude of prediction uncertainty varies significantly in both space and time. Finally, we examine spatial correlation in predictions and errors using conditional Gaussian simulation to sample from the joint spatial predictive distribution. In summary, this work demonstrates the efficacy and value of describing uncertainty in gridded meteorological data products using predictive distributions.
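The validation step described here (theoretical versus actual coverage at held-out locations) reduces to a short calculation; in this sketch, synthetic Gaussian predictive distributions stand in for the DNK output:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Pretend held-out stations: each has a predictive Gaussian (mu, sd) from
# the interpolation, plus a withheld observation y.
n = 2000
mu = rng.normal(25.0, 3.0, n)              # predictive means (e.g., Tmax, deg C)
sd = np.full(n, 1.5)                       # predictive standard deviations
y = mu + sd * rng.standard_normal(n)       # withheld measurements

# For each nominal level, count how often the withheld value falls inside
# the central prediction interval; calibrated predictions match closely.
for level in (0.5, 0.8, 0.9, 0.95):
    z = stats.norm.ppf(0.5 + level / 2)
    inside = np.abs(y - mu) <= z * sd
    print(f"nominal {level:.2f}  empirical {inside.mean():.3f}")
```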
13

Chen, Peng, and Nicholas Zabaras. "Adaptive Locally Weighted Projection Regression Method for Uncertainty Quantification." Communications in Computational Physics 14, no. 4 (2013): 851–78. http://dx.doi.org/10.4208/cicp.060712.281212a.

Abstract:
We develop an efficient, adaptive locally weighted projection regression (ALWPR) framework for uncertainty quantification (UQ) of systems governed by ordinary and partial differential equations. The algorithm adaptively selects the new input points with the largest predictive variance and decides when and where to add new local models. It effectively learns the local features and accurately quantifies the uncertainty in the prediction of the statistics. The developed methodology provides predictions and confidence intervals at any query input and can deal with multi-output cases. Numerical examples are presented to show the accuracy and efficiency of the ALWPR framework including problems with non-smooth local features such as discontinuities in the stochastic space.
14

Shi, Yuanjie. "Reliable Uncertainty Quantification in Machine Learning via Conformal Prediction." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 28 (2025): 29299–300. https://doi.org/10.1609/aaai.v39i28.35227.

Abstract:
Deploying machine learning (ML) models in high-stakes domains such as healthcare and autonomous systems requires reliable uncertainty quantification (UQ) to ensure safe and accurate decision-making. Conformal prediction (CP) offers a robust, distribution-agnostic framework for UQ, providing valid prediction sets that guarantee a specified coverage probability. However, existing CP methods are often limited by assumptions that are violated in real-world scenarios, such as non-i.i.d. data, and by a lack of integration with modern machine learning workflows, particularly in large generative models. This research aims to address these limitations by advancing CP techniques to operate effectively in non-i.i.d. settings, improving predictive efficiency without sacrificing theoretical guarantees, and integrating CP directly into model training processes. These developments will enhance the practical applicability of CP for a wide range of ML tasks, enabling more reliable and interpretable models in high-stakes applications.
15

Farzana, Walia, Megan A. Witherow, Ahmed Temtam, et al. "24 Key brain region identification in obesity prediction with structural MRI and probabilistic uncertainty aware model." Journal of Clinical and Translational Science 9, s1 (2025): 9. https://doi.org/10.1017/cts.2024.715.

Abstract:
Objectives/Goals: Predictive performance alone may not determine a model’s clinical utility. Neurobiological changes in obesity alter brain structures, but traditional voxel-based morphometry is limited to group-level analysis. We propose a probabilistic model with uncertainty heatmaps to improve interpretability and personalized prediction. Methods/Study Population: The data for this study are sourced from the Human Connectome Project (HCP), with approval from the Washington University in St. Louis Institutional Review Board. We preprocessed raw T1-weighted structural MRI scans from 525 patients using an automated pipeline. The dataset is divided into training (357 cases), calibration (63 cases), and testing (105 cases). Our probabilistic model is a convolutional neural network (CNN) with dropout regularization. It generates a prediction set containing high-probability correct predictions using conformal prediction techniques, which add an uncertainty layer to the CNN. Additionally, gradient-based localization mapping is employed to identify brain regions associated with low uncertainty cases. Results/Anticipated Results: The performance of the computational conformal model is evaluated using training and testing data with varying dropout rates from 0.1 to 0.5. The best results are achieved with a dropout rate of 0.5, yielding a fivefold cross-validated average precision of 72.19% and an F1-score of 70.66%. Additionally, the model provides probabilistic uncertainty quantification along with gradient-based localization maps that identify key brain regions, including the temporal lobe, putamen, caudate, and occipital lobe, relevant to obesity prediction. Comparisons with standard segmented brain atlases and existing literature highlight that our model’s uncertainty quantification mapping offers complementary evidence linking obesity to structural brain regions. Discussion/Significance of Impact: This research offers two significant advancements. First, it introduces a probabilistic model for predicting obesity from structural magnetic resonance imaging data, focusing on uncertainty quantification for reliable results. Second, it improves interpretability using localization maps to identify key brain regions linked to obesity.
16

Omagbon, Jericho, John Doherty, Angus Yeh, et al. "Case studies of predictive uncertainty quantification for geothermal models." Geothermics 97 (December 2021): 102263. http://dx.doi.org/10.1016/j.geothermics.2021.102263.

17

Nitschke, C. T., P. Cinnella, D. Lucor, and J. C. Chassaing. "Model-form and predictive uncertainty quantification in linear aeroelasticity." Journal of Fluids and Structures 73 (August 2017): 137–61. http://dx.doi.org/10.1016/j.jfluidstructs.2017.05.007.

18

Mirzayeva, A., N. A. Slavinskaya, M. Abbasi, J. H. Starcke, W. Li, and M. Frenklach. "Uncertainty Quantification in Chemical Modeling." Eurasian Chemico-Technological Journal 20, no. 1 (2018): 33. http://dx.doi.org/10.18321/ectj706.

Abstract:
A module of PrIMe automated data-centric infrastructure, Bound-to-Bound Data Collaboration (B2BDC), was used for the analysis of systematic uncertainty and data consistency of the H2/CO reaction model (73/17). In order to achieve this purpose, a dataset of 167 experimental targets (ignition delay time and laminar flame speed) and 55 active model parameters (pre-exponent factors in the Arrhenius form of the reaction rate coefficients) was constructed. Consistency analysis of experimental data from the composed dataset revealed disagreement between models and data. Two consistency measures were applied to identify the quality of experimental targets (Quantities of Interest, QoI): a scalar consistency measure, which quantifies the tightening index of the constraints while still ensuring the existence of a set of the model parameter values whose associated modeling output predicts the experimental QoIs within the uncertainty bounds; and a newly developed method of computing the vector consistency measure (VCM), which determines the minimal bound changes for QoIs initially identified as inconsistent, each bound by its own extent, while still ensuring the existence of a set of the model parameter values whose associated modeling output predicts the experimental QoIs within the uncertainty bounds. The consistency analysis suggested that elimination of 45 experimental targets, 8 of which were self-inconsistent, would lead to a consistent dataset. After that, the feasible parameter set was constructed by reducing the uncertainties of several reaction rate coefficients. This dataset was then subjected to model optimization and analysis within the B2BDC framework. Four methods of parameter optimization were applied, including those unique to the B2BDC framework. The optimized models showed improved agreement with experimental values, as compared to the initially assembled model. Moreover, predictions for experiments not included in the initial dataset were investigated. The results demonstrate benefits of applying the B2BDC methodology for development of predictive kinetic models.
19

Albi, Giacomo, Lorenzo Pareschi, and Mattia Zanella. "Uncertainty Quantification in Control Problems for Flocking Models." Mathematical Problems in Engineering 2015 (2015): 1–14. http://dx.doi.org/10.1155/2015/850124.

Abstract:
The optimal control of flocking models with random inputs is investigated from a numerical point of view. The effect of uncertainty in the interaction parameters is studied for a Cucker-Smale type model using a generalized polynomial chaos (gPC) approach. Numerical evidence of threshold effects in the alignment dynamic due to the random parameters is given. The use of a selective model predictive control permits steering of the system towards the desired state even in unstable regimes.
20

Kumar, Bhargava, Tejaswini Kumar, Swapna Nadakuditi, Hitesh Patel, and Karan Gupta. "Comparing Conformal and Quantile Regression for Uncertainty Quantification: An Empirical Investigation." International Journal of Computing and Engineering 5, no. 5 (2024): 1–8. http://dx.doi.org/10.47941/ijce.1925.

Abstract:
Purpose: This research assesses the efficacy of conformal regression and standard quantile regression in uncertainty quantification for predictive modeling. Quantile regression estimates various quantiles within the conditional distribution, while conformal regression constructs prediction intervals with guaranteed coverage. Methodology: By training models on multiple quantile pairs and varying error rates, the analysis evaluates each method's performance. Findings: Results indicate consistent trends in coverage and prediction interval lengths, with no significant differences in performance. Quantile regression intervals lengthen toward the distribution tails, while conformal regression intervals lengthen with higher coverage. Unique contribution to theory, policy and practice: On the tested dataset, both methods perform similarly, but further testing is necessary to validate these findings across diverse datasets and conditions, considering computational efficiency and implementation ease to determine the best method for specific applications.
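To make the contrast concrete, here is a minimal version of both approaches (models, data split, and quantile pair are illustrative assumptions, not the paper's experimental setup):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (500, 1))
y = np.sin(X[:, 0]) + 0.3 * rng.standard_normal(500)
X_new = np.linspace(-3, 3, 5).reshape(-1, 1)

# Quantile regression: fit one model per quantile; the pair of predicted
# conditional quantiles forms the interval (coverage only approximate).
lo = GradientBoostingRegressor(loss="quantile", alpha=0.05).fit(X, y)
hi = GradientBoostingRegressor(loss="quantile", alpha=0.95).fit(X, y)
qr_int = np.c_[lo.predict(X_new), hi.predict(X_new)]

# Split conformal around a mean model: constant-width intervals whose
# width is calibrated on held-out residuals to guarantee 90% coverage.
mean = GradientBoostingRegressor().fit(X[:300], y[:300])
resid = np.abs(y[300:] - mean.predict(X[300:]))
n = len(resid)
q = np.quantile(resid, np.ceil((n + 1) * 0.9) / n, method="higher")
cp_int = np.c_[mean.predict(X_new) - q, mean.predict(X_new) + q]

print("quantile regression intervals:\n", qr_int)
print("split conformal intervals:\n", cp_int)
```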
21

Zhang, Haofeng. "Statistical Methodologies for Decision-Making and Uncertainty Reduction in Machine Learning." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 28 (2025): 29317–18. https://doi.org/10.1609/aaai.v39i28.35236.

Abstract:
While advances in machine learning and the expansion of massive datasets have significantly improved predictive accuracy, the translation of these predictions into actionable decisions—alongside a robust understanding of associated risks—remains underexplored. My research focuses on developing methodology and theory in data-driven decision-making and uncertainty quantification that effectively address core data challenges. This paper presents two connected pillars of my research: data-driven contextual optimization, and uncertainty quantification and reduction.
22

Kayusi, Fredrick, Petros Chavula, Gilbert Lungu, and Hockings Mambwe. "AI-Driven Climate Modeling: Validation and Uncertainty Mapping – Methodologies and Challenges." LatIA 3 (March 25, 2025): 332. https://doi.org/10.62486/latia2025332.

Abstract:
Climate models are fundamental for predicting future climate conditions and guiding mitigation and adaptation strategies. This study aims to enhance the accuracy and reliability of climate modeling by integrating artificial intelligence (AI) techniques for validation and uncertainty mapping. AI-driven approaches, including machine learning-based parameterization, ensemble simulations, and probabilistic modeling, offer improvements in model precision, quality assurance, and uncertainty quantification. A systematic review methodology was applied, selecting peer-reviewed studies from 2000 to 2023 that focused on climate modeling, validation, and uncertainty estimation. Data sources included observational records, satellite measurements, and global reanalysis datasets. The study analyzed key AI-driven methodologies used for improving model accuracy, including statistical downscaling techniques and deep learning-based uncertainty prediction frameworks. Findings indicate that AI-enhanced models significantly improve climate projections by refining parameterization, enhancing bias correction, and optimizing uncertainty quantification. Machine learning applications facilitate more accurate predictions of meteorological phenomena, including temperature and precipitation variability. However, challenges remain in addressing observational biases, inter-model inconsistencies, and computational limitations. The study concludes that AI-driven advancements provide critical improvements in climate model reliability, yet ongoing refinements are necessary to address persistent uncertainties. Enhancing observational datasets, refining computational techniques, and strengthening model validation frameworks will be essential for reducing uncertainty. Effective communication of climate model outputs, including uncertainty mapping, is crucial for supporting informed policy decisions. AI-driven climate modeling is a rapidly evolving field, and continuous innovation will be key to improving predictive accuracy and resilience in climate adaptation strategies.
23

Gorle, Catherine. "Improving the predictive capability of building simulations using uncertainty quantification." Science and Technology for the Built Environment 28, no. 5 (2022): 575–76. http://dx.doi.org/10.1080/23744731.2022.2079261.

24

Gerber, Eric A. E., and Bruce A. Craig. "A mixed effects multinomial logistic-normal model for forecasting baseball performance." Journal of Quantitative Analysis in Sports 17, no. 3 (2021): 221–39. http://dx.doi.org/10.1515/jqas-2020-0007.

Abstract:
Prediction of player performance is a key component in the construction of baseball team rosters. As a result, most prediction models are the proprietary property of team or industrial sports entities, and little is known about them. Of those models that have been published, the main focus has been to separately model each outcome with nearly no emphasis on uncertainty quantification. This research introduces a joint modeling approach to predict seasonal plate appearance outcome vectors using a mixed-effects multinomial logistic-normal model. This model accounts for positive and negative correlations between outcomes, both across and within player seasons, and provides a joint posterior predictive outcome distribution from which uncertainty can be quantified. It is applied to the important, yet unaddressed, problem of predicting performance for players moving between the Japanese (NPB) and American (MLB) major leagues.
25

Portela, Alberto, Julio R. Banga, and Marcos Matabuena. "Conformal prediction for uncertainty quantification in dynamic biological systems." PLOS Computational Biology 21, no. 5 (2025): e1013098. https://doi.org/10.1371/journal.pcbi.1013098.

Abstract:
Uncertainty quantification (UQ) is the process of systematically determining and characterizing the degree of confidence in computational model predictions. In systems biology, and particularly with dynamic models, UQ is critical due to the nonlinearities and parameter sensitivities that influence the behavior of complex biological systems. Addressing these issues through robust UQ enables a deeper understanding of system dynamics and more reliable extrapolation beyond observed conditions. Many state-of-the-art UQ approaches in this field are grounded in Bayesian statistical methods. While these frameworks naturally incorporate uncertainty quantification, they often require the specification of parameter distributions as priors and may impose parametric assumptions that do not always reflect biological reality. Additionally, Bayesian methods can be computationally expensive, posing significant challenges when dealing with large-scale models and seeking rapid, reliable uncertainty calibration. As an alternative, we propose using conformal prediction methods and introduce two novel algorithms designed for dynamic biological systems. These approaches can provide non-asymptotic guarantees, improving robustness and scalability across various applications, even when the predictive models are misspecified. Through several illustrative scenarios, we demonstrate that these conformal algorithms can serve as powerful complements—or even alternatives—to conventional Bayesian methods, delivering effective uncertainty quantification for predictive tasks in systems biology.
26

Ma, Junwei, Xiao Liu, Xiaoxu Niu, et al. "Forecasting of Landslide Displacement Using a Probability-Scheme Combination Ensemble Prediction Technique." International Journal of Environmental Research and Public Health 17, no. 13 (2020): 4788. http://dx.doi.org/10.3390/ijerph17134788.

Abstract:
Data-driven models have been extensively employed in landslide displacement prediction. However, predictive uncertainty, which consists of input uncertainty, parameter uncertainty, and model uncertainty, is usually disregarded in deterministic data-driven modeling, and point estimates are separately presented. In this study, a probability-scheme combination ensemble prediction that employs quantile regression neural networks and kernel density estimation (QRNNs-KDE) is proposed for robust and accurate prediction and uncertainty quantification of landslide displacement. In the ensemble model, QRNNs serve as base learning algorithms to generate multiple base learners. Final ensemble prediction is obtained by integration of all base learners through a probability combination scheme based on KDE. The Fanjiaping landslide in the Three Gorges Reservoir area (TGRA) was selected as a case study to explore the performance of the ensemble prediction. Based on long-term (2006–2018) and near real-time monitoring data, a comprehensive analysis of the deformation characteristics was conducted for fully understanding the triggering factors. The experimental results indicate that the QRNNs-KDE approach can perform predictions with perfect performance and outperform the traditional backpropagation (BP), radial basis function (RBF), extreme learning machine (ELM), support vector machine (SVM) methods, bootstrap-extreme learning machine-artificial neural network (bootstrap-ELM-ANN), and Copula-kernel-based support vector machine quantile regression (Copula-KSVMQR). The proposed QRNNs-KDE approach has significant potential in medium-term to long-term horizon forecasting and quantification of uncertainty.
27

Lidder, Divya, Kathryn Morse, Bridget Sullivan, Wei Qian, Chenglin Miao, and Mengdi Huai. "Neuron Explanations for Conformal Prediction (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 28 (2025): 29412–14. https://doi.org/10.1609/aaai.v39i28.35270.

Abstract:
Conformal prediction (CP) has gained prominence as a popular technique for uncertainty quantification in deep neural networks (DNNs), providing statistically rigorous uncertainty sets. However, existing CP methods fail to clarify the origins of predictive uncertainties. While neuron-level interpretability has been effective in revealing the internal mechanisms of DNNs, explaining CP at the neuron level remains unexplored. Nonetheless, generating neuron explanations for CP is challenging due to the discrete and non-differentiable characteristics of CP, and the labor-intensive process of semantic annotation. To address these limitations, this paper proposes a novel neuron explanation approach for CP by identifying neurons crucial for understanding predictive uncertainties and automatically generating semantic explanations. The effectiveness of the proposed method is validated through both qualitative and quantitative experiments.
28

Feng, Jinchao, Joshua L. Lansford, Markos A. Katsoulakis, and Dionisios G. Vlachos. "Explainable and trustworthy artificial intelligence for correctable modeling in chemical sciences." Science Advances 6, no. 42 (2020): eabc3204. http://dx.doi.org/10.1126/sciadv.abc3204.

Abstract:
Data science has primarily focused on big data, but for many physics, chemistry, and engineering applications, data are often small, correlated and, thus, low dimensional, and sourced from both computations and experiments with various levels of noise. Typical statistics and machine learning methods do not work for these cases. Expert knowledge is essential, but a systematic framework for incorporating it into physics-based models under uncertainty is lacking. Here, we develop a mathematical and computational framework for probabilistic artificial intelligence (AI)–based predictive modeling combining data, expert knowledge, multiscale models, and information theory through uncertainty quantification and probabilistic graphical models (PGMs). We apply PGMs to chemistry specifically and develop predictive guarantees for PGMs generally. Our proposed framework, combining AI and uncertainty quantification, provides explainable results leading to correctable and, eventually, trustworthy models. The proposed framework is demonstrated on a microkinetic model of the oxygen reduction reaction.
29

Zgraggen, Jannik, Gianmarco Pizza, and Lilach Goren Huber. "Uncertainty Informed Anomaly Scores with Deep Learning: Robust Fault Detection with Limited Data." PHM Society European Conference 7, no. 1 (2022): 530–40. http://dx.doi.org/10.36001/phme.2022.v7i1.3342.

Abstract:
Quantifying the predictive uncertainty of a model is an important ingredient in data-driven decision making. Uncertainty quantification has been gaining interest especially for deep learning models, which are often hard to justify or explain. Various techniques for deep learning based uncertainty estimates have been developed primarily for image classification and segmentation, but also for regression and forecasting tasks. Uncertainty quantification for anomaly detection tasks is still rather limited for image data and has not yet been demonstrated for machine fault detection in PHM applications. In this paper we suggest an approach to derive an uncertainty-informed anomaly score for regression models trained with normal data only. The score is derived using a deep ensemble of probabilistic neural networks for uncertainty quantification. Using an example of wind-turbine fault detection, we demonstrate the superiority of the uncertainty-informed anomaly score over the conventional score. The advantage is particularly clear in an “out-of-distribution” scenario, in which the model is trained with limited data which does not represent all normal regimes that are observed during model deployment.
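One natural way to fold predictive uncertainty into an anomaly score, assuming the ensemble's per-member predictive means and variances are already available (a sketch of the idea, not necessarily the authors' exact score):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated outputs of an ensemble of M probabilistic regressors trained
# on normal data and evaluated on n new samples.
M, n = 5, 1000
mu = rng.normal(0.0, 0.05, (M, n))          # member predictive means
var = rng.uniform(0.01, 0.04, (M, n))       # member (aleatoric) variances
y = rng.normal(0.0, 0.2, n)                 # observed signal

mu_bar = mu.mean(0)
total_var = var.mean(0) + mu.var(0)         # aleatoric + epistemic

# Conventional score: raw residual. Uncertainty-informed score: residual
# normalized by predictive spread, so deviations in regimes the model is
# unsure about (e.g., out-of-distribution operation) are discounted rather
# than immediately flagged as faults.
score_raw = np.abs(y - mu_bar)
score_ui = score_raw / np.sqrt(total_var)
print(score_raw[:5].round(3), score_ui[:5].round(3))
```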
30

Kefalas, Marios, Bas van Stein, Mitra Baratchi, Asteris Apostolidis, and Thomas Baeck. "End-to-End Pipeline for Uncertainty Quantification and Remaining Useful Life Estimation: An Application on Aircraft Engines." PHM Society European Conference 7, no. 1 (2022): 245–60. http://dx.doi.org/10.36001/phme.2022.v7i1.3317.

Abstract:
Estimating the remaining useful life (RUL) of an asset lies at the heart of prognostics and health management (PHM) of many operations-critical industries such as aviation. Modern methods of RUL estimation adopt techniques from deep learning (DL). However, most of these contemporary techniques deliver only single-point estimates for the RUL without reporting on the confidence of the prediction. This practice usually provides overly confident predictions that can have severe consequences in operational disruptions or even safety. To address this issue, we propose a technique for uncertainty quantification (UQ) based on Bayesian deep learning (BDL). The hyperparameters of the framework are tuned using a novel bi-objective Bayesian optimization method with objectives the predictive performance and predictive uncertainty. The method also integrates the data pre-processing steps into the hyperparameter optimization (HPO) stage, models the RUL as a Weibull distribution, and returns the survival curves of the monitored assets to allow informed decision-making. We validate this method on the widely used C-MAPSS dataset against a single-objective HPO baseline that aggregates the two objectives through the harmonic mean (HM). We demonstrate the existence of trade-offs between the predictive performance and the predictive uncertainty and observe that the bi-objective HPO returns a larger number of hyperparameter configurations compared to the single-objective baseline. Furthermore, we see that with the proposed approach, it is possible to configure models for RUL estimation that exhibit better or comparable performance to the single-objective baseline when validated on the test sets.
31

Banerjee, Sourav. "Uncertainty Quantification Driven Predictive Multi-Scale Model for Synthesis of Mycotoxins." Computational Biology and Bioinformatics 2, no. 1 (2014): 7. http://dx.doi.org/10.11648/j.cbb.20140201.12.

32

Riley, Matthew E., and Ramana V. Grandhi. "Quantification of model-form and predictive uncertainty for multi-physics simulation." Computers & Structures 89, no. 11-12 (2011): 1206–13. http://dx.doi.org/10.1016/j.compstruc.2010.10.004.

33

Olalusi, Oladimeji B., and Panagiotis Spyridis. "Probabilistic Studies on the Shear Strength of Slender Steel Fiber Reinforced Concrete Structures." Applied Sciences 10, no. 19 (2020): 6955. http://dx.doi.org/10.3390/app10196955.

Abstract:
Shear failure is a brittle and undesirable mode of failure in reinforced concrete structures. Many of the existing shear design equations for steel fiber reinforced concrete (SFRC) beams include significant uncertainty due to the failure in accurately predicting the true shear capacity. Given these, adequate quantification and description of model uncertainties considering the systematic variation in the model prediction and measured shear capacity is crucial for reliability-based investigation. Reliability analysis must account for model uncertainties in order to predict the probability of failure under prescribed limit states. This study focuses on the quantification and description of model uncertainty related to the current shear resistance predictive models for SFRC beams without shear reinforcement. The German (DAfStB) model displayed the lowest bias and dispersion, whereas the fib Model 2010 and the Bernat et al. model displayed the highest bias and dispersion. The inconsistencies observed in the resistance model uncertainties at the variation of shear span to effective depth ratio are a major cause for concern, and differentiation with respect to this parameter is advised. Finally, in line with the EN 1990 semi-probabilistic approach for reliability-based design, the global partial safety factors related to model uncertainties in the shear resistance prediction of SFRC beams are proposed.
34

Fröhlich, Alek, Thiago Ramos, Gustavo Motta Cabello Dos Santos, Isabela Panzeri Carlotti Buzatto, Rafael Izbicki, and Daniel Guimarães Tiezzi. "PersonalizedUS: Interpretable Breast Cancer Risk Assessment with Local Coverage Uncertainty Quantification." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 27 (2025): 27998–8006. https://doi.org/10.1609/aaai.v39i27.35017.

Abstract:
Correctly assessing the malignancy of breast lesions identified during ultrasound examinations is crucial for effective clinical decision-making. However, the current "gold standard" relies on manual BI-RADS scoring by clinicians, often leading to unnecessary biopsies and a significant mental health burden on patients and their families. In this paper, we introduce PersonalizedUS, an interpretable machine learning system that leverages recent advances in conformal prediction to provide precise and personalized risk estimates with local coverage guarantees and sensitivity, specificity, and predictive values above 0.9 across various threshold levels. In particular, we identify meaningful lesion subgroups where distribution-free, model-agnostic conditional coverage holds, with approximately 90% of our prediction sets containing only the ground truth in most lesion subgroups, thus explicitly characterizing for which patients the model is most suitably applied. Moreover, we make available a curated tabular dataset of 1936 biopsied breast lesions from a recent observational multicenter study and benchmark the performance of several state-of-the-art learning algorithms. We also report a successful case study of the deployed system in the same multicenter context. Concrete clinical benefits include up to a 65% reduction in requested biopsies among BI-RADS 4a and 4b lesions, with minimal to no missed cancer cases.
35

Sætrom, Jon, Joakim Hove, Jan-Arild Skjervheim, and Jon Gustav Vabø. "Improved Uncertainty Quantification in the Ensemble Kalman Filter Using Statistical Model-Selection Techniques." SPE Journal 17, no. 01 (2012): 152–62. http://dx.doi.org/10.2118/145192-pa.

Abstract:
The ensemble Kalman filter (EnKF) is a sequential Monte Carlo method for solving nonlinear spatiotemporal inverse problems, such as petroleum-reservoir evaluation, in high dimensions. Although the EnKF has seen successful applications in numerous areas, the classical EnKF algorithm can severely underestimate the prediction uncertainty. This can lead to biased production forecasts and an ensemble collapsing into a single realization. In this paper, we combine a previously suggested EnKF scheme based on dimension reduction in the data space, with an automatic cross-validation (CV) scheme to select the subspace dimension. The properties of both the dimension reduction and the CV scheme are well known in the statistical literature. In an EnKF setting, the former can reduce the effects caused by collinear ensemble members, while the latter can guard against model overfitting by evaluating the predictive capabilities of the EnKF scheme. The model-selection criterion traditionally used for determining the subspace dimension, on the other hand, does not take the predictive power of the EnKF scheme into account, and can potentially lead to severe problems of model overfitting. A reservoir case study is used to demonstrate that the CV scheme can substantially improve the reservoir predictions with associated uncertainty estimates.
36

Cui, Xinye, Houpu Li, Yanting Yu, Shaofeng Bian, and Guojun Zhai. "A Hybrid Dropout Method for High-Precision Seafloor Topography Reconstruction and Uncertainty Quantification." Applied Sciences 15, no. 11 (2025): 6113. https://doi.org/10.3390/app15116113.

Abstract:
Seafloor topography super-resolution reconstruction is critical for marine resource exploration, geological monitoring, and navigation safety. However, sparse acoustic data frequently result in the loss of high-frequency details, and traditional deep learning models exhibit limitations in uncertainty quantification, impeding their practical application. To address these challenges, this study systematically investigates the combined effects of various regularization strategies and uncertainty quantification modules. It proposes a hybrid dropout model that jointly optimizes high-precision reconstruction and uncertainty estimation. The model integrates residual blocks, squeeze-and-excitation (SE) modules, and a multi-scale feature extraction network while employing Monte Carlo Dropout (MC-Dropout) alongside heteroscedastic noise modeling to dynamically gate the uncertainty quantification process. By adaptively modulating the regularization strength based on feature activations, the model preserves high-frequency information and accurately estimates predictive uncertainty. The experimental results demonstrate significant improvements in the Root Mean Square Error (RMSE), Mean Absolute Error (MAE), and Peak Signal-to-Noise Ratio (PSNR). Compared to conventional dropout architectures, the proposed method achieves a PSNR increase of 46.5% to 60.5% in test regions with a marked reduction in artifacts. Overall, the synergistic effect of employed regularization strategies and uncertainty quantification modules substantially enhances detail recovery and robustness in complex seafloor topography reconstruction, offering valuable theoretical insights and practical guidance for further optimization of deep learning models in challenging applications.
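For reference, the basic MC-Dropout mechanism the hybrid model builds on, in generic PyTorch (the paper's gating, SE modules, and heteroscedastic noise modeling are not reproduced): dropout is kept stochastic at inference, and the spread of repeated forward passes is read as epistemic uncertainty.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Illustrative network; the paper's residual and multi-scale blocks are omitted.
net = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 1),
)

x = torch.randn(8, 16)                     # stand-in input features
net.train()                                # keeps Dropout active ("MC" mode)
with torch.no_grad():
    samples = torch.stack([net(x) for _ in range(100)])

mean = samples.mean(dim=0)                 # predictive mean
std = samples.std(dim=0)                   # epistemic uncertainty estimate
print(mean.squeeze(), std.squeeze())
```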
37

Ding, Jing, Yizhuang David Wang, Saqib Gulzar, Youngsoo Richard Kim, and B. Shane Underwood. "Uncertainty Quantification of Simplified Viscoelastic Continuum Damage Fatigue Model using the Bayesian Inference-Based Markov Chain Monte Carlo Method." Transportation Research Record: Journal of the Transportation Research Board 2674, no. 4 (2020): 247–60. http://dx.doi.org/10.1177/0361198120910149.

Abstract:
The simplified viscoelastic continuum damage model (S-VECD) has been widely accepted as a computationally efficient and a rigorous mechanistic model to predict the fatigue resistance of asphalt concrete. It operates in a deterministic framework, but in actual practice, there are multiple sources of uncertainty such as specimen preparation errors and measurement errors which need to be probabilistically characterized. In this study, a Bayesian inference-based Markov Chain Monte Carlo method is used to quantify the uncertainty in the S-VECD model. The dynamic modulus and cyclic fatigue test data from 32 specimens are used for parameter estimation and predictive envelope calculation of the dynamic modulus, damage characterization and failure criterion model. These parameter distributions are then propagated to quantify the uncertainty in fatigue prediction. The predictive envelope for each model is further used to analyze the decrease in variance with the increase in the number of replicates. Finally, the proposed methodology is implemented to compare three asphalt concrete mixtures from standard testing. The major findings of this study are: (1) the parameters in the dynamic modulus and damage characterization model have relatively strong correlation which indicates the necessity of Bayesian techniques; (2) the uncertainty of the damage characteristic curve for a single specimen propagated from parameter uncertainties of the dynamic modulus model is negligible compared to the difference in the replicates; (3) four replicates of the cyclic fatigue test are recommended considering the balance between the uncertainty of fatigue prediction and the testing efficiency; and (4) more replicates are needed to confidently detect the difference between different mixtures if their fatigue performance is close.
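The inference machinery in miniature: a random-walk Metropolis sampler on a toy model, with posterior draws propagated into a predictive envelope. The S-VECD model itself is not reproduced; everything below is an illustrative stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "measurements" from a simple model y = a*x + b + noise.
x = np.linspace(0, 1, 40)
y = 2.0 * x + 0.5 + 0.1 * rng.standard_normal(40)

def log_post(theta, sigma=0.1):
    a, b = theta
    resid = y - (a * x + b)
    return -0.5 * np.sum(resid**2) / sigma**2   # Gaussian likelihood, flat priors

theta = np.array([0.0, 0.0])
lp = log_post(theta)
chain = []
for _ in range(20000):
    prop = theta + 0.05 * rng.standard_normal(2)   # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:       # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain.append(theta)
chain = np.array(chain[5000:])                     # discard burn-in

# Predictive envelope: pointwise quantiles of model output over draws.
preds = chain[:, :1] * x + chain[:, 1:]
print(np.quantile(preds, [0.025, 0.975], axis=0)[:, :3])
```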
38

Dogulu, N., P. López López, D. P. Solomatine, A. H. Weerts, and D. L. Shrestha. "Estimation of predictive hydrologic uncertainty using quantile regression and UNEEC methods and their comparison on contrasting catchments." Hydrology and Earth System Sciences Discussions 11, no. 9 (2014): 10179–233. http://dx.doi.org/10.5194/hessd-11-10179-2014.

Abstract:
In operational hydrology, estimation of predictive uncertainty of hydrological models used for flood modelling is essential for risk-based decision making for flood warning and emergency management. In the literature, there exists a variety of methods analyzing and predicting uncertainty. However, case studies comparing performance of these methods, most particularly predictive uncertainty methods, are limited. This paper focuses on two predictive uncertainty methods that differ in their methodological complexity: quantile regression (QR) and UNcertainty Estimation based on local Errors and Clustering (UNEEC), aiming at identifying possible advantages and disadvantages of these methods (both estimating residual uncertainty) based on their comparative performance. We test these two methods on several catchments (from the UK) that vary in their hydrological characteristics and models. Special attention is given to the errors for high flow/water level conditions. Furthermore, normality of model residuals is discussed in view of the clustering approach employed within the framework of the UNEEC method. It is found that basin lag time and forecast lead time have great impact on quantification of uncertainty (in the form of two quantiles) and achievement of normality in model residuals' distribution. In general, uncertainty analysis results from different case studies indicate that both methods give similar results. However, it is also shown that the UNEEC method provides better performance than QR for small catchments with changing hydrological dynamics, i.e. rapid response catchments. We recommend that more case studies of catchments from regions of distinct hydrologic behaviour, with diverse climatic conditions, and having various hydrological features be tested.
39

Karimanzira, Divas. "Probabilistic Uncertainty Consideration in Regionalization and Prediction of Groundwater Nitrate Concentration." Knowledge 4, no. 4 (2024): 462–80. http://dx.doi.org/10.3390/knowledge4040025.

Full text
Abstract:
In this study, we extend our previous work on a two-dimensional convolutional neural network (2DCNN) for the spatial prediction of groundwater nitrate, focusing on improved uncertainty quantification. Our enhanced model incorporates a fully probabilistic Bayesian framework and a structure aimed at optimizing both point predictions and predictive intervals (PIs). We implemented the Prediction Interval Validation and Estimation Network based on Quality Definition (2DCNN-QD) to refine the accuracy of probabilistic predictions and reduce the width of the prediction intervals. Applied to a model region in Germany, the approach yields an 18% improvement in prediction interval width. While traditional Bayesian CNN models may produce broader prediction intervals to adequately capture uncertainties, the 2DCNN-QD method prioritizes quality-driven interval optimization, resulting in narrower prediction intervals without sacrificing coverage probability. Notably, the approach is nonparametric, allowing it to be used effectively across a range of real-world scenarios.
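The quality-driven idea the abstract alludes to can be sketched as an objective that trades interval width against a coverage penalty, in the spirit of quality-driven PI losses (e.g. Pearce et al., 2018); the constants and toy bands below are assumptions for illustration.

```python
import numpy as np

def quality_driven_loss(y, lower, upper, target_coverage=0.95, lam=10.0):
    """Simplified quality-driven interval objective: mean width of the
    capturing intervals plus a squared penalty for under-coverage."""
    captured = (y >= lower) & (y <= upper)
    picp = captured.mean()                       # PI coverage probability
    mpiw = np.mean((upper - lower)[captured])    # mean width of capturing PIs
    n = y.size
    penalty = lam * n / (target_coverage * (1 - target_coverage)) \
              * max(0.0, target_coverage - picp) ** 2
    return mpiw + penalty

# Toy check: the narrower band wins as long as coverage is maintained.
rng = np.random.default_rng(2)
y = rng.normal(0, 1, 1000)
print(quality_driven_loss(y, np.full(1000, -2.2), np.full(1000, 2.2)))
print(quality_driven_loss(y, np.full(1000, -3.5), np.full(1000, 3.5)))
```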
APA, Harvard, Vancouver, ISO, and other styles
40

Heringhaus, Monika E., Yi Zhang, André Zimmermann, and Lars Mikelsons. "Towards Reliable Parameter Extraction in MEMS Final Module Testing Using Bayesian Inference." Sensors 22, no. 14 (2022): 5408. http://dx.doi.org/10.3390/s22145408.

Full text
Abstract:
In micro-electro-mechanical systems (MEMS) testing, high overall precision and reliability are essential. Because runtime efficiency is an additional requirement, machine learning methods have been investigated in recent years. However, these methods often come with inherent challenges concerning uncertainty quantification and guarantees of reliability. The goal of this paper is therefore to present a new machine learning approach for MEMS testing based on Bayesian inference that determines whether an estimate is trustworthy. The overall predictive performance as well as the uncertainty quantification are evaluated with four methods: a Bayesian neural network, a mixture density network, a probabilistic Bayesian neural network and BayesFlow. They are investigated under variation in training set size, different additive noise levels, and an out-of-distribution condition, namely variation in the damping factor of the MEMS device. Furthermore, epistemic and aleatoric uncertainties are evaluated and discussed to encourage thorough inspection of models before deployment, striving for reliable and efficient parameter estimation during final module testing of MEMS devices. BayesFlow consistently outperformed the other methods in predictive performance. As the probabilistic Bayesian neural network enables the distinction between epistemic and aleatoric uncertainty, their shares of the total uncertainty were studied intensively.
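A common way to separate the two uncertainty types mentioned above is the law of total variance over stochastic forward passes: aleatoric uncertainty is the mean of the per-pass predictive variances, and epistemic uncertainty is the variance of the per-pass means. A minimal sketch with simulated passes (the numbers are placeholders, not MEMS data):

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for T stochastic forward passes of a probabilistic network
# (e.g. Monte Carlo samples of Bayesian weights): each pass returns a
# predictive mean and variance per test point.
T, N = 100, 5
mu = rng.normal(0.0, 0.1, size=(T, N)) + np.linspace(1, 2, N)  # per-pass means
sigma2 = np.abs(rng.normal(0.05, 0.01, size=(T, N)))           # per-pass variances

# Law of total variance: total = E[var] (aleatoric) + Var[mean] (epistemic).
aleatoric = sigma2.mean(axis=0)
epistemic = mu.var(axis=0)
total = aleatoric + epistemic
print("aleatoric:", aleatoric.round(3))
print("epistemic:", epistemic.round(3))
print("total    :", total.round(3))
```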
APA, Harvard, Vancouver, ISO, and other styles
41

Cacuci, Dan G. "TOWARDS OVERCOMING THE CURSE OF DIMENSIONALITY IN PREDICTIVE MODELLING AND UNCERTAINTY QUANTIFICATION." EPJ Web of Conferences 247 (2021): 00002. http://dx.doi.org/10.1051/epjconf/202124700002.

Full text
Abstract:
This invited presentation summarizes new methodologies developed by the author for performing high-order sensitivity analysis, uncertainty quantification and predictive modeling. The presentation commences by summarizing the newly developed 3rd-Order Adjoint Sensitivity Analysis Methodology (3rd-ASAM) for linear systems, which overcomes the “curse of dimensionality” for sensitivity analysis and uncertainty quantification of a large variety of model responses of interest in reactor physics systems. The use of the exact expressions of the 2nd- and 3rd-order sensitivities computed using the 3rd-ASAM is subsequently illustrated by presenting 3rd-order formulas for the first three cumulants of the response distribution, for quantifying response uncertainties (covariance, skewness) stemming from model parameter uncertainties. The 1st-, 2nd-, and 3rd-order sensitivities, together with the formulas for the first three cumulants of the response distribution, are subsequently used in the newly developed 2nd/3rd-BERRU-PM (“Second/Third-Order Best-Estimated Results with Reduced Uncertainties Predictive Modeling”), which aims at overcoming the curse of dimensionality in predictive modeling. The 2nd/3rd-BERRU-PM uses the maximum entropy principle to eliminate the need for introducing a subjective user-defined “cost functional quantifying the discrepancies between measurements and computations.” By utilizing the 1st-, 2nd- and 3rd-order response sensitivities to combine experimental and computational information in the joint phase-space of responses and model parameters, the 2nd/3rd-BERRU-PM generalizes the current data adjustment/assimilation methodologies. Even though all of the 2nd- and 3rd-order sensitivities are comprised in the mathematical framework of the 2nd/3rd-BERRU-PM formalism, the computations underlying the 2nd/3rd-BERRU-PM require the inversion of a single matrix of dimensions equal to the number of considered responses, thus overcoming the curse of dimensionality which would affect the inversion of Hessian and higher-order matrices in the parameter space.
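For orientation, the sketch below evaluates the standard low-order cumulant formulas for a quadratic response of Gaussian parameters, E[R] ≈ R0 + ½ tr(QC) and Var(R) ≈ SᵀCS + ½ tr((QC)²); the sensitivity and covariance values are made up for illustration and are not taken from the 3rd-ASAM papers.

```python
import numpy as np

# Second-order propagation of Gaussian parameter uncertainty through a
# response R(a) ~ R0 + S.da + 0.5 da'Q da, using standard formulas for
# Gaussian inputs. S, Q, C are placeholder values, not 3rd-ASAM results.
S = np.array([0.8, -0.3, 0.1])           # 1st-order sensitivities dR/da_i
Q = np.array([[0.2, 0.0, 0.1],
              [0.0, -0.1, 0.0],
              [0.1, 0.0, 0.05]])         # 2nd-order sensitivities (symmetric)
C = np.diag([0.01, 0.04, 0.02])          # parameter covariance matrix
R0 = 1.0

QC = Q @ C
mean_R = R0 + 0.5 * np.trace(QC)                        # expected response
var_R = S @ C @ S + 0.5 * np.trace(QC @ QC)             # response variance
mu3_R = 3 * S @ C @ Q @ C @ S + np.trace(QC @ QC @ QC)  # third central moment
print(mean_R, var_R, mu3_R / var_R ** 1.5)              # skewness = mu3 / var^1.5
```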
APA, Harvard, Vancouver, ISO, and other styles
42

Cacuci, Dan G. "TOWARDS OVERCOMING THE CURSE OF DIMENSIONALITY IN PREDICTIVE MODELLING AND UNCERTAINTY QUANTIFICATION." EPJ Web of Conferences 247 (2021): 20005. http://dx.doi.org/10.1051/epjconf/202124720005.

Full text
Abstract:
This invited presentation summarizes new methodologies developed by the author for performing high-order sensitivity analysis, uncertainty quantification and predictive modeling. The presentation commences by summarizing the newly developed 3rd-Order Adjoint Sensitivity Analysis Methodology (3rd-ASAM) for linear systems, which overcomes the “curse of dimensionality” for sensitivity analysis and uncertainty quantification of a large variety of model responses of interest in reactor physics systems. The use of the exact expressions of the 2nd- and 3rd-order sensitivities computed using the 3rd-ASAM is subsequently illustrated by presenting 3rd-order formulas for the first three cumulants of the response distribution, for quantifying response uncertainties (covariance, skewness) stemming from model parameter uncertainties. The 1st-, 2nd-, and 3rd-order sensitivities, together with the formulas for the first three cumulants of the response distribution, are subsequently used in the newly developed 2nd/3rd-BERRU-PM (“Second/Third-Order Best-Estimated Results with Reduced Uncertainties Predictive Modeling”), which aims at overcoming the curse of dimensionality in predictive modeling. The 2nd/3rd-BERRU-PM uses the maximum entropy principle to eliminate the need for introducing a subjective user-defined “cost functional quantifying the discrepancies between measurements and computations.” By utilizing the 1st-, 2nd- and 3rd-order response sensitivities to combine experimental and computational information in the joint phase-space of responses and model parameters, the 2nd/3rd-BERRU-PM generalizes the current data adjustment/assimilation methodologies. Even though all of the 2nd- and 3rd-order sensitivities are comprised in the mathematical framework of the 2nd/3rd-BERRU-PM formalism, the computations underlying the 2nd/3rd-BERRU-PM require the inversion of a single matrix of dimensions equal to the number of considered responses, thus overcoming the curse of dimensionality which would affect the inversion of Hessian and higher-order matrices in the parameter space.
APA, Harvard, Vancouver, ISO, and other styles
43

Slavinskaya, N. A., M. Abbasi, J. H. Starcke, et al. "Development of an Uncertainty Quantification Predictive Chemical Reaction Model for Syngas Combustion." Energy & Fuels 31, no. 3 (2017): 2274–97. http://dx.doi.org/10.1021/acs.energyfuels.6b02319.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Tran, Vinh Ngoc, and Jongho Kim. "Quantification of predictive uncertainty with a metamodel: toward more efficient hydrologic simulations." Stochastic Environmental Research and Risk Assessment 33, no. 7 (2019): 1453–76. http://dx.doi.org/10.1007/s00477-019-01703-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Walz, Eva-Maria, Alexander Henzi, Johanna Ziegel, and Tilmann Gneiting. "Easy Uncertainty Quantification (EasyUQ): Generating Predictive Distributions from Single-Valued Model Output." SIAM Review 66, no. 1 (2024): 91–122. http://dx.doi.org/10.1137/22m1541915.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Delottier, Hugo, John Doherty, and Philip Brunner. "Data space inversion for efficient uncertainty quantification using an integrated surface and sub-surface hydrologic model." Geoscientific Model Development 16, no. 14 (2023): 4213–31. http://dx.doi.org/10.5194/gmd-16-4213-2023.

Full text
Abstract:
Decision-support hydrological modelling must make predictions of uncertain quantities. In implementing decision-support modelling, data assimilation and uncertainty quantification are often the most difficult and time-consuming tasks, because imposing history-matching constraints on model parameters usually requires a large number of model runs. Data space inversion (DSI) provides a highly model-run-efficient method for predictive uncertainty quantification. It does this by evaluating covariances between model outputs used for history matching (e.g. hydraulic heads) and model predictions, based on model runs that sample the prior parameter probability distribution. By directly focusing on the relationship between model outputs under historical conditions and predictions of system behaviour under future conditions, DSI avoids the need to estimate or adjust model parameters. This is advantageous when using integrated surface and sub-surface hydrologic models (ISSHMs), because these models are associated with long run times, numerical instability and, ideally, complex parameterization schemes designed to respect geological realism. This paper demonstrates that DSI provides a robust and efficient means of quantifying the uncertainties of complex model predictions. At the same time, DSI provides a basis for complementary linear analysis that allows the worth of available observations to be explored, as well as that of observations yet to be acquired. This allows the design of highly efficient future data acquisition campaigns. DSI is applied in conjunction with an ISSHM representing a synthetic but realistic river–aquifer system. Predictions of interest are fast travel times and surface water infiltration. Linear and non-linear estimates of predictive uncertainty based on DSI are validated against a more traditional uncertainty quantification that requires the adjustment of a large number of parameters. A DSI-generated surrogate model is then used to investigate the effectiveness and efficiency of existing and possible future monitoring networks. The example demonstrates the benefits of using DSI in conjunction with a complex numerical model to quantify predictive uncertainty and support data worth analysis in complex hydrogeological environments.
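At its core, the DSI update described here is a Gaussian conditioning in data space: from an ensemble of prior model runs one estimates the covariance of the historical outputs and their cross-covariance with the prediction, then conditions on the observed data. A toy sketch under a linear placeholder model (not an ISSHM):

```python
import numpy as np

rng = np.random.default_rng(4)

# Run the "model" over prior parameter samples, collecting historical-period
# outputs d and one prediction p per run.
n_runs, n_obs = 500, 6
theta = rng.normal(0, 1, size=(n_runs, 3))                 # prior samples
A = rng.normal(0, 1, size=(3, n_obs))
d = theta @ A + rng.normal(0, 0.1, size=(n_runs, n_obs))   # historic outputs
p = theta @ np.array([1.0, -0.5, 0.2])                     # prediction of interest

d_mean, p_mean = d.mean(axis=0), p.mean()
D = d - d_mean
C_dd = D.T @ D / (n_runs - 1)                              # output covariance
C_pd = (p - p_mean) @ D / (n_runs - 1)                     # cross-covariance

d_obs = rng.normal(0, 1, n_obs)                            # "field" data
gain = np.linalg.solve(C_dd + 0.01 * np.eye(n_obs), C_pd)  # regularized solve
p_post_mean = p_mean + gain @ (d_obs - d_mean)
p_post_var = p.var(ddof=1) - C_pd @ gain
print(p_post_mean, p_post_var)
```

No parameter is ever adjusted; only relationships between outputs are used, which is what makes the approach so run-efficient for expensive models.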
APA, Harvard, Vancouver, ISO, and other styles
47

Incorvaia, Gabriele, Darryl Hond, and Hamid Asgari. "Uncertainty Quantification of Machine Learning Model Performance via Anomaly-Based Dataset Dissimilarity Measures." Electronics 13, no. 5 (2024): 939. http://dx.doi.org/10.3390/electronics13050939.

Full text
Abstract:
The use of Machine Learning (ML) models as predictive tools has increased dramatically in recent years. However, data-driven systems (such as ML models) exhibit a degree of uncertainty in their predictions. In other words, they could produce unexpectedly erroneous predictions if the uncertainty stemming from the data, choice of model and model parameters is not taken into account. In this paper, we introduce a novel method for quantifying the uncertainty of the performance levels attained by ML classifiers. In particular, we investigate and characterize the uncertainty of model accuracy when classifying out-of-distribution data that are statistically dissimilar from the data employed during training. A main element of this novel Uncertainty Quantification (UQ) method is a measure of the dissimilarity between two datasets. We introduce an innovative family of data dissimilarity measures based on anomaly detection algorithms, namely the Anomaly-based Dataset Dissimilarity (ADD) measures. These dissimilarity measures process feature representations that are derived from the activation values of neural networks when supplied with dataset items. The proposed UQ method for classification performance employs these dissimilarity measures to estimate the classifier accuracy for unseen, out-of-distribution datasets, and to give an uncertainty band for those estimates. A numerical analysis of the efficacy of the UQ method is conducted using standard Artificial Neural Network (ANN) classifiers and public domain datasets. The results obtained generally demonstrate that the amplitude of the uncertainty band associated with the estimated accuracy values tends to increase as the data dissimilarity measure increases. Overall, this research contributes to the verification and run-time performance prediction of systems composed of ML-based elements.
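One plausible reading of the ADD construction is sketched below: fit an anomaly detector on feature activations from the training data and take the mean (negated) anomaly score of another dataset as its dissimilarity. The random vectors stand in for network activations, and the choice of an isolation forest is an assumption for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(5)

# Placeholder "activations": training distribution plus two shifted sets.
train_feats = rng.normal(0, 1, size=(1000, 32))
near_ood = rng.normal(0.5, 1, size=(200, 32))   # mildly shifted
far_ood = rng.normal(3.0, 1, size=(200, 32))    # strongly shifted

detector = IsolationForest(random_state=0).fit(train_feats)

def dissimilarity(feats):
    # score_samples returns higher values for inliers; negate so that
    # larger numbers mean "more dissimilar from the training data".
    return -detector.score_samples(feats).mean()

for name, feats in [("train", train_feats), ("near", near_ood), ("far", far_ood)]:
    print(name, round(dissimilarity(feats), 3))
```

The dissimilarity score would then index an uncertainty band around the estimated classifier accuracy, growing as the shift grows.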
APA, Harvard, Vancouver, ISO, and other styles
48

Wang, Ziqian. "Research on Stock Price Prediction Model Based on Sentiment Factor and Multi-Core Bagging Algorithm." Highlights in Business, Economics and Management 41 (October 15, 2024): 692–98. http://dx.doi.org/10.54097/hc042q45.

Full text
Abstract:
Stock price prediction is one of the key research problems in quantitative finance. With the rapid development of data acquisition and storage technology, sentiment-factor data with nonlinear (non-Euclidean) structure are increasingly available. How to combine this information with common Euclidean technical factors to improve model performance is a problem that needs to be solved. To this end, this paper proposes an uncertainty quantification method that combines Euclidean and non-Euclidean data. Specifically, on the one hand, a Gaussian process regression model is used to capture the unknown relationship between predictive variables and price; on the other hand, multi-kernel learning based on Euclidean and non-Euclidean kernel functions is used to combine the effective information in Euclidean and non-Euclidean predictive variables. Moreover, the proposed method can trade off the bias of the prediction model against its variance by using the Bagging algorithm. Analysis of simulated data shows that the proposed method not only improves the prediction performance of the Euclidean Gaussian process but also remains effective in the presence of irrelevant predictive variables. Finally, analysis of real data shows that the proposed method achieves good results in the task of predicting the rise and fall of stock prices.
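A rough sketch of combining a composite (summed) kernel with Bagging is given below; real multi-kernel learning over Euclidean and non-Euclidean inputs would use kernels tailored to each data type, whereas both terms here are RBFs over synthetic features, purely for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(6)

# Synthetic features and a noisy target standing in for price data.
X = rng.normal(0, 1, size=(200, 4))
y = np.sin(X[:, 0]) + 0.3 * X[:, 1] + rng.normal(0, 0.1, 200)

# Composite kernel: two terms that would, in the real method, encode
# Euclidean and non-Euclidean similarity respectively.
kernel = RBF(length_scale=1.0) + RBF(length_scale=5.0) + WhiteKernel(0.01)

preds = []
for _ in range(10):                                 # bagging loop
    idx = rng.integers(0, len(X), len(X))           # bootstrap resample
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(X[idx], y[idx])
    preds.append(gp.predict(X))
preds = np.array(preds)

# Ensemble mean trades bias against variance; spread adds uncertainty info.
print("mean prediction:", preds.mean(axis=0)[:3])
print("ensemble spread:", preds.std(axis=0)[:3])
```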
APA, Harvard, Vancouver, ISO, and other styles
49

Lu, Houyu, Amin Farrokhabadi, Ali Rauf, Reza Talemi, Konstantinos Gryllias, and Dimitrios Chronopoulos. "Uncertainty quantification for damage detection in 3D printed auxetic structures based on ultrasonic guided-wave using Flipout probabilistic convolutional neural network." Journal of Physics: Conference Series 2909, no. 1 (2024): 012032. http://dx.doi.org/10.1088/1742-6596/2909/1/012032.

Full text
Abstract:
The auxetic structure is widely used in aviation, bio-engineering, automation, and other industries due to outstanding properties such as light weight, a high strength-to-weight ratio, and energy absorption. Neural network methods have been widely used for the structural health monitoring of auxetic structures. However, the performance of neural network methods in unknown regimes is limited. To increase the reliability of the network model, more comprehensive uncertainty quantification is needed for damage detection in unknown regimes. This paper introduces a comprehensive framework for health diagnosis and uncertainty quantification in 3D-printed auxetic structures made of polylactic acid. The framework involves quasi-static uniaxial compression and ultrasonic tests conducted simultaneously to capture ultrasonic signals at different deformation states. Critical damage deformation is identified based on observed deformation patterns and variations in signal energy. Using the Hilbert transform, two damage-sensitive features, envelope and phase, are extracted. These features serve as input data for the Flipout probabilistic convolutional neural network (FPCNN) model, which integrates pseudo-independent weight perturbations and a Gaussian probabilistic layer within the Visual Geometry Group 13 (VGG-13) architecture to predict structural deformations and the associated uncertainties. The uncertainty quantification framework, based on variational inference and the conditional covariance law, effectively separates the predictive variance of the network model into aleatoric and epistemic uncertainty. The feasibility of this framework is demonstrated through compression and ultrasonic tests utilizing the FPCNN, showcasing the model's uncertainty quantification capabilities.
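The two features named in the abstract fall out of the Hilbert transform directly: the magnitude of the analytic signal gives the envelope, and its unwrapped angle gives the instantaneous phase. A short sketch on a synthetic tone burst (the sampling rate and burst parameters are assumptions, not measured ultrasonic data):

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(7)

# Synthetic guided-wave stand-in: a 100 kHz tone burst in light noise.
fs = 1e6                                   # 1 MHz sampling rate (assumed)
t = np.arange(0, 2e-4, 1 / fs)
burst = np.sin(2 * np.pi * 100e3 * t) * np.exp(-((t - 1e-4) / 3e-5) ** 2)
signal = burst + rng.normal(0, 0.01, t.size)

analytic = hilbert(signal)                 # analytic signal x + i*H(x)
envelope = np.abs(analytic)                # amplitude envelope
phase = np.unwrap(np.angle(analytic))      # instantaneous phase

# Stacked per-sample features of the kind fed to a probabilistic CNN.
features = np.stack([envelope, phase], axis=0)
print(features.shape)
```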
APA, Harvard, Vancouver, ISO, and other styles
50

Ngartera, Lebede, Mahamat Ali Issaka, and Saralees Nadarajah. "Application of Bayesian Neural Networks in Healthcare: Three Case Studies." Machine Learning and Knowledge Extraction 6, no. 4 (2024): 2639–58. http://dx.doi.org/10.3390/make6040127.

Full text
Abstract:
This study aims to explore the efficacy of Bayesian Neural Networks (BNNs) in enhancing predictive modeling for healthcare applications. Advancements in artificial intelligence have significantly improved predictive modeling capabilities, with BNNs offering a probabilistic framework that addresses the inherent uncertainty and variability in healthcare data. This study demonstrates the real-world applicability of BNNs through three key case studies: personalized diabetes treatment, early Alzheimer’s disease detection, and predictive modeling for HbA1c levels. By leveraging the Bayesian approach, these models provide not only enhanced predictive accuracy but also uncertainty quantification, a critical factor in clinical decision making. While the findings are promising, future research should focus on optimizing scalability and integration for real-world applications. This work lays a foundation for future studies, including the development of rating scales based on BNN predictions to improve clinical outcomes.
APA, Harvard, Vancouver, ISO, and other styles