Selected scientific literature on the topic "Predictive uncertainty quantification"


Journal articles on the topic "Predictive uncertainty quantification"

1

Cacuci, Dan Gabriel. "Sensitivity Analysis, Uncertainty Quantification and Predictive Modeling of Nuclear Energy Systems." Energies 15, no. 17 (2022): 6379. http://dx.doi.org/10.3390/en15176379.

Abstract:
The Special Issue “Sensitivity Analysis, Uncertainty Quantification and Predictive Modeling of Nuclear Energy Systems” comprises nine articles that present important applications of concepts for performing sensitivity analyses and uncertainty quantifications of models of nuclear energy systems [...]
2

Csillag, Daniel, Lucas Monteiro Paes, Thiago Ramos, et al. "AmnioML: Amniotic Fluid Segmentation and Volume Prediction with Uncertainty Quantification." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 13 (2023): 15494–502. http://dx.doi.org/10.1609/aaai.v37i13.26837.

Abstract:
Accurately predicting the volume of amniotic fluid is fundamental to assessing pregnancy risks, though the task usually requires many hours of laborious work by medical experts. In this paper, we present AmnioML, a machine learning solution that leverages deep learning and conformal prediction to output fast and accurate volume estimates and segmentation masks from fetal MRIs with Dice coefficient over 0.9. Also, we make available a novel, curated dataset for fetal MRIs with 853 exams and benchmark the performance of many recent deep learning architectures. In addition, we introduce a conformal prediction tool that yields narrow predictive intervals with theoretically guaranteed coverage, thus aiding doctors in detecting pregnancy risks and saving lives. A successful case study of AmnioML deployed in a medical setting is also reported. Real-world clinical benefits include up to 20x segmentation time reduction, with most segmentations deemed by doctors as not needing any further manual refinement. Furthermore, AmnioML's volume predictions were found to be highly accurate in practice, with mean absolute error below 56mL and tight predictive intervals, showcasing its impact in reducing pregnancy complications.
3

Lew, Jiann-Shiun, and Jer-Nan Juang. "Robust Generalized Predictive Control with Uncertainty Quantification." Journal of Guidance, Control, and Dynamics 35, no. 3 (2012): 930–37. http://dx.doi.org/10.2514/1.54510.

4

Karimi, Hamed, and Reza Samavi. "Quantifying Deep Learning Model Uncertainty in Conformal Prediction." Proceedings of the AAAI Symposium Series 1, no. 1 (2023): 142–48. http://dx.doi.org/10.1609/aaaiss.v1i1.27492.

Abstract:
Precise estimation of predictive uncertainty in deep neural networks is a critical requirement for reliable decision-making in machine learning and statistical modeling, particularly in the context of medical AI. Conformal Prediction (CP) has emerged as a promising framework for representing model uncertainty by providing well-calibrated confidence levels for individual predictions. However, the quantification of model uncertainty in conformal prediction remains an active research area, yet to be fully addressed. In this paper, we explore state-of-the-art CP methodologies and their theoretical foundations. We propose a probabilistic approach to quantifying the model uncertainty derived from the prediction sets produced in conformal prediction, and provide certified boundaries for the computed uncertainty. By doing so, we allow model uncertainty measured by CP to be compared with other uncertainty quantification methods such as Bayesian (e.g., MC-Dropout and DeepEnsemble) and evidential approaches.
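Since this entry and several others in the list build on conformal prediction, a minimal sketch of the underlying split conformal recipe may help orient the reader. This is a generic illustration with our own names and toy data, not the authors' code:

```python
import numpy as np

def split_conformal_interval(y_cal, yhat_cal, yhat_test, alpha=0.1):
    """Split conformal regression: absolute-residual scores on a held-out
    calibration set give intervals with >= 1 - alpha marginal coverage."""
    scores = np.abs(y_cal - yhat_cal)                    # nonconformity scores
    n = len(scores)
    # finite-sample-corrected quantile of the calibration scores
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level, method="higher")
    return yhat_test - q, yhat_test + q

# toy check: even a biased "model" keeps valid marginal coverage
rng = np.random.default_rng(0)
y = rng.normal(size=2000)
yhat = y + rng.normal(scale=0.3, size=2000) + 0.1        # noisy, biased predictions
lo, hi = split_conformal_interval(y[:1000], yhat[:1000], yhat[1000:], alpha=0.1)
coverage = np.mean((y[1000:] >= lo) & (y[1000:] <= hi))  # close to 0.9
```

The ceil((n+1)(1-alpha))/n quantile correction is what yields the finite-sample coverage guarantee that these papers exploit.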
5

Serenko, I. A., Y. V. Dorn, S. R. Singh, and A. V. Kornaev. "Room for Uncertainty in Remaining Useful Life Estimation for Turbofan Jet Engines." Nelineinaya Dinamika 20, no. 5 (2024): 933–43. https://doi.org/10.20537/nd241218.

Abstract:
This work addresses uncertainty quantification in machine learning, treating it as a hidden parameter of the model that estimates variance in training data, thereby enhancing the interpretability of predictive models. By predicting both the target value and the certainty of the prediction, combined with deep ensembling to study model uncertainty, the proposed method aims to increase model accuracy. The approach was applied to the well-known problem of Remaining Useful Life (RUL) estimation for turbofan jet engines using NASA’s dataset. The method demonstrated competitive results compared to other commonly used tabular data processing methods, including k-nearest neighbors, support vector machines, decision trees, and [...]
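The combination of mean-plus-certainty prediction and deep ensembling described above can be summarized by the standard deep-ensemble mixture formula: total predictive variance splits into an aleatoric part (the average member variance) and an epistemic part (the spread of member means). A sketch under our own naming, not the paper's code:

```python
import numpy as np

def ensemble_mixture(mus, vars_):
    """Combine M per-member Gaussian predictions (mu_i, sigma_i^2), arrays of
    shape (M, n), into a single mixture mean and variance: total variance =
    aleatoric (mean of member variances) + epistemic (variance of member means)."""
    mu = mus.mean(axis=0)
    aleatoric = vars_.mean(axis=0)     # average predicted data noise
    epistemic = mus.var(axis=0)        # disagreement between members
    return mu, aleatoric + epistemic

# three hypothetical ensemble members predicting one RUL value (in cycles)
mus = np.array([[100.0], [110.0], [105.0]])
vars_ = np.array([[25.0], [16.0], [20.0]])
mu, total_var = ensemble_mixture(mus, vars_)
```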
6

Akitaya, Kento, and Masaatsu Aichi. "Land Subsidence Model Inversion with the Estimation of Both Model Parameter Uncertainty and Predictive Uncertainty Using an Evolutionary-Based Data Assimilation (EDA) and Ensemble Model Output Statistics (EMOS)." Water 16, no. 3 (2024): 423. http://dx.doi.org/10.3390/w16030423.

Abstract:
The nonlinear nature of land subsidence and limited observations cause premature convergence in typical data assimilation methods, leading to both underestimation and miscalculation of uncertainty in model parameters and prediction. This study focuses on a promising approach, the combination of evolutionary-based data assimilation (EDA) and ensemble model output statistics (EMOS), to investigate its performance in land subsidence modeling using EDA with a smoothing approach for parameter uncertainty quantification and EMOS for predictive uncertainty quantification. The methodology was tested on a one-dimensional subsidence model in Kawajima (Japan). The results confirmed the EDA’s robust capability: Model diversity was maintained even after 1000 assimilation cycles on the same dataset, and the obtained parameter distributions were consistent with the soil types. The ensemble predictions were converted to Gaussian predictions with EMOS using past observations statistically. The Gaussian predictions outperformed the ensemble predictions in predictive performance because EMOS compensated for the over/under-dispersive prediction spread and the short-term bias, a potential weakness for the smoothing approach. This case study demonstrates that combining EDA and EMOS contributes to groundwater management for land subsidence control, considering both the model parameter uncertainty and the predictive uncertainty.
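For readers unfamiliar with EMOS, the core idea is to map an ensemble's mean and variance to a calibrated Gaussian predictive distribution N(a + b*m, c + d*v). The sketch below is a deliberately simplified stand-in (least-squares fits rather than the CRPS minimization normally used in EMOS; all names are ours, not the paper's):

```python
import numpy as np

def fit_emos(ens_mean, ens_var, y):
    """Fit a Gaussian predictive N(a + b*m, c + d*v) from past ensemble means m,
    ensemble variances v, and observations y. Simplification: a, b come from
    least squares and c, d from regressing squared residuals on v; canonical
    EMOS would minimize the CRPS instead."""
    A = np.column_stack([np.ones_like(ens_mean), ens_mean])
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    resid2 = (y - (a + b * ens_mean)) ** 2
    B = np.column_stack([np.ones_like(ens_var), ens_var])
    (c, d), *_ = np.linalg.lstsq(B, resid2, rcond=None)
    return a, b, max(c, 1e-6), max(d, 0.0)   # keep the variance positive

def emos_predict(a, b, c, d, ens_mean, ens_var):
    """Return the calibrated Gaussian predictive mean and variance."""
    return a + b * ens_mean, c + d * ens_var
```

The point of the variance regression is exactly what the abstract describes: correcting an over- or under-dispersive ensemble spread.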
7

Sriprasert, Soraida, and Patchanok Srisuradetchai. "Multi-K KNN regression with bootstrap aggregation: Accurate predictions and alternative prediction intervals." Edelweiss Applied Science and Technology 9, no. 5 (2025): 2750–64. https://doi.org/10.55214/25768484.v9i5.7589.

Abstract:
The k-nearest neighbors (KNN) algorithm is widely recognized for its simplicity and flexibility in modeling complex, non-linear relationships; however, standard KNN regression does not inherently provide prediction intervals (PIs), presenting a persistent challenge for uncertainty quantification. This study introduces a bootstrap-based multi-K approach specifically designed to construct robust prediction intervals in KNN regression. By systematically aggregating predictions across multiple neighborhood sizes through ensemble techniques and bootstrap resampling, the method effectively quantifies prediction uncertainty, particularly in challenging high-dimensional scenarios. Evaluations conducted on 15 diverse datasets spanning education, healthcare, chemistry, economics, and social sciences reveal that the proposed approach consistently achieves competitive predictive accuracy compared to traditional regression methods. Although traditional regression produces wider intervals with higher coverage probabilities, the proposed bootstrap-based KNN method generates notably tighter intervals, enhancing interpretability and practical utility. Despite occasionally reduced coverage probabilities, especially in high-dimensional contexts, the proposed methodology effectively balances precision and predictive coverage. Practically, this multi-K bootstrap approach provides researchers and practitioners with an effective and interpretable method for robust uncertainty quantification in complex predictive modeling tasks.
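The multi-K bootstrap idea described above can be sketched in a few lines: resample the training set, predict with several neighborhood sizes, and read an interval off the spread of the aggregated predictions. This is our own simplified rendering, not the authors' implementation, and the resulting intervals reflect prediction variability only, not observation noise:

```python
import numpy as np

def knn_predict(X_tr, y_tr, X_te, k):
    """Brute-force KNN regression: average the k nearest training targets."""
    d = np.linalg.norm(X_te[:, None, :] - X_tr[None, :, :], axis=-1)
    idx = np.argsort(d, axis=1)[:, :k]
    return y_tr[idx].mean(axis=1)

def multik_bootstrap_pi(X_tr, y_tr, X_te, ks=(3, 5, 7), B=100, alpha=0.1, seed=0):
    """Aggregate KNN predictions over bootstrap resamples and several
    neighborhood sizes; the (1 - alpha) interval is read off the empirical
    spread of those predictions."""
    rng = np.random.default_rng(seed)
    n, preds = len(y_tr), []
    for _ in range(B):
        idx = rng.integers(0, n, size=n)           # bootstrap resample
        for k in ks:
            preds.append(knn_predict(X_tr[idx], y_tr[idx], X_te, k))
    preds = np.stack(preds)                        # (B * len(ks), n_test)
    lo = np.quantile(preds, alpha / 2, axis=0)
    hi = np.quantile(preds, 1 - alpha / 2, axis=0)
    return preds.mean(axis=0), lo, hi
```

Averaging over several k values is what makes the interval robust to a single badly chosen neighborhood size.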
8

Chala, Ayele Tesema, and Richard Ray. "Uncertainty Quantification in Shear Wave Velocity Predictions: Integrating Explainable Machine Learning and Bayesian Inference." Applied Sciences 15, no. 3 (2025): 1409. https://doi.org/10.3390/app15031409.

Abstract:
The accurate prediction of shear wave velocity (Vs) is critical for earthquake engineering applications. However, the prediction is inevitably influenced by geotechnical variability and various sources of uncertainty. This paper investigates the effectiveness of integrating an explainable machine learning (ML) model and a Bayesian generalized linear model (GLM) to enhance both predictive accuracy and uncertainty quantification in Vs prediction. The study utilizes an Extreme Gradient Boosting (XGBoost) algorithm coupled with Shapley Additive Explanations (SHAPs) and partial dependency analysis to identify key geotechnical parameters influencing Vs predictions. Additionally, a Bayesian GLM is developed to explicitly account for uncertainties arising from geotechnical variability. The effectiveness and predictive performance of the proposed models were validated through comparison with real case scenarios. The results highlight the unique advantages of each model. The XGBoost model demonstrates good predictive performance, achieving high coefficient of determination (R2), index of agreement (IA), and Kling–Gupta efficiency (KGE) values and low error values, while effectively explaining the impact of input parameters on Vs. In contrast, the Bayesian GLM provides probabilistic predictions with 95% credible intervals, capturing the uncertainty associated with the predictions. The integration of these two approaches creates a comprehensive framework that combines the strengths of high-accuracy ML predictions with the uncertainty quantification of Bayesian inference. This hybrid methodology offers a powerful and interpretable tool for Vs prediction, providing engineers with the confidence to make informed decisions.
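The Bayesian side of this abstract, probabilistic predictions with credible/predictive intervals, can be illustrated with conjugate Bayesian linear regression. This is a deliberately simplified stand-in for the paper's Bayesian GLM; the known noise variance and isotropic Gaussian prior are our assumptions:

```python
import numpy as np

def bayes_linreg_posterior(X, y, noise_var=1.0, prior_var=10.0):
    """Conjugate Bayesian linear regression with known noise variance and an
    isotropic Gaussian prior on the weights. Returns the posterior mean and
    covariance of the weights in closed form."""
    A = X.T @ X / noise_var + np.eye(X.shape[1]) / prior_var
    cov = np.linalg.inv(A)
    mean = cov @ (X.T @ y) / noise_var
    return mean, cov

def predictive_interval(X_new, mean, cov, noise_var=1.0, z=1.96):
    """Approximate 95% (z = 1.96) predictive interval: weight uncertainty
    (quadratic form x' cov x) plus observation noise."""
    mu = X_new @ mean
    var = np.einsum("ij,jk,ik->i", X_new, cov, X_new) + noise_var
    return mu - z * np.sqrt(var), mu + z * np.sqrt(var)
```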
9

Ayed, Safa Ben, Roozbeh Sadeghian Broujeny, and Rachid Tahar Hamza. "Remaining Useful Life Prediction with Uncertainty Quantification Using Evidential Deep Learning." Journal of Artificial Intelligence and Soft Computing Research 15, no. 1 (2024): 37–55. https://doi.org/10.2478/jaiscr-2025-0003.

Abstract:
Predictive Maintenance presents an important and challenging task in Industry 4.0. It aims to prevent premature failures and reduce costs by avoiding unnecessary maintenance tasks. This involves estimating the Remaining Useful Life (RUL), which provides critical information for decision makers and planners of future maintenance activities. However, RUL prediction is not simple due to the imperfections in monitoring data, making effective Predictive Maintenance challenging. To address this issue, this article proposes an Evidential Deep Learning (EDL) based method to predict the RUL and to quantify both data uncertainties and prediction model uncertainties. An experimental analysis conducted on the C-MAPSS dataset of aero-engine degradation affirms that the EDL-based method outperforms alternative machine learning approaches. Moreover, the accompanying uncertainty quantification analysis demonstrates sound methodology and reliable results.
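As a rough illustration of the evidential idea, here is the uncertainty decomposition used in Normal-Inverse-Gamma evidential regression (the Amini et al. style formulation). The abstract does not state which parameterization the paper uses, so treat this as a generic sketch of how EDL separates data and model uncertainty:

```python
import numpy as np

def evidential_uncertainty(gamma, nu, alpha, beta):
    """Uncertainty decomposition for a Normal-Inverse-Gamma evidential output
    (gamma, nu, alpha, beta): the network emits these four parameters and both
    uncertainty types follow in closed form, with no sampling or ensembling."""
    assert np.all(np.asarray(alpha) > 1), "variances require alpha > 1"
    pred_mean = gamma                         # E[mu]
    aleatoric = beta / (alpha - 1)            # E[sigma^2], data uncertainty
    epistemic = beta / (nu * (alpha - 1))     # Var[mu], model uncertainty
    return pred_mean, aleatoric, epistemic
```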
10

Plesner, Andreas, Allan P. Engsig-Karup, and Hans True. "Detecting Railway Track Irregularities with Data-driven Uncertainty Quantification." Highlights of Vehicles 3, no. 1 (2025): 1–14. https://doi.org/10.54175/hveh3010001.

Abstract:
This study addresses the critical challenge of assessing railway track irregularities using advanced machine learning techniques, specifically convolutional neural networks (CNNs) and conformal prediction. Leveraging high-fidelity sensor data from high-speed trains, we propose a novel CNN model that significantly outperforms state-of-the-art results in predicting track irregularities. Our CNN architecture, optimized through extensive hyperparameter tuning, comprises multiple convolutional layers with batch normalization, Exponential Linear Unit (ELU) activation functions, and dropout regularization. This design enables the model to capture complex spatial and temporal dependencies in the train’s dynamic responses, translating them into accurate predictions of track irregularities. The model achieves a mean unsigned error of 0.31 mm on the test set, surpassing the previous state-of-the-art performance and approaching industry-standard benchmarks for track measurement accuracy. This level of precision is crucial for the early detection of track defects that could compromise safety and ride quality. To quantify uncertainty in the model’s predictions, we implement conformal prediction techniques, specifically the CV+ and CV-minmax methods. These approaches provide prediction intervals with high reliability, achieving a 97.18% coverage rate for the CV-minmax method. The resulting prediction intervals have an average width of 2.33 mm, offering a balance between precision and confidence in the model’s outputs. Notably, our model exhibits impressive computational efficiency, capable of processing over 2000 kilometers of track data per hour. This speed makes it suitable for real-time applications in continuous monitoring systems, potentially revolutionizing the approach to railway maintenance. The integration of CNNs with conformal prediction represents a significant advancement in the field of predictive maintenance for railway infrastructure. 
By providing both accurate predictions and well-calibrated uncertainty estimates, our approach enables more informed decision-making in track maintenance planning and safety assessments.
More sources

Theses and dissertations on the topic "Predictive uncertainty quantification"

1

Lonsdale, Jack Henry. "Predictive modelling and uncertainty quantification of UK forest growth." Thesis, University of Edinburgh, 2015. http://hdl.handle.net/1842/16202.

Abstract:
Forestry in the UK is dominated by coniferous plantations. Sitka spruce (Picea sitchensis) and Scots pine (Pinus sylvestris) are the most prevalent species and are mostly grown in single age mono-culture stands. Forest strategy for Scotland, England, and Wales all include efforts to achieve further afforestation. The aim of this afforestation is to provide a multi-functional forest with a broad range of benefits. Due to the time scale involved in forestry, accurate forecasts of stand productivity (along with clearly defined uncertainties) are essential to forest managers. These can be provided by a range of approaches to modelling forest growth. In this project model comparison, Bayesian calibration, and data assimilation methods were all used to attempt to improve forecasts and understanding of uncertainty therein of the two most important conifers in UK forestry. Three different forest growth models were compared in simulating growth of Scots pine. A yield table approach, the process-based 3PGN model, and a Stand Level Dynamic Growth (SLeDG) model were used. Predictions were compared graphically over the typical productivity range for Scots pine in the UK. Strengths and weaknesses of each model were considered. All three produced similar growth trajectories. The greatest difference between models was in volume and biomass in unthinned stands where the yield table predicted a much larger range compared to the other two models. Future advances in data availability and computing power should allow for greater use of process-based models, but in the interim more flexible dynamic growth models may be more useful than static yield tables for providing predictions which extend to non-standard management prescriptions and estimates of early growth and yield. A Bayesian calibration of the SLeDG model was carried out for both Sitka spruce and Scots pine in the UK for the first time. 
Bayesian calibrations allow both model structure and parameters to be assessed simultaneously in a probabilistic framework, providing a model with which forecasts and their uncertainty can be better understood and quantified using posterior probability distributions. Two different structures for including local productivity in the model were compared with a Bayesian model comparison. A complete calibration of the more probable model structure was then completed. Example forecasts from the calibration were compatible with existing yield tables for both species. This method could be applied to other species or other model structures in the future. Finally, data assimilation was investigated as a way of reducing forecast uncertainty. Data assimilation assumes that neither observations nor models provide a perfect description of a system, but combining them may provide the best estimate. SLeDG model predictions and LiDAR measurements for sub-compartments within Queen Elizabeth Forest Park were combined with an Ensemble Kalman Filter. Uncertainty was reduced following the second data assimilation in all of the state variables. However, errors in stand delineation and estimated stand yield class may have caused observational uncertainty to be greater thus reducing the efficacy of the method for reducing overall uncertainty.
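The Ensemble Kalman Filter step used in the data assimilation part of this thesis can be sketched generically as follows. This is a stochastic-EnKF analysis step with our own toy interface, not the thesis code:

```python
import numpy as np

def enkf_update(ensemble, obs, H, obs_var, seed=0):
    """One stochastic EnKF analysis step. `ensemble` is (n_members, n_state),
    `obs` is (n_obs,), `H` the linear observation operator. Each member is
    pulled toward its own perturbed copy of the observation, so the updated
    spread reflects the reduced (posterior) uncertainty."""
    rng = np.random.default_rng(seed)
    X = ensemble
    A = X - X.mean(axis=0)                         # ensemble anomalies
    P = A.T @ A / (len(X) - 1)                     # sample state covariance
    S = H @ P @ H.T + obs_var * np.eye(len(obs))   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    perturbed = obs + rng.normal(scale=np.sqrt(obs_var), size=(len(X), len(obs)))
    return X + (perturbed - X @ H.T) @ K.T
```

The shrinking ensemble variance after the update is exactly the "uncertainty was reduced following data assimilation" effect the abstract reports.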
2

Gligorijevic, Djordje. "Predictive Uncertainty Quantification and Explainable Machine Learning in Healthcare." Diss., Temple University Libraries, 2018. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/520057.

Abstract:
Computer and Information Science, Ph.D.

Predictive modeling is an ever more important part of decision making. Advances in Machine Learning predictive modeling have spread across many domains, bringing significant improvements in performance and providing unique opportunities for novel discoveries. Notably important are the medical and healthcare domains, which look after people's wellbeing. While these are among the most developed and actively researched areas of science, there are many ways they can be improved. In particular, novel tools developed on the basis of Machine Learning theory have brought benefits across many areas of clinical practice, pushing the boundaries of medical science and directly affecting the well-being of millions of patients. Additionally, the healthcare and medicine domains require predictive modeling to anticipate and overcome many obstacles the future may hold. These applications employ precise decision-making processes that require accurate predictions. However, a good prediction on its own is often insufficient, and there has been no major focus on developing algorithms with good-quality uncertainty estimates. This thesis therefore aims to provide a variety of ways to learn high-quality uncertainty estimates, or to provide interpretability of the models where needed, for the purpose of improving existing tools built in practice and allowing many other tools to be used where uncertainty is the key factor for decision making. The first part of the thesis proposes approaches for learning high-quality uncertainty estimates for both short- and long-term predictions in multi-task learning, developed on top of continuous probabilistic graphical models. In many scenarios, especially in long-term predictions, it may be of great importance for the models to provide a reliability flag in order to be accepted by domain experts.
To this end we explored a widely applied structured regression model with the goal of providing meaningful uncertainty estimates on various predictive tasks. Our particular interest is in modeling uncertainty propagation while predicting far into the future. To address this important problem, our approach centers on providing an uncertainty estimate by modeling input features as random variables, which allows modeling uncertainty from noisy inputs. In cases when the model iteratively produces errors, it should propagate uncertainty over the predictive horizon, which may provide invaluable information for decision making based on predictions. In the second part of the thesis we propose novel neural embedding models for learning low-dimensional embeddings of medical concepts, such as diseases and genes, show how they can be interpreted to assess their quality, and show how they can be used to solve many problems in medical and healthcare research. We use EHR data to discover novel relationships between diseases by studying their comorbidities (i.e., co-occurrences in patients). We trained our models on a large-scale EHR database comprising more than 35 million inpatient cases. To confirm the value and potential of the proposed approach we evaluate its effectiveness on a held-out set. Furthermore, for select diseases we provide a candidate gene list for which disease-gene associations were not studied previously, allowing biomedical researchers to better focus their often very costly lab studies. We furthermore examine how disease heterogeneity can affect the quality of learned embeddings and propose an approach for learning types of such heterogeneous diseases, with a primary focus on learning types of sepsis. Finally, we evaluate the quality of the low-dimensional embeddings on tasks of predicting hospital quality indicators such as length of stay, total charges, and mortality likelihood, demonstrating their superiority over other approaches.
In the third part of the thesis we focus on decision making in the medicine and healthcare domain by developing state-of-the-art deep learning models capable of outperforming human performance while maintaining good interpretability and uncertainty estimates.

Temple University--Theses
3

Zaffran, Margaux. "Post-hoc predictive uncertainty quantification: methods with applications to electricity price forecasting." Electronic Thesis or Diss., Institut polytechnique de Paris, 2024. http://www.theses.fr/2024IPPAX033.

Abstract:
The surge of more and more powerful statistical learning algorithms offers promising prospects for electricity price forecasting. However, these methods provide point forecasts, with no indication of the degree of confidence to be placed in them. To ensure the safe deployment of these predictive models, it is crucial to quantify their predictive uncertainty. This PhD thesis focuses on developing predictive intervals for any underlying algorithm. While motivated by the electricity sector, the methods developed, based on Split Conformal Prediction (SCP), are generic: they can be applied in many other sensitive fields.

First, this thesis studies post-hoc predictive uncertainty quantification for time series. The first bottleneck in applying SCP to obtain guaranteed probabilistic electricity price forecasts post hoc is the highly non-stationary temporal behavior of electricity prices, which breaks the exchangeability assumption. The first contribution proposes a parameter-free algorithm tailored to time series, based on a theoretical analysis of the efficiency of the existing Adaptive Conformal Inference method. The second contribution conducts an extensive application study on a novel dataset of recent turbulent French spot prices from 2020 and 2021.

Another challenge is missing values (NAs). In a second part, this thesis analyzes the interplay between NAs and predictive uncertainty quantification. The third contribution highlights that NAs induce heteroskedasticity, leading to uneven coverage depending on which features are observed. Two algorithms recovering equalized coverage for any NA pattern, under distributional assumptions on the missingness mechanism, are designed. The fourth contribution pushes the theoretical analysis further, to understand precisely which distributional assumptions are unavoidable for building informative prediction regions. It also unifies the previously proposed algorithms into a general framework that demonstrates empirical robustness to violations of the assumed missingness distribution.
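The Adaptive Conformal Inference method that the first contribution analyzes has a one-line update at its heart: the working miscoverage level is nudged toward the target rate after each observed hit or miss. A minimal sketch with our own naming, not the thesis code:

```python
import numpy as np

def aci_alpha_sequence(covered, alpha=0.1, gamma=0.02):
    """Adaptive Conformal Inference update (Gibbs & Candes):
    alpha_{t+1} = alpha_t + gamma * (alpha - err_t), with err_t = 1 on a miss.
    Misses shrink the working level (widening subsequent intervals); hits
    slowly raise it back toward the target, tracking distribution shift.
    `covered` is the realized 0/1 coverage sequence."""
    a = alpha
    alphas = []
    for c in covered:
        alphas.append(a)
        a += gamma * (alpha - (0.0 if c else 1.0))
    return np.array(alphas)
```

The step size gamma is the tuning knob whose efficiency analysis motivates the thesis's parameter-free variant.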
4

Riley, Matthew E. "Quantification of Model-Form, Predictive, and Parametric Uncertainties in Simulation-Based Design." Wright State University / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=wright1314895435.

5

Freeman, Jacob Andrew. "Optimization Under Uncertainty and Total Predictive Uncertainty for a Tractor-Trailer Base-Drag Reduction Device." Diss., Virginia Tech, 2012. http://hdl.handle.net/10919/77168.

Abstract:
One key outcome of this research is the design for a 3-D tractor-trailer base-drag reduction device that predicts a 41% reduction in wind-averaged drag coefficient at 57 mph (92 km/h) and that is relatively insensitive to uncertain wind speed and direction and uncertain deflection angles due to mounting accuracy and static aeroelastic loading; the best commercial device of non-optimized design achieves a 12% reduction at 65 mph. Another important outcome is the process by which the optimized design is obtained. That process includes verification and validation of the flow solver, a less complex but much broader 2-D pathfinder study, and the culminating 3-D aerodynamic shape optimization under uncertainty (OUU) study. To gain confidence in the accuracy and precision of a computational fluid dynamics (CFD) flow solver and its Reynolds-averaged Navier-Stokes (RANS) turbulence models, it is necessary to conduct code verification, solution verification, and model validation. These activities are accomplished using two commercial CFD solvers, Cobalt and RavenCFD, with four turbulence models: Spalart-Allmaras (S-A), S-A with rotation and curvature, Menter shear-stress transport (SST), and Wilcox 1998 k-ω. Model performance is evaluated for three low subsonic 2-D applications: turbulent flat plate, planar jet, and NACA 0012 airfoil at α = 0°. The S-A turbulence model is selected for the 2-D OUU study. In the 2-D study, a tractor-trailer base flap model is developed that includes six design variables with generous constraints; 400 design candidates are evaluated. The design optimization loop includes the effect of uncertain wind speed and direction, and post processing addresses several other uncertain effects on drag prediction. The study compares the efficiency and accuracy of two optimization algorithms, evolutionary algorithm (EA) and dividing rectangles (DIRECT), twelve surrogate models, six sampling methods, and surrogate-based global optimization (SBGO) methods. 
The DAKOTA optimization and uncertainty quantification framework is used to interface the RANS flow solver, grid generator, and optimization algorithm. The EA is determined to be more efficient in obtaining a design with significantly reduced drag (as opposed to more efficient in finding the true drag minimum), and total predictive uncertainty is estimated as ±11%. While the SBGO methods are more efficient than a traditional optimization algorithm, they are computationally inefficient due to their serial nature, as implemented in DAKOTA. Because the S-A model does well in 2-D but not in 3-D under these conditions, the SST turbulence model is selected for the 3-D OUU study that includes five design variables and evaluates a total of 130 design candidates. Again using the EA, the study propagates aleatory (wind speed and direction) and epistemic (perturbations in flap deflection angle) uncertainty within the optimization loop and post-processes several other uncertain effects. For the best 3-D design, total predictive uncertainty is +15/-42%, due largely to using a relatively coarse (six million cell) grid. That is, the best design drag coefficient estimate is within +15 and -42% of the true value; however, its improvement relative to the no-flaps baseline is accurate to within 3-9% uncertainty.
6

Wu, Jinlong. "Predictive Turbulence Modeling with Bayesian Inference and Physics-Informed Machine Learning." Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/85129.

Texto completo da fonte
Resumo:
Reynolds-Averaged Navier-Stokes (RANS) simulations are widely used for engineering design and analysis involving turbulent flows. In RANS simulations, the Reynolds stress needs closure models and the existing models have large model-form uncertainties. Therefore, the RANS simulations are known to be unreliable in many flows of engineering relevance, including flows with three-dimensional structures, swirl, pressure gradients, or curvature. This lack of accuracy in complex flows has diminished the utility of RANS simulations as a predictive tool for engineering design, analysis, optimization, and reliability assessments. Recently, data-driven methods have emerged as a promising alternative to develop the model of Reynolds stress for RANS simulations. In this dissertation I explore two physics-informed, data-driven frameworks to improve RANS modeled Reynolds stresses. First, a Bayesian inference framework is proposed to quantify and reduce the model-form uncertainty of RANS modeled Reynolds stress by leveraging online sparse measurement data with empirical prior knowledge. Second, a machine-learning-assisted framework is proposed to utilize offline high-fidelity simulation databases. Numerical results show that the data-driven RANS models have better prediction of Reynolds stress and other quantities of interest for several canonical flows. Two metrics are also presented for an a priori assessment of the prediction confidence for the machine-learning-assisted RANS model. The proposed data-driven methods are also applicable to the computational study of other physical systems whose governing equations have some unresolved physics to be modeled.<br>Ph. D.
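As a toy illustration of the first framework, a conjugate Gaussian update shows how sparse measurements shrink a prior on a model-form discrepancy. All numbers are hypothetical, and the scalar here stands in for the full Reynolds-stress discrepancy field inferred in the dissertation.

```python
import numpy as np

# Prior belief about the model-form discrepancy in a RANS-predicted quantity
# (hypothetical numbers; the dissertation infers a full discrepancy field).
prior_mean, prior_var = 0.0, 0.04      # prior: discrepancy ~ N(0, 0.2^2)

# Sparse "online" measurements of the quantity minus the RANS prediction
obs = np.array([0.11, 0.09, 0.14])     # observed discrepancies (invented)
noise_var = 0.01                        # assumed measurement noise variance

# Conjugate Gaussian update: posterior over the discrepancy
n = len(obs)
post_var = 1.0 / (1.0 / prior_var + n / noise_var)
post_mean = post_var * (prior_mean / prior_var + obs.sum() / noise_var)

print(f"posterior discrepancy: {post_mean:.3f} +/- {post_var**0.5:.3f}")
```

Even three observations pull the posterior mean toward the data and cut the variance by an order of magnitude, which is the mechanism by which sparse online data reduce model-form uncertainty.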
Estilos ABNT, Harvard, Vancouver, APA, etc.
7

Cortesi, Andrea Francesco. "Predictive numerical simulations for rebuilding freestream conditions in atmospheric entry flows." Thesis, Bordeaux, 2018. http://www.theses.fr/2018BORD0021/document.

Texto completo da fonte
Resumo:
Accurate prediction of hypersonic high-enthalpy flows is of main relevance for atmospheric entry missions. However, uncertainties are inevitable on freestream conditions and other parameters of the physico-chemical models. For this reason, a rigorous quantification of the effect of uncertainties is mandatory to assess the robustness and predictivity of numerical simulations. Furthermore, a proper reconstruction of uncertain parameters from in-flight measurements can help reduce the level of uncertainty on the outputs. In this work, we use a statistical framework for direct propagation of uncertainties and inverse freestream reconstruction applied to atmospheric entry flows. We assess the possibility of exploiting forebody heat flux measurements for the reconstruction of freestream variables and uncertain model parameters for hypersonic entry flows. This reconstruction is performed in a Bayesian framework, making it possible to account for the different sources of uncertainty and for measurement errors. Different techniques are introduced to enhance the capabilities of the statistical framework for quantification of uncertainties. First, an improved surrogate modeling technique is proposed, based on coupling Kriging with Sparse Polynomial Dimensional Decomposition. Then, a method is proposed to adaptively add new training points to an existing experimental design to improve the accuracy of the trained surrogate model. Finally, a way to exploit active subspaces in Markov Chain Monte Carlo algorithms for Bayesian inverse problems is also proposed.
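The Bayesian reconstruction idea can be caricatured with a one-parameter example: a simplified Sutton-Graves-type stagnation heat-flux law and a random-walk Metropolis sampler recover the freestream velocity from a noisy measurement. The constants, prior bounds, and noise levels are all invented; the thesis works with full entry-flow simulations and surrogate models rather than this closed-form relation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simplified Sutton-Graves-type stagnation heat flux: q ~ k * sqrt(rho/R_n) * V^3,
# with density and nose radius held fixed (all constants illustrative).
k, rho, R_n = 1.74e-4, 1e-4, 0.5

def heat_flux(v):
    return k * np.sqrt(rho / R_n) * v ** 3

v_true = 7000.0                       # "unknown" freestream velocity, m/s
q_obs = heat_flux(v_true) * (1 + 0.02 * rng.standard_normal())
sigma = 0.05 * q_obs                  # assumed measurement error

def log_post(v):
    if not 5000.0 < v < 9000.0:       # uniform prior on the velocity
        return -np.inf
    return -0.5 * ((heat_flux(v) - q_obs) / sigma) ** 2

# Random-walk Metropolis over the freestream velocity
samples, v = [], 6000.0
lp = log_post(v)
for _ in range(20000):
    v_prop = v + 100.0 * rng.standard_normal()
    lp_prop = log_post(v_prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        v, lp = v_prop, lp_prop
    samples.append(v)
post = np.array(samples[5000:])
print(f"reconstructed velocity: {post.mean():.0f} +/- {post.std():.0f} m/s")
```

Because the heat flux scales with the cube of the velocity, a 5% measurement error maps to roughly a 1.7% posterior spread on the reconstructed velocity, which is the kind of uncertainty-reduction bookkeeping the Bayesian framework makes explicit.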
Estilos ABNT, Harvard, Vancouver, APA, etc.
8

Erbas, Demet. "Sampling strategies for uncertainty quantification in oil recovery prediction." Thesis, Heriot-Watt University, 2007. http://hdl.handle.net/10399/70.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
9

Whiting, Nolan Wagner. "Assessment of Model Validation, Calibration, and Prediction Approaches in the Presence of Uncertainty." Thesis, Virginia Tech, 2019. http://hdl.handle.net/10919/91903.

Texto completo da fonte
Resumo:
Model validation is the process of determining the degree to which a model is an accurate representation of the true value in the real world. The results of a model validation study can be used to either quantify the model form uncertainty or to improve/calibrate the model. However, the model validation process can become complicated if there is uncertainty in the simulation and/or experimental outcomes. These uncertainties can be in the form of aleatory uncertainties due to randomness or epistemic uncertainties due to lack of knowledge. Four different approaches are used for addressing model validation and calibration: 1) the area validation metric (AVM), 2) a modified area validation metric (MAVM) with confidence intervals, 3) the standard validation uncertainty from ASME V&V 20, and 4) Bayesian updating of a model discrepancy term. Details are given for the application of the MAVM for accounting for small experimental sample sizes. To provide an unambiguous assessment of these different approaches, synthetic experimental values were generated from computational fluid dynamics simulations of a multi-element airfoil. A simplified model was then developed using thin airfoil theory. This simplified model was then assessed using the synthetic experimental data. The quantities examined include the two-dimensional lift and moment coefficients for the airfoil with varying angles of attack and flap deflection angles. Each of these validation/calibration approaches is assessed for its ability to tightly encapsulate the true value in nature at locations both where experimental results are provided and prediction locations where no experimental data are available. 
Generally, it was seen that the MAVM performed the best in cases where there is a sparse amount of data and/or large extrapolations, while Bayesian calibration outperformed the others where there is an extensive amount of experimental data that covers the application domain.<br>Master of Science<br>Uncertainties often exist when conducting physical experiments, and whether this uncertainty exists due to input uncertainty, uncertainty in the environmental conditions in which the experiment takes place, or numerical uncertainty in the model, it can be difficult to validate and compare the results of a model with those of an experiment. Model validation is the process of determining the degree to which a model is an accurate representation of the true value in the real world. The results of a model validation study can be used to either quantify the uncertainty that exists within the model or to improve/calibrate the model. However, the model validation process can become complicated if there is uncertainty in the simulation (model) and/or experimental outcomes. These uncertainties can be in the form of aleatory uncertainties (randomness that can be described by a probability distribution) or epistemic uncertainties (lack of knowledge, with inputs known only to lie within an interval). Four different approaches are used for addressing model validation and calibration: 1) the area validation metric (AVM), 2) a modified area validation metric (MAVM) with confidence intervals, 3) the standard validation uncertainty from ASME V&V 20, and 4) Bayesian updating of a model discrepancy term. Details are given for the application of the MAVM for accounting for small experimental sample sizes. To provide an unambiguous assessment of these different approaches, synthetic experimental values were generated from computational fluid dynamics (CFD) simulations of a multi-element airfoil. A simplified model was then developed using thin airfoil theory. 
This simplified model was then assessed using the synthetic experimental data. The quantities examined include the two-dimensional lift and moment coefficients for the airfoil with varying angles of attack and flap deflection angles. Each of these validation/calibration approaches is assessed for its ability to tightly encapsulate the true value in nature at locations both where experimental results are provided and prediction locations where no experimental data are available. Also of interest was how well each method could predict the uncertainty in the simulation outside the region in which experimental observations were made and model-form uncertainty could be directly observed.
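For reference, the area validation metric named above is the area between the simulation and experimental empirical CDFs. A minimal sketch, with invented lift-coefficient samples (the thesis additionally treats confidence intervals for small experimental sample sizes):

```python
import numpy as np

def area_validation_metric(sim, exp):
    """Area between the empirical CDFs of simulation and experimental outcomes
    (a minimal sketch of the AVM)."""
    sim, exp = np.sort(sim), np.sort(exp)
    grid = np.union1d(sim, exp)
    f_sim = np.searchsorted(sim, grid, side="right") / sim.size
    f_exp = np.searchsorted(exp, grid, side="right") / exp.size
    # both step CDFs are constant between consecutive grid points
    widths = np.diff(grid)
    return float(np.sum(np.abs(f_sim[:-1] - f_exp[:-1]) * widths))

# hypothetical lift-coefficient outcomes at one angle of attack
sim_cl = np.array([0.52, 0.53, 0.54, 0.55])
exp_cl = np.array([0.58, 0.60, 0.61])
d = area_validation_metric(sim_cl, exp_cl)
print(f"AVM estimate of model-form uncertainty: {d:.3f}")
```

A useful sanity check: when the two distributions do not overlap, as here, the metric reduces to the absolute difference of their means.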
Estilos ABNT, Harvard, Vancouver, APA, etc.
10

Phadnis, Akash. "Uncertainty quantification and prediction for non-autonomous linear and nonlinear systems." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/85476.

Texto completo da fonte
Resumo:
Thesis: S.M., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2013.<br>Cataloged from PDF version of thesis.<br>Includes bibliographical references (pages 189-197).<br>The science of uncertainty quantification has gained a lot of attention over recent years. This is because models of real processes always contain some elements of uncertainty, and also because real systems can be better described using stochastic components. Stochastic models can therefore be utilized to provide a most informative prediction of possible future states of the system. In light of the multiple scales, nonlinearities and uncertainties in ocean dynamics, stochastic models can be most useful to describe ocean systems. Uncertainty quantification schemes developed in recent years include order reduction methods (e.g. proper orthogonal decomposition (POD)), error subspace statistical estimation (ESSE), polynomial chaos (PC) schemes and dynamically orthogonal (DO) field equations. In this thesis, we focus our attention on DO and various PC schemes for quantifying and predicting uncertainty in systems with external stochastic forcing. We develop and implement these schemes in a generic stochastic solver for a class of non-autonomous linear and nonlinear dynamical systems. This class of systems encapsulates most systems encountered in classic nonlinear dynamics and ocean modeling, including flows modeled by Navier-Stokes equations. We first study systems with uncertainty in input parameters (e.g. stochastic decay models and Kraichnan-Orszag system) and then with external stochastic forcing (autonomous and non-autonomous self-engineered nonlinear systems). For time-integration of system dynamics, stochastic numerical schemes of varied order are employed and compared. Using our generic stochastic solver, the Monte Carlo, DO and polynomial chaos schemes are inter-compared in terms of accuracy of solution and computational cost. 
To allow accurate time-integration of uncertainty due to external stochastic forcing, we also derive two novel PC schemes, namely, the reduced space KLgPC scheme and the modified TDgPC (MTDgPC) scheme. We utilize a set of numerical examples to show that the two new PC schemes and the DO scheme can integrate both additive and multiplicative stochastic forcing over significant time intervals. For the final example, we consider shallow water ocean surface waves and the modeling of these waves by deterministic dynamics and stochastic forcing components. Specifically, we time-integrate the Korteweg-de Vries (KdV) equation with external stochastic forcing, comparing the performance of the DO and Monte Carlo schemes. We find that the DO scheme is computationally efficient to integrate uncertainty in such systems with external stochastic forcing.<br>by Akash Phadnis.<br>S.M.
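A minimal Monte Carlo analogue of the schemes compared above: Euler-Maruyama integration of an ensemble for a linear system with additive stochastic forcing (an Ornstein-Uhlenbeck process), checked against the analytic variance. Parameters are arbitrary; the thesis treats far richer systems, up to stochastically forced KdV.

```python
import numpy as np

rng = np.random.default_rng(2)

# Monte Carlo uncertainty propagation for dx = -a*x dt + s dW
# (Ornstein-Uhlenbeck): a stand-in for externally forced systems
# integrated with the DO and PC schemes.
a, s = 1.0, 0.5
dt, n_steps, n_ens = 1e-3, 5000, 4000

x = np.zeros(n_ens)                       # deterministic initial condition
for _ in range(n_steps):                  # Euler-Maruyama time integration
    x += -a * x * dt + s * np.sqrt(dt) * rng.standard_normal(n_ens)

mc_var = x.var()
exact_var = s**2 / (2 * a) * (1 - np.exp(-2 * a * n_steps * dt))
print(f"ensemble variance {mc_var:.4f} vs analytic {exact_var:.4f}")
```

The comparison against the closed-form variance is the kind of verification that makes linear forced systems a natural testbed before moving to nonlinear dynamics.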
Estilos ABNT, Harvard, Vancouver, APA, etc.
Mais fontes

Livros sobre o assunto "Predictive uncertainty quantification"

1

McClarren, Ryan G. Uncertainty Quantification and Predictive Computational Science. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-99525-0.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
2

Eva, Boegh, and International Association of Hydrological Sciences., eds. Quantification and reduction of predictive uncertainty for sustainable water resources management: Proceedings of an international symposium [held] during IUGG2007, the XXIV General Assembly of the International Union of Geodesy and Geophysics at Perugia, Italy, July 2007. International Association of Hydrological Sciences, 2007.

Encontre o texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
3

Harrington, Matthew R. Predicting and Understanding the Presence of Water through Remote Sensing, Machine Learning, and Uncertainty Quantification. [publisher not identified], 2022.

Encontre o texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
4

Hemez, François, and Sez Atamturktur. Predictive Modelling: Verification, Validation and Uncertainty Quantification. Wiley & Sons, Limited, John, 2018.

Encontre o texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
5

McClarren, Ryan G. Uncertainty Quantification and Predictive Computational Science: A Foundation for Physical Scientists and Engineers. Springer, 2018.

Encontre o texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
6

Anderson, Mark, Francois Hemez, and Scott Doebling. Model Verification and Validation in Engineering Mechanics: Theory and Applications of Uncertainty Quantification and Predictive Accuracy. John Wiley & Sons, 2005.

Encontre o texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
7

Model Verification and Validation in Engineering Mechanics: Theory and Applications of Uncertainty Quantification and Predictive Accuracy. Wiley & Sons, Limited, John, 2004.

Encontre o texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
8

Sanderson, Benjamin Mark. Uncertainty Quantification in Multi-Model Ensembles. Oxford University Press, 2018. http://dx.doi.org/10.1093/acrefore/9780190228620.013.707.

Texto completo da fonte
Resumo:
Long-term planning for many sectors of society—including infrastructure, human health, agriculture, food security, water supply, insurance, conflict, and migration—requires an assessment of the range of possible futures which the planet might experience. Unlike short-term forecasts for which validation data exists for comparing forecast to observation, long-term forecasts have almost no validation data. As a result, researchers must rely on supporting evidence to make their projections. A review of methods for quantifying the uncertainty of climate predictions is given. The primary tools for quantifying these uncertainties are climate models, which attempt to model all the relevant processes that are important in climate change. However, neither the construction nor calibration of climate models is perfect, and therefore the uncertainties due to model errors must also be taken into account in the uncertainty quantification. Typically, prediction uncertainty is quantified by generating ensembles of solutions from climate models to span possible futures. For instance, initial condition uncertainty is quantified by generating an ensemble of initial states that are consistent with available observations and then integrating the climate model starting from each initial condition. A climate model is itself subject to uncertain choices in modeling certain physical processes. Some of these choices can be sampled using so-called perturbed physics ensembles, whereby uncertain parameters or structural switches are perturbed within a single climate model framework. For a variety of reasons, there is a strong reliance on so-called ensembles of opportunity, which are multi-model ensembles (MMEs) formed by collecting predictions from different climate modeling centers, each using a potentially different framework to represent relevant processes for climate change. The most extensive collection of these MMEs is associated with the Coupled Model Intercomparison Project (CMIP). 
However, the component models have biases, simplifications, and interdependencies that must be taken into account when making formal risk assessments. Techniques and concepts for integrating model projections in MMEs are reviewed, including differing paradigms of ensembles and how they relate to observations and reality. Aspects of these conceptual issues then inform the more practical matters of how to combine and weight model projections to best represent the uncertainties associated with projected climate change.
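In its simplest form, the weighting of multi-model projections discussed above reduces to a skill-weighted mean and spread. The projections and weights below are invented for illustration; real weighting schemes must also handle the model biases and interdependencies the review describes.

```python
import numpy as np

# Hypothetical end-of-century warming projections (deg C) from a small
# multi-model ensemble, with skill-based weights (both sets invented).
projections = np.array([2.1, 2.8, 3.4, 2.5, 4.0])
skill = np.array([0.9, 0.7, 0.5, 0.8, 0.3])   # e.g. from fit to observations

weights = skill / skill.sum()                  # normalize to sum to one
mean = np.sum(weights * projections)           # weighted central estimate
var = np.sum(weights * (projections - mean) ** 2)
print(f"weighted projection: {mean:.2f} +/- {var**0.5:.2f} deg C")
```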
Estilos ABNT, Harvard, Vancouver, APA, etc.
9

Chen, Nan. Stochastic Methods for Modeling and Predicting Complex Dynamical Systems: Uncertainty Quantification, State Estimation, and Reduced-Order Models. Springer International Publishing AG, 2023.

Encontre o texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.

Capítulos de livros sobre o assunto "Predictive uncertainty quantification"

1

Svensson, Emma, Hannah Rosa Friesacher, Adam Arany, Lewis Mervin, and Ola Engkvist. "Temporal Evaluation of Uncertainty Quantification Under Distribution Shift." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-72381-0_11.

Texto completo da fonte
Resumo:
Uncertainty quantification is emerging as a critical tool in high-stakes decision-making processes, where trusting automated predictions that lack accuracy and precision can be time-consuming and costly. In drug discovery, such high-stakes decisions are based on modeling the properties of potential drug compounds on biological assays. So far, existing uncertainty quantification methods have primarily been evaluated using public datasets that lack the temporal context necessary to understand their performance over time. In this work, we address the pressing need for a comprehensive, large-scale temporal evaluation of uncertainty quantification methodologies in the context of assay-based molecular property prediction. Our novel framework benchmarks three ensemble-based approaches to uncertainty quantification and explores the effect of adding lower-quality data during training in the form of censored labels. We investigate the robustness of the predictive performance and the calibration and reliability of predictive uncertainty by the models as time evolves. Moreover, we explore how the predictive uncertainty behaves in response to varying degrees of distribution shift. By doing so, our analysis not only advances the field but also provides practical implications for real-world pharmaceutical applications.
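As a toy version of the ensemble-based approaches benchmarked here, the disagreement of a bootstrap ensemble can serve as a predictive uncertainty that grows under distribution shift. Data and model are synthetic; the chapter evaluates deep ensembles on pharmaceutical assay data over time.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in for assay-based property prediction: an ensemble of
# bootstrapped linear models whose disagreement is the uncertainty.
x_train = rng.uniform(0, 1, 200)
y_train = 2.0 * x_train + 0.1 * rng.standard_normal(200)

models = []
for _ in range(10):                      # bootstrap ensemble of 10 members
    idx = rng.integers(0, 200, 200)
    models.append(np.polyfit(x_train[idx], y_train[idx], deg=1))

def predict(x):
    preds = np.array([np.polyval(m, x) for m in models])
    return preds.mean(axis=0), preds.std(axis=0)   # mean and uncertainty

# uncertainty in-distribution vs under distribution shift
_, s_in = predict(np.array([0.5]))       # inside the training range
_, s_out = predict(np.array([3.0]))      # shifted far outside it
print(f"uncertainty in-dist {s_in[0]:.4f}, shifted {s_out[0]:.4f}")
```

The qualitative behavior the chapter probes, ensemble spread inflating as test inputs drift away from the training distribution, already appears in this two-parameter setting.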
Estilos ABNT, Harvard, Vancouver, APA, etc.
2

McClarren, Ryan G. "Introduction to Uncertainty Quantification and Predictive Science." In Uncertainty Quantification and Predictive Computational Science. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-99525-0_1.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
3

McClarren, Ryan G. "Gaussian Process Emulators and Surrogate Models." In Uncertainty Quantification and Predictive Computational Science. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-99525-0_10.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
4

McClarren, Ryan G. "Predictive Models Informed by Simulation, Measurement, and Surrogates." In Uncertainty Quantification and Predictive Computational Science. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-99525-0_11.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
5

McClarren, Ryan G. "Epistemic Uncertainties: Dealing with a Lack of Knowledge." In Uncertainty Quantification and Predictive Computational Science. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-99525-0_12.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
6

McClarren, Ryan G. "Probability and Statistics Preliminaries." In Uncertainty Quantification and Predictive Computational Science. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-99525-0_2.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
7

McClarren, Ryan G. "Input Parameter Distributions." In Uncertainty Quantification and Predictive Computational Science. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-99525-0_3.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
8

McClarren, Ryan G. "Local Sensitivity Analysis Based on Derivative Approximations." In Uncertainty Quantification and Predictive Computational Science. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-99525-0_4.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
9

McClarren, Ryan G. "Regression Approximations to Estimate Sensitivities." In Uncertainty Quantification and Predictive Computational Science. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-99525-0_5.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
10

McClarren, Ryan G. "Adjoint-Based Local Sensitivity Analysis." In Uncertainty Quantification and Predictive Computational Science. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-99525-0_6.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.

Trabalhos de conferências sobre o assunto "Predictive uncertainty quantification"

1

Mossina, Luca, Joseba Dalmau, and Léo Andéol. "Conformal Semantic Image Segmentation: Post-hoc Quantification of Predictive Uncertainty." In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2024. http://dx.doi.org/10.1109/cvprw63382.2024.00361.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
2

Park, Seong-Ho, Hong Je-Gal, and Hyun-Suk Lee. "A Novel Data-Driven Soft Sensor in Metaverse Provisioning Predictive Credibility Based on Uncertainty Quantification." In 2024 IEEE International Conference on Metaverse Computing, Networking, and Applications (MetaCom). IEEE, 2024. http://dx.doi.org/10.1109/metacom62920.2024.00053.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
3

Duan, Jinhao, Hao Cheng, Shiqi Wang, et al. "Shifting Attention to Relevance: Towards the Predictive Uncertainty Quantification of Free-Form Large Language Models." In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.acl-long.276.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
4

Dehon, Victor, Paulina Quintanilla, and Antonio Del Rio Chanona. "Probabilistic Model Predictive Control for Mineral Flotation using Gaussian Processes." In The 35th European Symposium on Computer Aided Process Engineering. PSE Press, 2025. https://doi.org/10.69997/sct.122018.

Texto completo da fonte
Resumo:
Recent advancements in machine learning and time series analysis have opened new avenues for improving predictive control in complex systems such as mineral flotation. Techniques leveraging multivariate predictive control in mineral flotation have seen significant progress in recent years. However, challenges in developing an accurate dynamic model that encapsulates both the pulp and froth phases have hindered further advancements. Now, with a readily available model containing equations that describe the physics of flotation froths, an opportunity for novel control strategies presents itself. In this study, a Gaussian Process (GP) Model Predictive Control (MPC) strategy is proposed to integrate uncertainty quantification directly into the control framework. By leveraging the probabilistic nature of GP models, this approach captures process variability and adapts dynamically to new data, ensuring continuous refinement of the GP model within the MPC strategy. Unlike previous implementations where model parameters remained static, this methodology updates the GP model in real time, allowing for improved decision-making in the face of process uncertainty. The GP model was trained, optimized with JAX, evaluated using relevant metrics, and implemented as a surrogate within the MPC framework. The results demonstrate the capability of the GP model to accurately represent process dynamics while minimising prediction errors. Moreover, incorporating uncertainty reduction through standard deviation minimisation in the objective function enhances both control performance and system robustness. This paper establishes a basis for using GP-MPC to enhance both the accuracy and robustness of mineral froth flotation control.
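A stripped-down sketch of the GP-MPC idea: a small numpy Gaussian process over a hypothetical control-to-loss map, with the predictive standard deviation penalized in the control objective. The kernel, data, and penalty weight are all invented and stand in for the paper's JAX-optimized GP surrogate of flotation dynamics.

```python
import numpy as np

rng = np.random.default_rng(4)

def rbf(a, b, ls=0.3):
    """Squared-exponential kernel between 1-D input arrays."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

# GP surrogate of a hypothetical one-step flotation quality loss vs. air rate u
u_train = rng.uniform(0, 1, 15)
y_train = (u_train - 0.6) ** 2 + 0.02 * rng.standard_normal(15)

K = rbf(u_train, u_train) + 1e-4 * np.eye(15)   # kernel matrix + jitter
alpha = np.linalg.solve(K, y_train)

def gp_predict(u):
    k = rbf(u, u_train)
    mean = k @ alpha
    var = 1.0 - np.einsum('ij,ji->i', k, np.linalg.solve(K, k.T))
    return mean, np.sqrt(np.clip(var, 0, None))

# MPC-style control selection: penalize predictive uncertainty in the objective
candidates = np.linspace(0, 1, 101)
mean, std = gp_predict(candidates)
lam = 1.0                                        # uncertainty penalty weight
u_star = candidates[np.argmin(mean + lam * std)]
print(f"chosen air rate: {u_star:.2f}")
```

Minimizing mean plus a standard-deviation term is the simplest version of the "uncertainty reduction through standard deviation minimisation" the abstract describes: the controller avoids regions where the surrogate is both poor and poorly known.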
Estilos ABNT, Harvard, Vancouver, APA, etc.
5

Berthier, Louis, Ahmed Shokry, Eric Moulines, Guillaume Ramelet, and Sylvain Desroziers. "Knowledge Discovery in Large-Scale Batch Processes through Explainable Boosted Models and Uncertainty Quantification: Application to Rubber Mixing." In The 35th European Symposium on Computer Aided Process Engineering. PSE Press, 2025. https://doi.org/10.69997/sct.183525.

Texto completo da fonte
Resumo:
Rubber mixing (RM) is a vital batch process producing high-quality composites, which serve as input material for manufacturing different types of final products, such as tires. Due to its complexity, this process faces two main challenges regarding the final quality: i) lack of online measurement and ii) limited comprehension of the influence of the different factors involved in the process. While data-driven and machine learning (ML) based soft-sensing methods have been widely applied to address the first challenge, the second challenge, to the best of the author's knowledge, has not yet been addressed in the rubber industry. This work presents a data-driven method for extracting knowledge and providing explainability in the quality prediction in RM processes. The method centers on an XGBoost model while leveraging high-dimensional data collected over extended time periods from one of Michelin's complex mixing processes. First, a recursive feature elimination-based procedure is used for selecting relevant features, which reduces the number of input features used for building the ML model by 82% while improving its predictive performance by 17%. Secondly, SHapley Additive exPlanations (SHAP) techniques are employed to explain the ML model's predictions through global and local analyses of feature interactions. The selected quality-related variables can be leveraged to improve process control and supervision. Finally, an uncertainty quantification (UQ) module, based on Split Conformal Prediction (SCP), is combined with the ML model, providing confidence intervals with 90% coverage and empirically verified theoretical guarantees. This module ensures prediction reliability and robustness in real applications.
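The Split Conformal Prediction module can be illustrated generically: calibrate a quantile of absolute residuals on held-out data, then wrap any point predictor in intervals with the target coverage. A linear fit on synthetic data stands in for the paper's XGBoost quality model.

```python
import numpy as np

rng = np.random.default_rng(5)

# Split conformal prediction around an arbitrary point predictor
# (synthetic data; a linear fit replaces the XGBoost model).
x = rng.uniform(0, 1, 600)
y = 3.0 * x + rng.standard_normal(600) * 0.2

fit_idx, cal_idx = np.arange(0, 400), np.arange(400, 600)
coef = np.polyfit(x[fit_idx], y[fit_idx], deg=1)        # "training" split

# calibration split: absolute residuals are the conformity scores
scores = np.abs(y[cal_idx] - np.polyval(coef, x[cal_idx]))
alpha = 0.10                                             # target 90% coverage
q = np.quantile(scores, np.ceil((len(scores) + 1) * (1 - alpha)) / len(scores))

# prediction interval for new points: [f(x) - q, f(x) + q]
x_new = rng.uniform(0, 1, 1000)
y_new = 3.0 * x_new + rng.standard_normal(1000) * 0.2
covered = np.abs(y_new - np.polyval(coef, x_new)) <= q
print(f"empirical coverage: {covered.mean():.3f} (target {1 - alpha:.2f})")
```

The finite-sample correction (the ceil term in the quantile level) is what gives SCP its distribution-free coverage guarantee, independent of the underlying model.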
Estilos ABNT, Harvard, Vancouver, APA, etc.
6

Jiet, Moses Makuei, Prateek Verma, Aahash Kamble, and Chetan Puri. "A Review on Bayesian Methods for Uncertainty Quantification in Machine Learning Models Enhancing Predictive Accuracy and Model Interpretability." In 2024 Second International Conference on Intelligent Cyber Physical Systems and Internet of Things (ICoICI). IEEE, 2024. http://dx.doi.org/10.1109/icoici62503.2024.10696308.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
7

Dekhici, Benaissa, and Michael Short. "Data-Driven Modelling of Biogas Production Using Multi-Task Gaussian Processes." In The 35th European Symposium on Computer Aided Process Engineering. PSE Press, 2025. https://doi.org/10.69997/sct.121877.

Texto completo da fonte
Resumo:
This study introduces the novel application of a Multi-Task Gaussian Process (MTGP) model to predict biogas production and critical anaerobic digestion (AD) performance indicators (soluble COD, volatile fatty acids (VFAs)), addressing feedstock variability and dynamic process behavior. We compare the MTGP against the widely used mechanistic AM2 model to evaluate its accuracy and applicability for probabilistic modeling in AD systems. The MTGP framework leverages multi-output correlations and uncertainty quantification, trained on experimental data, achieving superior predictive performance over AM2 in this study, with lower RMSE (SCOD: 0.32 g/L; VFAs: 0.87 mmol/L; biogas: 0.15 L/day) and higher R² values (SCOD: 0.91, VFAs: 0.94, biogas: 0.88) under the conditions tested. While AM2 provides biochemical insights, its reliance on unvalidated assumptions may limit robustness. The flexibility and precision of the MTGP suggest its potential for real-world applications such as Bayesian Optimization and Design of Experiments, enabling data-driven process enhancement without mechanistic constraints. This work establishes MTGP as a pioneering tool for AD optimization, bridging data-driven efficiency with practical bioenergy challenges.
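A minimal numpy sketch of a multi-task GP in the intrinsic-coregionalization form: two correlated outputs share an input kernel, coupled through an assumed task-covariance matrix. All data and hyperparameters here are invented; the paper's MTGP is trained on AD experiments with its own kernel choices.

```python
import numpy as np

rng = np.random.default_rng(6)

def rbf(a, b, ls=1.0):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

# Two correlated synthetic outputs (think SCOD and biogas rate vs. time)
x = np.linspace(0, 5, 12)
y1 = np.sin(x) + 0.05 * rng.standard_normal(12)        # task 1 observations
y2 = 0.8 * np.sin(x) + 0.05 * rng.standard_normal(12)  # correlated task 2

B = np.array([[1.0, 0.8], [0.8, 1.0]])   # assumed task covariance
Kx = rbf(x, x)
K = np.kron(B, Kx) + 1e-4 * np.eye(24)   # full multi-task covariance
y = np.concatenate([y1, y2])             # stacked: task 1 block, task 2 block
alpha = np.linalg.solve(K, y)

# predict task 2 at a new input, borrowing strength from task 1 data
x_new = np.array([2.5])
k_new = np.kron(B[1], rbf(x_new, x))     # cross-covariance row for task 2
pred = k_new @ alpha
print(f"task-2 prediction at x=2.5: {pred[0]:.3f} "
      f"(truth {0.8*np.sin(2.5):.3f})")
```

The Kronecker structure B ⊗ K(x, x') is what lets observations of one output inform predictions of the other, the "multi-output correlations" the abstract credits for the MTGP's accuracy.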
8

Zhou, Hao, Yanze Zhang, and Wenhao Luo. "Safety-Critical Control with Uncertainty Quantification using Adaptive Conformal Prediction." In 2024 American Control Conference (ACC). IEEE, 2024. http://dx.doi.org/10.23919/acc60939.2024.10644391.

9

Grewal, Ruben, Paolo Tonella, and Andrea Stocco. "Predicting Safety Misbehaviours in Autonomous Driving Systems Using Uncertainty Quantification." In 2024 IEEE Conference on Software Testing, Verification and Validation (ICST). IEEE, 2024. http://dx.doi.org/10.1109/icst60714.2024.00016.

10

Neumeier, Marion, Sebastian Dorn, Michael Botsch, and Wolfgang Utschick. "Reliable Trajectory Prediction and Uncertainty Quantification with Conditioned Diffusion Models." In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2024. http://dx.doi.org/10.1109/cvprw63382.2024.00350.


Reports of organizations on the topic "Predictive uncertainty quantification"

1

Adams, Marvin. Phenomena-based Uncertainty Quantification in Predictive Coupled-Physics Reactor Simulations. Office of Scientific and Technical Information (OSTI), 2017. http://dx.doi.org/10.2172/1364745.

2

Favorite, Jeffrey A., Garrett James Dean, Keith C. Bledsoe, et al. Predictive Modeling, Inverse Problems, and Uncertainty Quantification with Application to Emergency Response. Office of Scientific and Technical Information (OSTI), 2018. http://dx.doi.org/10.2172/1432629.

3

Lawson, Matthew, Bert J. Debusschere, Habib N. Najm, Khachik Sargsyan, and Jonathan H. Frank. Uncertainty quantification of cinematic imaging for development of predictive simulations of turbulent combustion. Office of Scientific and Technical Information (OSTI), 2010. http://dx.doi.org/10.2172/1011617.

4

Ye, Ming. Computational Bayesian Framework for Quantification and Reduction of Predictive Uncertainty in Subsurface Environmental Modeling. Office of Scientific and Technical Information (OSTI), 2019. http://dx.doi.org/10.2172/1491235.

5

Marzouk, Youssef. Final Report, DOE Early Career Award: Predictive modeling of complex physical systems: new tools for statistical inference, uncertainty quantification, and experimental design. Office of Scientific and Technical Information (OSTI), 2016. http://dx.doi.org/10.2172/1312896.

6

Cattaneo, Matias D., Richard K. Crump, and Weining Wang. Beta-Sorted Portfolios. Federal Reserve Bank of New York, 2023. http://dx.doi.org/10.59576/sr.1068.

Abstract:
Beta-sorted portfolios—portfolios composed of assets with similar covariation with selected risk factors—are a popular tool in empirical finance for analyzing models of (conditional) expected returns. Despite their widespread use, little is known about their statistical properties, in contrast to comparable procedures such as two-pass regressions. We formally investigate the properties of beta-sorted portfolio returns by casting the procedure as a two-step nonparametric estimator with a nonparametric first step and beta-adaptive portfolio construction. Our framework rationalizes the well-known estimation algorithm with precise economic and statistical assumptions on the general data generating process. We provide conditions that ensure consistency and asymptotic normality, along with new uniform inference procedures allowing for uncertainty quantification and general hypothesis testing for financial applications. We show that the rate of convergence of the estimator is non-uniform and depends on the beta value of interest. We also show that the widely used Fama-MacBeth variance estimator is asymptotically valid but is conservative in general and can be very conservative in empirically relevant settings. We propose a new variance estimator, which is always consistent, and provide an empirical implementation that produces valid inference. In our empirical application we introduce a novel risk factor—a measure of the business credit cycle—and show that it is strongly predictive of both the cross-section and time-series behavior of U.S. stock returns.
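The two-step procedure the abstract analyzes (first-pass time-series betas per asset, then sorting assets into beta-quantile portfolios) can be sketched on simulated data as follows. The factor process, return model, and number of portfolios are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, n_portfolios = 240, 100, 5

factor = rng.normal(size=T)                 # risk-factor realizations
true_beta = rng.uniform(-1.0, 2.0, size=N)  # heterogeneous exposures
returns = true_beta * factor[:, None] + 0.5 * rng.normal(size=(T, N))

# Step 1: first-pass time-series OLS, one beta per asset.
fx = factor - factor.mean()
beta_hat = fx @ (returns - returns.mean(axis=0)) / (fx @ fx)

# Step 2: assign assets to beta-quantile bins.
edges = np.quantile(beta_hat, np.linspace(0, 1, n_portfolios + 1))
bins = np.clip(np.digitize(beta_hat, edges[1:-1]), 0, n_portfolios - 1)

# Step 3: equal-weighted portfolio return series per bin.
port_returns = np.array(
    [returns[:, bins == k].mean(axis=1) for k in range(n_portfolios)]
)
port_beta = np.array(
    [true_beta[bins == k].mean() for k in range(n_portfolios)]
)
```

Sorting on the estimated betas recovers portfolios whose average true exposures increase monotonically across bins; the paper's contribution is characterizing the (non-uniform) sampling uncertainty of exactly this kind of construction.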
7

Gonzales, Lindsey M., Thomas M. Hall, Kendra L. Van Buren, Steven R. Anton, and Francois M. Hemez. Quantification of Prediction Bounds Caused by Model Form Uncertainty. Office of Scientific and Technical Information (OSTI), 2013. http://dx.doi.org/10.2172/1095195.

8

Adams, Jason, Brandon Berman, Joshua Michalenko, and Rina Deka. Non-conformity Scores for High-Quality Uncertainty Quantification from Conformal Prediction. Office of Scientific and Technical Information (OSTI), 2023. http://dx.doi.org/10.2172/2430248.

9

Vecherin, Sergey, Stephen Ketcham, Aaron Meyer, Kyle Dunn, Jacob Desmond, and Michael Parker. Short-range near-surface seismic ensemble predictions and uncertainty quantification for layered medium. Engineer Research and Development Center (U.S.), 2022. http://dx.doi.org/10.21079/11681/45300.

Abstract:
To make a prediction for seismic signal propagation, one needs to specify the physical properties and subsurface ground structure of the site. This information is frequently unknown or estimated with significant uncertainty. This paper describes a methodology for probabilistic seismic ensemble prediction for vertically stratified soils and short ranges with no in situ site characterization. Instead of specifying viscoelastic site properties, the methodology operates with probability distribution functions of these properties, taking into account analytical and empirical relationships among viscoelastic variables. This yields ensemble realizations of signal arrivals at specified locations, where statistical properties of the signals can be estimated. Such ensemble predictions can be useful for preliminary site characterization, military applications, and risk analysis for remote or inaccessible locations for which no data can be acquired. Comparison with experiments revealed that measured signals are not always within the predicted ranges of variability. Variance-based global sensitivity analysis has shown that the most significant parameters for signal amplitude predictions in the developed stochastic model are the uncertainties in the shear quality factor and the Poisson ratio above the water table depth.
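The ensemble idea described above, drawing uncertain medium properties from probability distributions, running a forward model per draw, and reporting the spread of predicted signals, can be illustrated with a deliberately toy forward model. The attenuation formula and parameter priors below are illustrative assumptions, not the report's actual viscoelastic model:

```python
import numpy as np

rng = np.random.default_rng(42)
n_draws, distance_m, freq_hz = 2000, 50.0, 30.0

# Uncertain medium parameters, sampled from illustrative priors.
shear_speed = rng.uniform(150.0, 400.0, n_draws)  # shear wave speed, m/s
quality_q = rng.uniform(5.0, 50.0, n_draws)       # shear quality factor

# Toy forward model: geometric spreading plus anelastic attenuation,
# A(r) = (1/r) * exp(-pi * f * r / (Q * c)).
amplitude = (1.0 / distance_m) * np.exp(
    -np.pi * freq_hz * distance_m / (quality_q * shear_speed)
)

# Ensemble statistics: the predicted range of variability.
lo, med, hi = np.quantile(amplitude, [0.05, 0.5, 0.95])
```

The quantile band plays the role of the "predicted ranges of variability" the abstract compares against measurements, and sweeping one prior at a time while holding the others fixed is the starting point for the variance-based sensitivity analysis it mentions.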
10

Glimm, James, Yunha Lee, Kenny Q. Ye, and David H. Sharp. Prediction Using Numerical Simulations, A Bayesian Framework for Uncertainty Quantification and its Statistical Challenge. Defense Technical Information Center, 2002. http://dx.doi.org/10.21236/ada417842.
