Theses on the topic "Reliable quantification of uncertainty"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the 50 best theses for your research on the topic "Reliable quantification of uncertainty".
Next to each source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Explore theses on a wide variety of disciplines and organise your bibliography correctly.
Elfverson, Daniel. "Multiscale Methods and Uncertainty Quantification". Doctoral thesis, Uppsala universitet, Avdelningen för beräkningsvetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-262354.
Parkinson, Matthew. "Uncertainty quantification in Radiative Transport". Thesis, University of Bath, 2019. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.767610.
Carson, J. "Uncertainty quantification in palaeoclimate reconstruction". Thesis, University of Nottingham, 2015. http://eprints.nottingham.ac.uk/29076/.
Boopathy, Komahan. "Uncertainty Quantification and Optimization Under Uncertainty Using Surrogate Models". University of Dayton / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1398302731.
Cheng, Haiyan. "Uncertainty Quantification and Uncertainty Reduction Techniques for Large-scale Simulations". Diss., Virginia Tech, 2009. http://hdl.handle.net/10919/28444.
Ph. D.
Fiorito, Luca. "Nuclear data uncertainty propagation and uncertainty quantification in nuclear codes". Doctoral thesis, Universite Libre de Bruxelles, 2016. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/238375.
Doctorate in Engineering Sciences and Technology
Alvarado, Martin Guillermo. "Quantification of uncertainty during history matching". Texas A&M University, 2003. http://hdl.handle.net/1969/463.
Jimenez, Edwin. "Uncertainty quantification of nonlinear stochastic phenomena". Tallahassee, Florida: Florida State University, 2009. http://etd.lib.fsu.edu/theses/available/etd-11092009-161351/.
Advisor: M.Y. Hussaini, Florida State University, College of Arts and Sciences, Dept. of Mathematics. Title and description from dissertation home page (viewed on Mar. 16, 2010). Document formatted into pages; contains xii, 113 pages. Includes bibliographical references.
Kalmikov, Alexander G. "Uncertainty Quantification in ocean state estimation". Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/79291.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 158-160).
Quantifying uncertainty and error bounds is a key outstanding challenge in ocean state estimation and climate research. It is particularly difficult due to the large dimensionality of this nonlinear estimation problem and the number of uncertain variables involved. The "Estimating the Circulation and Climate of the Oceans" (ECCO) consortium has developed a scalable system for dynamically consistent estimation of global time-evolving ocean state by optimal combination of ocean general circulation model (GCM) with diverse ocean observations. The estimation system is based on the "adjoint method" solution of an unconstrained least-squares optimization problem formulated with the method of Lagrange multipliers for fitting the dynamical ocean model to observations. The dynamical consistency requirement of ocean state estimation necessitates this approach over sequential data assimilation and reanalysis smoothing techniques. In addition, it is computationally advantageous because calculation and storage of large covariance matrices is not required. However, this is also a drawback of the adjoint method, which lacks a native formalism for error propagation and quantification of assimilated uncertainty. The objective of this dissertation is to resolve that limitation by developing a feasible computational methodology for uncertainty analysis in dynamically consistent state estimation, applicable to the large dimensionality of global ocean models. Hessian (second derivative-based) methodology is developed for Uncertainty Quantification (UQ) in large-scale ocean state estimation, extending the gradient-based adjoint method to employ the second order geometry information of the model-data misfit function in a high-dimensional control space. Large error covariance matrices are evaluated by inverting the Hessian matrix with the developed scalable matrix-free numerical linear algebra algorithms. Hessian-vector product and Jacobian derivative codes of the MIT general circulation model (MITgcm) are generated by means of algorithmic differentiation (AD). Computational complexity of the Hessian code is reduced by tangent linear differentiation of the adjoint code, which preserves the speedup of adjoint checkpointing schemes in the second derivative calculation. A Lanczos algorithm is applied for extracting the leading rank eigenvectors and eigenvalues of the Hessian matrix. The eigenvectors represent the constrained uncertainty patterns. The inverse eigenvalues are the corresponding uncertainties. The dimensionality of UQ calculations is reduced by eliminating the uncertainty null-space unconstrained by the supplied observations. Inverse and forward uncertainty propagation schemes are designed for assimilating observation and control variable uncertainties, and for projecting these uncertainties onto oceanographic target quantities. Two versions of these schemes are developed: one evaluates reduction of prior uncertainties, while another does not require prior assumptions. The analysis of uncertainty propagation in the ocean model is time-resolving. It captures the dynamics of uncertainty evolution and reveals transient and stationary uncertainty regimes. The system is applied to quantifying uncertainties of Antarctic Circumpolar Current (ACC) transport in a global barotropic configuration of the MITgcm. The model is constrained by synthetic observations of sea surface height and velocities. The control space consists of two-dimensional maps of initial and boundary conditions and model parameters. 
The size of the Hessian matrix is O(10^10) elements, which would require O(60 GB) of uncompressed storage. It is demonstrated how the choice of observations and their geographic coverage determines the reduction in uncertainties of the estimated transport. The system also yields information on how well the control fields are constrained by the observations. The effects of controls uncertainty reduction due to decrease of diagonal covariance terms are compared to dynamical coupling of controls through off-diagonal covariance terms. The correlations of controls introduced by observation uncertainty assimilation are found to dominate the reduction of uncertainty of transport. An idealized analytical model of ACC guides a detailed time-resolving understanding of uncertainty dynamics. Keywords: Adjoint model uncertainty, sensitivity, posterior error reduction, reduced rank Hessian matrix, Automatic Differentiation, ocean state estimation, barotropic model, Drake Passage transport.
by Alexander G. Kalmikov.
Ph.D.
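To make the matrix-free Hessian approach described in the entry above concrete, here is a minimal sketch under the assumption of a toy quadratic misfit in place of an ocean model: only Hessian-vector products are supplied, a Lanczos-type eigensolver extracts the leading eigenpairs, and the inverse eigenvalues give posterior variances along the constrained directions.

    # Minimal sketch (toy problem): matrix-free extraction of leading Hessian
    # eigenpairs with a Lanczos-type solver, as in Hessian-based UQ.
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, eigsh

    n = 500                                         # control-space dimension (toy size)
    rng = np.random.default_rng(0)
    G = rng.standard_normal((30, n)) / np.sqrt(n)   # assumed linearized observation operator

    def hessian_vector_product(v):
        # Gauss-Newton Hessian of a least-squares misfit: H v = G^T (G v)
        return G.T @ (G @ v)

    H = LinearOperator((n, n), matvec=hessian_vector_product)

    k = 10                                          # leading eigenpairs to retain
    eigvals, eigvecs = eigsh(H, k=k, which="LM")    # Lanczos iteration, largest eigenvalues

    # Inverse eigenvalues approximate posterior variances along the constrained
    # eigenvector directions; the unconstrained null-space is excluded.
    posterior_var = 1.0 / eigvals
    print(posterior_var)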
Roy, Pamphile. "Uncertainty quantification in high dimensional problems". Thesis, Toulouse, INPT, 2019. http://www.theses.fr/2019INPT0038.
Uncertainties are predominant in the world that we know. Referring to a nominal value is therefore too restrictive, especially when it comes to complex systems. Understanding the nature and the impact of these uncertainties has become an important aspect of engineering work. From a societal point of view, uncertainties play a role in decision-making: following the European Commission's Better Regulation Guideline, impact assessments are now advised to take uncertainties into account. In order to understand these uncertainties, the mathematical field of uncertainty quantification (UQ) has been formed. UQ encompasses a large palette of statistical tools and seeks to link a set of input perturbations on a system (design of experiments) to a quantity of interest. The purpose of this work is to propose improvements on various methodological aspects of uncertainty quantification applied to costly numerical simulations. This is achieved by using existing methods with a multi-strategy approach but also by creating new methods. In this context, novel sampling and resampling approaches have been developed to better capture the variability of the physical phenomenon when dealing with a high number of perturbed inputs; these reduce the number of simulations required to describe the system. Moreover, novel methods are proposed to visualize uncertainties when dealing with either a high-dimensional input parameter space or a high-dimensional quantity of interest. The developed methods can be used in various fields such as hydraulic and aerodynamic modelling. Their capabilities are demonstrated on realistic systems using well-established computational fluid dynamics tools. Lastly, they are not limited to numerical experiments and can equally be used for real experiments.
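As an illustration of the space-filling designs of experiments on which such sampling work builds, the following is a minimal sketch using a plain Latin hypercube design; the number of inputs, the bounds, and the sample size are arbitrary assumptions, and this is not the thesis's own resampling scheme.

    # Minimal sketch: space-filling design of experiments for a costly simulator.
    import numpy as np
    from scipy.stats import qmc

    sampler = qmc.LatinHypercube(d=4, seed=42)      # 4 uncertain inputs (assumed)
    unit_sample = sampler.random(n=64)              # 64 runs in [0, 1]^4

    # Map the unit hypercube onto assumed physical bounds for each input.
    lower = [0.5, 10.0, 1e-3, 0.0]
    upper = [1.5, 50.0, 1e-1, 2.0]
    design = qmc.scale(unit_sample, lower, upper)

    # Lower discrepancy indicates more uniform coverage of the input space.
    print("centered L2 discrepancy:", qmc.discrepancy(unit_sample))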
Timmins, Benjamin H. "Automatic Particle Image Velocimetry Uncertainty Quantification". DigitalCommons@USU, 2011. https://digitalcommons.usu.edu/etd/884.
Malenova, Gabriela. "Uncertainty quantification for high frequency waves". Licentiate thesis, KTH, Numerisk analys, NA, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-186287.
Cousins, William Bryan. "Boundary Conditions and Uncertainty Quantification for Hemodynamics". Thesis, North Carolina State University, 2013. http://pqdtopen.proquest.com/#viewpdf?dispub=3575896.
We address outflow boundary conditions for blood flow modeling. In particular, we consider a variety of fundamental issues in the structured tree boundary condition. We provide a theoretical analysis of the numerical implementation of the structured tree, showing that it is sensible but must be performed with great care. We also perform analytical and numerical studies on the sensitivity of model output on the structured tree's defining geometrical parameters. The most important component of this dissertation is the derivation of the new, generalized structured tree boundary condition. Unlike the original structured tree condition, the generalized structured tree does not contain a temporal periodicity assumption and is thus applicable to a much broader class of blood flow simulations. We describe a numerical implementation of this new boundary condition and show that the original structured tree is in fact a rough approximation of the new, generalized condition.
We also investigate parameter selection for outflow boundary conditions, and attempt to determine a set of structured tree parameters that gives reasonable simulation results without requiring any calibration. We are successful in doing so for a simulation of the systemic arterial tree, but the same parameter set yields physiologically unreasonable results in simulations of the Circle of Willis. Finally, we investigate the extension of recently introduced PDF methods to smooth solutions of systems of hyperbolic balance laws subject to uncertain inputs. These methods, currently available only for scalar equations, would provide a powerful tool for quantifying uncertainty in predictions of blood flow and other phenomena governed by first order hyperbolic systems.
Teckentrup, Aretha Leonore. "Multilevel Monte Carlo methods and uncertainty quantification". Thesis, University of Bath, 2013. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.577753.
Strandberg, Rickard, and Johan Låås. "Uncertainty quantification using high-dimensional numerical integration". Thesis, KTH, Skolan för teknikvetenskap (SCI), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-195701.
El-Shanawany, Ashraf Ben Mamdouh. "Quantification of uncertainty in probabilistic safety analysis". Thesis, Imperial College London, 2016. http://hdl.handle.net/10044/1/48104.
Lam, Xuan-Binh. "Uncertainty quantification for stochastic subspace identification methods". Rennes 1, 2011. http://www.theses.fr/2011REN1S133.
In operational modal analysis, the modal parameters (frequencies, damping ratios, mode shapes) can be obtained by subspace-based identification methods and are defined up to a stochastic uncertainty. To assess the quality of the obtained results, it is essential to know confidence bounds on them. In this thesis, algorithms are developed that automatically compute such confidence bounds for the modal parameters characterizing a mechanical structure; they are validated on significant industrial examples. The uncertainty is first computed on the data, then propagated to the system matrices through sensitivity calculations, and finally to the modal parameters. The existing algorithms on which this thesis builds derive the uncertainty of the system matrices from the uncertainty on the covariances of the measured inputs. In this thesis, several results have been obtained. First, the uncertainty of the mode shapes is obtained with a more realistic computation scheme than before, using a normalization by the phase angle of the component of maximal value. Then, several subspace methods, and not only covariance-driven methods, are considered, such as the stochastic realization method ERA and the data-driven UPC method; for these methods, the uncertainty computation is made explicit. Two further problems are addressed: multi-order estimation with subspace methods, and estimation from data sets measured separately. For both problems, the corresponding uncertainty schemes are developed. In conclusion, this thesis develops uncertainty computation schemes for a family of subspace methods as well as for a number of practical problems, and it ends with the uncertainty computation for recursive methods. Subspace methods are considered a robust and consistent estimation approach for extracting modal parameters from time-domain data. The computation of uncertainties for these methods is now possible, making them even more credible for operational modal analysis.
Fadikar, Arindam. "Stochastic Computer Model Calibration and Uncertainty Quantification". Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/91985.
Doctor of Philosophy
Mathematical models are versatile and often provide accurate descriptions of physical events. Scientific models are used to study such events in order to gain understanding of the true underlying system. These models are often complex in nature and require advanced algorithms to solve their governing equations. Outputs from these models depend on external information (also called model input) supplied by the user. Model inputs may or may not have a physical meaning, and can sometimes be specific to the scientific model. More often than not, optimal values of these inputs are unknown and need to be estimated from a few actual observations. This process is known as the inverse problem, i.e. inferring the input from the output. The inverse problem becomes challenging when the mathematical model is stochastic in nature, i.e., multiple executions of the model result in different outcomes. In this dissertation, three methodologies are proposed for the calibration and prediction of a stochastic disease simulation model which simulates contagion of an infectious disease through human-human contact. The motivating examples are taken from the Ebola epidemic in West Africa in 2014 and seasonal flu in New York City, USA.
Hagues, Andrew W. "Uncertainty quantification for problems in radionuclide transport". Thesis, Imperial College London, 2011. http://hdl.handle.net/10044/1/9088.
Pettersson, Per. "Uncertainty Quantification and Numerical Methods for Conservation Laws". Doctoral thesis, Uppsala universitet, Avdelningen för beräkningsvetenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-188348.
Kraipeerapun, Pawalai. "Neural network classification based on quantification of uncertainty". Murdoch University, 2009. http://wwwlib.murdoch.edu.au/adt/browse/view/adt-MU20090526.100525.
Kraipeerapun, Pawalai. "Neural network classification based on quantification of uncertainty". PhD thesis, Murdoch University, 2009. http://researchrepository.murdoch.edu.au/699/.
Hunt, Stephen E. "Uncertainty Quantification Using Epi-Splines and Soft Information". Thesis, Monterey, California. Naval Postgraduate School, 2012. http://hdl.handle.net/10945/7361.
This thesis deals with the problem of measuring system performance in the presence of uncertainty. The system under consideration may be as simple as an Army vehicle subjected to a kinetic attack or as complex as the human cognitive process. Information about the system performance is found in the observed data points, which we call hard information, and may be collected from physical sensors, field test data, and computer simulations. Soft information is available from human sources such as subject-matter experts and analysts, and represents qualitative information about the system performance and the uncertainty present. We propose the use of epi-splines in a nonparametric framework that allows for the systematic integration of hard and soft information for the estimation of system performance density functions in order to quantify uncertainty. We conduct empirical testing of several benchmark analytical examples, where the true probability density functions are known. We compare the performance of the epi-spline estimator to kernel-based estimates and highlight a real-world problem context to illustrate the potential of the framework.
Chen, Qi. "Uncertainty quantification in assessment of damage ship survivability". Thesis, University of Strathclyde, 2012. http://oleg.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=19511.
Lebon, Jérémy. "Towards multifidelity uncertainty quantification for multiobjective structural design". PhD thesis, Université de Technologie de Compiègne, 2013. http://tel.archives-ouvertes.fr/tel-01002392.
Hristov, Peter O. "Numerical modelling and uncertainty quantification of biodiesel filters". Thesis, University of Liverpool, 2018. http://livrepository.liverpool.ac.uk/3024537/.
Texto completoLal, Rajnesh. "Data assimilation and uncertainty quantification in cardiovascular biomechanics". Thesis, Montpellier, 2017. http://www.theses.fr/2017MONTS088/document.
Cardiovascular blood flow simulations can fill several critical gaps in current clinical capabilities. They offer non-invasive ways to quantify hemodynamics in the heart and major blood vessels for patients with cardiovascular diseases, which cannot be directly obtained from medical imaging. Patient-specific simulations (incorporating data unique to the individual) enable individualised risk prediction and provide key insights into disease progression and/or the detection of abnormal physiology. They also provide means to systematically design and test new medical devices, and are used as predictive tools for surgical and personalised treatment planning, thus aiding clinical decision-making. Patient-specific predictive simulations require effective assimilation of medical data for reliable simulated predictions. This is usually achieved by the solution of an inverse hemodynamic problem, where uncertain model parameters are estimated using techniques for merging data and numerical models known as data assimilation methods. In this thesis, the inverse problem is solved through a data assimilation method using an ensemble Kalman filter (EnKF) for parameter estimation. By using an ensemble Kalman filter, the solution also comes with a quantification of the uncertainties of the estimated parameters. An ensemble Kalman filter-based parameter estimation algorithm is proposed for patient-specific hemodynamic computations in a schematic arterial network from uncertain clinical measurements. Several in silico scenarios (using synthetic data) are considered to investigate the efficiency of the parameter estimation algorithm using the EnKF. The usefulness of the parameter estimation algorithm is also assessed using experimental data from an in vitro test rig and real clinical data from a volunteer (patient-specific case). The proposed algorithm is evaluated on arterial networks which include single arteries, cases of bifurcation, a simple human arterial network, and a complex arterial network including the circle of Willis. The ultimate aim is to perform patient-specific hemodynamic analysis in the network of the circle of Willis. Common hemodynamic properties (parameters), like arterial wall properties (Young's modulus, wall thickness, and viscoelastic coefficient) and terminal boundary parameters (reflection coefficient and Windkessel model parameters), are estimated as the solution to an inverse problem using time-series pressure values and blood flow rate as measurements. It is also demonstrated that a proper reduced-order zero-dimensional compartment model can lead to a simple and reliable estimation of blood flow features in the circle of Willis. The simulations with the estimated parameters capture target pressure or flow rate waveforms at given specific locations.
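The ensemble Kalman filter update at the core of this parameter-estimation approach can be sketched in a few lines; the two-parameter forward model, noise level, and ensemble size below are assumptions for illustration only, not the arterial network model of the thesis.

    # Minimal sketch (toy example): one stochastic EnKF analysis step that updates
    # an ensemble of uncertain parameters from noisy observations of model output.
    import numpy as np

    rng = np.random.default_rng(1)

    def model(theta):
        # Assumed toy forward model mapping parameters to observable quantities.
        return np.array([theta[0] + theta[1], theta[0] * theta[1]])

    theta_true = np.array([1.2, 0.8])
    obs_noise_std = 0.05
    y_obs = model(theta_true) + rng.normal(0, obs_noise_std, size=2)

    n_ens = 200
    ensemble = rng.normal([1.0, 1.0], 0.3, size=(n_ens, 2))       # prior ensemble
    predictions = np.array([model(th) for th in ensemble])

    # Sample covariances between parameters and predicted observations.
    theta_anom = ensemble - ensemble.mean(axis=0)
    pred_anom = predictions - predictions.mean(axis=0)
    P_ty = theta_anom.T @ pred_anom / (n_ens - 1)
    P_yy = pred_anom.T @ pred_anom / (n_ens - 1) + obs_noise_std**2 * np.eye(2)

    K = P_ty @ np.linalg.inv(P_yy)                                # Kalman gain
    perturbed_obs = y_obs + rng.normal(0, obs_noise_std, size=(n_ens, 2))
    analysis = ensemble + (perturbed_obs - predictions) @ K.T

    print("posterior mean:", analysis.mean(axis=0))
    print("posterior std :", analysis.std(axis=0))

The spread of the analysis ensemble is what provides the uncertainty quantification that accompanies the parameter estimates.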
Zhang, Zheng, Ph. D. Massachusetts Institute of Technology. "Uncertainty quantification for integrated circuits and microelectromechanical systems". Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/99855.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 155-168).
Uncertainty quantification has become an important task and an emerging topic in many engineering fields. Uncertainties can be caused by many factors, including inaccurate component models, the stochastic nature of some design parameters, external environmental fluctuations (e.g., temperature variation), measurement noise, and so forth. In order to enable robust engineering design and optimal decision making, efficient stochastic solvers are highly desired to quantify the effects of uncertainties on the performance of complex engineering designs. Process variations have become increasingly important in the semiconductor industry due to the shrinking of micro- and nano-scale devices. Such uncertainties have led to remarkable performance variations at both circuit and system levels, and they cannot be ignored any more in the design of nano-scale integrated circuits and microelectromechanical systems (MEMS). In order to simulate the resulting stochastic behaviors, Monte Carlo techniques have been employed in SPICE-like simulators for decades, and they still remain the mainstream techniques in this community. Despite their ease of implementation, Monte Carlo simulators are often too time-consuming due to the huge number of repeated simulations. This thesis reports the development of several stochastic spectral methods to accelerate the uncertainty quantification of integrated circuits and MEMS. Stochastic spectral methods have emerged as a promising alternative to Monte Carlo in many engineering applications, but their performance may degrade significantly as the parameter dimensionality increases. In this work, we develop several efficient stochastic simulation algorithms for various integrated circuits and MEMS designs, including problems with both low-dimensional and high-dimensional random parameters, as well as complex systems with hierarchical design structures. The first part of this thesis reports a novel stochastic-testing circuit/MEMS simulator as well as its advanced simulation engine for radio-frequency (RF) circuits. The proposed stochastic testing can be regarded as a hybrid variant of stochastic Galerkin and stochastic collocation: it is an intrusive simulator with decoupled computation and adaptive time stepping inside the solver. As a result, our simulator gains remarkable speedup over standard stochastic spectral methods and Monte Carlo in the DC, transient and AC simulation of various analog, digital and RF integrated circuits. An advanced uncertainty quantification algorithm for the periodic steady states (or limit cycles) of analog/RF circuits is further developed by combining stochastic testing and shooting Newton. Our simulator is verified by various integrated circuits, showing 10² to 10³ times speedup over Monte Carlo when a similar level of accuracy is required. The second part of this thesis presents two approaches for hierarchical uncertainty quantification. In hierarchical uncertainty quantification, we propose to employ stochastic spectral methods at different design hierarchies to simulate efficiently complex systems. The key idea is to ignore the multiple random parameters inside each subsystem and to treat each subsystem as a single random parameter. The main difficulty is to recompute the basis functions and quadrature rules that are required for the high-level uncertainty quantification, since the density function of an obtained low-level surrogate model is generally unknown.
In order to address this issue, the first proposed algorithm computes new basis functions and quadrature points in the low-level (and typically high-dimensional) parameter space. This approach is very accurate; however, it may suffer from the curse of dimensionality. In order to handle high-dimensional problems, a sparse stochastic testing simulator based on analysis of variance (ANOVA) is developed to accelerate the low-level simulation. At the high level, a fast algorithm based on tensor decompositions is proposed to compute the basis functions and Gauss quadrature points. Our algorithm is verified by some MEMS/IC co-design examples with both low-dimensional and high-dimensional (up to 184) random parameters, showing about 10² times speedup over the state-of-the-art techniques. The second proposed hierarchical uncertainty quantification technique instead constructs a density function for each subsystem by some monotonic interpolation schemes. This approach is capable of handling general, possibly non-smooth, low-level surrogate models, and it allows computing new basis functions and quadrature points in an analytical way. The computational techniques developed in this thesis are based on stochastic differential algebraic equations, but the results can also be applied to many other engineering problems (e.g., silicon photonics, heat transfer problems, fluid dynamics, electromagnetics and power systems). There exist many research opportunities in this direction. Important open problems include how to solve high-dimensional problems (by both deterministic and randomized algorithms), how to deal with discontinuous response surfaces, how to handle correlated non-Gaussian random variables, how to couple noise and random parameters in uncertainty quantification, how to deal with correlated and time-dependent subsystems in hierarchical uncertainty quantification, and so forth.
by Zheng Zhang.
Ph. D.
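As a toy illustration of the stochastic spectral idea underlying the solvers described in the entry above (this is plain non-intrusive polynomial chaos with a single Gaussian parameter, not the thesis's stochastic-testing simulator), an assumed scalar response can be projected onto Hermite polynomials:

    # Minimal sketch: non-intrusive polynomial chaos expansion for a scalar
    # response depending on one standard-normal parameter xi.
    import numpy as np
    from math import factorial
    from numpy.polynomial.hermite_e import hermegauss, hermeval

    def response(xi):
        # Assumed toy response, e.g. a gain depending on a parameter that varies
        # as R = R0 * (1 + 0.1 * xi).
        return 1.0 / (1.0 + 0.1 * xi) ** 2

    order = 6
    nodes, weights = hermegauss(order + 1)      # probabilists' Gauss-Hermite rule
    weights = weights / np.sqrt(2 * np.pi)      # normalize to the standard normal measure

    # Project the sampled response onto the Hermite polynomials He_k.
    coeffs = []
    for k in range(order + 1):
        basis = np.zeros(order + 1)
        basis[k] = 1.0
        Hk = hermeval(nodes, basis)
        coeffs.append(np.sum(weights * response(nodes) * Hk) / factorial(k))  # E[He_k^2] = k!

    mean = coeffs[0]
    variance = sum(c ** 2 * factorial(k) for k, c in enumerate(coeffs) if k > 0)
    print("PCE mean:", mean, "PCE std:", np.sqrt(variance))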
Pascual, Blanca. "Uncertainty quantification for complex structures : statics and dynamics". Thesis, Swansea University, 2012. https://cronfa.swan.ac.uk/Record/cronfa42987.
Mulani, Sameer B. "Uncertainty Quantification in Dynamic Problems With Large Uncertainties". Diss., Virginia Tech, 2006. http://hdl.handle.net/10919/28617.
Ph. D.
Macatula, Romcholo Yulo. "Linear Parameter Uncertainty Quantification using Surrogate Gaussian Processes". Thesis, Virginia Tech, 2020. http://hdl.handle.net/10919/99411.
Master of Science
Parameter uncertainty quantification seeks to determine both estimates of model parameters and the uncertainty associated with those estimates. Examples of model parameters include physical properties such as density or growth rates, or even deblurred images. Previous work has shown that replacing data with a surrogate model can provide promising estimates with low uncertainty. We extend these methods to the specific setting of linear models. Theoretical results are tested on simulated computed tomography problems.
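A minimal sketch of the surrogate idea, assuming a one-parameter toy forward model and using a standard Gaussian-process regressor in place of the thesis's specific construction:

    # Minimal sketch: a Gaussian-process surrogate fitted to a handful of runs of
    # an assumed expensive forward model, returning predictions with uncertainty.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    def expensive_model(theta):
        # Stand-in for a costly simulation depending on one parameter.
        return np.sin(3.0 * theta) + 0.5 * theta

    theta_train = np.linspace(0.0, 2.0, 8).reshape(-1, 1)
    y_train = expensive_model(theta_train).ravel()

    kernel = ConstantKernel(1.0) * RBF(length_scale=0.5)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(theta_train, y_train)

    theta_test = np.linspace(0.0, 2.0, 50).reshape(-1, 1)
    mean, std = gp.predict(theta_test, return_std=True)   # prediction + uncertainty
    print("max predictive std:", std.max())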
Huang, Jiangeng. "Sequential learning, large-scale calibration, and uncertainty quantification". Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/91935.
Doctor of Philosophy
With remarkable advances in computing power, complex physical systems today can be simulated comparatively cheaply and to high accuracy through computer experiments. Computer experiments continue to expand the boundaries and drive down the cost of various scientific investigations, including biological, business, engineering, industrial, management, health-related, physical, and social sciences. This dissertation consists of six chapters, exploring statistical methodologies in sequential learning, model calibration, and uncertainty quantification for heteroskedastic computer experiments and large-scale computer experiments. For computer experiments with a changing signal-to-noise ratio, an optimal lookahead-based sequential learning strategy is presented, balancing replication and exploration to facilitate separating the signal from a complex noise structure. In order to effectively extract key information from massive amounts of simulation and make better predictions for the real world, highly accurate and computationally efficient divide-and-conquer calibration methods for large-scale computer models are developed in this dissertation, addressing challenges in both large data size and model fidelity arising from ever larger modern computer experiments. The proposed methodology is applied to calibrate a real computer experiment from the gas and oil industry. This large-scale calibration method is further extended to solve multiple-output calibration problems.
Abdollahzadeh, Asaad. "Adaptive algorithms for history matching and uncertainty quantification". Thesis, Heriot-Watt University, 2014. http://hdl.handle.net/10399/2752.
Doty, Austin. "Nonlinear Uncertainty Quantification, Sensitivity Analysis, and Uncertainty Propagation of a Dynamic Electrical Circuit". University of Dayton / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1355456642.
Kuznetsova, Alexandra Anatolievna. "Hierarchical geological realism in history matching for reliable reservoir uncertainty predictions". Thesis, Heriot-Watt University, 2017. http://hdl.handle.net/10399/3282.
Mantis, George C. "Quantification and propagation of disciplinary uncertainty via Bayesian statistics". Diss., Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/12136.
Phillips, Edward G. "Fast solvers and uncertainty quantification for models of magnetohydrodynamics". Thesis, University of Maryland, College Park, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=3644175.
The magnetohydrodynamics (MHD) model describes the flow of electrically conducting fluids in the presence of magnetic fields. A principal application of MHD is the modeling of plasma physics, ranging from plasma confinement for thermonuclear fusion to astrophysical plasma dynamics. MHD is also used to model the flow of liquid metals, for instance in magnetic pumps, liquid metal blankets in fusion reactor concepts, and aluminum electrolysis. The model consists of a non-self-adjoint, nonlinear system of partial differential equations (PDEs) that couple the Navier-Stokes equations for fluid flow to a reduced set of Maxwell's equations for electromagnetics.
In this dissertation, we consider computational issues arising for the MHD equations. We focus on developing fast computational algorithms for solving the algebraic systems that arise from finite element discretizations of the fully coupled MHD equations. Emphasis is on solvers for the linear systems arising from algorithms such as Newton's method or Picard iteration, with a main goal of developing preconditioners for use with iterative methods for the linearized systems. In particular, we first consider the linear systems arising from an exact penalty finite element formulation of the MHD equations. We then draw on this research to develop solvers for a formulation that includes a Lagrange multiplier within Maxwell's equations. We also consider a simplification of the MHD model: in the MHD kinematics model, the equations are reduced by assuming that the flow behavior of the system is known. In this simpler setting, we allow for epistemic uncertainty to be present. By mathematically modeling this uncertainty with random variables, we investigate its implications on the physical model.
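The kind of preconditioned Krylov solve that such work targets can be sketched on a toy non-symmetric system; the convection-diffusion operator and the generic ILU preconditioner below are assumptions for illustration, not the block preconditioners developed in the dissertation.

    # Minimal sketch: GMRES with an ILU preconditioner for a sparse non-symmetric
    # linear system, the kind of solve arising from linearized coupled PDEs.
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import gmres, spilu, LinearOperator

    n = 200
    # Assumed toy operator: 1D convection-diffusion discretization (non-symmetric).
    diffusion = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
    convection = sp.diags([-0.5, 0.5], [-1, 1], shape=(n, n))
    A = (diffusion + convection).tocsc()
    b = np.ones(n)

    ilu = spilu(A, drop_tol=1e-4)                      # incomplete LU factorization
    M = LinearOperator((n, n), matvec=ilu.solve)       # preconditioner as an operator

    x, info = gmres(A, b, M=M)
    print("converged" if info == 0 else f"GMRES info = {info}",
          "| residual:", np.linalg.norm(b - A @ x))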
Erbas, Demet. "Sampling strategies for uncertainty quantification in oil recovery prediction". Thesis, Heriot-Watt University, 2007. http://hdl.handle.net/10399/70.
Gligorijevic, Djordje. "Predictive Uncertainty Quantification and Explainable Machine Learning in Healthcare". Diss., Temple University Libraries, 2018. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/520057.
Ph.D.
Predictive modeling is an ever-increasingly important part of decision making. Advances in machine learning predictive modeling have spread across many domains, bringing significant improvements in performance and providing unique opportunities for novel discoveries. Notably important domains are medicine and healthcare, which look after people's wellbeing. While they are among the most developed areas of science, with active research, there are many ways they can be improved. In particular, novel tools developed on the basis of machine learning theory have brought benefits across many areas of clinical practice, pushing the boundaries of medical science and directly affecting the wellbeing of millions of patients. Additionally, the healthcare and medicine domains require predictive modeling to anticipate and overcome many obstacles that the future may hold. These kinds of applications involve precise decision-making processes, which require accurate predictions. However, good prediction on its own is often insufficient, and there has been no major focus on developing algorithms with good-quality uncertainty estimates. This thesis therefore aims to provide a variety of ways to learn high-quality uncertainty estimates or to provide interpretability of the models where needed, for the purpose of improving existing tools used in practice and allowing many other tools to be used where uncertainty is the key factor for decision making. The first part of the thesis proposes approaches for learning high-quality uncertainty estimates for both short- and long-term predictions in multi-task learning, developed on top of continuous probabilistic graphical models. In many scenarios, especially for long-term predictions, it may be of great importance for the models to provide a reliability flag in order to be accepted by domain experts. To this end we explored a widely applied structured regression model with the goal of providing meaningful uncertainty estimates on various predictive tasks. Our particular interest is in modeling uncertainty propagation while predicting far into the future. To address this important problem, our approach centers on providing an uncertainty estimate by modeling input features as random variables. This allows modeling uncertainty from noisy inputs. In cases when the model iteratively produces errors, it should propagate uncertainty over the predictive horizon, which may provide invaluable information for decision making based on predictions. In the second part of the thesis we propose novel neural embedding models for learning low-dimensional embeddings of medical concepts, such as diseases and genes, show how they can be interpreted to assess their quality, and show how they can be used to solve many problems in medical and healthcare research. We use EHR data to discover novel relationships between diseases by studying their comorbidities (i.e., co-occurrences in patients). We trained our models on a large-scale EHR database comprising more than 35 million inpatient cases. To confirm the value and potential of the proposed approach we evaluate its effectiveness on a held-out set. Furthermore, for selected diseases we provide a candidate gene list for which disease-gene associations were not studied previously, allowing biomedical researchers to better focus their often very costly lab studies.
We furthermore examine how disease heterogeneity can affect the quality of learned embeddings and propose an approach for learning types of such heterogeneous diseases; in our study we primarily focus on learning types of sepsis. Finally, we evaluate the quality of low-dimensional embeddings on the tasks of predicting hospital quality indicators such as length of stay, total charges, and mortality likelihood, demonstrating their superiority over other approaches. In the third part of the thesis we focus on decision making in the medicine and healthcare domain by developing state-of-the-art deep learning models capable of surpassing human performance while maintaining good interpretability and uncertainty estimates.
Temple University--Theses
Bastos, Bernardo Leopardi Goncalves Barretto. "Uncertainty quantification at risk assessment procedure due contaminated groundwater". Pontifícia Universidade Católica do Rio de Janeiro, 2005. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=8184@1.
Fundação de Apoio à Pesquisa do Estado do Rio de Janeiro
The quantitative human health risk assessment (AqR) due to a contaminated site has become an important tool in environmental management and in the identification of environmental harm, in Brazil and in other countries. The AqR procedures consist of a logical sequence of steps involving legal aspects, toxicological matters, and transport phenomena. In spite of the absence of a single law regulating AqR specifically, environmental law as a whole allows AqR methodologies to be fully applied at governmental and judicial levels. The AqR procedures are based on pharmacokinetic models that quantitatively relate exposure to chemicals to the potential for human harm. Environmental geotechnics studies the fate and transport of contaminants in soil and groundwater. AqR is a complex subject, full of uncertainties and variabilities. The application of the first-order second-moment (FOSM) method is proposed to quantify the uncertainties related to the estimation of the transport parameters used in an analytical model of solute transport in porous media (Domenico). A specific software tool (SeRis) was developed for this purpose and proved to be computationally efficient. The case study example indicated the relative importance of the considered parameters and presented a reasonable total system variance.
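A minimal sketch of the FOSM idea applied to a generic model follows; the three-parameter function, means, and standard deviations are arbitrary assumptions, not the Domenico model or the SeRis implementation of the thesis.

    # Minimal sketch of first-order second-moment (FOSM) propagation: push input
    # variances through a model via finite-difference sensitivities.
    import numpy as np

    def model(x):
        # Assumed toy response, e.g. a concentration-like quantity.
        velocity, dispersivity, decay = x
        return np.exp(-decay / velocity) / np.sqrt(dispersivity)

    x_mean = np.array([0.5, 10.0, 0.05])          # mean parameter values (assumed)
    x_std = np.array([0.1, 2.0, 0.01])            # parameter standard deviations (assumed)

    # Finite-difference sensitivities dF/dx_i at the mean.
    eps = 1e-6
    grad = np.array([
        (model(x_mean + eps * np.eye(3)[i]) - model(x_mean - eps * np.eye(3)[i])) / (2 * eps)
        for i in range(3)
    ])

    # FOSM: Var[F] is approximately sum_i (dF/dx_i)^2 Var[x_i] for independent
    # inputs; the individual terms also rank each parameter's contribution.
    contributions = grad**2 * x_std**2
    print("output mean (approx):", model(x_mean))
    print("output std  (FOSM) :", np.sqrt(contributions.sum()))
    print("variance share by parameter:", contributions / contributions.sum())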
Mohammadi, Ghazi Reza. "Inference and uncertainty quantification for unsupervised structural monitoring problems". Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/115791.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 261-272).
Health monitoring is an essential functionality for smart and sustainable infrastructures that helps improve their safety and life span. A major element of such functionality is statistical inference and decision making, which aims to process the dynamic response of structures in order to localize defects in those systems as well as to quantify the uncertainties associated with such predictions. Accomplishing this task requires dealing with special constraints, in addition to the general challenges of inference problems, which are imposed by the uniqueness and size of civil infrastructures. These constraints are mainly associated with the small size and high dimensionality of the relevant data sets, the low spatial resolution of measurements, and the lack of prior information about the response of structures at all possible damaged states. Additionally, the measured responses at various locations on a structure are statistically dependent due to their connectivity via the structural elements. Ignoring such dependencies may result in inaccurate predictions, usually by blurring the damage localization resolution. In this thesis work, a comprehensive investigation has been carried out on developing appropriate signal processing, inference, and uncertainty quantification techniques with applications to data-driven structural health monitoring (SHM). For signal processing, we have developed a feature extraction scheme that uses nonlinear non-stationary signal decomposition techniques to capture the effect of damage on the dynamic response of structures. We have also developed a general-purpose signal processing method by combining sparsity-based regularization with the singularity expansion method. This method can provide a sparse representation of signals in the complex-frequency plane and hence more robust system identification schemes. For uncertainty quantification and decision making, we have developed three different learning algorithms which are capable of characterizing the statistical dependencies of the relevant random variables in novelty detection inference problems under various constraints related to the quality, size, and dimensionality of data sets. In doing so, we have mainly used statistical graphical models and Markov random fields, optimization methods, kernel two-sample tests, and kernel dependence analysis. The developed methods may be applied to a wide range of problems such as SHM, medical diagnostics, network security, and event detection. We have experimentally evaluated these techniques by applying them to SHM problems for damage localization in various laboratory prototypes as well as a full-scale structure.
by Reza Mohammadi Ghazi.
Ph. D. in Structures and Materials
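As an illustration of the kernel two-sample tests mentioned in the entry above, here is a minimal sketch of a Gaussian-kernel maximum mean discrepancy (MMD) statistic computed on assumed feature vectors; the feature dimension, sample sizes, and the damage-induced shift are invented for the example.

    # Minimal sketch: Gaussian-kernel MMD comparing features from a baseline
    # structural state with features from a new, possibly damaged state; larger
    # values indicate that the two samples differ.
    import numpy as np

    def gaussian_kernel(X, Y, bandwidth=1.0):
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * bandwidth**2))

    def mmd2(X, Y, bandwidth=1.0):
        # Biased estimate of squared MMD between samples X and Y.
        return (gaussian_kernel(X, X, bandwidth).mean()
                + gaussian_kernel(Y, Y, bandwidth).mean()
                - 2 * gaussian_kernel(X, Y, bandwidth).mean())

    rng = np.random.default_rng(3)
    baseline = rng.normal(0.0, 1.0, size=(100, 5))      # assumed healthy-state features
    shifted = rng.normal(0.3, 1.2, size=(100, 5))       # assumed damaged-state features
    print("MMD^2 healthy vs healthy:", mmd2(baseline, rng.normal(0, 1, (100, 5))))
    print("MMD^2 healthy vs damaged:", mmd2(baseline, shifted))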
Dostert, Paul Francis. "Uncertainty quantification using multiscale methods for porous media flows". College Station, Tex.: Texas A&M University, 2007. http://hdl.handle.net/1969.1/ETD-TAMU-2532.
Texto completoNdiaye, Aïssatou. "Uncertainty Quantification of Thermo-acousticinstabilities in gas turbine combustors". Thesis, Montpellier, 2017. http://www.theses.fr/2017MONTS062/document.
Thermoacoustic instabilities result from the interaction between acoustic pressure oscillations and flame heat release rate fluctuations. These combustion instabilities are of particular concern due to their frequent occurrence in modern, low-emission gas turbine engines. Their major undesirable consequence is a reduced time of operation due to large-amplitude oscillations of the flame position and structural vibrations within the combustor. Computational Fluid Dynamics (CFD) has now become a key approach to understand and predict these instabilities at industrial readiness level. Still, predicting this phenomenon remains difficult due to modelling and computational challenges; this is even more true when physical parameters of the modelling process are uncertain, which is always the case in practical situations. Introducing uncertainty quantification for thermoacoustics is the only way to study and control the stability of gas turbine combustors operated under realistic conditions; this is the objective of this work. First, a laboratory-scale combustor (with only one injector and flame) as well as two industrial helicopter engines (with N injectors and flames) are investigated. Calculations based on a Helmholtz solver and a quasi-analytical low-order tool provide suitable estimates of the frequency and modal structures for each geometry. The analysis suggests that the flame response to acoustic perturbations plays the predominant role in the dynamics of the combustor. Accounting for the uncertainties of the flame representation is thus identified as a key step towards a robust stability analysis. Second, the notion of risk factor, that is to say the probability for a particular thermoacoustic mode to be unstable, is introduced in order to provide a more general description of the system than the classical binary (stable/unstable) classification. Monte Carlo and surrogate modelling approaches are then combined to perform an uncertainty quantification analysis of the laboratory-scale combustor with two uncertain parameters (amplitude and time delay of the flame response). It is shown that the use of algebraic surrogate models reduces drastically the number of state computations, thus the computational load, while providing accurate estimates of the modal risk factor. To deal with the curse of dimensionality, a strategy to reduce the number of uncertain parameters is further introduced in order to properly handle the two industrial helicopter engines. The active subspace algorithm used together with a change of variables allows identifying three dominant directions (instead of N initial uncertain parameters) which are sufficient to describe the dynamics of the industrial systems. Combined with appropriate surrogate model construction, this allows computationally efficient uncertainty quantification analyses of complex thermoacoustic systems. Third, the perspective of using the adjoint method for the sensitivity analysis of thermoacoustic systems represented by 3D Helmholtz solvers is examined. The results obtained for 2D and 3D test cases are promising and suggest further exploring the potential of this method on even more complex thermoacoustic problems.
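The risk-factor notion described above can be sketched with a toy Monte Carlo estimate over a surrogate growth-rate model; the algebraic surrogate and the parameter distributions below are assumptions for illustration, not output of a Helmholtz solver.

    # Minimal sketch: estimating a modal "risk factor" -- the probability that a
    # thermoacoustic mode is unstable -- by Monte Carlo over uncertain
    # flame-response parameters.
    import numpy as np

    rng = np.random.default_rng(7)

    def growth_rate(gain, delay, omega=2 * np.pi * 150.0):
        # Assumed surrogate: growth rate set by the flame gain projected on the
        # phase of an n-tau type response at the mode frequency omega.
        return 0.5 * gain * np.cos(omega * delay) - 0.2

    n_samples = 100_000
    gain = rng.uniform(0.8, 1.6, n_samples)              # uncertain interaction index
    delay = rng.normal(2.0e-3, 0.3e-3, n_samples)        # uncertain time delay [s]

    risk_factor = np.mean(growth_rate(gain, delay) > 0.0)
    print(f"risk factor (probability of instability): {risk_factor:.3f}")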
White, Jeremy. "Computer Model Inversion and Uncertainty Quantification in the Geosciences". Scholar Commons, 2014. https://scholarcommons.usf.edu/etd/5329.
Yang, Chao. "On Particle Methods for Uncertainty Quantification in Complex Systems". The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1511967797285962.
Alkhatib, Ali. "Decision making and uncertainty quantification for surfactant-polymer flooding". Thesis, Imperial College London, 2013. http://hdl.handle.net/10044/1/22154.
Nobari, Amir. "Uncertainty quantification of brake squeal instability via surrogate modelling". Thesis, University of Liverpool, 2015. http://livrepository.liverpool.ac.uk/2035339/.
Crevillen Garcia, David. "Uncertainty quantification for flow and transport in porous media". Thesis, University of Nottingham, 2016. http://eprints.nottingham.ac.uk/31084/.
Lonsdale, Jack Henry. "Predictive modelling and uncertainty quantification of UK forest growth". Thesis, University of Edinburgh, 2015. http://hdl.handle.net/1842/16202.
Arnold, Daniel Peter. "Geological parameterisation of petroleum reservoir models for improved uncertainty quantification". Thesis, Heriot-Watt University, 2008. http://hdl.handle.net/10399/2256.