
Dissertations on the topic "Reliable quantification of uncertainty"



Consult the top 50 dissertations for your research on the topic "Reliable quantification of uncertainty".


You can also download the full text of each publication as a PDF and read its abstract online whenever it is available in the metadata.

Explore dissertations on a wide variety of disciplines and organize your bibliography correctly.

1

Elfverson, Daniel. "Multiscale Methods and Uncertainty Quantification". Doctoral thesis, Uppsala universitet, Avdelningen för beräkningsvetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-262354.

Full text
Abstract
In this thesis we consider two great challenges in computer simulations of partial differential equations: multiscale data, varying over multiple scales in space and time, and data uncertainty, due to lack of or inexact measurements. We develop a multiscale method based on a coarse scale correction, using localized fine scale computations. We prove that the error in the solution produced by the multiscale method decays independently of the fine scale variation in the data or the computational domain. We consider the following aspects of multiscale methods: continuous and discontinuous underlying numerical methods, adaptivity, convection-diffusion problems, Petrov-Galerkin formulation, and complex geometries. For uncertainty quantification problems we consider the estimation of p-quantiles and failure probability. We use spatial a posteriori error estimates to develop and improve variance reduction techniques for Monte Carlo methods. We improve standard Monte Carlo methods for computing p-quantiles and multilevel Monte Carlo methods for computing failure probability.
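As a point of reference for the Monte Carlo part of this abstract, the sketch below shows a plain (unimproved) Monte Carlo estimate of a p-quantile and a failure probability for a toy quantity of interest; the function `quantity_of_interest`, the threshold, and the sample size are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantity_of_interest(xi):
    # Stand-in for an expensive PDE solve driven by an uncertain parameter xi.
    return np.exp(0.5 * xi) - 1.0

samples = quantity_of_interest(rng.standard_normal(100_000))

p = 0.95
threshold = 1.5                                      # hypothetical failure threshold
p_quantile = np.quantile(samples, p)                 # plain Monte Carlo p-quantile estimate
failure_probability = np.mean(samples > threshold)   # plain Monte Carlo failure probability

print(f"estimated {p:.0%}-quantile: {p_quantile:.3f}")
print(f"estimated failure probability: {failure_probability:.4f}")
```

The variance reduction described in the abstract would replace this single crude sample set with estimators informed by a posteriori error estimates or by a hierarchy of discretisation levels.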
2

Parkinson, Matthew. "Uncertainty quantification in Radiative Transport". Thesis, University of Bath, 2019. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.767610.

Full text
Abstract
We study how uncertainty in the input data of the Radiative Transport equation (RTE) affects the distribution of (functionals of) its solution (the output data). The RTE is an integro-differential equation, in up to seven independent variables, that models the behaviour of rarefied particles (such as photons and neutrons) in a domain. Its applications include nuclear reactor design, radiation shielding, medical imaging, optical tomography and astrophysics. We focus on the RTE in the context of nuclear reactor physics where, to design and maintain safe reactors, understanding the effects of uncertainty is of great importance. There are many potential sources of uncertainty within a nuclear reactor. These include the geometry of the reactor, the material composition and reactor wear. Here we consider uncertainty in the macroscopic cross-sections ('the coefficients'), representing them as correlated spatial random fields. We wish to estimate the statistics of a problem-specific quantity of interest (under the influence of the given uncertainty in the cross-sections), which is defined as a functional of the scalar flux. This is the forward problem of Uncertainty Quantification. We seek accurate and efficient methods for estimating these statistics. Thus far, the research community studying Uncertainty Quantification in radiative transport has focused on the Polynomial Chaos expansion. However, it is known that the number of terms in the expansion grows exponentially with respect to the number of stochastic dimensions and the order of the expansion, i.e. polynomial chaos suffers from the curse of dimensionality. Instead, we focus our attention on variants of Monte Carlo sampling - studying standard and quasi-Monte Carlo methods, and their multilevel and multi-index variants. We show numerically that the quasi-Monte Carlo rules, and the multilevel variance reduction techniques, give substantial gains over the standard Monte Carlo method for a variety of radiative transport problems. Moreover, we report problems in up to 3600 stochastic dimensions, far beyond the capability of polynomial chaos. A large part of this thesis is focused on a rigorous proof that the multilevel Monte Carlo method is superior to the standard Monte Carlo method, for the RTE in one spatial and one angular dimension with random cross-sections. This is the first rigorous theory of Uncertainty Quantification for transport problems and the first rigorous theory of Uncertainty Quantification for any PDE problem which accounts for a path-dependent stability condition. To achieve this result, we first present an error analysis (including a stability bound on the discretisation parameters) for the combined spatial and angular discretisation of the spatially heterogeneous RTE, which is explicit in the heterogeneous coefficients. We can then extend this result to prove probabilistic bounds on the error, under assumptions on the statistics of the cross-sections and provided the discretisation satisfies the stability condition pathwise. The multilevel Monte Carlo complexity result follows.
Amongst other novel contributions, we: introduce a method which combines a direct and iterative solver to accelerate the computation of the scalar flux, by adaptively choosing the fastest solver based on the given coefficients; numerically test an iterative eigensolver, which uses a single source iteration within each loop of a shifted inverse power iteration; and propose a novel model for (random) heterogeneity in concrete which generates (piecewise) discontinuous coefficients according to the material type, but where the composition of materials is spatially correlated.
3

Carson, J. "Uncertainty quantification in palaeoclimate reconstruction". Thesis, University of Nottingham, 2015. http://eprints.nottingham.ac.uk/29076/.

Full text
Abstract
Studying the dynamics of the palaeoclimate is a challenging problem. Part of the challenge lies in the fact that our understanding must be based on only a single realisation of the climate system. With only one climate history, it is essential that palaeoclimate data are used to their full extent, and that uncertainties arising from both data and modelling are well characterised. This is the motivation behind this thesis, which explores approaches for uncertainty quantification in problems related to palaeoclimate reconstruction. We focus on uncertainty quantification problems for the glacial-interglacial cycle, namely parameter estimation, model comparison, and age estimation of palaeoclimate observations. We develop principled data assimilation schemes that allow us to assimilate palaeoclimate data into phenomenological models of the glacial-interglacial cycle. The statistical and modelling approaches we take in this thesis mean that this amounts to the task of performing Bayesian inference for multivariate stochastic differential equations that are only partially observed. One contribution of this thesis is the synthesis of recent methodological advances in approximate Bayesian computation and particle filter methods. We provide an up-to-date overview that relates the different approaches and provides new insights into their performance. Through simulation studies we compare these approaches using a common benchmark, and in doing so we highlight the relative strengths and weaknesses of each method. There are two main scientific contributions in this thesis. The first is that by using inference methods to jointly perform parameter estimation and model comparison, we demonstrate that the current two-stage practice of first estimating observation times, and then treating them as fixed for subsequent analysis, leads to conclusions that are not robust to the methods used for estimating the observation times. The second main contribution is the development of a novel age model based on a linear sediment accumulation model. By extending the target of the particle filter we are able to jointly perform parameter estimation, model comparison, and observation age estimation. In doing so, we are able to perform palaeoclimate reconstruction using sediment core data that takes age uncertainty in the data into account, thus solving the problem of dating uncertainty highlighted above.
4

Boopathy, Komahan. "Uncertainty Quantification and Optimization Under Uncertainty Using Surrogate Models". University of Dayton / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1398302731.

Full text
5

Cheng, Haiyan. "Uncertainty Quantification and Uncertainty Reduction Techniques for Large-scale Simulations". Diss., Virginia Tech, 2009. http://hdl.handle.net/10919/28444.

Full text
Abstract
Modeling and simulations of large-scale systems are used extensively to not only better understand a natural phenomenon, but also to predict future events. Accurate model results are critical for design optimization and policy making. They can be used effectively to reduce the impact of a natural disaster or even prevent it from happening. In reality, model predictions are often affected by uncertainties in input data and model parameters, and by incomplete knowledge of the underlying physics. A deterministic simulation assumes one set of input conditions, and generates one result without considering uncertainties. It is of great interest to include uncertainty information in the simulation. By "Uncertainty Quantification," we denote the ensemble of techniques used to model probabilistically the uncertainty in model inputs, to propagate it through the system, and to represent the resulting uncertainty in the model result. This added information provides a confidence level about the model forecast. For example, in environmental modeling, the model forecast, together with the quantified uncertainty information, can assist the policy makers in interpreting the simulation results and in making decisions accordingly. Another important goal in modeling and simulation is to improve the model accuracy and to increase the model prediction power. By merging real observation data into the dynamic system through the data assimilation (DA) technique, the overall uncertainty in the model is reduced. With the expansion of human knowledge and the development of modeling tools, simulation size and complexity are growing rapidly. This poses great challenges to uncertainty analysis techniques. Many conventional uncertainty quantification algorithms, such as the straightforward Monte Carlo method, become impractical for large-scale simulations. New algorithms need to be developed in order to quantify and reduce uncertainties in large-scale simulations. This research explores novel uncertainty quantification and reduction techniques that are suitable for large-scale simulations. In the uncertainty quantification part, the non-sampling polynomial chaos (PC) method is investigated. An efficient implementation is proposed to reduce the high computational cost for the linear algebra involved in the PC Galerkin approach applied to stiff systems. A collocation least-squares method is proposed to compute the PC coefficients more efficiently. A novel uncertainty apportionment strategy is proposed to attribute the uncertainty in model results to different uncertainty sources. The apportionment results provide guidance for uncertainty reduction efforts. The uncertainty quantification and source apportionment techniques are implemented in the 3-D Sulfur Transport Eulerian Model (STEM-III) predicting pollutant concentrations in the northeast region of the United States. Numerical results confirm the efficacy of the proposed techniques for large-scale systems and the potential impact for environmental protection policy making. "Uncertainty Reduction" describes the range of systematic techniques used to fuse information from multiple sources in order to increase the confidence one has in model results. Two DA techniques are widely used in current practice: the ensemble Kalman filter (EnKF) and the four-dimensional variational (4D-Var) approach. Each method has its advantages and disadvantages.
By exploring the error reduction directions generated in the 4D-Var optimization process, we propose a hybrid approach to construct the error covariance matrix and to improve the static background error covariance matrix used in current 4D-Var practice. The updated covariance matrix between assimilation windows effectively reduces the root mean square error (RMSE) in the solution. The success of the hybrid covariance updates motivates the hybridization of EnKF and 4D-Var to further reduce uncertainties in the simulation results. Numerical tests show that the hybrid method improves the model accuracy and increases the model prediction quality.
Ph. D.
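The collocation least-squares idea mentioned in this abstract can be illustrated in a few lines: evaluate the model at randomly drawn collocation points and fit the polynomial chaos coefficients by linear least squares. The one-dimensional model, the degree, and the sample size below are illustrative assumptions, not the STEM-III setting.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermevander

def model(xi):
    # Stand-in for the forward model evaluated at a standard-normal input xi.
    return np.sin(xi) + 0.1 * xi**2

rng = np.random.default_rng(1)
xi = rng.standard_normal(200)        # collocation points (oversampled for least squares)
degree = 5
A = hermevander(xi, degree)          # probabilists' Hermite basis He_0..He_5
coeffs, *_ = np.linalg.lstsq(A, model(xi), rcond=None)

# He_k are orthogonal under N(0,1) with E[He_k^2] = k!, so the PC mean and
# variance follow directly from the fitted coefficients.
norms = np.array([math.factorial(k) for k in range(degree + 1)])
mean = coeffs[0]
variance = np.sum(coeffs[1:] ** 2 * norms[1:])
print(f"PC mean ~ {mean:.4f}, PC variance ~ {variance:.4f}")
```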
6

Fiorito, Luca. "Nuclear data uncertainty propagation and uncertainty quantification in nuclear codes". Doctoral thesis, Universite Libre de Bruxelles, 2016. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/238375.

Full text
Abstract
Uncertainties in nuclear model responses must be quantified to define safety limits, minimize costs and define operational conditions in design. Response uncertainties can also be used to provide a feedback on the quality and reliability of parameter evaluations, such as nuclear data. The uncertainties of the predictive model responses stem from several sources, e.g. nuclear data, model approximations, numerical solvers, influence of random variables. It was proved that the largest quantifiable sources of uncertainty in nuclear models, such as neutronics and burnup calculations, are the nuclear data, which are provided as evaluated best estimates and uncertainties/covariances in data libraries. Nuclear data uncertainties and/or covariances must be propagated to the model responses with dedicated uncertainty propagation tools. However, most of the nuclear codes for neutronics and burnup models do not have these capabilities and produce best-estimate results without uncertainties. In this work, the nuclear data uncertainty propagation was concentrated on the SCK•CEN burnup code ALEPH-2 and the Monte Carlo N-Particle code MCNP. Two sensitivity analysis procedures, i.e. FSAP and ASAP, based on linear perturbation theory were implemented in ALEPH-2. These routines can propagate nuclear data uncertainties in pure decay models. ASAP and ALEPH-2 were tested and validated against the decay heat and uncertainty quantification for several fission pulses and for the MYRRHA subcritical system. The decay uncertainty is necessary to define the reliability of the decay heat removal systems and prevent overheating and mechanical failure of the reactor components. It was proved that the propagation of independent fission yield and decay data uncertainties can be carried out with ASAP also in neutron irradiation models. Because of the ASAP limitations, the Monte Carlo sampling solver NUDUNA was used to propagate cross section covariances. The applicability constraints of ASAP drove our studies towards the development of a tool that could propagate the uncertainty of any nuclear datum. In addition, the uncertainty propagation tool was supposed to operate with multiple nuclear codes and systems, including non-linear models. The Monte Carlo sampling code SANDY was developed. SANDY is independent of the predictive model, as it only interacts with the nuclear data in input. Nuclear data are sampled from multivariate probability density functions and propagated through the model according to the Monte Carlo sampling theory. Not only can SANDY propagate nuclear data uncertainties and covariances to the model responses, but it is also able to identify the impact of each uncertainty contributor by decomposing the response variance. SANDY was extensively tested against integral parameters and was used to quantify the neutron multiplication factor uncertainty of the VENUS-F reactor. Further uncertainty propagation studies were carried out for the burnup models of light water reactor benchmarks. Our studies identified fission yields as the largest source of uncertainty for the nuclide density evolution curves of several fission products. However, the current data libraries provide evaluated fission yields and uncertainties devoid of covariance matrices. The lack of fission yield covariance information does not comply with the conservation equations that apply to a fission model, and generates inconsistency in the nuclear data.
In this work, we generated fission yield covariance matrices using a generalised least-square method and a set of physical constraints. The fission yield covariance matrices solve the inconsistency in the nuclear data libraries and reduce the role of the fission yields in the uncertainty quantification of burnup model responses.
Doctorate in Engineering Sciences and Technology
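A minimal sketch of the Monte Carlo sampling idea behind a tool such as SANDY (generic, not the actual SANDY interface): perturbed nuclear data are drawn from a multivariate normal distribution built from best estimates and a covariance matrix, and each sample is pushed through the predictive model. The three-parameter data set, the covariance values, and the response function are hypothetical.

```python
import numpy as np

# Hypothetical best-estimate nuclear data and covariance matrix (toy values).
best_estimate = np.array([1.20, 0.35, 2.10])
covariance = np.array([[4.0e-4, 1.0e-4, 0.0],
                       [1.0e-4, 9.0e-4, 0.0],
                       [0.0,    0.0,    1.0e-3]])

def response(data):
    # Stand-in for a neutronics or burnup code run with the perturbed data.
    return data[0] * data[1] + 0.5 * data[2]

rng = np.random.default_rng(2)
samples = rng.multivariate_normal(best_estimate, covariance, size=20_000)
outputs = np.array([response(s) for s in samples])

print(f"response mean: {outputs.mean():.4f}")
print(f"response standard deviation: {outputs.std(ddof=1):.4f}")
```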
7

Alvarado, Martin Guillermo. "Quantification of uncertainty during history matching". Texas A&M University, 2003. http://hdl.handle.net/1969/463.

Full text
8

Jimenez, Edwin. "Uncertainty quantification of nonlinear stochastic phenomena". Tallahassee, Florida : Florida State University, 2009. http://etd.lib.fsu.edu/theses/available/etd-11092009-161351/.

Full text
Abstract
Thesis (Ph. D.)--Florida State University, 2009.
Advisor: M.Y. Hussaini, Florida State University, College of Arts and Sciences, Dept. of Mathematics. Title and description from dissertation home page (viewed on Mar. 16, 2010). Document formatted into pages; contains xii, 113 pages. Includes bibliographical references.
9

Kalmikov, Alexander G. "Uncertainty Quantification in ocean state estimation". Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/79291.

Full text
Abstract
Thesis (Ph. D.)--Joint Program in Oceanography/Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Dept. of Mechanical Engineering; and the Woods Hole Oceanographic Institution), 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 158-160).
Quantifying uncertainty and error bounds is a key outstanding challenge in ocean state estimation and climate research. It is particularly difficult due to the large dimensionality of this nonlinear estimation problem and the number of uncertain variables involved. The "Estimating the Circulation and Climate of the Oceans" (ECCO) consortium has developed a scalable system for dynamically consistent estimation of global time-evolving ocean state by optimal combination of ocean general circulation model (GCM) with diverse ocean observations. The estimation system is based on the "adjoint method" solution of an unconstrained least-squares optimization problem formulated with the method of Lagrange multipliers for fitting the dynamical ocean model to observations. The dynamical consistency requirement of ocean state estimation necessitates this approach over sequential data assimilation and reanalysis smoothing techniques. In addition, it is computationally advantageous because calculation and storage of large covariance matrices is not required. However, this is also a drawback of the adjoint method, which lacks a native formalism for error propagation and quantification of assimilated uncertainty. The objective of this dissertation is to resolve that limitation by developing a feasible computational methodology for uncertainty analysis in dynamically consistent state estimation, applicable to the large dimensionality of global ocean models. Hessian (second derivative-based) methodology is developed for Uncertainty Quantification (UQ) in large-scale ocean state estimation, extending the gradient-based adjoint method to employ the second order geometry information of the model-data misfit function in a high-dimensional control space. Large error covariance matrices are evaluated by inverting the Hessian matrix with the developed scalable matrix-free numerical linear algebra algorithms. Hessian-vector product and Jacobian derivative codes of the MIT general circulation model (MITgcm) are generated by means of algorithmic differentiation (AD). Computational complexity of the Hessian code is reduced by tangent linear differentiation of the adjoint code, which preserves the speedup of adjoint checkpointing schemes in the second derivative calculation. A Lanczos algorithm is applied for extracting the leading rank eigenvectors and eigenvalues of the Hessian matrix. The eigenvectors represent the constrained uncertainty patterns. The inverse eigenvalues are the corresponding uncertainties. The dimensionality of UQ calculations is reduced by eliminating the uncertainty null-space unconstrained by the supplied observations. Inverse and forward uncertainty propagation schemes are designed for assimilating observation and control variable uncertainties, and for projecting these uncertainties onto oceanographic target quantities. Two versions of these schemes are developed: one evaluates reduction of prior uncertainties, while another does not require prior assumptions. The analysis of uncertainty propagation in the ocean model is time-resolving. It captures the dynamics of uncertainty evolution and reveals transient and stationary uncertainty regimes. The system is applied to quantifying uncertainties of Antarctic Circumpolar Current (ACC) transport in a global barotropic configuration of the MITgcm. The model is constrained by synthetic observations of sea surface height and velocities. The control space consists of two-dimensional maps of initial and boundary conditions and model parameters. 
The size of the Hessian matrix is O(10^10) elements, which would require O(60 GB) of uncompressed storage. It is demonstrated how the choice of observations and their geographic coverage determines the reduction in uncertainties of the estimated transport. The system also yields information on how well the control fields are constrained by the observations. The effects of controls uncertainty reduction due to decrease of diagonal covariance terms are compared to dynamical coupling of controls through off-diagonal covariance terms. The correlations of controls introduced by observation uncertainty assimilation are found to dominate the reduction of uncertainty of transport. An idealized analytical model of ACC guides a detailed time-resolving understanding of uncertainty dynamics. Keywords: Adjoint model uncertainty, sensitivity, posterior error reduction, reduced rank Hessian matrix, Automatic Differentiation, ocean state estimation, barotropic model, Drake Passage transport.
by Alexander G. Kalmikov.
Ph.D.
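The matrix-free Hessian eigendecomposition described in this abstract can be mimicked with SciPy's Lanczos-based eigensolver: only a Hessian-vector product is needed, never the assembled matrix. The product below (prior identity plus a low-rank data-misfit term) and the problem size are illustrative stand-ins for the AD-generated MITgcm code.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

n = 500                                   # toy control-space dimension
rng = np.random.default_rng(3)
B = rng.standard_normal((n, 20))

def hessian_vector_product(v):
    # Stand-in for an algorithmic-differentiation Hessian-vector product of the
    # model-data misfit: prior (identity) plus a low-rank data term.
    return v + B @ (B.T @ v)

H = LinearOperator((n, n), matvec=hessian_vector_product)
eigenvalues, eigenvectors = eigsh(H, k=10, which="LM")   # leading Lanczos eigenpairs

# Inverse eigenvalues play the role of the constrained (posterior) uncertainties
# along the corresponding eigenvector patterns.
posterior_variances = 1.0 / eigenvalues
print(eigenvalues.round(2))
print(posterior_variances.round(4))
```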
10

Roy, Pamphile. "Uncertainty quantification in high dimensional problems". Thesis, Toulouse, INPT, 2019. http://www.theses.fr/2019INPT0038.

Full text
Abstract
Uncertainties are predominant in the world that we know. Referring to a single nominal value is therefore too restrictive, especially when it comes to complex systems. Understanding the nature and the impact of these uncertainties has become an important aspect of engineering work. From a societal point of view, uncertainties play a role in decision-making. Following the European Commission's Better Regulation Guideline, impact assessments are now advised to take uncertainties into account. In order to understand these uncertainties, the mathematical field of uncertainty quantification (UQ) has been formed. UQ encompasses a large palette of statistical tools and seeks to link a set of input perturbations on a system (design of experiments) to a quantity of interest. The purpose of this work is to propose improvements on various methodological aspects of uncertainty quantification applied to costly numerical simulations. This is achieved by using existing methods with a multi-strategy approach but also by creating new methods. In this context, novel sampling and resampling approaches have been developed to better capture the variability of the physical phenomenon when dealing with a high number of perturbed inputs. These allow the number of simulations required to describe the system to be reduced. Moreover, novel methods are proposed to visualize uncertainties when dealing with either a high-dimensional input parameter space or a high-dimensional quantity of interest. The developed methods can be used in various fields such as hydraulic modelling and aerodynamic modelling. Their capabilities are demonstrated on realistic systems using well-established computational fluid dynamics tools. Lastly, they are not limited to numerical simulation and can equally be used on real experimental set-ups.
11

Timmins, Benjamin H. "Automatic Particle Image Velocimetry Uncertainty Quantification". DigitalCommons@USU, 2011. https://digitalcommons.usu.edu/etd/884.

Full text
Abstract
The uncertainty of any measurement is the interval in which one believes the actual error lies. Particle Image Velocimetry (PIV) measurement error depends on the PIV algorithm used, a wide range of user inputs, flow characteristics, and the experimental setup. Since these factors vary in time and space, they lead to nonuniform error throughout the flow field. As such, a universal PIV uncertainty estimate is not adequate and can be misleading. This is of particular interest when PIV data are used for comparison with computational or experimental data. A method to estimate the uncertainty due to the PIV calculation of each individual velocity measurement is presented. The relationship between four error sources and their contribution to PIV error is first determined. The sources, or parameters, considered are particle image diameter, particle density, particle displacement, and velocity gradient, although this choice in parameters is arbitrary and may not be complete. This information provides a four-dimensional "uncertainty surface" for the PIV algorithm used. After PIV processing, our code "measures" the value of each of these parameters and estimates the velocity uncertainty for each vector in the flow field. The reliability of the methodology is validated using known flow fields so the actual error can be determined. Analysis shows that, for most flows, the uncertainty distribution obtained using this method fits the confidence interval. The method is general and can be adapted to any PIV analysis.
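A stripped-down illustration of the "uncertainty surface" idea: tabulate uncertainty against the measured parameters once, then interpolate that table for every vector in the flow field. For brevity only two of the four parameters are used, and the tabulated values are made up; the real surface comes from processing fields with known displacements.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical pre-computed uncertainty table over two of the four parameters:
# particle image diameter and particle displacement (both in pixels).
diameters = np.linspace(1.0, 5.0, 9)
displacements = np.linspace(0.0, 8.0, 17)
D, S = np.meshgrid(diameters, displacements, indexing="ij")
uncertainty_table = 0.02 + 0.01 * np.abs(S - 4.0) + 0.005 * (D - 3.0) ** 2

surface = RegularGridInterpolator((diameters, displacements), uncertainty_table)

# After PIV processing, each vector's measured parameters are mapped to its own
# velocity uncertainty estimate.
measured_parameters = np.array([[2.4, 3.1],
                                [3.8, 6.5]])      # (diameter, displacement) per vector
print(surface(measured_parameters))               # per-vector uncertainty [px]
```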
12

Malenova, Gabriela. "Uncertainty quantification for high frequency waves". Licentiate thesis, KTH, Numerisk analys, NA, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-186287.

Full text
Abstract
We consider high frequency waves satisfying the scalar wave equation with highly oscillatory initial data. The speed of propagation of the medium as well as the phase and amplitude of the initial data is assumed to be uncertain, described by a finite number of independent random variables with known probability distributions. We introduce quantities of interest (QoIs) as local averages of the squared modulus of the wave solution, or its derivatives. The regularity of these QoIs in terms of the input random parameters and the wavelength is important for uncertainty quantification methods based on interpolation in the stochastic space. In particular, the size of the derivatives should be bounded and independent of the wavelength. In the contributed papers, we show that the QoIs indeed have this property, despite the highly oscillatory character of the waves.


13

Cousins, William Bryan. "Boundary Conditions and Uncertainty Quantification for Hemodynamics". Thesis, North Carolina State University, 2013. http://pqdtopen.proquest.com/#viewpdf?dispub=3575896.

Full text
Abstract

We address outflow boundary conditions for blood flow modeling. In particular, we consider a variety of fundamental issues in the structured tree boundary condition. We provide a theoretical analysis of the numerical implementation of the structured tree, showing that it is sensible but must be performed with great care. We also perform analytical and numerical studies on the sensitivity of model output on the structured tree's defining geometrical parameters. The most important component of this dissertation is the derivation of the new, generalized structured tree boundary condition. Unlike the original structured tree condition, the generalized structured tree does not contain a temporal periodicity assumption and is thus applicable to a much broader class of blood flow simulations. We describe a numerical implementation of this new boundary condition and show that the original structured tree is in fact a rough approximation of the new, generalized condition.

We also investigate parameter selection for outflow boundary conditions, and attempt to determine a set of structured tree parameters that gives reasonable simulation results without requiring any calibration. We are successful in doing so for a simulation of the systemic arterial tree, but the same parameter set yields physiologically unreasonable results in simulations of the Circle of Willis. Finally, we investigate the extension of recently introduced PDF methods to smooth solutions of systems of hyperbolic balance laws subject to uncertain inputs. These methods, currently available only for scalar equations, would provide a powerful tool for quantifying uncertainty in predictions of blood flow and other phenomena governed by first order hyperbolic systems.

14

Teckentrup, Aretha Leonore. "Multilevel Monte Carlo methods and uncertainty quantification". Thesis, University of Bath, 2013. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.577753.

Full text
Abstract
We consider the application of multilevel Monte Carlo methods to elliptic partial differential equations with random coefficients. Such equations arise, for example, in stochastic groundwater flow modelling. Models for random coefficients frequently used in these applications, such as log-normal random fields with exponential covariance, lack uniform coercivity and boundedness with respect to the random parameter and have only limited spatial regularity. To give a rigorous bound on the cost of the multilevel Monte Carlo estimator to reach a desired accuracy, one needs to quantify the bias of the estimator. The bias, in this case, is the spatial discretisation error in the numerical solution of the partial differential equation. This thesis is concerned with establishing bounds on this discretisation error in the practically relevant and technically demanding case of coefficients which are not uniformly coercive or bounded with respect to the random parameter. Under mild assumptions on the regularity of the coefficient, we establish new results on the regularity of the solution for a variety of model problems. The most general case is that of a coefficient which is piecewise Hölder continuous with respect to a random partitioning of the domain. The established regularity of the solution is then combined with tools from classical discretisation error analysis to provide a full convergence analysis of the bias of the multilevel estimator for finite element and finite volume spatial discretisations. Our analysis covers as quantities of interest several spatial norms of the solution, as well as point evaluations of the solution and its gradient and any continuously Fréchet differentiable functional. Lastly, we extend the idea of multilevel Monte Carlo estimators to the framework of Markov chain Monte Carlo simulations. We develop a new multilevel version of a Metropolis-Hastings algorithm, and provide a full convergence analysis.
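For orientation, a generic multilevel Monte Carlo estimator of E[Q] is sketched below: many cheap coarse-level samples absorb most of the variance, and only a few samples of the expensive fine-level corrections are needed. The level "solver" and the sample allocation are illustrative assumptions; in the thesis the levels are finite element or finite volume discretisations of the random-coefficient PDE.

```python
import numpy as np

rng = np.random.default_rng(4)

def solve(level, xi):
    # Stand-in for the level-`level` discrete solution driven by the random input xi;
    # the discretisation bias decays like 2**(-level).
    return np.exp(xi) + 2.0 ** (-level) * (1.0 + 0.1 * xi)

def mlmc_estimate(samples_per_level):
    estimate = 0.0
    for level, n in enumerate(samples_per_level):
        xi = rng.standard_normal(n)            # same inputs on both levels (coupling)
        if level == 0:
            correction = solve(0, xi)
        else:
            correction = solve(level, xi) - solve(level - 1, xi)
        estimate += correction.mean()          # telescoping sum of E[Q_l - Q_{l-1}]
    return estimate

# Many cheap coarse samples, few expensive fine-level corrections.
print(mlmc_estimate([20_000, 4_000, 800, 160]))
```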
15

Strandberg, Rickard, and Johan Låås. "Uncertainty quantification using high-dimensional numerical integration". Thesis, KTH, Skolan för teknikvetenskap (SCI), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-195701.

Full text
Abstract
We consider quantities that are uncertain because they depend on one or many uncertain parameters. If the uncertain parameters are stochastic, the expected value of the quantity can be obtained by integrating the quantity over all the possible values these parameters can take and dividing the result by the volume of the parameter space. Each additional uncertain parameter has to be integrated over; if the parameters are many, this gives rise to high-dimensional integrals. This report offers an overview of the theory underpinning four numerical methods used to compute high-dimensional integrals: Newton-Cotes, Monte Carlo, Quasi-Monte Carlo, and sparse grid. The theory is then applied to the problem of computing the impact coordinates of a thrown ball by introducing uncertain parameters such as wind velocities into Newton's equations of motion.
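The thrown-ball example lends itself to a compact quasi-Monte Carlo sketch: the uncertain wind is mapped from a Sobol' sequence and the expected impact point is the average over the resulting trajectories. The drag-free ballistic model, the wind coupling, and the parameter range are simplifying assumptions for illustration.

```python
import numpy as np
from scipy.stats import qmc

g = 9.81
v0, angle = 30.0, np.radians(40.0)        # launch speed [m/s] and elevation angle

def impact_x(wind_x):
    # Toy model: horizontal wind simply adds to the horizontal velocity.
    flight_time = 2.0 * v0 * np.sin(angle) / g
    return (v0 * np.cos(angle) + wind_x) * flight_time

# Map a scrambled Sobol' sequence to the assumed wind range [-5, 5] m/s.
sobol = qmc.Sobol(d=1, scramble=True, seed=5)
wind_samples = qmc.scale(sobol.random_base2(m=12), -5.0, 5.0).ravel()   # 4096 points

print(f"expected impact distance ~ {impact_x(wind_samples).mean():.2f} m")
```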
16

El-Shanawany, Ashraf Ben Mamdouh. "Quantification of uncertainty in probabilistic safety analysis". Thesis, Imperial College London, 2016. http://hdl.handle.net/10044/1/48104.

Full text
Abstract
This thesis develops methods for quantification and interpretation of uncertainty in probabilistic safety analysis, focussing on fault trees. The output of a fault tree analysis is, usually, the probability of occurrence of an undesirable event (top event) calculated using the failure probabilities of identified basic events. The standard method for evaluating the uncertainty distribution is by Monte Carlo simulation, but this is a computationally intensive approach to uncertainty estimation and does not, readily, reveal the dominant reasons for the uncertainty. A closed form approximation for the fault tree top event uncertainty distribution, for models using only lognormal distributions for model inputs, is developed in this thesis. Its output is compared with the output from two sampling based approximation methods; standard Monte Carlo analysis, and Wilks’ method, which is based on order statistics using small sample sizes. Wilks’ method can be used to provide an upper bound for the percentiles of top event distribution, and is computationally cheap. The combination of the lognormal approximation and Wilks’ Method can be used to give, respectively, the overall shape and high confidence on particular percentiles of interest. This is an attractive, practical option for evaluation of uncertainty in fault trees and, more generally, uncertainty in certain multilinear models. A new practical method of ranking uncertainty contributors in lognormal models is developed which can be evaluated in closed form, based on cutset uncertainty. The method is demonstrated via examples, including a simple fault tree model and a model which is the size of a commercial PSA model for a nuclear power plant. Finally, quantification of “hidden uncertainties” is considered; hidden uncertainties are those which are not typically considered in PSA models, but may contribute considerable uncertainty to the overall results if included. A specific example of the inclusion of a missing uncertainty is explained in detail, and the effects on PSA quantification are considered. It is demonstrated that the effect on the PSA results can be significant, potentially permuting the order of the most important cutsets, which is of practical concern for the interpretation of PSA models. Finally, suggestions are made for the identification and inclusion of further hidden uncertainties.
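The order-statistics argument behind Wilks' method fits in a few lines: with n independent samples of the top-event probability, the largest sample exceeds the true 95th percentile with confidence 1 - 0.95^n, so n = 59 already gives 95%/95% coverage. The lognormal toy model below is an illustrative stand-in for a real fault tree quantification.

```python
import math
import numpy as np

confidence, percentile = 0.95, 0.95
# Smallest n with 1 - percentile**n >= confidence (classic one-sided Wilks bound).
n = math.ceil(math.log(1.0 - confidence) / math.log(percentile))
print(f"Wilks sample size: {n}")      # 59

rng = np.random.default_rng(6)
# Hypothetical top-event probability model: product of lognormal basic-event terms.
top_event = np.exp(rng.normal(-7.0, 0.4, size=n) + rng.normal(-1.0, 0.3, size=n))
upper_bound = top_event.max()         # 95%-confidence upper bound on the 95th percentile
print(f"Wilks upper bound: {upper_bound:.2e}")
```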
17

Lam, Xuan-Binh. "Uncertainty quantification for stochastic subspace identification methods". Rennes 1, 2011. http://www.theses.fr/2011REN1S133.

Full text
Abstract
In Operational Modal Analysis, the modal parameters (natural frequencies, damping ratios, and mode shapes) obtained from Stochastic Subspace Identification (SSI) of a structure are afflicted with statistical uncertainty. For evaluating the quality of the obtained results it is essential to know the appropriate uncertainty bounds of these terms. In this thesis, algorithms are presented that automatically compute the uncertainty bounds of modal parameters obtained from SSI of a structure based on vibration measurements. With these new algorithms, the uncertainty bounds of the modal parameters of some relevant industrial examples are computed. To quantify the statistical uncertainty of the obtained modal parameters, the statistical uncertainty in the data can be evaluated and propagated to the system matrices and, thus, to the modal parameters. In the uncertainty quantification algorithm, which is a perturbation-based method, it has been shown how uncertainty bounds of modal parameters can be determined from the covariances of the system matrices, which are obtained from some covariance of the data and the covariances of subspace matrices. In this thesis, several results are derived. Firstly, a novel and more realistic scheme for the uncertainty calculation of the mode shape is presented: the mode shape is normalized by the phase angle of the component having the maximal absolute value instead of by one of its components. Secondly, the uncertainty quantification is derived and developed for several identification methods, the first of them being covariance-driven and data-driven SSI. The thesis also covers the Eigensystem Realization Algorithm (ERA), a class of identification methods, and its uncertainty quantification scheme. This ERA approach is introduced in conjunction with the singular value decomposition to derive the basic formulation of a minimum-order realization. Besides, the thesis proposes efficient algorithms to estimate the system matrices at multiple model orders, and the uncertainty quantification is also derived for this new multi-order SSI method. The last two parts of the thesis address the uncertainty of the multi-setup SSI algorithm and of recursive algorithms. In summary, subspace algorithms are efficient tools for vibration analysis, fitting a model to input/output or output-only measurements taken from a system. However, uncertainty quantification for SSI was missing for a long time, and it is a very important feature for the credibility of modal analysis in practice.
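The perturbation-based propagation used in such algorithms boils down to a first-order "sandwich" rule: if a modal parameter f depends on identified quantities theta with covariance Sigma, then cov(f) is approximately J Sigma J^T with J the sensitivity of f. The sketch below applies this rule to a natural-frequency/damping pair derived from an identified continuous-time eigenvalue; the numbers are hypothetical, not from the industrial examples.

```python
import numpy as np

def modal_parameters(lam_re, lam_im):
    # Natural frequency [Hz] and damping ratio from a continuous-time eigenvalue.
    magnitude = np.hypot(lam_re, lam_im)
    return np.array([magnitude / (2.0 * np.pi), -lam_re / magnitude])

# Hypothetical identified eigenvalue and covariance of its real/imaginary parts.
lam = np.array([-0.60, 47.1])
sigma_lam = np.array([[2.5e-3, 1.0e-4],
                      [1.0e-4, 4.0e-3]])

# Numerical sensitivity J = d(modal parameters)/d(eigenvalue), then Sigma_f = J Sigma J^T.
eps = 1e-6
J = np.column_stack([
    (modal_parameters(lam[0] + eps, lam[1]) - modal_parameters(lam[0], lam[1])) / eps,
    (modal_parameters(lam[0], lam[1] + eps) - modal_parameters(lam[0], lam[1])) / eps,
])
sigma_f = J @ sigma_lam @ J.T

freq_std, damping_std = np.sqrt(np.diag(sigma_f))
print(f"frequency: {modal_parameters(*lam)[0]:.3f} Hz +/- {freq_std:.4f}")
print(f"damping ratio: {modal_parameters(*lam)[1]:.4f} +/- {damping_std:.5f}")
```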
18

Fadikar, Arindam. "Stochastic Computer Model Calibration and Uncertainty Quantification". Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/91985.

Full text
Abstract
This dissertation presents novel methodologies in the field of stochastic computer model calibration and uncertainty quantification. Simulation models are widely used in studying physical systems, which are often represented by a set of mathematical equations. Inference on the true physical system (unobserved or partially observed) is drawn based on the observations from the corresponding computer simulation model. These computer models are calibrated based on limited ground truth observations in order to produce realistic predictions and associated uncertainties. A stochastic computer model differs from a traditional computer model in the sense that repeated executions result in different outcomes from a stochastic simulation. This additional uncertainty in the simulation model needs to be handled accordingly in any calibration set-up. A Gaussian process (GP) emulator replaces the actual computer simulation when it is expensive to run and the budget is limited. However, a traditional GP interpolator models the mean and/or variance of the simulation output as a function of the input. For a simulation where the marginal Gaussianity assumption is not appropriate, it does not suffice to emulate only the mean and/or variance. We present two different approaches addressing the non-Gaussian behavior of an emulator, by (1) incorporating quantile regression in GP for multivariate output, (2) approximating using a finite mixture of Gaussians. These emulators are also used to calibrate and make forward predictions in the context of an agent-based disease model which models the Ebola epidemic outbreak in 2014 in West Africa. The third approach employs a sequential scheme which periodically updates the uncertainty in the computer model input as data become available in an online fashion. Unlike the other two methods, which use an emulator in place of the actual simulation, the sequential approach relies on repeated runs of the actual, potentially expensive simulation.
Doctor of Philosophy
Mathematical models are versatile and often provide accurate descriptions of physical events. Scientific models are used to study such events in order to gain understanding of the true underlying system. These models are often complex in nature and require advanced algorithms to solve their governing equations. Outputs from these models depend on external information (also called model input) supplied by the user. Model inputs may or may not have a physical meaning, and can sometimes be specific only to the scientific model. More often than not, optimal values of these inputs are unknown and need to be estimated from a few actual observations. This process is known as the inverse problem, i.e. inferring the input from the output. The inverse problem becomes challenging when the mathematical model is stochastic in nature, i.e., multiple executions of the model result in different outcomes. In this dissertation, three methodologies are proposed that address the calibration and prediction of a stochastic disease simulation model which simulates contagion of an infectious disease through human-human contact. The motivating examples are taken from the Ebola epidemic in West Africa in 2014 and seasonal flu in New York City in the USA.
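As background for the emulation discussed above, the sketch below fits a standard Gaussian process emulator to a handful of runs of a cheap stand-in simulator using scikit-learn; this mean-and-variance GP is the baseline whose Gaussian assumptions the dissertation's quantile-regression and mixture approaches relax. The simulator, design points, and kernel settings are illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(7)

def simulator(x):
    # Stand-in for a stochastic simulation: smooth trend plus replication noise.
    return np.sin(3.0 * x) + 0.1 * rng.standard_normal(x.shape)

X_train = np.linspace(0.0, 2.0, 15).reshape(-1, 1)     # design of training runs
y_train = simulator(X_train.ravel())

kernel = 1.0 * RBF(length_scale=0.5) + WhiteKernel(noise_level=0.01)
emulator = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)

X_new = np.array([[0.7], [1.6]])
mean, std = emulator.predict(X_new, return_std=True)   # emulator mean and uncertainty
print(mean.round(3), std.round(3))
```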
19

Hagues, Andrew W. "Uncertainty quantification for problems in radionuclide transport". Thesis, Imperial College London, 2011. http://hdl.handle.net/10044/1/9088.

Full text
Abstract
The field of radionuclide transport has long recognised the stochastic nature of the problems encountered. Many parameters that are used in computational models are very difficult, if not impossible, to measure with any great degree of confidence. For example, bedrock properties can only be measured at a few discrete points; the properties between these points may be inferred or estimated using experiments, but it is difficult to achieve any high levels of confidence. This is a major problem when many countries around the world are considering deep geologic repositories as a disposal option for long-lived nuclear waste but require a high degree of confidence that any release of radioactive material will not pose a risk to future populations. In this thesis we apply Polynomial Chaos methods to a model of the biosphere that is similar to those used by many countries worldwide to assess exposure pathways for humans and associated dose rates. We also apply the Spectral-Stochastic Finite Element Method to the problem of contaminated fluid flow in a porous medium. For this problem we use the Multi-Element generalized Polynomial Chaos method to discretise the random dimensions in a manner similar to the well known Finite Element Method. The stochastic discretisation is then refined adaptively to mitigate the build-up of errors over the solution times. It was found that these methods have the potential to provide much improved estimates for radionuclide transport problems. However, further development is needed in order to obtain the necessary efficiency that would be required to solve industrial problems.
20

Pettersson, Per. "Uncertainty Quantification and Numerical Methods for Conservation Laws". Doctoral thesis, Uppsala universitet, Avdelningen för beräkningsvetenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-188348.

Full text
Abstract
Conservation laws with uncertain initial and boundary conditions are approximated using a generalized polynomial chaos expansion approach where the solution is represented as a generalized Fourier series of stochastic basis functions, e.g. orthogonal polynomials or wavelets. The stochastic Galerkin method is used to project the governing partial differential equation onto the stochastic basis functions to obtain an extended deterministic system. The stochastic Galerkin and collocation methods are used to solve an advection-diffusion equation with uncertain viscosity. We investigate well-posedness, monotonicity and stability for the stochastic Galerkin system. High-order summation-by-parts operators and weak imposition of boundary conditions are used to prove stability. We investigate the impact of the total spatial operator on the convergence to steady-state.  Next we apply the stochastic Galerkin method to Burgers' equation with uncertain boundary conditions. An analysis of the truncated polynomial chaos system presents a qualitative description of the development of the solution over time. An analytical solution is derived and the true polynomial chaos coefficients are shown to be smooth, while the corresponding coefficients of the truncated stochastic Galerkin formulation are shown to be discontinuous. We discuss the problematic implications of the lack of known boundary data and possible ways of imposing stable and accurate boundary conditions. We present a new fully intrusive method for the Euler equations subject to uncertainty based on a Roe variable transformation. The Roe formulation saves computational cost compared to the formulation based on expansion of conservative variables. Moreover, it is more robust and can handle cases of supersonic flow, for which the conservative variable formulation fails to produce a bounded solution. A multiwavelet basis that can handle  discontinuities in a robust way is used. Finally, we investigate a two-phase flow problem. Based on regularity analysis of the generalized polynomial chaos coefficients, we present a hybrid method where solution regions of varying smoothness are coupled weakly through interfaces. In this way, we couple smooth solutions solved with high-order finite difference methods with non-smooth solutions solved for with shock-capturing methods.
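For readers unfamiliar with the terms, the expansion and projection referred to above can be written compactly; this is the generic generalized polynomial chaos / stochastic Galerkin formulation under standard orthogonality assumptions, not a formula specific to this thesis.

```latex
% Generalized polynomial chaos expansion of the stochastic solution:
u(x,t,\xi) \;\approx\; \sum_{k=0}^{P} u_k(x,t)\,\psi_k(\xi),
\qquad
\langle \psi_j \psi_k \rangle \;=\; \int \psi_j(\xi)\,\psi_k(\xi)\,\rho(\xi)\,d\xi \;=\; \gamma_k\,\delta_{jk}.

% Stochastic Galerkin projection of a conservation law u_t + f(u)_x = 0
% onto each basis function yields the extended deterministic system:
\frac{\partial u_k}{\partial t}
  \;+\; \frac{1}{\gamma_k}\,\frac{\partial}{\partial x}
        \Big\langle f\Big(\sum_{j=0}^{P} u_j\,\psi_j\Big)\,\psi_k \Big\rangle \;=\; 0,
\qquad k = 0,\dots,P.
```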
21

Kraipeerapun, Pawalai. "Neural network classification based on quantification of uncertainty". Murdoch University, 2009. http://wwwlib.murdoch.edu.au/adt/browse/view/adt-MU20090526.100525.

Full text
Abstract
This thesis deals with feedforward backpropagation neural networks and interval neutrosophic sets for binary and multiclass classification problems. Neural networks are used to predict “true” and “false” output values. These results, together with the uncertainty of type error and vagueness arising in the prediction, are then represented in the form of interval neutrosophic sets. Each element in an interval neutrosophic set consists of three membership values: truth, indeterminacy, and falsity. These three membership values are then used in the classification process. For binary classification, a pair of neural networks is first applied in order to predict the degrees of truth and false membership values. Subsequently, a bagging technique is applied to an ensemble of pairs of neural networks in order to improve the performance. For multiclass classification, two basic multiclass classification methods are proposed: a pair of neural networks with multiple outputs, and multiple pairs of binary neural networks. A number of aggregation techniques are proposed in this thesis. The difference between each pair of the truth and false membership values determines the vagueness value. Errors occurring in the prediction are estimated using an interpolation technique. Both vagueness and error then form the indeterminacy membership. Two- and three-dimensional visualizations of the three membership values are also presented. Ten data sets obtained from the UCI machine learning repository are used to evaluate the proposed approaches. The approaches are also applied to two real-world problems: mineral prospectivity prediction and lithofacies classification.
22

Kraipeerapun, Pawalai. "Neural network classification based on quantification of uncertainty". PhD thesis, Murdoch University, 2009. http://researchrepository.murdoch.edu.au/699/.

Full text
Abstract
This thesis deals with feedforward backpropagation neural networks and interval neutrosophic sets for binary and multiclass classification problems. Neural networks are used to predict “true” and “false” output values. These results, together with the uncertainty of type error and vagueness arising in the prediction, are then represented in the form of interval neutrosophic sets. Each element in an interval neutrosophic set consists of three membership values: truth, indeterminacy, and falsity. These three membership values are then used in the classification process. For binary classification, a pair of neural networks is first applied in order to predict the degrees of truth and false membership values. Subsequently, a bagging technique is applied to an ensemble of pairs of neural networks in order to improve the performance. For multiclass classification, two basic multiclass classification methods are proposed: a pair of neural networks with multiple outputs, and multiple pairs of binary neural networks. A number of aggregation techniques are proposed in this thesis. The difference between each pair of the truth and false membership values determines the vagueness value. Errors occurring in the prediction are estimated using an interpolation technique. Both vagueness and error then form the indeterminacy membership. Two- and three-dimensional visualizations of the three membership values are also presented. Ten data sets obtained from the UCI machine learning repository are used to evaluate the proposed approaches. The approaches are also applied to two real-world problems: mineral prospectivity prediction and lithofacies classification.
23

Hunt, Stephen E. "Uncertainty Quantification Using Epi-Splines and Soft Information". Thesis, Monterey, California. Naval Postgraduate School, 2012. http://hdl.handle.net/10945/7361.

Full text
Abstract
Approved for public release; distribution is unlimited
This thesis deals with the problem of measuring system performance in the presence of uncertainty. The system under consideration may be as simple as an Army vehicle subjected to a kinetic attack or as complex as the human cognitive process. Information about the system performance is found in the observed data points, which we call hard information, and may be collected from physical sensors, field test data, and computer simulations. Soft information is available from human sources such as subject-matter experts and analysts, and represents qualitative information about the system performance and the uncertainty present. We propose the use of epi-splines in a nonparametric framework that allows for the systematic integration of hard and soft information for the estimation of system performance density functions in order to quantify uncertainty. We conduct empirical testing of several benchmark analytical examples, where the true probability density functions are known. We compare the performance of the epi-spline estimator to kernel-based estimates and highlight a real-world problem context to illustrate the potential of the framework.
Los estilos APA, Harvard, Vancouver, ISO, etc.
24

Chen, Qi. "Uncertainty quantification in assessment of damage ship survivability". Thesis, University of Strathclyde, 2012. http://oleg.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=19511.

Texto completo
Resumen
Ongoing developments in improving ship safety indicate the gradual transition from a compliance-based culture to a sustainable safety-oriented culture. Sophisticated methods, tools and techniques are demanded to address the dynamic behaviour of a ship in a physical environment. This is particularly true for investigating the flooding phenomenon of a damaged ship, a principal hazard endangering modern ships. In this respect, first-principles tools represent a rational and cost-effective approach to address it at both design and operational stages. Acknowledging the criticality of ship survivability and the various maturity levels of state-of-the-art tools, analyses of the underlying uncertainties in relation to relevant predictions become an inevitable component to be addressed. The research presented in this thesis proposes a formalised Bayesian approach for quantifying uncertainties associated with the assessment of ship survivability. It elaborates a formalised procedure for synthesizing first-principles tools with existing knowledge from various sources. The outcome is a mathematical model for predicting time-domain survivability and quantifying the associated uncertainties. In view of emerging ship life-cycle safety management issues and the recent initiative of "Safe Return to Port", emergency management is recognised as the last remedy to address an evolving flooding crisis. For this reason, an emergency decision support framework is proposed to demonstrate the applicability of the presented Bayesian approach. A case study is enclosed to elucidate the devised shipboard decision support framework for flooding-related emergency control. Various aspects of the presented methodology demonstrate considerable potential for further research, development and application. In an environment where more emphasis is placed on performance and probabilistic-based solutions, it is believed that this research has contributed positively and substantially towards ship safety, with particular reference to uncertainty analysis and ensuing applications.
Los estilos APA, Harvard, Vancouver, ISO, etc.
25

Lebon, Jérémy. "Towards multifidelity uncertainty quantification for multiobjective structural design". Phd thesis, Université de Technologie de Compiègne, 2013. http://tel.archives-ouvertes.fr/tel-01002392.

Texto completo
Resumen
This thesis aims at Multi-Objective Optimization under Uncertainty in structural design. We investigate Polynomial Chaos Expansion (PCE) surrogates which require extensive training sets. We then face two issues: the high computational cost of an individual Finite Element simulation and its limited precision. From a numerical point of view, and in order to limit the computational expense of the PCE construction, we focus in particular on sparse PCE schemes. We also develop a custom Latin Hypercube Sampling scheme taking into account the finite precision of the simulation. From the modeling point of view, we propose a multifidelity approach involving a hierarchy of models ranging from full-scale simulations through reduced-order physics up to response surfaces. Finally, we investigate multiobjective optimization of structures under uncertainty. We extend the PCE model of the design objectives by taking into account the design variables. We illustrate our work with examples in sheet metal forming and optimal design of truss structures.
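As an illustration of the Latin Hypercube ingredient mentioned above, a minimal sketch using plain LHS from scipy is given below; the thesis's custom scheme additionally accounts for the finite precision of the simulation, which is not reproduced here, and the bounds and sample size are invented for the example.

import numpy as np
from scipy.stats import qmc

# Space-filling design over 3 uncertain inputs (bounds are illustrative).
lower = np.array([0.8, 200.0, 0.1])
upper = np.array([1.2, 400.0, 0.5])

sampler = qmc.LatinHypercube(d=3, seed=0)
unit_samples = sampler.random(n=50)             # 50 points in [0, 1)^3, one per stratum
training_inputs = qmc.scale(unit_samples, lower, upper)

# Each row would be one finite-element run; the outputs are then used to fit the PCE surrogate.
print(training_inputs.shape)                     # (50, 3)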
Los estilos APA, Harvard, Vancouver, ISO, etc.
26

Hristov, Peter O. "Numerical modelling and uncertainty quantification of biodiesel filters". Thesis, University of Liverpool, 2018. http://livrepository.liverpool.ac.uk/3024537/.

Texto completo
Resumen
This dissertation explores the design and analysis of computer models for filters used to separate water from biodiesel. Regulations concerning air pollution and increasing fossil fuel scarcity mandate the transition towards biofuels. Moreover, increasingly stringent standards for fuel cleanliness are introduced continually. Biodiesel exhibits strong affinity towards water, which makes its separation from the fuel challenging. Water in the fuel can cause problems, ranging from reduced performance to significant damage to the equipment. A model of the filter is needed to substitute costly or impractical laboratory experiments and to enable the systematic studies of coalescence processes. These computational experiments provide a means for designing filtration equipment with optimal separation efficiency and pressure drop. The coalescence process is simulated using the lattice Boltzmann modelling framework. These models offer several advantages over conventional computational fluid dynamics solvers and are commonly used for the simulation of multiphase flows. Different versions of lattice Boltzmann models in two and three dimensions are created and used in this work. Complex computer models, such as those employed in this dissertation are considered expensive, in that their running times may prohibit any type of code analysis which requires many evaluations of the simulator to be performed. To alleviate this problem, a statistical metamodel known as a Gaussian process emulator is used. Once the computational cost of the model is reduced, uncertainty quantification methods and, in particular, sensitivity and reliability analyses are used to study its performance. Tools and packages for industrial use are developed in this dissertation to enable the practical application of the studies conducted in it.
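The Gaussian process emulation step can be sketched as follows with scikit-learn, replacing the lattice Boltzmann simulator by a cheap stand-in function; the kernel choice, design points and the simulator itself are purely illustrative.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Toy stand-in for an expensive simulator (e.g., pressure drop vs. one design input).
def expensive_simulator(x):
    return np.sin(3.0 * x) + 0.5 * x

X_train = np.linspace(0.0, 2.0, 8).reshape(-1, 1)     # a handful of costly runs
y_train = expensive_simulator(X_train).ravel()

kernel = ConstantKernel(1.0) * RBF(length_scale=0.5)
emulator = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
emulator.fit(X_train, y_train)

# Cheap predictions with uncertainty, usable for sensitivity or reliability studies.
X_new = np.linspace(0.0, 2.0, 200).reshape(-1, 1)
mean, std = emulator.predict(X_new, return_std=True)
print(mean[:3], std[:3])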
Los estilos APA, Harvard, Vancouver, ISO, etc.
27

Lal, Rajnesh. "Data assimilation and uncertainty quantification in cardiovascular biomechanics". Thesis, Montpellier, 2017. http://www.theses.fr/2017MONTS088/document.

Texto completo
Resumen
Les simulations numériques des écoulements sanguins cardiovasculaires peuvent combler d’importantes lacunes dans les capacités actuelles de traitement clinique. En effet, elles offrent des moyens non invasifs pour quantifier l’hémodynamique dans le cœur et les principaux vaisseaux sanguins chez les patients atteints de maladies cardiovasculaires. Ainsi, elles permettent de recouvrer les caractéristiques des écoulements sanguins qui ne peuvent pas être obtenues directement à partir de l’imagerie médicale. Dans ce sens, des simulations personnalisées utilisant des informations propres aux patients aideraient à une prévision individualisée des risques. Nous pourrions, en effet, disposer des informations clés sur la progression éventuelle d’une maladie ou détecter de possibles anomalies physiologiques. Les modèles numériques peuvent fournir également des moyens pour concevoir et tester de nouveaux dispositifs médicaux et peuvent être utilisés comme outils prédictifs pour la planification de traitement chirurgical personnalisé. Ils aideront ainsi à la prise de décision clinique. Cependant, une difficulté dans cette approche est que, pour être fiables, les simulations prédictives spécifiques aux patients nécessitent une assimilation efficace de leurs données médicales. Ceci nécessite la solution d’un problème hémodynamique inverse, où les paramètres du modèle sont incertains et sont estimés à l’aide des techniques d’assimilation de données. Dans cette thèse, le problème inverse pour l’estimation des paramètres est résolu par une méthode d’assimilation de données basée sur un filtre de Kalman d’ensemble (EnKF). Connaissant les incertitudes sur les mesures, un tel filtre permet la quantification des incertitudes liées aux paramètres estimés. Un algorithme d’estimation de paramètres, basé sur un filtre de Kalman d’ensemble, est proposé dans cette thèse pour des calculs hémodynamiques spécifiques à un patient, dans un réseau artériel schématique et à partir de mesures cliniques incertaines. La méthodologie est validée à travers plusieurs scénarios in silico utilisant des données synthétiques. La performance de l’algorithme d’estimation de paramètres est également évaluée sur des données expérimentales pour plusieurs réseaux artériels et dans un cas provenant d’un banc d’essai in vitro et des données cliniques réelles d’un volontaire (cas spécifique du patient). Le but principal de cette thèse est l’analyse hémodynamique spécifique du patient dans le polygone de Willis, appelé aussi cercle artériel du cerveau. Les propriétés hémodynamiques communes, comme celles de la paroi artérielle (module de Young, épaisseur de la paroi et coefficient viscoélastique), et les paramètres des conditions aux limites (coefficients de réflexion et paramètres du modèle de Windkessel) sont estimés. Il est également démontré qu’un modèle appelé compartiment d’ordre réduit (ou modèle dimension zéro) permet une estimation simple et fiable des caractéristiques du flux sanguin dans le polygone de Willis. De plus, il est ressorti que les simulations avec les paramètres estimés capturent les formes attendues pour les ondes de pression et de débit aux emplacements prescrits par le clinicien.
Cardiovascular blood flow simulations can fill several critical gaps in current clinical capabilities. They offer non-invasive ways to quantify hemodynamics in the heart and major blood vessels for patients with cardiovascular diseases, which cannot be directly obtained from medical imaging. Patient-specific simulations (incorporating data unique to the individual) enable individualised risk prediction and provide key insights into disease progression and/or the detection of physiological abnormalities. They also provide means to systematically design and test new medical devices, and are used as predictive tools for surgical and personalised treatment planning, thus aiding in clinical decision-making. Patient-specific predictive simulations require effective assimilation of medical data for reliable simulated predictions. This is usually achieved by the solution of an inverse hemodynamic problem, where uncertain model parameters are estimated using techniques for merging data and numerical models known as data assimilation methods. In this thesis, the inverse problem is solved through a data assimilation method using an ensemble Kalman filter (EnKF) for parameter estimation. By using an ensemble Kalman filter, the solution also comes with a quantification of the uncertainties for the estimated parameters. An ensemble Kalman filter-based parameter estimation algorithm is proposed for patient-specific hemodynamic computations in a schematic arterial network from uncertain clinical measurements. Several in silico scenarios (using synthetic data) are considered to investigate the efficiency of the parameter estimation algorithm using the EnKF. The usefulness of the parameter estimation algorithm is also assessed using experimental data from an in vitro test rig and real clinical data from a volunteer (patient-specific case). The proposed algorithm is evaluated on arterial networks which include single arteries, cases of bifurcation, a simple human arterial network and a complex arterial network including the circle of Willis. The ultimate aim is to perform patient-specific hemodynamic analysis in the network of the circle of Willis. Common hemodynamic properties (parameters), such as arterial wall properties (Young’s modulus, wall thickness, and viscoelastic coefficient) and terminal boundary parameters (reflection coefficient and Windkessel model parameters), are estimated as the solution to an inverse problem using time-series pressure values and blood flow rates as measurements. It is also demonstrated that a proper reduced-order zero-dimensional compartment model can lead to a simple and reliable estimation of blood flow features in the circle of Willis. The simulations with the estimated parameters capture target pressure or flow rate waveforms at given specific locations.
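A minimal sketch of one stochastic EnKF analysis step of the kind described above is given below; the arterial-network forward model is abstracted into a generic observation operator, and all dimensions and numbers are placeholders.

import numpy as np

def enkf_update(ensemble, observations, obs_operator, obs_cov, rng):
    # ensemble:     (n_params, n_members) prior parameter ensemble
    # observations: (n_obs,) measured data (e.g., sampled pressures / flow rates)
    # obs_operator: maps one parameter vector to predicted observations
    # obs_cov:      (n_obs, n_obs) measurement-error covariance
    n_params, n_members = ensemble.shape
    predicted = np.column_stack([obs_operator(ensemble[:, j]) for j in range(n_members)])

    A = ensemble - ensemble.mean(axis=1, keepdims=True)    # parameter anomalies
    Y = predicted - predicted.mean(axis=1, keepdims=True)  # predicted-data anomalies

    Pxy = A @ Y.T / (n_members - 1)                        # cross-covariance
    Pyy = Y @ Y.T / (n_members - 1) + obs_cov              # innovation covariance
    K = Pxy @ np.linalg.inv(Pyy)                           # Kalman gain from the ensemble

    # Perturbed observations: one noisy copy of the data per ensemble member
    noise = rng.multivariate_normal(np.zeros(len(observations)), obs_cov, size=n_members).T
    return ensemble + K @ (observations[:, None] + noise - predicted)

# Toy usage: estimate two parameters of a linear "hemodynamic" map from 5 noisy data points
rng = np.random.default_rng(0)
H = rng.standard_normal((5, 2))
true_theta = np.array([1.5, -0.7])
R = 0.05 ** 2 * np.eye(5)
data = H @ true_theta + rng.multivariate_normal(np.zeros(5), R)
prior = rng.normal(0.0, 1.0, size=(2, 100))
posterior = enkf_update(prior, data, lambda th: H @ th, R, rng)
print(posterior.mean(axis=1))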
Los estilos APA, Harvard, Vancouver, ISO, etc.
28

Zhang, Zheng Ph D. Massachusetts Institute of Technology. "Uncertainty quantification for integrated circuits and microelectromechanical systems". Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/99855.

Texto completo
Resumen
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 155-168).
Uncertainty quantification has become an important task and an emerging topic in many engineering fields. Uncertainties can be caused by many factors, including inaccurate component models, the stochastic nature of some design parameters, external environmental fluctuations (e.g., temperature variation), measurement noise, and so forth. In order to enable robust engineering design and optimal decision making, efficient stochastic solvers are highly desired to quantify the effects of uncertainties on the performance of complex engineering designs. Process variations have become increasingly important in the semiconductor industry due to the shrinking of micro- and nano-scale devices. Such uncertainties have led to remarkable performance variations at both circuit and system levels, and they cannot be ignored any more in the design of nano-scale integrated circuits and microelectromechanical systems (MEMS). In order to simulate the resulting stochastic behaviors, Monte Carlo techniques have been employed in SPICE-like simulators for decades, and they still remain the mainstream techniques in this community. Despite their ease of implementation, Monte Carlo simulators are often too time-consuming due to the huge number of repeated simulations. This thesis reports the development of several stochastic spectral methods to accelerate the uncertainty quantification of integrated circuits and MEMS. Stochastic spectral methods have emerged as a promising alternative to Monte Carlo in many engineering applications, but their performance may degrade significantly as the parameter dimensionality increases. In this work, we develop several efficient stochastic simulation algorithms for various integrated circuits and MEMS designs, including problems with both low-dimensional and high-dimensional random parameters, as well as complex systems with hierarchical design structures. The first part of this thesis reports a novel stochastic-testing circuit/MEMS simulator as well as its advanced simulation engine for radio-frequency (RF) circuits. The proposed stochastic testing can be regarded as a hybrid variant of stochastic Galerkin and stochastic collocation: it is an intrusive simulator with decoupled computation and adaptive time stepping inside the solver. As a result, our simulator gains remarkable speedup over standard stochastic spectral methods and Monte Carlo in the DC, transient and AC simulation of various analog, digital and RF integrated circuits. An advanced uncertainty quantification algorithm for the periodic steady states (or limit cycles) of analog/RF circuits is further developed by combining stochastic testing and shooting Newton. Our simulator is verified by various integrated circuits, showing 10² x to 10³ x speedup over Monte Carlo when a similar level of accuracy is required. The second part of this thesis presents two approaches for hierarchical uncertainty quantification. In hierarchical uncertainty quantification, we propose to employ stochastic spectral methods at different design hierarchies to simulate complex systems efficiently. The key idea is to ignore the multiple random parameters inside each subsystem and to treat each subsystem as a single random parameter. The main difficulty is to recompute the basis functions and quadrature rules that are required for the high-level uncertainty quantification, since the density function of an obtained low-level surrogate model is generally unknown.
In order to address this issue, the first proposed algorithm computes new basis functions and quadrature points in the low-level (and typically high-dimensional) parameter space. This approach is very accurate; however, it may suffer from the curse of dimensionality. In order to handle high-dimensional problems, a sparse stochastic testing simulator based on analysis of variance (ANOVA) is developed to accelerate the low-level simulation. At the high level, a fast algorithm based on tensor decompositions is proposed to compute the basis functions and Gauss quadrature points. Our algorithm is verified by some MEMS/IC co-design examples with both low-dimensional and high-dimensional (up to 184) random parameters, showing about 10² x speedup over the state-of-the-art techniques. The second proposed hierarchical uncertainty quantification technique instead constructs a density function for each subsystem by some monotonic interpolation schemes. This approach is capable of handling general low-level, possibly non-smooth surrogate models, and it allows computing new basis functions and quadrature points in an analytical way. The computational techniques developed in this thesis are based on stochastic differential algebraic equations, but the results can also be applied to many other engineering problems (e.g., silicon photonics, heat transfer problems, fluid dynamics, electromagnetics and power systems). There are many research opportunities in this direction. Important open problems include how to solve high-dimensional problems (by both deterministic and randomized algorithms), how to deal with discontinuous response surfaces, how to handle correlated non-Gaussian random variables, how to couple noise and random parameters in uncertainty quantification, how to deal with correlated and time-dependent subsystems in hierarchical uncertainty quantification, and so forth.
by Zheng Zhang.
Ph. D.
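As a very small non-intrusive illustration of the stochastic spectral ideas above, the sketch below propagates a single Gaussian parameter through an arbitrary response with Gauss-Hermite quadrature; the stochastic-testing simulator in the thesis is intrusive and far more general, and the response function here is invented.

import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def circuit_response(p):
    # Placeholder for a SPICE-level output (e.g., a gain) as a function of one parameter.
    return 10.0 / (1.0 + np.exp(-p))

mu, sigma = 1.0, 0.2               # parameter modelled as Gaussian(mu, sigma^2)
nodes, weights = hermegauss(9)     # quadrature for the weight exp(-x^2/2)
weights = weights / np.sqrt(2.0 * np.pi)    # normalise to a probability measure

values = circuit_response(mu + sigma * nodes)
mean = np.sum(weights * values)
variance = np.sum(weights * (values - mean) ** 2)
print(mean, variance)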
Los estilos APA, Harvard, Vancouver, ISO, etc.
29

Pascual, Blanca. "Uncertainty quantification for complex structures : statics and dynamics". Thesis, Swansea University, 2012. https://cronfa.swan.ac.uk/Record/cronfa42987.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
30

Mulani, Sameer B. "Uncertainty Quantification in Dynamic Problems With Large Uncertainties". Diss., Virginia Tech, 2006. http://hdl.handle.net/10919/28617.

Texto completo
Resumen
This dissertation investigates uncertainty quantification in dynamic problems. The Advanced Mean Value (AMV) method is used to calculate the probabilistic sound power and the sensitivity of elastically supported panels with small uncertainty (coefficient of variation). Sound power calculations are done using the Finite Element Method (FEM) and the Boundary Element Method (BEM). The sensitivities of the sound power are calculated through direct differentiation of the FEM/BEM/AMV equations. The results are compared with Monte Carlo simulation (MCS). An improved method is developed using AMV, a metamodel, and MCS. This new technique is applied to calculate the sound power of a composite panel using FEM and the Rayleigh Integral. The proposed methodology shows considerable improvement both in terms of accuracy and computational efficiency. In systems with large uncertainties, the above approach does not work. Two Spectral Stochastic Finite Element Method (SSFEM) algorithms are developed to solve stochastic eigenvalue problems using polynomial chaos. Presently, the approaches are restricted to problems with real and distinct eigenvalues. In both approaches, the system uncertainties are modeled by Wiener-Askey orthogonal polynomial functions. Galerkin projection is applied in the probability space to minimize the weighted residual of the error of the governing equation. The first algorithm is based on the inverse iteration method. A modification is suggested to calculate higher eigenvalues and eigenvectors. The algorithm is applied to both discrete and continuous systems. In continuous systems, the uncertainties are modeled as Gaussian processes using the Karhunen-Loeve (KL) expansion. The second algorithm is based on the implicit polynomial iteration method. This algorithm is found to be more efficient when applied to discrete systems. However, the application of the algorithm to continuous systems results in ill-conditioned system matrices, which seriously limits its application. Lastly, an algorithm to find the basis random variables of the KL expansion for non-Gaussian processes is developed. The basis random variables are obtained via a nonlinear transformation of the marginal cumulative distribution function using the standard deviation. Results are obtained for three known skewed distributions: Log-Normal, Beta, and Exponential. In all cases, it is found that the proposed algorithm matches the known solutions very well and can be applied to solve non-Gaussian processes using SSFEM.
Ph. D.
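A minimal sketch of the Karhunen-Loeve ingredient used above is shown below: a 1-D Gaussian process is sampled from the eigendecomposition of a discretised covariance; the exponential kernel, grid and truncation level are illustrative, and the non-Gaussian transformation of the thesis is not included.

import numpy as np

# Discretised Karhunen-Loeve expansion of a 1-D Gaussian process.
x = np.linspace(0.0, 1.0, 200)
corr_length = 0.2
cov = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_length)   # exponential covariance

eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]                              # largest eigenvalues first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

n_terms = 20                                    # truncated expansion
rng = np.random.default_rng(0)
xi = rng.standard_normal(n_terms)               # independent standard normal coefficients
sample = eigvecs[:, :n_terms] @ (np.sqrt(eigvals[:n_terms]) * xi)
print(sample[:5])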
Los estilos APA, Harvard, Vancouver, ISO, etc.
31

Macatula, Romcholo Yulo. "Linear Parameter Uncertainty Quantification using Surrogate Gaussian Processes". Thesis, Virginia Tech, 2020. http://hdl.handle.net/10919/99411.

Texto completo
Resumen
We consider uncertainty quantification using surrogate Gaussian processes. We take a previous sampling algorithm and provide a closed-form expression of the resulting posterior distribution. We extend the method to weighted least squares and to a Bayesian approach, both with closed-form expressions of the resulting posterior distributions. We test the methods on 1D deconvolution and 2D tomography. Our new methods improve on the previous algorithm; however, they fall short of a typical Bayesian inference method in some respects.
Master of Science
Parameter uncertainty quantification seeks to determine both estimates and the uncertainty regarding estimates of model parameters. Examples of model parameters include physical properties such as density, growth rates, or even deblurred images. Previous work has shown that replacing data with a surrogate model can provide promising estimates with low uncertainty. We extend the previous methods in the specific field of linear models. Theoretical results are tested on simulated computed tomography problems.
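For orientation, the closed-form Gaussian posterior of a linear model that this kind of analysis builds on can be written in a few lines; the forward operator A below is a random stand-in rather than a deconvolution or tomography matrix, and the surrogate-GP construction of the thesis is not reproduced.

import numpy as np

rng = np.random.default_rng(1)
n_obs, n_params = 50, 10
A = rng.standard_normal((n_obs, n_params))        # stand-in forward operator
x_true = rng.standard_normal(n_params)
noise_var = 0.1 ** 2
y = A @ x_true + np.sqrt(noise_var) * rng.standard_normal(n_obs)

prior_var = 1.0                                   # zero-mean Gaussian prior, unit variance
# Posterior covariance and mean of the conjugate Gaussian linear model
post_cov = np.linalg.inv(A.T @ A / noise_var + np.eye(n_params) / prior_var)
post_mean = post_cov @ (A.T @ y / noise_var)
post_std = np.sqrt(np.diag(post_cov))             # per-parameter uncertainty
print(post_mean, post_std)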
Los estilos APA, Harvard, Vancouver, ISO, etc.
32

Huang, Jiangeng. "Sequential learning, large-scale calibration, and uncertainty quantification". Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/91935.

Texto completo
Resumen
With remarkable advances in computing power, computer experiments continue to expand the boundaries and drive down the cost of various scientific discoveries. New challenges keep arising from designing, analyzing, modeling, calibrating, optimizing, and predicting in computer experiments. This dissertation consists of six chapters, exploring statistical methodologies in sequential learning, model calibration, and uncertainty quantification for heteroskedastic computer experiments and large-scale computer experiments. For heteroskedastic computer experiments, an optimal lookahead based sequential learning strategy is presented, balancing replication and exploration to facilitate separating signal from input-dependent noise. Motivated by challenges in both large data size and model fidelity arising from ever larger modern computer experiments, highly accurate and computationally efficient divide-and-conquer calibration methods based on on-site experimental design and surrogate modeling for large-scale computer models are developed in this dissertation. The proposed methodology is applied to calibrate a real computer experiment from the gas and oil industry. This on-site surrogate calibration method is further extended to multiple output calibration problems.
Doctor of Philosophy
With remarkable advances in computing power, complex physical systems today can be simulated comparatively cheaply and to high accuracy through computer experiments. Computer experiments continue to expand the boundaries and drive down the cost of various scientific investigations, including biological, business, engineering, industrial, management, health-related, physical, and social sciences. This dissertation consists of six chapters, exploring statistical methodologies in sequential learning, model calibration, and uncertainty quantification for heteroskedastic computer experiments and large-scale computer experiments. For computer experiments with changing signal-to-noise ratio, an optimal lookahead based sequential learning strategy is presented, balancing replication and exploration to facilitate separating signal from complex noise structure. In order to effectively extract key information from massive amount of simulation and make better prediction for the real world, highly accurate and computationally efficient divide-and-conquer calibration methods for large-scale computer models are developed in this dissertation, addressing challenges in both large data size and model fidelity arising from ever larger modern computer experiments. The proposed methodology is applied to calibrate a real computer experiment from the gas and oil industry. This large-scale calibration method is further extended to solve multiple output calibration problems.
Los estilos APA, Harvard, Vancouver, ISO, etc.
33

Abdollahzadeh, Asaad. "Adaptive algorithms for history matching and uncertainty quantification". Thesis, Heriot-Watt University, 2014. http://hdl.handle.net/10399/2752.

Texto completo
Resumen
Numerical reservoir simulation models are the basis for many decisions in regard to predicting, optimising, and improving the production performance of oil and gas reservoirs. History matching is required to calibrate models to the dynamic behaviour of the reservoir, due to the existence of uncertainty in model parameters. Finally, a set of history-matched models is used for reservoir performance prediction and for economic and risk assessment of different development scenarios. Various algorithms are employed to search and sample the parameter space in history matching and uncertainty quantification problems. The choice of algorithm and its implementation, done through a number of control parameters, have a significant impact on the effectiveness and efficiency of the algorithm and thus on the quality of results and the speed of the process. This thesis is concerned with the investigation, development, and implementation of improved and adaptive algorithms for reservoir history matching and uncertainty quantification problems. A set of evolutionary algorithms are considered and applied to history matching. The shared characteristic of the applied algorithms is adaptation by balancing exploration and exploitation of the search space, which can lead to improved convergence and diversity. This includes the use of estimation of distribution algorithms, which implicitly adapt their search mechanism to the characteristics of the problem. Hybridising them with genetic algorithms, multiobjective sorting algorithms, and real-coded, multi-model and multivariate Gaussian-based models can help these algorithms to adapt even more and improve their performance. Finally, diversity measures are used to develop an explicit, adaptive algorithm and to control the algorithm’s performance based on the structure of the problem. Uncertainty quantification in a Bayesian framework can be carried out by resampling the search space using Markov chain Monte Carlo sampling algorithms. Common critiques of these are their low efficiency and their need for control-parameter tuning. A Metropolis-Hastings sampling algorithm with an adaptive multivariate Gaussian proposal distribution and a K-nearest neighbour approximation has been developed and applied.
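A minimal sketch of a random-walk Metropolis-Hastings sampler with a Gaussian proposal adapted from the chain history, in the spirit of the sampler described above, is given below; the 2-D Gaussian target stands in for a history-match misfit, and the K-nearest-neighbour approximation is not included.

import numpy as np

def log_posterior(theta):
    # Illustrative target: a correlated 2-D Gaussian standing in for a history-match misfit.
    cov = np.array([[1.0, 0.8], [0.8, 1.0]])
    return -0.5 * theta @ np.linalg.solve(cov, theta)

rng = np.random.default_rng(0)
n_steps, dim = 5000, 2
chain = np.zeros((n_steps, dim))
proposal_cov = 0.5 * np.eye(dim)
current = np.zeros(dim)
current_lp = log_posterior(current)

for i in range(1, n_steps):
    proposal = rng.multivariate_normal(current, proposal_cov)
    lp = log_posterior(proposal)
    if np.log(rng.random()) < lp - current_lp:     # Metropolis acceptance test
        current, current_lp = proposal, lp
    chain[i] = current
    # Adapt the proposal covariance from the chain history after a burn-in period.
    if i > 500 and i % 100 == 0:
        proposal_cov = 2.38 ** 2 / dim * np.cov(chain[250:i].T) + 1e-6 * np.eye(dim)

print(chain[2500:].mean(axis=0))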
Los estilos APA, Harvard, Vancouver, ISO, etc.
34

Doty, Austin. "Nonlinear Uncertainty Quantification, Sensitivity Analysis, and Uncertainty Propagation of a Dynamic Electrical Circuit". University of Dayton / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1355456642.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
35

Kuznetsova, Alexandra Anatolievna. "Hierarchical geological realism in history matching for reliable reservoir uncertainty predictions". Thesis, Heriot-Watt University, 2017. http://hdl.handle.net/10399/3282.

Texto completo
Resumen
The oil and gas industry has always been associated with huge risks. To minimise these risks, one looks for reliable reservoir performance predictions in order to make better field development decisions. The great challenge associated with reliable predictions is to account for the essential geological uncertainties and propagate them through the engineering model validation process. In this thesis, we propose a new methodology to improve the reliability of reservoir predictions under the Bayesian framework. The first step of the methodology applies a new hierarchical approach to account for essential geological uncertainties from different levels of geological data in facies modelling. As a result of the hierarchical approach, we evaluate the prior range of the different geological uncertainties. Facies models greatly affect simulation results, but it is a great challenge to history match them whilst maintaining geological realism. Therefore, the next step of the methodology aims to improve geological realism during history matching. We propose to combine a metric-space approach and machine learning classification to evaluate geological relations between multiple geological scenarios and parameter combinations and propagate them into history matching. Multidimensional scaling was used to analyse the similarity of the facies models in the metric space. The results of different machine learning classification methods – k-means clustering, Support Vector Machines, Random Forest – were compared, and the better-performing ones were included in history matching. The reservoir predictions under uncertainty were performed by evaluating the posterior probability distribution under the Bayesian framework and estimating the credible intervals (P10, P50, P90). The methodology was applied to a synthetic case study based on a real reservoir off the West Coast of Africa (an offshore turbidite reservoir). The main results show that the proposed methodology was able to improve the geologically realistic representation of facies models during history matching and uncertainty quantification. Some additional controls on facies architecture and facies connectivity modelling could be introduced to improve the quality of the facies realisations.
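The metric-space step can be sketched as follows: a model-to-model dissimilarity matrix is embedded with multidimensional scaling and the embedded points are grouped with k-means; the dissimilarities below are random placeholders, and the SVM and Random Forest comparisons of the thesis are omitted.

import numpy as np
from sklearn.manifold import MDS
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_models = 40
# Placeholder: a symmetric dissimilarity matrix between facies realisations
# (in practice computed from model-to-model differences).
D = rng.random((n_models, n_models))
D = 0.5 * (D + D.T)
np.fill_diagonal(D, 0.0)

embedding = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = embedding.fit_transform(D)                  # models as points in metric space

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(coords)
print(labels)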
Los estilos APA, Harvard, Vancouver, ISO, etc.
36

Mantis, George C. "Quantification and propagation of disciplinary uncertainty via bayesian statistics". Diss., Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/12136.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
37

Phillips, Edward G. "Fast solvers and uncertainty quantification for models of magnetohydrodynamics". Thesis, University of Maryland, College Park, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=3644175.

Texto completo
Resumen

The magnetohydrodynamics (MHD) model describes the flow of electrically conducting fluids in the presence of magnetic fields. A principal application of MHD is the modeling of plasma physics, ranging from plasma confinement for thermonuclear fusion to astrophysical plasma dynamics. MHD is also used to model the flow of liquid metals, for instance in magnetic pumps, liquid metal blankets in fusion reactor concepts, and aluminum electrolysis. The model consists of a non-self-adjoint, nonlinear system of partial differential equations (PDEs) that couple the Navier-Stokes equations for fluid flow to a reduced set of Maxwell's equations for electromagnetics.

In this dissertation, we consider computational issues arising for the MHD equations. We focus on developing fast computational algorithms for solving the algebraic systems that arise from finite element discretizations of the fully coupled MHD equations. Emphasis is on solvers for the linear systems arising from algorithms such as Newton's method or Picard iteration, with a main goal of developing preconditioners for use with iterative methods for the linearized systems. In particular, we first consider the linear systems arising from an exact penalty finite element formulation of the MHD equations. We then draw on this research to develop solvers for a formulation that includes a Lagrange multiplier within Maxwell's equations. We also consider a simplification of the MHD model: in the MHD kinematics model, the equations are reduced by assuming that the flow behavior of the system is known. In this simpler setting, we allow for epistemic uncertainty to be present. By mathematically modeling this uncertainty with random variables, we investigate its implications on the physical model.

Los estilos APA, Harvard, Vancouver, ISO, etc.
38

Erbas, Demet. "Sampling strategies for uncertainty quantification in oil recovery prediction". Thesis, Heriot-Watt University, 2007. http://hdl.handle.net/10399/70.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
39

Gligorijevic, Djordje. "Predictive Uncertainty Quantification and Explainable Machine Learning in Healthcare". Diss., Temple University Libraries, 2018. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/520057.

Texto completo
Resumen
Computer and Information Science
Ph.D.
Predictive modeling is an increasingly important part of decision making. Advances in Machine Learning predictive modeling have spread across many domains, bringing significant improvements in performance and providing unique opportunities for novel discoveries. Notably important domains are medicine and healthcare, which take care of people's wellbeing. While these are among the most developed and actively researched areas of science, there are many ways they can be improved. In particular, novel tools developed based on Machine Learning theory have brought benefits across many areas of clinical practice, pushing the boundaries of medical science and directly affecting the well-being of millions of patients. Additionally, the healthcare and medicine domains require predictive modeling to anticipate and overcome many obstacles that the future may hold. These kinds of applications employ precise decision-making processes which require accurate predictions. However, good prediction on its own is often insufficient. There has been no major focus on developing algorithms with good-quality uncertainty estimates. This thesis therefore aims to provide a variety of ways to incorporate solutions by learning high-quality uncertainty estimates or providing interpretability of the models where needed, for the purpose of improving existing tools built in practice and allowing many other tools to be used where uncertainty is the key factor for decision making. The first part of the thesis proposes approaches for learning high-quality uncertainty estimates for both short- and long-term predictions in multi-task learning, developed on top of continuous probabilistic graphical models. In many scenarios, especially in long-term predictions, it may be of great importance for the models to provide a reliability flag in order to be accepted by domain experts. To this end, we explored a widely applied structured regression model with the goal of providing meaningful uncertainty estimates on various predictive tasks. Our particular interest is in modeling uncertainty propagation while predicting far into the future. To address this important problem, our approach centers around providing an uncertainty estimate by modeling input features as random variables. This allows modeling uncertainty from noisy inputs. In cases when the model iteratively produces errors, it should propagate uncertainty over the predictive horizon, which may provide invaluable information for decision making based on predictions. In the second part of the thesis we propose novel neural embedding models for learning low-dimensional embeddings of medical concepts, such as diseases and genes, and show how they can be interpreted to assess their quality and how they can be used to solve many problems in medical and healthcare research. We use EHR data to discover novel relationships between diseases by studying their comorbidities (i.e., co-occurrences in patients). We trained our models on a large-scale EHR database comprising more than 35 million inpatient cases. To confirm the value and potential of the proposed approach, we evaluate its effectiveness on a held-out set. Furthermore, for select diseases we provide a candidate gene list for which disease-gene associations were not studied previously, allowing biomedical researchers to better focus their often very costly lab studies.
We furthermore examine how disease heterogeneity can affect the quality of learned embeddings and propose an approach for learning types of such heterogeneous diseases, with a primary focus on learning types of sepsis. Finally, we evaluate the quality of the low-dimensional embeddings on tasks of predicting hospital quality indicators such as length of stay, total charges and mortality likelihood, demonstrating their superiority over other approaches. In the third part of the thesis we focus on decision making in the medicine and healthcare domains by developing state-of-the-art deep learning models capable of exceeding human performance while maintaining good interpretability and uncertainty estimates.
Temple University--Theses
Los estilos APA, Harvard, Vancouver, ISO, etc.
40

BASTOS, BERNARDO LEOPARDI GONCALVES BARRETTO. "UNCERTAINTY QUANTIFICATION AT RISK ASSESSMENT PROCEDURE DUE CONTAMINATED GROUNDWATER". PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2005. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=8184@1.

Texto completo
Resumen
UNIVERSIDADE FEDERAL DE ALAGOAS
FUNDAÇÃO DE APOIO À PESQUISA DO ESTADO DO RIO DE JANEIRO
A análise quantitativa de risco à saúde humana (AqR) devido a uma determinada área contaminada vem se verificando como importante ferramenta na gestão ambiental bem como a concretização de dano ambiental, tanto no Brasil como em outros países. Os procedimentos para AqR consistem em passos seqüenciados de forma orgânica e lógica e englobam características legais, aspectos toxicológicos e mecanismos de transporte. Apesar de não haver uma lei específica que regule a AqR, o Direito Ambiental permite que estas metodologias sejam plenamente aplicadas tanto no âmbito administrativo quanto no âmbito judicial para a caracterização de dano ambiental. As metodologias de AqR se valem de modelos fármaco-cinéticos que relacionam a exposição ao composto químico à possibilidade de causar danos à saúde humana. A Geotecnia Ambiental estuda o transporte e comportamento dos contaminantes nos solos e nas águas subterrâneas. A AqR se mostra um problema complexo e permeado por inúmeras incertezas e variabilidades. Foi proposta a utilização do método do segundo momento de primeira ordem (FOSM) para quantificar as incertezas relacionadas com a estimativa dos parâmetros de transporte a serem usadas em um modelo analítico de transporte de soluto em meios porosos (Domenico). O estudo de caso consiste na aplicação do programa desenvolvido para esta finalidade (SeRis). O método se mostra computacionalmente econômico e o estudo de caso, dentro das idealizações, identificou os parâmetros com maior importância relativa e apresentou uma variância total razoável para o resultado.
Quantitative human health risk assessment (AqR) for a contaminated site has become an important tool in environmental management and in the identification of environmental harm, in Brazil and in other countries. The AqR procedures consist of a logical sequence of actions concerning legal aspects, toxicological matters and transport phenomena. In spite of the absence of a single law regulating AqR specifically, Environmental Law as a whole allows AqR methodologies to be fully applied at governmental and judicial levels. The AqR procedures are based on pharmacokinetic models that quantitatively relate exposure to chemicals to the potential for human harm. Environmental Geotechnics studies the fate and transport of contaminants in soil and groundwater. AqR is a complex subject, permeated by uncertainties and variabilities. The application of the first-order second-moment (FOSM) method is proposed to quantify the uncertainties related to the estimation of the transport parameters used in an analytical model of solute transport in porous media (Domenico). A specific piece of software (SeRis) was developed to meet this objective and proved to be computationally efficient. The case study example indicated the relative importance of the considered parameters and presented a reasonable total system variance.
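A minimal FOSM sketch in the spirit of the approach above: the output variance of a response g is approximated from finite-difference sensitivities and the input variances, which also ranks the relative importance of each parameter; the response function and numbers are illustrative and not those of SeRis.

import numpy as np

def g(theta):
    # Illustrative response, e.g., a concentration depending on velocity, dispersivity, decay.
    v, alpha, lam = theta
    return v * np.exp(-lam) / (1.0 + alpha)

mean = np.array([1.0, 0.5, 0.2])        # parameter means
std = np.array([0.2, 0.1, 0.05])        # parameter standard deviations (assumed independent)

# First-order sensitivities by central finite differences.
grad = np.zeros(3)
h = 1e-6
for i in range(3):
    up, down = mean.copy(), mean.copy()
    up[i] += h
    down[i] -= h
    grad[i] = (g(up) - g(down)) / (2.0 * h)

total_var = np.sum((grad * std) ** 2)                 # FOSM variance of the output
contribution = (grad * std) ** 2 / total_var          # relative importance of each parameter
print(total_var, contribution)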
Los estilos APA, Harvard, Vancouver, ISO, etc.
41

Mohammadi, Ghazi Reza. "Inference and uncertainty quantification for unsupervised structural monitoring problems". Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/115791.

Texto completo
Resumen
Thesis: Ph. D. in Structures and Materials, Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 261-272).
Health monitoring is an essential functionality for smart and sustainable infrastructures that helps improve their safety and life span. A major element of such functionality is statistical inference and decision making, which aims to process the dynamic response of structures in order to localize the defects in those systems as well as quantify the uncertainties associated with such predictions. Accomplishing this task requires dealing with special constraints, in addition to the general challenges of inference problems, which are imposed by the uniqueness and size of civil infrastructures. These constraints are mainly associated with the small size and high dimensionality of the relevant data sets, low spatial resolution of measurements, and lack of prior information about the response of structures at all possible damaged states. Additionally, the measured responses at various locations on a structure are statistically dependent due to their connectivity via the structural elements. Ignoring such dependencies may result in inaccurate predictions, usually by blurring the damage localization resolution. In this thesis work, a comprehensive investigation has been carried out on developing appropriate signal processing, inference, and uncertainty quantification techniques with applications to data-driven structural health monitoring (SHM). For signal processing, we have developed a feature extraction scheme that uses nonlinear, non-stationary signal decomposition techniques to capture the effect of damage on the dynamic response of structures. We have also developed a general-purpose signal processing method by combining sparsity-based regularization with the singularity expansion method. This method can provide a sparse representation of signals in the complex-frequency plane and hence more robust system identification schemes. For uncertainty quantification and decision making, we have developed three different learning algorithms which are capable of characterizing the statistical dependencies of the relevant random variables in novelty detection inference problems under various constraints related to the quality, size, and dimensionality of data sets. In doing so, we have mainly used statistical graphical models and Markov random fields, optimization methods, kernel two-sample tests, and kernel dependence analysis. The developed methods may be applied to a wide range of problems such as SHM, medical diagnostics, network security, and event detection. We have experimentally evaluated these techniques by applying them to SHM application problems for damage localization in various laboratory prototypes as well as a full-scale structure.
by Reza Mohammadi Ghazi.
Ph. D. in Structures and Materials
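As a small illustration of the kernel two-sample ingredient mentioned above, the sketch below computes a biased Gaussian-kernel MMD statistic between features of a baseline and a test state and calibrates it with a permutation test; the data are synthetic and the graphical-model machinery of the thesis is not reproduced.

import numpy as np

def mmd2(X, Y, bandwidth=1.0):
    # Biased squared maximum mean discrepancy with a Gaussian kernel.
    def k(A, B):
        d2 = np.sum(A ** 2, 1)[:, None] + np.sum(B ** 2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-d2 / (2 * bandwidth ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, size=(200, 3))        # features from the healthy state
test = rng.normal(0.3, 1.0, size=(200, 3))            # features from a possibly damaged state

stat = mmd2(baseline, test)

# Permutation test to obtain a detection threshold / p-value
pooled = np.vstack([baseline, test])
null = []
for _ in range(200):
    rng.shuffle(pooled)
    null.append(mmd2(pooled[:200], pooled[200:]))
p_value = np.mean(np.array(null) >= stat)
print(stat, p_value)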
Los estilos APA, Harvard, Vancouver, ISO, etc.
42

Dostert, Paul Francis. "Uncertainty quantification using multiscale methods for porous media flows". [College Station, Tex. : Texas A&M University, 2007. http://hdl.handle.net/1969.1/ETD-TAMU-2532.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
43

Ndiaye, Aïssatou. "Uncertainty Quantification of Thermo-acousticinstabilities in gas turbine combustors". Thesis, Montpellier, 2017. http://www.theses.fr/2017MONTS062/document.

Texto completo
Resumen
Les instabilités thermo-acoustiques résultent de l'interaction entre les oscillations de pression acoustique et les fluctuations du taux de dégagement de chaleur de la flamme. Ces instabilités de combustion sont particulièrement préoccupantes en raison de leur fréquence dans les turbines à gaz modernes et à faible émission. Leurs principaux effets indésirables sont une réduction du temps de fonctionnement du moteur en raison des oscillations de grandes amplitudes ainsi que de fortes vibrations à l'intérieur de la chambre de combustion. La simulation numérique est maintenant devenue une approche clé pour comprendre et prédire ces instabilités dans la phase de conception industrielle. Cependant, la prédiction de ce phénomène reste difficile en raison de sa complexité; cela se confirme lorsque les paramètres physiques du processus de modélisation sont incertains, ce qui est pratiquement toujours le cas pour des systèmes réels. Introduire la quantification des incertitudes pour la thermo-acoustique est le seul moyen d'étudier et de contrôler la stabilité des chambres de combustion qui fonctionnent dans des conditions réalistes; c'est l'objectif de cette thèse. Dans un premier temps, une chambre de combustion académique (avec un seul injecteur et une seule flamme) ainsi que deux chambres de moteurs d'hélicoptère (avec N injecteurs et des flammes) sont étudiées. Les calculs basés sur un solveur de Helmholtz et un outil quasi-analytique de bas ordre fournissent des estimations appropriées de la fréquence et des structures modales pour chaque géométrie. L'analyse suggère que la réponse de la flamme aux perturbations acoustiques joue un rôle prédominant dans la dynamique de la chambre de combustion. Ainsi, la prise en compte des incertitudes liées à la représentation de la flamme apparaît comme une étape nécessaire vers une analyse robuste de la stabilité du système. Dans un second temps, la notion de facteur de risque, c'est-à-dire la probabilité pour un mode thermo-acoustique d'être instable, est introduite afin de fournir une description plus générale du système que la classification classique et binaire (stable / instable). Les approches de modélisation de Monte Carlo et de modèle de substitution sont associées pour effectuer une analyse de quantification d'incertitudes de la chambre de combustion académique avec deux paramètres incertains (amplitude et temps de réponse de la flamme). On montre que l'utilisation de modèles de substitution algébriques réduit drastiquement le nombre de calculs initiaux, donc la charge de calcul, tout en fournissant des estimations précises du facteur de risque modal. Pour traiter les problèmes multidimensionnels tels que les deux moteurs d'hélicoptère, une stratégie visant à réduire le nombre de paramètres incertains est introduite. La méthode « active subspace » combinée à une approche de changement de variables a permis d'identifier trois directions dominantes (au lieu des N paramètres incertains initiaux) qui suffisent à décrire la dynamique des deux systèmes industriels. Dès lors que ces paramètres dominants sont associés à des modèles de substitution appropriés, cela permet de réaliser efficacement une analyse de quantification des incertitudes de systèmes thermo-acoustiques complexes. Finalement, on examine la perspective d'utiliser la méthode adjointe pour analyser la sensibilité des systèmes thermo-acoustiques représentés par des solveurs 3D de Helmholtz.
Les résultats obtenus sur des cas tests 2D et 3D sont prometteurs et suggèrent d'explorer davantage le potentiel de cette méthode dans le cas de problèmes thermo-acoustiques encore plus complexes.
Thermoacoustic instabilities result from the interaction between acoustic pressure oscillations and flame heat release rate fluctuations. These combustion instabilities are of particular concern due to their frequent occurrence in modern, low emission gas turbine engines. Their major undesirable consequence is a reduced time of operation due to large amplitude oscillations of the flame position and structural vibrations within the combustor. Computational Fluid Dynamics (CFD) has now become a key approach to understanding and predicting these instabilities at an industrial readiness level. Still, predicting this phenomenon remains difficult due to modelling and computational challenges; this is even more true when physical parameters of the modelling process are uncertain, which is always the case in practical situations. Introducing Uncertainty Quantification for thermoacoustics is the only way to study and control the stability of gas turbine combustors operated under realistic conditions; this is the objective of this work. First, a laboratory-scale combustor (with only one injector and flame) as well as two industrial helicopter engines (with N injectors and flames) are investigated. Calculations based on a Helmholtz solver and a quasi-analytical low-order tool provide suitable estimates of the frequency and modal structures for each geometry. The analysis suggests that the flame response to acoustic perturbations plays the predominant role in the dynamics of the combustor. Accounting for the uncertainties of the flame representation is thus identified as a key step towards a robust stability analysis. Second, the notion of Risk Factor, that is to say the probability for a particular thermoacoustic mode to be unstable, is introduced in order to provide a more general description of the system than the classical binary (stable/unstable) classification. Monte Carlo and surrogate modelling approaches are then combined to perform an uncertainty quantification analysis of the laboratory-scale combustor with two uncertain parameters (amplitude and time delay of the flame response). It is shown that the use of algebraic surrogate models drastically reduces the number of state computations, thus the computational load, while providing accurate estimates of the modal risk factor. To deal with the curse of dimensionality, a strategy to reduce the number of uncertain parameters is further introduced in order to properly handle the two industrial helicopter engines. The active subspace algorithm, used together with a change of variables, allows identifying three dominant directions (instead of the N initial uncertain parameters) which are sufficient to describe the dynamics of the industrial systems. Combined with appropriate surrogate model construction, this allows conducting computationally efficient uncertainty quantification analyses of complex thermoacoustic systems. Third, the perspective of using the adjoint method for the sensitivity analysis of thermoacoustic systems represented by 3D Helmholtz solvers is examined. The results obtained for 2D and 3D test cases are promising and suggest further exploring the potential of this method on even more complex thermoacoustic problems.
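A minimal sketch of the active-subspace step described above: gradient samples of a scalar response are averaged into a matrix whose leading eigenvectors give the dominant directions; the quadratic growth-rate model, the number of parameters and the use of exact gradients are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n_params, n_samples = 19, 500        # e.g., one flame parameter per injector (illustrative)

# Illustrative quadratic response with three built-in dominant directions.
Q, _ = np.linalg.qr(rng.standard_normal((n_params, n_params)))
H = Q @ np.diag(np.concatenate([[10.0, 5.0, 2.0], 0.01 * np.ones(n_params - 3)])) @ Q.T

def grad_response(x):
    # Gradient of f(x) = 0.5 x^T H x, standing in for the gradient of a modal growth rate.
    return H @ x

X = rng.uniform(-1.0, 1.0, size=(n_samples, n_params))
grads = np.array([grad_response(x) for x in X])
C = grads.T @ grads / n_samples                 # average outer product of gradients

eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]
active_directions = eigvecs[:, order[:3]]       # three dominant directions, as in the study
print(active_directions.shape)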
Los estilos APA, Harvard, Vancouver, ISO, etc.
44

White, Jeremy. "Computer Model Inversion and Uncertainty Quantification in the Geosciences". Scholar Commons, 2014. https://scholarcommons.usf.edu/etd/5329.

Texto completo
Resumen
The subject of this dissertation is the use of computer models as data analysis tools in several different geoscience settings, including integrated surface water/groundwater modeling, tephra fallout modeling, geophysical inversion, and hydrothermal groundwater modeling. The dissertation is organized into three chapters, which correspond to three individual publication manuscripts. In the first chapter, a linear framework is developed to identify and estimate the potential predictive consequences of using a simple computer model as a data analysis tool. The framework is applied to a complex integrated surface-water/groundwater numerical model with thousands of parameters. Several types of predictions are evaluated, including particle travel time and surface-water/groundwater exchange volume. The analysis suggests that model simplifications have the potential to corrupt many types of predictions. The implementation of the inversion, including how the objective function is formulated, what minimum of the objective function value is acceptable, and how expert knowledge is enforced on parameters, can greatly influence the manifestation of model simplification. Depending on the prediction, failure to specifically address each of these important issues during inversion is shown to degrade the reliability of some predictions. In some instances, inversion is shown to increase, rather than decrease, the uncertainty of a prediction, which defeats the purpose of using a model as a data analysis tool. In the second chapter, an efficient inversion and uncertainty quantification approach is applied to a computer model of volcanic tephra transport and deposition. The computer model simulates many physical processes related to tephra transport and fallout. The utility of the approach is demonstrated for two eruption events. In both cases, the importance of uncertainty quantification is highlighted by exposing the variability in the conditioning provided by the observations used for inversion. The worth of different types of tephra data to reduce parameter uncertainty is evaluated, as is the importance of different observation error models. The analyses reveal the importance of using tephra granulometry data for inversion, which results in reduced uncertainty for most eruption parameters. In the third chapter, geophysical inversion is combined with hydrothermal modeling to evaluate the enthalpy of an undeveloped geothermal resource in a pull-apart basin located in southeastern Armenia. A high-dimensional gravity inversion is used to define the depth to the contact between the lower-density valley fill sediments and the higher-density surrounding host rock. The inverted basin depth distribution was used to define the hydrostratigraphy for the coupled groundwater-flow and heat-transport model that simulates the circulation of hydrothermal fluids in the system. Evaluation of several different geothermal system configurations indicates that the most likely system configuration is a low-enthalpy, liquid-dominated geothermal system.
Los estilos APA, Harvard, Vancouver, ISO, etc.
45

Yang, Chao. "ON PARTICLE METHODS FOR UNCERTAINTY QUANTIFICATION IN COMPLEX SYSTEMS". The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1511967797285962.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
46

Alkhatib, Ali. "Decision making and uncertainty quantification for surfactant-polymer flooding". Thesis, Imperial College London, 2013. http://hdl.handle.net/10044/1/22154.

Texto completo
Resumen
The aim of this thesis is to develop a robust parametric uncertainty quantification method and a decision making method for a chemical EOR process. The main motivation is that uncertainty is detrimental to the wide scale implementation of chemical EOR. Poor scale-up performance is not in line with the success in laboratory applications. Furthermore, economic uncertainty is also an important factor as low oil prices can deter EOR investment. As an example of chemical EOR we used Surfactant-polymer flooding due to its high potential and complexity. The approach was based on using Value of Flexibility evaluation in order to optimize the surfactant-polymer flooding in the presence of economic and technical uncertainty. This method was inspired by real options theory which provides a framework to value flexibility and captures the effect of uncertainty as the process evolves through time. By doing so, it provides the means to capitalize on the upside opportunities that these uncertainties present or to help mitigate worsening circumstances. In addition, it fulfils a secondary objective to develop a decision making process that combines both technical and economic uncertainty. The Least Squares Monte Carlo (LSM) method was chosen to value flexibility in surfactant-polymer flooding. The algorithm depends on two main components; the stochastic simulation of the input state variables and the dynamic programming approach that produce the optimal policy. The produced optimal policy represents the influence of uncertainty in the time series of the relevant input parameters. Different chemical related parameters were modelled stochastically such as surfactant and polymer adsorption rates and residual oil saturation. Static uncertainty in heterogeneity was incorporated using Gaussian and multiple-point statistics generated grids and dynamic uncertainty in heterogeneity was modelled using upscaling techniques. Economic uncertainties such as the oil price and surfactant and polymer cost were incorporated into the model as well. The results obtained for the initial case studies showed that the method produced higher value compared with static policy scenarios. It showed that by designing flexibility into the implementation of the surfactant-polymer flood, it is possible to create value in the presence of uncertainty. An attempt to enhance the performance of the LSM algorithm was introduced by using the probabilistic collocation method (PCM) to sample the distributions of the technical state input parameters more efficiently, requiring significantly less computational time compared to Monte Carlo sampling. The combined approach was then applied to more complex decisions to demonstrate its scalability. It was found that the LSM algorithm could value flexibility for surfactant-polymer flooding and that it introduces a new approach to highly uncertain problems. However, there are some limitations to the extendibility of the algorithm to more complex higher dimensional problems. The main limitation was observed when using a finer discretization of the decision space because it requires a significant increase in the number of stochastic realization for the results to converge, thus increasing the computational requirement significantly. 
The contributions of this thesis can be summarized as follows: an attempt to use real options theory to value flexibility in SP flooding processes; the development of an approximate dynamic programming approach to produce optimal policies; the robust quantification of parametric uncertainty for SP flooding using PCM; and an attempt to improve the efficiency of the LSM method by coupling it with the PCM code in order to extend its applicability to more complex problems.
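To make the regression-based dynamic programming at the heart of the LSM approach concrete, the following is a minimal sketch of a Longstaff-Schwartz-style valuation in Python. It assumes a synthetic geometric Brownian motion price process and a simple stopping payoff in place of the thesis's coupled reservoir simulations and decision space; all names and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions: the state variable is an oil price following
# geometric Brownian motion, and the "flexibility" is an American-style
# option to stop and collect a payoff max(K - price, 0).
n_paths, n_steps, dt = 10_000, 50, 0.1
sigma, K, r = 0.3, 80.0, 0.05
disc = np.exp(-r * dt)

# Stochastic simulation of the state variable (price paths), risk-neutral drift r.
prices = np.empty((n_paths, n_steps + 1))
prices[:, 0] = 80.0
for t in range(n_steps):
    z = rng.standard_normal(n_paths)
    prices[:, t + 1] = prices[:, t] * np.exp((r - 0.5 * sigma**2) * dt
                                             + sigma * np.sqrt(dt) * z)

def payoff(s):
    return np.maximum(K - s, 0.0)

# Backward dynamic programming: regress the continuation value on
# polynomial basis functions of the state, then compare with exercise.
value = payoff(prices[:, -1])
for t in range(n_steps - 1, 0, -1):
    itm = payoff(prices[:, t]) > 0           # regress only on in-the-money paths
    X = prices[itm, t]
    coeffs = np.polyfit(X, disc * value[itm], deg=2)
    continuation = np.polyval(coeffs, X)
    exercise = payoff(X)
    stop = exercise > continuation           # optimal policy at this decision date
    value[itm] = np.where(stop, exercise, disc * value[itm])
    value[~itm] = disc * value[~itm]

print("Estimated value of flexibility:", disc * value.mean())
```

The same backward loop carries over when the state variables are reservoir and economic quantities and the decision is a change in injection strategy, although the regression basis and the underlying simulator become far more expensive.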
APA, Harvard, Vancouver, ISO, etc. styles
47

Nobari, Amir. "Uncertainty quantification of brake squeal instability via surrogate modelling". Thesis, University of Liverpool, 2015. http://livrepository.liverpool.ac.uk/2035339/.

Full text
Abstract
Noise, Vibration and Harshness (NVH) of automotive disc brakes has been an active research topic for several decades. Environmental concerns on the one hand, and rising customer expectations of car quality on the other, have made brake NVH an important issue for car manufacturers. Of the different types of noise and vibration that a brake system may generate, squeal is the main focus of the current study. Brake squeal is an irritating high-frequency noise that causes significant warranty costs for car manufacturers. There are a number of reasons why squeal noise arises, either at the end of production or during usage and service. Of these, manufacturing variability, several sources of uncertainty (such as friction and contact) and diverse loading cases are believed to contribute most to the problem. Car manufacturers have therefore recently been encouraged to look into the uncertainty analysis of brake systems in order to cover the influence of these variations on brake designs. The biggest hurdle in the uncertainty analysis of brakes is the computational time, cost and data storage. In general, stochastic studies are built on deterministic analyses of a system. As deterministic analyses of brake squeal instability already involve a great deal of computational work, their stochastic (non-deterministic) counterparts are consequently very expensive. To overcome this issue, surrogate modelling is proposed in this study. Briefly speaking, surrogate modelling replaces an expensive simulation code with a cheap-to-evaluate mathematical predictor. As a result, instead of using the actual finite element model of a brake for statistical analyses, its replacement model is used. There are three main advantages to surrogate modelling of brakes. First, it paves the way for the structural modifications of brakes that are conventionally made to reduce squeal propensity. Secondly, structural uncertainties of a brake design can be cost-effectively propagated onto the results of the stability analysis; instead of making a single design point stable, a scatter of points must then meet the stability criteria. Finally, the reliability and robustness of a brake design can be quantified efficiently. These two measures indicate the probability of unstable vibration leading to squeal noise for a brake design. Accordingly, car manufacturers will be able to estimate the cost of warranty claims that may be filed due to this particular issue. If the probability of failure calculated for squeal propensity is significant, surrogate modelling helps provide a solution during the design stage, before cars go into production. In brief, two major steps must be taken to construct a surrogate model: making a uniform sampling plan and fitting a mathematical predictor to the observed data. Of the different sampling techniques, Latin hypercube sampling (LHS) is used in this study in order to reduce the computational workload. It is worth mentioning that the original LHS does not enforce a uniformity condition when generating samples; however, modifications can be applied to LHS to improve the uniformity of the samples. The uniformity of the samples plays a crucial role in the accuracy of a surrogate model, since a surrogate model is built on the observations made over a design space.
Depending on the nonlinearity of the outputs with respect to the input variables, and on the dimensions of the design space, different mathematical functions may be used for the surrogate predictor. The results of this study show that the Kriging function yields a very accurate surrogate model for brake squeal instability. In order to validate the accuracy of the surrogate models, a number of methods are reviewed and implemented in the current study. Finally, the validated surrogate models are used in place of the actual FE model for uncertainty quantification of squeal instability. Apart from surrogate modelling, a stochastic study is conducted on friction-induced vibration. The statistics of the complex eigenvalues of simplified brake models are studied under the influence of variability and uncertainty. For this purpose, the second-order perturbation method is extended so that it can be applied to an asymmetric system with non-proportional damping. The main advantage of this approach is that the statistics of the complex eigenvalues can be calculated in a single run, which is far more efficient than conventional uncertainty propagation techniques that require a large number of simulations.
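As a concrete illustration of the two steps named above, the following is a minimal sketch, assuming a toy two-parameter response in place of the finite element squeal analysis, of building a Kriging surrogate on a Latin hypercube sample with scipy and scikit-learn; the variable names, ranges and the stand-in simulator are illustrative assumptions, not the thesis's brake model.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Toy stand-in for the expensive FE eigenvalue analysis: the real part of an
# unstable eigenvalue as a function of two uncertain inputs (here imagined as
# a friction coefficient and a normalised contact stiffness).
def expensive_simulator(x):
    mu, k = x[:, 0], x[:, 1]
    return np.sin(8.0 * mu) * np.exp(-k) + 0.1 * mu * k

# Step 1: a (near-)uniform sampling plan via Latin hypercube sampling.
sampler = qmc.LatinHypercube(d=2, seed=0)
unit_sample = sampler.random(n=60)
X_train = qmc.scale(unit_sample, l_bounds=[0.3, 0.0], u_bounds=[0.7, 2.0])
y_train = expensive_simulator(X_train)

# Step 2: fit a Kriging (Gaussian process) predictor to the observations.
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gp.fit(X_train, y_train)

# The cheap surrogate can now be evaluated many times for uncertainty
# propagation, e.g. a Monte Carlo estimate of the probability of instability.
X_mc = qmc.scale(qmc.LatinHypercube(d=2, seed=1).random(20_000),
                 [0.3, 0.0], [0.7, 2.0])
pred_mean, pred_std = gp.predict(X_mc, return_std=True)
print("P(Re(eigenvalue) > 0) ~", np.mean(pred_mean > 0.0))
```

In practice the training inputs would be sampled from the brake's uncertain structural parameters and the outputs taken from complex eigenvalue analyses, with the surrogate validated (for example by cross-validation) before any reliability estimate is trusted.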
APA, Harvard, Vancouver, ISO, etc. styles
48

Crevillen, Garcia David. "Uncertainty quantification for flow and transport in porous media". Thesis, University of Nottingham, 2016. http://eprints.nottingham.ac.uk/31084/.

Full text
Abstract
The major spreading and trapping mechanisms of carbon dioxide in geological media are subject to spatial variability due to heterogeneity in the physical and chemical properties of the medium. For modelling to make a useful contribution to the understanding of carbon dioxide sequestration and its associated risk assessment, the impact of heterogeneity on flow, transport and reaction processes must be identified and characterised, and its consequences and uncertainties quantified. Complex computer simulation models based on systems of partial differential equations with random inputs are often used to describe the flow of groundwater through such heterogeneous media. The Monte Carlo method is a widely used and effective approach to quantifying uncertainty in such systems. This thesis investigates two alternatives to Monte Carlo for solving the equations with random inputs. The first consists of techniques developed to improve the computational performance of Monte Carlo, namely multilevel Monte Carlo, quasi-Monte Carlo and multilevel quasi-Monte Carlo. The second, Gaussian process emulation, is an approach based on Bayesian non-parametric modelling, in which statistical approximations of the simulator, called emulators, are built. Numerical calculations carried out in this thesis demonstrate the effectiveness of the proposed alternatives to the Monte Carlo method for solving two-dimensional model problems arising in groundwater flow and carbon capture and storage processes. Multilevel quasi-Monte Carlo proved to be the most efficient method, in terms of computational resources used, among Monte Carlo, multilevel Monte Carlo and quasi-Monte Carlo. Gaussian process emulation proved to be a reliable surrogate for these simulators, and it is concluded that Gaussian process emulation is a powerful tool that can be used satisfactorily when the physical processes are modelled through computationally expensive simulators.
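For readers unfamiliar with the multilevel idea, the following is a minimal sketch of the multilevel Monte Carlo telescoping estimator, with a toy "level solver" standing in for the discretised groundwater flow simulations; the model and the sample allocation across levels are illustrative assumptions, not the optimal allocation derived from variance and cost estimates.

```python
import numpy as np

rng = np.random.default_rng(42)

def level_solver(omega, level):
    """Toy stand-in for a PDE solve of a quantity of interest Q at mesh
    level `level`: an approximation whose bias decays as the level grows."""
    h = 2.0 ** (-level)                               # mesh width at this level
    return np.sin(omega) + h * np.cos(3.0 * omega)    # exact value + O(h) error

def mlmc_estimate(n_samples_per_level):
    """Multilevel Monte Carlo: telescoping sum of level corrections, using the
    same random input omega on consecutive levels so corrections have small
    variance and need few (expensive) fine-level samples."""
    estimate = 0.0
    for level, n in enumerate(n_samples_per_level):
        omega = rng.normal(size=n)        # random model input (e.g. a log-permeability mode)
        fine = level_solver(omega, level)
        if level == 0:
            correction = fine
        else:
            coarse = level_solver(omega, level - 1)
            correction = fine - coarse
        estimate += correction.mean()
    return estimate

# Many cheap coarse samples, few expensive fine ones (illustrative allocation).
print("MLMC estimate of E[Q]:", mlmc_estimate([40_000, 4_000, 400, 40]))
```

A quasi-Monte Carlo or multilevel quasi-Monte Carlo variant would replace the pseudo-random draws of omega with low-discrepancy points, and a Gaussian process emulator would replace the level solver altogether with a trained statistical approximation.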
APA, Harvard, Vancouver, ISO, etc. styles
49

Lonsdale, Jack Henry. "Predictive modelling and uncertainty quantification of UK forest growth". Thesis, University of Edinburgh, 2015. http://hdl.handle.net/1842/16202.

Full text
Abstract
Forestry in the UK is dominated by coniferous plantations. Sitka spruce (Picea sitchensis) and Scots pine (Pinus sylvestris) are the most prevalent species and are mostly grown in single-age monoculture stands. The forest strategies for Scotland, England and Wales all include efforts to achieve further afforestation, with the aim of providing a multi-functional forest with a broad range of benefits. Because of the time scales involved in forestry, accurate forecasts of stand productivity, along with clearly defined uncertainties, are essential to forest managers. These can be provided by a range of approaches to modelling forest growth. In this project, model comparison, Bayesian calibration and data assimilation methods were all used in an attempt to improve forecasts, and the understanding of their uncertainty, for the two most important conifers in UK forestry. Three different forest growth models were compared in simulating the growth of Scots pine: a yield table approach, the process-based 3PGN model, and a Stand Level Dynamic Growth (SLeDG) model. Predictions were compared graphically over the typical productivity range for Scots pine in the UK, and the strengths and weaknesses of each model were considered. All three produced similar growth trajectories. The greatest difference between the models was in volume and biomass in unthinned stands, where the yield table predicted a much larger range than the other two models. Future advances in data availability and computing power should allow greater use of process-based models, but in the interim more flexible dynamic growth models may be more useful than static yield tables for providing predictions that extend to non-standard management prescriptions and estimates of early growth and yield. A Bayesian calibration of the SLeDG model was carried out for both Sitka spruce and Scots pine in the UK for the first time. Bayesian calibration allows both model structure and parameters to be assessed simultaneously in a probabilistic framework, providing a model whose forecasts and their uncertainty can be better understood and quantified using posterior probability distributions. Two different structures for including local productivity in the model were compared with a Bayesian model comparison, and a complete calibration of the more probable model structure was then completed. Example forecasts from the calibration were compatible with existing yield tables for both species. This method could be applied to other species or other model structures in the future. Finally, data assimilation was investigated as a way of reducing forecast uncertainty. Data assimilation assumes that neither observations nor models provide a perfect description of a system, but that combining them may provide the best estimate. SLeDG model predictions and LiDAR measurements for sub-compartments within Queen Elizabeth Forest Park were combined with an Ensemble Kalman Filter. Uncertainty in all of the state variables was reduced following the second data assimilation. However, errors in stand delineation and estimated stand yield class may have increased the observational uncertainty, thus reducing the efficacy of the method for reducing overall uncertainty.
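The following is a minimal sketch of a single stochastic Ensemble Kalman Filter analysis step of the kind used to merge model forecasts with observations; the three-variable stand state, the observation operator and the error levels are illustrative assumptions, not the SLeDG/LiDAR configuration used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

def enkf_update(X_f, y_obs, H, R):
    """One stochastic EnKF analysis step.
    X_f   : (n_state, n_ens) forecast ensemble (e.g. top height, basal area, volume)
    y_obs : (n_obs,) observation vector (e.g. LiDAR-derived top height)
    H     : (n_obs, n_state) linear observation operator
    R     : (n_obs, n_obs) observation error covariance
    """
    n_ens = X_f.shape[1]
    A = X_f - X_f.mean(axis=1, keepdims=True)          # ensemble anomalies
    P = A @ A.T / (n_ens - 1)                          # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)       # Kalman gain
    # Perturbed observations keep the analysis ensemble spread statistically consistent.
    Y = y_obs[:, None] + rng.multivariate_normal(np.zeros(len(y_obs)), R, n_ens).T
    return X_f + K @ (Y - H @ X_f)

# Illustrative 3-variable stand state, 100 ensemble members, one observed variable.
X_f = rng.normal(loc=[20.0, 30.0, 250.0], scale=[2.0, 4.0, 40.0], size=(100, 3)).T
H = np.array([[1.0, 0.0, 0.0]])                        # only top height is observed
R = np.array([[0.5**2]])
X_a = enkf_update(X_f, y_obs=np.array([21.3]), H=H, R=R)
print("Prior vs posterior std of top height:", X_f[0].std(), X_a[0].std())
```

Because the gain is built from the sample covariance, the update also shifts the unobserved state variables (basal area and volume in this toy setup) through their correlation with the observed one, which is what allows a LiDAR height product to reduce uncertainty across the whole modelled stand state.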
APA, Harvard, Vancouver, ISO, etc. styles
50

Arnold, Daniel Peter. "Geological parameterisation of petroleum reservoir models for improved uncertainty quantification". Thesis, Heriot-Watt University, 2008. http://hdl.handle.net/10399/2256.

Full text
Abstract
As uncertainty can never be removed from reservoir forecasts, the accurate quantification of uncertainty is the only appropriate way to make reservoir predictions. Bayes' Theorem defines a framework by which the uncertainty in a reservoir can be ascertained by updating prior definitions of uncertainty with the mismatch between our simulation models and the measured production data. In the simplest version of the Bayesian methodology we assume that a realistic representation of the field exists as a particular combination of model parameters drawn from a set of uniform prior ranges. All models are initially believed to be equally likely, but are updated to new values of uncertainty based on the misfit between simulated and historical production data. Furthermore, most effort in reservoir uncertainty quantification and automated history matching has been applied to non-geological model parameters, preferring to leave the geological aspects of the reservoir static. While such an approach is the easiest to apply, the reality is that the majority of reservoir uncertainty stems from the geological aspects of the reservoir; geological parameters should therefore be included in the prior, and those priors should be conditioned on the full extent of geological knowledge so as to remove combinations that are not possible in nature. This thesis develops methods of geological parameterisation to capture geological features and to assess the impact of geologically derived, non-uniform prior definitions and of the choice of modelling method/interpretation on the quantification of uncertainty. A number of case studies are developed, using synthetic models and a real field data set, which show that the inclusion of geological prior data reduces the amount of quantified uncertainty and improves the performance of sampling. The framework allows the inclusion of any data type, reflecting the variety of geological information sources. Errors in the interpretation of the geology and/or the choice of an appropriate modelling method have an impact on the quantified uncertainty. In the cases developed in this thesis all models were able to produce good history matches, but the differences between the models led to differences in the amount of quantified uncertainty. The result is that each quantification would lead to different development decisions, and that a combination of several models may be required when a single modelling approach cannot be defined. The overall conclusion of this work is that geological prior data should be used in uncertainty quantification to reduce the uncertainty in forecasts by preventing bias from non-realistic models.
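As a toy illustration of the Bayesian updating described above, the following sketch weights prior samples of two hypothetical geological parameters by a likelihood derived from a history-match misfit; the stand-in simulator, the parameter choices and the Gaussian misfit model are all illustrative assumptions rather than the thesis's workflow.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical geologically informed (non-uniform) prior: channel width (m)
# and net-to-gross ratio, restricted to geologically plausible ranges.
n_prior = 20_000
width = rng.lognormal(mean=np.log(150.0), sigma=0.3, size=n_prior)
ntg = rng.beta(a=8.0, b=4.0, size=n_prior)

def toy_simulator(width, ntg):
    """Stand-in for the reservoir simulator: predicted cumulative production."""
    return 1.0e5 * ntg * np.log(width)

# Misfit against "observed" production history, assuming Gaussian observation errors.
observed, sigma_obs = 4.0e5, 2.0e4
predicted = toy_simulator(width, ntg)
misfit = ((predicted - observed) / sigma_obs) ** 2

# Bayes: posterior weight proportional to exp(-misfit/2); because the samples
# are drawn from the prior, the prior density is already accounted for.
weights = np.exp(-0.5 * misfit)
weights /= weights.sum()

post_mean_ntg = np.sum(weights * ntg)
ess = 1.0 / np.sum(weights**2)            # effective sample size of the weighted ensemble
print(f"Posterior mean net-to-gross: {post_mean_ntg:.3f}  (ESS ~ {ess:.0f})")
```

Replacing the beta/lognormal priors with uniform ranges in the same script shows the effect the abstract describes: the uniform prior admits many geologically implausible combinations that the data must then rule out, inflating the quantified uncertainty.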
APA, Harvard, Vancouver, ISO, etc. styles