
Dissertations / Theses on the topic 'Propagation of uncertainty'

Consult the top 50 dissertations / theses for your research on the topic 'Propagation of uncertainty.'

1

Chetwynd, Daley. "Uncertainty propagation in nonlinear systems." Thesis, University of Sheffield, 2005. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.425587.

2

Fiorito, Luca. "Nuclear data uncertainty propagation and uncertainty quantification in nuclear codes." Doctoral thesis, Universite Libre de Bruxelles, 2016. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/238375.

Abstract:
Uncertainties in nuclear model responses must be quantified to define safety limits, minimize costs and define operational conditions in design. Response uncertainties can also be used to provide feedback on the quality and reliability of parameter evaluations, such as nuclear data. The uncertainties of predictive model responses stem from several sources, e.g. nuclear data, model approximations, numerical solvers and the influence of random variables. It has been shown that the largest quantifiable sources of uncertainty in nuclear models, such as neutronics and burnup calculations, are the nuclear data, which are provided as evaluated best estimates and uncertainties/covariances in data libraries. Nuclear data uncertainties and/or covariances must be propagated to the model responses with dedicated uncertainty propagation tools. However, most nuclear codes for neutronics and burnup models do not have these capabilities and produce best-estimate results without uncertainties. In this work, nuclear data uncertainty propagation focused on the SCK•CEN burnup code ALEPH-2 and the Monte Carlo N-Particle code MCNP. Two sensitivity analysis procedures, FSAP and ASAP, based on linear perturbation theory were implemented in ALEPH-2. These routines can propagate nuclear data uncertainties in pure decay models. ASAP and ALEPH-2 were tested and validated against decay heat and its uncertainty for several fission pulses and for the MYRRHA subcritical system. The decay heat uncertainty is necessary to establish the reliability of the decay heat removal systems and to prevent overheating and mechanical failure of reactor components. It was shown that the propagation of independent fission yield and decay data uncertainties can also be carried out with ASAP in neutron irradiation models. Because of the limitations of ASAP, the Monte Carlo sampling solver NUDUNA was used to propagate cross section covariances.
The applicability constraints of ASAP drove our studies towards the development of a tool that could propagate the uncertainty of any nuclear datum. In addition, the uncertainty propagation tool had to operate with multiple nuclear codes and systems, including non-linear models. The Monte Carlo sampling code SANDY was therefore developed. SANDY is independent of the predictive model, as it only interacts with the nuclear data in input. Nuclear data are sampled from multivariate probability density functions and propagated through the model according to Monte Carlo sampling theory. Not only can SANDY propagate nuclear data uncertainties and covariances to the model responses, it can also identify the impact of each uncertainty contributor by decomposing the response variance. SANDY was extensively tested against integral parameters and was used to quantify the neutron multiplication factor uncertainty of the VENUS-F reactor. Further uncertainty propagation studies were carried out for the burnup models of light water reactor benchmarks. Our studies identified fission yields as the largest source of uncertainty for the nuclide density evolution curves of several fission products. However, the current data libraries provide evaluated fission yields and uncertainties devoid of covariance matrices. The lack of fission yield covariance information does not comply with the conservation equations that apply to a fission model, and generates inconsistency in the nuclear data. In this work, we generated fission yield covariance matrices using a generalised least-squares method and a set of physical constraints. The fission yield covariance matrices resolve the inconsistency in the nuclear data libraries and reduce the role of the fission yields in the uncertainty quantification of burnup model responses.
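The Monte Carlo sampling scheme described in this abstract — sample nuclear data from a multivariate distribution, push each sample through the model, and decompose the response variance by contributor — can be sketched in a few lines. This is a toy illustration with invented numbers, not SANDY itself: the two "cross sections", their covariance and the response function are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative "nuclear data": two cross sections with a best-estimate mean
# vector and covariance matrix (toy values, not an evaluated library).
mean = np.array([2.0, 0.5])
cov = np.array([[0.04, -0.01],
                [-0.01, 0.09]])

def model(sigma):
    # Stand-in for a predictive code: a smooth nonlinear response.
    return sigma[..., 0] * np.exp(-sigma[..., 1])

# Monte Carlo sampling: draw perturbed data sets from the multivariate
# distribution and propagate each one through the model.
samples = rng.multivariate_normal(mean, cov, size=20_000)
responses = model(samples)
total_var = responses.var()

# Crude decomposition of the response variance: freeze one datum at its
# best estimate and see how much of the variance disappears.
contrib = {}
for i, name in enumerate(["sigma_1", "sigma_2"]):
    frozen = samples.copy()
    frozen[:, i] = mean[i]
    contrib[name] = total_var - model(frozen).var()

print(total_var, contrib)
```

Freezing one input and watching how much response variance disappears is only a crude stand-in for a full variance decomposition, but it conveys the idea of identifying each uncertainty contributor.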
Doctorate in Engineering Sciences and Technology
3

Kubicek, Martin. "High dimensional uncertainty propagation for hypersonic flows and entry propagation." Thesis, University of Strathclyde, 2018. http://digitool.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=30780.

Abstract:
To solve complex design problems, engineers cannot avoid taking the uncertainties involved into account. This is particularly important for the analysis and design of hypersonic objects and vehicles, which have to operate in extreme conditions. In this work, two approaches for high-dimensional uncertainty quantification (UQ) are developed. The first performs single-fidelity non-intrusive forward UQ, while the second performs multi-fidelity UQ as an extension of the first. Both methods are aimed at real engineering problems and therefore include appropriate heuristics to achieve an optimal trade-off between accuracy and computational cost. In the first approach, the stochastic domain is decomposed into domains of lower dimensionality, each of which is then handled separately. This is possible through the application of high dimensional model representation (HDMR), which is derived here in a new way. This new derivation leads to important conclusions about high-dimensional modelling, which are used in the prediction scheme. A novel approach for the selection of higher-order interaction effects drastically reduces the required number of samples. To distribute samples optimally for the problem of interest, an adaptive sampling scheme is introduced. Moreover, a multi-surrogate approach is introduced to improve the robustness of the method. The single-fidelity approach is tested on a debris re-entry case and validated against the Monte Carlo simulation method. In the second approach, multiple fidelities are combined. To obtain the optimal combination of the low-fidelity models, a power ratio approach is introduced. To correct the low-fidelity model, the classical additive correction, adapted to work within the HDMR framework, is used. The multi-fidelity approach is tested on the GOCE re-entry case, where the tests demonstrate the potential of the method.
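The decomposition of the stochastic domain into lower-dimensional subdomains can be illustrated with a first-order cut-HDMR expansion: an n-dimensional function is approximated by one-dimensional cuts through an anchor point. This is a minimal sketch with a made-up two-dimensional function; the thesis's heuristics for higher-order interaction terms and adaptive sampling are not reproduced.

```python
import numpy as np

# First-order cut-HDMR: f(x) ≈ f(c) + sum_i [ f(c with x_i varied) - f(c) ].
def f(x):
    # Toy model with weak interactions, where first-order HDMR works well.
    return np.sin(x[0]) + x[1] ** 2 + 0.1 * x[0] * x[1]

cut = np.array([0.5, 0.5])   # anchor (cut) point
f0 = f(cut)

def hdmr_first_order(x):
    total = f0
    for i in range(len(x)):
        xi = cut.copy()
        xi[i] = x[i]
        total += f(xi) - f0      # one-dimensional component function
    return total

x = np.array([0.7, 0.2])
exact = f(x)
approx = hdmr_first_order(x)
print(exact, approx)
```

Because each component function only requires samples along one axis, the number of model evaluations grows linearly with dimension instead of exponentially, which is the motivation for HDMR in high-dimensional UQ.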
4

Malhotra, Sunil K. "Nonlinear uncertainty propagation in space trajectories." Diss., Pasadena, Calif.: California Institute of Technology, 1992. http://resolver.caltech.edu/CaltechETD:etd-08092007-085505.

5

Becker, William. "Uncertainty propagation through large nonlinear models." Thesis, University of Sheffield, 2011. http://etheses.whiterose.ac.uk/15000/.

Abstract:
Uncertainty analysis in computer models has seen a rise in interest in recent years as a result of the increased complexity of (and dependence on) computer models in the design process. A major problem, however, is that the computational cost of propagating uncertainty through large nonlinear models can be prohibitive using conventional methods (such as Monte Carlo methods). A powerful solution to this problem is to use an emulator: a mathematical representation of the model built from a small set of model runs at specified points in input space. Such emulators are massively cheaper to run and can be used to mimic the "true" model, with the result that uncertainty analysis and sensitivity analysis can be performed at a greatly reduced computational cost. The work here investigates the use of an emulator known as a Gaussian process (GP), which is an advanced probabilistic form of regression, hitherto relatively unknown in engineering. The GP is used to perform uncertainty and sensitivity analysis on nonlinear finite element models of a human heart valve and a novel airship design. Aside from results specific to these models, it is evident that a limitation of the GP is that non-smooth model responses cannot be accurately represented. Consequently, an extension to the GP is investigated, which uses a classification and regression tree to partition the input space, such that non-smooth responses, including bifurcations, can be modelled at boundaries. This new emulator is applied to a simple nonlinear problem, then to a bifurcating finite element model. The method is found to be successful, as well as actually reducing computational cost, although it is noted that bifurcations that are not axis-aligned cannot realistically be dealt with.
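The emulator idea is easy to sketch: fit a Gaussian process to a handful of runs of an expensive model, then run Monte Carlo on the cheap emulator instead of the model. The following is a minimal GP interpolator with a squared-exponential kernel and a toy one-dimensional "model"; it is an illustration of the technique, not the thesis code, and the lengthscale and design points are chosen arbitrarily.

```python
import numpy as np

rng = np.random.default_rng(1)

# Expensive "model" to be emulated (a stand-in for a finite element run).
def model(x):
    return np.sin(3 * x) + 0.5 * x

# Squared-exponential (RBF) kernel.
def kernel(a, b, ell=0.5, var=1.0):
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / ell) ** 2)

# Build the emulator from 8 "model runs" at design points.
X = np.linspace(0.0, 2.0, 8)
y = model(X)
K = kernel(X, X) + 1e-8 * np.eye(len(X))   # jitter for stability
alpha = np.linalg.solve(K, y)

def emulate(x_new):
    return kernel(x_new, X) @ alpha        # GP posterior mean

# Uncertainty propagation: Monte Carlo through the cheap emulator.
x_mc = rng.normal(1.0, 0.2, size=50_000).clip(0.0, 2.0)
pred = emulate(x_mc)
true = model(x_mc)
print(pred.mean(), true.mean())
```

Only 8 model runs were needed to build the emulator, after which the 50,000-sample Monte Carlo costs essentially nothing; with a real finite element model, the 50,000 direct runs would be the prohibitive part.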
6

Dixon, Elsbeth Clare. "Representing uncertainty in models." Thesis, University of Cambridge, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.279578.

7

Busby, Daniel Gilbert Michel. "Uncertainty propagation and reduction in reservoir forecasting." Thesis, University of Leicester, 2007. http://hdl.handle.net/2381/30534.

Abstract:
In this work we focus on nonparametric regression techniques based on Gaussian processes, considering both the frequentist and the Bayesian approach. A new sequential experimental design strategy, referred to as hierarchical adaptive experimental design, is proposed and tested on synthetic functions and on realistic reservoir models using a commercial oil reservoir multiphase flow simulator. Our numerical results show that the method effectively approximates the simulator's output to the required accuracy using an affordable number of simulator runs. Moreover, the number of simulations necessary to reach a given approximation accuracy is considerably reduced with respect to other existing experimental designs such as maximin Latin hypercubes, or other classical designs used in commercial software. Once an accurate emulator of the simulator output is obtained, it can also be used to calibrate the simulator model using data observed on the real physical system. This process, referred to as history matching in reservoir forecasting, is fundamental to tune input parameters and consequently reduce output uncertainty. An approach to model calibration using Bayesian inversion is proposed in the last part of this work. Here again a hierarchical emulator is adopted. An innovative sequential design is proposed with the objective of increasing the emulator accuracy around possible history matching solutions. The excellent performance obtained on a very complicated reservoir test case suggests the high potential of the method for solving complicated inverse problems. The proposed methodology is about to be commercialized in an industrial environment to assist reservoir engineers in uncertainty analysis and history matching.
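As a point of comparison for the designs mentioned above, a maximin Latin hypercube can be built by generating many random Latin hypercubes and keeping the one with the largest minimum pairwise distance. This is a simple sketch of that baseline, not the hierarchical adaptive design proposed in the thesis; the design size and candidate count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

def latin_hypercube(n, d, rng):
    # One stratified sample per row of each dimension, then an independent
    # permutation per column, so every 1/n stratum holds exactly one point.
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    for j in range(d):
        u[:, j] = rng.permutation(u[:, j])
    return u

def min_distance(x):
    # Smallest pairwise Euclidean distance in the design.
    diff = x[:, None, :] - x[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    return dist[np.triu_indices(len(x), k=1)].min()

# Maximin: best of 200 random Latin hypercubes of 10 points in 2-D.
best = max((latin_hypercube(10, 2, rng) for _ in range(200)), key=min_distance)
print(min_distance(best))
```

The maximin criterion spreads points apart while the Latin hypercube structure guarantees good one-dimensional projections, which is why such designs are common defaults for building emulators.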
8

Doty, Austin. "Nonlinear Uncertainty Quantification, Sensitivity Analysis, and Uncertainty Propagation of a Dynamic Electrical Circuit." University of Dayton / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1355456642.

9

Damianou, Andreas. "Deep Gaussian processes and variational propagation of uncertainty." Thesis, University of Sheffield, 2015. http://etheses.whiterose.ac.uk/9968/.

Abstract:
Uncertainty propagation across components of complex probabilistic models is vital for improving regularisation. Unfortunately, for many interesting models based on non-linear Gaussian processes (GPs), straightforward propagation of uncertainty is computationally and mathematically intractable. This thesis is concerned with solving this problem through developing novel variational inference approaches. From a modelling perspective, a key contribution of the thesis is the development of deep Gaussian processes (deep GPs). Deep GPs generalise several interesting GP-based models and, hence, motivate the development of uncertainty propagation techniques. In a deep GP, each layer is modelled as the output of a multivariate GP whose inputs are governed by another GP. The resulting model is no longer a GP but, instead, can learn much more complex interactions between data. In contrast to other deep models, all the uncertainty in parameters and latent variables is marginalised out, and both supervised and unsupervised learning are handled. Two important special cases of a deep GP can equivalently be seen as its building components and, historically, were developed as such. Firstly, the variational GP-LVM is concerned with propagating uncertainty in Gaussian process latent variable models. Any observed inputs (e.g. temporal) can also be used to correlate the latent space posteriors. Secondly, this thesis develops manifold relevance determination (MRD), which considers a common latent space for multiple views. An adapted variational framework allows for strong model regularisation, allowing rich latent space representations to be learned. The developed models are also equipped with algorithms that maximise the information communicated between their different stages using uncertainty propagation, to achieve improved learning when partially observed values are present. The developed methods are demonstrated in experiments with simulated and real data. The results show that the developed variational methodologies improve practical applicability by enabling automatic capacity control in the models, even when data are scarce.
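The layered construction of a deep GP can be illustrated by composing prior samples: draw a GP sample over the inputs, then use those values as the inputs of a second GP. This sketch shows only the generative idea under a squared-exponential kernel with arbitrary lengthscales; it says nothing about the variational inference the thesis actually develops.

```python
import numpy as np

rng = np.random.default_rng(4)

def gp_sample(x, ell=1.0):
    # Draw one sample from a zero-mean GP prior with an RBF kernel,
    # evaluated at the points x (jitter keeps the Cholesky stable).
    d = x[:, None] - x[None, :]
    K = np.exp(-0.5 * (d / ell) ** 2) + 1e-6 * np.eye(len(x))
    return np.linalg.cholesky(K) @ rng.standard_normal(len(x))

x = np.linspace(-3.0, 3.0, 200)
h = gp_sample(x)              # hidden layer: a GP draw over the inputs
y = gp_sample(h, ell=0.5)     # output layer: a GP draw over the hidden values
print(y.shape)
```

Because the second layer sees the warped values h rather than x, the composed function y(x) can exhibit non-stationary behaviour that no single GP with this kernel could produce, which is what makes the deep construction more expressive.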
10

Mantis, George C. "Quantification and propagation of disciplinary uncertainty via bayesian statistics." Diss., Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/12136.

11

Krivtchik, Guillaume. "Analysis of uncertainty propagation in nuclear fuel cycle scenarios." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENI050/document.

Abstract:
Nuclear scenario studies model the operation of a nuclear fleet over a given period. They enable the comparison of different options for the evolution of the reactor fleet and for the management of future fuel cycle materials, from mining to disposal, based on criteria such as installed capacity per reactor technology and mass inventories and flows, in the fuel cycle and in the waste. Uncertainties associated with nuclear data and scenario parameters (fuel, reactor and facility characteristics) propagate along the isotopic chains in depletion calculations and throughout the scenario history, which limits the precision of the results. The aim of this work is to develop, implement and use a stochastic uncertainty propagation methodology adapted to scenario studies. The chosen method is based on the development of depletion computation surrogate models, which reduce the computation time of scenario studies and whose parameters include perturbations of the depletion model, and on the fabrication of equivalence models which take cross-section perturbations into account in the computation of fresh fuel enrichment. The uncertainty propagation methodology is then applied to different scenarios of interest, considering different options of evolution for the French PWR fleet with SFR deployment.
12

Robinson, Elinirina Iréna. "Filtering and uncertainty propagation methods for model-based prognosis." Thesis, Paris, CNAM, 2018. http://www.theses.fr/2018CNAM1189/document.

Abstract:
In this manuscript, contributions to the development of methods for on-line model-based prognosis are presented. Model-based prognosis aims at predicting the time before the monitored system reaches a failure state, using a physics-based model of the degradation. This time before failure is called the remaining useful life (RUL) of the system. Model-based prognosis is divided into two main steps: (i) current degradation state estimation and (ii) future degradation state prediction to predict the RUL. The first step, which consists in estimating the current degradation state from the measurements, is performed with filtering techniques. The second step is realized with uncertainty propagation methods. The main challenge in prognosis is to take the different uncertainty sources into account in order to obtain a measure of the RUL uncertainty. These are mainly model uncertainty, measurement uncertainty and future uncertainty (loading, operating conditions, etc.). Thus, probabilistic and set-membership methods for model-based prognosis are investigated in this thesis to tackle these uncertainties.
The ability of an extended Kalman filter and a particle filter to perform RUL prognosis in the presence of model and measurement uncertainty is first studied using a nonlinear fatigue crack growth model based on the Paris law and synthetic data. Then, the particle filter combined with a detection algorithm (cumulative sum algorithm) is applied to a more realistic case study: fatigue crack growth prognosis in composite materials under variable amplitude loading. This time, model uncertainty, measurement uncertainty and future loading uncertainty are taken into account, and real data are used. Next, two set-membership model-based prognosis methods based on constraint satisfaction and an unknown input interval observer for linear discrete-time systems are presented. Finally, an extension of a reliability analysis method to model-based prognosis, namely the inverse first-order reliability method (inverse FORM), is presented. In each case study, performance evaluation metrics (accuracy, precision and timeliness) are calculated in order to compare the proposed methods.
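The first case study above — a particle filter tracking Paris-law crack growth, then propagating the particles forward to a critical crack size to predict the RUL — can be sketched as follows. All constants, noise levels and the measurement model are invented for the illustration; this is a generic bootstrap particle filter, not the thesis implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Paris' law crack growth, da/dN = C * (dK)^m with dK = dsigma * sqrt(pi*a).
# All constants are illustrative, not identified material parameters.
C, m, dsigma, a_crit = 1e-10, 3.0, 120.0, 0.05

def grow(a, cycles=100):
    # Euler integration of the crack growth law over a block of cycles.
    for _ in range(cycles):
        a = a + C * (dsigma * np.sqrt(np.pi * a)) ** m
    return a

# Synthetic crack-size measurements, one per 100-cycle block.
truth, meas = 0.01, []
for _ in range(10):
    truth = grow(truth)
    meas.append(truth + rng.normal(0.0, 5e-4))

# Bootstrap particle filter: propagate with process noise, weight by a
# Gaussian measurement likelihood, then resample.
n = 2000
particles = rng.normal(0.01, 1e-3, n).clip(1e-4)
for z in meas:
    particles = grow(particles) * rng.lognormal(0.0, 0.02, n)
    w = np.exp(-0.5 * ((z - particles) / 5e-4) ** 2) + 1e-300
    particles = particles[rng.choice(n, n, p=w / w.sum())]

# RUL prediction: propagate every particle until the critical crack size,
# counting 100-cycle blocks; the particle spread gives the RUL uncertainty.
def rul_blocks(a, block=100, max_blocks=500):
    a = a.copy()
    blocks = np.zeros(len(a), dtype=int)
    for _ in range(max_blocks):
        alive = a < a_crit
        if not alive.any():
            break
        a[alive] = grow(a[alive], block)
        blocks[alive] += 1
    return blocks * block

ruls = rul_blocks(particles[:500])
print(np.percentile(ruls, [5, 50, 95]))
```

Reporting RUL percentiles rather than a single estimate is exactly the point of the uncertainty-aware prognosis discussed in the abstract: the decision-maker sees a credible interval, not just a best guess.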
13

Kumar, Vikas. "Soft computing approaches to uncertainty propagation in environmental risk management." Doctoral thesis, Universitat Rovira i Virgili, 2008. http://hdl.handle.net/10803/8558.

Abstract:
Real-world problems, especially those that involve natural systems, are complex and composed of many nondeterministic components with non-linear coupling. In dealing with such systems, one has to face a high degree of uncertainty and tolerate imprecision. Classical system models based on numerical analysis, crisp logic or binary logic have the characteristics of precision and categoricity and are classified as hard computing approaches. In contrast, soft computing approaches such as probabilistic reasoning, fuzzy logic and artificial neural nets have the characteristics of approximation and dispositionality. Although in hard computing imprecision and uncertainty are undesirable properties, in soft computing the tolerance for imprecision and uncertainty is exploited to achieve tractability, lower cost of computation, effective communication and a high Machine Intelligence Quotient (MIQ). This thesis explores the use of different soft computing approaches to handle uncertainty in environmental risk management. The work is divided into three parts comprising five papers.
In the first part of this thesis, different uncertainty propagation methods are investigated. The first methodology is the generalized fuzzy α-cut, based on the concept of the transformation method. A case study of uncertainty analysis of pollutant transport in the subsurface is used to show the utility of this approach, which shows superiority over conventional methods of uncertainty modelling. A second method is proposed to manage uncertainty and variability together in risk models. The new hybrid approach combining probability theory and fuzzy set theory is called Fuzzy Latin Hypercube Sampling (FLHS). An important property of this method is its ability to separate randomness and imprecision to increase the quality of information. A fuzzified statistical summary of the model results gives indices of sensitivity and uncertainty that relate the effects of variability and uncertainty of input variables to model predictions. The feasibility of the method is validated by analyzing the total variance in the calculation of incremental lifetime risks due to polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/F) for residents living in the surroundings of a municipal solid waste incinerator (MSWI) in the Basque Country, Spain.
The second part of this thesis deals with the use of artificial intelligence techniques for generating environmental indices. The first paper focuses on the development of a Hazard Index (HI) using the persistence, bioaccumulation and toxicity properties of a large number of organic and inorganic pollutants. To derive this index, Self-Organizing Maps (SOM) were used, which provided a hazard ranking for each compound. Subsequently, an Integral Risk Index was developed taking into account the HI and the concentrations of all pollutants in soil samples collected in the target area. Finally, a risk map was elaborated by representing the spatial distribution of the Integral Risk Index with a Geographic Information System (GIS). The second paper improves on the first: a new approach called the Neuro-Probabilistic HI was developed by combining SOM and Monte Carlo analysis. It considers the uncertainty associated with contaminant characteristic values. This new index seems to be an adequate tool to be taken into account in risk assessment processes. In both studies, the methods were validated through their implementation in the industrial chemical/petrochemical area of Tarragona.
The third part of this thesis deals with a decision-making framework for environmental risk management. An integrated fuzzy relation analysis (IFRA) model is proposed for risk assessment involving multiple criteria. The fuzzy risk-analysis model comprehensively evaluates all risks associated with contaminated systems resulting from more than one toxic chemical. The model is an integrated view of uncertainty techniques based on multi-valued mappings, fuzzy relations and the fuzzy analytical hierarchical process. Integrating system simulation and risk analysis using the fuzzy approach made it possible to incorporate system modelling uncertainty and subjective risk criteria. This study shows that a broad integration of fuzzy system simulation and fuzzy risk analysis is possible.
In conclusion, this study has broadly demonstrated the usefulness of soft computing approaches in environmental risk analysis. The proposed methods could significantly advance the practice of risk analysis by effectively addressing critical issues in uncertainty propagation.
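The fuzzy α-cut idea from the first part can be sketched with triangular fuzzy numbers: at each α level, the input intervals are propagated through the model and the output interval is recovered from the extremes. The model below is a hypothetical monotone response, so evaluating only the interval endpoints suffices; the transformation method cited in the abstract handles the general, non-monotone case.

```python
import numpy as np
from itertools import product

def tri_cut(lo, peak, hi, alpha):
    # Interval (alpha-cut) of a triangular fuzzy number at level alpha.
    return (lo + alpha * (peak - lo), hi - alpha * (hi - peak))

def model(k, s):
    # Toy contaminant-transport-style response, monotone in both inputs.
    return k * np.exp(-s)

inputs = [(0.5, 1.0, 2.0),   # fuzzy parameter k: (low, peak, high)
          (0.1, 0.3, 0.6)]   # fuzzy parameter s

# Propagate each alpha level: evaluate all endpoint combinations and keep
# the output extremes, building the fuzzy output level by level.
results = {}
for alpha in (0.0, 0.5, 1.0):
    cuts = [tri_cut(*tri, alpha) for tri in inputs]
    values = [model(k, s) for k, s in product(*cuts)]
    results[alpha] = (min(values), max(values))

print(results)
```

The nested output intervals (widest at α = 0, collapsing to a point at α = 1) form the membership function of the fuzzy result, which is how imprecision, as opposed to randomness, is carried through the risk model.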
Los problemas del mundo real, especialmente aquellos que implican sistemas naturales, son complejos y se componen de muchos componentes indeterminados, que muestran en muchos casos una relación no lineal. Los modelos convencionales basados en técnicas analíticas que se utilizan actualmente para conocer y predecir el comportamiento de dichos sistemas pueden ser muy complicados e inflexibles cuando se quiere hacer frente a la imprecisión y la complejidad del sistema en un mundo real. El tratamiento de dichos sistemas, supone el enfrentarse a un elevado nivel de incertidumbre así como considerar la imprecisión. Los modelos clásicos basados en análisis numéricos, lógica de valores exactos o binarios, se caracterizan por su precisión y categorización y son clasificados como una aproximación al hard computing. Por el contrario, el soft computing tal como la lógica de razonamiento probabilístico, las redes neuronales artificiales, etc., tienen la característica de aproximación y disponibilidad. Aunque en la hard computing, la imprecisión y la incertidumbre son propiedades no deseadas, en el soft computing la tolerancia en la imprecisión y la incerteza se aprovechan para alcanzar tratabilidad, bajos costes de computación, una comunicación efectiva y un elevado Machine Intelligence Quotient (MIQ). La tesis propuesta intenta explorar el uso de las diferentes aproximaciones en la informática blanda para manipular la incertidumbre en la gestión del riesgo medioambiental. El trabajo se ha dividido en tres secciones que forman parte de cinco artículos.
In the first part of this thesis, different uncertainty propagation methods were investigated. The first is the generalized fuzzy α-cut, which is based on the transformation method. To demonstrate the usefulness of this approach, a case study of uncertainty analysis in soil contaminant transport was used. The approach proved superior to conventional uncertainty modelling methods. The second proposed methodology handles variability and uncertainty jointly in risk assessment models. To this end, a new hybrid approach called Fuzzy Latin Hypercube Sampling (FLHS) was developed, combining probability theory with fuzzy set theory. An important property of this approach is its ability to separate randomness from imprecision, which yields higher-quality information. Fuzzified statistical summaries of the model results generate sensitivity and uncertainty indices that relate the effects of variability and uncertainty in the model parameters to the model predictions. The feasibility of the method was demonstrated in a case study analysing the total variance in the calculation of the increase in lifetime risk of residents living near a municipal solid waste incinerator in Tarragona, Spain, due to emissions of dioxins and furans (PCDD/Fs).
The second part of the thesis used artificial intelligence techniques to generate environmental indices. In the first article, a Hazard Index was developed from the persistence, bioaccumulation and toxicity values of a large number of organic and inorganic contaminants. Self-Organizing Maps (SOM) were used for its construction, providing a hazard ranking for each compound. Next, an Integral Risk Index was built, taking into account the Hazard Index and the concentrations of each contaminant in the soil samples collected in the study area. Finally, a map of the spatial distribution of the Integral Risk Index was produced in a Geographic Information System (GIS). The second article improves on the first: a hybrid of Self-Organizing Maps and probabilistic methods was created, yielding an Integrated Risk Index. By combining SOM with Monte Carlo analysis, a new approach called the Neuro-Probabilistic Hazard Index was developed. This new index is a suitable tool for use in analysis processes. In both articles, the feasibility of the methods was validated through their application to the chemical and petrochemical industrial area of Tarragona (Catalonia, Spain).
The third part of this thesis focuses on the methodological structure of a decision-support system for environmental risk management. This study presents an integrated fuzzy risk analysis (IFRA) model for risk assessment whose outcome depends on multiple criteria. The model provides an integrated view of uncertainty techniques based on multiple-valued assessment designs, fuzzy relations and fuzzy analytic hierarchy processes. Integrating system simulation and risk analysis through fuzzy approaches makes it possible to incorporate the uncertainty arising from the model together with the uncertainty arising from the subjectivity of the criteria. The study demonstrates that a broad integration between the simulation of an uncertain system and an uncertain risk analysis is achievable.
In conclusion, this work amply demonstrates the usefulness of Soft Computing approaches in environmental risk analysis. The proposed methods could significantly advance risk analysis practice by effectively addressing the problem of uncertainty propagation.
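The hybrid FLHS idea described in this abstract — stratified sampling for the random part, α-cuts for the imprecise part — can be sketched as follows. This is a minimal illustration, not the thesis's implementation; the symmetric triangular fuzzy number and the pairing scheme are assumptions made here for concreteness.

```python
import random

def latin_hypercube(n, rng=random):
    """1-D Latin hypercube sample on [0, 1): exactly one point per stratum,
    strata visited in shuffled order."""
    return [(i + rng.random()) / n for i in rng.sample(range(n), n)]

def alpha_cut(peak, spread, alpha):
    """Interval of a symmetric triangular fuzzy number at membership level alpha."""
    half = spread * (1.0 - alpha)
    return (peak - half, peak + half)

def fuzzy_lhs(n, peak, spread, alpha, rng=random):
    """Hybrid sample: each LHS draw for the random variable is paired with the
    alpha-cut bounds of the fuzzy parameter, so randomness and imprecision
    remain separated in the output."""
    lo, hi = alpha_cut(peak, spread, alpha)
    return [(u, lo, hi) for u in latin_hypercube(n, rng)]
```

Each output triple carries a probabilistic draw plus an interval of imprecision, which is the separation of aleatory and epistemic components the abstract emphasises.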
APA, Harvard, Vancouver, ISO, and other styles
14

Liguori, Sara. "Propagation of uncertainty in hydrological predictions using probabilistic rainfall forecasts." Thesis, University of Bristol, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.544345.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Brown, J. D. "Uncertainty propagation through a numerical model of storm surge flooding." Thesis, University of Cambridge, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.597018.

Full text
Abstract:
This research focuses on the uncertainties associated with numerical modelling of extreme coastal flooding. It uses Canvey Island in the Thames estuary (UK) as a case study, where a linked storm surge and flood inundation model is developed and applied within an uncertainty framework. The case study is used to illustrate and evaluate the propagation of uncertainties from model inputs to model outputs as a function of the ‘boundary conditions’ encountered (input data, geography of the case study). The thesis is separated into six chapters, of which two provide an introduction to the research (Chapters 1 and 2), two focus on model development (Chapters 3 and 4) and two address model application and evaluation (Chapters 5 and 6). Chapter 1 provides a theoretical context for the research and outlines the aims and objectives of the thesis. The research adopts a ‘critical realist’ perspective on the value of models as tools for probing understanding about uncertain coastal flood hazards. Chapter 2 provides a description of the case study site and discusses its illustrative value from a flood risk perspective. Chapter 3 describes the development of a storm surge model for predicting water levels at Canvey Island in response to meteorological and tidal forcing of the North Sea. The storm surge model is based on four nested models, which provide a consistent increase in spatial resolution towards the case study site. The predicted water levels are used to drive a flood inundation model of Canvey Island, which is described in Chapter 4. The flood model is based on a 2-D, shock capturing, numerical scheme, and resolves the terrain and buildings of the study area with high-resolution topographic data. Chapter 5 describes the application of these models within an uncertainty framework where model sensitivities and uncertainties are evaluated for a range of storm forcing conditions over the North Sea and sea defence failures at Canvey Island.
APA, Harvard, Vancouver, ISO, and other styles
16

Girard, Agathe. "Approximate methods for propagation of uncertainty with Gaussian process models." Thesis, University of Glasgow, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.407783.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

SILVA, GUTEMBERG BRUNO DA. "COLORIMETRY: PROPAGATION OF ERRORS AND UNCERTAINTY CALCULATIONS IN SPECTROPHOTOMETRIC MEASUREMENTS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2004. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=5012@1.

Full text
Abstract:
FINANCIADORA DE ESTUDOS E PROJETOS
MINISTÉRIO DA CIÊNCIA E TECNOLOGIA
Colorimetria - Propagação de erros e cálculo da incerteza da medição nos resultados espectrofotométricos trata da medição da cor de objetos, baseada nas medições de irradiância espectral (objetos luminosos) ou de refletância ou transmitância espectral (objetos opacos ou transparentes), seguidas por cálculos colorimétricos conforme o sistema CIE. As medições são normalmente feitas em intervalos de 5nm (ou 10 nm) na faixa espectral de 360 a 780nm, e os três valores triestímulos (X, Y e Z) são calculados usando-se 42-84 pontos medidos por equações padrões. A distribuição dos valores medidos R(lambda) é, provavelmente, normal, com uma correlação entre os valores obtidos variável em posições diferentes do espectro. As distribuições dos valores e as correlações entre X, Y e Z são desconhecidas e dependem da forma da curva espectral da cor e do funcionamento dos instrumentos de medição. No controle instrumental das cores são usadas fórmulas muito complexas, baseadas nas transformações não lineares dos valores X, Y e Z em L*, a*, b*, C* e h°. A determinação da incerteza dos resultados dados em coordenadas CIELAB ou expressos em fórmulas de diferenças (delta)E*, (delta) ECMC ou CIE (delta) E2000 é fundamental no controle instrumental das cores em qualquer indústria. À base de um número elevado de medições repetidas de várias amostras têxteis e padrões cerâmicos, são analisadas a distribuição e outras características estatísticas dos valores R(lambda) diretamente medidos, e - usando o método de propagação de erros - são calculadas as incertezas das medições em termos colorimétricos. 
A pesquisa de mestrado objeto do presente trabalho desenvolve- se sob a égide de um convênio de cooperação que o Programa de Pós-Graduação em Metrologia da PUC-Rio está celebrando com o SENAI/CETIQT, viabilizado a inclusão dessa pesquisa dentre os dez projetos-piloto que participaram do Convênio FINEP/MCT número 22.01.0692.00, Referência 1974/01, que aportou recursos do Fundo Setorial Verde Amarelo para direcionar o esforço de pesquisa em metrologia para a solução de um problema de interesse do setor têxtil que fez uso de conhecimentos avançados de metrologia da cor. Relacionado à demanda de medições espectrofotométricas com elevado controle metrológico, o desenvolvimento e a orientação acadêmico-científica da presente dissertação de mestrado deu-se nas instalações do SENAI/CETIQT, que possui comprovada competência técnica e científica na área e uma adequada infra-estrutura laboratorial em metrologia da cor de suporte ao trabalho.
Colorimetry - Propagation of Errors and Uncertainty Calculations in Spectrophotometric Measurements treats the measurement of the colour of objects, based on the measurement of spectral irradiance (self-luminous objects) or that of spectral reflectance or transmittance (opaque or transparent objects), followed by colorimetric calculations according to the CIE system. Measurements are generally made in 5nm (or 10 nm) intervals in the spectral range of 360 to 780nm, and the 3 tristimulus values (X, Y and Z) are calculated from the 42-84 measurement points by standard equations. The statistical distribution of the measured R (lambda) values is probably normal; the correlation between the values varies depending on their position in the spectrum. The distribution of and the correlation between the X, Y and Z values are not known and they depend on the form of the spectral curve of each colour and on the operation of the measuring instrument. Complex formulae are used in the instrumental control of colours based on non-linear transformations of the X, Y and Z values into L*a*b*C*h°. The determination of the uncertainty of the results given in CIELAB coordinates or expressed in one of the colour difference formulae (delta)E*, (delta)ECMC or CIE(delta) E2000 is fundamental in the instrumental control of colours in any industry. Based on a large number of repeated measurements of different textile samples and ceramic standards, the distribution and other statistical characteristics of the directly measured R(lambda) values are analysed and - using the propagation of errors method - the uncertainties are calculated in colorimetric terms. The present research, a M. Sc. Dissertation work, was developed under the auspices of a co-operation agreement celebrated between the Post-graduate Programme in Metrology of PUC-Rio and SENAI/CETIQT, allowing for the inclusion of this M.Sc. 
Dissertation among the ten pilot projects which benefited from the financial support received from the FINEP/MCT Agreement number 22.01.0692.00, Reference 1974/01 (Fundo Verde-Amarelo). The project aims at driving the research effort in metrology towards the solution of industrial problems, in this case a problem identified within the textile sector whose solution requires advanced knowledge of colour metrology. Given the demand for spectrophotometric measurements under the highest level of metrological control, the development and academic-scientific supervision of this M.Sc. Dissertation took place at the laboratory facility of SENAI/CETIQT, an institution with proven technical and scientific competence in the field and sophisticated, well-equipped colour metrology laboratories meeting the measurement requirements needed to support this research.
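The propagation-of-errors step described in this abstract can be illustrated with a first-order sketch: each tristimulus value is a weighted sum of the measured reflectances, so independent reflectance variances propagate through the squared weights. The three-wavelength tables in the test are illustrative stand-ins, not the real 5 nm CIE tables.

```python
def tristimulus_with_variance(reflectance, illuminant, cmfs, var_r):
    """Tristimulus values as weighted sums over wavelengths, with k normalised
    so a perfect reflector gives Y = 100, plus the first-order variance under
    the simplifying assumption of independent reflectance errors:
        T = k * sum(S * cbar * R),   Var(T) = k^2 * sum((S * cbar)^2 * Var(R))
    cmfs maps 'xbar'/'ybar'/'zbar' to colour-matching-function samples."""
    k = 100.0 / sum(s * y for s, y in zip(illuminant, cmfs["ybar"]))
    result = {}
    for name, cbar in cmfs.items():
        w = [s * c for s, c in zip(illuminant, cbar)]
        value = k * sum(wi * r for wi, r in zip(w, reflectance))
        variance = k * k * sum(wi * wi * v for wi, v in zip(w, var_r))
        result[name] = (value, variance)
    return result
```

The thesis goes further by measuring the correlation between reflectance values at different wavelengths; the independence assumption here is only a starting point.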
APA, Harvard, Vancouver, ISO, and other styles
18

Zhang, Y. "Uncertainty modeling, propagation, and quantification techniques with applications in engineering dynamics." Thesis, University of Liverpool, 2017. http://livrepository.liverpool.ac.uk/3008063/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Blatman, Géraud. "Adaptive sparse polynomial chaos expansions for uncertainty propagation and sensitivity analysis." Clermont-Ferrand 2, 2009. https://tel.archives-ouvertes.fr/tel-00440197.

Full text
Abstract:
This thesis falls within the general context of uncertainty propagation and sensitivity analysis of numerical simulation models, with industrial applications in view. Its objective is to carry out such studies while minimising the number of potentially costly model evaluations. The work relies on approximating the model response with a polynomial chaos (PC) expansion, which allows post-processing at negligible computational cost. However, fitting the PC expansion may require a substantial number of model calls when the model depends on a large number of parameters (e.g. more than 10). To circumvent this problem, two algorithms are proposed to select only a small number of important terms in the PC representation, namely a stepwise regression procedure and a procedure based on Least Angle Regression (LAR). The small number of coefficients of the resulting sparse PC expansions can thus be determined from a reduced number of model evaluations. The methods are validated on academic test cases in mechanics and then applied to the industrial case of the integrity analysis of a pressurised water reactor vessel. The results confirm the effectiveness of the proposed methods for high-dimensional uncertainty propagation and sensitivity analysis problems
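The idea of retaining only important chaos terms can be sketched in one dimension. Note this is a rough illustration: the thesis selects terms by stepwise regression and LAR, whereas the sketch below uses plain Monte Carlo projection with a coefficient threshold.

```python
import math
import random

def hermite(n, x):
    """Probabilists' Hermite polynomial He_n(x) via the three-term recurrence
    He_{k+1} = x * He_k - k * He_{k-1}."""
    h_prev, h = 1.0, x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, x * h - k * h_prev
    return h

def sparse_pce(f, degree, n_samples=20000, tol=0.1, seed=0):
    """Monte Carlo projection of f(X), X ~ N(0,1), onto the Hermite basis:
    c_k = E[f(X) * He_k(X)] / k!. Coefficients below tol are discarded,
    giving a sparse chaos representation."""
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 1.0) for _ in range(n_samples)]
    coeffs = {}
    for k in range(degree + 1):
        c = sum(f(x) * hermite(k, x) for x in xs) / (n_samples * math.factorial(k))
        if abs(c) > tol:
            coeffs[k] = c
    return coeffs
```

For f(x) = x², the exact expansion is He_0 + He_2, so the sparse fit should keep two coefficients near 1.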
APA, Harvard, Vancouver, ISO, and other styles
20

Braun, Mathias. "Reduced Order Modelling and Uncertainty Propagation Applied to Water Distribution Networks." Thesis, Bordeaux, 2019. http://www.theses.fr/2019BORD0050/document.

Full text
Abstract:
Les réseaux de distribution d’eau consistent en de grandes infrastructures réparties dans l’espace qui assurent la distribution d’eau potable en quantité et en qualité suffisantes. Les modèles mathématiques de ces systèmes sont caractérisés par un grand nombre de variables d’état et de paramètres dont la plupart sont incertains. Les temps de calcul peuvent s’avérer conséquents pour les réseaux de taille importante et la propagation d’incertitude par des méthodes de Monte Carlo. Par conséquent, les deux principaux objectifs de cette thèse sont l’étude des techniques de modélisation à ordre réduit par projection ainsi que la propagation spectrale des incertitudes des paramètres. La thèse donne tout d’abord un aperçu des méthodes mathématiques utilisées. Ensuite, les équations permanentes des réseaux hydrauliques sont présentées et une nouvelle méthode de calcul des sensibilités est dérivée sur la base de la méthode adjointe. Les objectifs spécifiques du développement de modèles d’ordre réduit sont l’application de méthodes basées sur la projection, le développement de stratégies d’échantillonnage adaptatives plus efficaces et l’utilisation de méthodes d’hyper-réduction pour l’évaluation rapide des termes résiduels non linéaires. Pour la propagation des incertitudes, des méthodes spectrales sont introduites dans le modèle hydraulique et un modèle hydraulique intrusif est formulé. Dans le but d’une analyse plus efficace des incertitudes des paramètres, la propagation spectrale est ensuite évaluée sur la base du modèle réduit. Les résultats montrent que les modèles d’ordre réduit basés sur des projections offrent un avantage considérable par rapport à l’effort de calcul. Bien que l’utilisation de l’échantillonnage adaptatif permette une utilisation plus efficace des états système pré-calculés, l’utilisation de méthodes d’hyper-réduction n’a pas permis d’améliorer la charge de calcul. 
La propagation des incertitudes des paramètres sur la base des méthodes spectrales est comparable aux simulations de Monte Carlo en termes de précision, tout en réduisant considérablement l’effort de calcul
Water distribution systems are large, spatially distributed infrastructures that ensure the distribution of potable water of sufficient quantity and quality. Mathematical models of these systems are characterized by a large number of state variables and parameter. Two major challenges are given by the time constraints for the solution and the uncertain character of the model parameters. The main objectives of this thesis are thus the investigation of projection based reduced order modelling techniques for the time efficient solution of the hydraulic system as well as the spectral propagation of parameter uncertainties for the improved quantification of uncertainties. The thesis gives an overview of the mathematical methods that are being used. This is followed by the definition and discussion of the hydraulic network model, for which a new method for the derivation of the sensitivities is presented based on the adjoint method. The specific objectives for the development of reduced order models are the application of projection based methods, the development of more efficient adaptive sampling strategies and the use of hyper-reduction methods for the fast evaluation of non-linear residual terms. For the propagation of uncertainties spectral methods are introduced to the hydraulic model and an intrusive hydraulic model is formulated. With the objective of a more efficient analysis of the parameter uncertainties, the spectral propagation is then evaluated on the basis of the reduced model. The results show that projection based reduced order models give a considerable benefit with respect to the computational effort. While the use of adaptive sampling resulted in a more efficient use of pre-calculated system states, the use of hyper-reduction methods could not improve the computational burden and has to be explored further. 
The propagation of the parameter uncertainties on the basis of the spectral methods is shown to be comparable to Monte Carlo simulations in accuracy, while significantly reducing the computational effort
APA, Harvard, Vancouver, ISO, and other styles
21

Cherry, Matthew Ryan. "Rapidly Solving Physics-Based Models for Uncertainty Propagation in Nondestructive Evaluation." Wright State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=wright151317135171711.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Hultgren, Ante. "Uncertainty Propagation Analysis for Low Power Transients at the Oskarshamn 3 BWR." Thesis, KTH, Fysik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-147358.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Muthusamy, Manoranjan. "Accounting for rainfall variability in sediment wash-off modelling using uncertainty propagation." Thesis, University of Sheffield, 2018. http://etheses.whiterose.ac.uk/20905/.

Full text
Abstract:
Urban surface sediment is a major source of pollution as it acts as a transport medium for many contaminants. Accurate modelling of sediment wash-off from urban surfaces requires an understanding of the effect of variability in external drivers such as rainfall on the wash-off process. This study investigates the uncertainty in sediment wash-off predictions created by the urban-scale variability of rainfall. Firstly, a rigorous geostatistical method was developed that quantifies uncertainty due to the spatial variability of rainfall at an urban scale. The new method was applied to a unique high-resolution rainfall dataset collected with multiple paired gauges for a study designed to quantify rainfall uncertainty. Secondly, the correlation between calibration parameters and external drivers (rainfall intensity, surface slope and initial load) was established for a widely used exponential wash-off model using data obtained from new detailed laboratory experiments. Based on this, a new wash-off model was derived in which the calibration parameters are replaced with functions of these external drivers. Finally, this new wash-off model was used to investigate the propagation of rainfall uncertainty in wash-off predictions. This work produced for the first time quantitative predictions of the variation in wash-off load that can be linked to the rainfall variability observed at an urban scale. The results show that (1) the assumption of constant spatial rainfall variability across rainfall intensity ranges is invalid for small spatial and temporal scales, (2) wash-off load is sensitive to initial loads and using a constant initial load in wash-off modelling is not valid, (3) the level of uncertainty in predicted wash-off load due to rainfall uncertainty depends on the rainfall intensity range and the “first-flush” effect. The maximum uncertainty in the prediction of peak wash-off load due to rainfall uncertainty within an 8-ha catchment was found to be ~15%.
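The widely used exponential wash-off model referred to in this abstract has the familiar form below. The parameter values shown are illustrative defaults, not the fitted functions of intensity, slope and initial load derived in the thesis.

```python
import math

def washoff_load(t, intensity, initial_load, c=0.19, n=1.1):
    """Exponential wash-off: load removed (same units as initial_load) after
    t hours of rain at the given intensity (mm/h):
        W(t) = W0 * (1 - exp(-c * i**n * t))
    c and n are the calibration parameters that the thesis replaces with
    functions of rainfall intensity, surface slope and initial load."""
    return initial_load * (1.0 - math.exp(-c * intensity ** n * t))
```

The model rises monotonically from zero towards the initial load, which is why a mis-specified initial load directly biases the predicted wash-off, as finding (2) above notes.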
APA, Harvard, Vancouver, ISO, and other styles
24

Vidal, Codina Ferran. "A reduced-basis method for input-output uncertainty propagation in stochastic PDEs." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/82417.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 123-132).
Recently there has been a growing interest in quantifying the effects of random inputs in the solution of partial differential equations that arise in a number of areas, including fluid mechanics, elasticity, and wave theory to describe phenomena such as turbulence, random vibrations, flow through porous media, and wave propagation through random media. Monte-Carlo based sampling methods, generalized polynomial chaos and stochastic collocation methods are some of the popular approaches that have been used in the analysis of such problems. This work proposes a non-intrusive reduced-basis method for the rapid and reliable evaluation of the statistics of linear functionals of stochastic PDEs. Our approach is based on constructing a reduced-basis model for the quantity of interest that enables to solve the full problem very efficiently. In particular, we apply a reduced-basis technique to the Hybridizable Discontinuous Galerkin (HDG) approximation of the underlying PDE, which allows for a rapid and accurate evaluation of the input-output relationship represented by a functional of the solution of the PDE. The method has been devised for problems where an affine parametrization of the PDE in terms of the uncertain input parameters may be obtained. This particular structure enables us to seek an offline-online computational strategy to economize the output evaluation. Indeed, the offline stage (performed once) is computationally intensive since its computational complexity depends on the dimension of the underlying high-order discontinuous finite element space. The online stage (performed many times) provides rapid output evaluation with a computational cost which is several orders of magnitude smaller than the computational cost of the HDG approximation. In addition, we incorporate two ingredients to the reduced-basis method. 
First, we employ the greedy algorithm to drive the sampling in the parameter space, by computing inexpensive bounds of the error in the output on the online stage. These error bounds allow us to detect which samples contribute most to the error, thereby enriching the reduced basis with high-quality basis functions. Furthermore, we develop the reduced basis for not only the primal problem, but also the adjoint problem. This allows us to compute an improved reduced basis output that is crucial in reducing the number of basis functions needed to achieve a prescribed error tolerance. Once the reduced bases have been constructed, we employ Monte-Carlo based sampling methods to perform the uncertainty propagation. The main achievement is that the forward evaluations needed for each Monte-Carlo sample are inexpensive, and therefore statistics of the output can be computed very efficiently. This combined technique renders an uncertainty propagation method that requires a small number of full forward model evaluations and thus greatly reduces the computational burden. We apply our approach to study the heat conduction of the thermal fin under uncertainty from the diffusivity coefficient and the wave propagation generated by a Gaussian source under uncertainty from the propagation medium. We shall also compare our approach to stochastic collocation methods and Monte-Carlo methods to assess the reliability of the computations.
by Ferran Vidal-Codina.
S.M.
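The greedy sampling strategy described in this abstract — pick the parameter whose cheap error bound is largest, enrich the basis, repeat — can be sketched generically. The error indicator below is a placeholder argument, not the HDG-based bound of the thesis.

```python
def greedy_select(candidates, error_indicator, n_basis, tol=1e-6):
    """Greedy reduced-basis sampling: repeatedly pick the candidate parameter
    whose (inexpensive) error indicator is largest, add it to the basis, and
    stop when the worst-case indicator falls below tol or the basis is full."""
    selected = []
    for _ in range(n_basis):
        errs = {mu: error_indicator(mu, selected)
                for mu in candidates if mu not in selected}
        if not errs:
            break
        worst = max(errs, key=errs.get)
        if errs[worst] < tol:
            break
        selected.append(worst)
    return selected
```

With a distance-to-basis indicator, the sketch reproduces the expected behaviour: it spreads samples to wherever the current basis is weakest.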
APA, Harvard, Vancouver, ISO, and other styles
25

Kundu, Abhishek. "Efficient uncertainty propagation schemes for dynamical systems with stochastic finite element analysis." Thesis, Swansea University, 2014. https://cronfa.swan.ac.uk/Record/cronfa42292.

Full text
Abstract:
Efficient uncertainty propagation schemes for dynamical systems are investigated here within the framework of stochastic finite element analysis. Uncertainty in the mathematical models arises from the incomplete knowledge or inherent variability of the various parametric and geometric properties of the physical system. These input uncertainties necessitate the use of stochastic mathematical models to accurately capture their behavior. The resolution of such stochastic models is computationally quite expensive. This work is concerned with development of model order reduction techniques for obtaining the dynamical response statistics of stochastic finite element systems. Efficient numerical methods have been proposed to propagate the input uncertainty of dynamical systems to the response variables. Response statistics of randomly parametrized structural dynamic systems have been investigated with a reduced spectral function approach. The frequency domain response and the transient evolution of the response of randomly parametrized structural dynamic systems have been studied with this approach. An efficient discrete representation of the input random field in a finite dimensional stochastic space is proposed here which has been integrated into the generic framework of the stochastic finite element weak formulation. This framework has been utilized to study the problem of random perturbation of the boundary surface of physical domains. Truncated reduced order representation of the complex mathematical quantities which are associated with the stochastic isoparametric mapping of the random domain to a deterministic master domain within the stochastic Galerkin framework have been provided. Lastly, an a-priori model reduction scheme for the resolution of the response statistics of stochastic dynamical systems has also been studied here which is based on the concept of balanced truncation. 
The performance and numerical accuracy of the methods proposed in this work have been exemplified with numerical simulations of stochastic dynamical systems and the convergence behavior of various error indicators.
APA, Harvard, Vancouver, ISO, and other styles
26

Hills, Esther. "Uncertainty propagation in structural dynamics with special reference to component modal models." Thesis, University of Southampton, 2006. https://eprints.soton.ac.uk/65678/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Ricciardi, Denielle E. "Uncertainty Quantification and Propagation in Materials Modeling Using a Bayesian Inferential Framework." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1587473424147276.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Wyant, Timothy Joseph. "Numerical study of error propagation in Monte Carlo depletion simulations." Thesis, Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44809.

Full text
Abstract:
Improving computer technology and the desire to more accurately model the heterogeneity of the nuclear reactor environment have made the use of Monte Carlo depletion codes more attractive in recent years, and feasible (if not practical) even for 3-D depletion simulation. However, in this case statistical uncertainty is combined with error propagating through the calculation from previous steps. In an effort to understand this error propagation, four test problems were developed to test error propagation in the fuel assembly and core domains. Three test cases modeled and tracked individual fuel pins in four 17x17 PWR fuel assemblies. A fourth problem modeled a well-characterized 330MWe nuclear reactor core. By changing the code's initial random number seed, the data produced by a series of 19 replica runs of each test case was used to investigate the true and apparent variance in k-eff, pin powers, and number densities of several isotopes. While this study does not intend to develop a predictive model for error propagation, it is hoped that its results can help to identify some common regularities in the behavior of uncertainty in several key parameters.
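The replica-run analysis described above amounts to re-running the depletion calculation with different initial seeds and comparing the spread across replicas with the single-run statistical estimate. A minimal sketch of the cross-replica ("true") variance estimate:

```python
import statistics

def replica_variance(replicas):
    """Sample variance across replica runs at each depletion step. Each
    replica is a list of values (e.g. k-eff per step) from one run that
    differs from the others only in its initial random number seed."""
    n_steps = len(replicas[0])
    return [statistics.variance([run[s] for run in replicas])
            for s in range(n_steps)]
```

Comparing this cross-replica variance with the apparent single-run uncertainty at each step is what reveals the error propagated from earlier depletion steps.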
APA, Harvard, Vancouver, ISO, and other styles
29

Yarkinoglu, Gucuk Oya. "Modelling And Analyzing The Uncertainty Propagation In Vector-based Network Structures In Gis." Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/2/12608845/index.pdf.

Full text
Abstract:
Uncertainty is a quantitative attribute that represents the difference between reality and the representation of reality. Uncertainty analysis and error propagation modeling reveal how input error propagates to the output. The main objective of this thesis is to model uncertainty and its propagation for dependent line segments, considering positional correlation. The model is implemented as a plug-in, called the Propagated Band Model (PBM) Plug-in, to a commercial desktop application, GeoKIT Explorer. Implementation of the model is divided into two parts. In the first, the model is applied to each line segment of the selected network separately. In the second, the error in each segment is transmitted through the line segments from the start node to the end node of the network. Outcomes are then compared with the results of the G-Band model, which is the latest uncertainty model for vector features. To comment on similarities and differences of the outcomes, the implementation is handled for two different cases. In the first case, users digitize the selected road network. In the second case, recently developed software called Interactive Drawer (ID) is used to allow the user to define a new network and simulate it with the Monte Carlo simulation method. The PBM Plug-in is designed to accept the outputs of these implementation cases as input, as well as to generate and visualize the uncertainty bands of the given line network. The developed implementations and functionality express the importance and effectiveness of uncertainty handling in vector-based geometric features, especially for line segments that constitute a network.
APA, Harvard, Vancouver, ISO, and other styles
30

Backhaus, Thomas [Verfasser]. "Uncertainty Propagation of Real Geometry Effects on Jet Engine Compressor Blisks / Thomas Backhaus." Düren : Shaker, 2020. http://d-nb.info/1213472989/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Li, Xiaoshuo [Verfasser]. "Entwicklung der Softwareplattform RESUS : repository simulation, uncertainty propagation and sensitivity analysis / Xiaoshuo Li." Clausthal-Zellerfeld : Universitätsbibliothek Clausthal, 2015. http://d-nb.info/1078230919/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Tamssaouet, Ferhat. "Towards system-level prognostics : modeling, uncertainty propagation and system remaining useful life prediction." Thesis, Toulouse, INPT, 2020. http://www.theses.fr/2020INPT0079.

Full text
Abstract:
Le pronostic est le processus de prédiction de la durée de vie résiduelle utile (RUL) des composants, sous-systèmes ou systèmes. Cependant, jusqu'à présent, le pronostic a souvent été abordé au niveau composant sans tenir compte des interactions entre les composants et l'impact de l'environnement, ce qui peut conduire à une mauvaise prédiction du temps de défaillance dans des systèmes complexes. Dans ce travail, une approche de pronostic au niveau du système est proposée. Cette approche est basée sur un nouveau cadre de modélisation : le modèle d'inopérabilité entrée-sortie (IIM), qui permet de prendre en compte les interactions entre les composants et les effets du profil de mission et peut être appliqué pour des systèmes hétérogènes. Ensuite, une nouvelle méthodologie en ligne pour l'estimation des paramètres (basée sur l'algorithme de la descente du gradient) et la prédiction du RUL au niveau système (SRUL) en utilisant les filtres particulaires (PF), a été proposée. En détail, l'état de santé des composants du système est estimé et prédit d'une manière probabiliste en utilisant les PF. En cas de divergence consécutive entre les estimations a priori et a posteriori de l'état de santé du système, la méthode d'estimation proposée est utilisée pour corriger et adapter les paramètres de l'IIM. Finalement, la méthodologie développée, a été appliquée sur un système industriel réaliste : le Tennessee Eastman Process, et a permis une prédiction du SRUL dans un temps de calcul raisonnable
Prognostics is the process of predicting the remaining useful life (RUL) of components, subsystems, or systems. However, until now, prognostics has often been approached at the component level, without considering interactions between components and effects of the environment, leading to mispredictions of failure times in complex systems. In this work, a system-level prognostics approach is proposed. This approach is based on a new modeling framework: the inoperability input-output model (IIM), which allows tackling the issues related to interactions between components and mission profile effects, and can be applied to heterogeneous systems. Then, a new methodology for online joint system RUL (SRUL) prediction and model parameter estimation is developed based on particle filtering (PF) and gradient descent (GD). In detail, the health state of the system components is estimated and predicted in a probabilistic manner using PF. In the case of consecutive discrepancies between the prior and posterior estimates of the system health state, the proposed estimation method is used to correct and adapt the IIM parameters. Finally, the developed methodology is verified on a realistic industrial system: the Tennessee Eastman Process. The obtained results highlight its effectiveness in predicting the SRUL in reasonable computing time
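A single bootstrap particle-filter update of the kind used for the probabilistic health-state estimation can be sketched as follows. The transition and likelihood functions are placeholders supplied by the caller, not the IIM dynamics of the thesis.

```python
def particle_filter_step(particles, transition, likelihood, obs, rng):
    """One bootstrap particle-filter update: propagate each health-state
    particle through the degradation model, weight it by the observation
    likelihood, then resample back to equal weights."""
    particles = [transition(p, rng) for p in particles]
    weights = [likelihood(obs, p) for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    return rng.choices(particles, weights=weights, k=len(particles))
```

Iterating this step, and propagating without new observations beyond the present, yields the probabilistic RUL prediction; particles inconsistent with the data are eliminated at each resampling.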
APA, Harvard, Vancouver, ISO, and other styles
33

Bruns, Morgan Chase. "Propagation of Imprecise Probabilities through Black Box Models." Thesis, Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/10553.

Full text
Abstract:
From the decision-based design perspective, decision making is the critical element of the design process. All practical decision making occurs under some degree of uncertainty. Subjective expected utility theory is a well-established method for decision making under uncertainty; however, it assumes that the decision maker (DM) can express his or her beliefs as precise probability distributions. For many reasons, both practical and theoretical, it can be beneficial to relax this assumption of precision. One possible means of avoiding this assumption is the use of imprecise probabilities. Imprecise probabilities are more expressive of uncertainty than precise probabilities, but they are also more computationally cumbersome. Probability Bounds Analysis (PBA) is a compromise between the expressivity of imprecise probabilities and the computational ease of modeling beliefs with precise probabilities. In order for PBA to be implemented in engineering design, it is necessary to develop appropriate computational methods for propagating probability boxes (p-boxes) through black box engineering models. This thesis examines the range of applicability of current methods for p-box propagation and proposes three alternative methods. These methods are applied to three numerical examples of increasing complexity.
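The core idea of propagating a p-box through a black box model can be sketched as follows. This is an illustrative example only: it assumes a monotone-increasing model and a p-box represented by the quantile functions of its two bounding CDFs; `black_box` and all numeric values are hypothetical, not taken from the thesis.

```python
import numpy as np

def black_box(x):
    # Hypothetical engineering model, monotone increasing for x > 0
    return 3.0 * x**2 + 2.0

def propagate_pbox(lower_ppf, upper_ppf, model, n=2001):
    # Evaluate the model on matching quantiles of the two bounding CDFs.
    # For a monotone model, the output p-box at each probability level is
    # the envelope (min/max) of the two propagated quantile curves.
    p = np.linspace(0.001, 0.999, n)
    a = model(lower_ppf(p))
    b = model(upper_ppf(p))
    return np.minimum(a, b), np.maximum(a, b)

# Input p-box: a uniform variable whose support is only known to lie
# between [1.0, 1.2] and [1.1, 1.3] (quantile functions of the bounds)
lo_band, hi_band = propagate_pbox(lambda p: 1.0 + 0.2 * p,
                                  lambda p: 1.1 + 0.2 * p,
                                  black_box)
```

For non-monotone models this quantile-matching shortcut is no longer valid, which is precisely why dedicated propagation methods (such as those the thesis proposes) are needed.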
APA, Harvard, Vancouver, ISO, and other styles
34

Wolting, Duane. "MULTIVARIATE SYSTEMS ANALYSIS." International Foundation for Telemetering, 1985. http://hdl.handle.net/10150/615760.

Full text
Abstract:
International Telemetering Conference Proceedings / October 28-31, 1985 / Riviera Hotel, Las Vegas, Nevada
In many engineering applications, a systems analysis is performed to study the effects of random error propagation throughout a system. Often these errors are not independent, and have joint behavior characterized by arbitrary covariance structure. The multivariate nature of such problems is compounded in complex systems, where overall system performance is described by a q-dimensional random vector. To address this problem, a computer program was developed which generates Taylor series approximations for multivariate system performance in the presence of random component variability. A summary of an application of this approach is given in which an analysis was performed to assess simultaneous design margins and to ensure optimal component selection.
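The first-order Taylor propagation of correlated component errors that such a program implements can be sketched generically as the "sandwich rule" cov_y = J cov_x J^T, with the Jacobian J estimated by finite differences. The example model and covariance values below are assumptions for illustration.

```python
import numpy as np

def propagate_covariance(model, x0, cov_x, eps=1e-6):
    # First-order Taylor propagation of an input covariance matrix through
    # a vector-valued model: cov_y ~ J cov_x J^T, with J estimated by
    # central finite differences around the nominal point x0.
    x0 = np.asarray(x0, dtype=float)
    y0 = np.asarray(model(x0), dtype=float)
    J = np.zeros((y0.size, x0.size))
    for i in range(x0.size):
        dx = np.zeros_like(x0)
        dx[i] = eps
        J[:, i] = (np.asarray(model(x0 + dx))
                   - np.asarray(model(x0 - dx))) / (2.0 * eps)
    return y0, J @ cov_x @ J.T

# Two correlated inputs propagated through a two-output system model
model = lambda x: np.array([x[0] * x[1], x[0] + 2.0 * x[1]])
cov_x = np.array([[0.04, 0.01],
                  [0.01, 0.09]])        # off-diagonals encode correlation
y0, cov_y = propagate_covariance(model, [2.0, 3.0], cov_x)
```

The off-diagonal terms of `cov_x` are what distinguish this multivariate treatment from naively summing independent variances.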
APA, Harvard, Vancouver, ISO, and other styles
35

Crespo, Cuaresma Jesus, Florian Huber, and Luca Onorante. "The macroeconomic effects of international uncertainty shocks." WU Vienna University of Economics and Business, 2017. http://epub.wu.ac.at/5462/1/wp245.pdf.

Full text
Abstract:
We propose a large-scale Bayesian VAR model with factor stochastic volatility to investigate the macroeconomic consequences of international uncertainty shocks on the G7 countries. The factor structure enables us to identify an international uncertainty shock by assuming that it is the factor most correlated with forecast errors related to equity markets and permits fast sampling of the model. Our findings suggest that the estimated uncertainty factor is strongly related to global equity price volatility, closely tracking other prominent measures commonly adopted to assess global uncertainty. The dynamic responses of a set of macroeconomic and financial variables show that an international uncertainty shock exerts a powerful effect on all economies and variables under consideration.
Series: Department of Economics Working Paper Series
APA, Harvard, Vancouver, ISO, and other styles
36

Xiao, Sa Ph D. Massachusetts Institute of Technology. "Quantifying galactic propagation uncertainty in WIMP dark matter search with AMS01 Z=-1 spectrum." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/53231.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Physics, 2009.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 89-91).
A search for a WIMP dark matter annihilation signal is carried out in the AMS01 negatively charged (Z=-1) particle spectrum, following a set of supersymmetric benchmark scenarios in the mSUGRA framework. The result is consistent with no dark matter signal, assuming a smooth isothermal distribution of dark matter in the Galactic halo. 90% upper bounds of the boost factor by which the flux from DM annihilation could be enhanced without exceeding the AMS01 data are derived to be ~ 10² - 10⁵, varying across the different mSUGRA scenarios. The Boron-to-Carbon ratio energy spectrum is measured with AMS01, which allows us to constrain the cosmic ray (CR) Galactic propagation parameters. In the diffusive reacceleration (DR) model, the propagation parameters are shown to be Dxx ~ 4.5 × 10²⁸ - 6 × 10²⁸ cm² s⁻¹ and VA ~ 28 - 42 km s⁻¹. The impact of the uncertainties in the cosmic ray propagation model on dark matter limits is studied, and the associated uncertainties of the 90% upper bound of the boost factor are found to be less than 30%.
by Sa Xiao.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
37

Romatoski, Rebecca R. (Rebecca Rose). "Fluoride-salt-cooled high-temperature test reactor thermal-hydraulic licensing and uncertainty propagation analysis." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/112378.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 2017.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 295-307).
An important Fluoride-salt-cooled High-temperature Reactor (FHR) development step is to design, build, and operate a test reactor. Through a literature review, liquid-salt coolant thermophysical properties have been recommended along with their uncertainties of 2-20%. This study tackles determining the effects of these high uncertainties by proposing a newly developed methodology to incorporate uncertainty propagation in a thermal-hydraulic safety analysis for test reactor licensing. A hot channel model, Monte Carlo statistical sampling uncertainty propagation, and a limiting safety systems settings (LSSS) approach are uniquely combined to ensure sufficient margin to fuel and material thermal limits during steady-state operation and to incorporate margin for high-uncertainty inputs. The method calculates LSSS parameters to define safe operation. The methodology has been applied to two test reactors currently considered, the Chinese TMSR-SF1 pebble bed design and MIT's Transportable FHR prismatic core design; two candidate coolants, flibe (LiF-BeF2) and nafzirf (NaF-ZrF4); and forced flow and natural circulation conditions to compare operating regions and LSSS power (the maximum power not exceeding any thermal limits). The calculated operating region accounts for uncertainty (2σ), with LSSS power (MW) for forced flow of 25.37±0.72, 22.56±1.15, 21.28±1.48, and 11.32±1.35 for pebble flibe, pebble nafzirf, prismatic flibe, and prismatic nafzirf, respectively. The pebble bed has superior heat transfer, with an operating region that is reduced ~10% less when switching coolants and has ~50% smaller uncertainty than the prismatic design. The maximum fuel temperature constrains the pebble bed while the maximum coolant temperature constrains the prismatic design, due to the different dominant heat transfer modes.
Sensitivity analysis revealed that 1) thermal conductivity, and thus conductive heat transfer, dominates in the prismatic design while convection dominates in the pebble bed, and 2) the impacts of the thermophysical property uncertainties are ranked in the following order: thermal conductivity, heat capacity, density, and lastly viscosity. Broadly, the methodology developed incorporates uncertainty propagation that can be used to evaluate parametric uncertainties to satisfy guidelines for non-power reactor licensing applications, and the method's application shows the pebble bed is more attractive for thermal-hydraulic safety. Although the method was developed and evaluated for coolant property uncertainties for the FHR, it is readily applicable to any parameters of interest.
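The Monte Carlo statistical sampling step of such a methodology can be sketched as follows. This is a deliberately simplified hot-channel energy balance, not the thesis's actual model; the property values, uncertainties, and operating parameters are all assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def peak_coolant_temp(power, mdot, cp, h, area, t_inlet=600.0):
    # Illustrative hot-channel energy balance: bulk coolant temperature
    # rise plus a film temperature drop at the peak-flux location
    return t_inlet + power / (mdot * cp) + power / (h * area)

# Sample thermophysical-property uncertainties as normal distributions
n = 20000
cp = rng.normal(2400.0, 0.05 * 2400.0, n)    # J/(kg K), assumed 5% sigma
h  = rng.normal(1.0e4, 0.10 * 1.0e4, n)      # W/(m^2 K), assumed 10% sigma
t_peak = peak_coolant_temp(2.0e6, 50.0, cp, h, 10.0)

mean, sigma = t_peak.mean(), t_peak.std()
limit_with_margin = mean + 2.0 * sigma       # 2-sigma margin on peak temp
```

Comparing `limit_with_margin` against a material temperature limit is the essence of folding input uncertainty into an LSSS-type safety margin.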
by Rebecca Rose Romatoski.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
38

Cecinati, Francesca. "Uncertainty estimation and propagation in radar-rain gauge rainfall merging using kriging-based techniques." Thesis, University of Bristol, 2017. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.738288.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Esperon, Miguez Manuel. "Financial and risk assessment and selection of health monitoring system design options for legacy aircraft." Thesis, Cranfield University, 2013. http://dspace.lib.cranfield.ac.uk/handle/1826/8062.

Full text
Abstract:
Aircraft operators demand an ever increasing availability of their fleets with constant reduction of their operational costs. With the age of many fleets measured in decades, the options to face these challenges are limited. Integrated Vehicle Health Management (IVHM) uses data gathered through sensors in the aircraft to assess the condition of components to detect and isolate faults or even estimate their Remaining Useful Life (RUL). This information can then be used to improve the planning of maintenance operations and even logistics and operational planning, resulting in shorter maintenance stops and lower cost. Retrofitting health monitoring technology onto legacy aircraft has the capability to deliver what operators and maintainers demand, but working on aging platforms presents numerous challenges. This thesis presents a novel methodology to select the combination of diagnostic and prognostic tools for legacy aircraft that best suits the stakeholders’ needs based on economic return and financial risk. The methodology is comprised of different steps in which a series of quantitative analyses are carried out to reach an objective solution. Beginning with the identification of which components could bring higher reduction of maintenance cost and time if monitored, the methodology also provides a method to define the requirements for diagnostic and prognostic tools capable of monitoring these components. It then continues to analyse how combining these tools affects the economic return and financial risk. Each possible combination is analysed to identify which of them should be retrofitted. Whilst computer models of maintenance operations can be used to analyse the effect of retrofitting IVHM technology on a legacy fleet, the number of possible combinations of diagnostic and prognostic tools is too big for this approach to be practicable. 
Nevertheless, computer models can go beyond the economic analysis performed thus far, and simulations are used as part of the methodology to gain insight into other effects of retrofitting the chosen toolset.
APA, Harvard, Vancouver, ISO, and other styles
40

RIO, EDUARDO DA SILVA LEMA DEL. "A PROPOSAL FOR MODELING THE UNCERTAINTY PROPAGATION IN CARTOGRAPHIC LOCATION METHODS BASED ON WIRELESS TECHNOLOGIES." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2018. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=35494@1.

Full text
Abstract:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
PROGRAMA DE SUPORTE À PÓS-GRADUAÇÃO DE INSTS. DE ENSINO
In this work, the focus is on an alternative approach for expressing measurement uncertainty, applied to localization via radio-frequency identification (RFID). A model for the expression of measurement uncertainty is proposed, named the maximum possible differential uncertainty (IMPD, in Portuguese), which applies to the Cell-Id (Cid), nearest neighbor (VMP, in Portuguese) and trilateration (3L) methods and can assist in choosing the relevant parameters for localization before implementation. A modification of Cell-ID is proposed, named mean Cell-ID (MCid), and the uncertainty model is built for both Cid and MCid, taking into account parameters not considered in the literature. The models proposed in this work lead naturally to the definition of an optimal range for the tags; consequently, closed formulae are derived for its direct and fast determination. Finally, with all the tools in place, a numerical experiment applied to the location of emergency calls was developed, according to the criteria of E911 (Enhanced 911). The results indicated the possibility of positioning, even if the range varies by 40 m in magnitude relative to the spherical range generally adopted in the literature. It was shown how to model the uncertainty of the VMP and 3L methods, and the measurement uncertainty given by the method proposed in this work was compared with that determined both by Monte Carlo simulation and by the law of propagation of uncertainties. The results indicate that the proposed approach can estimate the measurement uncertainty for high coverage probabilities, and that it is especially useful when one or more input quantities cannot be measured.
APA, Harvard, Vancouver, ISO, and other styles
41

Sabouri, Pouya. "Application of perturbation theory methods to nuclear data uncertainty propagation using the collision probability method." Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENI071/document.

Full text
Abstract:
This dissertation presents a comprehensive study of sensitivity/uncertainty analysis of reactor performance parameters (e.g. the k-effective) with respect to the base nuclear data from which they are computed. The analysis starts at the fundamental step: the Evaluated Nuclear Data File and the uncertainties inherently associated with the data it contains, available in the form of variance/covariance matrices. We show that when a methodical and consistent computation of sensitivities is performed, conventional deterministic formalisms can be sufficient to propagate nuclear data uncertainties with the level of accuracy obtained by the most advanced tools, such as state-of-the-art Monte Carlo codes. By applying the developed methodology to three exercises proposed by the OECD (UACSA Benchmarks), we provide insight into the underlying physical phenomena associated with the formalisms used.
APA, Harvard, Vancouver, ISO, and other styles
42

Xuan, Yunqing. "Uncertainty propagation in complex coupled flood risk models using numerical weather prediction and weather radars." Thesis, University of Bristol, 2007. http://hdl.handle.net/1983/c76c4eb0-9c9e-4ddc-866c-9bbdbfa4ec25.

Full text
Abstract:
The role of flood forecasting is becoming increasingly important as the risk-based approach is accepted in flood risk management. The risk-based approach not only requires efficient and abundant information for decision making in a risk framework, but also needs the uncertainty appropriately accounted for and expressed. Rapid developments in numerical weather prediction and weather radar technology make it feasible to provide precipitation predictions and observations for flood warning and forecasting that benefit from extended lead times. Although the uncertainty issues related to standalone models have been addressed, little attention has been focused on the complex behaviour of coupled modelling systems when uncertainty-bearing information propagates through the model cascade. The work presented in this thesis focuses on the issue of uncertainty propagation in this complex coupled modelling environment. A prototype system that integrates high-resolution numerical weather prediction, weather radar, and distributed hydrological models was developed to facilitate the study. The uncertainty propagation and interactions were then analysed, covering the uncertainty associated with the data, model structures, chaotic dynamics and coupling processes. The ensemble method was concluded to be the method of choice for the coupled system to produce forecasts able to account for the uncertainty cascaded from the precipitation prediction to the hydrological and hydraulic models. Finally, recommendations are made in relation to the exploration of complex coupled systems for uncertainty propagation in flood risk management.
APA, Harvard, Vancouver, ISO, and other styles
43

Anderson, Travis V. "Efficient, Accurate, and Non-Gaussian Error Propagation Through Nonlinear, Closed-Form, Analytical System Models." BYU ScholarsArchive, 2011. https://scholarsarchive.byu.edu/etd/2675.

Full text
Abstract:
Uncertainty analysis is an important part of system design. The formula for error propagation through a system model that is most-often cited in literature is based on a first-order Taylor series. This formula makes several important assumptions and has several important limitations that are often ignored. This thesis explores these assumptions and addresses two of the major limitations. First, the results obtained from propagating error through nonlinear systems can be wrong by one or more orders of magnitude, due to the linearization inherent in a first-order Taylor series. This thesis presents a method for overcoming that inaccuracy that is capable of achieving fourth-order accuracy without significant additional computational cost. Second, system designers using a Taylor series to propagate error typically only propagate a mean and variance and ignore all higher-order statistics. Consequently, a Gaussian output distribution must be assumed, which often does not reflect reality. This thesis presents a proof that nonlinear systems do not produce Gaussian output distributions, even when inputs are Gaussian. A second-order Taylor series is then used to propagate both skewness and kurtosis through a system model. This allows the system designer to obtain a fully-described non-Gaussian output distribution. The benefits of having a fully-described output distribution are demonstrated using the examples of both a flat rolling metalworking process and the propeller component of a solar-powered unmanned aerial vehicle.
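The second-order moment propagation described above can be sketched for a single Gaussian input. The quadratic example model is assumed for illustration; the second-order mean and variance formulas shown are standard, and a plain Monte Carlo sample is used to confirm that the output distribution is skewed (hence non-Gaussian) even though the input is Gaussian.

```python
import numpy as np

def second_order_moments(f, mu, sigma, eps=1e-4):
    # Second-order Taylor estimates of the mean and variance of y = f(x)
    # for a single Gaussian input x ~ N(mu, sigma^2):
    #   E[y]   ~ f(mu) + 0.5 f''(mu) sigma^2
    #   Var[y] ~ f'(mu)^2 sigma^2 + 0.5 f''(mu)^2 sigma^4
    f0 = f(mu)
    d1 = (f(mu + eps) - f(mu - eps)) / (2.0 * eps)
    d2 = (f(mu + eps) - 2.0 * f0 + f(mu - eps)) / eps**2
    mean = f0 + 0.5 * d2 * sigma**2
    var = d1**2 * sigma**2 + 0.5 * d2**2 * sigma**4
    return mean, var

# Nonlinear model: a first-order (linearized) estimate would report
# mean f(mu) = 1.0 and a Gaussian output, both of which are wrong here
f = lambda x: x**2
mean2, var2 = second_order_moments(f, 1.0, 0.3)

# Monte Carlo check: the output skewness is clearly nonzero
rng = np.random.default_rng(1)
y = f(rng.normal(1.0, 0.3, 200000))
skew = np.mean((y - y.mean())**3) / y.std()**3
```

For this quadratic model the second-order estimates are exact (E[y] = 1.09), so the correction over the first-order result is directly visible.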
APA, Harvard, Vancouver, ISO, and other styles
44

Chen, Shengli. "Maîtrise des biais et incertitudes des sections efficaces et de la modélisation de la cinématique associées aux réactions nucléaires conduisant aux dommages dans les matériaux sous irradiation." Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALI048.

Full text
Abstract:
Because irradiation damage is a major challenge for nuclear materials, it is of utmost importance to calculate it accurately, with reliable uncertainty estimates. The main objective of this thesis is to develop and improve methodologies for computing neutron irradiation-induced displacement damage and its uncertainties. After a brief review of nuclear reaction models and primary radiation damage models, we propose a complete methodology for calculating damage cross sections from different nuclear reactions and the subsequent calculation of Displacement per Atom (DPA) rates. The recoil energies from neutron-induced reactions are summarized with an estimation of the relativistic effect and the target thermal vibration. In particular, a new method for computing the recoil energy from charged-particle emission reactions is proposed by considering both quantum tunneling and the Coulomb barrier. Several methods are developed to improve and verify the numerical calculations. Damage cross section calculations from the neutron radiative capture reaction and from N-body reactions are also thoroughly analyzed and discussed. In addition to neutron irradiation-induced displacement damage, electron-, positron- and photon-induced DPA cross sections, as well as beta decay and Fission Product (FP)-induced damage, are investigated, and orders of magnitude of their relative contributions are given. For the calculation of the neutron irradiation-induced DPA rate, attention should be paid when using infinite-dilution cross sections. For example, in the ASTRID inner core, the self-shielding correction on ECCO 33-group damage cross sections leads to a 10% reduction of the DPA rate, whereas the multigroup correction is still not automatically treated for DPA rate calculation in neutronics codes, nor for computing the Primary Knock-on Atom (PKA) spectrum.
Based on the presently proposed method for computing FP-induced DPA by atomistic simulations, the peak value of the FP-induced DPA rate can be 4 to 5 times larger than the neutron-induced one in the cladding of the ASTRID inner core, even though the penetration of FPs in the Fe-14Cr cladding is less than 10 µm. Therefore, the question of whether FP-induced damage should be considered when determining fuel assembly lifetime in fast reactors needs to be discussed. In the reactor vessel of a simplified pressurized water reactor, the covariance matrices of the 235U prompt fission neutron spectrum from ENDF/B-VII.1 and JENDL-4.0 lead to 11% and 7% relative uncertainty of the DPA rate, respectively. Neglecting the correlations of the neutron flux and the PKA spectrum results in an underestimation by a factor of 21. The total uncertainties of the damage energy rate are 12% and 9%, respectively, whereas an underestimation by a factor of 3 is found if the correlations of the damage cross section and the neutron flux are not considered.
APA, Harvard, Vancouver, ISO, and other styles
45

Alhossen, Iman. "Méthode d'analyse de sensibilité et propagation inverse d'incertitude appliquées sur les modèles mathématiques dans les applications d'ingénierie." Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30314/document.

Full text
Abstract:
Approaches for studying uncertainty are of great necessity in all disciplines. While the forward propagation of uncertainty has been investigated extensively, backward propagation is still understudied. In this thesis, a new method for backward propagation of uncertainty is presented. The aim of this method is to determine the input uncertainty starting from the given data of the uncertain output. In parallel, sensitivity analysis methods are essential for revealing the influence of the inputs on the output in any modeling process; this helps identify the most significant inputs to be carried into an uncertainty study. In this work, the Sobol sensitivity analysis method, one of the most efficient global sensitivity analysis methods, is considered and its application framework is developed. This method relies on the computation of sensitivity indices, called Sobol indices, which quantify the effect of the inputs on the output. Usually, the inputs in the Sobol method are treated as continuous random variables in order to compute the corresponding indices. In this work, the Sobol method is demonstrated to give reliable results even when applied in the discrete case. In addition, the application of the Sobol method is advanced by studying the variation of these indices with respect to some factors of the model or some experimental conditions. The consequences and conclusions derived from the study of this variation help in determining different characteristics of, and information about, the inputs. Moreover, these inferences allow the indication of the best experimental conditions at which estimation of the inputs can be done.
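A first-order Sobol index computation of the kind described can be sketched with a pick-freeze (Saltelli-style) estimator. The linear test model and sample sizes below are assumptions for illustration; for uniform(0,1) inputs, its analytic first-order indices are 0.8 and 0.2 (squared coefficients times input variances, normalized by the output variance).

```python
import numpy as np

def first_order_sobol(model, n_inputs, n=100000, seed=0):
    # Pick-freeze estimator of first-order Sobol indices: two independent
    # input sample matrices A and B, plus matrices AB_i equal to A with
    # only column i taken from B.
    rng = np.random.default_rng(seed)
    A = rng.random((n, n_inputs))
    B = rng.random((n, n_inputs))
    yA, yB = model(A), model(B)
    var_y = np.concatenate([yA, yB]).var()
    s = np.empty(n_inputs)
    for i in range(n_inputs):
        ABi = A.copy()
        ABi[:, i] = B[:, i]            # resample only input i
        # S_i ~ E[ f(B) (f(AB_i) - f(A)) ] / Var(y)
        s[i] = np.mean(yB * (model(ABi) - yA)) / var_y
    return s

# Linear test model: y = 2 x1 + x2 with independent uniform(0,1) inputs
model = lambda x: 2.0 * x[:, 0] + x[:, 1]
S = first_order_sobol(model, 2)
```

Each index S_i is the fraction of the output variance explained by input i alone, which is exactly the ranking information used to decide which inputs deserve a full uncertainty study.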
APA, Harvard, Vancouver, ISO, and other styles
46

Westin, Robin. "Three material decomposition in dual energy CT for brachytherapy using the iterative image reconstruction algorithm DIRA : Performance of the method for an anthropomorphic phantom." Thesis, Linköpings universitet, Institutionen för medicinsk teknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-91297.

Full text
Abstract:
Brachytherapy is radiation therapy performed by placing a radiation source near or inside a tumor. Doses calculated with the current water-based brachytherapy dose formalism (TG-43) and with new model-based dose calculation algorithms (MBSCAs) can differ by more than a factor of 10. There is a need for voxel-by-voxel cross-section assignment; ideally, both the tissue composition and the mass density of every voxel should be known for individual patients. A method for determining tissue composition via three-material decomposition (3MD) from dual-energy CT scans was developed at Linköping University. The method (named DIRA) is a model-based iterative reconstruction algorithm that utilizes two photon energies for image reconstruction and 3MD for quantitative tissue classification of the reconstructed volumetric dataset. This thesis investigated the accuracy of the 3MD method applied to prostate tissue in an anthropomorphic phantom when using two different approximations of soft tissues in DIRA. The distributions of CT numbers for soft tissues in a contemporary dual-energy CT scanner were also determined, and an investigation of whether these distributions can be used for tissue classification of soft tissues via thresholding was conducted. It was found that the relative errors of the mass energy absorption coefficient (MEAC) and the linear attenuation coefficient (LAC) of the approximated mixture, as functions of photon energy, were less than 6% in the energy region from 1 keV to 1 MeV. This showed that DIRA performed well for the selected anthropomorphic phantom and that it was relatively insensitive to the choice of base materials for the approximation of soft tissues. The distributions of CT numbers of liver, muscle and kidney tissues overlapped; for example, a voxel containing muscle could be misclassified as liver in 42 cases out of 100.
This suggests that pure thresholding is insufficient as a method for tissue classification of soft tissues and that more advanced methods should be used.
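The three-material decomposition step can be viewed as a small linear solve: two measured attenuation values plus a volume-conservation constraint determine three base-material fractions. A sketch under that assumption follows; the base-material attenuation numbers are illustrative placeholders, not DIRA's actual tables:

```python
def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def three_material_decomposition(mu_low, mu_high, bases):
    """Solve for base-material fractions (w1, w2, w3) such that the mixture
    reproduces the measured LACs at both energies and the fractions sum to
    one (Cramer's rule on a 3x3 linear system)."""
    A = [[b[0] for b in bases],   # base LACs at the low energy
         [b[1] for b in bases],   # base LACs at the high energy
         [1.0, 1.0, 1.0]]         # volume conservation
    rhs = [mu_low, mu_high, 1.0]
    d = det3(A)
    w = []
    for col in range(3):
        Ac = [row[:] for row in A]
        for r in range(3):
            Ac[r][col] = rhs[r]
        w.append(det3(Ac) / d)
    return w

# Illustrative (not DIRA's) base-material LACs in 1/cm: (low kVp, high kVp)
BASES = [(0.22, 0.18), (0.20, 0.17), (0.26, 0.21)]  # e.g. water, lipid, protein
fractions = three_material_decomposition(0.218, 0.180, BASES)
```

With measured LACs produced by a 60/30/10 mixture of the three bases, the solve recovers those fractions; in practice the conditioning of the system depends strongly on how well separated the base-material attenuation curves are.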
APA, Harvard, Vancouver, ISO, and other styles
47

Ehrlacher, Virginie. "Quelques modèles mathématiques en chimie quantique et propagation d'incertitudes." Thesis, Paris Est, 2012. http://www.theses.fr/2012PEST1073/document.

Full text
Abstract:
This work has two parts. The first concerns the study of local defects in crystalline materials. Chapter 1 gives a brief overview of the main models used in quantum chemistry for electronic structure calculations. In Chapter 2, we present an exact variational model describing local defects in a periodic crystal within the framework of Thomas-Fermi-von Weiszäcker theory, justified by means of thermodynamic limit arguments. In particular, we show that the defects modeled by this theory cannot be electrically charged. Chapters 3 and 4 deal with the phenomenon of spectral pollution: when an operator is discretized, spurious eigenvalues that do not belong to the spectrum of the initial operator may appear. In Chapter 3, we show that Galerkin approximation methods based on finite element discretizations for approximating the spectrum of perturbed periodic Schrödinger operators are subject to spectral pollution, and that the eigenvectors associated with the spurious eigenvalues can be interpreted as surface states. We prove that this problem can be avoided by using augmented finite element spaces built from the Wannier functions of the unperturbed periodic Schrödinger operator. We also show that the so-called supercell method, which consists in imposing periodic boundary conditions on a simulation domain containing the defect, does not produce spectral pollution. In Chapter 4, we establish a priori error estimates for the supercell method; in particular, we show that the error decays exponentially with the size of the supercell.
The second part concerns the study of greedy algorithms for solving high-dimensional uncertainty propagation problems. Chapter 5 gives an introduction to the classical numerical methods used in the field of uncertainty propagation, as well as to greedy algorithms. In Chapter 6, we prove that these algorithms can be applied to the minimization of strongly convex nonlinear energy functionals and that their convergence rate is exponential in finite dimension. We illustrate these results by solving obstacle problems with uncertainty via a penalized formulation
The contributions of this thesis work are twofold. The first part deals with the study of local defects in crystalline materials. Chapter 1 gives a brief overview of the main models used in quantum chemistry for electronic structure calculations. In Chapter 2, an exact variational model for the description of local defects in a periodic crystal in the framework of the Thomas-Fermi-von Weiszäcker theory is presented. It is justified by means of thermodynamic limit arguments. In particular, it is proved that the defects modeled within this theory are necessarily neutrally charged. Chapters 3 and 4 are concerned with the so-called spectral pollution phenomenon. Indeed, when an operator is discretized, spurious eigenvalues which do not belong to the spectrum of the initial operator may appear. In Chapter 3, we prove that standard Galerkin methods with finite element discretization for the approximation of perturbed periodic Schrödinger operators are prone to spectral pollution. Besides, the eigenvectors associated with spurious eigenvalues can be characterized as surface states. It is possible to circumvent this problem by using augmented finite element spaces, constructed with the Wannier functions of the periodic unperturbed Schrödinger operator. We also prove that the supercell method, which consists in imposing periodic boundary conditions on a large simulation domain containing the defect, does not produce spectral pollution. In Chapter 4, we give a priori error estimates for the supercell method. It is proved in particular that the error of the method decays exponentially with respect to the size of the supercell. The second part of this thesis is devoted to the study of greedy algorithms for the resolution of high-dimensional uncertainty quantification problems. Chapter 5 presents the most classical numerical methods used in the field of uncertainty quantification and an introduction to greedy algorithms.
In Chapter 6, we prove that these algorithms can be applied to the minimization of strongly convex nonlinear energy functionals and that their convergence rate is exponential in the finite-dimensional case. We illustrate these results on obstacle problems with uncertainty via penalized formulations
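The greedy principle at its simplest can be sketched on a quadratic energy: repeatedly add the single dictionary element that best reduces the residual (matching pursuit). This is a toy sketch of the greedy idea only, not the high-dimensional algorithms analyzed in the thesis:

```python
def matching_pursuit(target, atoms, iters=30):
    """Pure greedy minimization of the energy ||target - sum_k c_k d_k||^2:
    at each step, pick the unit-norm atom most correlated with the residual
    and subtract its projection from the residual."""
    r = list(target)
    coeffs = [0.0] * len(atoms)
    for _ in range(iters):
        # Correlations <r, d_k> of the residual with every atom
        corr = [sum(ri * di for ri, di in zip(r, d)) for d in atoms]
        k = max(range(len(atoms)), key=lambda j: abs(corr[j]))
        coeffs[k] += corr[k]
        r = [ri - corr[k] * di for ri, di in zip(r, atoms[k])]
    return coeffs, r

# With an orthonormal dictionary the greedy expansion is recovered exactly
basis = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
coeffs, residual = matching_pursuit([3.0, -1.0, 2.0], basis)
```

For general (redundant, non-orthogonal) dictionaries the residual energy decreases monotonically, and exponential convergence rates of the kind proved in the thesis hold in finite dimension for strongly convex energies.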
APA, Harvard, Vancouver, ISO, and other styles
48

Alhassan, Erwin. "Nuclear data uncertainty quantification and data assimilation for a lead-cooled fast reactor : Using integral experiments for improved accuracy." Doctoral thesis, Uppsala universitet, Tillämpad kärnfysik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-265502.

Full text
Abstract:
For the successful deployment of advanced nuclear systems and the optimization of current reactor designs, high-quality nuclear data are required. Before nuclear data can be used in applications, they must first be evaluated, tested and validated against a set of integral experiments, and then converted into formats usable for applications. In the past, the evaluation process was usually done using differential experimental data complemented with nuclear model calculations. This trend is changing fast due to the increase in computational power and the tremendous improvements in nuclear reaction models over the last decade. Since these models have uncertain inputs, they are normally calibrated using experimental data. However, these experiments are themselves not exact, and therefore the quantities calculated by model codes, such as cross sections and angular distributions, contain uncertainties. Since nuclear data are used as input to reactor transport codes, the output of these codes contains uncertainties due to the data as well. Quantifying these uncertainties is important for setting safety margins, for providing confidence in the interpretation of results, and for deciding where additional effort is needed to reduce them. Also, regulatory bodies are now moving away from conservative evaluations to best-estimate calculations accompanied by uncertainty evaluations. In this work, the Total Monte Carlo (TMC) method was applied to study the impact of nuclear data uncertainties, from basic physics to macroscopic reactor parameters, for the European Lead-Cooled Training Reactor (ELECTRA). As part of the work, nuclear data uncertainties of the actinides in the fuel, the lead isotopes in the coolant, and some structural materials have been investigated.
In the case of the lead coolant, it was observed that the uncertainties in keff and the coolant void worth (except in the case of 204Pb) were large, with the most significant contribution coming from 208Pb. New 208Pb and 206Pb random nuclear data libraries with realistic central values have been produced as part of this work. Also, a correlation-based sensitivity method was used to determine parameter/cross-section correlations for different isotopes and energy groups. Furthermore, an accept/reject method and a method of assigning file weights based on the likelihood function are proposed for uncertainty reduction using criticality benchmark experiments within the TMC method. A significant reduction in nuclear data uncertainty was obtained for some isotopes in ELECTRA after incorporating integral benchmark information. As a further objective of this thesis, a method for selecting benchmarks for code validation for specific reactor applications was developed and applied to the ELECTRA reactor. Finally, a method for combining differential experiments and integral benchmark data for nuclear data adjustment is proposed and applied to the adjustment of neutron-induced 208Pb nuclear data in the fast energy region.
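The likelihood-based file weighting can be sketched generically: each random nuclear-data file receives a weight proportional to exp(-chi2/2) from its agreement with a benchmark keff, and the weighted spread gives the reduced uncertainty. All numbers below are illustrative, not ELECTRA results:

```python
import math
import random

def likelihood_weights(k_samples, k_bench, sigma_bench):
    """Weight each random nuclear-data file by a Gaussian likelihood
    w_i ~ exp(-chi2_i / 2) against an integral benchmark keff."""
    w = [math.exp(-0.5 * ((k - k_bench) / sigma_bench) ** 2) for k in k_samples]
    total = sum(w)
    return [wi / total for wi in w]

def weighted_mean_std(values, weights):
    """Weighted mean and standard deviation (weights already normalized)."""
    mean = sum(v * w for v, w in zip(values, weights))
    var = sum(w * (v - mean) ** 2 for v, w in zip(values, weights))
    return mean, math.sqrt(var)

# Illustrative: 2000 TMC samples of keff with a 500 pcm prior spread;
# a benchmark known to 200 pcm shrinks the posterior spread to ~186 pcm
random.seed(0)
ks = [random.gauss(1.000, 0.005) for _ in range(2000)]
w = likelihood_weights(ks, 1.000, 0.002)
post_mean, post_std = weighted_mean_std(ks, w)
```

The posterior spread approaches the usual combination 1/sigma^2 = 1/sigma_prior^2 + 1/sigma_bench^2; files whose keff disagrees strongly with the benchmark contribute almost nothing, which is the continuous analogue of the accept/reject variant.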
APA, Harvard, Vancouver, ISO, and other styles
49

Kacker, Shubhra. "The Role of Constitutive Model in Traumatic Brain Injury Prediction." University of Cincinnati / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1563874757653453.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

López, Rafael Arcángel Cepeda. "Spatial uncertainty and path loss in UWB propagation channels, and frequency dependent path loss in multi-band OFDM." Thesis, University of Bristol, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.503865.

Full text
Abstract:
The performance of ultra wideband (UWB) signals in wireless communication systems depends primarily on knowledge of the propagation channel, interfering signals and the proximity between transmit and receive antennas. This work focuses on the study of the UWB propagation channel and, in particular, on how the large-scale statistics of the frequency components are affected by the environment. A two-ray UWB model is used to show the effects of magnitude, delay and phase variations on the power spectral density of UWB signals. Spatial uncertainty in measurements is also analysed in two ways: firstly, by deriving mathematical expressions for the area and volume of uncertainty and, secondly, by measuring the effects of displacements between transmit and receive antennas when the distance changes are smaller than the spatial resolution of the measuring equipment.
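A two-ray frequency response of the kind described can be sketched as a direct ray plus one reflected ray, each decaying as 1/d, with the phase set by the excess delay; the reflection coefficient below is an assumed illustrative value, not one fitted to the thesis measurements:

```python
import cmath
import math

C = 299_792_458.0  # speed of light in m/s

def two_ray_gain(freq_hz, d_direct, d_reflect, refl_coeff=-0.7):
    """Relative power gain |h(f)|^2 of a two-ray channel: a direct ray plus
    one reflected ray, amplitudes decaying as 1/d, with the phase set by
    the excess delay (d_reflect - d_direct) / c at the given frequency."""
    phase = cmath.exp(-2j * math.pi * freq_hz * (d_reflect - d_direct) / C)
    h = 1.0 / d_direct + refl_coeff * phase / d_reflect
    return abs(h) ** 2

# Path difference 0.5 m: fades repeat every c / 0.5 m ~ 600 MHz, so a UWB
# signal spanning 3.1-10.6 GHz sees many constructive and destructive bands
g_peak = two_ray_gain(11 * C, 3.0, 3.5)  # ~3.30 GHz: rays add in phase
g_fade = two_ray_gain(12 * C, 3.0, 3.5)  # ~3.60 GHz: rays cancel
```

This frequency selectivity across the band is what makes the power spectral density of a UWB signal sensitive to small changes in the ray geometry, which is the mechanism behind the spatial uncertainty analysed in the thesis.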
APA, Harvard, Vancouver, ISO, and other styles
