
Dissertations / Theses on the topic 'Chaos expansion'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 45 dissertations / theses for your research on the topic 'Chaos expansion.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Szepietowska, Katarzyna. "POLYNOMIAL CHAOS EXPANSION IN BIO- AND STRUCTURAL MECHANICS." Thesis, Bourges, INSA Centre Val de Loire, 2018. http://www.theses.fr/2018ISAB0004/document.

Full text
Abstract:
This thesis presents a probabilistic approach to modelling the mechanics of materials and structures where the modelled performance is influenced by uncertainty in the input parameters. The work is interdisciplinary and the methods described are applied to medical and civil engineering problems. The motivation for this work was the necessity of mechanics-based approaches in the modelling and simulation of implants used in the repair of ventral hernias. Many uncertainties appear in the modelling of the implant-abdominal wall system. The probabilistic approach proposed in this thesis enables these uncertainties to be propagated to the output of the model and the investigation of their respective influences. The regression-based polynomial chaos expansion method is used here. However, the accuracy of such non-intrusive methods depends on the number and location of sampling points. Finding a universal method to achieve a good balance between accuracy and computational cost is still an open question, so different approaches are investigated in this thesis in order to choose an efficient method. Global sensitivity analysis is used to investigate the respective influences of input uncertainties on the variation of the outputs of different models. The uncertainties are propagated to the implant-abdominal wall models in order to draw some conclusions important for further research. Using the expertise acquired from biomechanical models, modelling of historic timber joints and simulations of their mechanical behaviour are undertaken. Such an investigation is important owing to the need for efficient planning of repairs and renovation of buildings of historical value.
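The abstract above centres on regression-based (non-intrusive) polynomial chaos, whose accuracy hinges on the number and placement of sampling points. As a minimal illustrative sketch of the idea, with a toy scalar model standing in for the thesis's finite-element models (everything here is an assumption, not taken from the thesis), Hermite-chaos coefficients can be fitted by least squares:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander, hermeval

def model(xi):
    # Toy scalar model of a standard-normal input; a stand-in for an
    # expensive finite-element evaluation (assumption, not from the thesis).
    return np.exp(0.3 * xi)

rng = np.random.default_rng(0)
xi = rng.standard_normal(200)      # sampling points: their number and placement drive accuracy
y = model(xi)

degree = 4
Psi = hermevander(xi, degree)      # probabilists' Hermite basis He_0..He_4 at the samples
coeffs, *_ = np.linalg.lstsq(Psi, y, rcond=None)

surrogate = hermeval(xi, coeffs)   # the fitted expansion is now cheap to evaluate
print(np.max(np.abs(surrogate - y)))
```

The conditioning of this least-squares system is exactly where the choice of computation points matters, which is the trade-off the thesis investigates.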
APA, Harvard, Vancouver, ISO, and other styles
2

Aït-Simmou, Abderrahmane. "Filtrage non-linéaire et expansion en chaos de Wiener /." Thèse, Trois-Rivières : Université du Québec à Trois-Rivières, 2002. http://www.uqtr.ca/biblio/notice/tablemat/03-2246353TM.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Aït-Simmou, Abderrahmane. "Filtrage non-linéaire et expansion en chaos de Wiener." Thèse, Université du Québec à Trois-Rivières, 2002. http://depot-e.uqtr.ca/3992/1/000102224.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Nydestedt, Robin. "Application of Polynomial Chaos Expansion for Climate Economy Assessment." Thesis, KTH, Optimeringslära och systemteori, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-223985.

Full text
Abstract:
In climate economics, integrated assessment models (IAMs) are used to predict economic impacts resulting from climate change. These IAMs attempt to model complex interactions between human and geophysical systems to provide quantifications of economic impact, typically using the Social Cost of Carbon (SCC), which represents the economic cost of a one-ton increase in carbon dioxide emissions. Another difficulty that arises in modeling a climate economics system is that both the geophysical and economic submodules are inherently stochastic. Even in frequently cited IAMs, such as DICE and PAGE, there exists a lot of variation in the predictions of the SCC. These differences stem both from the models used for the climate and economic modules and from the choice of probability distributions for the random variables. Seeing as IAMs often take the form of optimization problems, these nondeterministic elements potentially result in heavy computational costs. In this thesis a new IAM, FAIR/DICE, is introduced. FAIR/DICE is a discrete-time hybrid of DICE and FAIR, providing a potential improvement to DICE since the climate and carbon modules in FAIR take into account feedback from the climate module to the carbon module. Additionally, uncertainty propagation in FAIR/DICE is analyzed using Polynomial Chaos Expansions (PCEs), an alternative to Monte Carlo sampling in which the stochastic variables are projected onto stochastic polynomial spaces. PCEs provide better computational efficiency than Monte Carlo sampling at the expense of storage requirements, since many computations can be stored from the first simulation of the system, and statistics can conveniently be computed from the PCE coefficients without the need for sampling. A PCE overloading of FAIR/DICE is investigated in which the equilibrium climate sensitivity, modeled as a four-parameter Beta distribution, introduces an uncertainty into the dynamical system.
Finally, results for the mean and variance obtained from the PCEs are compared to a Monte Carlo reference, and avenues for future work are suggested.
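The abstract for this entry notes that statistics can be read off the PCE coefficients without sampling. For a one-dimensional Hermite expansion f ≈ Σ_k c_k He_k(ξ), orthogonality gives mean = c_0 and variance = Σ_{k≥1} c_k² k!. A small sketch with invented coefficients (not FAIR/DICE output) checks this against Monte Carlo:

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermeval

# Illustrative Hermite-chaos coefficients (invented for this sketch).
c = np.array([1.0, 0.5, 0.2, 0.05])

mean_pce = c[0]                                                     # E[He_k] = 0 for k >= 1
var_pce = sum(c[k] ** 2 * factorial(k) for k in range(1, len(c)))   # E[He_k^2] = k!

# Monte Carlo cross-check by sampling the expansion itself.
rng = np.random.default_rng(1)
samples = hermeval(rng.standard_normal(200_000), c)
print(mean_pce, samples.mean())
print(var_pce, samples.var())
```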
APA, Harvard, Vancouver, ISO, and other styles
5

Luo, Wuan Hou Thomas Y. "Wiener chaos expansion and numerical solutions of stochastic partial differential equations /." Diss., Pasadena, Calif. : Caltech, 2006. http://resolver.caltech.edu/CaltechETD:etd-05182006-173710.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Price, Darryl Brian. "Estimation of Uncertain Vehicle Center of Gravity using Polynomial Chaos Expansions." Thesis, Virginia Tech, 2008. http://hdl.handle.net/10919/33625.

Full text
Abstract:
The main goal of this study is the use of polynomial chaos expansion (PCE) to analyze the uncertainty in calculating the lateral and longitudinal center of gravity for a vehicle from static load cell measurements. A secondary goal is to use experimental testing as a source of uncertainty and as a method to confirm the results from the PCE simulation. While PCE has often been used as an alternative to Monte Carlo, PCE models have rarely been based on experimental data. The 8-post test rig at the Virginia Institute for Performance Engineering and Research facility at Virginia International Raceway is the experimental test bed used to implement the PCE model. Experimental tests are conducted to define the true distribution for the load measurement systems' uncertainty. A method that does not require a new uncertainty distribution experiment for multiple tests with different goals is presented. Moved mass tests confirm the uncertainty analysis using portable scales that provide accurate results. The polynomial chaos model used to find the uncertainty in the center of gravity calculation is derived. Karhunen-Loève expansions, similar to Fourier series, are used to define the uncertainties to allow for the polynomial chaos expansion. PCE models are typically computed via the collocation method or the Galerkin method. The Galerkin method is chosen as the PCE method in order to formulate a more accurate analytical result. The derivation systematically increases from one uncertain load cell to all four uncertain load cells noting the differences and increased complexity as the uncertainty dimensions increase. For each derivation the PCE model is shown and the solution to the simulation is given. Results are presented comparing the polynomial chaos simulation to the Monte Carlo simulation and to the accurate scales. It is shown that the PCE simulations closely match the Monte Carlo simulations.
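The abstract contrasts collocation with the Galerkin method. The thesis's load-cell derivation is not reproduced here; instead, a generic intrusive Galerkin sketch for the toy stochastic equation (1 + σξ)u(ξ) = 1, with ξ uniform on [-1, 1] and a Legendre chaos basis (all parameters assumed), illustrates the mechanics: projecting the residual onto each basis polynomial yields a small deterministic linear system for the chaos coefficients.

```python
import numpy as np

# Toy stochastic equation (1 + sigma*xi) u(xi) = 1, xi ~ Uniform(-1, 1).
# Galerkin projection onto Legendre polynomials P_0..P_P, using
# E[P_j P_k] = delta_jk / (2j+1) and x P_k = ((k+1) P_{k+1} + k P_{k-1}) / (2k+1).
sigma, P = 0.5, 8
A = np.zeros((P + 1, P + 1))        # A[j, k] = E[(1 + sigma*x) P_k P_j]
for j in range(P + 1):
    A[j, j] = 1.0 / (2 * j + 1)
    if j >= 1:
        A[j, j - 1] = sigma * j / ((2 * j - 1) * (2 * j + 1))
    if j <= P - 1:
        A[j, j + 1] = sigma * (j + 1) / ((2 * j + 3) * (2 * j + 1))
b = np.zeros(P + 1)
b[0] = 1.0                          # right-hand side: E[1 * P_j] = delta_j0

u = np.linalg.solve(A, b)           # chaos coefficients of the solution
print(u[0])                         # mean of u; exactly ln(3) for sigma = 0.5
```

The tridiagonal, symmetric structure of the Galerkin matrix is what makes the intrusive route analytically tractable here, which mirrors the abstract's point about the Galerkin method giving a more accurate analytical result.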
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
7

Cattell, Simon. "A Wiener chaos based approach to stability analysis of stochastic shear flows." Thesis, University of Cambridge, 2019. https://www.repository.cam.ac.uk/handle/1810/289421.

Full text
Abstract:
As the aviation industry expands, consuming oil reserves, generating carbon dioxide gas and adding to environmental concerns, there is an increasing need for drag reduction technology. The ability to maintain a laminar flow promises significant reductions in drag, with economic and environmental benefits. Whilst development of flow control technology has gained interest, few studies investigate the impacts that uncertainty, in flow properties, can have on flow stability. Inclusion of uncertainty, inherent in all physical systems, facilitates a more realistic analysis, and is therefore central to this research. To this end, we study the stability of stochastic shear flows, and adopt a framework based upon the Wiener Chaos expansion for efficient numerical computations. We explore the stability of stochastic Poiseuille, Couette and Blasius boundary layer type base flows, presenting stochastic results for both the modal and non modal problem, contrasting with the deterministic case and identifying the responsible flow characteristics. From a numerical perspective we show that the Wiener Chaos expansion offers a highly efficient framework for the study of relatively low dimensional stochastic flow problems, whilst Monte Carlo methods remain superior in higher dimensions. Further, we demonstrate that a Gaussian auto-covariance provides a suitable model for the stochasticity present in typical wind tunnel tests, at least in the case of a Blasius boundary layer. From a physical perspective we demonstrate that it is neither the number of inflection points in a defect, nor the input variance attributed to a defect, that influences the variance in stability characteristics for Poiseuille flow, but the shape/symmetry of the defect. Conversely, we show the symmetry of defects to be less important in the case of the Blasius boundary layer, where we find that defects which increase curvature in the vicinity of the critical point generally reduce stability. 
In addition, we show that defects which enhance gradients in the outer regions of a boundary layer can excite centre modes with the potential to significantly impact neutral curves. Such effects can lead to the development of an additional lobe at lower wave-numbers, can be related to jet flows, and can significantly reduce the critical Reynolds number.
APA, Harvard, Vancouver, ISO, and other styles
8

Song, Chen [Verfasser], and Vincent [Akademischer Betreuer] Heuveline. "Uncertainty Quantification for a Blood Pump Device with Generalized Polynomial Chaos Expansion / Chen Song ; Betreuer: Vincent Heuveline." Heidelberg : Universitätsbibliothek Heidelberg, 2018. http://d-nb.info/1177252406/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Langewisch, Dustin R. "Application of the polynomial chaos expansion to multiphase CFD : a study of rising bubbles and slug flow." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/92097.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 2014.
Includes bibliographical references (pages 157-167).
Part I of this thesis considers subcooled nucleate boiling on the microscale, focusing on the analysis of heat transfer near the Three-Phase (solid, liquid, and vapor) contact Line (TPL) region. A detailed derivation of one representative TPL model is presented. From this work, it was ultimately concluded that heat transfer in the vicinity of the TPL is rather unimportant in the overall quantification of nucleate boiling heat transfer; despite the extremely high heat fluxes that are attainable, it is limited to a very small region so the net heat transfer from this region is comparatively small. It was further concluded that many of the so-called microlayer heat transfer models appearing in the literature are actually models for TPL heat transfer; these models do not model the experimentally observed microlayer. This portion of the project was terminated early, however, in order to focus on the application of advanced computational uncertainty quantification methods to computational multiphase fluid dynamics (Part II). Part II discusses advanced uncertainty quantification (UQ) methods for long-running numerical models, namely computational multiphase fluid dynamics (CMFD) simulations. We consider the problem of how to efficiently propagate uncertainties in the model inputs (e.g., fluid properties, such as density, viscosity, etc.) through a computationally demanding model. The challenge is chiefly a matter of economics: the long run-time of these simulations limits the number of samples that one can reasonably obtain (i.e., the number of times the simulation can be run). Chapter 2 introduces the generalized Polynomial Chaos (gPC) expansion, which has shown promise for reducing the computational cost of performing UQ for a large class of problems, including heat transfer and single-phase, incompressible flow simulations; example applications are demonstrated in Chapter 2. 
One of the main objectives of this research was to ascertain whether this promise extends to the realm of CMFD applications, and this is the topic of Chapters 3 and 4; Chapter 3 covers the numerical simulation of a single bubble rising in a quiescent liquid bath. The pertinent quantities from these simulations are the terminal velocity of the bubble and terminal bubble shape. The simulations were performed using the open-source Gerris flow solver. A handful of test cases were performed to validate the simulation results against available experimental data and numerical results from other authors; the results from Gerris were found to compare favorably. Following the validation, we considered two uncertainty quantification problems. In the first problem, the viscosity of the surrounding liquid is modeled as a uniform random variable and we quantify the resultant uncertainty in the bubble's terminal velocity. The second example is similar, except the bubble's size (diameter) is modeled as a log-normal random variable. In this case, the Hermite expansion is seen to converge almost immediately; a first-order Hermite expansion computed using 3 model evaluations is found to capture the terminal velocity distribution almost exactly. Both examples demonstrate that non-intrusive spectral projection (NISP) can be successfully used to efficiently propagate uncertainties through CMFD models. Finally, we describe a simple technique to implement a moving reference frame in Gerris. Chapter 4 presents an extensive study of the numerical simulation of capillary slug flow. We review existing correlations for the thickness of the liquid film surrounding a Taylor bubble and the pressure drop across the bubble. Bretherton's lubrication analysis, which yields analytical predictions for these quantities when inertial effects are negligible and Ca_B → 0, is considered in detail. In addition, a review is provided of film thickness correlations that are applicable for high Ca_B or when inertial effects are non-negligible. 
An extensive computational study was undertaken with Gerris to simulate capillary slug flow under a variety of flow conditions; in total, more than two hundred simulations were carried out. The simulations were found to compare favorably with simulations performed previously by other authors using finite elements. The data from our simulations have been used to develop a new correlation for the film thickness and bubble velocity that is generally applicable. While similar in structure to existing film thickness correlations, the present correlation does not require the bubble velocity to be known a priori. We conclude with an application of the gPC expansion to quantify the uncertainty in the pressure drop in a channel in slug flow when the bubble size is described by a probability distribution. It is found that, although the gPC expansion fails to adequately quantify the uncertainty in field quantities (pressure and velocity) near the liquid-vapor interface, it is nevertheless capable of representing the uncertainty in other quantities (e.g., channel pressure drop) that do not depend sensitively on the precise location of the interface.
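The abstract reports that a low-order Hermite expansion, computed non-intrusively, captures the terminal-velocity distribution for a log-normal bubble diameter. A generic sketch of the NISP projection step, with a toy closed-form model standing in for the Gerris solver (the model, μ and σ are all assumptions): each coefficient is the projection c_k = E[f·He_k]/k!, evaluated with Gauss-Hermite quadrature.

```python
import numpy as np
from math import factorial, sqrt, pi
from numpy.polynomial.hermite_e import hermegauss, hermeval

def model(d):
    # Toy closed-form stand-in for the CFD solver: "terminal velocity"
    # as a smooth function of bubble diameter (assumption).
    return 0.7 * np.sqrt(d)

mu, sigma = 0.0, 0.25              # parameters of the log-normal diameter (assumed)
nodes, weights = hermegauss(5)     # probabilists' Gauss-Hermite rule
weights = weights / sqrt(2 * pi)   # normalize so the weights integrate the N(0,1) density

degree = 3
c = np.array([
    np.sum(weights * model(np.exp(mu + sigma * nodes))
           * hermeval(nodes, np.eye(degree + 1)[k])) / factorial(k)
    for k in range(degree + 1)
])
print(c)   # c[0] is the mean model output under the input distribution
```

Only the five quadrature nodes require model evaluations, which is the economic argument the abstract makes for NISP over brute-force sampling.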
by Dustin R. Langewisch.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
10

Koehring, Andrew. "The application of polynomial response surface and polynomial chaos expansion metamodels within an augmented reality conceptual design environment." [Ames, Iowa : Iowa State University], 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
11

Mühlpfordt, Tillmann [Verfasser], and V. [Akademischer Betreuer] Hagenmeyer. "Uncertainty Quantification via Polynomial Chaos Expansion – Methods and Applications for Optimization of Power Systems / Tillmann Mühlpfordt ; Betreuer: V. Hagenmeyer." Karlsruhe : KIT-Bibliothek, 2020. http://d-nb.info/1203211872/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Yadav, Vaibhav. "Novel Computational Methods for Solving High-Dimensional Random Eigenvalue Problems." Diss., University of Iowa, 2013. https://ir.uiowa.edu/etd/4927.

Full text
Abstract:
The primary objective of this study is to develop new computational methods for solving a general random eigenvalue problem (REP) commonly encountered in modeling and simulation of high-dimensional, complex dynamic systems. Four major research directions, all anchored in polynomial dimensional decomposition (PDD), have been defined to meet the objective. They involve: (1) a rigorous comparison of accuracy, efficiency, and convergence properties of the polynomial chaos expansion (PCE) and PDD methods; (2) development of two novel multiplicative PDD methods for addressing multiplicative structures in REPs; (3) development of a new hybrid PDD method to account for the combined effects of the multiplicative and additive structures in REPs; and (4) development of adaptive and sparse algorithms in conjunction with the PDD methods. The major findings are as follows. First, a rigorous comparison of the PCE and PDD methods indicates that the infinite series from the two expansions are equivalent but their truncations endow contrasting dimensional structures, creating significant difference between the two approximations. When the cooperative effects of input variables on an eigenvalue attenuate rapidly or vanish altogether, the PDD approximation commits smaller error than does the PCE approximation for identical expansion orders. Numerical analyses reveal higher convergence rates and significantly higher efficiency of the PDD approximation than the PCE approximation. Second, two novel multiplicative PDD methods, factorized PDD and logarithmic PDD, were developed to exploit the hidden multiplicative structure of an REP, if it exists. Since a multiplicative PDD recycles the same component functions of the additive PDD, no additional cost is incurred. Numerical results show that indeed both the multiplicative PDD methods are capable of effectively utilizing the multiplicative structure of a random response. 
Third, a new hybrid PDD method was constructed for uncertainty quantification of high-dimensional complex systems. The method is based on a linear combination of an additive and a multiplicative PDD approximation. Numerical results indicate that the univariate hybrid PDD method, which is slightly more expensive than the univariate additive or multiplicative PDD approximations, yields more accurate stochastic solutions than the latter two methods. Last, two novel adaptive-sparse PDD methods were developed that entail global sensitivity analysis for defining the relevant pruning criteria. Compared with the past developments, the adaptive-sparse PDD methods do not require its truncation parameter(s) to be assigned a priori or arbitrarily. Numerical results reveal that an adaptive-sparse PDD method achieves a desired level of accuracy with considerably fewer coefficients compared with existing PDD approximations.
APA, Harvard, Vancouver, ISO, and other styles
13

Scott, Karen Mary Louise. "Practical Analysis Tools for Structures Subjected to Flow-Induced and Non-Stationary Random Loads." Diss., Virginia Tech, 2011. http://hdl.handle.net/10919/38686.

Full text
Abstract:
There is a need to investigate and improve upon existing methods to predict the response of sensors due to flow-induced vibrations in a pipe flow. The aim was to develop a tool which would enable an engineer to quickly evaluate the suitability of a particular design for a certain pipe flow application, without sacrificing fidelity. The primary methods of simple response prediction of sensors, found in guides published by the American Society of Mechanical Engineers (ASME), were found to be lacking in several key areas, which prompted development of the tool described herein. A particular limitation of the existing guidelines deals with complex stochastic stationary and non-stationary modeling and required much further study, therefore providing direction for the second portion of this body of work. A tool for response prediction of fluid-induced vibrations of sensors was developed which allowed for analysis of low aspect ratio sensors. Results from the tool were compared to experimental lift and drag data, recorded for a range of flow velocities. The model was found to perform well over the majority of the velocity range, showing superiority in prediction of response as compared to ASME guidelines. The tool was then applied to a design problem given by an industrial partner, showing several of their designs to be inadequate for the proposed flow regime. This immediate identification of unsuitable designs no doubt saved significant time in the product development process. Work to investigate stochastic modeling in structural dynamics was undertaken to understand the reasons for the limitations found in fluid-structure interaction models. A particular weakness, non-stationary forcing, was found to be the most lacking in terms of use in the design stage of structures. A method was developed using the Karhunen-Loève expansion as its base to close the gap between prohibitively simple (stationary-only) models and those which require too much computation time. 
Models were developed from SDOF through continuous systems and shown to perform well at each stage. Further work is needed in this area to bring this work full circle such that the lessons learned can improve design level turbulent response calculations.
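The abstract builds its non-stationary forcing method on the Karhunen-Loève expansion. A minimal sketch of the discrete KL construction, assuming a squared-exponential covariance and an illustrative correlation length (neither taken from the dissertation): eigendecompose the covariance matrix and synthesize realizations from the dominant modes.

```python
import numpy as np

n = 100
t = np.linspace(0.0, 1.0, n)
ell = 0.2                                    # correlation length (assumed)
C = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * ell ** 2))

# KL modes: eigenpairs of the covariance matrix, largest eigenvalue first.
vals, vecs = np.linalg.eigh(C)
vals, vecs = vals[::-1], vecs[:, ::-1]

m = 5                                        # truncation: a few modes carry most of the variance
rng = np.random.default_rng(2)
xi = rng.standard_normal(m)
sample = vecs[:, :m] @ (np.sqrt(vals[:m]) * xi)   # one realization of the random process

print(vals[:m] / vals.sum())                 # energy fraction per retained mode
```

The rapid eigenvalue decay is what lets a low-order truncation stand in for the full process, which is the bridge the dissertation exploits between stationary-only models and expensive full simulations.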
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
14

El, Moçayd Nabil. "La décomposition en polynôme du chaos pour l'amélioration de l'assimilation de données ensembliste en hydraulique fluviale." Phd thesis, Toulouse, INPT, 2017. http://oatao.univ-toulouse.fr/17862/1/El_Mocayd_Nabil.pdf.

Full text
Abstract:
This work concerns the construction of a reduced model in river hydraulics using a polynomial chaos decomposition. The reduced model replaces the direct model in order to cut the computational cost of ensemble methods in uncertainty quantification and data assimilation. The context of the study is flood forecasting and water resource management. The manuscript comprises five parts, each divided into chapters. The first part presents a state of the art of work on uncertainty quantification and data assimilation in hydraulics, together with the objectives of the thesis. It presents the flood forecasting setting, its challenges and the tools available for predicting river dynamics, notably the future SWOT mission, which aims to measure water levels in rivers with global coverage at high resolution, and clarifies the contribution of these measurements and their complementarity with in-situ measurements. The second part presents the Saint-Venant equations, which describe river flows, and a numerical discretization of these equations as implemented in the Mascaret-1D software; the last chapter of this part proposes simplifications of the Saint-Venant equations. The third part presents methods for quantifying and reducing uncertainties, notably the probabilistic framework for uncertainty quantification and sensitivity analysis. It then proposes reducing the dimension of a stochastic problem when dealing with random fields, and presents polynomial chaos decomposition methods. This methodological part closes with a chapter devoted to ensemble data assimilation and the use of reduced models in that setting. The fourth part of the manuscript is devoted to results. 
We begin by identifying the sources of uncertainty in hydraulics, which we then set out to quantify and reduce. An article under review details the validation of a reduced model for the steady-state Saint-Venant equations when the uncertainty is mainly carried by the friction coefficients and the upstream discharge. We show that the statistical moments, the probability density and the spatial covariance matrix of the water level are estimated efficiently and accurately with the reduced model, whose construction requires only a few dozen integrations of the direct model. The reduced model is then used to cut the computational cost of the ensemble Kalman filter in a synthetic SWOT-like data assimilation exercise. We focus on the spatial representation of the data as seen by SWOT: global coverage of the network and spatial averaging over the observed pixels. In particular, we show that, for a given computational budget, the data assimilation analysis based on the reduced model outperforms the classical filter. Finally, we consider building the reduced model in the unsteady regime. Assuming first that the uncertainty is tied to the friction coefficients, we assess whether the polynomial coefficients need to be recomputed over time and over the data assimilation cycles; for this work only in-situ data were considered. Assuming then that the uncertainty is carried by the discharge upstream of the network, a temporal vector, we perform a Karhunen-Loève decomposition to reduce the uncertain space to its first three modes, which allows us to carry out a data assimilation exercise. 
Finally, the conclusions and perspectives of this work are presented in the fifth part.
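The abstract couples a PCE surrogate with the ensemble Kalman filter for SWOT-like assimilation. Independent of Mascaret or any surrogate, the stochastic EnKF analysis step itself can be sketched with toy numbers (state size, ensemble size, observation operator and error levels are all assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy ensemble of water-depth states (n_state=3, n_ens=50); in practice each
# member would come from the hydraulic model or its PCE surrogate.
n_state, n_ens = 3, 50
X = rng.normal([1.0, 2.0, 3.0], 0.3, size=(n_ens, n_state)).T

H = np.array([[1.0, 0.0, 0.0]])        # observe the first state only
R = np.array([[0.05 ** 2]])            # observation-error covariance
y_obs = np.array([1.2])

# Sample covariance and Kalman gain from the ensemble.
Xm = X - X.mean(axis=1, keepdims=True)
P = Xm @ Xm.T / (n_ens - 1)
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)

# Stochastic EnKF: perturb the observation independently for each member.
Y = y_obs[:, None] + rng.normal(0.0, 0.05, size=(1, n_ens))
Xa = X + K @ (Y - H @ X)               # analysis ensemble

print(X.mean(axis=1), Xa.mean(axis=1)) # the analysis mean moves toward the observation
```

The cost driver in this update is producing the forecast ensemble X, which is exactly what the thesis replaces with the cheap chaos surrogate.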
APA, Harvard, Vancouver, ISO, and other styles
15

Fajraoui, Noura. "Analyse de sensibilité globale et polynômes de chaos pour l'estimation des paramètres : application aux transferts en milieu poreux." Phd thesis, Université de Strasbourg, 2014. http://tel.archives-ouvertes.fr/tel-01019528.

Full text
Abstract:
The management of contaminant transfer in porous media is a growing concern and is of particular interest for pollution control in subsurface environments and the management of groundwater resources, or more generally for the protection of the environment. Flow and pollutant transport phenomena are described by physical laws expressed as algebraic-differential equations that depend on a large number of input parameters. Most of these parameters are poorly known, often not directly measurable, and/or their measurement may be tainted with uncertainty. This thesis concerns global sensitivity analysis and parameter estimation for flow and transport problems in porous media. To carry out this work, polynomial chaos decomposition is used to quantify the influence of the parameters on the output of the numerical models. This tool not only allows the computation of Sobol sensitivity indices but also provides a surrogate model (or metamodel) that is much faster to run. This latter feature is then exploited for model inversion from observed data. For the inverse problem, we favor the Bayesian approach, which offers a rigorous framework for parameter estimation. In a second step, we developed an efficient strategy for building sparse polynomial chaos expansions, in which only the coefficients whose contribution to the model variance is significant are retained. This strategy gave very encouraging results for two reactive transport problems. The last part of this work is devoted to the inverse problem when the model inputs are spatially distributed Gaussian random fields. 
The particularity of such a problem is that it is ill-posed, since a random field is defined by an infinite number of coefficients. The Karhunen-Loève decomposition reduces the dimension of the problem and also regularizes it. However, inversion by this method yields results that are sensitive to the a priori choice of the covariance function of the field. A dimension-reduction algorithm based on a selection criterion (Schwarz criterion) is proposed to make the problem less sensitive to this choice.
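The regression-based polynomial chaos and Sobol-index machinery described in this abstract can be sketched in a few lines. This is a minimal illustration only: the toy two-parameter polynomial below stands in for the flow and transport simulators of the thesis, and uniform inputs on [-1, 1] motivate the orthonormal Legendre basis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-parameter model standing in for the expensive flow/transport
# simulators of the thesis (an assumption for illustration only), with
# uniform inputs on [-1, 1].
def model(x1, x2):
    return x1 + 0.5 * x1 * x2

# Orthonormal Legendre polynomials on [-1, 1] (uniform weight), degree <= 2.
def legendre(n, x):
    if n == 0:
        return np.ones_like(x)
    if n == 1:
        return np.sqrt(3.0) * x
    return np.sqrt(5.0) * 0.5 * (3.0 * x**2 - 1.0)

# Tensorized basis up to total degree 2.
multi_indices = [(a, b) for a in range(3) for b in range(3) if a + b <= 2]

# Non-intrusive, regression-based PCE: evaluate the model on a random design
# and solve a least-squares problem for the coefficients.
n_samples = 200
X = rng.uniform(-1.0, 1.0, size=(n_samples, 2))
design = np.column_stack([legendre(a, X[:, 0]) * legendre(b, X[:, 1])
                          for a, b in multi_indices])
coeffs = np.linalg.lstsq(design, model(X[:, 0], X[:, 1]), rcond=None)[0]

# With an orthonormal basis, variance and Sobol indices follow directly
# from the coefficients.
var_total = sum(c**2 for (a, b), c in zip(multi_indices, coeffs) if (a, b) != (0, 0))
S1 = sum(c**2 for (a, b), c in zip(multi_indices, coeffs) if a > 0 and b == 0) / var_total
S2 = sum(c**2 for (a, b), c in zip(multi_indices, coeffs) if b > 0 and a == 0) / var_total
print(S1, S2)  # first-order Sobol indices of x1 and x2
```

Once the coefficients are known, the same expansion serves as the fast metamodel mentioned in the abstract: evaluating the basis at a new point is far cheaper than running the simulator.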
APA, Harvard, Vancouver, ISO, and other styles
16

Segui, Vasquez Bartolomé. "Modélisation dynamique des systèmes disque aubes multi-étages : Effets des incertitudes." Phd thesis, INSA de Lyon, 2013. http://tel.archives-ouvertes.fr/tel-00961270.

Full text
Abstract:
Recent turbomachinery designs tend toward increasingly flexible inter-stage couplings and low damping levels, leading to configurations in which the modes are likely to exhibit strong inter-stage coupling. In general, multi-stage bladed-disk assemblies have no overall cyclic symmetry, and the analysis must be carried out on a model of the complete structure, which results in very expensive computations. To overcome this problem, a recent method called multi-stage cyclic symmetry can be used to reduce the computational cost for rotors composed of several stages, even when the stages have different numbers of sectors. This approach takes advantage of the cyclic symmetry inherent to each stage and relies on a specific assumption that leads to decoupled sub-problems for each spatial Fourier order. The proposed methodology aims to study the effect of uncertainties on the dynamic behavior of rotors using the multi-stage cyclic symmetry approach and polynomial chaos expansion. Uncertainties may arise from blade wear, temperature changes or manufacturing tolerances. As a first approach, only uncertainties resulting from uniform wear of the whole blade set are studied. These can be modeled by considering a global variation of the material properties of all the blades of a particular stage. The multi-stage cyclic symmetry approach can then be used, since the assumption of identical sectors is preserved. The positive definiteness of the random matrices involved is ensured by the use of a gamma distribution well suited to the physics of the problem, which implies the choice of Laguerre polynomials as the polynomial chaos basis. 
First, numerical examples representative of different types of turbomachines are introduced in order to assess the robustness of the multi-stage cyclic symmetry method. Then, the results of the random modal analysis and of the random response obtained by polynomial chaos are validated by comparison with Monte Carlo simulations. Beyond the results classically encountered for frequencies and forced responses, the considered uncertainties reveal variations in the mode shapes, which evolve between different mode families in regions of high modal density. These variations lead to significant changes in the overall dynamics of the analyzed structure and must be taken into account in robust design.
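The link between gamma-distributed uncertainty and the Laguerre basis invoked above comes from the orthogonality of Laguerre polynomials with respect to the exponential weight (gamma with shape 1). A one-dimensional sketch, assuming an illustrative exponential response rather than the bladed-disk model itself:

```python
import numpy as np
from numpy.polynomial import laguerre

# Laguerre-chaos expansion of a response of an exponentially distributed
# input X ~ Exp(1): standard Laguerre polynomials are orthonormal for this
# weight, so the mean and variance of the response fall out of the
# coefficients. (f below is an illustrative response, not the thesis model.)
def f(x):
    return np.exp(-0.1 * x)

# Spectral projection with Gauss-Laguerre quadrature: c_k = E[f(X) L_k(X)].
nodes, weights = laguerre.laggauss(20)
deg = 8
coeffs = np.array([np.sum(weights * laguerre.lagval(nodes, np.eye(deg + 1)[k]) * f(nodes))
                   for k in range(deg + 1)])

mean = coeffs[0]               # first coefficient = mean of the response
var = np.sum(coeffs[1:] ** 2)  # orthonormality gives the variance
print(mean, var)
```

For this response the moments are known in closed form (E[f] = 1/1.1, Var[f] = 1/1.2 - 1/1.21), which makes the spectral projection easy to check.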
APA, Harvard, Vancouver, ISO, and other styles
17

Svobodová, Miriam. "Dynamika soustav těles s neurčitostním modelem vzájemné vazby." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2020. http://www.nusl.cz/ntk/nusl-418197.

Full text
Abstract:
This diploma thesis deals with the evaluation of the impact of uncertain stiffness on tool deviation during a grooving process. Insufficient stiffness in individual parts of the machine gives rise to mechanical vibration during the cutting process, which may damage the surface of the workpiece, the tool or the machine itself. The stiffness changes as a result of tool wear, the chosen cutting conditions and many other factors. The first part gives a theoretical introduction to the field of uncertainty and selects suitable solution methods: Monte Carlo and polynomial chaos expansion, both implemented in MATLAB. Both methods are first tested on simple systems with uncertain stiffness inputs; these systems stand in for the stiffness characteristics of the individual supporting parts. A model of turning during the grooving process with 3 degrees of freedom is then defined, and uncertainty analyses as well as sensitivity analyses with respect to the uncertain stiffness inputs are again carried out with both methods. Finally, the two methods are compared in terms of computation time and precision. The gathered data make it clear that a change in stiffness has a significant impact on the vibration in all degrees of freedom of the analysed model. As an example, the maximum and minimum tool deviations over the uncertain workpiece stiffness were calculated via the Monte Carlo method. The stiffness of the ball screw has the biggest impact on the resulting vibration of the tool. The solution was developed to support a more stable cutting process.
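The Monte Carlo propagation step described in this abstract can be sketched on a single uncertain stiffness. The lognormal stiffness model and all numbers below are illustrative assumptions, not values from the thesis:

```python
import numpy as np

rng = np.random.default_rng(1)

# Single-DOF stand-in for one stiffness element of the machine-tool model
# (assumption: values are illustrative, not from the thesis).
F = 100.0                        # static cutting force [N]
mu, sigma = np.log(1e4), 0.1     # lognormal stiffness parameters [N/mm]

# Monte Carlo propagation: sample stiffness, compute tool deflection u = F/k.
k = rng.lognormal(mu, sigma, size=200_000)
u = F / k
mc_mean, mc_std = u.mean(), u.std()

# For a lognormal k, 1/k is lognormal too, so the moments are known in
# closed form and provide a check on the Monte Carlo estimates.
exact_mean = F * np.exp(-mu + 0.5 * sigma**2)
exact_std = exact_mean * np.sqrt(np.exp(sigma**2) - 1.0)
print(mc_mean, exact_mean)
```

A polynomial chaos surrogate would replace the 200 000 model evaluations by a handful of runs plus an algebraic post-processing of the coefficients, which is exactly the trade-off the thesis compares.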
APA, Harvard, Vancouver, ISO, and other styles
18

Ghannoum, Abir. "EDSs réfléchies en moyenne avec sauts et EDSs rétrogrades de type McKean-Vlasov : étude théorique et numérique." Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAM068.

Full text
Abstract:
Cette thèse est consacrée à l'étude théorique et numérique de deux principaux sujets de recherche: les équations différentielles stochastiques (EDSs) réfléchies en moyenne avec sauts et les équations différentielles stochastiques rétrogrades (EDSRs) de type McKean-Vlasov.Le premier travail de ma thèse établit la propagation du chaos pour les EDSs réfléchies en moyenne avec sauts. Nous avons étudié dans un premier temps l'existence et l'unicité d'une solution. Nous avons développé ensuite un schéma numérique via le système de particules. Enfin nous avons obtenu une vitesse de convergence pour ce schéma.Le deuxième travail de ma thèse consiste à étudier les EDSRs de type McKean-Vlasov. Nous avons prouvé l'existence et l'unicité de solutions de telles équations, et nous avons proposé une approximation numérique basée sur la décomposition en chaos de Wiener ainsi que sa vitesse de convergence.Le troisième travail de ma thèse s'intéresse à une autre type de simulation pour les EDSRs de type McKean-Vlasov. Nous avons proposé un schéma numérique basé sur l'approximation du mouvement brownien par une marche aléatoire et nous avons obtenu une vitesse de convergence pour ce schéma.Par ailleurs, quelques exemples numériques dans ces trois travaux permettent de constater l'efficacité de nos schémas et les vitesses de convergences annoncées par les résultats théoriques
This thesis is devoted to the theoretical and numerical study of two main subjects in the context of stochastic differential equations (SDEs): mean reflected SDEs with jumps and McKean-Vlasov backward SDEs.The first part of my thesis establishes the propagation of chaos for the mean reflected SDEs with jumps. First, we study the existence and uniqueness of a solution. Then, we develop a numerical scheme based on the particle system. Finally, we obtain the rate of convergence of this scheme.The second part of my thesis studies the McKean-Vlasov backward SDEs. In this case, we prove the existence and uniqueness of a solution for such equations. Then, thanks to the Wiener chaos expansion, we provide a numerical approximation. Moreover, the convergence rate of this approximation is also determined.The third part of my thesis proposes another type of simulation for the McKean-Vlasov backward SDEs. Due to the approximation of Brownian motion by a scaled random walk, we develop a numerical scheme and we get its convergence rate.In addition, a few numerical examples in these three parts are given to illustrate the efficiency of our schemes and their convergence rates stated by the theoretical results
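The particle-system scheme for mean reflected SDEs mentioned above can be sketched in its simplest form. This is an illustrative sketch under strong simplifying assumptions (a driftless diffusion, the constraint h(x) = x, no jumps), none of which exhausts the setting of the thesis:

```python
import numpy as np

rng = np.random.default_rng(8)

# Particle scheme for a mean-reflected SDE, sketched for dX_t = dB_t with
# the constraint E[X_t] >= 0: at each Euler step all particles are pushed up
# by just enough for the empirical mean to satisfy the constraint.
N, dt, T = 5_000, 0.01, 1.0
X = np.zeros(N)
K = 0.0                          # accumulated reflection term

for _ in range(int(T / dt)):
    X += np.sqrt(dt) * rng.standard_normal(N)
    push = max(0.0, -X.mean())   # empirical version of the mean constraint
    X += push
    K += push

print(X.mean(), K)               # the empirical mean never ends below 0
```

The convergence rates proved in the thesis quantify how fast such an N-particle empirical constraint approaches the true expectation constraint as N grows.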
APA, Harvard, Vancouver, ISO, and other styles
19

Ene, Simon. "Analys av osäkerheter vid hydraulisk modellering av torrfåror." Thesis, Uppsala universitet, Institutionen för geovetenskaper, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-448369.

Full text
Abstract:
Hydraulisk modellering är ett viktigt verktyg vid utvärdering av lämpliga åtgärder för torrfåror. Modelleringen påverkas dock alltid av osäkerheter och om dessa är stora kan en modells simuleringsresultat bli opålitligt. Det kan därför vara viktigt att presentera dess simuleringsresultat tillsammans med osäkerheter. Denna studie utreder olika typer av osäkerheter som kan påverka hydrauliska modellers simuleringsresultat. Dessutom utförs känslighetsanalyser där en andel av osäkerheten i simuleringsresultatet tillskrivs de olika inmatningsvariablerna som beaktas. De parametrar som ingår i analysen är upplösningen i använd terrängmodell, upplösning i den hydrauliska modellens beräkningsnät, inflöde till modellen och råheten genom Mannings tal. Studieobjektet som behandlades i denna studie var en torrfåra som ligger nedströms Sandforsdammen i Skellefteälven och programvaran TELEMAC-MASCARET nyttjades för samtliga hydrauliska simuleringar i denna studie.  För att analysera osäkerheter kopplade till upplösning i en terrängmodell och ett beräkningsnät användes ett kvalitativt tillvägagångsätt. Ett antal simuleringar utfördes där alla parametrar förutom de kopplade till upplösning fixerades. Simuleringsresultaten illustrerades sedan genom profil, sektioner, enskilda raster och raster som visade differensen mellan olika simuleringar. Resultaten för analysen visade att en låg upplösning i terrängmodeller och beräkningsnät kan medföra osäkerheter lokalt där det är högre vattenhastigheter och där det finns stor variation i geometrin. Några signifikanta effekter kunde dock inte skönjas på större skala.  Separat gjordes kvantitativa osäkerhets- och känslighetsanalyser för vattendjup och vattenhastighet i torrfåran. Inmatningsparametrarna inflöde till modellen och råhet genom Mannings tal ansågs medföra störst påverkan och övriga parametrar fixerades således. 
Genom script skapade i programmeringsspråket Python tillsammans med biblioteket OpenTURNS upprättades ett stort urval av möjliga kombinationer för storlek på inflöde och Mannings tal. Alla kombinationer som skapades antogs till fullo täcka upp för den totala osäkerheten i inmatningsparametrarna. Genom att använda urvalet för simulering kunde osäkerheten i simuleringsresultaten också beskrivas. Osäkerhetsanalyser utfördes både genom klassisk beräkning av statistiska moment och genom Polynomial Chaos Expansion. En känslighetsanalys följde sedan där Polynomial Chaos Expansion användes för att beräkna Sobols känslighetsindex för inflödet och Mannings tal i varje kontrollpunkt. Den kvantitativa osäkerhetsanalysen visade att det fanns relativt stora osäkerheter för både vattendjupet och vattenhastighet vid behandlat studieobjekt. Flödet bidrog med störst påverkan på osäkerheten medan Mannings tals påverkan var insignifikant i jämförelse, bortsett från ett område i modellen där dess påverkan ökade markant.
Hydraulic modelling is an important tool when measures for dry river stretches are assessed. The modelling is, however, always affected by uncertainties, and if these are large the simulation results from the models can become unreliable. It may therefore be important to present simulation results together with their uncertainties. This study addresses various types of uncertainties that may affect the simulation results of hydraulic models. In addition, a sensitivity analysis is conducted in which a proportion of the uncertainty in the simulation result is attributed to each of the input variables included. The parameters included in the analysis are the terrain model resolution, the hydraulic model mesh resolution, the inflow to the model and Manning's roughness coefficient. The object studied in this thesis was a dry river stretch located downstream of Sandforsdammen in the river Skellefteälven, Sweden. The software TELEMAC-MASCARET was used to perform all hydraulic simulations for this thesis.  To analyze the uncertainties related to the resolution of the terrain model and the mesh, a qualitative approach was used. Several simulations were run in which all parameters except those linked to resolution were fixed. The simulation results were illustrated through individual rasters, profiles, sections and rasters showing the differences between simulations. The results of the analysis showed that a low resolution of terrain models and meshes can lead to local uncertainties where water velocities are higher and where there are large variations in the geometry. However, no significant effects could be discerned on a larger scale.  Separately, quantitative uncertainty and sensitivity analyses were performed for the simulation outputs water depth and water velocity in the dry river stretch. The input parameters assumed to have the biggest impact were the inflow to the model and Manning's roughness coefficient. 
The other model input parameters were fixed. Through scripts created in the programming language Python together with the library OpenTURNS, a large sample of possible combinations of inflow magnitude and Manning's roughness coefficient was created. This sample was assumed to fully cover the uncertainty of the input parameters. After using the sample for simulation, the uncertainty of the simulation results could be described as well. Uncertainty analyses were conducted both through classical calculation of statistical moments and through polynomial chaos expansion. A sensitivity analysis was then conducted with polynomial chaos expansion, in which Sobol's sensitivity indices were calculated for the inflow and Manning's M at each control point. The analysis showed that there were relatively large uncertainties in both the water depth and the water velocity. The inflow had the greatest impact on the uncertainties, while the influence of Manning's M was insignificant in comparison, apart from one area of the model where its impact increased markedly.
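The Monte Carlo side of the workflow above can be sketched with a closed-form stand-in for the hydraulic model. Assumed here: a wide rectangular channel whose normal depth follows from Manning's formula, with illustrative numbers; the thesis itself uses full TELEMAC-MASCARET simulations and OpenTURNS sampling.

```python
import numpy as np

rng = np.random.default_rng(2)

# Normal-depth relation for a wide rectangular channel from Manning's
# formula: Q = (1/n) * b * h**(5/3) * sqrt(S)  =>  h = (Q*n/(b*sqrt(S)))**(3/5)
# (assumption: a closed-form stand-in for the 2-D hydraulic model).
b, S = 20.0, 1e-3                # channel width [m], bed slope [-]

def depth(Q, n):
    return (Q * n / (b * np.sqrt(S))) ** 0.6

# Uncertain inputs: inflow and Manning's roughness, both uniform.
Q = rng.uniform(5.0, 15.0, size=100_000)
n = rng.uniform(0.025, 0.040, size=100_000)
h = depth(Q, n)
print(h.mean(), h.std())         # uncertainty of the simulated water depth

# Crude variance-based sensitivity: freeze one input at its mean and see how
# much output variance remains (a cheap proxy for Sobol-type indices).
var_Q_only = depth(Q, np.full_like(n, n.mean())).var()
var_n_only = depth(np.full_like(Q, Q.mean()), n).var()
print(var_Q_only / h.var(), var_n_only / h.var())
```

Even in this toy version the inflow dominates the output variance, consistent with the conclusion of the study.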
APA, Harvard, Vancouver, ISO, and other styles
20

Kouassi, Attibaud. "Propagation d'incertitudes en CEM. Application à l'analyse de fiabilité et de sensibilité de lignes de transmission et d'antennes." Thesis, Université Clermont Auvergne‎ (2017-2020), 2017. http://www.theses.fr/2017CLFAC067/document.

Full text
Abstract:
De nos jours, la plupart des analyses CEM d’équipements et systèmes électroniques sont basées sur des approches quasi-déterministes dans lesquelles les paramètres internes et externes des modèles sont supposés parfaitement connus et où les incertitudes les affectant sont prises en compte sur les réponses par le biais de marges de sécurité importantes. Or, l’inconvénient de telles approches est qu’elles sont non seulement trop conservatives, mais en outre totalement inadaptées à certaines situations, notamment lorsque l’objectif de l’étude impose de prendre en compte le caractère aléatoire de ces paramètres via des modélisations stochastiques appropriées de type variables, processus ou champs aléatoires. Cette approche probabiliste a fait l’objet ces dernières années d’un certain nombre de recherches en CEM, tant au plan national qu’au plan international. Le travail présenté dans cette thèse est une contribution à ces recherches et a un double objectif : (1) développer et mettre en œuvre une méthodologie probabiliste et ses outils numériques d’accompagnement pour l’évaluation de la fiabilité et l’analyse sensibilité des équipements et systèmes électroniques en se limitant à des modélisations stochastiques par variables aléatoires ; (2) étendre cette étude au cas des modélisations stochastiques par processus et champs aléatoires dans le cadre d’une analyse prospective basée sur la résolution de l’équation aux dérivées partielles des télégraphistes à coefficients aléatoires.L’approche probabiliste mentionnée au point (1) consiste à évaluer la probabilité de défaillance d’un équipement ou d’un système électronique vis-à-vis d’un critère de défaillance donné et à déterminer l’importance relative de chacun des paramètres aléatoires en présence. Les différentes méthodes retenues à cette fin sont des adaptations à la CEM de méthodes développées dans le domaine de la mécanique aléatoire pour les études de propagation d’incertitudes. 
Pour le calcul des probabilités de défaillance, deux grandes catégories de méthodes sont proposées : celles basées sur une approximation de la fonction d’état-limite relative au critère de défaillance et les méthodes de Monte-Carlo basées sur la simulation numérique des variables aléatoires du modèle et l’estimation statistique des probabilités cibles. Pour l’analyse de sensibilité, une approche locale et une approche globale sont retenues. Ces différentes méthodes sont d’abord testées sur des applications académiques afin de mettre en lumière leur intérêt dans le domaine de la CEM. Elles sont ensuite appliquées à des problèmes de lignes de transmission et d’antennes plus représentatifs de la réalité.Dans l’analyse prospective, des méthodes de résolution avancées sont proposées, basées sur des techniques spectrales requérant les développements en chaos polynomiaux et de Karhunen-Loève des processus et champs aléatoires présents dans les modèles. Ces méthodes ont fait l’objet de tests numériques encourageant, mais qui ne sont pas présentés dans le rapport de thèse, faute de temps pour leur analyse complète
Nowadays, most EMC analyses of electronic or electrical devices are based on deterministic approaches, in which the internal and external model parameters are supposed to be perfectly known and the uncertainties affecting them are accounted for on the outputs through very large safety margins. The disadvantage of such approaches is their conservative character and their inadequacy whenever the goal of the study requires the parameter uncertainties to be handled through appropriate stochastic modeling (via random variables, processes or fields). In recent years, this probabilistic approach has been the subject of several research efforts in the EMC community. The work presented here is a contribution to these efforts and has a dual purpose: (1) develop a probabilistic methodology and implement the associated numerical tools for the reliability and sensitivity analyses of electronic devices and systems, assuming stochastic modeling via random variables; (2) extend this study to stochastic modeling using random processes and random fields through a prospective analysis based on the resolution of the telegrapher's equations (partial differential equations) with random coefficients. The first probabilistic approach consists in computing the failure probability of an electronic device or system with respect to a given criterion and in determining the relative importance of each random parameter considered. The methods chosen for this purpose are adaptations to the EMC framework of methods developed in the structural mechanics community for uncertainty propagation studies. The failure probabilities are computed using two types of methods: those based on an approximation of the limit state function associated with the failure criterion, and Monte Carlo methods based on the simulation of the model's random variables and the statistical estimation of the target failure probabilities. 
For the sensitivity analysis, a local approach and a global approach are retained. All these methods are first applied to academic EMC problems in order to illustrate their interest in the EMC field. Next, they are applied to transmission line and antenna problems closer to reality. In the prospective analysis, more advanced resolution methods are proposed. They are based on spectral approaches requiring the polynomial chaos expansions and the Karhunen-Loève expansions of the random processes and random fields considered in the models. Although the first numerical tests of these methods have been encouraging, they are not presented here for lack of time for a complete analysis.
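The two families of failure-probability methods mentioned above can be contrasted on a toy case. Assumed here: a hypothetical linear limit state in two standardized Gaussian parameters, for which the approximation-based (FORM-type) result is exact and can be compared with crude Monte Carlo; it is not an actual EMC model from the thesis.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(3)

# Hypothetical limit state for a failure criterion: a linear combination of
# two standardized uncertain parameters must stay below a threshold beta.
beta = 2.0
def limit_state(x1, x2):
    return beta - (x1 + x2) / sqrt(2.0)

# Crude Monte Carlo estimator of the failure probability P[g < 0].
N = 1_000_000
x = rng.standard_normal((N, 2))
pf_mc = np.mean(limit_state(x[:, 0], x[:, 1]) < 0.0)

# Because this limit state is linear in Gaussian inputs, the first-order
# approximation is exact: pf = Phi(-beta), a reference for the estimator.
pf_exact = 0.5 * (1.0 + erf(-beta / sqrt(2.0)))
print(pf_mc, pf_exact)
```

For the small failure probabilities typical of reliability studies, the sample size needed by crude Monte Carlo grows like 1/pf, which is what motivates the approximation-based alternatives.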
APA, Harvard, Vancouver, ISO, and other styles
21

Mulani, Sameer B. "Uncertainty Quantification in Dynamic Problems With Large Uncertainties." Diss., Virginia Tech, 2006. http://hdl.handle.net/10919/28617.

Full text
Abstract:
This dissertation investigates uncertainty quantification in dynamic problems. The Advanced Mean Value (AMV) method is used to calculate the probabilistic sound power and its sensitivity for elastically supported panels with small uncertainty (coefficient of variation). Sound power calculations are done using the Finite Element Method (FEM) and the Boundary Element Method (BEM). The sensitivities of the sound power are calculated through direct differentiation of the FEM/BEM/AMV equations. The results are compared with Monte Carlo simulation (MCS). An improved method is developed using AMV, a metamodel, and MCS. This new technique is applied to calculate the sound power of a composite panel using FEM and the Rayleigh integral. The proposed methodology shows considerable improvement both in terms of accuracy and computational efficiency. In systems with large uncertainties, the above approach does not work. Two Spectral Stochastic Finite Element Method (SSFEM) algorithms are therefore developed to solve stochastic eigenvalue problems using polynomial chaos. Presently, the approaches are restricted to problems with real and distinct eigenvalues. In both approaches, the system uncertainties are modeled by Wiener-Askey orthogonal polynomial functions. Galerkin projection is applied in the probability space to minimize the weighted residual of the error of the governing equation. The first algorithm is based on the inverse iteration method; a modification is suggested to calculate higher eigenvalues and eigenvectors. This algorithm is applied to both discrete and continuous systems. In continuous systems, the uncertainties are modeled as Gaussian processes using the Karhunen-Loeve (KL) expansion. The second algorithm is based on the implicit polynomial iteration method and is found to be more efficient when applied to discrete systems. However, its application to continuous systems results in ill-conditioned system matrices, which seriously limits its application. 
Lastly, an algorithm to find the basis random variables of the KL expansion for non-Gaussian processes is developed. The basis random variables are obtained via a nonlinear transformation of the marginal cumulative distribution function using the standard deviation. Results are obtained for three known skewed distributions: Log-Normal, Beta, and Exponential. In all cases, the proposed algorithm matches the known solutions very well and can be applied to solve non-Gaussian processes using SSFEM.
Ph. D.
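The KL representation of a Gaussian process used above can be sketched in its discrete form, assuming an illustrative 1-D exponential covariance (parameters chosen for illustration, not taken from the dissertation):

```python
import numpy as np

rng = np.random.default_rng(4)

# Discrete Karhunen-Loeve expansion of a 1-D Gaussian process with an
# exponential covariance on a regular grid of [0, 1].
n, L, sigma = 200, 0.3, 1.0
x = np.linspace(0.0, 1.0, n)
C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / L)

# KL modes = eigenpairs of the covariance matrix, sorted by decreasing energy.
eigval, eigvec = np.linalg.eigh(C)
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]

# Truncate to the m modes capturing 99% of the variance (trace of C).
energy = np.cumsum(eigval) / eigval.sum()
m = int(np.searchsorted(energy, 0.99) + 1)

# One realization of the process: modes weighted by standard normal variables.
xi = rng.standard_normal(m)
field = eigvec[:, :m] @ (np.sqrt(eigval[:m]) * xi)
print(m, field.shape)
```

Replacing the standard normal weights by the transformed basis random variables discussed in the abstract is what extends this construction to non-Gaussian processes.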
APA, Harvard, Vancouver, ISO, and other styles
22

Huehne, Florian. "Levy processes and chaos expansions in finance." Thesis, University of Oxford, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.531826.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Pédèches, Laure. "Stochastic models for collective motions of populations." Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30083/document.

Full text
Abstract:
Dans cette thèse, on s'intéresse à des systèmes stochastiques modélisant un des phénomènes biologiques les plus mystérieux, les mouvements collectifs de populations. Pour un groupe de N individus, vus comme des particules sans poids ni volume, on étudie deux types de comportements asymptotiques : d'un côté, en temps long, les propriétés d'ergodicité et de flocking, de l'autre, quand le nombre de particules N tend vers l'infini, les phénomènes de propagation du chaos. Le modèle, déterministe, de Cucker-Smale, un modèle cinétique de champ moyen pour une population sans structure hiérarchique, est notre point de départ : les deux premiers chapitres sont consacrés à la compréhension de diverses dynamiques stochastiques qui s'en inspirent, du bruit étant rajouté sous différentes formes. Le troisième chapitre, originellement une tentative d'amélioration de ces résultats, est basé sur la méthode du développement en amas, un outil de physique statistique. On prouve l'ergodicité exponentielle de certains processus non- markoviens à drift non-régulier. Dans la dernière partie, on démontre l'existence d'une solution, unique dans un certain sens, pour un système stochastique de particules associé au modèle chimiotactique de Keller et Segel
In this thesis, stochastic dynamics modelling collective motions of populations, one of the most mysterious types of biological phenomena, are considered. For a system of N particle-like individuals, two kinds of asymptotic behaviours are studied: ergodicity and flocking properties, in long time, and propagation of chaos, when the number N of agents goes to infinity. Cucker and Smale's deterministic, mean-field kinetic model for a population without a hierarchical structure is the starting point of our journey: the first two chapters are dedicated to the understanding of various stochastic dynamics it inspires, with random noise added in different ways. The third chapter, an attempt to improve those results, is built upon the cluster expansion method, a technique from statistical mechanics. Exponential ergodicity is obtained for a class of non-Markovian processes with non-regular drift. In the final part, the focus shifts onto a stochastic system of interacting particles derived from Keller and Segel's 2-D parabolic-elliptic model for chemotaxis. Existence and weak uniqueness are proven.
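The kind of stochastic Cucker-Smale dynamics studied in the first chapters can be sketched with an Euler-Maruyama scheme. The constant communication weight and small additive noise below are simplifying assumptions for illustration; the thesis treats more general kernels and noise structures.

```python
import numpy as np

rng = np.random.default_rng(5)

# Euler-Maruyama simulation of a stochastic Cucker-Smale system: each agent
# relaxes toward the mean velocity of the group, perturbed by small additive
# noise (communication weight psi = 1 is a simplifying assumption).
N, dt, T, sig = 10, 0.01, 5.0, 0.01
x = rng.uniform(-1.0, 1.0, size=(N, 2))     # positions in the plane
v = rng.uniform(-1.0, 1.0, size=(N, 2))     # velocities

spread0 = np.var(v, axis=0).sum()
for _ in range(int(T / dt)):
    v_mean = v.mean(axis=0)
    v += (v_mean - v) * dt + sig * np.sqrt(dt) * rng.standard_normal((N, 2))
    x += v * dt

spread = np.var(v, axis=0).sum()
print(spread0, spread)  # velocities align: the spread collapses (flocking)
```

With constant communication weight the velocity deviations contract exponentially, so the residual spread is set only by the noise level; the interesting regimes of the thesis arise when the weight decays with inter-agent distance.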
APA, Harvard, Vancouver, ISO, and other styles
24

Alhajj, Chehade Hicham. "Geosynthetic-Reinforced Retaining Walls-Deterministic And Probabilistic Approaches." Thesis, Université Grenoble Alpes, 2021. http://www.theses.fr/2021GRALI010.

Full text
Abstract:
L'objectif de cette thèse est de développer, dans le cadre de la mécanique des sols, des méthodes d’analyse de la stabilité interne des murs de soutènement renforcés par géosynthétiques sous chargement sismique. Le travail porte d'abord sur des analyses déterministes, puis est étendu à des analyses probabilistes. Dans la première partie de cette thèse, un modèle déterministe, basé sur le théorème cinématique de l'analyse limite, est proposé pour évaluer le facteur de sécurité d’un mur en sol renforcé ou la résistance nécessaire du renforcement pour stabiliser la structure. Une technique de discrétisation spatiale est utilisée pour générer une surface de rupture rotationnelle, afin de pouvoir considérer des remblais hétérogènes et/ou de représenter le chargement sismique par une approche de type pseudo-dynamique. Les cas de sols secs, non saturés et saturés sont étudiés. La présence de fissures dans le sol est également prise en compte. Ce modèle déterministe permet d’obtenir des résultats rigoureux et est validé par confrontation avec des résultats existants dans la littérature. Dans la deuxième partie du mémoire de thèse, ce modèle déterministe est utilisé dans un cadre probabiliste. Tout d'abord, l’approche en variables aléatoires est utilisée. Les incertitudes considérées concernent les paramètres de résistance au cisaillement du sol, la charge sismique et la résistance des renforcements. L'expansion du chaos polynomial qui consiste à remplacer le modèle déterministe coûteux par un modèle analytique, combinée avec la technique de simulation de Monte Carlo est la méthode fiabiliste considérée pour effectuer l'analyse probabiliste. L'approche en variables aléatoires néglige la variabilité spatiale du sol puisque les propriétés du sol et les autres paramètres modélisés par des variables aléatoires, sont considérés comme constants dans chaque simulation déterministe. 
Pour cette raison, dans la dernière partie du manuscrit, la variabilité spatiale du sol est considérée en utilisant la théorie des champs aléatoires. La méthode SIR/A-bSPCE, une combinaison entre la technique de réduction dimensionnelle SIR (Sliced Inverse Regression) et une expansion de chaos polynomial adaptative (A-bSPCE), est la méthode fiabiliste considérée pour effectuer l'analyse probabiliste. Le temps de calcul total de l'analyse probabiliste, effectuée à l'aide de la méthode SIR-SPCE, est considérablement réduit par rapport à l'exécution directe des méthode probabilistes classiques. Seuls les paramètres de résistance du sol sont modélisés à l'aide de champs aléatoires, afin de se concentrer sur l'effet de la variabilité spatiale sur les résultats fiabilistes
The aim of this thesis is to assess the seismic internal stability of geosynthetic-reinforced soil retaining walls. The work first deals with deterministic analyses and then focuses on probabilistic ones. In the first part of this thesis, a deterministic model, based on the upper-bound theorem of limit analysis, is proposed for assessing the reinforced soil wall safety factor or the reinforcement strength required to stabilize the structure. A spatial discretization technique is used to generate the rotational failure surface, making it possible to consider heterogeneous backfills and/or to represent the seismic loading by the pseudo-dynamic approach. The cases of dry, unsaturated and saturated soils are investigated. Additionally, the presence of cracks in the backfill soil is considered. This deterministic model gives rigorous results and is validated by confrontation with existing results from the literature. Then, in the second part of the thesis, this deterministic model is used in a probabilistic framework. First, the uncertain input parameters are modeled using random variables. The uncertainties considered involve the soil shear strength parameters, the seismic loading and the reinforcement strength parameters. A sparse polynomial chaos expansion, which replaces the computationally expensive deterministic model by a metamodel, combined with Monte Carlo simulations, is the reliability method used to carry out the probabilistic analysis. The random-variables approach neglects the soil spatial variability, since the soil properties and the other uncertain input parameters are considered constant in each deterministic simulation. Therefore, in the last part of the manuscript, the soil spatial variability is considered using random field theory. 
The SIR/A-bSPCE method, a combination of the dimension-reduction technique Sliced Inverse Regression (SIR) and an active-learning sparse polynomial chaos expansion (A-bSPCE), is implemented to carry out the probabilistic analysis. The total computational time of the probabilistic analysis, performed using SIR-SPCE, is significantly reduced compared to directly running classical probabilistic methods. Only the soil strength parameters are modeled using random fields, in order to focus on the effect of spatial variability on the reliability results.
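The metamodel-plus-Monte-Carlo reliability loop described above can be sketched in one dimension. The infinite-slope safety factor below is a hypothetical stand-in for the limit-analysis model of the thesis, used only because its failure probability is known exactly:

```python
import numpy as np

rng = np.random.default_rng(7)

# Reliability analysis with a polynomial chaos metamodel: fit a 1-D Legendre
# PCE to a simple safety-factor model, then run cheap Monte Carlo on the
# surrogate. (Infinite-slope FS = tan(phi)/tan(beta); angles in radians.)
beta = np.deg2rad(30.0)
lo, hi = np.deg2rad(25.0), np.deg2rad(40.0)

def fs(phi):
    return np.tan(phi) / np.tan(beta)

# Standardize phi to xi in [-1, 1] and fit a degree-5 PCE by regression.
xi_fit = np.linspace(-1.0, 1.0, 50)
phi_fit = lo + (xi_fit + 1.0) / 2.0 * (hi - lo)
coeffs = np.polynomial.legendre.legfit(xi_fit, fs(phi_fit), deg=5)

# Monte Carlo on the surrogate: probability that FS drops below 1.
xi = rng.uniform(-1.0, 1.0, size=100_000)
pf = np.mean(np.polynomial.legendre.legval(xi, coeffs) < 1.0)
print(pf)  # for this toy model the exact value is P[phi < beta] = 1/3
```

The 100 000 surrogate evaluations cost next to nothing, which is the point of replacing the expensive limit-analysis model by a metamodel before estimating failure probabilities.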
APA, Harvard, Vancouver, ISO, and other styles
25

Braun, Mathias. "Reduced Order Modelling and Uncertainty Propagation Applied to Water Distribution Networks." Thesis, Bordeaux, 2019. http://www.theses.fr/2019BORD0050/document.

Full text
Abstract:
Water distribution systems are large, spatially distributed infrastructures that ensure the distribution of potable water of sufficient quantity and quality. Mathematical models of these systems are characterized by a large number of state variables and parameters. Two major challenges are the time constraints on the solution and the uncertain character of the model parameters. The main objectives of this thesis are thus the investigation of projection-based reduced order modelling techniques for the time-efficient solution of the hydraulic system, as well as the spectral propagation of parameter uncertainties for improved uncertainty quantification. The thesis gives an overview of the mathematical methods that are being used. This is followed by the definition and discussion of the hydraulic network model, for which a new method for the derivation of the sensitivities is presented based on the adjoint method. The specific objectives for the development of reduced order models are the application of projection-based methods, the development of more efficient adaptive sampling strategies, and the use of hyper-reduction methods for the fast evaluation of non-linear residual terms. For the propagation of uncertainties, spectral methods are introduced into the hydraulic model and an intrusive hydraulic model is formulated. With the objective of a more efficient analysis of the parameter uncertainties, the spectral propagation is then evaluated on the basis of the reduced model. The results show that projection-based reduced order models offer a considerable benefit with respect to the computational effort. While the use of adaptive sampling resulted in a more efficient use of pre-calculated system states, the use of hyper-reduction methods could not reduce the computational burden and has to be explored further. The propagation of the parameter uncertainties on the basis of the spectral methods is shown to be comparable to Monte Carlo simulations in accuracy, while significantly reducing the computational effort.
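The abstract's central claim, that spectral (polynomial chaos) propagation matches Monte Carlo accuracy at a fraction of the cost, can be illustrated with a minimal sketch on a toy problem. Everything below is illustrative and not from the thesis: the network model is replaced by f(x) = exp(x) with a standard normal input, for which the exact mean and variance are known.

```python
import math
import numpy as np
from numpy.polynomial import hermite, hermite_e

# Toy model: y = f(x) = exp(x) with x ~ N(0, 1); exact mean is exp(1/2).
f = np.exp

# Spectral propagation: project f onto probabilists' Hermite polynomials He_n,
# c_n = E[f(X) He_n(X)] / n!, computed by Gauss-Hermite quadrature mapped to N(0, 1).
deg, nquad = 10, 60
t, w = hermite.hermgauss(nquad)             # physicists' nodes and weights
x, w = np.sqrt(2.0) * t, w / np.sqrt(np.pi)
fact = np.array([math.factorial(n) for n in range(deg + 1)], dtype=float)
eye = np.eye(deg + 1)
c = np.array([np.sum(w * f(x) * hermite_e.hermeval(x, eye[n]))
              for n in range(deg + 1)]) / fact

pce_mean = c[0]                             # mean = zeroth chaos coefficient
pce_var = np.sum(c[1:] ** 2 * fact[1:])     # variance from orthogonality of He_n

# Plain Monte Carlo reference with 10^5 model evaluations.
rng = np.random.default_rng(0)
mc = f(rng.standard_normal(100_000))
mc_mean, mc_var = mc.mean(), mc.var()

print(pce_mean, mc_mean)                    # both approximate exp(0.5) ~ 1.6487
```

With ten chaos terms and 60 quadrature nodes the spectral mean is accurate to many digits from only 60 model evaluations, while the Monte Carlo estimate still carries sampling noise after 10^5 runs, which is the cost gap the abstract refers to.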
APA, Harvard, Vancouver, ISO, and other styles
26

Blatman, Géraud. "Adaptive sparse polynomial chaos expansions for uncertainty propagation and sensitivity analysis." Clermont-Ferrand 2, 2009. https://tel.archives-ouvertes.fr/tel-00440197.

Full text
Abstract:
This thesis fits within the general context of uncertainty propagation and sensitivity analysis for numerical simulation models, with industrial applications in view. Its objective is to carry out such studies while minimizing the number of potentially costly model evaluations. The present work relies on an approximation of the model response on a polynomial chaos (PC) basis, which allows post-processing at negligible computational cost. However, fitting the PC expansion may require a considerable number of model calls when the model depends on a large number of parameters (e.g., more than 10). To circumvent this problem, two algorithms are proposed to select only a small number of important terms in the PC representation, namely a stepwise regression procedure and a procedure based on Least Angle Regression (LAR). The few coefficients of the resulting sparse PC expansions can thus be determined from a reduced number of model evaluations. The methods are validated on academic test cases in mechanics, then applied to the industrial case of the integrity analysis of a pressurized water reactor vessel. The results obtained confirm the efficiency of the proposed methods for handling high-dimensional uncertainty propagation and sensitivity analysis problems.
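To make the term-selection idea concrete, here is a minimal greedy stepwise-regression sketch in the spirit of the first of the two algorithms. Everything is made up for illustration (the `herme` helper, the candidate basis, and the noise-free "true model" that is sparse in three Hermite terms); it is not the thesis's implementation.

```python
import itertools
import numpy as np

def herme(n, x):
    """Probabilists' Hermite polynomial He_n via the three-term recurrence."""
    h_prev, h = np.ones_like(x), x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, x * h - k * h_prev
    return h

rng = np.random.default_rng(1)
X = rng.standard_normal((300, 2))
# Hypothetical sparse "true model": only three active chaos terms.
y = 1.0 + 2.0 * herme(1, X[:, 0]) + 3.0 * herme(2, X[:, 1])

# Candidate basis: all tensorized Hermite terms up to total degree 3.
indices = [a for a in itertools.product(range(4), repeat=2) if sum(a) <= 3]
Phi = np.column_stack([herme(a, X[:, 0]) * herme(b, X[:, 1]) for a, b in indices])

# Greedy forward selection: repeatedly add the candidate term that most
# reduces the least-squares residual, and stop once the fit is (near) exact.
selected = []
best = (np.inf, None, None)
for _ in range(len(indices)):
    best = (np.inf, None, None)
    for j in (j for j in range(len(indices)) if j not in selected):
        cols = Phi[:, selected + [j]]
        coef, *_ = np.linalg.lstsq(cols, y, rcond=None)
        res = np.linalg.norm(y - cols @ coef)
        if res < best[0]:
            best = (res, j, coef)
    selected.append(best[1])
    if best[0] < 1e-10:
        break

print([indices[j] for j in selected])   # sparse set of retained multi-indices
```

On this noise-free toy the procedure recovers the three active multi-indices from a 10-term candidate basis, which is the "sparse PC from few runs" mechanism the abstract describes; LAR plays the same role more efficiently in high dimension.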
APA, Harvard, Vancouver, ISO, and other styles
27

Kassir, Wafaa. "Approche probabiliste non gaussienne des charges statiques équivalentes des effets du vent en dynamique des structures à partir de mesures en soufflerie." Thesis, Paris Est, 2017. http://www.theses.fr/2017PESC1116/document.

Full text
Abstract:
In order to estimate the equivalent static wind loads, which produce the extreme quasi-static and dynamical responses of structures submitted to the random unsteady pressure field induced by wind effects, a new probabilistic method is proposed. This method allows for computing the equivalent static wind loads for structures with complex aerodynamic flows such as stadium roofs, for which the pressure field is non-Gaussian, and for which the dynamical response of the structure cannot simply be described by using only the first elastic modes (but requires a good representation of the quasi-static responses). Usually, wind tunnel measurements of the unsteady pressure field applied to a structure with complex geometry are not sufficient for constructing a statistically converged estimation of the extreme values of the dynamical responses. Such a convergence is necessary for the estimation of the equivalent static loads in order to reproduce the extreme dynamical responses induced by the wind effects, taking into account the non-Gaussianity of the random unsteady pressure field. In this work, (1) a generator of realizations of the non-Gaussian unsteady pressure field is constructed by using the realizations that are measured in the boundary layer wind tunnel; this generator, based on a polynomial chaos representation, allows for generating a large number of independent realizations in order to obtain the convergence of the extreme value statistics of the dynamical responses; (2) a reduced-order model with quasi-static acceleration terms is constructed, which allows for accelerating the convergence of the structural dynamical responses by using only a small number of elastic modes of the structure; (3) a novel probabilistic method is proposed for estimating the equivalent static wind loads induced by the wind effects on complex structures that are described by finite element models, preserving the non-Gaussian property and without introducing the concept of response envelopes. The proposed approach is experimentally validated with a relatively simple application and is then applied to a stadium roof structure for which experimental measurements of unsteady pressures have been performed in a boundary layer wind tunnel.
APA, Harvard, Vancouver, ISO, and other styles
28

Cooper, Rachel Gray. "Augmented Neural Network Surrogate Models for Polynomial Chaos Expansions and Reduced Order Modeling." Thesis, Virginia Tech, 2021. http://hdl.handle.net/10919/103423.

Full text
Abstract:
Mathematical models describing real-world processes are becoming increasingly complex to better match the dynamics of the true system. While this is a positive step towards more complete knowledge of our world, numerical evaluations of these models become increasingly computationally expensive, requiring more resources or time to evaluate. This has led to the need for simplified surrogates of these complex mathematical models. A growing surrogate modeling solution is the usage of neural networks. Neural networks (NN) are known to generalize an approximation across a diverse dataset and to approximate solutions along complex nonlinear boundaries. Additionally, these surrogate models can be found using only incomplete knowledge of the true dynamics. However, NN surrogates often suffer from a lack of interpretability, where the decisions made in the training process are not fully understood, and the roles of individual neurons are not well defined. We present two solutions to this lack of interpretability. The first focuses on mimicking polynomial chaos (PC) modeling techniques, modifying the structure of a NN to produce polynomial approximations of the underlying dynamics. This methodology allows for an extractable meaning from the network and results in an improvement in accuracy over traditional PC methods. Secondly, we examine the construction of a reduced order modeling scheme using NN autoencoders, guiding the decisions of the training process to better match the real dynamics. This guiding process is performed via a physics-informed (PI) penalty, which speeds up training convergence but still performs poorly compared to traditional schemes.
Master of Science
The world is an elaborate system of relationships between diverse processes. To accurately represent these relationships, increasingly complex models are defined to better match what is physically seen. These complex models can lead to issues when trying to use them to predict a realistic outcome, either requiring immensely powerful computers to run the simulations or long amounts of time to present a solution. To fix this, surrogates or approximations to these complex models are used. These surrogate models aim to reduce the resources needed to calculate a solution while remaining as accurate to the more complex model as possible. One way to make these surrogate models is through neural networks. Neural networks try to simulate a brain, making connections between some input and output given to the network. In the case of surrogate modeling, the input is some current state of the true process, and the output is what is seen later from the same system. But much like the human brain, the reasoning behind why choices are made when connecting the input and outputs is often largely unknown. Within this paper, we seek to add meaning to neural network surrogate models in two different ways. In the first, we change what each piece in a neural network represents to build large polynomials (e.g., x^5 + 4x^2 + 2) to approximate the larger complex system. We show that the building of these polynomials via neural networks performs much better than traditional ways to construct them. For the second, we guide the choices made by the neural network by enforcing restrictions in what connections it can make. We do this by using additional information from the larger system to ensure the connections made focus on the most important information first before trying to match the less important patterns. This guiding process leads to more information being captured when the surrogate model is compressed into only a few dimensions compared to traditional methods.
Additionally, it allows for a faster learning time compared to similar surrogate models without the information.
APA, Harvard, Vancouver, ISO, and other styles
29

Bourgey, Florian. "Stochastic approximations for financial risk computations." Thesis, Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAX052.

Full text
Abstract:
In this thesis, we investigate several stochastic approximation methods for both the computation of financial risk measures and the pricing of derivatives. As closed-form expressions are scarcely available for such quantities, the need for fast, efficient, and reliable analytic approximation formulas is of primal importance to financial institutions. We aim at giving a broad overview of such approximation methods and we focus on three distinct approaches. In the first part, we study some Multilevel Monte Carlo approximation methods and apply them to two practical problems: the estimation of quantities involving nested expectations (such as the initial margin) along with the discretization of integrals arising in rough forward variance models for the pricing of VIX derivatives. For both cases, we analyze the properties of the corresponding asymptotically-optimal multilevel estimators and numerically demonstrate the superiority of multilevel methods compared to a standard Monte Carlo. In the second part, motivated by the numerous examples arising in credit risk modeling, we propose a general framework for meta-modeling large sums of weighted Bernoulli random variables which are conditionally independent given a common factor X. Our generic approach is based on a Polynomial Chaos Expansion on the common factor together with some Gaussian approximation. L2 error estimates are given when the factor X is associated with classical orthogonal polynomials. Finally, in the last part of this dissertation, we deal with small-time asymptotics and provide asymptotic expansions for both American implied volatility and American option prices in local volatility models. We also investigate a weak approximation for the VIX index in rough forward variance models expressed in terms of lognormal proxies and derive expansion results for VIX derivatives with explicit coefficients.
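The multilevel Monte Carlo idea from the first part can be sketched on a toy problem: a geometric Brownian motion with Euler discretization, standing in for the thesis's nested-expectation and VIX settings. All parameters and names here are invented for the sketch. The expectation at the finest discretization level is written as a telescoping sum of level corrections, each estimated with coupled fine/coarse paths sharing the same Brownian increments.

```python
import numpy as np

rng = np.random.default_rng(42)
s0, r, sigma, T = 1.0, 0.05, 0.2, 1.0   # toy GBM parameters (invented)

def level_correction(level, n_samples):
    """Estimate E[P_fine - P_coarse] with coupled Euler paths; P is S_T."""
    nf = 2 ** level                      # number of fine time steps
    dtf = T / nf
    dw = rng.normal(0.0, np.sqrt(dtf), size=(n_samples, nf))
    sf = np.full(n_samples, s0)
    for k in range(nf):
        sf = sf * (1 + r * dtf + sigma * dw[:, k])
    if level == 0:
        return sf.mean()                 # base level: plain estimator
    # Coarse path reuses the fine path's Brownian increments, pairwise summed.
    dtc = T / (nf // 2)
    dwc = dw[:, 0::2] + dw[:, 1::2]
    sc = np.full(n_samples, s0)
    for k in range(nf // 2):
        sc = sc * (1 + r * dtc + sigma * dwc[:, k])
    return (sf - sc).mean()

L = 5
mlmc_estimate = sum(level_correction(l, 40_000) for l in range(L + 1))
print(mlmc_estimate, s0 * np.exp(r * T))   # telescoping estimate vs exact E[S_T]
```

The coupling makes the level corrections small in variance, so most samples can be spent on the cheap coarse levels, which is the source of the cost advantage over a single-level Monte Carlo at the finest discretization.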
APA, Harvard, Vancouver, ISO, and other styles
30

Lebon, Jérémy. "Towards multifidelity uncertainty quantification for multiobjective structural design." Phd thesis, Université de Technologie de Compiègne, 2013. http://tel.archives-ouvertes.fr/tel-01002392.

Full text
Abstract:
This thesis addresses multi-objective optimization under uncertainty in structural design. We investigate Polynomial Chaos Expansion (PCE) surrogates, which require extensive training sets. We then face two issues: the high computational cost of an individual finite element simulation and its limited precision. From a numerical point of view, and in order to limit the computational expense of the PCE construction, we particularly focus on sparse PCE schemes. We also develop a custom Latin Hypercube Sampling scheme taking into account the finite precision of the simulation. From the modeling point of view, we propose a multifidelity approach involving a hierarchy of models ranging from full-scale simulations through reduced-order physics up to response surfaces. Finally, we investigate multiobjective optimization of structures under uncertainty. We extend the PCE model of design objectives by taking into account the design variables. We illustrate our work with examples in sheet metal forming and optimal design of truss structures.
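The thesis's custom scheme is not reproduced here, but plain Latin Hypercube Sampling, the starting point the abstract refers to, fits in a few lines (illustrative sketch only; the function name is made up): each of the d dimensions is cut into n equal-probability strata, exactly one point is drawn per stratum, and independent permutations decorrelate the strata assignments across dimensions.

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """n samples in [0, 1]^d with exactly one point per stratum in every dimension."""
    # One uniform draw inside each of the n strata, per dimension.
    u = rng.random((n, d))
    strata = (np.arange(n)[:, None] + u) / n
    # Independent shuffles per column decouple strata across dimensions.
    for j in range(d):
        rng.shuffle(strata[:, j])
    return strata

rng = np.random.default_rng(7)
pts = latin_hypercube(10, 3, rng)
print(pts.shape)  # (10, 3)
```

Compared with plain random sampling, every one-dimensional margin is stratified, which typically reduces the variance of the surrogate's training statistics for the same number of expensive simulations.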
APA, Harvard, Vancouver, ISO, and other styles
31

Riahi, Hassen. "Analyse de structures à dimension stochastique élevée : application aux toitures bois sous sollicitation sismique." Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2013. http://tel.archives-ouvertes.fr/tel-00881187.

Full text
Abstract:
The problem of high stochastic dimension is recurrent in probabilistic analyses of structures. It corresponds to the exponential increase in the number of mechanical model evaluations as the number of uncertain parameters grows. To overcome this difficulty, this thesis proposes a two-step approach. The first step determines the effective stochastic dimension by ranking the uncertain parameters using screening methods. Once the parameters dominating the variability of the model response are identified, they are modeled as random variables, while the remaining parameters are fixed at their respective mean values in the stochastic computation itself. This computation constitutes the second step of the proposed approach, in which the dimensional decomposition method is used to characterize the randomness of the model response, through the estimation of statistical moments and the construction of the probability density function. This approach saves up to 90% of the computation time required by classical stochastic methods. It is then used to assess the integrity of the timber-framed roof of a single-family house located on a site of high seismic hazard. In this context, the analysis of the structural behavior is based on a finite element model in which the timber joints are modeled by an anisotropic law with hysteresis, and the seismic action is represented by eight natural accelerograms provided by the BRGM. These accelerograms represent different soil types according to the Eurocode 8 classification. Roof failure is defined as the damage recorded in the joints of the bracing and anti-buckling elements reaching a critical level set from test results. 
Deterministic analyses of the finite element model showed that the roof withstands the seismic hazard of the town of Le Moule in Guadeloupe. The probabilistic analyses showed that, among the 134 random variables representing the randomness in the nonlinear behavior of the joints, only 15 actually contribute to the variability of the mechanical response, which made it possible to reduce the stochastic dimension in the computation of the statistical moments. Based on the estimates of the mean and standard deviation, it was shown that the variability of the damage in the joints of the bracing elements is larger than that of the damage in the joints of the anti-buckling elements. Moreover, it is more significant for the signals that are most damaging to the structure.
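The screening step of the first stage can be illustrated with Morris-style elementary effects on a made-up test function (a schematic stand-in, not the roof model; the 5-input function and all names are invented): inputs are ranked by the mean absolute one-at-a-time response change, and only the dominant ones are kept as random variables.

```python
import numpy as np

def elementary_effects(f, k, r=50, delta=0.25, seed=0):
    """Mean absolute one-at-a-time effect per input, over r random base points in [0, 1]^k."""
    rng = np.random.default_rng(seed)
    mu_star = np.zeros(k)
    for _ in range(r):
        x = rng.random(k) * (1 - delta)       # keep x + delta inside [0, 1]
        fx = f(x)
        for i in range(k):
            xp = x.copy()
            xp[i] += delta                    # perturb one input at a time
            mu_star[i] += abs(f(xp) - fx) / delta
    return mu_star / r

# Toy model: inputs 0 and 1 dominate, inputs 2-4 are nearly inert.
g = lambda x: 10 * x[0] + 5 * x[1] ** 2 + 0.1 * (x[2] + x[3] + x[4])

mu = elementary_effects(g, 5)
ranking = np.argsort(mu)[::-1]
print(ranking[:2])   # the two influential inputs come first
```

The cost is r(k+1) model runs here, which is why screening scales to problems like the 134-variable joint model: the full stochastic analysis is then carried out only on the inputs that survive the ranking.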
APA, Harvard, Vancouver, ISO, and other styles
32

Knani, Habiba. "Backward stochastic differential equations driven by Gaussian Volterra processes." Electronic Thesis or Diss., Université de Lorraine, 2020. http://www.theses.fr/2020LORR0014.

Full text
Abstract:
This thesis deals with backward stochastic differential equations (BSDE) driven by a class of Gaussian Volterra processes that includes multifractional Brownian motion and multifractional Ornstein-Uhlenbeck processes. In the first part we study multidimensional BSDE with generators that are linear functions of the solution. By means of an Itô formula for Volterra processes, a linear second-order partial differential equation (PDE) with terminal condition is associated to the BSDE. Under an integrability condition on a functional of the second moment of the Volterra process in a neighbourhood of the terminal time, we solve the associated PDE explicitly and deduce the solution of the linear BSDE. We discuss an application in the context of self-financing trading strategies. The second part of the thesis deals with non-linear BSDE driven by the same class of Gaussian Volterra processes. The main results are the existence and uniqueness of the solution in a space of regular functionals of the Volterra process, and a comparison theorem for the solutions of BSDE. We give two proofs of the existence and uniqueness of the solution: one is based on the associated PDE, and a second one proceeds without reference to this PDE, using probabilistic and functional-analytic methods. This second proof is technically quite complex and, due to the absence of martingale properties in the context of Volterra processes, requires working with different norms on the underlying Hilbert space that is defined by the kernel of the Volterra process. For the construction of the solution we need the notion of quasi-conditional expectation, a Clark-Ocone type formula and another Itô formula for Volterra processes. Contrary to the more classical cases of BSDE driven by Brownian or fractional Brownian motion, an assumption on the behaviour of the kernel of the driving Volterra process is in general necessary for the well-posedness of the BSDE. 
For multifractional Brownian motion this assumption is closely related to the behaviour of the Hurst function.
APA, Harvard, Vancouver, ISO, and other styles
33

Milica, Žigić. "Primene polugrupa operatora u nekim klasama Košijevih početnih problema." Phd thesis, Univerzitet u Novom Sadu, Prirodno-matematički fakultet u Novom Sadu, 2014. https://www.cris.uns.ac.rs/record.jsf?recordId=90322&source=NDLTD&language=en.

Full text
Abstract:
The doctoral dissertation is devoted to applications of the theory of semigroups of operators to two classes of Cauchy problems. In the first part, we studied parabolic stochastic partial differential equations (SPDEs), driven by two types of operators: one linear closed operator generating a C0-semigroup and one linear bounded operator with Wick-type multiplication. All stochastic processes are considered in the setting of Wiener-Itô chaos expansions. We proved existence and uniqueness of solutions for this class of SPDEs. In particular, we also treated the stationary case when the time-derivative is equal to zero. In the second part, we constructed complex powers of C-sectorial operators in the setting of sequentially complete locally convex spaces. We considered these complex powers as the integral generators of equicontinuous analytic C-regularized resolvent families, and incorporated the obtained results in the study of incomplete higher or fractional order Cauchy problems.
APA, Harvard, Vancouver, ISO, and other styles
34

Dora, Seleši. "Uopšteni stohastički procesi u beskonačno-dimenzionalnim prostorima sa primenama na singularne stohastičke parcijalne diferencijalne jednačine." Phd thesis, Univerzitet u Novom Sadu, Prirodno-matematički fakultet u Novom Sadu, 2007. https://www.cris.uns.ac.rs/record.jsf?recordId=6018&source=NDLTD&language=en.

Full text
Abstract:
The subject of the dissertation is various classes of generalized stochastic processes and their applications to solving singular stochastic partial differential equations. Basically, the dissertation can be divided into two parts. The first part (Chapter 2) is devoted to structural characterizations of generalized random processes in terms of chaos expansions and integral representations. The second part of the dissertation (Chapter 3) involves applications of the obtained results to solving a stochastic Dirichlet problem, where multiplication is modeled by the Wick product, and the coefficients of the elliptic differential operator are Colombeau generalized random processes.
APA, Harvard, Vancouver, ISO, and other styles
35

Jornet, Sanz Marc. "Mean square solutions of random linear models and computation of their probability density function." Doctoral thesis, Universitat Politècnica de València, 2020. http://hdl.handle.net/10251/138394.

Full text
Abstract:
[EN] This thesis concerns the analysis of differential equations with uncertain input parameters, in the form of random variables or stochastic processes with any type of probability distribution. In modeling, the input coefficients are set from experimental data, which often involve uncertainties from measurement errors. Moreover, the behavior of the physical phenomenon under study does not follow strict deterministic laws. It is thus more realistic to consider mathematical models with randomness in their formulation. The solution, considered in the sample-path or the mean square sense, is a smooth stochastic process, whose uncertainty has to be quantified. Uncertainty quantification is usually performed by computing the main statistics (expectation and variance) and, if possible, the probability density function. In this dissertation, we study random linear models, based on ordinary differential equations with and without delay and on partial differential equations. The linear structure of the models makes it possible to seek certain probabilistic solutions and even approximate their probability density functions, which is a difficult goal in general. A very important part of the dissertation is devoted to random second-order linear differential equations, where the coefficients of the equation are stochastic processes and the initial conditions are random variables. The study of this class of differential equations in the random setting is mainly motivated by their important role in Mathematical Physics. We start by solving the randomized Legendre differential equation in the mean square sense, which allows the approximation of the expectation and the variance of the stochastic solution. The methodology is extended to general random second-order linear differential equations with analytic (expressible as random power series) coefficients, by means of the so-called Frobenius method.
A comparative case study is performed with spectral methods based on polynomial chaos expansions. On the other hand, the Frobenius method together with Monte Carlo simulation is used to approximate the probability density function of the solution. Several variance reduction methods based on quadrature rules and multilevel strategies are proposed to speed up the Monte Carlo procedure. The last part on random second-order linear differential equations is devoted to a random diffusion-reaction Poisson-type problem, where the probability density function is approximated using a finite difference numerical scheme. The thesis also studies random ordinary differential equations with discrete constant delay. We study the linear autonomous case, when the coefficient of the non-delay component and the parameter of the delay term are both random variables while the initial condition is a stochastic process. It is proved that the deterministic solution constructed with the method of steps that involves the delayed exponential function is a probabilistic solution in the Lebesgue sense. Finally, the last chapter is devoted to the linear advection partial differential equation, subject to a stochastic velocity field and initial condition. We solve the equation in the mean square sense and provide new expressions for the probability density function of the solution, even in the non-Gaussian velocity case.
This work has been supported by the Spanish Ministerio de Economía y Competitividad grant MTM2017–89664–P. I acknowledge the doctorate scholarship granted by Programa de Ayudas de Investigación y Desarrollo (PAID), Universitat Politècnica de València.
Jornet Sanz, M. (2020). Mean square solutions of random linear models and computation of their probability density function [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/138394
TESIS
APA, Harvard, Vancouver, ISO, and other styles
36

Luo, Wuan. "Wiener Chaos Expansion and Numerical Solutions of Stochastic Partial Differential Equations." Thesis, 2006. https://thesis.library.caltech.edu/1861/1/wuan_thesis.pdf.

Full text
Abstract:

Stochastic partial differential equations (SPDEs) are important tools in modeling complex phenomena, and they arise in many physics and engineering applications. Developing efficient numerical methods for simulating SPDEs is an important yet challenging research topic. In this thesis, we study a numerical method based on the Wiener chaos expansion (WCE) for solving SPDEs driven by Brownian motion forcing. WCE represents a stochastic solution as a spectral expansion with respect to a set of random basis functions. By deriving a governing equation for the expansion coefficients, we can reduce a stochastic PDE to a system of deterministic PDEs and separate the randomness from the computation. All the statistical information of the solution can be recovered from the deterministic coefficients using very simple formulae.
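The last step the abstract mentions, recovering statistics from the deterministic coefficients via simple formulae, can be illustrated with a minimal scalar sketch (not from the thesis itself; the lognormal test function and all names are illustrative):

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def pce_coefficients(u, order, n_quad=40):
    """Project u(xi), xi ~ N(0,1), onto probabilists' Hermite polynomials He_k."""
    nodes, weights = hermegauss(n_quad)            # weight exp(-x^2/2), total mass sqrt(2*pi)
    weights = weights / math.sqrt(2.0 * math.pi)   # normalize to the N(0,1) density
    return np.array([
        np.sum(weights * u(nodes) * hermeval(nodes, [0] * k + [1])) / math.factorial(k)
        for k in range(order + 1)                  # divide by E[He_k^2] = k!
    ])

# Test function with known statistics: u(xi) = exp(s*xi) is lognormal.
s = 0.5
c = pce_coefficients(lambda x: np.exp(s * x), order=8)
mean = c[0]                                        # E[u] is the zeroth coefficient
var = sum(c[k] ** 2 * math.factorial(k) for k in range(1, len(c)))
exact_mean = math.exp(s ** 2 / 2)
exact_var = (math.exp(s ** 2) - 1) * math.exp(s ** 2)
```

Here `mean` and `var` come purely from the deterministic coefficients: for an orthogonal basis, the mean is the zeroth coefficient and the variance is a weighted sum of squared coefficients.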

We apply the WCE-based method to solve stochastic Burgers equations, Navier-Stokes equations and nonlinear reaction-diffusion equations with either additive or multiplicative random forcing. Our numerical results demonstrate convincingly that the new method is much more efficient and accurate than Monte Carlo (MC) simulations for solutions over short to moderate time intervals. For a class of model equations, we prove the convergence rate of the WCE method. The analysis also reveals precisely how the convergence constants depend on the size of the time intervals and the variability of the random forcing. Based on the error analysis, we design a sparse truncation strategy for the Wiener chaos expansion. The sparse truncation can reduce the dimension of the resulting PDE system substantially while retaining the same asymptotic convergence rates.

For long time solutions, we propose a new computational strategy where MC simulations are used to correct the unresolved small scales in the sparse Wiener chaos solutions. Numerical experiments demonstrate that the WCE-MC hybrid method can handle SPDEs in much longer time intervals than the direct WCE method can. The new method is shown to be much more efficient than the WCE method or the MC simulation alone in relatively long time intervals. However, the limitation of this method is also pointed out.

Using the sparse WCE truncation, we can resolve the probability distributions of a stochastic Burgers equation numerically and provide direct evidence for the existence of a unique stationary measure. Using the WCE-MC hybrid method, we can simulate the long time front propagation for a reaction-diffusion equation in random shear flows. Our numerical results confirm the conjecture by Jack Xin that the front propagation speed obeys a quadratic enhancing law.

Using the machinery we have developed for the Wiener chaos method, we resolve a few technical difficulties in solving stochastic elliptic equations by Karhunen-Loeve-based polynomial chaos method. We further derive an upscaling formulation for the elliptic system of the Wiener chaos coefficients. Eventually, we apply the upscaled Wiener chaos method for uncertainty quantification in subsurface modeling, combined with a two-stage Markov chain Monte Carlo sampling method we have developed recently.

APA, Harvard, Vancouver, ISO, and other styles
37

Winokur, Justin Gregory. "Adaptive Sparse Grid Approaches to Polynomial Chaos Expansions for Uncertainty Quantification." Diss., 2015. http://hdl.handle.net/10161/9845.

Full text
Abstract:

Polynomial chaos expansions provide an efficient and robust framework to analyze and quantify uncertainty in computational models. This dissertation explores the use of adaptive sparse grids to reduce the computational cost of determining a polynomial model surrogate while examining and implementing new adaptive techniques.

Determination of chaos coefficients using traditional tensor product quadrature suffers from the so-called curse of dimensionality, where the number of model evaluations scales exponentially with dimension. Previous work used a sparse Smolyak quadrature to temper this dimensional scaling; it was applied successfully to an expensive Ocean General Circulation Model, HYCOM, during the September 2004 passage of Hurricane Ivan through the Gulf of Mexico. Results from this investigation suggested that adaptivity could yield great gains in efficiency. However, efforts at adaptivity are hampered by quadrature accuracy requirements.

We explore the implementation of a novel adaptive strategy to design sparse ensembles of oceanic simulations suitable for constructing polynomial chaos surrogates. We use a recently developed adaptive pseudo-spectral projection (aPSP) algorithm that is based on a direct application of Smolyak's sparse grid formula, and that allows for the use of arbitrary admissible sparse grids. Such a construction ameliorates the severe restrictions posed by insufficient quadrature accuracy. The adaptive algorithm is tested using an existing simulation database of the HYCOM model during Hurricane Ivan. The a priori tests demonstrate that sparse and adaptive pseudo-spectral constructions lead to substantial savings over isotropic sparse sampling.

In order to provide a finer degree of resolution control along two distinct subsets of model parameters, we investigate two methods to build polynomial approximations. The two approaches are based on pseudo-spectral projection (PSP) methods on adaptively constructed sparse grids. The control of the error along different subsets of parameters may be needed in the case of a model depending on uncertain parameters and deterministic design variables. We first consider a nested approach where an independent adaptive sparse grid pseudo-spectral projection is performed along the first set of directions only, and at each point a sparse grid is constructed adaptively in the second set of directions. We then consider the application of aPSP in the space of all parameters, and introduce directional refinement criteria to provide a tighter control of the projection error along individual dimensions. Specifically, we use a Sobol decomposition of the projection surpluses to tune the sparse grid adaptation. The behavior and performance of the two approaches are compared for a simple two-dimensional test problem and for a shock-tube ignition model involving 22 uncertain parameters and 3 design parameters. The numerical experiments indicate that whereas both methods provide effective means for tuning the quality of the representation along distinct subsets of parameters, adaptive PSP in the global parameter space generally requires fewer model evaluations than the nested approach to achieve similar projection error.
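As an aside, the Sobol-type variance attribution used to steer adaptation can be sketched for a plain, non-adaptive tensor PCE: each multi-index's squared coefficient contributes variance to the subset of dimensions it activates. A minimal illustration, not taken from the dissertation, on a toy function with a known decomposition:

```python
import numpy as np
from itertools import product
from numpy.polynomial.legendre import leggauss, legval

# Toy model with uniform inputs on [-1,1]^2 and a known variance decomposition:
# f = x1 + x1*x2 has Var = 1/3 (from x1 alone) + 1/9 (interaction), so S_{x1} = 3/4.
f = lambda x1, x2: x1 + x1 * x2

nodes, w = leggauss(8)
w = w / 2.0                                      # quadrature for the uniform density on [-1, 1]
order = 3
var_by_subset, total_var = {}, 0.0
for i, j in product(range(order + 1), repeat=2):
    Pi = legval(nodes, [0] * i + [1])            # Legendre P_i at the nodes
    Pj = legval(nodes, [0] * j + [1])
    c = sum(w[a] * w[b] * f(nodes[a], nodes[b]) * Pi[a] * Pj[b]
            for a in range(len(nodes)) for b in range(len(nodes)))
    c /= (1.0 / (2 * i + 1)) * (1.0 / (2 * j + 1))   # divide by E[P_i^2] E[P_j^2]
    if (i, j) == (0, 0):
        continue                                 # the mean term carries no variance
    contrib = c ** 2 * (1.0 / (2 * i + 1)) * (1.0 / (2 * j + 1))
    subset = tuple(d for d, idx in enumerate((i, j)) if idx > 0)
    var_by_subset[subset] = var_by_subset.get(subset, 0.0) + contrib
    total_var += contrib

sobol = {sub: v / total_var for sub, v in var_by_subset.items()}
```

Subset `(0,)` collects the variance due to the first input alone and `(0, 1)` the interaction; in an adaptive setting, per-dimension shares of this kind are what drive directional refinement.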

In order to increase efficiency even further, a subsampling technique is developed to allow for local adaptivity within the aPSP algorithm. The local refinement is achieved by exploiting the hierarchical nature of nested quadrature grids to determine regions of estimated convergence. In order to achieve global representations with local refinement, synthesized model data from a lower order projection is used for the final projection. The final subsampled grid was also tested with two more robust sparse projection techniques, compressed sensing and hybrid least-angle regression. These methods are evaluated on two sample test functions and then in an a priori analysis of the HYCOM simulations and the shock-tube ignition model investigated earlier. Small but non-trivial efficiency gains were found in some cases; in others, a large reduction in model evaluations with only a small loss of model fidelity was realized. Further extensions and capabilities are recommended for future investigations.


Dissertation
APA, Harvard, Vancouver, ISO, and other styles
38

Lin, Yu-Tuan, and 林玉端. "Implementations of Tailored Finite Point Method and Polynomial Chaos Expansion for Solving Problems Related to Fluid Dynamics, Image Processing and Finance." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/40488536171794178165.

Full text
Abstract:
Doctorate
National Chung Hsing University
Department of Applied Mathematics
104
In this dissertation, we study the tailored finite point method (TFPM) and polynomial chaos expansion (PCE) scheme for solving partial differential equations (PDEs). These PDEs are related to fluid dynamics, image processing and finance problems. In the first part, we focus on quasilinear time-dependent Burgers' equations with small viscosity coefficients. The selected basis functions for the TFPM automatically fit the properties of the local solution in time and space simultaneously. We apply the Hopf-Cole transformation to derive the first scheme, TFPM-I. For the second scheme, we approximate the solution by using local exact solutions and consider iterated processes to attain numerical solutions to the original form of the Burgers' equation. The TFPM-II is particularly suitable for a solution with steep gradients or discontinuities. More importantly, the TFPM obtains numerical solutions with reasonable accuracy even on relatively coarse meshes for Burgers' equations. In the second part, we employ the TFPM in an anisotropic convection-diffusion (ACD) filter for image denoising. A quadtree structure is implemented in order to allow multi-level storage during the denoising and compression process. The ACD filter exhibits the potential to obtain a more accurate approximated solution to the PDEs. In the third part, we apply the TFPM to the Black-Scholes equation for European option pricing. We compare the performance of our algorithm with other popular numerical schemes. The numerical experiments show that the TFPM is more efficient and accurate compared to other well-known methods. In the last part, we present the polynomial chaos expansion (PCE) for stochastic PDEs. We provide a review of the theory of generalized polynomial chaos expansion (gPCE) and arbitrary polynomial chaos expansion (aPCE), including case analyses of test problems.
We demonstrate the accuracy of the gPCE and aPCE for the Black-Scholes model with log-normal random volatilities. Furthermore, we employ the aPCE scheme for arbitrary distributions of uncertain volatilities with short-term price data. This is at the forefront of applying polynomial chaos expansions to random volatilities in financial mathematics.
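A hedged sketch of the kind of computation described here: a Hermite PCE surrogate of the Black-Scholes price under a lognormal volatility. The parameters and setup are illustrative, not the dissertation's:

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))   # standard normal CDF

def bs_call(S, K, r, T, sigma):
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * Phi(d1) - K * math.exp(-r * T) * Phi(d2)

# Assumed lognormal volatility: sigma = exp(mu + s*xi) with xi ~ N(0,1).
S0, K, r, T, mu, s = 100.0, 100.0, 0.03, 1.0, math.log(0.2), 0.1
price = lambda xi: bs_call(S0, K, r, T, math.exp(mu + s * xi))

# Degree-6 Hermite PCE of the price via Gauss-Hermite projection.
nodes, w = hermegauss(40)
w = w / math.sqrt(2.0 * math.pi)                 # normalize to the N(0,1) density
vals = np.array([price(x) for x in nodes])
coeffs = [np.sum(w * vals * hermeval(nodes, [0] * k + [1])) / math.factorial(k)
          for k in range(7)]

surrogate = lambda xi: sum(ck * hermeval(xi, [0] * k + [1]) for k, ck in enumerate(coeffs))
mean_price = coeffs[0]                           # price averaged over the volatility law
```

Once the coefficients are in hand, the surrogate can be evaluated at negligible cost for any volatility scenario, which is what makes PCE attractive for pricing under uncertain volatility.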
APA, Harvard, Vancouver, ISO, and other styles
39

(5930765), Pratik Kiranrao Naik. "History matching of surfactant-polymer flooding." Thesis, 2019.

Find full text
Abstract:
This thesis presents a framework for history matching and model calibration of surfactant-polymer (SP) flooding. At first, a high-fidelity mechanistic SP flood model is constructed by performing extensive lab-scale experiments on Berea cores. Then, incorporating Sobol-based sensitivity analysis, polynomial chaos expansion based surrogate modelling (PCE-proxy) and genetic algorithm based inverse optimization, an optimized model parameter set is determined by minimizing the misfit between the PCE-proxy response and experimental observations for quantities of interest such as cumulative oil recovery and pressure profile. The epistemic uncertainty in the PCE-proxy is quantified using a Gaussian process regression technique called Kriging. The framework is then extended to Bayesian calibration, where the posterior of the model parameters is inferred by directly sampling from it using Markov chain Monte Carlo (MCMC). Finally, a stochastic multi-objective optimization problem is posed under uncertainties in model parameters and oil price, which is solved using a variant of a Bayesian global optimization routine.
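The calibration loop described, a cheap surrogate inside random-walk Metropolis sampling, can be sketched as follows. The two-parameter saturation-curve "proxy", noise level, and flat prior are stand-ins for illustration, not the thesis's actual SP-flood surrogate:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cheap surrogate standing in for the PCE proxy: a two-parameter
# saturation curve (e.g. cumulative recovery vs. time). Not the thesis's model.
proxy = lambda theta, t: theta[0] * (1.0 - np.exp(-theta[1] * t))

t_obs = np.linspace(0.5, 5.0, 10)
theta_true = np.array([0.6, 0.8])
y_obs = proxy(theta_true, t_obs) + 0.01 * rng.standard_normal(t_obs.size)

def log_post(theta):
    if np.any(theta <= 0.0) or np.any(theta > 5.0):
        return -np.inf                            # flat prior on (0, 5]^2
    resid = y_obs - proxy(theta, t_obs)
    return -0.5 * np.sum(resid ** 2) / 0.01 ** 2  # Gaussian likelihood, sigma = 0.01

# Random-walk Metropolis: each step costs only one surrogate evaluation,
# which is why sampling a proxy instead of the full simulator is feasible.
theta = np.array([1.0, 1.0])
lp = log_post(theta)
chain = []
for _ in range(20000):
    prop = theta + 0.02 * rng.standard_normal(2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:      # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
post_mean = np.mean(chain[10000:], axis=0)        # discard burn-in
```

The posterior mean should land near the synthetic truth; with a real simulator in place of `proxy`, the same loop would be prohibitively expensive, which is the motivation for the PCE-proxy.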
APA, Harvard, Vancouver, ISO, and other styles
40

Dutta, Parikshit. "New Algorithms for Uncertainty Quantification and Nonlinear Estimation of Stochastic Dynamical Systems." Thesis, 2011. http://hdl.handle.net/1969.1/ETD-TAMU-2011-08-9951.

Full text
Abstract:
Recently there has been growing interest in characterizing and reducing uncertainty in stochastic dynamical systems. This drive arises out of the need to manage uncertainty in complex, high-dimensional physical systems. Traditional techniques of uncertainty quantification (UQ) use local linearization of the dynamics and assume Gaussian probability evolution. But several difficulties arise when these UQ models are applied to real-world problems, which are generally nonlinear in nature. Hence, to improve performance, robust algorithms that can work efficiently in a nonlinear, non-Gaussian setting are desired. The main focus of this dissertation is to develop UQ algorithms for nonlinear systems where uncertainty evolves in a non-Gaussian manner. The algorithms developed are then applied to state estimation of real-world systems. The first part of the dissertation focuses on using polynomial chaos (PC) for uncertainty propagation, and then achieving the estimation task by the use of higher-order moment updates and Bayes' rule. The second part mainly deals with Frobenius-Perron (FP) operator theory, how it can be used to propagate uncertainty in dynamical systems, and then using it to estimate states by the use of Bayesian updates. Finally, a method to represent the process noise in a stochastic dynamical system using a finite-term Karhunen-Loève (KL) expansion is proposed. The uncertainty in the resulting approximated system is propagated using the FP operator. The performance of the PC-based estimation algorithms was compared with the extended Kalman filter (EKF) and unscented Kalman filter (UKF), and the FP operator based techniques were compared with particle filters, when applied to a Duffing oscillator system and the hypersonic reentry of a vehicle in the atmosphere of Mars. It was found that the accuracy of the PC-based estimators is higher than that of the EKF or UKF, and the FP operator based estimators were computationally superior to the particle filtering algorithms.
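The finite-term KL representation of process noise mentioned in the abstract can be illustrated on the textbook case of Brownian motion on [0, 1], where the eigenpairs are known in closed form (a generic sketch, not the dissertation's construction):

```python
import numpy as np

# Textbook KL expansion of Brownian motion on [0, 1]:
#   W_t = sum_{k>=1} sqrt(2) * sin((k - 1/2) * pi * t) / ((k - 1/2) * pi) * xi_k
# Truncating at n terms under-represents the variance, which converges to t.
def kl_variance(t, n_terms):
    k = np.arange(1, n_terms + 1)
    lam = 1.0 / ((k - 0.5) * np.pi) ** 2          # KL eigenvalues
    phi_sq = 2.0 * np.sin((k - 0.5) * np.pi * t) ** 2   # squared eigenfunctions
    return float(np.sum(lam * phi_sq))            # variance of the truncated W_t

t = 0.7
approx = [kl_variance(t, n) for n in (5, 50, 500)]
```

The truncated variances increase monotonically toward the exact value t, which is the sense in which a finite-term KL expansion approximates the driving noise.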
APA, Harvard, Vancouver, ISO, and other styles
41

Schulte, Matthias. "Malliavin-Stein Method in Stochastic Geometry." Doctoral thesis, 2013. https://repositorium.ub.uni-osnabrueck.de/handle/urn:nbn:de:gbv:700-2013031910717.

Full text
Abstract:
In this thesis, abstract bounds for the normal approximation of Poisson functionals are computed by the Malliavin-Stein method and used to derive central limit theorems for problems from stochastic geometry. By a Poisson functional we mean a random variable depending on a Poisson point process. It is known from stochastic analysis that every square integrable Poisson functional has a representation as a (possibly infinite) sum of multiple Wiener-Itô integrals. This decomposition is called the Wiener-Itô chaos expansion, and the integrands are denoted as kernels of the Wiener-Itô chaos expansion. An explicit formula for these kernels is known due to Last and Penrose. Via their Wiener-Itô chaos expansions the so-called Malliavin operators are defined. By combining Malliavin calculus and Stein's method, a well-known technique to derive limit theorems in probability theory, bounds for the normal approximation of Poisson functionals in the Wasserstein distance and of vectors of Poisson functionals in a similar distance were obtained by Peccati, Solé, Taqqu, and Utzet and by Peccati and Zheng, respectively. An analogous bound for the univariate normal approximation in the Kolmogorov distance is derived. In order to evaluate these bounds, one has to compute the expectation of products of multiple Wiener-Itô integrals, which are complicated sums of deterministic integrals. Therefore, the bounds for the normal approximation of Poisson functionals reduce to sums of integrals depending on the kernels of the Wiener-Itô chaos expansion. The strategy to derive central limit theorems for Poisson functionals is to compute the kernels of their Wiener-Itô chaos expansions, to insert the kernels into the bounds for the normal approximation, and to show that the bounds vanish asymptotically. By this approach, central limit theorems for some problems from stochastic geometry are derived. 
Univariate and multivariate central limit theorems for some functionals of the intersection process of Poisson k-flats and for the number of vertices and the total edge length of a Gilbert graph are shown. These Poisson functionals are so-called Poisson U-statistics, which have a simpler structure since their Wiener-Itô chaos expansions are finite, i.e. they consist of finitely many multiple Wiener-Itô integrals. As examples of Poisson functionals with infinite Wiener-Itô chaos expansions, central limit theorems for the volume of the Poisson-Voronoi approximation of a convex set and the intrinsic volumes of Boolean models are proven.
APA, Harvard, Vancouver, ISO, and other styles
42

Deng, Jian. "Stochastic collocation methods for aeroelastic system with uncertainty." Master's thesis, 2009. http://hdl.handle.net/10048/557.

Full text
Abstract:
Thesis (M. Sc.)--University of Alberta, 2009.
Title from pdf file main screen (viewed on Sept. 3, 2009). "A thesis submitted to the Faculty of Graduate Studies and Research in partial fulfillment of the requirements for the degree of Master of Science in Applied Mathematics, Department of Mathematical and Statistical Sciences, University of Alberta." Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
43

Fluck, Manuel. "Stochastic methods for unsteady aerodynamic analysis of wings and wind turbine blades." Thesis, 2017. http://hdl.handle.net/1828/7981.

Full text
Abstract:
Advancing towards 'better' wind turbine designs, engineers face two central challenges: first, current aerodynamic models (based on Blade Element Momentum theory) are inherently limited to comparatively simple designs of flat rotors with straight blades. However, such designs present only a subset of possible designs. Better concepts could be coning rotors, swept or kinked blades, or blade tip modifications. To be able to extend future turbine optimization to these new concepts a different kind of aerodynamic model is needed. Second, it is difficult to include long term loads (life time extreme and fatigue loads) directly in the wind turbine design optimization. This is because with current methods the assessment of long term loads is computationally very expensive -- often too expensive for optimization. This denies the optimizer the possibility of fully exploring the effects of design changes on important life time loads, and one might settle for a sub-optimal design. In this dissertation we present work addressing these two challenges, looking at wing aerodynamics in general and focusing on wind turbine loads in particular. We adopt a Lagrangian vortex model to analyze bird wings. Equipped with distinct tip feathers, these wings present very complex lifting surfaces with winglets, stacked in sweep and dihedral. Very good agreement between experimental and numerical results is found, and thus we confirm that a vortex model is actually capable of analyzing complex new wing and rotor blade geometries. Next, stochastic methods are derived to deal with the time- and space-coupled unsteady aerodynamic equations. In contrast to deterministic models, which repeatedly analyze the loads for different input samples to eventually estimate life time load statistics, the new stochastic models provide a continuous process to assess life time loads in a stochastic context -- starting from a stochastic wind field input through to a stochastic solution for the load output.
Hence, these new models allow obtaining life time loads much faster than the deterministic approach does, which will eventually make life time loads accessible to a future stochastic wind turbine optimization algorithm. While common stochastic techniques are concerned with random parameters or boundary conditions (constant in time), a stochastic treatment of turbulent wind inflow requires a technique capable of handling a random field. The step from a random parameter to a random field is not trivial, and hence the new stochastic methods are introduced in three stages. First, the bird wing model from above is simplified to a one-element wing/blade model, and the previously deterministic solution is substituted with a stochastic solution for a one-point wind speed time series (a random process). Second, the wind inflow is extended to an n-point correlated random wind field and the aerodynamic model is extended accordingly. To complete this step a new kind of wind model is introduced, requiring significantly fewer random variables than previous models. Finally, the stochastic method is applied to wind turbine aerodynamics (for now based on Blade Element Momentum theory) to analyze rotor thrust, torque, and power. Throughout all these steps the stochastic results are compared to result statistics obtained via Monte Carlo analysis from unsteady reference models solved in the conventional deterministic framework. Thus it is verified that the stochastic results actually reproduce the deterministic benchmark. Moreover, a considerable speed-up of the calculations is found (for example, by a factor of 20 for calculating blade thrust load probability distributions). Results from this research provide a means to much more quickly analyze life time loads and an aerodynamic model to be used in a new wind turbine optimization framework, capable of analyzing new geometries, and actually optimizing wind turbine blades with life time loads in mind.
However, to limit the scope of this work, we only present the aerodynamic models here and will not proceed to turbine optimization itself, which is left for future work.
APA, Harvard, Vancouver, ISO, and other styles
44

Ozen, Hasan Cagan. "Long Time Propagation of Stochasticity by Dynamical Polynomial Chaos Expansions." Thesis, 2017. https://doi.org/10.7916/D8WH32C5.

Full text
Abstract:
Stochastic differential equations (SDEs) and stochastic partial differential equations (SPDEs) play an important role in many areas of engineering and applied sciences such as atmospheric sciences, mechanical and aerospace engineering, geosciences, and finance. Equilibrium statistics and long-time solutions of these equations are pertinent to many applications. Typically, these models contain several uncertain parameters which need to be propagated in order to facilitate uncertainty quantification and prediction. Correspondingly, in this thesis, we propose a generalization of the Polynomial Chaos (PC) framework for long-time solutions of SDEs and SPDEs driven by Brownian motion forcing. Polynomial chaos expansions (PCEs) allow us to propagate uncertainties in the coefficients of these equations to the statistics of their solutions. Their main advantages are: (i) they replace stochastic equations by systems of deterministic equations; and (ii) they provide fast convergence. Their main challenge is that the computational cost becomes prohibitive when the dimension of the parameters modeling the stochasticity is even moderately large. In particular, for equations with Brownian motion forcing, long-time simulation by PC-based methods is notoriously difficult as the dimension of the stochastic variables increases with time. With the goal of delivering computationally efficient numerical algorithms for stochastic equations in the long-time regime, our main strategy is to leverage the intrinsic sparsity of the dynamics by identifying the influential random parameters and constructing spectral approximations to the solutions in terms of those relevant variables. Once this strategy is employed dynamically in time, using online constructions, approximations can retain their sparsity and accuracy, even for long times.
To this end, exploiting the Markov property of Brownian motion, we present a restart procedure that allows PCEs to expand the solutions at future times in terms of orthogonal polynomials of the measure describing the solution at a given time and the future Brownian motion. In the case of SPDEs, the Karhunen-Loève expansion (KLE) is applied at each restart to select the influential variables and keep the dimensionality minimal. Using frequent restarts and low degree polynomials, the algorithms are able to capture long-time solutions accurately. We will also introduce, using the same principles, a similar algorithm based on a stochastic collocation method for the solutions of SDEs. We apply the methods to the numerical simulation of linear and nonlinear SDEs, and stochastic Burgers and Navier-Stokes equations with white noise forcing. Our methods also allow us to incorporate time-independent random coefficients such as a random viscosity. We present several numerical simulations, and show that the algorithms compare favorably with standard Monte Carlo methods in terms of accuracy and computational times. To demonstrate the efficiency of the algorithms for long-time simulations, we compute invariant measures of the solutions when they exist.
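The restart idea can be caricatured on the Ornstein-Uhlenbeck test SDE dX = -X dt + dW. This is only a schematic stand-in for the thesis's general construction: for this linear scalar problem the solution over each window is a degree-1 expansion in two Gaussian variables, and re-whitening at each restart (the KLE step, trivial in one dimension) collapses it back to a single variable, so the stochastic dimension never grows with time:

```python
import numpy as np

# Schematic restart for dX = -X dt + dW (hypothetical test case, not the
# thesis's general algorithm). Over each window of length dt the exact
# update is
#     X_{n+1} = exp(-dt) * X_n + b * xi_n,   b^2 = (1 - exp(-2 dt)) / 2,
# i.e. a degree-1 PCE in two Gaussians: the whitened current state and the
# new Brownian variable. Re-whitening at each restart collapses the
# expansion back to ONE variable, keeping the dimension fixed for all time.
dt, n_windows = 0.1, 500
b2 = (1.0 - np.exp(-2.0 * dt)) / 2.0
a = 0.0                       # coefficient on the whitened state; X_0 = 0
for _ in range(n_windows):
    a = np.sqrt(np.exp(-2.0 * dt) * a**2 + b2)   # restart: matched variance

var_longtime = a**2           # approaches the invariant variance 1/2
```

After 500 windows (t = 50) the variance is at the invariant value 1/2 to machine precision, a toy analogue of the invariant-measure computations mentioned in the abstract.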
APA, Harvard, Vancouver, ISO, and other styles
45

Mandur, Jasdeep Singh. "Robust Algorithms for Optimization of Chemical Processes in the Presence of Model-Plant Mismatch." Thesis, 2014. http://hdl.handle.net/10012/8526.

Full text
Abstract:
Process models are always associated with uncertainty, due to either inaccurate model structure or inaccurate identification. If left unaccounted for, these uncertainties can significantly affect model-based decision-making. This thesis addresses the problem of model-based optimization in the presence of uncertainties, especially due to model structure error. The optimal solution from standard optimization techniques is often associated with a certain degree of uncertainty and if the model-plant mismatch is very significant, this solution may have a significant bias with respect to the actual process optimum. Accordingly, in this thesis, we developed new strategies to reduce (1) the variability in the optimal solution and (2) the bias between the predicted and the true process optima. Robust optimization is a well-established methodology where the variability in the optimization objective is considered explicitly in the cost function, leading to a solution that is robust to model uncertainties. However, the reported robust formulations have a few limitations, especially in the context of nonlinear models. The standard technique to quantify the effect of model uncertainties is based on the linearization of the underlying model, which may not be valid if the noise in measurements is quite high. To address this limitation, uncertainty descriptions based on Bayes' theorem are implemented in this work. Since for nonlinear models the resulting Bayesian uncertainty may have a non-standard form with no analytical solution, the propagation of this uncertainty onto the optimum may become computationally challenging using conventional Monte Carlo techniques. To this end, an approach based on Polynomial Chaos (PC) expansions is developed. It is shown in a simulated case study that this approach resulted in drastic reductions in the computational time when compared to a standard Monte Carlo sampling technique.
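The speed-up argument can be sketched as follows (the model and posterior are hypothetical stand-ins): fit an inexpensive Legendre PC surrogate to a handful of runs of an "expensive" model, then push a large, non-standard parameter sample through the surrogate instead of the model:

```python
import numpy as np
from numpy.polynomial import legendre as L

rng = np.random.default_rng(0)

# Hypothetical "expensive" model of one parameter theta in [-1, 1]
model = lambda theta: np.exp(0.8 * theta) * np.sin(2.0 + theta)

# 1) Fit a degree-8 Legendre PC surrogate to a handful of model runs
theta_train = np.linspace(-1.0, 1.0, 50)
coeffs = L.legfit(theta_train, model(theta_train), 8)

# 2) Push a large, non-standard parameter sample (a stand-in for a Bayesian
#    posterior) through the cheap surrogate instead of the model
posterior = np.clip(rng.normal(0.2, 0.3, 100_000), -1.0, 1.0)
mean_surrogate = L.legval(posterior, coeffs).mean()
mean_model = model(posterior).mean()   # reference only; avoided in practice
```

The 100,000 surrogate evaluations cost a single vectorized polynomial evaluation, while each "real" model run would be a full process simulation; this is the essence of the reported reduction in computational time.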
The key advantage of PC expansions is that they provide analytical expressions for statistical moments even if the uncertainty in variables is non-standard. These expansions were also used to speed up the calculation of the likelihood function within the Bayesian framework. Here, a methodology based on Multi-Resolution analysis is proposed to formulate the PC-based approximate model with higher accuracy over the region of the parameter space that is most likely given the measurements. For the second objective, i.e. reducing the bias between the predicted and true process optima, an iterative optimization algorithm is developed which progressively corrects the model for structural error as the algorithm proceeds towards the true process optimum. The standard technique is to calibrate the model at some initial operating conditions and, then, use this model to search for an optimal solution. Since the identification and optimization objectives are solved independently, when there is a mismatch between the process and the model, the parameter estimates cannot satisfy these two objectives simultaneously. To this end, in the proposed methodology, corrections are added to the model in such a way that the updated parameter estimates reduce the conflict between the identification and optimization objectives. Unlike the standard estimation technique that minimizes only the prediction error at a given set of operating conditions, the proposed algorithm also includes the differences between the predicted and measured gradients of the optimization objective and/or constraints in the estimation. In the initial version of the algorithm, the proposed correction is based on the linearization of model outputs. Then, in the second part, the correction is extended by using a quadratic approximation of the model, which, for the given case study, resulted in much faster convergence compared to the earlier version.
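A toy version of the gradient-augmented estimation step can be written down directly (the plant and model below are hypothetical, with the model linear in its parameters so ordinary least squares applies):

```python
import numpy as np

# Hypothetical plant with structural error relative to a two-parameter model:
#   model:  y_m(u; theta) = theta[0]*u + theta[1]*u**2   (linear in theta)
#   plant:  y_p(u)        = u + 0.5*u**2 + 0.1*u**3
u_data = np.array([0.5, 1.0, 1.5])              # operating points
y_plant = u_data + 0.5 * u_data**2 + 0.1 * u_data**3
g_plant = 1.0 + u_data + 0.3 * u_data**2        # plant gradients dy/du

# Standard estimation: match predicted values only
A_val = np.column_stack([u_data, u_data**2])
theta_vo = np.linalg.lstsq(A_val, y_plant, rcond=None)[0]

# Gradient-augmented estimation: also match dy_m/du = theta[0] + 2*theta[1]*u
w = 1.0                                          # weight on gradient residuals
A_grad = w * np.column_stack([np.ones_like(u_data), 2.0 * u_data])
A = np.vstack([A_val, A_grad])
b = np.concatenate([y_plant, w * g_plant])
theta = np.linalg.lstsq(A, b, rcond=None)[0]

# The augmented fit trades a little prediction accuracy for much better
# agreement with the measured gradients of the objective
grad_err = np.linalg.norm(theta[0] + 2.0 * theta[1] * u_data - g_plant)
grad_err_vo = np.linalg.norm(theta_vo[0] + 2.0 * theta_vo[1] * u_data - g_plant)
```

Matching gradients as well as values is what allows the updated parameter estimates to reduce the conflict between the identification and optimization objectives described above.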
Finally, the methodologies mentioned above were combined to formulate a robust iterative optimization strategy that converges to the true process optimum with minimum variability in the search path. One of the major findings of this thesis is that the robust optimal solutions based on the Bayesian parametric uncertainty are much less conservative than their counterparts based on normally distributed parameters.
APA, Harvard, Vancouver, ISO, and other styles
