
Dissertations / Theses on the topic 'STOCHASTIC SENSITIVITY'

Consult the top 50 dissertations / theses for your research on the topic 'STOCHASTIC SENSITIVITY.'

1

Restrepo, Juan M., and Shankar Venkataramani. "Stochastic longshore current dynamics." ELSEVIER SCI LTD, 2016. http://hdl.handle.net/10150/621938.

Abstract:
We develop a stochastic parameterization, based on a 'simple' deterministic model for the dynamics of steady longshore currents, that produces ensembles that are statistically consistent with field observations of these currents. Unlike deterministic models, a stochastic parameterization incorporates randomness and hence can only match the observations in a statistical sense. Unlike statistical emulators, in which the model is tuned to the statistical structure of the observations, stochastic parameterizations are not directly tuned to match the statistics of the observations. Rather, a stochastic parameterization combines deterministic, i.e., physics-based, models with stochastic models for the "missing physics" to create hybrid models that are stochastic yet can be used for making predictions, especially in the context of data assimilation. We introduce a novel measure of the utility of stochastic models of complex processes, which we call consistency of sensitivity. A model with poor consistency of sensitivity requires a great deal of parameter tuning and has only a very narrow range of parameter values that lead to outcomes consistent with a reasonable spectrum of physical outcomes. We apply this metric to our stochastic parameterization and show that the loss of certainty inherent in the model, due to its stochastic nature, is offset by the model's resulting consistency of sensitivity. In particular, the stochastic model retains the forward sensitivity of the deterministic model and hence respects important structural/physical constraints, yet has a broader range of parameters capable of producing outcomes consistent with the field data used in evaluating the model. This leads to an expanded range of model applicability. We show, in the context of data assimilation, that the stochastic parameterization of longshore currents achieves good results in capturing the statistics of observations that were not used in tuning the model.
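The core idea (a deterministic model plus a stochastic term standing in for the missing physics, judged by the statistics of the resulting ensemble) can be sketched in a few lines. This is a minimal illustration with an assumed relaxation model and made-up parameters, not the authors' wave-driven current model:

```python
import numpy as np

def ensemble_longshore(v_eq=0.5, gamma=0.2, sigma=0.05,
                       dt=1.0, n_steps=500, n_members=200, seed=0):
    """Euler-Maruyama ensemble for a toy stochastic parameterization:
    dv = -gamma*(v - v_eq)*dt + sigma*dW, i.e. a deterministic
    relaxation model plus noise for the 'missing physics'."""
    rng = np.random.default_rng(seed)
    v = np.zeros((n_members, n_steps))
    for k in range(1, n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), n_members)
        v[:, k] = v[:, k-1] - gamma * (v[:, k-1] - v_eq) * dt + sigma * dW
    return v

v = ensemble_longshore()
# Statistical-consistency check: compare ensemble statistics with
# (here, synthetic) observed mean and standard deviation.
print("ensemble mean:", v[:, -1].mean(), "ensemble std:", v[:, -1].std())
```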
2

Bouakiz, Mokrane. "Risk-sensitivity in stochastic optimization with applications." Diss., Georgia Institute of Technology, 1985. http://hdl.handle.net/1853/25457.

3

Ghalebsaz-Jeddi, Babak. "Analysis and sensitivity of stochastic capacitated multi-commodity flows." Cincinnati, Ohio : University of Cincinnati, 2004. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=ucin1078512441.

4

Long, Sally. "Evaluating farm management strategy using sensitivity and stochastic analysis." Thesis, Kansas State University, 2013. http://hdl.handle.net/2097/19756.

Abstract:
Master of Agribusiness, Department of Agricultural Economics. Advisor: Jason Bergtold.
The dramatic changes that have taken place in production agriculture in the last decade have led the Long Family Partnership to reassess its farmland management strategy. As landowners, they feel they might be missing out on profit opportunities by continuing their current lease agreements as the status quo. The objective of this research is to determine the optimal land management strategy for the Partnership farm that maximizes net returns from crop production while also taking into account input costs and risk. Three scenarios were built: (1) a base case of the current share-crop and cash lease agreements; (2) farming their own irrigated farmland while continuing to cash lease the land used to produce dryland wheat; and (3) farming all the irrigated and dryland acreage themselves. A whole-farm budget spreadsheet model was generated to assess these alternative land management scenarios. The differences in net returns between the alternative land rental scenarios were then compared, followed by a sensitivity analysis and a stochastic analysis using @RISK software. The findings concluded that there was greater potential to increase net farm income, while still conservatively managing risk, by investing in their own farmland as both owners and operators. The stochastic and sensitivity analyses confirmed that farming their own land was more sensitive to changes in yields, prices, and input expenses. However, even in consideration of the additional risk, the probability of increasing net farm income was greater for the scenarios in which they farmed their own land.
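The workflow described here (sampling uncertain yields, prices and costs, then comparing scenario distributions) can be approximated without @RISK. A minimal sketch with assumed, illustrative distributions, not the thesis's actual budget figures:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # Monte Carlo draws

# Hypothetical per-acre distributions (illustrative values only).
yield_bu = rng.triangular(30, 45, 60, N)     # wheat yield, bu/acre
price = rng.lognormal(np.log(6.5), 0.15, N)  # price, $/bu
inputs = rng.normal(180, 25, N)              # input costs, $/acre

own_farm = yield_bu * price - inputs         # net return if self-operated
cash_lease = np.full(N, 75.0)                # fixed cash rent income, $/acre

print("P(own-farming beats cash lease):",
      (own_farm > cash_lease).mean())
print("5th/50th/95th percentile net return:",
      np.percentile(own_farm, [5, 50, 95]).round(1))
```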
5

Ghalebsaz-Jeddi, Babak. "Analysis and sensitivity of stochastic capacitated multi-commodity flows." University of Cincinnati / OhioLINK, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1078512441.

6

Koch, Stefan [Verfasser], and Andreas [Akademischer Betreuer] Neuenkirch. "Sensitivity results in stochastic analysis / Stefan Koch ; Betreuer: Andreas Neuenkirch." Mannheim : Universitätsbibliothek Mannheim, 2019. http://d-nb.info/1195441541/34.

7

Koch, Stefan [Verfasser], and Andreas [Akademischer Betreuer] Neuenkirch. "Sensitivity results in stochastic analysis / Stefan Koch ; Betreuer: Andreas Neuenkirch." Mannheim : Universitätsbibliothek Mannheim, 2019. http://nbn-resolving.de/urn:nbn:de:bsz:180-madoc-520185.

8

Fadikar, Arindam. "Stochastic Computer Model Calibration and Uncertainty Quantification." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/91985.

Abstract:
This dissertation presents novel methodologies in the field of stochastic computer model calibration and uncertainty quantification. Simulation models are widely used in studying physical systems, which are often represented by a set of mathematical equations. Inference on the true physical system (unobserved or partially observed) is drawn based on observations from the corresponding computer simulation model. These computer models are calibrated based on limited ground-truth observations in order to produce realistic predictions and associated uncertainties. A stochastic computer model differs from a traditional computer model in that repeated execution of a stochastic simulation results in different outcomes. This additional uncertainty in the simulation model must be handled accordingly in any calibration setup. A Gaussian process (GP) emulator replaces the actual computer simulation when it is expensive to run and the budget is limited. However, a traditional GP interpolator models the mean and/or variance of the simulation output as a function of the input. For a simulation where the marginal Gaussianity assumption is not appropriate, it does not suffice to emulate only the mean and/or variance. We present two different approaches addressing the non-Gaussian behavior of an emulator: (1) incorporating quantile regression in GPs for multivariate output, and (2) approximating the output using a finite mixture of Gaussians. These emulators are also used to calibrate and make forward predictions in the context of an agent-based disease model of the 2014 Ebola epidemic outbreak in West Africa. The third approach employs a sequential scheme which periodically updates the uncertainty in the computer model input as data become available in an online fashion. Unlike the other two methods, which use an emulator in place of the actual simulation, the sequential approach relies on repeated runs of the actual, potentially expensive simulation.
Doctor of Philosophy
Mathematical models are versatile and often provide accurate descriptions of physical events. Scientific models are used to study such events in order to gain understanding of the true underlying system. These models are often complex in nature and require advanced algorithms to solve their governing equations. Outputs from these models depend on external information (also called model input) supplied by the user. Model inputs may or may not have a physical meaning, and can sometimes be specific to the scientific model alone. More often than not, the optimal values of these inputs are unknown and need to be estimated from a few actual observations. This process is known as an inverse problem, i.e., inferring the input from the output. The inverse problem becomes challenging when the mathematical model is stochastic in nature, i.e., multiple executions of the model result in different outcomes. In this dissertation, three methodologies are proposed that address the calibration and prediction of a stochastic disease simulation model which simulates contagion of an infectious disease through human-human contact. The motivating examples are taken from the Ebola epidemic in West Africa in 2014 and seasonal flu in New York City in the USA.
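A minimal sketch of the first idea, emulating several output quantiles of a stochastic simulator rather than just its mean, using replicated runs at design points and one GP per quantile. The simulator, design and kernel settings below are assumptions for illustration, not the dissertation's models:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(1)

def stochastic_sim(x, n_rep=50):
    """Stand-in stochastic simulator: the output distribution depends on x."""
    return np.sin(3 * x) + rng.gamma(2.0, 0.2 * (1 + x), n_rep)

# Design points with replicated runs; emulate several output quantiles.
X = np.linspace(0, 1, 15)
quantiles = [0.1, 0.5, 0.9]
emulators = {}
for q in quantiles:
    y_q = np.array([np.quantile(stochastic_sim(x), q) for x in X])
    gp = GaussianProcessRegressor(
        kernel=ConstantKernel(1.0) * RBF(length_scale=0.2),
        alpha=1e-3, normalize_y=True)
    gp.fit(X.reshape(-1, 1), y_q)
    emulators[q] = gp

x_new = np.array([[0.37]])
for q, gp in emulators.items():
    print(f"q={q}: emulated quantile =", gp.predict(x_new)[0])
```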
9

Azim, Qurat-Ul-Ain. "Information theoretic framework for stochastic sensitivity and specificity analysis in biochemical networks." Thesis, Imperial College London, 2016. http://hdl.handle.net/10044/1/52712.

Abstract:
Biochemical reaction networks involve many chemical species and are inherently stochastic and complex in nature. Reliable and organised functioning of such systems in varied environments requires that their behaviour is robust with respect to certain parameters while sensitive to other variations, and that they exhibit specific responses to various stimuli. There is a continuous need for improved models and methodologies to unravel the complex behaviour of the dynamics of such systems. In this thesis, we apply ideas from information theory to develop novel methods to study properties of biochemical networks. In the first part of the thesis, a framework for the study of parametric sensitivity in stochastic models of biochemical networks using entropies and mutual information is developed. The concept of noise entropy is introduced and its interplay with parametric sensitivity is studied as the system becomes more stochastic. Using the methodology for gene expression models, it is shown that noise can change the sensitivities of the system at various orders of parameter interaction. An approximate and computationally more efficient way of calculating the sensitivities is also developed using the unscented transform. Finally, the methodology is applied to a circadian clock model, illustrating the applicability of the approach to more complex systems. In the second part of the thesis, a novel method for specificity quantification in a receptor-ligand binding system is proposed in terms of mutual information estimates between appropriate stimulus and system response. The maximum specificity of 2 × 2 affinity matrices in a parametric setup is theoretically studied. A parameter optimisation methodology and specificity upper bounds are presented for maximum specificity estimates of a given affinity matrix. The quantification framework is then applied to experimental data from T-cell signalling. Finally, a generalisation of the scheme for stochastic systems is discussed.
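A plug-in histogram estimator is the simplest way to compute the kind of stimulus-response mutual information this framework builds on. The sketch below uses synthetic data and an assumed read-out model; the thesis works with richer estimators and biochemical network models:

```python
import numpy as np

def mutual_information(x, y, bins=20):
    """Plug-in (histogram) estimate of I(X;Y) in bits from paired samples."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
stimulus = rng.choice([0.0, 1.0], size=20_000)          # e.g. ligand identity
response = stimulus * 2.0 + rng.normal(0, 1.0, 20_000)  # noisy read-out
print("I(stimulus; response) ~", round(mutual_information(stimulus, response), 3), "bits")
```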
10

Wei, Xiaofan. "Stochastic Analysis and Optimization of Structures." University of Akron / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=akron1163789451.

11

Condeixa, Lucas Dias. "Evaluation of conflicting objectives and risk sensitivity in disaster preparedness through stochastic optimization." Pontifícia Universidade Católica do Rio de Janeiro, 2018. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=35730@1.

Abstract:
The decision-making process in humanitarian logistics comprises several types of priorities that are sometimes related to life-or-death situations. At this degree of importance, the objectives to be pursued by decision-makers in the event of a disaster, as well as the constraints of the problem, must be established to align both with the needs of the victims and with the existing limitations. This study aims at analyzing how conflicting priorities in an uncertainty-filled problem such as a disaster can impact the performance of the solution with respect to its efficiency, effectiveness and equity (3E). The dissertation presents the role of some decision-making trade-offs within the disaster preparedness phase. For this, stochastic optimization models are proposed using the concept of 3E performance and risk sensitivity, through the CVaR measure. Results indicate that the inclusion of risk aversion may lead to a more effective system on average. Another important point is that the cost minimization model including the shortage penalty provided a better-performing response than equity or coverage maximization independently. In addition, the budget constraint (efficiency), when poorly dimensioned, can make a coverage maximization problem (effectiveness) unnecessarily inefficient. It is concluded that prioritizing the joint maximization of efficiency and effectiveness, with an inequity restriction and risk sensitivity, makes the model more precise as regards the care of disaster victims.
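The risk measure used here, CVaR, has a convenient sample form (Rockafellar-Uryasev). A minimal sketch comparing two hypothetical pre-positioning plans; the loss distributions are invented for illustration:

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Sample CVaR via the Rockafellar-Uryasev formula
    CVaR_a = min_t { t + E[(L - t)+] / (1 - a) }; the minimizer t
    is the alpha-quantile (VaR), so we evaluate there directly."""
    var = np.quantile(losses, alpha)
    return var + np.mean(np.maximum(losses - var, 0.0)) / (1.0 - alpha)

rng = np.random.default_rng(7)
# Hypothetical unmet-demand losses for two pre-positioning plans.
plan_a = rng.lognormal(mean=3.0, sigma=0.9, size=100_000)
plan_b = rng.lognormal(mean=3.45, sigma=0.2, size=100_000)
for name, L in [("A", plan_a), ("B", plan_b)]:
    print(name, "mean:", round(L.mean(), 1), "CVaR95:", round(cvar(L), 1))
# A risk-averse planner may prefer the plan with the lower CVaR even
# when its mean loss is slightly higher.
```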
12

Pfeiffer, Laurent. "Sensitivity analysis for optimal control problems. Stochastic optimal control with a probability constraint." Palaiseau, Ecole polytechnique, 2013. https://pastel.hal.science/docs/00/88/11/19/PDF/thesePfeiffer.pdf.

Abstract:
This thesis is divided into two parts. In the first part, we study constrained deterministic optimal control problems and sensitivity analysis issues, from the point of view of abstract optimization. Second-order necessary and sufficient optimality conditions, which play an important role in sensitivity analysis, are also investigated. In this thesis, we are interested in strong solutions. We use this generic term for, roughly speaking, locally optimal controls for the L1-norm. We use two essential tools: a relaxation technique, which consists in using several controls simultaneously, and a decomposition principle, which is a particular second-order Taylor expansion of the Lagrangian. Chapters 2 and 3 deal with second-order necessary and sufficient optimality conditions for strong solutions of problems with pure, mixed, and final-state constraints. In Chapter 4, we perform a sensitivity analysis for strong solutions of relaxed problems with final-state constraints. In Chapter 5, we perform a sensitivity analysis for a problem of nuclear energy production. In the second part of the thesis, we study stochastic optimal control problems with a probability constraint. We study an approach by dynamic programming, in which the probability level is a supplementary state variable. In this framework, we show that the sensitivity of the value function with respect to the probability level is constant along optimal trajectories. We use this analysis to design numerical schemes for continuous-time problems. These results are presented in Chapter 6, in which we also study an application to asset-liability management.
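The dynamic-programming idea of the second part can be stated compactly (our notation, reconstructed from the abstract, so the symbols are assumptions rather than the thesis's):

```latex
V(t,x,p) \;=\; \sup_{u}\Big\{\, \mathbb{E}\big[\phi(X_T^{u}) \mid X_t = x\big]
\;:\; \mathbb{P}\big(g(X_T^{u}) \le 0\big) \ge p \,\Big\},
\qquad
\frac{\partial V}{\partial p}\big(t, X_t^{*}, p_t^{*}\big)
\ \text{constant along optimal trajectories } (X_t^{*}, p_t^{*}).
```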
13

Browne, Thomas. "Regression models and sensitivity analysis for stochastic simulators : applications to non-destructive examination." Thesis, Sorbonne Paris Cité, 2017. http://www.theses.fr/2017USPCB041/document.

Abstract:
Many industries perform non-destructive examination (NDE) in order to ensure the integrity of important components, the goal being to detect a hypothetical flaw before the occurrence of any undesirable event (breaks, leaks, etc.). As the NDE environment is highly random, it is legitimate to wonder about the reliability of such procedures. To this effect, their efficiency is quantified through probabilities of detection. These are modelled by increasing curves, functions of the flaw size to detect: Probability of Detection curves (PoD curves). This thesis aims to introduce tools to both build and study PoD curves through parsimonious strategies. Our approach mainly consists in modelling these curves by random cumulative distribution functions. From this point, it becomes necessary to define relevant quantities in order to describe their probability distributions. Kriging-based estimators for these values are introduced. Techniques are developed to carry out sequential strategies both for improving the PoD-curve estimators and for optimization problems. Besides, in order to quantify the influence of input parameters on the quality of detection, functional sensitivity analysis indices are provided, together with their respective estimators.
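A common parametric stand-in for a PoD curve is the Berens-style hit/miss model PoD(a) = Φ((ln a - μ)/σ), fitted by maximum likelihood. The thesis goes further, with random CDFs and Kriging, but the basic fit looks like this (synthetic data, assumed model):

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(3)
# Synthetic hit/miss inspection data: flaw sizes a and detections d.
a = rng.lognormal(0.0, 0.5, 400)
true_pod = norm.cdf((np.log(a) - np.log(1.0)) / 0.4)
d = rng.random(400) < true_pod

def nll(theta):
    """Bernoulli negative log-likelihood of the probit PoD model."""
    mu, log_sigma = theta
    p = norm.cdf((np.log(a) - mu) / np.exp(log_sigma))
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -np.sum(d * np.log(p) + (~d) * np.log(1 - p))

res = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
a90 = np.exp(mu_hat + sigma_hat * norm.ppf(0.90))  # size detected 90% of the time
print("a90 ~", round(a90, 3))
```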
14

Menin, Michel. "Parametric sensitivity study for wind power trading through stochastic reserve and energy market optimization." Thesis, Uppsala universitet, Institutionen för geovetenskaper, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-257502.

Abstract:
Trading wind power optimally in the energy and regulation markets offers possibilities for increasing revenues as well as impacting the security of the system in a positive way [33]. Bidding in both the energy and regulation markets can be done through a stochastic optimization of both markets. Stochastic optimization becomes possible once a probabilistic forecast is available through an ensemble forecast methodology. For stochastic optimization, the post-processing of the ensembles to generate the quantiles used in the optimization can be accomplished with different methodologies. In this study, we concentrate on the impact of the post-processing of ensembles on the stochastic optimization. The generation of the quantiles needed for the stochastic market optimization used herein is the main focus of the investigation. The impact of the price ratios between the energy and reserve markets is also investigated, to analyse their effect on revenues. Furthermore, this analysis is performed for both the US and Swedish markets.
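For a single delivery hour, the link between post-processed ensemble quantiles and the energy bid is newsvendor-like: the expected-cost-minimizing bid is the quantile set by the ratio of imbalance penalties. A sketch with invented ensemble and penalty values:

```python
import numpy as np

rng = np.random.default_rng(11)
# Hypothetical post-processed ensemble of wind power forecasts (MW)
# for one delivery hour (e.g. 50 ensemble members).
ensemble = rng.weibull(2.0, 50) * 40.0

pen_short = 20.0   # $/MWh penalty when production falls short of the bid
pen_spill = 12.0   # $/MWh lost when production exceeds the bid
# Minimizing E[pen_short*(E - W)+ + pen_spill*(W - E)+] over the bid E
# gives the quantile tau = pen_spill / (pen_short + pen_spill).
tau = pen_spill / (pen_short + pen_spill)
bid = np.quantile(ensemble, tau)
print(f"optimal quantile tau = {tau:.2f}, energy bid = {bid:.1f} MW")
```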
15

Backhoff, Julio Daniel. "Functional analytic approaches to some stochastic optimization problems." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät, 2015. http://dx.doi.org/10.18452/17138.

Abstract:
In this thesis we deal with utility maximization and stochastic optimal control from several points of view. We are interested in understanding how such problems behave under parameter uncertainty, under the robustness and the sensitivity paradigms respectively. Afterwards, we leave the single-agent world and tackle a two-agent problem in which the first agent delegates her investments to the second through a contract. First, we consider the robust utility maximization problem in financial market models, where we formulate conditions for its solvability without assuming compactness of the densities of the uncertainty set, the set of measures upon which the maximizing agent performs robust investments. These conditions are stated in terms of functional spaces which generally correspond to Modular spaces, through which we prove a minimax equality and the existence of optimal strategies. In complete markets the space is an Orlicz space, and after explicitly verifying its reflexivity we obtain in addition the existence of a worst-case measure, which we fully characterize. Secondly, we turn our attention to stochastic optimal control, where we provide a sensitivity analysis for some parameterized variants of such problems. The main tool is the correspondence between the adjoint states appearing in a (weak) stochastic Pontryagin principle and the Lagrange multipliers associated with the controlled equation when viewed as a constraint. The sensitivity analysis is then deployed in the case of convex problems and additive or multiplicative perturbations. In a final part, we proceed to principal-agent problems in discrete time. Here we apply, in great generality, tools from conditional analysis to the case of linear contracts and show that most results known in the literature for very specific instances of the problem carry over to a much broader setting. In particular, the existence of a first-best optimal contract and its implementability by the agent are obtained.
16

Whitall, Michael. "Using stochastic parameterisations to study the sensitivity of the atmosphere to its high-frequency variability." Thesis, University of Reading, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.654494.

17

Zillmer, Rüdiger. "Statistical properties and scaling of the Lyapunov exponents in stochastic systems." PhD thesis, Universität Potsdam, 2003. http://opus.kobv.de/ubp/volltexte/2005/125/.

Abstract:
This work comprises three studies, all concerned with a stochastic theory of the Lyapunov exponents. With the help of this theory, universal scaling laws which appear in coupled chaotic and disordered systems are investigated.

First, two continuous-time stochastic models for weakly coupled chaotic systems are introduced to study the scaling of the Lyapunov exponents with the coupling strength (coupling sensitivity of chaos). By means of the Fokker-Planck formalism, scaling relations are derived, which are confirmed by results of numerical simulations.

Next, coupling sensitivity is shown to exist for coupled disordered chains, where it appears as a singular increase of the localization length. Numerical findings for coupled Anderson models are confirmed by analytic results for coupled continuous-space Schrödinger equations. The resulting scaling relation of the localization length resembles the scaling of the Lyapunov exponent of coupled chaotic systems.

Finally, the statistics of the exponential growth rate of the linear oscillator with parametric noise are studied. It is shown that the distribution of the finite-time Lyapunov exponent deviates from a Gaussian one. By means of the generalized Lyapunov exponents the parameter range is determined where the non-Gaussian part of the distribution is significant and multiscaling becomes essential.
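The object studied in the third part, the distribution of the finite-time Lyapunov exponent of a linear oscillator with parametric noise, is easy to sample numerically. A crude Euler-Maruyama sketch; the discretization conventions and parameters are assumptions, not those of the thesis:

```python
import numpy as np

def finite_time_lyapunov(T=50.0, dt=0.01, omega=1.0, sigma=1.0,
                         n_real=2000, seed=5):
    """Distribution of the finite-time Lyapunov exponent
    Lambda_T = ln||(x, v)|| / T for the parametrically driven
    oscillator x'' + (omega^2 + sigma*xi_t) x = 0 (crude
    Euler-Maruyama; Ito/Stratonovich subtleties ignored)."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = np.ones(n_real); v = np.zeros(n_real)
    log_norm = np.zeros(n_real)
    for _ in range(n):
        noise = rng.normal(0.0, 1.0, n_real) * np.sqrt(dt)
        v += -(omega**2) * x * dt - sigma * x * noise
        x += v * dt
        # Renormalize to avoid overflow, accumulating the log growth.
        nrm = np.hypot(x, v)
        log_norm += np.log(nrm)
        x /= nrm; v /= nrm
    return log_norm / T

lam = finite_time_lyapunov()
skew = ((lam - lam.mean())**3).mean() / lam.std()**3
print("mean:", lam.mean(), "std:", lam.std(), "skewness:", skew)
# A clearly nonzero skewness signals the deviation from Gaussianity
# that the generalized Lyapunov exponents quantify.
```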
18

Ureh, Henry Chigozie. "Impacts of plug-in electric vehicle on residential electric distribution system using stochastic and sensitivity approach." DigitalCommons@CalPoly, 2011. https://digitalcommons.calpoly.edu/theses/642.

Abstract:
Plug-in Electric Vehicles (PEVs) are projected to become a viable means of transportation due to advances in technology and advocacy for green, eco-friendly energy solutions. These vehicles are powered partially, or in some cases solely, by the energy stored in their battery packs. The large size of these battery packs requires a large amount of energy to charge, and as the demand for PEVs increases, the increased energy demand needed to recharge PEV batteries could pose problems for the present electric distribution system. This study examines the potential impacts of PEVs on a residential electric distribution system at various penetration levels. An existing residential distribution network is modeled down to each household service point, and various sensitivity scenarios and stochastic patterns of PEV loads are simulated. Impact studies covering voltage drop, service transformer overload, energy loss, and transformer thermal loss-of-life expectancy are analyzed. Results from the study are reported and recommendations to mitigate the impacts are presented.
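The stochastic part of such a study boils down to Monte Carlo sampling of PEV charging coincident with household demand and checking transformer loading. A toy sketch with assumed ratings and distributions, not the thesis's modeled network:

```python
import numpy as np

rng = np.random.default_rng(21)
N = 10_000          # Monte Carlo trials
households = 10     # homes per 25-kVA service transformer (assumed)
penetration = 0.3   # fraction of homes with a PEV (assumed)
charger_kw = 6.6    # Level-2 charger rating (assumed)

# Baseline household evening demand (kW), illustrative distribution.
base = rng.normal(1.8, 0.5, (N, households)).clip(min=0.2)
# Each home independently owns a PEV and happens to charge at peak hour.
charging = (rng.random((N, households)) < penetration) * charger_kw

peak_kva = (base + charging).sum(axis=1) / 0.95  # assumed power factor
print("P(transformer load > 25 kVA):", (peak_kva > 25.0).mean())
```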
19

Ugurhan, Beliz. "Stochastic Strong Ground Motion Simulations On North Anatolian Fault Zone And Central Italy: Validation, Limitation And Sensitivity Analyses." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612413/index.pdf.

Abstract:
Assessment of potential ground motions in seismically active regions is essential for purposes of seismic design and analysis. Peak ground motion intensity values and the frequency content of seismic excitations are required for reliable seismic design, analysis and retrofitting of structures. In regions with sparse or no strong ground motion records, ground motion simulations provide physics-based synthetic records. These simulations provide not only the earthquake engineering parameters but also insight into the mechanisms of the earthquakes. This thesis presents strong ground motion simulations in three regions of intense seismic activity. A stochastic finite-fault simulation methodology with a dynamic corner frequency approach is applied to three case studies performed in the Düzce, L'Aquila and Erzincan regions. In the Düzce study, regional seismic source, propagation and site parameters are determined through validation of the simulations against the records. In the L'Aquila case study, in addition to the study of the regional parameters, the limitations of the method in terms of simulating directivity effects are also investigated. In the Erzincan case study, where there are very few records, the optimum model parameters are determined using a large set of simulations with an error-minimization scheme. Later, a parametric sensitivity study is performed to observe the variations in simulation results in response to small perturbations in the input parameters. Results of this study confirm that the stochastic finite-fault simulation method is an effective technique for generating realistic physics-based synthetic records of large earthquakes in near-field regions.
20

Amro, Rami M. A. "Nonlinear Stochastic Dynamics and Signal Amplifications in Sensory Hair Cells." Ohio University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1438694373.

21

Lin, Jessica. "Robust Modelling of the Glucose-Insulin System for Tight Glycemic Control of Critical Care Patients." Thesis, University of Canterbury. Mechanical, 2007. http://hdl.handle.net/10092/1570.

Abstract:
Hyperglycemia is prevalent in critical care, as patients experience stress-induced hyperglycemia even with no history of diabetes. Hyperglycemia has a significant impact on patient mortality, outcome and health care cost. Tight regulation can significantly reduce these negative outcomes, but achieving it remains clinically elusive, particularly with regard to what constitutes tight control and which protocols are optimal in terms of results and clinical effort. Hyperglycemia in critical care is not largely benign, as once thought, and has a deleterious effect on outcome. Recent studies have shown that tight glucose regulation to average levels of 6.1–7.75 mmol/L can reduce mortality by 17–45%, while also significantly reducing other negative clinical outcomes. However, clinical results are highly variable and there is little agreement on what levels of performance can be achieved and how to achieve them. A typical clinical solution is to use ad hoc protocols based primarily on experience, where large amounts of insulin, up to 50 U/hr, are titrated against glucose measurements variably taken every 1–4 hours. When combined with the unpredictable and sudden metabolic changes that characterise this aspect of critical illness and/or clinical changes in nutritional support, this approach results in highly variable blood glucose levels. The overall result is sustained periods of hyper- or hypo-glycemia, characterised by oscillations between these states, which can adversely affect clinical outcomes and mortality. The situation is exacerbated by exogenous nutritional support regimes with high dextrose content. Model-based predictive control can deliver patient-specific and adaptive control, ideal for such a highly dynamic problem. A simple, effective physiological model is presented in this thesis, focusing strongly on clinical control feasibility. This model has three compartments, for glucose utilisation, interstitial insulin and its transport, and insulin kinetics in blood plasma. There are two patient-specific parameters, the endogenous glucose removal and the insulin sensitivity. A novel integral-based parameter identification enables fast and accurate real-time model adaptation to individual patients and patient condition. Three stages of control algorithm development were trialled clinically in the Christchurch Hospital Department of Intensive Care Medicine. These control protocols are adaptive and patient-specific. It is found that glycemic control utilising both insulin and nutrition interventions is most effective. The third stage of protocol development, SPRINT, achieved 61% of patient blood glucose measurements within the 4–6.1 mmol/L desirable glycemic control range in 165 patients. In addition, 89% were within the 4–7.75 mmol/L clinically acceptable range. These values are percentages of the total number of measurements, of which 47% are two-hourly and the rest hourly. These results showed unprecedented tight glycemic control in critical care, but still struggle with patient variability and dynamics. Two stochastic models of insulin sensitivity for the critically ill population are derived and presented in this thesis. These models reveal the highly dynamic variation in insulin sensitivity under critical illness. The stochastic models can deliver probability intervals to support clinical control interventions. Hypoglycemia can thus be further avoided with probability-interval-guided intervention assessments.
This stochastic approach places glycemic control on a more informed and intelligible level. In "virtual patient" simulation studies, 72% of glycemic levels were within the 4–6.1 mmol/L desirable glycemic control range. The incidence of hypoglycemia was reduced to practically zero. These results suggest the clinical advances the stochastic model can bring. In addition, the stochastic models reflect critical patients' insulin-sensitivity-driven dynamics. Consequently, the models can create virtual patients to simulate clinical conditions. Thus, protocol developments can be optimised with guaranteed patient safety. Finally, the work presented in this thesis can act as a starting point for many other glycemic control problems in other environments. These areas range from cardiac critical care and neonatal critical care, which share the most similarities with the environment studied in this thesis, to general diabetes, where the population is growing exponentially worldwide. Furthermore, the same pharmacodynamic modelling and control concept can be applied to other human pharmacodynamic control problems. In particular, stochastic modelling can bring added knowledge to these control systems. Eventually, this added knowledge can lead clinical developments from protocol simulations to better clinical decision making.
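The integral-based identification idea, recasting the ODE so that the patient-specific parameters enter linearly and then solving a least-squares problem, can be illustrated on a deliberately simplified glucose model. This is not the Christchurch model; the profiles and parameter values below are invented:

```python
import numpy as np

# Minimal glucose kinetics, loosely in the spirit of the thesis:
# dG/dt = -pG*G - SI*G*I + P(t), with insulin I and feed rate P given.
rng = np.random.default_rng(2)
dt, n = 5.0, 120                       # 5-min samples, 10 hours
t = np.arange(n) * dt
I = 15 + 10 * np.sin(t / 120)          # assumed plasma insulin profile
P = 0.5 + 0.3 * (t % 180 < 60)         # assumed glucose appearance (feed)
pG_true, SI_true = 0.01, 0.001

G = np.empty(n); G[0] = 8.0
for k in range(n - 1):                 # forward Euler "measurements"
    G[k+1] = G[k] + dt * (-pG_true*G[k] - SI_true*G[k]*I[k] + P[k])
G_meas = G + rng.normal(0, 0.05, n)    # measurement noise

# Integral-based identification: integrate the ODE over [0, t] so the
# unknowns (pG, SI) appear linearly, then solve by least squares.
cum = lambda y: np.concatenate(([0.0], np.cumsum((y[1:] + y[:-1]) / 2 * dt)))
A = np.column_stack([-cum(G_meas), -cum(G_meas * I)])
b = G_meas - G_meas[0] - cum(P)
(pG_hat, SI_hat), *_ = np.linalg.lstsq(A, b, rcond=None)
print("identified pG, SI:", pG_hat, SI_hat)
```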
22

Liang, Di. "Estimating the economic losses from diseases and extended days open with a farm-level stochastic model." UKnowledge, 2013. http://uknowledge.uky.edu/animalsci_etds/22.

Abstract:
This thesis improved a farm-level stochastic model, using Monte Carlo simulation, to estimate the impact of health performance and market conditions on dairy farm economics. The main objective of this model was to estimate the costs of seven common clinical dairy diseases (mastitis, lameness, metritis, retained placenta, left displaced abomasum, ketosis, and milk fever) in the U.S. An online survey was conducted to collect veterinary fee, treatment cost, and producer labor data. The total disease costs were higher in multiparous cows than in primiparous cows. Left displaced abomasum had the greatest costs in all parities ($404.74 in primiparous cows and $555.79 in multiparous cows). Milk loss, treatment costs, and culling costs were the three largest cost categories for all diseases. A secondary objective of this model was to evaluate the dairy cow's value, the optimal culling decision, and the cost of days open with flexible model inputs. The dairy cow value under 2013 market conditions was lower than in previous studies due to the high slaughter and feed prices and the low replacement price. The first optimal replacement moment appeared in the middle of the first parity. Furthermore, the cost of days open was considerably influenced by market conditions.
23

Fukasaku, Kotaro. "Explorative study for stochastic failure analysis of a roughened bi-material interface: implementation of the size sensitivity based perturbation method." Thesis, Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/41114.

Abstract:
In our age, in which the use of electronic devices is expanding all over the world, their reliability and miniaturization have become crucial. This thesis studies one of the most frequent failure mechanisms in semiconductor packages, delamination of the interface, i.e., the separation of two bonded materials, in order to improve adhesion and, a fortiori, the reliability of microelectronic devices. It focuses on metal(-oxide)/polymer interfaces because they cover 95% of all existing interfaces. For several years, research at the mesoscopic scale (1-10 µm) has shown that the more roughened the interface surface, i.e., the sharper its asperities, the better the adhesion between the two materials. Because roughness exhibits extremely complex shapes, it is difficult to find a description that can be used for reliability analysis of interfaces. In order to investigate quantitatively the effect of roughness variation on adhesion properties, studies were first carried out using analytical fracture mechanics; numerical studies were then conducted with Finite Element Analysis. Both were done in a deterministic way, assuming an ideal profile repeated periodically. With the development of statistical and stochastic roughness representations on the one hand, and the emergence of probabilistic fracture mechanics on the other, the present work adds a stochastic framework to the previous studies. One of the Stochastic Finite Element Methods, the Perturbation method, is chosen for implementation because it can quantify the effect of geometric variations on the mechanical response, such as the displacement field. In addition, it can carry out at once what traditional Finite Element Analysis does with numerous simulations that require changing the geometric parameters each time. This method is developed analytically, then numerically, by implementing a module in the Finite Element package MSC Marc/Mentat. To validate the implementation, the Perturbation method is applied analytically and numerically to a three-point bending test on a beam, because the input of the Perturbation method in terms of roughness parameters is still being studied. The capabilities and limitations of the implementation are outlined. Finally, recommendations for using the implementation and for future work on roughness representation are discussed.
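The first-order perturbation method rests on differentiating the static FE system K(θ)u = f, which gives du/dθ = -K⁻¹(∂K/∂θ)u. A toy two-degree-of-freedom illustration with assumed spring values, not the thesis's interface model:

```python
import numpy as np

# Two-spring chain: theta scales the stiffness of the second spring.
k1, k2 = 100.0, 50.0
f = np.array([0.0, 1.0])

def K(theta):
    return np.array([[k1 + theta * k2, -theta * k2],
                     [-theta * k2,      theta * k2]])

theta0 = 1.0
u0 = np.linalg.solve(K(theta0), f)
dK = np.array([[k2, -k2], [-k2, k2]])       # analytic dK/dtheta for this K
du = -np.linalg.solve(K(theta0), dK @ u0)   # first-order sensitivity du/dtheta

# Propagating a small random variation of theta to the response:
# Var[u] is approximately (du/dtheta)^2 * Var[theta] to first order.
var_theta = 0.05**2
print("u0 =", u0, " du/dtheta =", du,
      " std(u) ~", np.abs(du) * np.sqrt(var_theta))
```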
24

Alodah, Abdullah. "Stochastic Assessment of Climate-Induced Risk for Water Resources Systems in a Bottom-Up Framework." Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/39761.

Abstract:
Significant challenges in water resources management arise because of the ever-increasing pressure on the world's heavily exploited and limited water resources. These stressors include demographic growth, intensification of agriculture, climate variability, and climate change. These challenges are usually tackled using a top-down approach, which suffers from many limitations, including the use of a limited set of climate change scenarios, the lack of a methodology to rank these scenarios, and the lack of credibility, particularly regarding extremes. The bottom-up approach, a recently introduced alternative, reverses the process by assessing the vulnerabilities of water resources systems to variations in future climates and determining the prospects of such a wide range of changes. While it solves some issues of the top-down approach, several issues remain unaddressed. The current project seeks to provide end-users and the research community with an improved version of the bottom-up framework for streamlining climate variability into water resources management decisions. The improvements tackled are: a) the generation of a sufficient number of climate projections that provide better coverage of the risk space; b) a methodology to quantitatively estimate the plausibility of a future desired or undesired outcome; and c) the optimization of the size of the projection pool to achieve the desired precision with minimum time and computing resources. The results will hopefully help to cope with present-day and future challenges induced mainly by climate. In the first part of the study, the adequacy of stochastically generated climate time series for water resources systems risk and performance assessment is investigated. A number of stochastic weather generators (SWGs) are first used to generate a large number of realizations (i.e., an ensemble of climate outputs) of precipitation and temperature time series. Each realization of the generated climate time series is then used individually as input to a hydrological model to obtain streamflow time series. The usefulness of the weather generators is evaluated by assessing how the statistical properties of simulated precipitation, temperature, and streamflow deviate from those of observations. This is achieved by plotting a large ensemble of (1) synthetic precipitation and temperature time series in a Climate Statistics Space (CSS), and (2) hydrological indices computed from simulated streamflow data in a Risk and Performance Indicators Space (RPIS). The performance of each weather generator is assessed using visual inspection and the Mahalanobis distance between statistics derived from observations and simulations. A case study was carried out using five different weather generators: two versions of WeaGETS, two versions of MulGETS, and the k-nearest neighbor weather generator (knn). In the second part of the thesis, the impacts of climate change were evaluated by generating a large number of representative climate projections. Large ensembles of future series are created by perturbing downscaled regional climate model outputs with a stochastic weather generator, and are then used as inputs to a hydrological model calibrated using observed data. Risk indices calculated with the simulated streamflow data are converted into probability distributions using kernel density estimation.
The results are multidimensional joint probability distributions of risk-relevant indices that provide estimates of the likelihood of unwanted events under a given watershed configuration and management policy. The proposed approach offers a more complete vision of the impacts of climate change and opens the door to a more objective assessment of adaptation strategies. The third part of the thesis deals with the estimation of the optimal number of SWG realizations needed to calculate risk and performance indices. The number of realizations required to reach a stable estimate is investigated using the Relative Root Mean Square Error and the Relative Error. While the results indicate that a single realization is not enough to adequately represent a given stochastic weather generator, they generally indicate that there is no major benefit in generating more than 100 realizations, as these are not notably different from results obtained using 1000 realizations. Adopting a smaller but carefully chosen number of realizations can significantly reduce the computational time and resources, and can therefore benefit a larger audience, particularly where high-performance machines are not easily accessible. The application was carried out in one pilot watershed, the South Nation Watershed in Eastern Ontario, yet the methodology will be of interest for Canada and beyond. Overall, the results contribute to making the bottom-up approach more objective and less computationally intensive, and hence more attractive to practitioners and researchers.
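The Mahalanobis distance used to score each weather generator against observations is straightforward to compute from the cloud of per-realization statistics. A sketch with invented statistic vectors:

```python
import numpy as np

def mahalanobis(obs_stats, sim_stats):
    """Mahalanobis distance between an observed statistics vector and
    the cloud of statistics computed from SWG realizations.
    sim_stats: (n_realizations, n_statistics) array."""
    mu = sim_stats.mean(axis=0)
    cov = np.cov(sim_stats, rowvar=False)
    diff = obs_stats - mu
    return float(np.sqrt(diff @ np.linalg.solve(cov, diff)))

rng = np.random.default_rng(8)
# Hypothetical statistics per realization:
# [mean precipitation, wet-day frequency, mean temperature].
sim = rng.normal([2.5, 0.30, 6.0], [0.2, 0.02, 0.4], size=(100, 3))
obs = np.array([2.6, 0.29, 6.3])
print("Mahalanobis distance:", round(mahalanobis(obs, sim), 2))
# Smaller distances indicate a generator whose statistics better
# envelop the observed climate.
```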
25

Zippenfennig, Claudio, Laura Niklaus, Katrin Karger, and Thomas L. Milani. "Subliminal electrical and mechanical stimulation does not improve foot sensitivity in healthy elderly subjects." Elsevier B.V, 2018. https://monarch.qucosa.de/id/qucosa%3A32464.

Abstract:
Objective: Deterioration of cutaneous perception may be one reason for the increased rate of falling in the elderly. The stochastic resonance phenomenon may compensate for this loss of information by improving the capability to detect and transfer weak signals. In the present study, we hypothesize that subliminal electrical and mechanical noise applied to the sole of the foot of healthy elderly subjects improves vibration perception thresholds (VPT). Methods: VPTs of 99 healthy elderly subjects were measured at 30 Hz at the heel and the first metatarsal head (MET I). Participants were randomly assigned to one of five groups: vibration (Vi-G), current (Cu-G), control (Co-G), placebo-vibration (Pl-Vi), and placebo-current (Pl-Cu). Vi-G and Cu-G were stimulated at 90% (subliminal) of their individual perception thresholds for five minutes in a standing position. Co-G received no stimulation. The placebo groups were treated with mock stimulation. VPTs were measured twice before the intervention (baseline (BASE) and pre-measurement (PRE)) and once after the intervention (post-measurement (POST)). Results: Significant differences were found between measurement conditions comparing BASE and POST, and PRE and POST. VPTs between groups within each measurement condition showed no significant differences. Vi-G was the only group that showed significantly higher VPTs in POST compared to BASE and PRE, which contradicts previous studies. Conclusion: We observed increased VPTs after subliminal mechanical stimulation. The pressure load of standing for five minutes combined with subliminal stimulation may have shifted the initial level of mechanoreceptor sensitivity, which may lead to a deterioration of the VPT. The subliminal electrical stimulation had no effect on VPT. Significance: Based on our results, we cannot confirm positive effects of subliminal electrical or mechanical stimulation on the sole of the foot.
26

Charnay, Clément. "Enhancing supervised learning with complex aggregate features and context sensitivity." Thesis, Strasbourg, 2016. http://www.theses.fr/2016STRAD025/document.

Abstract:
In this thesis, we study model adaptation in supervised learning. Firstly, we adapt existing learning algorithms to a relational representation of data. Secondly, we adapt learned prediction models to context change. In the relational setting, data are modeled by multiple entities linked by relationships. We handle these relationships using complex aggregate features. We propose stochastic optimization heuristics to include complex aggregates in relational decision trees and Random Forests, and assess their predictive performance on real-world datasets. We adapt prediction models to two kinds of context change. Firstly, we propose an algorithm to tune thresholds on pairwise scoring models to adapt to a change of misclassification costs. Secondly, we reframe numerical attributes with affine transformations to adapt to a change of attribute distribution between a learning and a deployment context. Finally, we extend these transformations to complex aggregates.
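The first kind of context adaptation, re-tuning a decision threshold on a scoring model when misclassification costs change, can be sketched directly. The score distributions below are synthetic, and the exhaustive sweep stands in for the thesis's algorithm:

```python
import numpy as np

def tune_threshold(scores, labels, c_fp, c_fn):
    """Pick the score threshold minimizing expected misclassification
    cost on validation data; rerun whenever the cost context changes."""
    order = np.argsort(scores)
    s, y = scores[order], labels[order]
    candidates = np.concatenate(([-np.inf], (s[1:] + s[:-1]) / 2, [np.inf]))
    costs = [c_fp * np.sum((s >= t) & (y == 0)) +
             c_fn * np.sum((s < t) & (y == 1)) for t in candidates]
    return candidates[int(np.argmin(costs))]

rng = np.random.default_rng(4)
y = (rng.random(2000) < 0.3).astype(int)
scores = np.where(y == 1, rng.normal(1.0, 1.0, 2000),
                          rng.normal(-1.0, 1.0, 2000))
for c_fp, c_fn in [(1, 1), (1, 10)]:   # deployment context changes costs
    t = tune_threshold(scores, y, c_fp, c_fn)
    print(f"costs (FP={c_fp}, FN={c_fn}) -> threshold {t:.2f}")
```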
27

Weaver, Josh. "The Self-Optimizing Inverse Methodology for Material Parameter Identification and Distributed Damage Detection." University of Akron / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=akron1428316985.

28

Önskog, Thomas. "The Skorohod problem and weak approximation of stochastic differential equations in time-dependent domains." Doctoral thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-25429.

Abstract:
This thesis consists of a summary and four scientific articles. All four articles consider various aspects of stochastic differential equations, and the purpose of the summary is to provide an introduction to this subject and to supply the notions required in order to fully understand the articles. In the first article we conduct a thorough study of the multi-dimensional Skorohod problem in time-dependent domains. In particular, we prove the existence of càdlàg solutions to the Skorohod problem with oblique reflection in time-independent domains with corners. We use this existence result to construct weak solutions to stochastic differential equations with oblique reflection in time-dependent domains. In the process of obtaining these results we also establish convergence results for sequences of solutions to the Skorohod problem and a number of estimates for solutions, with bounded jumps, to the Skorohod problem. The second article considers the problem of determining the sensitivities of a solution to a second-order parabolic partial differential equation with respect to perturbations in the parameters of the equation. We derive an approximate representation of the sensitivities and an estimate of the discretization error arising in the sensitivity approximation. We apply these theoretical results to the problem of determining the sensitivities of the price of European swaptions in a LIBOR market model with respect to perturbations in the volatility structure (the so-called 'Greeks'). The third article treats stopped diffusions in time-dependent graph domains with low regularity. We compare, numerically, the performance of one adaptive and three non-adaptive numerical methods with respect to order of convergence, efficiency and stability. In particular, we investigate whether the performance of the algorithms can be improved by a transformation which increases the regularity of the domain but, at the same time, reduces the regularity of the parameters of the diffusion. In the fourth article we use the existence results obtained in Article I to construct a projected Euler scheme for weak approximation of stochastic differential equations with oblique reflection in time-dependent domains. We prove theoretically that the order of convergence of the proposed algorithm is 1/2 and conduct numerical simulations which support this claim.
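The projected Euler scheme of the fourth article, specialized for illustration to normal reflection in a one-dimensional time-dependent interval (the thesis treats oblique reflection in general time-dependent domains):

```python
import numpy as np

def projected_euler(x0=0.5, T=1.0, n=1000, mu=0.1, sigma=0.3,
                    n_paths=20_000, seed=6):
    """Projected Euler scheme for an SDE reflected in the
    time-dependent interval [0, b(t)], b(t) = 1 + 0.5*sin(2*pi*t):
    take an unconstrained Euler-Maruyama step, then project back
    onto the domain. For the oblique-reflection analogue the thesis
    proves weak convergence of order 1/2."""
    rng = np.random.default_rng(seed)
    dt = T / n
    x = np.full(n_paths, x0)
    for k in range(n):
        t = (k + 1) * dt
        x = x + mu * dt + sigma * rng.normal(0, np.sqrt(dt), n_paths)
        x = np.clip(x, 0.0, 1.0 + 0.5 * np.sin(2 * np.pi * t))  # projection
    return x

x_T = projected_euler()
print("weak approximation of E[f(X_T)] with f = id:", x_T.mean())
```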
APA, Harvard, Vancouver, ISO, and other styles
29

Piveta, Alessandro. "Análise de desempenho de suspensão convencional e hidropneumática considerando a variabilidade dos parâmetros." [s.n.], 2012. http://repositorio.unicamp.br/jspui/handle/REPOSIP/265930.

Full text
Abstract:
Advisor: Pablo Siqueira Meirelles
Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica
Deterministic values are widely used in mechanical design to define the parameters that describe a system, with large safety factors adopted to cover the uncertainties related to external loads, the variability of material properties, environmental conditions and other factors that affect system performance. With the development of suitable methods, stochastic models that account for these uncertainties and variations can be introduced, bringing the model closer to reality. The present work analyzes the effect of the simultaneous variability of certain parameters on the performance of two suspension types for automotive vehicles: conventional and hydropneumatic. A mathematical model of a full vehicle with seven degrees of freedom, implemented in the software Matlab®, is used. The formulation for the hydropneumatic suspension is developed under the assumption that the gas contained in the accumulator undergoes a polytropic process. First, the deterministic model of the vehicle is subjected to transient excitations represented by functions describing a bump and a step of different dimensions. Then, a sensitivity analysis of the hydropneumatic stiffness is performed to identify the parameters with the greatest influence on that system. The Monte Carlo method is used as the stochastic solver throughout the simulations, which study the influence of the parameter variability on the vertical dynamics of the vehicle; a gamma distribution is assigned to each parameter. Finally, the responses of the probabilistic models are analyzed. The outputs investigated are the vertical acceleration of the center of gravity, the damping ratio, the tire-road contact force and the roll angle of the sprung mass.
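The stochastic step, gamma-distributed parameters pushed through the vehicle model by Monte Carlo, can be sketched as follows; the one-degree-of-freedom damping-ratio formula and the nominal values are illustrative stand-ins for the full seven-degree-of-freedom model.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

def gamma_from_mean_cv(mean, cv, size):
    """Gamma samples parametrized by mean and coefficient of variation."""
    shape = 1.0 / cv ** 2
    return rng.gamma(shape, mean / shape, size)

k = gamma_from_mean_cv(20_000.0, 0.10, n)   # spring stiffness [N/m]
c = gamma_from_mean_cv(1_500.0, 0.10, n)    # damping coefficient [N*s/m]
m = gamma_from_mean_cv(400.0, 0.05, n)      # sprung mass [kg]

zeta = c / (2.0 * np.sqrt(k * m))           # damping ratio of the 1-DOF model
print(zeta.mean(), zeta.std())              # spread induced by parameter variability
```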
Master's degree in Mechanical Engineering (concentration: Solid Mechanics and Mechanical Design)
APA, Harvard, Vancouver, ISO, and other styles
30

Зайцева, С. С., and S. S. Zaitseva. "Исследование стохастической динамики в моделях биохимической реакции : магистерская диссертация." Master's thesis, б. и, 2020. http://hdl.handle.net/10995/94598.

Full text
Abstract:
This work examines three nonlinear models proposed by Albert Goldbeter to describe enzymatic reactions in a living cell. Mathematically, these models are interesting for their slow-fast dynamics, canard-type self-oscillations, the extreme inhomogeneity of their deterministic phase portraits, and the great variability and coexistence of dynamic regimes. Under these conditions, even small random perturbations significantly change the dynamics of the system and induce such phenomena as stochastic excitability, multimodality, phantom attractors, and transitions from order to chaos. The study of these models provides an understanding of the main mechanisms behind these phenomena using methods of numerical and statistical analysis, as well as a theoretical approach based on the stochastic sensitivity function and the method of confidence domains.
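For a discrete-time system x_{k+1} = f(x_k) + eps*xi_k near a stable equilibrium, the stochastic sensitivity function and the confidence-domain construction reduce to a discrete Lyapunov equation plus an eigen-decomposition. A minimal sketch in Python/SciPy, with an invented stable Jacobian rather than one of Goldbeter's models:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Hypothetical stable Jacobian F at the equilibrium (|eigenvalues| < 1)
# and unit covariance S of the noise input.
F = np.array([[0.6, -0.3],
              [0.2,  0.5]])
S = np.eye(2)

# Stationary covariance of small deviations solves W = F W F^T + S.
W = solve_discrete_lyapunov(F, S)   # stochastic sensitivity matrix

# Confidence ellipse for noise intensity eps and fiducial probability P:
# semi-axes eps*sqrt(2*lam_i*ln(1/(1-P))) along the eigenvectors of W.
eps, P = 0.05, 0.95
lam, vecs = np.linalg.eigh(W)
semi_axes = eps * np.sqrt(2.0 * lam * np.log(1.0 / (1.0 - P)))
print(semi_axes)
```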
APA, Harvard, Vancouver, ISO, and other styles
31

Vyambwera, Sibaliwe Maku. "Mathematical modelling of the HIV/AIDS epidemic and the effect of public health education." Thesis, University of Western Cape, 2014. http://hdl.handle.net/11394/3360.

Full text
Abstract:
Magister Scientiae - MSc
HIV/AIDS is nowadays considered the greatest public health disaster of modern times. Its progression has challenged the global population for decades. Through mathematical modelling, researchers have studied different interventions against the HIV pandemic, such as treatment, education and condom use. Our research focuses on different compartmental models, with emphasis on the effect of public health education. From the statistical point of view, it is well known that public health educational programs contribute to the reduction of the spread of the HIV/AIDS epidemic. Many models have been studied towards understanding the dynamics of the HIV/AIDS epidemic, and the impact of ARV treatment has been observed and analysed by many researchers. Our research studies and investigates a compartmental model of HIV with treatment and an education campaign. We study the existence of equilibrium points and their stability. Original contributions of this dissertation are the modifications of the model of Cai et al. [1], which enable us to use optimal control theory to identify optimal roll-out strategies to control HIV/AIDS. Furthermore, we introduce randomness into the model, regarded as environmental perturbation of the system, and study the almost sure exponential stability of the disease-free equilibrium. Another contribution is the global stability analysis of the model of Nyabadza et al. [3]. The stability thresholds are compared for HIV/AIDS in the absence of any intervention to assess the possible community benefit of public health educational campaigns. We illustrate the results by way of simulation. The following papers form the basis of much of the content of this dissertation: [1] L. Cai, Xuezhi Li, Mini Ghosh, Baozhu Guo. Stability analysis of an HIV/AIDS epidemic model with treatment, 229 (2009) 313-323. [2] C.P. Bhunu, S. Mushayabasa, H. Kojouharov, J.M. Tchuenche. Mathematical Analysis of an HIV/AIDS Model: Impact of Educational Programs and Abstinence in Sub-Saharan Africa. J Math Model Algor 10 (2011), 31-55. [3] F. Nyabadza, C. Chiyaka, Z. Mukandavire, S.D. Hove-Musekwa. Analysis of an HIV/AIDS model with public-health information campaigns and individual withdrawal. Journal of Biological Systems, 18, 2 (2010) 357-375. Through this dissertation the author has contributed to two manuscripts [4] and [5], currently under review for publication in journals: [4] G. Abiodun, S. Maku Vyambwera, N. Marcus, K. Okosun, P. Witbooi. Control and sensitivity of an HIV model with public health education (under submission). [5] P. Witbooi, M. Nsuami, S. Maku Vyambwera. Stability of a stochastic model of HIV population dynamics (under submission).
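As a flavor of the compartmental approach, the toy sketch below (a deliberately simplified two-compartment caricature with invented rates, not the dissertation's system) lets an education effort u scale the transmission rate down by (1 - u):

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, mu, gamma = 0.5, 0.02, 0.1   # illustrative transmission, demographic, progression rates
u = 0.4                             # education effort, 0 <= u < 1

def rhs(t, y):
    s, i = y
    new_infections = (1.0 - u) * beta * s * i / (s + i)
    return [mu - new_infections - mu * s,      # susceptible fraction
            new_infections - (mu + gamma) * i] # infected fraction

sol = solve_ivp(rhs, [0.0, 200.0], [0.99, 0.01])
print(sol.y[1, -1])   # infected fraction at t = 200; decreases as u grows
```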
APA, Harvard, Vancouver, ISO, and other styles
32

Mulani, Sameer B. "Uncertainty Quantification in Dynamic Problems With Large Uncertainties." Diss., Virginia Tech, 2006. http://hdl.handle.net/10919/28617.

Full text
Abstract:
This dissertation investigates uncertainty quantification in dynamic problems. The Advanced Mean Value (AMV) method is used to calculate the probabilistic sound power, and its sensitivity, of elastically supported panels with small uncertainty (coefficient of variation). Sound power calculations are done using the Finite Element Method (FEM) and the Boundary Element Method (BEM). The sensitivities of the sound power are calculated through direct differentiation of the FEM/BEM/AMV equations, and the results are compared with Monte Carlo simulation (MCS). An improved method is developed using AMV, a metamodel, and MCS; this technique is applied to calculate the sound power of a composite panel using FEM and the Rayleigh integral. The proposed methodology shows considerable improvement both in accuracy and in computational efficiency. In systems with large uncertainties, the above approach does not work. Two Spectral Stochastic Finite Element Method (SSFEM) algorithms are therefore developed to solve stochastic eigenvalue problems using polynomial chaos. Presently, the approaches are restricted to problems with real and distinct eigenvalues. In both approaches, the system uncertainties are modeled by Wiener-Askey orthogonal polynomial functions, and Galerkin projection is applied in the probability space to minimize the weighted residual of the error of the governing equation. The first algorithm is based on the inverse iteration method; a modification is suggested to calculate higher eigenvalues and eigenvectors. This algorithm is applied to both discrete and continuous systems. In continuous systems, the uncertainties are modeled as Gaussian processes using the Karhunen-Loève (KL) expansion. The second algorithm is based on an implicit polynomial iteration method. It is found to be more efficient when applied to discrete systems; however, applying it to continuous systems results in ill-conditioned system matrices, which seriously limits its application. Lastly, an algorithm is developed to find the basis random variables of the KL expansion for non-Gaussian processes. The basis random variables are obtained via a nonlinear transformation of the marginal cumulative distribution function using the standard deviation. Results are obtained for three skewed distributions: log-normal, beta, and exponential. In all cases, the proposed algorithm matches the known solutions very well and can be applied to solve non-Gaussian processes using SSFEM.
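The KL expansion used for the continuous systems can be sketched on a grid: the covariance matrix of a discretized exponential-covariance Gaussian process (toy parameters of our choosing) is eigendecomposed and truncated.

```python
import numpy as np

n, L, corr_len = 200, 1.0, 0.2
x = np.linspace(0.0, L, n)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)   # exponential covariance

lam, phi = np.linalg.eigh(C)            # spectral decomposition of the covariance
lam, phi = lam[::-1], phi[:, ::-1]      # sort eigenpairs in descending order
m = 10                                   # truncation order

rng = np.random.default_rng(0)
xi = rng.standard_normal(m)              # the KL basis random variables
sample = phi[:, :m] @ (np.sqrt(lam[:m]) * xi)   # one truncated realization

print(lam[:m].sum() / lam.sum())         # variance fraction captured by m terms
```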
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
33

Pang, Weijie. "In the Wake of the Financial Crisis - Regulators’ and Investors’ Perspectives." Digital WPI, 2019. https://digitalcommons.wpi.edu/etd-dissertations/519.

Full text
Abstract:
Before the 2008 financial crisis, most research in financial mathematics focused on risk management and the pricing of options without considering the effects of counterparties' default, illiquidity problems, systemic risk, or the role of the repurchase agreement (Repo). During the 2008 financial crisis, a frozen Repo market led to a shutdown of short sales in the stock market, and cyclical interdependencies among financial corporations meant that the default of one firm seriously affected other firms and even the whole financial network. In this dissertation, we consider financial markets shaped by financial crisis from two distinct perspectives, an investor's and a regulator's. From an investor's perspective, models were recently proposed to compute the total valuation adjustment (XVA) of derivatives without considering a potential crisis in the market. In our research, we include a possible crisis by applying an alternating renewal process to describe switching between a normal financial status and a financial crisis status. We develop a framework for pricing the XVA of a European claim in this state-dependent setting, represent the price as the solution of a backward stochastic differential equation, and prove the existence and uniqueness of the solution. To study financial networks from a regulator's perspective, one popular method is the fixed-point-based approach of L. Eisenberg and T. Noe. In practice, however, there is no accurate record of interbank liabilities, so one has to estimate them in order to use Eisenberg-Noe type models. In our research, we conduct a sensitivity analysis of the Eisenberg-Noe framework and quantify the effect of estimation errors on the clearing payments. We show that the effect of misspecification of interbank connections on clearing payments can be described via directional derivatives that can be represented as solutions of fixed-point equations. We also compute the probability of observing clearing payment deviations of a certain magnitude.
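The Eisenberg-Noe clearing vector underlying the regulator's part is the fixed point of a monotone map and can be computed by Picard iteration. A sketch with an invented three-bank network (toy numbers, not data from the dissertation):

```python
import numpy as np

Lmat = np.array([[0.0, 2.0, 1.0],    # Lmat[i, j]: nominal liability of bank i to bank j
                 [1.0, 0.0, 2.0],
                 [1.0, 1.0, 0.0]])
e = np.array([1.0, 0.5, 0.6])        # outside assets of each bank

pbar = Lmat.sum(axis=1)              # total obligations (assumed > 0 here)
Pi = Lmat / pbar[:, None]            # relative liability matrix

# Clearing condition: p_i = min(pbar_i, e_i + sum_j Pi[j, i] * p_j)
p = pbar.copy()
for _ in range(1000):                # Picard iteration on a monotone map
    p_new = np.minimum(pbar, e + Pi.T @ p)
    if np.allclose(p_new, p):
        break
    p = p_new
print(p)                             # clearing payment vector
```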
APA, Harvard, Vancouver, ISO, and other styles
34

Сатов, А. В., and A. V. Satov. "Компьютерные методы исследования нелинейных динамических систем : магистерская диссертация." Master's thesis, б. и, 2021. http://hdl.handle.net/10995/99133.

Full text
Abstract:
This work describes the construction of confidence bands for stochastic chaos and the implementation of algorithms for studying n-dimensional models. The thesis considers a discrete model, given as a nonlinear dynamic system of difference equations, which describes the dynamics of consumer interaction. Two tasks were set and carried out in this work to extend the software toolkit for studying dynamic systems of this kind. For the two-dimensional case, a stochastic sensitivity analysis of chaos is performed through the construction of a confidence band using critical lines. In addition, an algorithm for constructing the outer boundary of a chaotic attractor is designed and implemented. A transition is then made to the n-dimensional version of the model (interaction of n consumers). Four algorithms are provided for studying the n-dimensional model: 1. construction of phase trajectories, 2. construction of bifurcation diagrams, 3. construction of mode maps, 4. computation of Lyapunov exponents. The implementation of these algorithms is described with an emphasis on parallel computation. The algorithms are implemented in the C# programming language (.NET platform) as a console application for running parallel computations on the computing cluster of the Ural Branch of the Russian Academy of Sciences (the «Uranus» supercomputer).
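As a flavor of the fourth algorithm, the sketch below estimates the Lyapunov exponent of the one-dimensional logistic map; the thesis itself treats n-dimensional consumer-interaction maps and distributes the loop across a cluster.

```python
import numpy as np

def lyapunov_logistic(r, n_iter=100_000, n_skip=1000, x0=0.4):
    """Average log-derivative along an orbit of x -> r*x*(1-x)."""
    x = x0
    for _ in range(n_skip):                 # discard the transient
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n_iter):
        x = r * x * (1.0 - x)
        acc += np.log(abs(r * (1.0 - 2.0 * x)))
    return acc / n_iter

print(lyapunov_logistic(3.9))   # > 0 signals chaos; roughly 0.5 for r = 3.9
```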
APA, Harvard, Vancouver, ISO, and other styles
35

Mesado, Melia Carles. "Uncertainty Quantification and Sensitivity Analysis for Cross Sections and Thermohydraulic Parameters in Lattice and Core Physics Codes. Methodology for Cross Section Library Generation and Application to PWR and BWR." Doctoral thesis, Universitat Politècnica de València, 2017. http://hdl.handle.net/10251/86167.

Full text
Abstract:
This PhD study, developed at Universitat Politècnica de València (UPV), aims to cover the first phase of the benchmark released by the expert group on Uncertainty Analysis in Modeling (UAM-LWR). The author's main contribution to the benchmark is the development of a MATLAB program, requested by the benchmark organizers, which is used to generate neutronic libraries to distribute among the benchmark participants. The UAM benchmark aims to determine the uncertainty introduced by coupled multi-physics and multi-scale LWR analysis codes. The benchmark is subdivided into three phases: 1. Neutronic phase: obtain collapsed and homogenized problem-dependent cross sections and criticality analyses. 2. Core phase: standalone thermohydraulic and neutronic codes. 3. System phase: coupled thermohydraulic and neutronic codes. This thesis covers the objectives of the first phase. Specifically, a methodology is developed to propagate the uncertainty of cross sections and other neutronic parameters through a lattice physics code and a core simulator. An uncertainty and sensitivity (U&S) analysis is performed on the cross sections contained in the ENDF/B-VII nuclear library. Their uncertainty is propagated through the lattice physics code SCALE6.2.1, including the collapse and homogenization phase, up to the generation of problem-dependent neutronic libraries. Afterward, the uncertainty contained in these libraries can be further propagated through a core simulator, in this study PARCSv3.2. The SAMPLER module, available in the latest release of SCALE, and the DAKOTA 6.3 statistical tool are used for the U&S analysis. As part of this process, a methodology to obtain neutronic libraries in NEMTAB format, to be used in a core simulator, is also developed. A code-to-code comparison with CASMO-4 is used as verification. The whole methodology is tested on a Boiling Water Reactor (BWR); nevertheless, there is no limitation regarding its use with any other type of nuclear reactor. The Gesellschaft für Anlagen- und Reaktorsicherheit (GRS) stochastic methodology for uncertainty quantification is used. This methodology relies on the high-fidelity model and nonparametric sampling to propagate the uncertainty. As a result, the number of samples, determined using the revised Wilks' formula, does not depend on the number of input parameters but only on the desired confidence and uncertainty of the output parameters. Moreover, the output probability distribution functions (PDFs) are not required to be normal. The main disadvantage is that each input parameter must have a pre-defined PDF. Where possible, input PDFs are defined using information found in the related literature; otherwise, the uncertainty definition is based on expert judgment. A second scenario is used to propagate the uncertainty of different thermohydraulic parameters through the coupled code TRACE5.0p3/PARCSv3.0. In this case, a PWR is used and a control rod drop transient is simulated. As a new feature, the core is modeled channel-by-channel following a fully 3D discretization; no other study using such a detailed 3D core was found. This U&S analysis also uses the GRS methodology and DAKOTA 6.3.
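The Wilks sample-size rule invoked by the GRS methodology is a one-line computation. A sketch of the classical first-order and the revised second-order one-sided 95/95 counts:

```python
# Smallest number N of code runs so that the largest (first order) or the
# second-largest (revised, second order) output bounds the g-quantile of the
# output distribution with confidence b.
def wilks_first_order(g=0.95, b=0.95):
    n = 1
    while 1.0 - g ** n < b:
        n += 1
    return n

def wilks_second_order(g=0.95, b=0.95):
    n = 2
    while 1.0 - g ** n - n * (1.0 - g) * g ** (n - 1) < b:
        n += 1
    return n

print(wilks_first_order())    # 59 runs for one-sided 95/95
print(wilks_second_order())   # 93 runs when the second-largest run is used
```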
Mesado Melia, C. (2017). Uncertainty Quantification and Sensitivity Analysis for Cross Sections and Thermohydraulic Parameters in Lattice and Core Physics Codes. Methodology for Cross Section Library Generation and Application to PWR and BWR [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/86167
APA, Harvard, Vancouver, ISO, and other styles
36

Graciani, Guillaume. "Three-dimensional stochastic interferometry: theory and applications to high-sensitivity optical metrology and light scattering amplification. Random dynamic interferometer: cavity amplified speckle spectroscopy using a highly symmetric coherent field created inside a closed Lambertian optical cavity. 3D stochastic interferometry detects picometer dynamics of an optical volume. Cavity Amplified Speckle Spectroscopy expands light scattering methods to transparent or miniature samples. Super-resolution provided by the arbitrarily strong superlinearity of the blackbody radiation." Thesis, Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAX058.

Full text
Abstract:
The power of optical metrology generally requires simple geometries with precise alignment and well-controlled optical phases. In the present thesis, we develop instead the notion of chaos interferometry, using an optical field with maximal geometric disorder and phase randomness. We show that stochasticity leads to a very high interferometric sensitivity and opens up the possibility of a wide range of new optical measurements and a new method we call Cavity Amplified Speckle Spectroscopy. The key idea is to inject a very small bandwidth monochromatic laser into a cavity with high-albedo Lambertian reflectivity, which acts as a high-gain random resonator. A 3D coherent Lambertian billiard is obtained, filled with a 3D random field that is statistically uniform in space and invariant by rotation. At any given point P, it can be described as the coherent superposition of a large number of plane waves randomly taken from a unique statistical distribution that independently combines (1) a spherically symmetric distribution of the wave vector on a sphere ||k|| = k0, with (2) a uniform distribution of the phase on [0, 2pi], and (3) a polarization state uniformly distributed on the Poincaré sphere. The resulting random 3D speckle pattern remains constant with time as long as the diffusion of the laser's wavelength can be neglected; at longer times, however, it behaves ergodically. This work represents the first experimental realization of the notion of a 3D random field proposed by Berry, and it also relates to investigations on classical light entanglement. The concepts of a high-gain random resonator, or coherent Lambertian billiard, correspond to a new kind of field in optics, which obeys neither the wave equation nor the diffusion equation, and should lead to new theoretical and experimental investigations. Practically, with a slow enough diffusion of the input phase and a small enough photon-number noise, the speckle intensity field fluctuates and becomes ergodic only if the geometry of the cavity is not constant, or if it contains a medium with a non-constant optical path length distribution or polarization. Using intensity decorrelation spectra obtained between 100 MHz and 0.01 Hz from single speckles, we demonstrate the possibility of measuring picometer variations of the cavity geometry and of detecting sub-angstrom motion of scatterers in solution. Chaos interferometry can also be used to amplify previously undetectable scattering signals, and we show a miniaturized light scattering setup working with microliter volumes and quasi-transparent systems. A patent was filed for a range of applications including seismic and acoustic vibration sensing, laser phase noise characterization, and measurements of highly diluted and poorly scattering samples.
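The measurement principle, a normalized intensity autocorrelation g2(tau) of a single speckle, can be mimicked on synthetic data. The trace below is a crude stand-in (a few interfering modes with diffusing phases, not a model of the actual cavity field) that reproduces the decay from the speckle limit toward 1:

```python
import numpy as np

rng = np.random.default_rng(3)
n_samples, tau_c, n_modes = 200_000, 50.0, 8

# Synthetic single-speckle intensity: interfering modes whose phases diffuse
# with correlation time ~tau_c samples.
field = sum(np.exp(1j * np.cumsum(rng.standard_normal(n_samples)) / np.sqrt(tau_c))
            for _ in range(n_modes))
intensity = np.abs(field) ** 2

def g2(i, max_lag=400):
    """Normalized intensity autocorrelation g2(tau)."""
    m = i.mean()
    return np.array([np.mean(i[: i.size - k] * i[k:]) / m ** 2
                     for k in range(1, max_lag)])

corr = g2(intensity)
print(corr[0], corr[-1])   # starts near 2 (developed speckle), decays toward 1
```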
APA, Harvard, Vancouver, ISO, and other styles
37

Reutenauer, Victor. "Algorithmes stochastiques pour la gestion du risque et l'indexation de bases de données de média." Thesis, Université Côte d'Azur (ComUE), 2017. http://www.theses.fr/2017AZUR4018/document.

Full text
Abstract:
This thesis addresses several control and optimization problems for which only approximate solutions are currently available. On the one hand, we develop techniques aiming to reduce or remove approximations in order to obtain more accurate, or even exact, solutions. On the other hand, we develop new approximation methods to solve larger-scale problems more quickly. We study numerical methods for simulating stochastic differential equations and for improving the computation of expectations. We implement quantization-based techniques for the construction of control variates, as well as stochastic gradient methods for solving stochastic control problems. We are also interested in clustering methods linked to quantization, and in information compression by neural networks. The problems studied are motivated not only by finance, such as stochastic control for option hedging in incomplete markets, but also by the processing of large media databases, commonly called Big Data, in chapter 5. Theoretically, we propose upper bounds on the convergence of the numerical methods used: for the search for an optimal hedging strategy in an incomplete market in chapter 3, and for an extension of the Beskos-Roberts technique for exact simulation of stochastic differential equations in chapter 4. In chapter 2 we present an original use of the Karhunen-Loève decomposition for variance reduction of expectation estimators.
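The control-variate construction of chapter 2 can be miniaturized as follows; here a plain Monte Carlo estimate of E[exp(X)] is improved with the trivially integrable control g(X) = X, whereas the thesis builds the control from a Karhunen-Loève decomposition.

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.standard_normal(100_000)

f = np.exp(x)     # target: E[exp(X)] = exp(1/2)
g = x             # control variate with known mean E[X] = 0

beta = np.cov(f, g)[0, 1] / g.var()      # optimal control coefficient
plain = f.mean()
controlled = (f - beta * g).mean()       # same expectation, lower variance

print(plain, controlled, np.exp(0.5))
print(f.var(), (f - beta * g).var())     # variance reduction achieved
```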
APA, Harvard, Vancouver, ISO, and other styles
38

Kucek, Martin. "Stochastická analýza smykového porušování železobetonových nosníků." Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2017. http://www.nusl.cz/ntk/nusl-265231.

Full text
Abstract:
The diploma thesis focuses on the load-bearing response of a bridge structure made of KA-73 girders. The response is solved by nonlinear finite element analysis at both the deterministic and the stochastic level. Within the stochastic analysis, the Latin Hypercube Sampling simulation technique is used. Material degradation in the form of reinforcement corrosion is considered, together with the expected decrease of the structure's service life. The thesis concludes with a sensitivity analysis evaluating the influence of the input material parameters on the load-bearing response of the structure.
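Latin Hypercube Sampling itself fits in a few lines. In the sketch below the marginals (a concrete strength and a reinforcement-area factor) are invented placeholders for the thesis's material parameters:

```python
import numpy as np
from scipy.stats import norm

def lhs(n_samples, n_dims, rng):
    """One stratum per sample in each dimension, permuted across dimensions."""
    u = (np.arange(n_samples)[:, None] + rng.random((n_samples, n_dims))) / n_samples
    for d in range(n_dims):
        u[:, d] = u[rng.permutation(n_samples), d]   # decouple the dimensions
    return u

rng = np.random.default_rng(0)
u = lhs(30, 2, rng)

# Map the uniform strata to physical marginals (hypothetical values):
fc = norm.ppf(u[:, 0], loc=30.0, scale=3.0)    # concrete strength [MPa]
As = norm.ppf(u[:, 1], loc=1.0, scale=0.05)    # reinforcement-area factor [-]
print(fc.min(), fc.max())   # every stratum of the marginal is hit exactly once
```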
APA, Harvard, Vancouver, ISO, and other styles
39

Bucas, Simon. "Evaluation de la fiabilité des éléments de charpente de grue à tour." Thesis, Clermont-Ferrand 2, 2015. http://www.theses.fr/2015CLF22539/document.

Full text
Abstract:
Tower cranes are lifting appliances that are used cyclically on construction sites, so accounting for the fatigue phenomenon in the design of crane structural members is essential. This phenomenon is usually taken into account in standards by means of deterministic rules intended to ensure structural safety under various operating conditions. Although it provides satisfactory results in most cases, the deterministic approach does not allow the reliability of crane structural members to be evaluated as a function of their operating time. From this point of view, probabilistic approaches overcome this difficulty by providing relevant tools to characterize and propagate the uncertainties related to fatigue through a mechanical model. An original probabilistic approach accounting for these uncertainties in the fatigue design of tower crane structural members is proposed in this manuscript. It relies on the definition of two probability density functions, representing respectively the strength variability of crane welded joints on the one hand, and the dispersion of operating conditions (stress) on the other. The definition of the strength distribution stems from the capitalization of a large number of welded-joint fatigue test results, while the characterization of the stress distribution relies on a two-level model fed by various data sets collected from crane monitoring on different construction sites. The results of the reliability analysis presented in this manuscript demonstrate the relevance of probabilistic approaches in the fatigue design of tower crane structural members.
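The two-density idea reduces, in its simplest form, to a stress-strength reliability computation: with a strength density for the welded joints and a stress density from site monitoring, the failure probability is P(stress > strength). A Monte Carlo sketch with illustrative lognormal parameters:

```python
import numpy as np

rng = np.random.default_rng(11)
n = 1_000_000

# Hypothetical marginals (not the manuscript's calibrated distributions):
strength = rng.lognormal(mean=np.log(120.0), sigma=0.10, size=n)  # joint endurance [MPa]
stress   = rng.lognormal(mean=np.log(80.0),  sigma=0.20, size=n)  # operating stress [MPa]

pf = np.mean(stress > strength)   # Monte Carlo failure probability
print(pf)                         # shrinks as the margin between densities grows
```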
APA, Harvard, Vancouver, ISO, and other styles
40

Scotti, Simone. "Applications of the error theory using Dirichlet forms." PhD thesis, Université Paris-Est, 2008. http://tel.archives-ouvertes.fr/tel-00349241.

Full text
Abstract:
This thesis is devoted to the study of applications of error theory using Dirichlet forms. Our work is split into three parts. The first deals with models described by stochastic differential equations. After a short technical chapter, an innovative model for order books is proposed: we assume that the bid-ask spread is not an imperfection but an intrinsic property of exchange markets, and the uncertainty is carried by the Brownian motion driving the asset. We find that spread evolutions can be evaluated using closed formulae, and we estimate the impact of the underlying uncertainty on the related contingent claims. Afterwards, we deal with the PBS model, a new model for pricing European options. The key idea is to distinguish the market volatility from the parameter used by traders for hedging: the former is assumed constant, while the latter is an erroneous subjective estimate of it. We prove that this model predicts a bid-ask spread and a smiled implied volatility curve. Major properties of this model are the existence of closed formulae for prices, the impact of the underlying drift, and an efficient calibration strategy. The second part deals with models described by partial differential equations; linear and non-linear PDEs are examined separately. In the linear case, we show some interesting relations between error theory and wavelet theory. For non-linear PDEs, we study the sensitivity of the solution using error theory. Except when an exact solution exists, two approaches are detailed: first, we analyze the sensitivity obtained by taking "derivatives" of the discrete governing equations; then, we study the PDEs solved by the sensitivities of the theoretical solutions. In both cases, we show that the sharp and the bias solve linear PDEs depending on the solution of the original PDE itself, and we suggest algorithms to evaluate the sensitivities numerically. Finally, the third part is devoted to stochastic partial differential equations. Our analysis is split into two chapters: first, we study the transmission of an uncertainty, present in the initial conditions, to the solution of an SPDE; then, we analyze the impact of a perturbation of the functional terms of the SPDE and of the coefficient of the related Green function. In both cases, we show that the sharp and the bias satisfy linear SPDEs depending on the solution of the original SPDE itself.
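The first-order bookkeeping behind the sharp and the bias admits a tiny numerical check: for an input X = x + noise of small variance v, error theory propagates a variance term f'(x)^2*v and a bias term (1/2)*f''(x)*v. The sketch below verifies both against Monte Carlo for a toy function of our choosing:

```python
import numpy as np

def propagate(df, d2f, x, v):
    """First-order error propagation: (variance, bias) of f(x + small noise)."""
    return df(x) ** 2 * v, 0.5 * d2f(x) * v

# Check for f(x) = exp(x), whose first and second derivatives are exp(x).
x, v = 0.3, 1e-4
var_th, bias_th = propagate(np.exp, np.exp, x, v)

rng = np.random.default_rng(5)
samples = np.exp(x + np.sqrt(v) * rng.standard_normal(1_000_000))
print(var_th, samples.var())                  # agree at first order in v
print(bias_th, samples.mean() - np.exp(x))    # agree at first order in v
```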
APA, Harvard, Vancouver, ISO, and other styles
41

Nasri, Amin. "On the Dynamics and Statics of Power System Operation : Optimal Utilization of FACTS Devicesand Management of Wind Power Uncertainty." Doctoral thesis, KTH, Elektriska energisystem, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-154576.

Full text
Abstract:
Nowadays, power systems are dealing with some new challenges raised by the major changes that have taken place since the 80's, e.g., deregulation in electricity markets, significant increase of electricity demand and, more recently, large-scale integration of renewable energy resources such as wind power. Therefore, system operators must make some adjustments to accommodate these changes into the future of power systems. One of the main challenges is maintaining system stability, since the extra stress caused by the above changes reduces the stability margin and may lead to the rise of many undesirable phenomena. The other important challenge is to cope with the uncertainty and variability of renewable energy sources, which make power systems more stochastic in nature and less controllable. Flexible AC Transmission Systems (FACTS) have emerged as a solution to help power systems with these new challenges. This thesis aims to appropriately utilize such devices in order to increase the transmission capacity and flexibility, improve the dynamic behavior of power systems and integrate more renewable energy into the system. To this end, the most appropriate locations and settings of these controllable devices need to be determined. This thesis mainly looks at (i) rotor angle stability, i.e., small-signal and transient stability, and (ii) system operation under wind uncertainty. In the first part of this thesis, trajectory sensitivity analysis is used to determine the most suitable placement of FACTS devices for improving rotor angle stability, while in the second part, optimal settings of such devices are found to maximize the level of wind power integration. As a general conclusion, it was demonstrated that FACTS devices, installed in proper locations and tuned appropriately, are effective means to enhance system stability and to handle wind uncertainty. The last objective of this thesis work is to propose an efficient solution approach based on Benders' decomposition to solve a network-constrained AC unit commitment problem in a wind-integrated power system. The numerical results show the validity, accuracy and efficiency of the proposed approach.

The Doctoral Degrees issued upon completion of the programme are issued by Comillas Pontifical University, Delft University of Technology and KTH Royal Institute of Technology. The degrees awarded are official in Spain, the Netherlands and Sweden, respectively.

APA, Harvard, Vancouver, ISO, and other styles
42

Абрамова, Е. П., and E. P. Abramova. "Анализ стохастических моделей взаимодействия популяций : магистерская диссертация." Master's thesis, б. и, 2020. http://hdl.handle.net/10995/87577.

Full text
Abstract:
This thesis considers a two-dimensional predator-prey population model that takes into account competition among prey and competition of predators for resources other than prey, as well as a three-dimensional predator-two-prey model with intraspecific and interspecific competition among prey and competition of predators for resources other than prey. The existence and stability of the models' attractors are analyzed, and bifurcation diagrams and typical phase portraits are constructed. For the stochastic models, the sensitivity of the attractors is analyzed using the stochastic sensitivity function technique. Using confidence domains (ellipses and ellipsoids for equilibria, bands and tori for cycles), the following stochastic phenomena are studied: transitions between attractors, the generation of large-amplitude oscillations, and the extinction of populations. The probabilistic mechanisms of population extinction are investigated.
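One of the studied phenomena, noise-induced extinction, can be mimicked on a toy discrete predator-prey map (a Ricker-type coupling of our own devising, not the thesis's system) by counting noisy trajectories that crash to zero:

```python
import numpy as np

rng = np.random.default_rng(2)

def extinction_probability(eps, n_runs=500, n_steps=500):
    """Fraction of noisy trajectories in which a population crashes to ~0."""
    count = 0
    for _ in range(n_runs):
        x, y = 0.5, 0.2                    # prey and predator densities
        for _ in range(n_steps):
            x = max(x * np.exp(2.0 - x - y) + eps * rng.standard_normal(), 0.0)
            y = max(y * np.exp(-1.0 + 1.5 * x) + eps * rng.standard_normal(), 0.0)
            if x < 1e-6 or y < 1e-6:       # one species is (numerically) extinct
                count += 1
                break
    return count / n_runs

print(extinction_probability(0.05))   # grows with the noise intensity eps
```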
APA, Harvard, Vancouver, ISO, and other styles
43

Беляев, А. В., and A. V. Belyaev. "Анализ стохастических моделей живых систем с дискретным временем : магистерская диссертация." Master's thesis, б. и, 2020. http://hdl.handle.net/10995/87578.

Full text
Abstract:
This work studies three discrete-time models of living systems. The first chapter considers a one-dimensional model of neural activity defined by a piecewise-smooth map. It is shown that in this one-dimensional model the presence of random disturbances leads to spike generation. Two mechanisms of spike generation caused by random perturbation of one of the parameters are investigated, and it is illustrated that the coexistence of two attractors is not the only cause of spiking. To predict the noise intensity needed to generate spikes, the confidence-domain method, based on the stochastic sensitivity function, is used. The main characteristics of interspike intervals as functions of the noise intensity are also described. The second chapter is devoted to applying the stochastic sensitivity function method to attractors of a piecewise-smooth one-dimensional map describing population dynamics. The first stage of the study is a parametric analysis of the possible regimes of the deterministic model: determining the zones of existence of stable equilibria and chaotic attractors. The theory of critical points is used to determine the parametric boundaries of the chaotic attractor. When the system is affected by random noise, the spread of random states around equilibria and chaotic attractors is described on the basis of the stochastic sensitivity function. A comparative analysis of the influence of parametric and additive noise on the attractors is carried out. Using confidence intervals, the probabilistic mechanisms of population extinction under the influence of noise are studied, and changes in the parametric boundaries of population persistence under random disturbances are analyzed. The third chapter analyzes the possible dynamic regimes of the Lotka-Volterra model in the deterministic and stochastic cases. A regime map is constructed in dependence on two parameters of the system. Parametric zones of existence of stable equilibria, cycles, closed invariant curves and chaotic attractors are studied, and period-doubling, Neimark-Sacker and crisis bifurcations are described. The complex shape of the basins of attraction is demonstrated. In addition to the deterministic system, the stochastic system describing external random disturbances is studied in detail. In the case of chaos, an algorithm for finding the critical lines that describe the boundary of the chaotic attractor is given. Based on the computed sensitivity of the attractors, confidence bands and ellipses are constructed to describe the spread of random states around the deterministic attractors.
APA, Harvard, Vancouver, ISO, and other styles
44

Niang, Ibrahima. "Quantification et méthodes statistiques pour le risque de modèle." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSE1015/document.

Full text
Abstract:
In finance, model risk is the risk of financial loss resulting from the use of models. It is a complex risk covering many different situations, in particular estimation risk (a model is generally used with estimated parameters) and the risk of model misspecification (using an inadequate model). This thesis focuses, on the one hand, on the quantification of model risk in the construction of yield or credit curves and, on the other hand, on the consistency of Sobol indices with the theory of stochastic orders. It is divided into three chapters. Chapter 1 studies the model risk embedded in yield- and credit-curve construction methods. We analyse in particular the uncertainty associated with the construction of yield or credit curves, and in this context we derive no-arbitrage bounds for discount factors and survival probabilities that are perfectly compatible with the quotes of the associated reference instruments at the most liquid maturities. In Chapter 2, we link global sensitivity analysis with the theory of stochastic orders, analysing in particular how Sobol indices are transformed when the uncertainty of a parameter increases in the sense of the dispersive or excess wealth stochastic orders. Chapter 3 focuses on the quantile contrast index. We first link this index to the risk measure CTE, and then analyse under which circumstances an increase in a parameter's uncertainty, in the sense of the dispersive or excess wealth orders, implies an increase of the quantile contrast index. We finally propose an estimation procedure for this index and show, under suitable assumptions, that the proposed estimator is consistent and asymptotically normal.
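For readers unfamiliar with the Sobol indices studied in Chapter 2, here is a minimal Monte Carlo sketch of the first-order index using the pick-freeze (Saltelli) estimator; the toy model and sample size are assumptions, not the financial models of the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Toy model standing in for the thesis's quantities of interest.
    return x[:, 0] + 2.0 * x[:, 1] + x[:, 0] * x[:, 1]

n, d = 100_000, 2
A = rng.uniform(size=(n, d))      # two independent input samples
B = rng.uniform(size=(n, d))
yA, yB = model(A), model(B)
var_y = np.concatenate([yA, yB]).var()

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]           # "freeze" all inputs except the i-th
    # First-order index, Saltelli et al. (2010) estimator.
    S_i = np.mean(yB * (model(ABi) - yA)) / var_y
    print(f"S_{i + 1} ≈ {S_i:.3f}")
```

The thesis's question is then how such indices move when the distribution of one input is made more dispersed in the stochastic-order sense.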
APA, Harvard, Vancouver, ISO, and other styles
45

Kouassi, Attibaud. "Propagation d'incertitudes en CEM. Application à l'analyse de fiabilité et de sensibilité de lignes de transmission et d'antennes." Thesis, Université Clermont Auvergne‎ (2017-2020), 2017. http://www.theses.fr/2017CLFAC067/document.

Full text
Abstract:
Nowadays, most EMC analyses of electronic devices and systems are based on quasi-deterministic approaches in which the internal and external model parameters are assumed to be perfectly known, and the uncertainties affecting them are accounted for on the responses through large safety margins. The drawback of such approaches is that they are not only overly conservative but also unsuited to situations in which the goal of the study requires taking the random character of the parameters into account through appropriate stochastic modelling, via random variables, processes, or fields. In recent years, this probabilistic approach has been the subject of several research efforts in the EMC community, both nationally and internationally. The work presented in this thesis contributes to this research and has a dual purpose: (1) develop a probabilistic methodology and implement the associated numerical tools for the reliability and sensitivity analysis of electronic devices and systems, restricting the stochastic modelling to random variables; (2) extend this study to stochastic modelling by random processes and random fields through a prospective analysis based on the solution of the telegrapher's equations (partial differential equations) with random coefficients. The probabilistic approach mentioned in (1) consists in computing the failure probability of an electronic device or system with respect to a given failure criterion and in determining the relative importance of each random parameter involved. The methods chosen for this purpose are adaptations to the EMC framework of methods developed in stochastic mechanics for uncertainty propagation studies. The failure probabilities are computed with two broad families of methods: those based on an approximation of the limit-state function associated with the failure criterion, and Monte Carlo methods based on the numerical simulation of the model's random variables and the statistical estimation of the target probabilities. For the sensitivity analysis, a local approach and a global approach are retained. These methods are first tested on academic applications to highlight their interest for the EMC field, and then applied to transmission-line and antenna problems more representative of reality. In the prospective analysis, advanced solution methods are proposed, based on spectral techniques requiring the polynomial chaos and Karhunen-Loève expansions of the random processes and fields present in the models. These methods have undergone encouraging numerical tests, which are not presented in the thesis report for lack of time to analyse them completely.
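A minimal sketch of the Monte Carlo route described above: estimate the failure probability P(g(X) ≤ 0) for a hypothetical limit-state function g of standard normal inputs, together with the binomial standard error of the estimate. The limit state and sample size are illustrative assumptions, not the EMC models of the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

def g(x):
    # Hypothetical limit state: failure when g(x) <= 0 (not from the thesis).
    return 3.0 - np.abs(x[:, 0] + 0.5 * x[:, 1])

n = 1_000_000
x = rng.standard_normal((n, 2))       # standard normal input variables
fail = g(x) <= 0.0
pf = fail.mean()                      # crude Monte Carlo estimator of P_f
se = np.sqrt(pf * (1.0 - pf) / n)     # binomial standard error
print(f"P_f ≈ {pf:.2e} ± {1.96 * se:.1e} (95% half-width)")
```

Limit-state approximation methods (FORM-type) replace this sampling with a local approximation of g around its most probable failure point, trading accuracy for far fewer model evaluations.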
APA, Harvard, Vancouver, ISO, and other styles
46

Forrester, Marie Leanne. "Epidemic models and inference for the transmission of hospital pathogens." Queensland University of Technology, 2006. http://eprints.qut.edu.au/16419/.

Full text
Abstract:
The primary objective of this dissertation is to utilise, adapt and extend current stochastic models and statistical inference techniques to describe the transmission of nosocomial pathogens, i.e. hospital-acquired pathogens, and multiply-resistant organisms within the hospital setting. The emergence of higher levels of antibiotic resistance is threatening the long-term viability of current treatment options and placing greater emphasis on the use of infection control procedures. The relative importance and value of various infection control practices are often debated, and there is a lack of quantitative evidence concerning their effectiveness. The methods developed in this dissertation are applied to data on methicillin-resistant Staphylococcus aureus occurrence in intensive care units to quantify the effectiveness of infection control procedures. Analysis of infectious disease or carriage data is complicated by dependencies within the data and partial observation of the transmission process. Dependencies within the data are inherent because the risk of colonisation depends on the number of other colonised individuals. The colonisation times, chain and duration are often not directly observable, making only partial observation of the transmission process possible. Within a hospital setting, routine surveillance monitoring permits knowledge of interval-censored colonisation times. However, consideration needs to be given to the possibility of false negative outcomes when relying on observations from routine surveillance monitoring. SI (Susceptible, Infected) models are commonly used to describe community epidemic processes and allow for any inherent dependencies. Statistical inference techniques, such as the expectation-maximisation (EM) algorithm and Markov chain Monte Carlo (MCMC), can be used to estimate the model parameters when only partial observation of the epidemic process is possible. These methods appear well suited to the analysis of hospital infectious disease data but need to be adapted for the short patient stays arising from migration. This thesis focuses on the use of Bayesian statistics to explore the posterior distributions of the unknown parameters. MCMC techniques are introduced to overcome the analytical intractability caused by partial observation of the epidemic process. Statistical issues such as model adequacy and MCMC convergence assessment are discussed throughout the thesis. The new methodology allows the quantification of the relative importance of different transmission routes and the benefits of hospital practices, in terms of changed transmission rates. Evidence-based decisions can therefore be made on the impact of infection control procedures, which is otherwise difficult on the basis of clinical studies alone. The methods are applied to data describing the occurrence of methicillin-resistant Staphylococcus aureus within intensive care units in hospitals in Brisbane and London.
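To make the SI ward model concrete, here is a minimal Gillespie-style simulation of colonisation in a ward of fixed size, with a cross-transmission route, a background acquisition route, and discharge of colonised patients; all rates and the ward size are illustrative assumptions, not estimates from the thesis data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative rates (per day): cross-transmission, background acquisition,
# discharge (a discharged colonised patient is replaced by a susceptible).
beta, nu, mu = 0.10, 0.02, 1.0 / 7.0
N, I = 20, 1                       # ward size, initially colonised patients
t, T = 0.0, 365.0
times, counts = [t], [I]

while t < T:
    rate_acq = (beta * I / N + nu) * (N - I)   # susceptibles acquiring
    rate_dis = mu * I                          # colonised patients discharged
    total = rate_acq + rate_dis
    if total == 0.0:
        break
    t += rng.exponential(1.0 / total)          # time to next event
    I += 1 if rng.uniform() < rate_acq / total else -1
    times.append(t)
    counts.append(I)

print(f"events: {len(times)}, mean colonised: {np.mean(counts):.2f}")
```

In the thesis the event times are only interval-censored by surveillance swabs, which is precisely why MCMC over the unobserved colonisation times is needed rather than direct likelihood maximisation.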
APA, Harvard, Vancouver, ISO, and other styles
47

Gandibleux, Jean. "Contribution à l'évaluation de sûreté de fonctionnement des architectures de surveillance/diagnostic embarquées. Application au transport ferroviaire." Phd thesis, Université de Valenciennes et du Hainaut-Cambresis, 2013. http://tel.archives-ouvertes.fr/tel-00990970.

Full text
Abstract:
In rail transport, the cost and availability of rolling stock are major issues. To optimise the maintenance cost of a rail transport system, one solution is to detect and diagnose failures better. Currently, centralised monitoring/diagnosis architectures are reaching their limits and call for innovation. This technological innovation can take the form of embedded, distributed, communicating monitoring/diagnosis architectures that detect and localise failures faster and validate them in the operational context of the train. The present doctoral work, carried out within the FUI SURFER project (SURveillance active FERroviaire) coordinated by Bombardier, aims to propose a methodological approach for evaluating the dependability of monitoring/diagnosis architectures. To this end, a generic characterisation and modelling of monitoring/diagnosis architectures based on the stochastic Petri net formalism is proposed. These generic models include the communication networks (and their associated failure modes), which are a critical point of the monitoring/diagnosis architectures considered. The proposed models were implemented and validated theoretically by simulation, and a sensitivity study of these monitoring/diagnosis architectures with respect to certain influential parameters was carried out. Finally, the generic models are applied to a real railway case, the passenger access systems of trains, which are critical in terms of availability and diagnosability.
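As a toy illustration of the kind of dependability quantity such stochastic Petri net models produce, the sketch below estimates by Monte Carlo the steady-state availability of a single monitored component with exponential failure and repair (the two-state Markov model a minimal Petri net reduces to) and compares it to the analytic value μ/(λ+μ); the rates are assumptions, not SURFER parameters.

```python
import numpy as np

rng = np.random.default_rng(3)

lam, mu = 1.0 / 1000.0, 1.0 / 24.0    # failure and repair rates (per hour)
T, n_runs = 100_000.0, 50

avail = []
for _ in range(n_runs):
    t, up_time, up = 0.0, 0.0, True
    while t < T:
        # Exponential dwell time in the current state, truncated at horizon T.
        dwell = min(rng.exponential(1.0 / (lam if up else mu)), T - t)
        if up:
            up_time += dwell
        t += dwell
        up = not up
    avail.append(up_time / T)

print(f"simulated availability {np.mean(avail):.5f}, "
      f"analytic {mu / (lam + mu):.5f}")
```

A full architecture model adds places and transitions for sensors, communication links and their failure modes, and the sensitivity study then varies the corresponding rates.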
APA, Harvard, Vancouver, ISO, and other styles
48

Lepine, Paul. "Recalage stochastique robuste d'un modèle d'aube de turbine composite à matrice céramique." Thesis, Bourgogne Franche-Comté, 2017. http://www.theses.fr/2017UBFCD051/document.

Full text
Abstract:
This thesis deals with the stochastic updating of dynamic models of ceramic matrix composite turbine blades. It is part of the uncertainty quantification framework for model validation, and its aim is to provide decision-support tools for design-office engineers. Indeed, the considerable dispersion observed during the experimental campaigns rules out deterministic updating methods. After a state of the art on the relationship between uncertainty and mechanical science, the Verification & Validation approach is introduced as the process by which the credibility of numerical models is established. Two stochastic updating methods, able to determine the statistical distribution of the parameters, are then compared on an academic example. Taking uncertainties into account does not remove potential compensating effects between parameters; criteria are therefore developed to detect these disturbing phenomena. Info-gap theory is then employed as a means of modelling this lack of knowledge. Paired with the stochastic updating method, a robust stochastic updating approach is proposed, ensuring a trade-off between the model's fidelity to the tests and its robustness to the lack of knowledge. These tools are finally applied to a ceramic matrix composite turbine blade finite element model.
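A minimal sketch of the info-gap robustness idea described above: for a scalar stand-in model with a nominal updated parameter, compute the worst-case model-test misfit over an uncertainty horizon α and find the largest α whose worst case still meets a fidelity tolerance. The model, measurement values, and tolerance are all assumptions, not the blade finite element model.

```python
import numpy as np

def model(theta):
    # Scalar stand-in for the turbine blade finite element model.
    return np.array([theta, 2.0 * theta])

y_meas = np.array([1.05, 1.90])          # hypothetical test data
theta_hat = 1.0                          # nominal (updated) parameter value

def worst_misfit(alpha, n=201):
    # Worst-case misfit over the info-gap set |theta - theta_hat| <= alpha*theta_hat.
    thetas = np.linspace(theta_hat * (1 - alpha), theta_hat * (1 + alpha), n)
    return max(np.linalg.norm(model(th) - y_meas) for th in thetas)

tol = 0.40                               # fidelity tolerance on the misfit
alphas = np.linspace(0.0, 1.0, 201)
robustness = max((a for a in alphas if worst_misfit(a) <= tol), default=0.0)
print(f"robustness at tol={tol}: alpha ≈ {robustness:.3f}")
```

Plotting robustness against the tolerance yields the fidelity-robustness trade-off curve that the abstract refers to: demanding a tighter fit to the tests leaves less tolerance for unmodelled uncertainty.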
APA, Harvard, Vancouver, ISO, and other styles
49

Riahi, Hassen. "Analyse de structures à dimension stochastique élevée : application aux toitures bois sous sollicitation sismique." Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2013. http://tel.archives-ouvertes.fr/tel-00881187.

Full text
Abstract:
High stochastic dimension is a recurrent problem in probabilistic analyses of structures: the number of evaluations of the mechanical model grows exponentially when the number of uncertain parameters is large. To overcome this difficulty, a two-step approach is proposed in this thesis. The first step determines the effective stochastic dimension by ranking the uncertain parameters using screening methods. Once the parameters dominating the variability of the model response are identified, they are modelled as random variables, while the remaining parameters are fixed at their respective mean values in the stochastic computation itself. This computation is the second step of the proposed approach, in which the dimension decomposition method is used to characterise the randomness of the model response through the estimation of statistical moments and the construction of the probability density. This approach saves up to 90% of the computation time required by classical stochastic methods. It is then used to assess the integrity of the timber-framed roof of a single-family house located on a site of strong seismic hazard. In this context, the analysis of the structural behaviour is based on a finite element model in which the timber joints are modelled by an anisotropic law with hysteresis and the seismic action is represented by eight natural accelerograms provided by BRGM. These accelerograms represent different soil types according to the Eurocode 8 classification. Roof failure is defined as the damage recorded in the joints located on the bracing and anti-buckling elements reaching a critical level fixed from test results. Deterministic analyses of the finite element model showed that the roof withstands the seismic hazard of the town of Le Moule in Guadeloupe. The probabilistic analyses showed that, among the 134 random variables representing the uncertainty in the nonlinear behaviour of the joints, only 15 contribute effectively to the variability of the mechanical response, which made it possible to reduce the stochastic dimension in the computation of the statistical moments. Based on the estimates of the mean and standard deviation, the variability of the damage in the joints located on the bracing elements was shown to be greater than that of the damage in the joints located on the anti-buckling elements; moreover, it is more significant for the signals most damaging to the structure.
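A minimal sketch of the screening step described above, using one-at-a-time elementary effects in the spirit of the Morris method to rank inputs before the full stochastic computation; the toy model, number of repetitions, and step size are assumptions, not the roof finite element model.

```python
import numpy as np

rng = np.random.default_rng(4)

def model(x):
    # Toy response standing in for the finite element model.
    return x[0] + 10.0 * x[1] + 0.1 * x[1] * x[2]

d, r, delta = 3, 50, 0.1
mu_star = np.zeros(d)                    # mean absolute elementary effect

for _ in range(r):
    x = rng.uniform(size=d)              # random base point in [0, 1]^d
    y0 = model(x)
    for i in range(d):
        xp = x.copy()
        # Step up unless that would leave the unit cube, then step down.
        xp[i] = xp[i] + delta if xp[i] + delta <= 1.0 else xp[i] - delta
        mu_star[i] += abs(model(xp) - y0) / delta

mu_star /= r
ranking = np.argsort(mu_star)[::-1]
print("mu* =", np.round(mu_star, 3), " ranking:", ranking + 1)
```

Inputs with negligible mu* are then frozen at their means, shrinking the stochastic dimension before the moment and density computations.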
APA, Harvard, Vancouver, ISO, and other styles
50

Lin, Yi-Shan, and 林宜珊. "Sensitivity Analysis for Stochastic-flow Networks." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/05916045660604529057.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Industrial Management
100 (academic year, ROC calendar)
Many real-world systems, such as logistics networks, computer networks, and electric power networks, can be modelled as networks constructed from a set of arcs and a set of nodes so that their performance can be analysed. Sensitivity analysis of network reliability is therefore a goal most managers pursue, and determining a plan for improving network reliability has become a crucial issue: how can network reliability be improved, and how can its decline be limited? Moreover, since each arc/node has several possible capacities following a probability distribution and may fail, such systems should be regarded as stochastic-flow networks. Most previous research extended importance measures to explore the factors important to network reliability. This study addresses three sensitivity analysis techniques for analysing how changes in the arcs affect network reliability: changes in probability, changes in arc capacity, and improvement potential. The main goal is to apply these techniques to characterise the network and to plan for improving network reliability. Network reliability is evaluated using Minimal Paths (MPs) and the Recursive Sum of Disjoint Products (RSDP) algorithm. The sensitivity analysis techniques are applied to four network models: a stochastic-flow network, a stochastic electric power network, a time-based computer network, and a stochastic project network. The same conditions are set for the sensitivity analysis of the stochastic-flow network model; for the other three network models, which relate to daily life, sensitivity analysis is used to find ways of improving network reliability or keeping it stable.
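A minimal Monte Carlo sketch of the reliability notion used in this thesis, R_d = P(max flow ≥ d), for a hypothetical four-arc network of two parallel two-arc paths with discrete random capacities, together with a finite-difference sensitivity to improving one arc's capacity distribution. The topology, distributions, and demand d are assumptions, and the thesis's exact MP/RSDP computation is not reproduced here.

```python
import numpy as np

def reliability(arc_probs, d=2, n=200_000, seed=5):
    # arc_probs[k] = P(capacity of arc k = 0, 1, 2). The network is two
    # parallel s->t paths of two arcs each, so the max flow is the sum of
    # the two path bottlenecks. A fixed seed gives common random numbers
    # across calls, reducing noise in finite-difference sensitivities.
    rng = np.random.default_rng(seed)
    caps = np.stack([rng.choice(3, size=n, p=p) for p in arc_probs])
    flow = np.minimum(caps[0], caps[1]) + np.minimum(caps[2], caps[3])
    return float((flow >= d).mean())

p = np.array([0.1, 0.3, 0.6])            # baseline capacity distribution
base = [p, p, p, p]
r0 = reliability(base)

# Sensitivity: shift 5% of arc 1's probability mass from capacity 0 to 2.
improved = [np.array([0.05, 0.3, 0.65]), p, p, p]
print(f"R_d = {r0:.4f}, ΔR_d (improve arc 1) = {reliability(improved) - r0:+.4f}")
```

Repeating the perturbation arc by arc ranks the arcs by improvement potential, which is the kind of plan for raising network reliability the abstract describes.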
APA, Harvard, Vancouver, ISO, and other styles