To see the other types of publications on this topic, follow the link: Brownian Motion model.

Dissertations / Theses on the topic 'Brownian Motion model'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Brownian Motion model.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Lampo, Aniello, Soon Hoe Lim, Jan Wehr, Pietro Massignan, and Maciej Lewenstein. "Lindblad model of quantum Brownian motion." AMER PHYSICAL SOC, 2016. http://hdl.handle.net/10150/622483.

Full text
Abstract:
The theory of quantum Brownian motion describes the properties of a large class of open quantum systems. Nonetheless, its description in terms of a Born-Markov master equation, widely used in the literature, is known to violate the positivity of the density operator at very low temperatures. We study an extension of existing models, leading to an equation in the Lindblad form, which is free of this problem. We study the dynamics of the model, including the detailed properties of its stationary solution, for both constant and position-dependent coupling of the Brownian particle to the bath, focusing in particular on the correlations and the squeezing of the probability distribution induced by the environment.
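To make the Lindblad-form dynamics above concrete, here is a minimal, generic sketch (not the authors' specific model) that evolves a damped harmonic oscillator under a Lindblad master equation with the QuTiP library and reads off the late-time position variance; the frequency, damping rate and thermal occupation are invented illustrative values.

    # Sketch: Lindblad-form master equation for a damped oscillator (QuTiP).
    # Parameters (omega, gamma, n_th) are illustrative, not taken from the paper.
    import numpy as np
    from qutip import destroy, mesolve, coherent

    N = 30                      # Fock-space truncation
    omega, gamma, n_th = 1.0, 0.1, 0.5
    a = destroy(N)
    x = (a + a.dag()) / np.sqrt(2.0)
    H = omega * a.dag() * a
    # Lindblad collapse operators for coupling to a thermal bath
    c_ops = [np.sqrt(gamma * (n_th + 1)) * a, np.sqrt(gamma * n_th) * a.dag()]
    rho0 = coherent(N, 2.0)     # displaced initial state
    times = np.linspace(0.0, 50.0, 500)
    result = mesolve(H, rho0, times, c_ops, e_ops=[x, x * x])
    var_x = np.array(result.expect[1]) - np.array(result.expect[0]) ** 2
    print("late-time position variance <x^2> - <x>^2:", var_x[-1])

Because the generator is in Lindblad form, the density operator stays positive throughout the evolution, which is the property the abstract emphasises.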
APA, Harvard, Vancouver, ISO, and other styles
2

Mota, Pedro José dos Santos Palhinhas. "Brownian motion with drift threshold model." Doctoral thesis, FCT - UNL, 2008. http://hdl.handle.net/10362/1766.

Full text
Abstract:
In this thesis we implement estimating procedures in order to estimate threshold parameters for continuous-time threshold models driven by stochastic differential equations. The first procedure is based on the EM (expectation-maximization) algorithm applied to the threshold model built from the Brownian motion with drift process. The second procedure mimics one of the fundamental ideas in the estimation of thresholds in the time series context, that is, conditional least squares estimation. We implement this procedure not only for the threshold model built from the Brownian motion with drift process but also for more generic models such as the ones built from the geometric Brownian motion or the Ornstein-Uhlenbeck process. Both procedures are implemented for simulated data, and the least squares estimation procedure is also implemented for real data of daily prices from a set of international funds. The first fund is the PF-European Sustainable Equities-R fund from the Pictet Funds company and the second is the Parvest Europe Dynamic Growth fund from the BNP Paribas company. The data for both funds are daily prices from the year 2004. The last fund to be considered is the Converging Europe Bond fund from the Schroder company and the data are daily prices from the year 2005.
European Community's Human Potential Programme under contract HPRN-CT-2000-00100, DYNSTOCH, and by PRODEP III (medida 5 - Acção 5.3)
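As a rough illustration of the conditional least squares idea mentioned in the abstract (not the thesis' implementation), the sketch below simulates a two-regime Brownian motion with drift and recovers the threshold by minimising the conditional sum of squared residuals over a grid; the single threshold, the two drift values and the known diffusion coefficient are simplifying assumptions.

    # Sketch: simulate a two-regime Brownian motion with drift and recover the
    # threshold by conditional least squares over a grid (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)
    dt, n = 0.01, 20000
    r_true, mu_low, mu_high, sigma = 0.5, 1.0, -1.0, 1.0

    x = np.empty(n + 1)
    x[0] = 0.0
    for k in range(n):
        mu = mu_low if x[k] <= r_true else mu_high
        x[k + 1] = x[k] + mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()

    dx, xk = np.diff(x), x[:-1]

    def cls_objective(r):
        low, high = xk <= r, xk > r
        # regime-wise drift estimates, then the residual sum of squares
        m_lo = dx[low].mean() / dt if low.any() else 0.0
        m_hi = dx[high].mean() / dt if high.any() else 0.0
        resid = dx - np.where(low, m_lo, m_hi) * dt
        return np.sum(resid ** 2)

    grid = np.linspace(np.quantile(xk, 0.1), np.quantile(xk, 0.9), 200)
    r_hat = grid[np.argmin([cls_objective(r) for r in grid])]
    print("true threshold:", r_true, "estimated threshold:", round(r_hat, 3))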
APA, Harvard, Vancouver, ISO, and other styles
3

Betz, Volker. "Gibbs measures relative to Brownian motion and Nelson's model." [S.l. : s.n.], 2002. http://deposit.ddb.de/cgi-bin/dokserv?idn=964465647.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Endres, Derek. "Development and Demonstration of a General-Purpose Model for Brownian Motion." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1307459444.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Mbona, Innocent. "Portfolio risk measures and option pricing under a Hybrid Brownian motion model." Diss., University of Pretoria, 2017. http://hdl.handle.net/2263/64068.

Full text
Abstract:
The 2008/9 financial crisis intensified the search for realistic return models that capture real market movements. The assumed statistical distribution of financial returns plays a crucial role in the evaluation of risk measures and in the pricing of financial instruments. In this dissertation, we discuss an empirical study on the evaluation of traditional portfolio risk measures and on option pricing under the hybrid Brownian motion model developed by Shaw and Schofield. Under this model, we derive probability density functions that have a fat-tailed property, such that “25-sigma” or worse events are more probable. We then estimate Value-at-Risk (VaR) and Expected Shortfall (ES) using four equity stocks listed on the Johannesburg Stock Exchange, including the FTSE/JSE Top 40 index. We apply the historical method and the variance-covariance (VC) method in the valuation of VaR. Under the VC method, we adopt the GARCH(1,1) model to deal with the volatility clustering phenomenon. We backtest the VaR results and discuss our findings for each probability density function. Furthermore, we apply the hybrid model to price European-style options and compare its pricing performance to that of the classical Black-Scholes model.
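As a generic illustration of the variance-covariance step described above (not the dissertation's fitted results), the following sketch filters a return series through a GARCH(1,1) recursion with assumed parameters and reports one-day Gaussian VaR and Expected Shortfall; the return series, parameter values and the Gaussian tail are all assumptions.

    # Sketch: one-day 99% VaR and ES from a GARCH(1,1) conditional variance,
    # using the parametric (variance-covariance) Gaussian approach.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(1)
    returns = 0.01 * rng.standard_normal(1000)     # stand-in for real daily returns

    omega, alpha, beta = 1e-6, 0.08, 0.90          # assumed GARCH(1,1) parameters
    var_t = np.var(returns)
    for r in returns:                              # filter up to today's variance
        var_t = omega + alpha * r ** 2 + beta * var_t

    sigma = np.sqrt(var_t)
    level = 0.99
    z = norm.ppf(1 - level)
    VaR = -z * sigma                               # loss quantile, reported positive
    ES = sigma * norm.pdf(z) / (1 - level)         # Gaussian expected shortfall
    print(f"1-day sigma = {sigma:.4%}, VaR(99%) = {VaR:.4%}, ES(99%) = {ES:.4%}")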
Dissertation (MSc)--University of Pretoria, 2017.
National Research Fund (NRF), University of Pretoria Postgraduate bursary and the General Studentship bursary
Mathematics and Applied Mathematics
MSc
Unrestricted
APA, Harvard, Vancouver, ISO, and other styles
6

Salopek, Donna Mary. "Tolerance to arbitrage, inclusion of fractional Brownian motion to model stock price fluctuations." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq22176.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Salopek, Donna Mary. "Tolerance to arbitrage: inclusion of fractional Brownian motion to model stock price fluctuations." Carleton University dissertation, Mathematics and Statistics. Ottawa, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Walljee, Raabia. "The Levy-LIBOR model with default risk." Thesis, Stellenbosch : Stellenbosch University, 2015. http://hdl.handle.net/10019.1/96957.

Full text
Abstract:
Thesis (MSc)--Stellenbosch University, 2015
ENGLISH ABSTRACT : In recent years, the use of Lévy processes as a modelling tool has come to be viewed more favourably than the use of the classical Brownian motion setup. The reason for this is that these processes provide more flexibility and also capture more of the ’real world’ dynamics of the model. Hence the use of Lévy processes for financial modelling is a motivating factor behind this research presentation. As a starting point a framework for the LIBOR market model with dynamics driven by a Lévy process instead of the classical Brownian motion setup is presented. When modelling LIBOR rates the use of a more realistic driving process is important since these rates are the most realistic interest rates used in the market of financial trading on a daily basis. Since the financial crisis there has been an increasing demand and need for efficient modelling and management of risk within the market. This has further led to the motivation of the use of Lévy based models for the modelling of credit risky financial instruments. The motivation stems from the basic properties of stationary and independent increments of Lévy processes. With these properties, the model is able to better account for any unexpected behaviour within the market, usually referred to as "jumps". Taking both of these factors into account, there is much motivation for the construction of a model driven by Lévy processes which is able to model credit risk and credit risky instruments. The model for LIBOR rates driven by these processes was first introduced by Eberlein and Özkan (2005) and is known as the Lévy-LIBOR model. In order to account for the credit risk in the market, the Lévy-LIBOR model with default risk was constructed. This was initially done by Kluge (2005) and then formally introduced in the paper by Eberlein et al. (2006). This thesis aims to present the theoretical construction of the model as done in the above mentioned references. The construction includes the consideration of recovery rates associated to the default event as well as a pricing formula for some popular credit derivatives.
APA, Harvard, Vancouver, ISO, and other styles
9

Froemel, Anneliese. "A semi-realistic model for Brownian motion in one dimension." Supervised by Detlef Dürr. München: Universitätsbibliothek der Ludwig-Maximilians-Universität, 2020. http://d-nb.info/1220631914/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Kelekele, Liloo Didier Joel. "Mathematical model of performance measurement of defined contribution pension funds." University of the Western Cape, 2015. http://hdl.handle.net/11394/4367.

Full text
Abstract:
Magister Scientiae - MSc
The industry of pension funds has become one of the drivers of today's economic activity through its substantial contribution to the financial market and its creation of wealth. The increasing importance that pension funds have acquired in today's economy and financial market attracts special attention from investors, financial actors and pundits in the sector. Given this economic weight of pension funds, a thorough analysis of the performance of different pension fund plans needs to be undertaken in order to optimise benefits. The research explores criteria and invariants that make it possible to compare the performance of different pension fund products. Pension fund companies currently compare their performance with that of others, and likewise an individual investing in a pension plan compares the different products available in the market. There exist different ways of measuring the performance of a pension fund according to its scheme. Generally, there are two main pension fund plans. The defined benefit (DB) plan is mostly preferred by pension members because it leaves the risk with the pension fund manager. The defined contribution (DC) plan, on the other hand, is preferred by pension fund managers because it transfers the risk to the pension fund members. One of the reasons that motivate pension fund members' choice of a particular programme is the expectation that the pension fund strategy will allow them to maintain their lifestyle after retirement. This dissertation investigates the various properties and characteristics of the defined contribution pension fund plan with a minimum guarantee and a benchmark, in order to mitigate the risk that pension fund members are subject to. For the pension fund manager the aim is to find the optimal asset allocation strategy which optimises its remuneration, which is in fact a part of the surplus (the difference between the pension fund value and the guarantee) [19], and to analyse the effect of sharing between the contributor and the pension fund. From the pension fund members' perspective it is to define an optimal guarantee as the solution to the contributor's optimisation programme. In particular, we consider the case of a pension fund company which invests in a bond, stocks and a money market account. The uncertainty in the financial market is driven by Brownian motions. Numerical simulations were performed to compare the different models.
APA, Harvard, Vancouver, ISO, and other styles
11

Londani, Mukhethwa. "Numerical Methods for Mathematical Models on Warrant Pricing." University of the Western Cape, 2010. http://hdl.handle.net/11394/8210.

Full text
Abstract:
Magister Scientiae - MSc
Warrant pricing has become very crucial in the present market scenario. See, for example, M. Hanke and K. Potzelberger, Consistent pricing of warrants and traded options, Review of Financial Economics 11(1) (2002) 63-77, where the authors indicate that warrant issuance affects the stock price process of the issuing company. This change in the stock price process leads to subsequent changes in the prices of options written on the issuing company's stocks. Another notable work is W.G. Zhang, W.L. Xiao and C.X. He, Equity warrants pricing model under Fractional Brownian motion and an empirical study, Expert Systems with Applications 36(2) (2009) 3056-3065, where the authors construct an equity warrant pricing model under fractional Brownian motion and deduce the European option pricing formula with a simple method. We study this paper in detail in this mini-thesis. We also study some of the mathematical models on warrant pricing using the Black-Scholes framework. The relationship between the price of the warrants and the price of the call, which accounts for the dilution effect, is also studied mathematically. Finally we carry out some numerical simulations to derive the value of warrants.
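The dilution relationship mentioned at the end of the abstract is often written, in a plain Black-Scholes setting, as the fixed-point equation W = N/(N+M) * C(S + (M/N)*W), with N outstanding shares and M warrants. The sketch below solves it by iteration with made-up inputs; it is a generic textbook illustration, not the mini-thesis' model.

    # Sketch: Black-Scholes call and a simple dilution-adjusted warrant price,
    # solved as a fixed point W = N/(N+M) * C(S + (M/N)*W).  All inputs are made up.
    import numpy as np
    from scipy.stats import norm

    def bs_call(S, K, r, sigma, T):
        d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
        d2 = d1 - sigma * np.sqrt(T)
        return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

    S, K, r, sigma, T = 100.0, 110.0, 0.05, 0.30, 2.0
    N_shares, M_warrants = 1_000_000, 200_000

    W = bs_call(S, K, r, sigma, T)           # start from the undiluted call value
    for _ in range(100):                     # fixed-point iteration
        W = N_shares / (N_shares + M_warrants) * bs_call(
            S + M_warrants / N_shares * W, K, r, sigma, T)
    print("plain call:", round(bs_call(S, K, r, sigma, T), 4), "warrant:", round(W, 4))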
APA, Harvard, Vancouver, ISO, and other styles
12

Jansson, Rådberg Weronica. "A model system for understanding the distribution of fines in a paper structure using fluorescence microscopy." Thesis, Karlstads universitet, Institutionen för ingenjörs- och kemivetenskaper, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-36263.

Full text
Abstract:
Fines have a very important role in paper chemistry and are a determinant of retention, drainage and the properties of paper. The purpose of this project was to label the fines with fluorophores and study their Brownian motion with fluorescence microscopy. Once successful, this could then be used to study fines, fibers and other additives in a suspension, thus giving fundamental knowledge of why fines have this important role. Due to aggregation of the fines, no Brownian motion could be detected. Instead the fines were handled as a network system, and small fluorescence-labeled latex particles were studied in this system. This approach yields information about the fines once the obstacle of sedimentation of the network is resolved.
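As a generic companion to the particle-tracking idea above (not the project's actual analysis), this sketch simulates two-dimensional Brownian tracks such as those recorded for the fluorescent latex probes and recovers the diffusion coefficient from the mean squared displacement, using MSD(t) = 4Dt in two dimensions; the diffusion coefficient, frame interval and track counts are invented.

    # Sketch: estimate a diffusion coefficient from simulated 2-D particle tracks
    # via the mean squared displacement, MSD(t) = 4*D*t (illustrative values).
    import numpy as np

    rng = np.random.default_rng(2)
    D_true, dt, n_steps, n_particles = 0.4, 0.05, 400, 50   # um^2/s, s, frames, tracks

    steps = rng.normal(0.0, np.sqrt(2 * D_true * dt), size=(n_particles, n_steps, 2))
    tracks = np.cumsum(steps, axis=1)                       # positions over time

    lags = np.arange(1, 50)
    msd = np.array([np.mean(np.sum((tracks[:, lag:, :] - tracks[:, :-lag, :]) ** 2,
                                   axis=2)) for lag in lags])
    D_hat = np.polyfit(lags * dt, msd, 1)[0] / 4.0          # slope of MSD vs time, over 4
    print("true D:", D_true, "estimated D:", round(D_hat, 3))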
APA, Harvard, Vancouver, ISO, and other styles
13

Carlsson, Gunilla. "Latex Colloid Dynamics in Complex Dispersions : Fluorescence Microscopy Applied to Coating Color Model Systems." Doctoral thesis, Karlstads universitet, Institutionen för kemi, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-2621.

Full text
Abstract:
Coating colors are applied to the base paper in order to maximize the performance of the end product. Coating colors are complex colloidal systems, mainly consisting of water, binders, and pigments. To understand the behavior of colloidal suspensions, an understanding of the interactions between its components is essential.
APA, Harvard, Vancouver, ISO, and other styles
14

Unver, Ibrahim Emre. "Pricing And Hedging A Participating Forward Contract." Master's thesis, METU, 2013. http://etd.lib.metu.edu.tr/upload/12615532/index.pdf.

Full text
Abstract:
We use the Garman-Kohlhagen model to compute the hedge and price of a participating forward contract on the US dollar written by a Turkish bank. The algorithm is run on actual market data and a weekly updated hedge is computed. We note that, despite the weekly update and the many assumptions made on the volatility and the interest rates, the model gives a very reasonable hedge.
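For reference, a small sketch of the Garman-Kohlhagen formula named in the abstract (the Black-Scholes variant with separate domestic and foreign interest rates); the spot, strike, rates and volatility below are placeholders, not the bank's data.

    # Sketch: Garman-Kohlhagen price and delta of a European FX call.
    # Spot, strike, rates and volatility are placeholder values.
    import numpy as np
    from scipy.stats import norm

    def gk_call(S, K, r_dom, r_for, sigma, T):
        d1 = (np.log(S / K) + (r_dom - r_for + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
        d2 = d1 - sigma * np.sqrt(T)
        price = S * np.exp(-r_for * T) * norm.cdf(d1) - K * np.exp(-r_dom * T) * norm.cdf(d2)
        delta = np.exp(-r_for * T) * norm.cdf(d1)   # hedge ratio in units of foreign currency
        return price, delta

    price, delta = gk_call(S=1.80, K=1.85, r_dom=0.09, r_for=0.02, sigma=0.12, T=0.25)
    print(f"call price: {price:.4f}, delta: {delta:.4f}")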
APA, Harvard, Vancouver, ISO, and other styles
15

Littin, Curinao Jorge Andrés. "Quasi stationary distributions when infinity is an entrance boundary : optimal conditions for phase transition in one dimensional Ising model by Peierls argument and its consequences." Thesis, Aix-Marseille, 2013. http://www.theses.fr/2013AIXM4789/document.

Full text
Abstract:
This thesis contains two main chapters, where we study two independent problems of mathematical modelling. In Chapter 1, we study the existence and uniqueness of quasi-stationary distributions (QSD) for a drifted Brownian motion killed at zero, when +∞ is an entrance boundary and zero is an exit boundary according to Feller's classification. The work is related to a previous paper published in 2009 by Cattiaux, Collet, Lambert, Martínez, Méléard and San Martín, where some sufficient conditions were provided to prove the existence and uniqueness of QSD in the context of a family of population dynamics models. This work generalizes the most important theorems of that paper, since no extra conditions are imposed to get the existence and uniqueness of the QSD and the existence of a Yaglom limit. The technical part is based on Sturm-Liouville theory on the half line. In Chapter 2, we study the problem of getting quasi-additive bounds on the Hamiltonian for the long-range Ising model when the interaction term decays according to d^(2-a), a ∈ [0,1). This work is based on a previous paper written by Cassandro, Ferrari, Merola and Presutti, where quasi-additive bounds for the Hamiltonian were obtained for a in [0,(log3/log2)-1) in terms of hierarchical structures called triangles and contours. The main theorems of this work can be summarized as follows: 1. There does not exist a quasi-additive bound for the Hamiltonian in terms of triangles when a ∈ [0,(log3/log2)-1); 2. There exists a quasi-additive bound for the Hamiltonian in terms of contours for a in [0,1).
APA, Harvard, Vancouver, ISO, and other styles
16

Menes, Matheus Dorival Leonardo Bombonato. "Versão discreta do modelo de elasticidade constante da variância." Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-16042013-151325/.

Full text
Abstract:
In this work we propose a market model using a random discretization scheme for Brownian motion proposed by Leão & Ohashi (2010). With this model, for any given payoff function, we develop a hedging strategy and a methodology for option pricing.
APA, Harvard, Vancouver, ISO, and other styles
17

Rafiou, AS. "Foreign Exchange Option Valuation under Stochastic Volatility." University of the Western Cape, 2009. http://hdl.handle.net/11394/7777.

Full text
Abstract:
Magister Scientiae - MSc
The case of pricing options under constant volatility has been common practice for decades. Yet market data prove that volatility is a stochastic phenomenon; this is evident in longer-duration instruments in which the volatility of the underlying asset is dynamic and unpredictable. The methods of valuing options under stochastic volatility that have been extensively published focus mainly on stock markets and on options written on a single reference asset. This work probes the effect of valuing a European call option written on a basket of currencies, under constant volatility and under stochastic volatility models. We apply a family of stochastic models to investigate the relative performance of option prices. For the valuation of the option under constant volatility, we derive a closed-form analytic solution which relaxes some of the assumptions in the Black-Scholes model. The problem of two-dimensional random diffusion of exchange rates and volatilities is treated with a present value scheme and with mean-reverting and non-mean-reverting stochastic volatility models. A multi-factor Gaussian distribution function is applied to lognormal asset dynamics sampled from normal draws, which we generate by the Box-Muller method and make interdependent by Cholesky factor matrix decomposition. Furthermore, a Monte Carlo simulation method is adopted to approximate a general form of numerical solution. The historic data considered date from 31 December 1997 to 30 June 2008. The basket contains ZAR as base currency; USD, GBP, EUR and JPY are the foreign currencies.
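A compressed sketch of the simulation machinery listed in the abstract, namely normal draws made interdependent through a Cholesky factor, lognormal dynamics, and Monte Carlo averaging, here for a toy two-currency basket call under constant volatility; all spots, weights, rates, volatilities and correlations are invented, and NumPy's normal generator stands in for an explicit Box-Muller step.

    # Sketch: Monte Carlo value of a European call on a 2-currency basket with
    # correlated lognormal dynamics (Cholesky-coupled normal draws).  Toy inputs.
    import numpy as np

    rng = np.random.default_rng(3)
    S0 = np.array([1.80, 0.95])          # spot exchange rates
    w = np.array([0.6, 0.4])             # basket weights
    sigma = np.array([0.12, 0.10])
    corr = np.array([[1.0, 0.5], [0.5, 1.0]])
    r, T, K, n_paths = 0.05, 1.0, 1.40, 200_000

    L = np.linalg.cholesky(corr)         # couples the independent normal draws
    Z = rng.standard_normal((n_paths, 2)) @ L.T
    ST = S0 * np.exp((r - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * Z)
    payoff = np.maximum(ST @ w - K, 0.0)
    price = np.exp(-r * T) * payoff.mean()
    print("basket call estimate:", round(price, 4))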
APA, Harvard, Vancouver, ISO, and other styles
18

Wesselhöfft, Niels. "Utilizing self-similar stochastic processes to model rare events in finance." Doctoral thesis, Humboldt-Universität zu Berlin, 2021. http://dx.doi.org/10.18452/22360.

Full text
Abstract:
Coming from a sphere in statistics and mathematics in which the Normal distribution is the dominating underlying stochastic term for the majority of the models, we indicate that the relevant diffusion, the Brownian Motion, is not accounting for three crucial empirical observations for financial data: Heavy tails, long memory and scaling laws. A self-similar process, which is able to account for long-memory behavior is the Fractional Brownian Motion, which has a possible non-Gaussian limit under convolution of the increments. The increments of the Fractional Brownian Motion can exhibit long memory through a parameter H, the Hurst exponent. For the Fractional Brownian Motion this scaling (Hurst) exponent would be constant over different orders of moments, being unifractal. But empirically, we observe varying Hölder exponents, the continuum of Hurst exponents, which implies multifractal behavior. We explain the multifractal behavior through the changing alpha-stable indices from the alpha-stable distributions over sampling frequencies by applying filters for seasonality and time dependence (long memory) over different sampling frequencies, starting at high-frequencies up to one minute. By utilizing a filter for long memory we show, that the low-sampling frequency process, not containing the time dependence component, can be governed by the alpha-stable motion. Under the alpha-stable motion we propose a semiparametric method coined Frequency Rescaling Methodology (FRM), which allows to rescale the filtered high-frequency data set to the lower sampling frequency. The data sets for e.g. weekly data which we obtain by rescaling high-frequency data with the Frequency Rescaling Method (FRM) are more heavy tailed than we observe empirically. We show that using a subset of the whole data set suffices for the FRM to obtain a better forecast in terms of risk for the whole data set. Specifically, the FRM would have been able to account for tail events of the financial crisis 2008.
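A small sketch of the moment-scaling diagnostic behind the Hurst and Hölder exponents discussed above: for a self-similar process the q-th absolute moment of increments scales like tau^(q*H(q)), so regressing log-moments on log-lags gives the exponent for each order q. It is applied here to an ordinary Brownian path, for which H(q) should stay near 0.5 for every q; this illustrates the diagnostic only, not the thesis' multifractal filtering or Frequency Rescaling Method.

    # Sketch: generalized Hurst exponents H(q) from the scaling of absolute
    # increment moments, E|X(t+tau) - X(t)|^q ~ tau^(q*H(q)).  Test on plain BM.
    import numpy as np

    rng = np.random.default_rng(4)
    x = np.cumsum(rng.standard_normal(100_000))      # Brownian path, H = 0.5

    lags = np.array([1, 2, 4, 8, 16, 32, 64, 128])
    for q in (1.0, 2.0, 3.0):
        moments = [np.mean(np.abs(x[lag:] - x[:-lag]) ** q) for lag in lags]
        slope = np.polyfit(np.log(lags), np.log(moments), 1)[0]
        print(f"q = {q}: H(q) ~ {slope / q:.3f}")    # close to 0.5 for all q (unifractal)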
APA, Harvard, Vancouver, ISO, and other styles
19

Triampo, Wannapong. "Non-Equilibrium Disordering Processes In binary Systems Due to an Active Agent." Diss., Virginia Tech, 2001. http://hdl.handle.net/10919/26738.

Full text
Abstract:
In this thesis, we study the kinetic disordering of systems interacting with an agent or a walker. Our studies divide naturally into two classes: for the first, the dynamics of the walker conserves the total magnetization of the system; for the second, it does not. These distinct dynamics are investigated in Parts I and II respectively. In Part I, we investigate the disordering of an initially phase-segregated binary alloy due to a highly mobile vacancy which exchanges with the alloy atoms. This dynamics clearly conserves the total magnetization. We distinguish three versions of dynamic rules for the vacancy motion, namely a pure random walk, an "active" walk and a biased walk. For the random walk case, we review and reproduce earlier work by Z. Toroczkai et al. [TKSZ], which will serve as our baseline. To test the robustness of these findings and to make our model more accessible to experimental studies, we investigated the effects of finite temperatures ("active" walks) as well as external fields (biased walks). To monitor the disordering process, we define a suitable disorder parameter, namely the number of broken bonds, which we study as a function of time, system size and vacancy number. Using Monte Carlo simulations and a coarse-grained field theory, we observe that the disordering process exhibits three well separated temporal regimes. We show that the later stages exhibit dynamic scaling, characterized by a set of exponents and scaling functions. For the random and the biased case, these exponents and scaling functions are computed analytically in excellent agreement with the simulation results. The exponents are remarkably universal. We conclude this part with some comments on the early stage, the interfacial roughness and other related features. In Part II, we introduce a model of binary data corruption induced by a Brownian agent or random walker. Here, the magnetization is not conserved, being related to the density of corrupted bits ρ. Using both continuum theory and computer simulations, we study the average density of corrupted bits and the associated density-density correlation function, as well as several other related quantities. In the second half, we extend our investigations in three main directions which allow us to make closer contact with real binary systems. These are (i) a detailed analysis of two dimensions, (ii) the case of competing agents, and (iii) the cases of asymmetric and quenched random couplings. Our analytic results are in good agreement with simulation results. The remarkable finding of this study is the robustness of the phenomenological model, which provides us with the tool, continuum theory, to understand the nature of such a simple model.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
20

Bauke, Francisco Conti [UNESP]. "Portadores quentes: modelo browniano." Universidade Estadual Paulista (UNESP), 2011. http://hdl.handle.net/11449/91881.

Full text
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
We present a Brownian model for a charged particle in a field of forces, in particular, electric and magnetic external homogeneous fields, within the Langevin formalism. We compute the average kinetic energy via the fluctuation dissipation and obtain an expression for the Brownian particle´s effective temperature. The latter is a function of the heat bath temperature and both external fields. This effective temperature is always greater than the heat bath temperature, therefore the expression “hot carriers”. This effective temperature, in the asymptotic regime, the stationary state at long times (greater than the collision time), is used to write down the transport equations for semiconductors, namely the generalized Shockley equations, now incorporating the magnetic field effect. A direct and relevant application follows: a model for the well known Gunn effect, assuming a Brownian scheme. In the transient regime the computed effective temperature also allow us to probe some features of the heat bath, as the effective temperature relaxes to its terminal stationary value. As for our results in the Gunn effect model, the simplest of all in a Brownian scheme, we obtain a surprisingly good agreement with experimental data, suggesting that more involved models should include our minimal assumptions
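A minimal Euler-Maruyama sketch of the Langevin picture summarised above: a charged Brownian particle in homogeneous electric and magnetic fields, with an effective temperature read off from the mean kinetic energy, which exceeds the bath temperature because of the field-driven drift. The charge, mass, friction and field values are arbitrary illustrative choices, and the exact definition of the effective temperature in the dissertation may differ.

    # Sketch: 2-D Langevin dynamics of a charged Brownian particle in homogeneous
    # E (in-plane) and B (out-of-plane) fields:
    #   m dv = (qE + q v x B - gamma v) dt + sqrt(2 gamma kB T) dW     (toy units, kB = 1)
    import numpy as np

    rng = np.random.default_rng(5)
    m, q, gamma, kT = 1.0, 1.0, 1.0, 1.0
    E, B = np.array([0.5, 0.0]), 0.8
    dt, n_steps = 1e-3, 200_000

    v = np.zeros(2)
    noise_amp = np.sqrt(2.0 * gamma * kT * dt) / m
    samples = []
    for k in range(n_steps):
        force = q * np.array([E[0] + v[1] * B, E[1] - v[0] * B])   # qE + q v x B
        v = v + (force - gamma * v) / m * dt + noise_amp * rng.standard_normal(2)
        if k > n_steps // 2:                 # keep the second half, after transients
            samples.append(v)
    samples = np.array(samples)
    kT_fluct = m * samples.var(axis=0).mean()        # fluctuations about the drift
    kT_eff = m * (samples ** 2).mean(axis=0).mean()  # includes the field-driven drift
    print("bath kT:", kT, "| kT from fluctuations:", round(kT_fluct, 3),
          "| effective kT incl. drift:", round(kT_eff, 3))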
APA, Harvard, Vancouver, ISO, and other styles
21

Casse, Jérôme. "Automates cellulaires probabilistes et processus itérés ad libitum." Thesis, Bordeaux, 2015. http://www.theses.fr/2015BORD0248/document.

Full text
Abstract:
The first part of this thesis is about probabilistic cellular automata (PCA) on the line and with two neighbors. For a given PCA, we look for the set of its invariant distributions. For reasons explained in detail in this thesis, it is nowadays unthinkable to obtain all of them, and we concentrate our reflections on the invariant Markovian distributions. We establish, first, an algebraic theorem that gives a necessary and sufficient condition for a PCA to have one or more invariant Markovian distributions when the alphabet E is finite. Then, we generalize this result to the case of a Polish alphabet E once we have clarified the encountered topological difficulties. Finally, we calculate the 8-vertex model's correlation function for some parameter values using previous results. The second part of this thesis is about infinite iterations of stochastic processes. We establish the convergence of the finite-dimensional distributions of the α-stable processes iterated n times, as n goes to infinity, according to the stability parameter and to the drift r. Then, we describe the limit distributions. In the iterated Brownian motion case, we show that the limit distributions are linked with iterated function systems.
APA, Harvard, Vancouver, ISO, and other styles
22

Ang, Eu-Jin. "Brownian motion queueing models of communications and manufacturing systems." Thesis, Imperial College London, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.298242.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Bauke, Francisco Conti. "Portadores quentes: modelo browniano." Rio Claro: [s.n.], 2011. http://hdl.handle.net/11449/91881.

Full text
Abstract:
Advisor: Roberto E. Lagos Monaco
Committee member: José Antonio Roversi
Committee member: Bernardo Laks
Abstract: We present a Brownian model for a charged particle in a field of forces, in particular, electric and magnetic external homogeneous fields, within the Langevin formalism. We compute the average kinetic energy via the fluctuation dissipation and obtain an expression for the Brownian particle's effective temperature. The latter is a function of the heat bath temperature and both external fields. This effective temperature is always greater than the heat bath temperature, therefore the expression "hot carriers". This effective temperature, in the asymptotic regime, the stationary state at long times (greater than the collision time), is used to write down the transport equations for semiconductors, namely the generalized Shockley equations, now incorporating the magnetic field effect. A direct and relevant application follows: a model for the well known Gunn effect, assuming a Brownian scheme. In the transient regime the computed effective temperature also allow us to probe some features of the heat bath, as the effective temperature relaxes to its terminal stationary value. As for our results in the Gunn effect model, the simplest of all in a Brownian scheme, we obtain a surprisingly good agreement with experimental data, suggesting that more involved models should include our minimal assumptions
Master's
APA, Harvard, Vancouver, ISO, and other styles
24

Ghorbanzadeh, Dariush. "Détection de rupture dans les modèles statistiques." Paris 7, 1992. http://www.theses.fr/1992PA077246.

Full text
Abstract:
This work concerns the study of a class of tests for detecting change-points in statistical models. The class of tests considered is based on the likelihood ratio test. Within the framework of contiguity in the sense of Le Cam, the asymptotic distributions of the test statistics are evaluated under the null hypothesis (no change-point) and under the alternative hypothesis (change-point), which makes it possible to determine the critical regions of the tests asymptotically. Analytic expressions for the asymptotic powers are proposed. Using discriminant analysis techniques, the problem of detecting progression to full-blown AIDS is studied by direct application to data on 450 HIV-positive patients.
APA, Harvard, Vancouver, ISO, and other styles
25

Pain, Michel. "Mouvement brownien branchant et autres modèles hiérarchiques en physique statistique." Thesis, Sorbonne université, 2019. http://www.theses.fr/2019SORUS305.

Full text
Abstract:
Branching Brownian motion (BBM) is a particle system where particles move and reproduce randomly. Firstly, we study precisely the phase transition occurring for this particle system close to its minimum, in the setting of the so-called near-critical case. Then, we describe the universal 1-stable fluctuations appearing in the front of BBM and identify the typical behavior of particles contributing to them. A version of BBM with selection, where particles are killed when going down to a distance larger than L from the highest particle, is also studied: we see how this selection rule affects the speed of the fastest individuals in the population when L is large. Thereafter, motivated by temperature chaos in spin glasses, we study the two-dimensional discrete Gaussian free field, which is a model with an approximate hierarchical structure and properties similar to BBM, and show that, from this perspective, it behaves differently from the Random Energy Model. Finally, the last part of this thesis is dedicated to the Derrida-Retaux model, which is also defined by a hierarchical structure. We introduce a continuous-time version of this model and exhibit a family of exactly solvable solutions, which allows us to answer several conjectures stated on the discrete-time model.
APA, Harvard, Vancouver, ISO, and other styles
26

Newbury, James. "Limit order books, diffusion approximations and reflected SPDEs : from microscopic to macroscopic models." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:825d9465-842b-424b-99d0-ff4dfa9ebfc5.

Full text
Abstract:
Motivated by a zero-intelligence approach, the aim of this thesis is to unify the microscopic (discrete price and volume), mesoscopic (discrete price and continuous volume) and macroscopic (continuous price and volume) frameworks of limit order books, with a view to providing a novel yet analytically tractable description of their behaviour in a high to ultra high-frequency setting. Starting with the canonical microscopic framework, the first part of the thesis examines the limiting behaviour of the order book process when order arrival and cancellation rates are sent to infinity and when volumes are considered to be of infinitesimal size. Mathematically speaking, this amounts to establishing the weak convergence of a discrete-space process to a mesoscopic diffusion limit. This step is initially carried out in a reduced-form context, in other words, by simply looking at the best bid and ask queues, before the procedure is extended to the whole book. This subsequently leads us to the second part of the thesis, which is devoted to the transition between mesoscopic and macroscopic models of limit order books, where the general idea is to send the tick size to zero, or equivalently, to consider infinitely many price levels. The macroscopic limit is then described in terms of reflected SPDEs which typically arise in stochastic interface models. Numerical applications are finally presented, notably via the simulation of the mesoscopic and macroscopic limits, which can be used as market simulators for short-term price prediction or optimal execution strategies.
APA, Harvard, Vancouver, ISO, and other styles
27

Zhang, Zhipeng. "MAGNETIC TWEEZERS: ACTUATION, MEASUREMENT, AND CONTROL AT NANOMETER SCALE." Columbus, Ohio : Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view.cgi?acc%5Fnum=osu1243885884.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Graf, Ferdinand. "Exotic Option Pricing in Stochastic Volatility Levy Models and with Fractional Brownian Motion." [S.l. : s.n.], 2007. http://nbn-resolving.de/urn:nbn:de:bsz:352-opus-35340.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Mohaupt, Mikaël. "Modélisation et simulation de l'agglomération des colloïdes dans un écoulement turbulent." Thesis, Vandoeuvre-les-Nancy, INPL, 2011. http://www.theses.fr/2011INPL068N/document.

Full text
Abstract:
This Ph.D. thesis focuses on the modeling and numerical simulation of the collision and agglomeration of colloidal particles in a turbulent flow using a new method. These particles are affected by both Brownian and turbulent effects. The first part of the work deals with current models of the physical phenomenon, from the transport of single particles to a model for physico-chemical adhesive forces, and points out the critical step, which is the detection of interactions between particles (collisions). This detection is initially studied by applying classical algorithms existing in the literature. Although they are very efficient in the context of particles subject to turbulent agitation, first conclusions show the limitations of these existing methods in terms of numerical cost when treating colloids subject to Brownian motion. The second part of this work proposes a new vision of the physical phenomenon focusing on the random diffusive behaviour. This issue is addressed from a stochastic point of view, as a process conditioned in space and time. Thus, a new method for the detection and treatment of collisions is presented and validated, which represents a considerable gain in terms of numerical cost. The potential of this new approach is validated and opens new opportunities for the use of stochastic methods applied to the representation of physics.
APA, Harvard, Vancouver, ISO, and other styles
30

Allez, Romain. "Chaos multiplicatif Gaussien, matrices aléatoires et applications." Phd thesis, Université Paris Dauphine - Paris IX, 2012. http://tel.archives-ouvertes.fr/tel-00780270.

Full text
Abstract:
In this work, we are interested on the one hand in the theory of Gaussian multiplicative chaos introduced by Kahane in 1985, and on the other hand in random matrix theory, whose pioneers are Wigner, Wishart and Dyson. The first part of this manuscript contains a brief introduction to these two theories as well as a short account of the personal contributions of this manuscript. The following parts contain the texts of the published articles [1], [2], [3], [4], [5] and preprints [6], [7], [8] on these results, in which the reader will find more detailed developments.
APA, Harvard, Vancouver, ISO, and other styles
31

Mvondo, Bernardin Gael. "Numerical techniques for optimal investment consumption models." University of the Western Cape, 2014. http://hdl.handle.net/11394/4352.

Full text
Abstract:
Magister Scientiae - MSc
The problem of optimal investment has been extensively studied by numerous researchers in order to generalize the original framework. Those generalizations have been made in different directions and using different techniques. For example, Perera [Optimal consumption, investment and insurance with insurable risk for an investor in a Levy market, Insurance: Mathematics and Economics, 46 (3) (2010) 479-484] applied the martingale approach to obtain a closed form solution for the optimal investment, consumption and insurance strategies of an individual in the presence of an insurable risk when the insurable risk and risky asset returns are described by Levy processes and the utility is a constant absolute risk aversion. In another work, Sattinger [The Markov consumption problem, Journal of Mathematical Economics, 47 (4-5) (2011) 409-416] gave a model of consumption behavior under uncertainty as the solution to a continuous-time dynamic control problem in which an individual moves between employment and unemployment according to a Markov process. In this thesis, we will review the consumption models in the above framework and will simulate some of them using an infinite series expansion method − a key focus of this research. Several numerical results obtained by using MATLAB are presented with detailed explanations.
APA, Harvard, Vancouver, ISO, and other styles
32

Chen, Yaming. "Dynamical properties of piecewise-smooth stochastic models." Thesis, Queen Mary, University of London, 2014. http://qmro.qmul.ac.uk/xmlui/handle/123456789/9129.

Full text
Abstract:
Piecewise-smooth stochastic systems are widely used in engineering science. However, the theory of these systems is only in its infancy. In this thesis, we take as an example the Brownian motion with dry friction to illustrate dynamical properties of these systems with respect to three interesting topics: (i) weak-noise approximations, (ii) first-passage time (FPT) problems and (iii) functionals of stochastic processes. Firstly, we investigate the validity and accuracy of weak-noise approximations for piecewise-smooth stochastic differential equations (SDEs), taking as an illustrative example the Brownian motion with pure dry friction. For this model, we show that the weak-noise approximation of the path integral correctly reproduces the known propagator of the SDE at lowest order in the noise power, as well as the main features of the exact propagator with higher-order corrections, provided that the singularity of the path integral is treated with some heuristics. We also consider a smooth regularisation of this piecewise-constant SDE and study to what extent this regularisation can rectify some of the problems encountered in the non-smooth case. Secondly, we provide analytic solutions to the FPT problem of the Brownian motion with dry friction. For the pure dry friction case, we find a phase transition phenomenon in the spectrum which relates to the position of the exit point and affects the tail of the FPT distribution. For the model with dry and viscous friction, we evaluate quantitatively the impact of the corresponding stick-slip transition and of the transition to ballistic exit. We also derive analytically the distributions of the maximum velocity till the FPT for the dry friction model. Thirdly, we generalise the so-called backward Fokker-Planck technique and obtain a recursive ordinary differential equation for the moments of functionals in the Laplace space. We then apply the developed results to analyse the local time, the occupation time and the displacement of the dry friction model. Finally, we conclude this thesis and state some related unsolved problems.
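A small Euler-Maruyama sketch of the dry-friction Langevin equation studied in the thesis, dv = -mu*sign(v) dt + sigma dW, together with a crude Monte Carlo estimate of the mean first-passage time of the velocity to a symmetric boundary; it illustrates the model itself, not the analytic spectral results derived in the thesis, and the friction, noise strength and exit level are invented.

    # Sketch: Brownian motion with pure dry friction, dv = -mu*sign(v) dt + sigma dW,
    # and a Monte Carlo estimate of the mean first-passage time to |v| >= v_exit.
    import numpy as np

    rng = np.random.default_rng(6)
    mu, sigma, v_exit = 1.0, 1.0, 1.0
    dt, n_steps, n_paths = 1e-3, 20_000, 4000

    v = np.zeros(n_paths)
    fpt = np.full(n_paths, np.nan)           # NaN marks paths that have not exited yet
    for k in range(1, n_steps + 1):
        active = np.isnan(fpt)
        if not active.any():
            break
        v[active] += (-mu * np.sign(v[active]) * dt
                      + sigma * np.sqrt(dt) * rng.standard_normal(active.sum()))
        hit = active & (np.abs(v) >= v_exit)
        fpt[hit] = k * dt
    print("fraction escaped:", round(np.isfinite(fpt).mean(), 3),
          "mean FPT:", round(np.nanmean(fpt), 3))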
APA, Harvard, Vancouver, ISO, and other styles
33

Teichmann, Jakob. "Stochastic modeling of Brownian and turbulent coagulation." Doctoral thesis, Technische Universitaet Bergakademie Freiberg Universitaetsbibliothek "Georgius Agricola", 2017. http://nbn-resolving.de/urn:nbn:de:bsz:105-qucosa-220625.

Full text
Abstract:
As a contribution to improved filtration of metal melts, stochastic models for the essential mechanism of coagulation of Brownian particles and of particles in turbulent flows are developed and investigated. Formulas for the time evolution of the particle concentration in these systems allow physical parameters that favour coagulation, and thus filtration, to be identified. To correct and extend important results connected with the traditional approach for Brownian particles, a new approach is developed in the form of two models. For both, formulas for the particle concentration are derived, based on a novel generalisation of Matérn hard-core point processes. To better account for the fractal-like shape of the agglomerates in the coagulation equation, their morphology is quantified by means of two new models. The work is rounded off by applications of the models and by numerical simulations of coagulation and deposition in turbulent flows.
APA, Harvard, Vancouver, ISO, and other styles
34

Pesee, Chatchai. "Stochastic Modelling of Financial Processes with Memory and Semi-Heavy Tails." Queensland University of Technology, 2005. http://eprints.qut.edu.au/16057/.

Full text
Abstract:
This PhD thesis aims to study financial processes which have semi-heavy-tailed marginal distributions and may exhibit memory. The traditional Black-Scholes model is expanded to incorporate memory via an integral operator, resulting in a class of market models which still preserve the completeness and arbitrage-free conditions needed for replication of contingent claims. This approach is used to estimate the implied volatility of the resulting model. The first part of the thesis investigates the semi-heavy-tailed behaviour of financial processes. We treat these processes as continuous-time random walks characterised by a transition probability density governed by a fractional Riesz-Bessel equation. This equation extends the Feller fractional heat equation which generates α-stable processes. These latter processes have heavy tails, while those processes generated by the fractional Riesz-Bessel equation have semi-heavy tails, which are more suitable to model financial data. We propose a quasi-likelihood method to estimate the parameters of the fractional Riesz-Bessel equation based on the empirical characteristic function. The second part considers a dynamic model of complete financial markets in which the prices of European calls and puts are given by the Black-Scholes formula. The model has memory and can distinguish between historical volatility and implied volatility. A new method is then provided to estimate the implied volatility from the model. The third part of the thesis considers the problem of classification of financial markets using high-frequency data. The classification is based on the measure representation of high-frequency data, which is then modelled as a recurrent iterated function system. The new methodology developed is applied to some stock prices, stock indices, foreign exchange rates and other financial time series of some major markets. In particular, the models and techniques are used to analyse the SET index, the SET50 index and the MAI index of the Stock Exchange of Thailand.
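The quasi-likelihood estimation mentioned above matches the empirical characteristic function of the data against a model characteristic function. The sketch below illustrates the generic idea on a simple stand-in model (a symmetric alpha-stable characteristic function fitted to Gaussian increments); it is not the thesis's Riesz-Bessel specification, whose characteristic function is not reproduced here.

```python
# Generic illustration of estimation via the empirical characteristic function.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
x = rng.normal(scale=1.0, size=5000)      # stand-in for observed increments
u = np.linspace(0.1, 3.0, 30)             # grid of characteristic-function arguments

ecf = np.array([np.mean(np.exp(1j * ui * x)) for ui in u])   # empirical CF

def objective(theta):
    alpha, c = theta
    model = np.exp(-c * np.abs(u) ** alpha)   # symmetric alpha-stable CF
    return np.sum(np.abs(ecf - model) ** 2)

res = minimize(objective, x0=[1.5, 0.3], bounds=[(0.1, 2.0), (1e-3, 5.0)])
print("estimated (alpha, c):", res.x)         # roughly (2, 0.5) for Gaussian data
```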
APA, Harvard, Vancouver, ISO, and other styles
35

Nouri, Suhila Lynn. "Expected maximum drawdowns under constant and stochastic volatility." Link to electronic thesis, 2006. http://www.wpi.edu/Pubs/ETD/Available/etd-050406-151319/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Doshi, Ankit. "Seasonal volatility models with applications in option pricing." Gowas Publishing House, 2011. http://hdl.handle.net/1993/8889.

Full text
Abstract:
GARCH models have been widely used in finance to model volatility ever since the introduction of the ARCH model and its extension to the generalized ARCH (GARCH) model. Lately, there has been growing interest in modelling seasonal volatility, most recently with the introduction of the multiplicative seasonal GARCH models. As an application of the multiplicative seasonal GARCH model with real data, call prices from the major stock market index of India are calculated using estimated parameter values. It is shown that a multiplicative seasonal GARCH option pricing model outperforms the Black-Scholes formula and a GARCH(1,1) option pricing formula. A parametric bootstrap procedure is also employed to obtain an interval approximation of the call price. Narrower confidence intervals are obtained using the multiplicative seasonal GARCH model than the intervals provided by the GARCH(1,1) model for data that exhibits multiplicative seasonal GARCH volatility.
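For orientation, the following sketch simulates a plain GARCH(1,1) return series, the building block that the multiplicative seasonal GARCH extends; the parameter values are assumptions, and simulated paths of this kind are also what a parametric bootstrap of the call price would resample.

```python
# Minimal GARCH(1,1) simulation with illustrative parameters.
import numpy as np

rng = np.random.default_rng(2)
omega, alpha, beta = 1e-6, 0.08, 0.90      # assumed GARCH(1,1) parameters
n = 1000

h = np.empty(n)                            # conditional variances
r = np.empty(n)                            # returns
h[0] = omega / (1.0 - alpha - beta)        # start at the unconditional variance
r[0] = np.sqrt(h[0]) * rng.standard_normal()
for t in range(1, n):
    h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
    r[t] = np.sqrt(h[t]) * rng.standard_normal()

print("annualised volatility of the simulated returns:", r.std() * np.sqrt(252))
```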
APA, Harvard, Vancouver, ISO, and other styles
37

Wu, Ching-Tang. "Construction of Brownian Motions in Enlarged Filtrations and Their Role in Mathematical Models of Insider Trading." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 1999. http://dx.doi.org/10.18452/14364.

Full text
Abstract:
In dieser Arbeit untersuchen wir die Struktur von Gausschen Prozessen, die durch gewisse lineare Transformationen von zwei Gausschen Martingalen erzeugt werden. Die Klasse dieser Transformationen ist durch finanzmathematische Gleichgewichtsmodelle mit heterogener Information motiviert. In Kapitel 2 bestimmen wir für solche Prozesse, die zunächst in einer erweiterten Filtrierung konstruiert werden, die kanonische Zerlegung als Semimartingale in ihrer eigenen Filtrierung. Die resultierende Drift wird durch Volterra-Kerne beschrieben. Insbesondere charakterisieren wir diejenigen Prozesse, die in ihrer eigenen Filtrierung eine Brownsche Bewegung bilden. In Kapitel 3 konstruieren wir neue orthogonale Zerlegungen der Brownschen Filtrierungen. In den Kapiteln 4 bis 6 wenden wir unsere Resultate zur Charakterisierung Brownscher Bewegungen im Kontext finanzmathematischer Modelle an, in denen es Marktteilnehmer mit zusätzlicher Insider-Information gibt. Wir untersuchen Erweiterungen eines Gleichgewichtsmodells von Kyle [42] und Back [7], in denen die Insider-Information in verschiedener Weise durch Gaussche Martingale spezifiziert wird. Insbesondere klären wir die Struktur von Insider-Strategien, die insofern unauffällig bleiben, als sich die resultierende Gesamtnachfrage wie eine Brownsche Bewegung verhält.
In this thesis, we study Gaussian processes generated by certain linear transformations of two Gaussian martingales. This class of transformations is motivated by financial equilibrium models with heterogeneous information. In Chapter 2 we derive the canonical decomposition of such processes, which are constructed in an enlarged filtration, as semimartingales in their own filtration. The resulting drift is described in terms of Volterra kernels. In particular we characterize those processes which are Brownian motions in their own filtration. In Chapter 3 we construct new orthogonal decompositions of Brownian filtrations. In Chapters 4 to 6 we are concerned with applications of our characterization results in the context of mathematical models of insider trading. We analyze extensions of the financial equilibrium model of Kyle [42] and Back [7] where the Gaussian martingale describing the insider information is specified in various ways. In particular we discuss the structure of insider strategies which remain inconspicuous in the sense that the resulting cumulative demand is again a Brownian motion.
APA, Harvard, Vancouver, ISO, and other styles
38

Arikan, Ali Ferda. "Structural models for the pricing of corporate securities and financial synergies : applications with stochastic processes including arithmetic Brownian motion." Thesis, University of Bradford, 2010. http://hdl.handle.net/10454/5416.

Full text
Abstract:
Mergers are the combining of two or more firms to create synergies. These synergies may come from various sources, such as operational synergies arising from economies of scale or financial synergies arising from the increased value of the firm's securities. There is a vast body of studies analysing the operational synergies of mergers; this study analyses the financial ones. In this way the dynamics of purely financial synergies can be revealed. Purely financial synergies can be transformed into financial instruments such as securitization. While analysing financial synergies, the puzzle of how financial synergies are distributed between claimholders is investigated. Previous literature on mergers showed that bondholders may gain more than the existing shareholders of the merging firms, which can be rather controversial: a merger may be synergistic, but that does not necessarily mean that shareholders' wealth will increase. Managers and/or shareholders are the parties making the merger decision, and if managers act in the best interest of shareholders they will try to increase shareholders' wealth. To solve this problem, the dynamics of mergers are first analysed, and then new strategies are developed and demonstrated to transfer the financial synergies to the shareholders.
APA, Harvard, Vancouver, ISO, and other styles
39

Arikan, Ali F. "Structural models for the pricing of corporate securities and financial synergies. Applications with stochastic processes including arithmetic Brownian motion." Thesis, University of Bradford, 2010. http://hdl.handle.net/10454/5416.

Full text
Abstract:
Mergers are the combining of two or more firms to create synergies. These synergies may come from various sources, such as operational synergies arising from economies of scale or financial synergies arising from the increased value of the firm's securities. There is a vast body of studies analysing the operational synergies of mergers; this study analyses the financial ones. In this way the dynamics of purely financial synergies can be revealed. Purely financial synergies can be transformed into financial instruments such as securitization. While analysing financial synergies, the puzzle of how financial synergies are distributed between claimholders is investigated. Previous literature on mergers showed that bondholders may gain more than the existing shareholders of the merging firms, which can be rather controversial: a merger may be synergistic, but that does not necessarily mean that shareholders' wealth will increase. Managers and/or shareholders are the parties making the merger decision, and if managers act in the best interest of shareholders they will try to increase shareholders' wealth. To solve this problem, the dynamics of mergers are first analysed, and then new strategies are developed and demonstrated to transfer the financial synergies to the shareholders.
APA, Harvard, Vancouver, ISO, and other styles
40

Antczak, Magdalena, and Marta Leniec. "Pricing and Hedging of Defaultable Models." Thesis, Högskolan i Halmstad, Tillämpad matematik och fysik (MPE-lab), 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-16052.

Full text
Abstract:
Modelling defaultable contingent claims has attracted a lot of interest in recent years, motivated in particular by the Late-2000s Financial Crisis. Various approaches to the subject have been proposed in several papers. This thesis tries to summarize these results and derive explicit formulas for the prices of financial derivatives with credit risk. It is divided into two main parts. The first one is devoted to the well-known theory of modelling the default risk, while the second one presents the results concerning the pricing of defaultable models that we obtained ourselves.
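As a minimal worked example of reduced-form credit-risk pricing (a standard textbook case, not a formula derived in the thesis), the snippet below prices a zero-recovery defaultable zero-coupon bond with constant short rate and constant default intensity and checks the closed form by Monte Carlo.

```python
# Constant-intensity reduced-form example: price = exp(-(r + lam) * T).
import numpy as np

rng = np.random.default_rng(3)
r, lam, T = 0.03, 0.02, 5.0                 # short rate, default intensity, maturity

closed_form = np.exp(-(r + lam) * T)        # zero-recovery defaultable bond price
tau = rng.exponential(1.0 / lam, size=200_000)      # simulated default times
monte_carlo = np.exp(-r * T) * np.mean(tau > T)     # discount factor * survival prob
print(closed_form, monte_carlo)             # the two numbers should agree closely
```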
APA, Harvard, Vancouver, ISO, and other styles
41

Manzini, Muzi Charles. "Stochastic Volatility Models for Contingent Claim Pricing and Hedging." Thesis, University of the Western Cape, 2008. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=gen8Srv25Nme4_8197_1270517076.

Full text
Abstract:

The present mini-thesis seeks to explore and investigate the mathematical theory and concepts that underpin the valuation of derivative securities, particularly European plain-vanilla options. The main argument that we emphasise is that novel models of option pricing, as suggested by Hull and White (1987) [1] and others, must account for the discrepancy observed on the implied volatility “smile” curve. To achieve this we also propose that market volatility be modeled as random or stochastic, as opposed to certain standard option pricing models such as Black-Scholes, in which volatility is assumed to be constant.
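A small sketch of the Hull and White (1987) idea referred to above: when the volatility path is independent of the Brownian motion driving the asset, the call price is the average of Black-Scholes prices evaluated at the root-mean-square volatility of each simulated variance path. The variance dynamics and parameter values below are assumptions for illustration only.

```python
# Hull-White mixing: average BS prices over simulated average variance.
import numpy as np
from scipy.stats import norm

def bs_call(S0, K, r, T, sigma):
    """Black-Scholes call price (sigma may be an array)."""
    d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

rng = np.random.default_rng(4)
S0, K, r, T = 100.0, 100.0, 0.02, 1.0
v0, xi = 0.04, 0.5                         # initial variance and vol-of-vol (assumed)
n_steps, n_paths = 250, 20_000
dt = T / n_steps

# Variance follows dV = xi * V dW (zero drift, independent of the asset's driver).
V = np.full(n_paths, v0)
V_integral = np.zeros(n_paths)
for _ in range(n_steps):
    V_integral += V * dt
    V *= np.exp(-0.5 * xi**2 * dt + xi * np.sqrt(dt) * rng.standard_normal(n_paths))

rms_vol = np.sqrt(V_integral / T)          # per-path root-mean-square volatility
price = np.mean(bs_call(S0, K, r, T, rms_vol))
print("stochastic-volatility price:", price,
      " constant-volatility price:", bs_call(S0, K, r, T, np.sqrt(v0)))
```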

APA, Harvard, Vancouver, ISO, and other styles
42

Serrano, Francisco de Castilho Monteiro Gil. "Fractional processes: an application to finance." Master's thesis, Instituto Superior de Economia e Gestão, 2016. http://hdl.handle.net/10400.5/13002.

Full text
Abstract:
Master's degree in Financial Mathematics
Neste trabalho é apresentada uma extensa descrição matemática, orientada para a modelação financeira, de três principais processos fracionários: o processo Browniano fracionário e os dois processos de Lévy fracionários. Mostram-se como estes processos podem ser originados. É explorado o conceito de auto-semelhança e apresentamos algumas noções de cálculo fracionário. Também é discutido o lugar destes processos no problema de encontrar o preço de derivados financeiros e apresentamos uma nova abordagem para a simulação do processo de Lévy fracionário que permite um método Monte Carlo para encontrar o preço de derivados financeiros.
In this work an extensive mathematical description, oriented to financial modelling, is presented of three main fractional processes: the fractional Brownian motion and the two fractional Lévy processes. It is shown how these processes can be originated. The concept of self-similarity is explored and we present some notions of fractional calculus. The role of these processes in the pricing of financial derivatives is also discussed, and we present a new approach for the simulation of the fractional Lévy process, which allows a Monte Carlo method for pricing financial derivatives.
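The fractional Brownian motion mentioned above can be simulated exactly on a grid from its covariance function Cov(B_H(s), B_H(t)) = 0.5 (s^{2H} + t^{2H} - |t - s|^{2H}). The sketch below uses a Cholesky factorisation (a standard method, not the new simulation approach proposed in the thesis) and could serve as the driver of a Monte Carlo pricer.

```python
# Exact fBm simulation on a grid via Cholesky of the covariance matrix.
import numpy as np

def fbm_paths(H=0.7, n=250, T=1.0, n_paths=1000, seed=5):
    """Exact fBm sample paths on a regular grid via Cholesky of the covariance."""
    rng = np.random.default_rng(seed)
    t = np.linspace(T / n, T, n)                        # grid excluding t = 0
    s, u = np.meshgrid(t, t, indexing="ij")
    cov = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))     # small jitter for stability
    return t, L @ rng.standard_normal((n, n_paths))     # shape (n, n_paths)

t, paths = fbm_paths()
print("empirical Var(B_H(T)):", paths[-1].var(), " (theory: T^{2H} = 1)")
```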
APA, Harvard, Vancouver, ISO, and other styles
43

Albers, Tony. "Weak nonergodicity in anomalous diffusion processes." Doctoral thesis, Universitätsbibliothek Chemnitz, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-214327.

Full text
Abstract:
Anomale Diffusion ist ein weitverbreiteter Transportmechanismus, welcher für gewöhnlich mit ensemble-basierten Methoden experimentell untersucht wird. Motiviert durch den Fortschritt in der Einzelteilchenverfolgung, wo typischerweise Zeitmittelwerte bestimmt werden, entsteht die Frage nach der Ergodizität. Stimmen ensemble-gemittelte Größen und zeitgemittelte Größen überein, und wenn nicht, wie unterscheiden sie sich? In dieser Arbeit studieren wir verschiedene stochastische Modelle für anomale Diffusion bezüglich ihres ergodischen oder nicht-ergodischen Verhaltens hinsichtlich der mittleren quadratischen Verschiebung. Wir beginnen unsere Untersuchung mit integrierter Brownscher Bewegung, welche von großer Bedeutung für alle Systeme mit Impulsdiffusion ist. Für diesen Prozess stellen wir die ensemble-gemittelte quadratische Verschiebung und die zeitgemittelte quadratische Verschiebung gegenüber und charakterisieren insbesondere die Zufälligkeit letzterer. Im zweiten Teil bilden wir integrierte Brownsche Bewegung auf andere Modelle ab, um einen tieferen Einblick in den Ursprung des nicht-ergodischen Verhaltens zu bekommen. Dabei werden wir auf einen verallgemeinerten Lévy-Lauf geführt. Dieser offenbart interessante Phänomene, welche in der Literatur noch nicht beobachtet worden sind. Schließlich führen wir eine neue Größe für die Analyse anomaler Diffusionsprozesse ein, die Verteilung der verallgemeinerten Diffusivitäten, welche über die mittlere quadratische Verschiebung hinausgeht, und analysieren mit dieser ein oft verwendetes Modell der anomalen Diffusion, den subdiffusiven zeitkontinuierlichen Zufallslauf
Anomalous diffusion is a widespread transport mechanism, which is usually experimentally investigated by ensemble-based methods. Motivated by the progress in single-particle tracking, where time averages are typically determined, the question of ergodicity arises. Do ensemble-averaged quantities and time-averaged quantities coincide, and if not, in what way do they differ? In this thesis, we study different stochastic models for anomalous diffusion with respect to their ergodic or nonergodic behavior concerning the mean-squared displacement. We start our study with integrated Brownian motion, which is of high importance for all systems showing momentum diffusion. For this process, we contrast the ensemble-averaged squared displacement with the time-averaged squared displacement and, in particular, characterize the randomness of the latter. In the second part, we map integrated Brownian motion to other models in order to get a deeper insight into the origin of the nonergodic behavior. In doing so, we are led to a generalized Lévy walk. The latter reveals interesting phenomena, which have never been observed in the literature before. Finally, we introduce a new tool for analyzing anomalous diffusion processes, the distribution of generalized diffusivities, which goes beyond the mean-squared displacement, and we analyze with this tool an often used model of anomalous diffusion, the subdiffusive continuous time random walk
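To make the ensemble-averaged versus time-averaged comparison concrete, the sketch below computes both mean-squared displacements for ordinary Brownian motion, where the two agree; for the nonergodic processes studied in the thesis (integrated Brownian motion, the subdiffusive CTRW) the time average would remain random and deviate from the ensemble average.

```python
# Ensemble-averaged MSD <x^2(t)> versus the time-averaged MSD
# delta^2(Delta) = mean over t of [x(t+Delta) - x(t)]^2, per trajectory.
import numpy as np

rng = np.random.default_rng(6)
n_paths, n_steps, dt = 200, 10_000, 0.01
x = np.cumsum(np.sqrt(dt) * rng.standard_normal((n_paths, n_steps)), axis=1)

lag = 100                                   # lag Delta = lag * dt
ea_msd = np.mean(x[:, lag] ** 2)            # ensemble average at t = Delta
ta_msd = np.mean((x[:, lag:] - x[:, :-lag]) ** 2, axis=1)   # one value per trajectory
print("EA-MSD:", ea_msd, " mean TA-MSD:", ta_msd.mean(), " scatter:", ta_msd.std())
```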
APA, Harvard, Vancouver, ISO, and other styles
44

Dépée, Alexis. "Etude expérimentale et théorique des mécanismes microphysiques mis en jeu dans la capture des aérosols radioactifs par les nuages." Thesis, Université Clermont Auvergne‎ (2017-2020), 2019. http://www.theses.fr/2019CLFAC057.

Full text
Abstract:
Les particules atmosphériques sont un sujet d’importance dans plusieurs couches de la société. Leur présence dans l’atmosphère est aussi bien une problématique météorologique et climatique qu’un enjeu de santé publique, notamment de par l’accroissement des maladies cardiovasculaires. En particulier, les particules radioactives émises dans l’atmosphère à la suite d’un accident nucléaire peuvent polluer les écosystèmes durant plusieurs années. Le récent accident du Centre Nucléaire de Production d’Électricité de Fukushima Daiichi en 2011 nous rappelle que, même aujourd’hui, le risque zéro n’existe pas. À la suite d’une émission dans l’atmosphère, les particules nanométriques diffusent et s’agglomèrent alors que les particules de plusieurs micromètres sédimentent. Les tailles intermédiaires vont, quant à elles, pouvoir être transportées à l’échelle globale dont le mécanisme principal de rabattement au sol provient des interactions avec les nuages et les précipitations. Afin d’améliorer la connaissance de la contamination des sols consécutive à de tels accidents, la compréhension de la capture des aérosols par les nuages est alors essentielle. Dans ce but, un modèle microphysique est implémenté dans ce travail, considérant les mécanismes microphysiques qui interviennent dans la capture des aérosols par des gouttes de nuage, notamment les forces électrostatiques dès lors que les radionucléides ont pour propriété de fortement se charger. Des mesures en laboratoire sont alors réalisées à l’aide de In-CASE (In-Cloud Aerosols Scavenging Experiment), expérience conçue dans ce travail, afin de comparer le modèle développé aux observations, et ce, toujours à une échelle microphysique où les paramètres d’influence régissant la capture au sein du nuage sont contrôlés. Par ailleurs, des systèmes de charge des particules et des gouttes sont conçus pour soigneusement maîtriser les charges électriques, tandis que l’humidité relative est précisément pilotée. Les nouvelles connaissances de la capture des particules par des gouttes de nuage qui en découlent, concernant entre autres les effets électrostatiques, sont ensuite incorporées au modèle de nuage convectif DESCAM (Detailed SCAvenging Model). Ce modèle à microphysique détaillée décrit un nuage de sa formation jusqu’aux précipitations, permettant d’étudier l’impact des nouvelles données sur le rabattement des particules à méso-échelle. De plus, des modifications sont apportées à DESCAM pour élargir l’étude aux nuages stratiformes qui constituent en France, la majorité des précipitations. À terme, cette étude ouvre la voie à l’amélioration de la modélisation du rabattement atmosphérique des particules, et notamment à la contamination des sols dans les modèles de crise de l’Institut de Radioprotection et de Sûreté Nucléaire
Atmospheric particles are a key topic in many social issues. Their presence in the atmosphere is a meteorological and climatic subject, as well as a public health concern, since these particles are correlated with the increase of cardiovascular diseases. In particular, radioactive particles emitted as a result of a nuclear accident can jeopardise ecosystems for decades. The recent accident at the Fukushima Daiichi nuclear power plant in 2011 reminds us that the risk, even extremely unlikely, exists. After a release of nuclear material in the atmosphere, nanometric particles diffuse and coagulate, while micrometric particles settle due to gravity. Nevertheless, intermediate-size particles can be transported on a global scale, and the main mechanism involved in their scavenging comes from the interaction with clouds and their precipitation. To enhance the knowledge of ground contamination after such accidental releases, understanding the particle in-cloud collection is thus essential. For this purpose, a microphysical model is implemented in this work, including the whole set of microphysical mechanisms acting on the particle collection by cloud droplets, such as the electrostatic forces, since radionuclides are well known to become significantly charged. Laboratory measurements are then conducted with In-CASE (In-Cloud Aerosols Scavenging Experiment), a novel experiment built in this work, to compare modelling with observations, once again at a microphysical scale where every parameter influencing the particle in-cloud collection is controlled. Furthermore, two systems to electrically charge particles and droplets are constructed to set the electric charges carefully, while the relative humidity level is also regulated. These new research results on the particle collection by cloud droplets under electrostatic forces, among other effects, are then incorporated into the convective cloud model DESCAM (Detailed SCAvenging Model). This detailed microphysical model describes a cloud from its formation to precipitation, allowing a meso-scale study of the impact of the new data on particle scavenging. Moreover, some changes are made in DESCAM to extend the study to stratiform clouds, since most of the French precipitation comes from stratiform systems. Finally, this work paves the way for improving the modelling of atmospheric particle scavenging, including the ground contamination in the crisis model used by the French Institute for Radiation Protection and Nuclear Safety (IRSN).
APA, Harvard, Vancouver, ISO, and other styles
45

Bruna, Maria. "Excluded-volume effects in stochastic models of diffusion." Thesis, University of Oxford, 2012. http://ora.ox.ac.uk/objects/uuid:020c2d3e-5fef-478c-9861-553cd310daf5.

Full text
Abstract:
Stochastic models describing how interacting individuals give rise to collective behaviour have become a widely used tool across disciplines—ranging from biology to physics to social sciences. Continuum population-level models based on partial differential equations for the population density can be a very useful tool (when, for large systems, particle-based models become computationally intractable), but the challenge is to predict the correct macroscopic description of the key attributes at the particle level (such as interactions between individuals and evolution rules). In this thesis we consider the simple class of models consisting of diffusive particles with short-range interactions. It is relevant to many applications, such as colloidal systems and granular gases, and also for more complex systems such as diffusion through ion channels, biological cell populations and animal swarms. To derive the macroscopic model of such systems, previous studies have used ad hoc closure approximations, often generating errors. Instead, we provide a new systematic method based on matched asymptotic expansions to establish the link between the individual- and the population-level models. We begin by deriving the population-level model of a system of identical Brownian hard spheres. The result is a nonlinear diffusion equation for the one-particle density function with excluded-volume effects enhancing the overall collective diffusion rate. We then expand this core problem in several directions. First, for a system with two types of particles (two species) we obtain a nonlinear cross-diffusion model. This model captures both alternative notions of diffusion, the collective diffusion and the self-diffusion, and can be used to study diffusion through obstacles. Second, we study the diffusion of finite-size particles through confined domains such as a narrow channel or a Hele–Shaw cell. In this case the macroscopic model depends on a confinement parameter and interpolates between severe confinement (e.g., a single- file diffusion in the narrow channel case) and an unconfined situation. Finally, the analysis for diffusive soft spheres, particles with soft-core repulsive potentials, yields an interaction-dependent non-linear term in the diffusion equation.
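The population-level equations obtained in the thesis are nonlinear diffusion equations. The schematic solver below integrates a one-dimensional equation of that general type, drho/dt = d/dx [ D (1 + a*rho) drho/dx ], where the enhancement coefficient a is a placeholder and not the specific excluded-volume coefficient derived in the thesis.

```python
# Explicit conservative finite-difference solver for a nonlinear diffusion equation.
import numpy as np

D, a = 1.0, 0.5                 # bare diffusivity and placeholder enhancement
nx, L = 200, 10.0
dx = L / nx
dt = 0.2 * dx**2 / (D * (1 + a))            # conservative explicit time step
x = np.linspace(-L / 2, L / 2, nx)
rho = np.exp(-x**2)                          # initial density bump

for _ in range(2000):
    # Flux on cell interfaces: -D * (1 + a * rho) * d(rho)/dx
    rho_mid = 0.5 * (rho[1:] + rho[:-1])
    flux = -D * (1 + a * rho_mid) * np.diff(rho) / dx
    rho[1:-1] -= dt * np.diff(flux) / dx     # conservative update of interior cells
print("total mass (should stay ~constant):", rho.sum() * dx)
```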
APA, Harvard, Vancouver, ISO, and other styles
46

Rahouli, Sami El. "Modélisation financière avec des processus de Volterra et applications aux options, aux taux d'intérêt et aux risques de crédit." Thesis, Université de Lorraine, 2014. http://www.theses.fr/2014LORR0042/document.

Full text
Abstract:
Ce travail étudie des modèles financiers pour les prix d'options, les taux d'intérêts et le risque de crédit, avec des processus stochastiques à mémoire et comportant des discontinuités. Ces modèles sont formulés en termes du mouvement Brownien fractionnaire, du processus de Lévy fractionnaire ou filtré (et doublement stochastique) et de leurs approximations par des semimartingales. Leur calcul stochastique est traité au sens de Malliavin, et des formules d'Itô sont déduites. Nous caractérisons les probabilités risque neutre en termes de ces processus pour des modèles d'évaluation d'options de type de Black-Scholes avec sauts. Nous étudions également des modèles de taux d'intérêts, en particulier les modèles de Vasicek, de Cox-Ingersoll-Ross et de Heath-Jarrow-Morton. Finalement nous étudions la modélisation du risque de crédit
This work investigates financial models for option pricing, interest rates and credit risk with stochastic processes that have memory and discontinuities. These models are formulated in terms of the fractional Brownian motion, the fractional or filtered Lévy process (also doubly stochastic) and their approximations by semimartingales. Their stochastic calculus is treated in the sense of Malliavin and Itô formulas are derived. We characterize the risk-neutral probability measures in terms of these processes for option pricing models of Black-Scholes type with jumps. We also study models of interest rates, in particular the models of Vasicek, Cox-Ingersoll-Ross and Heath-Jarrow-Morton. Finally we study credit risk models.
APA, Harvard, Vancouver, ISO, and other styles
47

Esstafa, Youssef. "Modèles de séries temporelles à mémoire longue avec innovations dépendantes." Thesis, Bourgogne Franche-Comté, 2019. http://www.theses.fr/2019UBFCD021.

Full text
Abstract:
Dans cette thèse nous considérons, dans un premier temps, le problème de l'analyse statistique des modèles FARIMA (Fractionally AutoRegressive Integrated Moving-Average) induits par un bruit blanc non corrélé mais qui peut contenir des dépendances non linéaires très générales. Ces modèles sont appelés FARIMA faibles et permettent de modéliser des processus à mémoire longue présentant des dynamiques non linéaires, de structures souvent non-identifiées, très générales. Relâcher l'hypothèse d'indépendance sur le terme d'erreur, une hypothèse habituellement imposée dans la littérature, permet aux modèles FARIMA faibles d'élargir considérablement leurs champs d'application en couvrant une large classe de processus à mémoire longue non linéaires. Les modèles FARIMA faibles sont denses dans l'ensemble des processus stationnaires purement non déterministes, la classe formée par ces modèles englobe donc celle des processus FARIMA avec un bruit indépendant et identiquement distribué (iid). Nous appelons par la suite FARIMA forts les modèles dans lesquels le terme d'erreur est supposé être un bruit iid.Nous établissons les procédures d'estimation et de validation des modèles FARIMA faibles. Nous montrons, sous des hypothèses faibles de régularités sur le bruit, que l'estimateur des moindres carrés des paramètres des modèles FARIMA(p,d,q) faibles est fortement convergent et asymptotiquement normal. La matrice de variance asymptotique de l'estimateur des moindres carrés des modèles FARIMA(p,d,q) faibles est de la forme "sandwich". Cette matrice peut être très différente de la variance asymptotique obtenue dans le cas fort (i.e. dans le cas où le bruit est supposé iid). Nous proposons, par deux méthodes différentes, un estimateur convergent de cette matrice. Une méthode alternative basée sur une approche d'auto-normalisation est également proposée pour construire des intervalles de confiance des paramètres des modèles FARIMA(p,d,q) faibles. Cette technique nous permet de contourner le problème de l'estimation de la matrice de variance asymptotique de l'estimateur des moindres carrés.Nous accordons ensuite une attention particulière au problème de la validation des modèles FARIMA(p,d,q) faibles. Nous montrons que les autocorrélations résiduelles ont une distribution asymptotique normale de matrice de covariance différente de celle obtenue dans le cadre des FARIMA forts. Cela nous permet de déduire la loi asymptotique exacte des statistiques portmanteau et de proposer ainsi des versions modifiées des tests portmanteau standards de Box-Pierce et Ljung-Box. Il est connu que la distribution asymptotique des tests portmanteau est correctement approximée par un khi-deux lorsque le terme d'erreur est supposé iid. Dans le cas général, nous montrons que cette distribution asymptotique est celle d'une somme pondérée de khi-deux. Elle peut être très différente de l'approximation khi-deux usuelle du cas fort. Nous adoptons la même approche d'auto-normalisation utilisée pour la construction des intervalles de confiance des paramètres des modèles FARIMA faibles pour tester l'adéquation des modèles FARIMA(p,d,q) faibles. Cette méthode a l'avantage de contourner le problème de l'estimation de la matrice de variance asymptotique du vecteur joint de l'estimateur des moindres carrés et des autocovariances empiriques du bruit.Dans un second temps, nous traitons dans cette thèse le problème de l'estimation des modèles autorégressifs d'ordre 1 induits par un bruit gaussien fractionnaire d'indice de Hurst H supposé connu. 
Nous étudions, plus précisément, la convergence et la normalité asymptotique de l'estimateur des moindres carrés généralisés du paramètre autorégressif de ces modèles
We first consider, in this thesis, the problem of statistical analysis of FARIMA (Fractionally AutoRegressive Integrated Moving-Average) models endowed with uncorrelated but non-independent error terms. These models are called weak FARIMA and can be used to fit long-memory processes with general nonlinear dynamics. Relaxing the independence assumption on the noise, which is a standard assumption usually imposed in the literature, allows weak FARIMA models to cover a large class of nonlinear long-memory processes. The weak FARIMA models are dense in the set of purely non-deterministic stationary processes, the class of these models encompasses that of FARIMA processes with an independent and identically distributed noise (iid). We call thereafter strong FARIMA models the models in which the error term is assumed to be an iid innovations.We establish procedures for estimating and validating weak FARIMA models. We show, under weak assumptions on the noise, that the least squares estimator of the parameters of weak FARIMA(p,d,q) models is strongly consistent and asymptotically normal. The asymptotic variance matrix of the least squares estimator of weak FARIMA(p,d,q) models has the "sandwich" form. This matrix can be very different from the asymptotic variance obtained in the strong case (i.e. in the case where the noise is assumed to be iid). We propose, by two different methods, a convergent estimator of this matrix. An alternative method based on a self-normalization approach is also proposed to construct confidence intervals for the parameters of weak FARIMA(p,d,q) models.We then pay particular attention to the problem of validation of weak FARIMA(p,d,q) models. We show that the residual autocorrelations have a normal asymptotic distribution with a covariance matrix different from that one obtained in the strong FARIMA case. This allows us to deduce the exact asymptotic distribution of portmanteau statistics and thus to propose modified versions of portmanteau tests. It is well known that the asymptotic distribution of portmanteau tests is correctly approximated by a chi-squared distribution when the error term is assumed to be iid. In the general case, we show that this asymptotic distribution is a mixture of chi-squared distributions. It can be very different from the usual chi-squared approximation of the strong case. We adopt the same self-normalization approach used for constructing the confidence intervals of weak FARIMA model parameters to test the adequacy of weak FARIMA(p,d,q) models. This method has the advantage of avoiding the problem of estimating the asymptotic variance matrix of the joint vector of the least squares estimator and the empirical autocovariances of the noise.Secondly, we deal in this thesis with the problem of estimating autoregressive models of order 1 endowed with fractional Gaussian noise when the Hurst parameter H is assumed to be known. We study, more precisely, the convergence and the asymptotic normality of the generalized least squares estimator of the autoregressive parameter of these models
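For reference, the sketch below computes the standard Ljung-Box portmanteau statistic from residual autocorrelations; as the thesis stresses, its chi-square approximation is only valid for strong (iid) innovations, the weak case leading instead to a weighted sum of chi-squares.

```python
# Standard (strong-case) Ljung-Box statistic from residual autocorrelations.
import numpy as np

def ljung_box(residuals, m=10):
    """Ljung-Box statistic over the first m residual autocorrelations."""
    e = residuals - residuals.mean()
    n = len(e)
    acf = np.array([np.sum(e[k:] * e[:-k]) for k in range(1, m + 1)]) / np.sum(e**2)
    q = n * (n + 2) * np.sum(acf**2 / (n - np.arange(1, m + 1)))
    return q, acf

rng = np.random.default_rng(7)
q, acf = ljung_box(rng.standard_normal(500))
print("Ljung-Box Q over 10 lags:", q)   # compare with chi2(10) only under iid noise
```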
APA, Harvard, Vancouver, ISO, and other styles
48

Scipioni, Angel. "Contribution à la théorie des ondelettes : application à la turbulence des plasmas de bord de Tokamak et à la mesure dimensionnelle de cibles." Thesis, Nancy 1, 2010. http://www.theses.fr/2010NAN10125.

Full text
Abstract:
La nécessaire représentation en échelle du monde nous amène à expliquer pourquoi la théorie des ondelettes en constitue le formalisme le mieux adapté. Ses performances sont comparées à d'autres outils : la méthode des étendues normalisées (R/S) et la méthode par décomposition empirique modale (EMD).La grande diversité des bases analysantes de la théorie des ondelettes nous conduit à proposer une approche à caractère morphologique de l'analyse. L'exposé est organisé en trois parties.Le premier chapitre est dédié aux éléments constitutifs de la théorie des ondelettes. Un lien surprenant est établi entre la notion de récurrence et l'analyse en échelle (polynômes de Daubechies) via le triangle de Pascal. Une expression analytique générale des coefficients des filtres de Daubechies à partir des racines des polynômes est ensuite proposée.Le deuxième chapitre constitue le premier domaine d'application. Il concerne les plasmas de bord des réacteurs de fusion de type tokamak. Nous exposons comment, pour la première fois sur des signaux expérimentaux, le coefficient de Hurst a pu être mesuré à partir d'un estimateur des moindres carrés à ondelettes. Nous détaillons ensuite, à partir de processus de type mouvement brownien fractionnaire (fBm), la manière dont nous avons établi un modèle (de synthèse) original reproduisant parfaitement la statistique mixte fBm et fGn qui caractérise un plasma de bord. Enfin, nous explicitons les raisons nous ayant amené à constater l'absence de lien existant entre des valeurs élevées du coefficient d'Hurst et de supposées longues corrélations.Le troisième chapitre est relatif au second domaine d'application. Il a été l'occasion de mettre en évidence comment le bien-fondé d'une approche morphologique couplée à une analyse en échelle nous ont permis d'extraire l'information relative à la taille, dans un écho rétrodiffusé d'une cible immergée et insonifiée par une onde ultrasonore
The necessary scale-based representation of the world leads us to explain why wavelet theory is the best-suited formalism. Its performance is compared to other tools: R/S analysis and the empirical mode decomposition method (EMD). The great diversity of analyzing bases of wavelet theory leads us to propose a morphological approach to the analysis. The study is organized into three parts. The first chapter is dedicated to the constituent elements of wavelet theory. Then we will show the surprising link between the concept of recurrence and scale analysis (Daubechies polynomials) by using Pascal's triangle. A general analytical expression of Daubechies' filter coefficients is then proposed from the polynomial roots. The second chapter is the first application domain. It involves edge plasmas of tokamak fusion reactors. We will describe how, for the first time on experimental signals, the Hurst coefficient has been measured by a wavelet-based estimator. We will detail, starting from fBm-like processes (fractional Brownian motion), how we have established an original model perfectly reproducing the fBm and fGn joint statistics that characterize magnetized plasmas. Finally, we will point out the reasons for the lack of a link between high values of the Hurst coefficient and possible long correlations. The third chapter is dedicated to the second application domain, which concerns the analysis of the echo backscattered by an immersed target insonified by an ultrasonic plane wave. We will explain how a morphological approach associated with a scale analysis can extract the diameter information.
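A rough sketch of a wavelet-based Hurst estimator of the kind referred to above (an Abry-Veitch style log-variance regression on Haar detail coefficients, not the exact estimator of the thesis): for an increment process the detail-coefficient variance scales like 2^{j(2H-1)} across octaves j, so the regression slope yields H. It is tested here on white noise, for which H = 0.5.

```python
# Haar-wavelet log-variance regression as a simple Hurst estimator.
import numpy as np

def haar_detail_variances(x, levels=6):
    """Variance of Haar wavelet detail coefficients at octaves 1..levels."""
    variances = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        a = a[: len(a) - len(a) % 2]
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)    # detail coefficients
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)    # approximation passed to next octave
        variances.append(d.var())
    return np.array(variances)

rng = np.random.default_rng(8)
increments = rng.standard_normal(2**14)           # fGn with H = 0.5 (white noise)
v = haar_detail_variances(increments)
j = np.arange(1, len(v) + 1)
slope = np.polyfit(j, np.log2(v), 1)[0]           # slope of log2-variance vs octave
print("estimated H:", 0.5 * (slope + 1))          # slope ~ 2H - 1
```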
APA, Harvard, Vancouver, ISO, and other styles
49

"Applications of Lie symmetry analysis to the quantum Brownian motion model." Thesis, 2008. http://hdl.handle.net/10413/455.

Full text
Abstract:
Lie symmetry group methods provide a useful tool for the analysis of differential equations in a variety of areas in physics and applied mathematics. The nature of symmetry is that it provides information on properties which remain invariant under transformation. In differential equations this invariance provides a route toward complete integrations, reductions, linearisations and analytical solutions which can evade standard techniques of analysis. In this thesis we study two problems in quantum mechanics from a symmetry perspective: We consider for pedagogical purposes the linear time dependent Schrodinger equation in a potential and provide a symmetry analysis of the resulting equations. Thereafter, as an original contribution, we study the group theoretic properties of the density matrix equation for the quantum Brownian motion of a free particle interacting with a bath of harmonic oscillators. We provide a number of canonical reductions of the system to equations of reduced dimensionality as well as several complete integrations.
Thesis (M.Sc.) - University of KwaZulu-Natal, Westville, 2008.
APA, Harvard, Vancouver, ISO, and other styles
50

Hsiao, Yu Kai (蕭友凱). "Brownian Motion Stochastic Differential Equations Model and Risk Premium Confidence Interval." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/26066043928350069731.

Full text
Abstract:
Master's thesis
Aletheia University
Master's Program, Department of Statistics and Actuarial Science
98
Owing to recent advances in medical technology and public health, mortality in developed countries has improved and life expectancy keeps increasing, which motivates research on longevity risk. Rising life expectancy can leave insurance companies ill-prepared for annuity payments and medical expenses, creating operational risk, and government social security systems face the same problem. Addressing such problems requires fundamental models of both interest rates and mortality; this thesis deals only with the mortality model. At present, the mortality models used for insurance product pricing are deterministic, such as the Gompertz law and the Coale-Kisker model, which rest on fixed parameters and assumptions and do not account for the uncertainty of future mortality. In recent years, stochastic discrete-time mortality models, such as the Lee-Carter model, which incorporate a random component for the future, have been favoured. Since changes in mortality should be random and continuous, we set up a continuous stochastic mortality model based on stochastic differential equations driven by Brownian motion. Using data from the HMD (Human Mortality Database) for Japan, Taiwan, England & Wales, Sweden and the USA, we establish 95% confidence intervals by the Monte Carlo simulation method, fit and forecast mortality, and compare the results with the Lee-Carter method.
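An illustrative sketch (not the thesis's fitted model) of the kind of computation described above: a mortality index driven by a geometric-Brownian-motion-type SDE is projected forward and a Monte Carlo 95% confidence interval is read off; all parameter values are assumptions.

```python
# Monte Carlo 95% CI for a mortality index following d(mu) = theta*mu dt + sigma*mu dW.
import numpy as np

rng = np.random.default_rng(9)
mu0, theta, sigma = 0.010, -0.02, 0.03   # current rate, drift and volatility (assumed)
horizon, n_paths = 20, 100_000           # project 20 years ahead

z = rng.standard_normal(n_paths)
mu_T = mu0 * np.exp((theta - 0.5 * sigma**2) * horizon
                    + sigma * np.sqrt(horizon) * z)
lo, hi = np.percentile(mu_T, [2.5, 97.5])
print(f"projected rate: median {np.median(mu_T):.5f}, 95% CI [{lo:.5f}, {hi:.5f}]")
```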
APA, Harvard, Vancouver, ISO, and other styles