To see the other types of publications on this topic, follow the link: Markov Theorem.

Dissertations / Theses on the topic 'Markov Theorem'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Markov Theorem.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Cap, Andreas. "Markov Operators and the Nevo-Stein Theorem." ESI preprints, 2001. ftp://ftp.esi.ac.at/pub/Preprints/esi1077.ps.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Holzmann, Hajo. "Some remarks on the central limit theorem for stationary Markov processes." Doctoral thesis, [S.l.] : [s.n.], 2004. http://deposit.ddb.de/cgi-bin/dokserv?idn=972079874.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Sedova, Ada. "Conditions for deterministic limits of Markov jump processes: The Kurtz theorem in chemistry." Thesis, State University of New York at Albany, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=1588003.

Full text
Abstract:
A theorem by Kurtz on the convergence of Markov jump processes is presented as it relates to the use of the chemical master equation. The necessary mathematical background in the theory of stochastic processes is developed, as well as the requirements placed on the mathematical model by results in the physical sciences. The applicability and usefulness of the master equation for this type of combinatorial model in chemistry are discussed, as well as analytical connections and modern applications in multiple research fields.
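To make the Kurtz-type limit concrete (an illustration added here, not taken from the thesis), the sketch below simulates a toy birth-death reaction network with Gillespie's stochastic simulation algorithm and compares the rescaled jump process with its deterministic ODE limit; the rate constants k1, k2 and the system size V are made-up values.

```python
import random

random.seed(0)
k1, k2, V, T = 1.0, 0.5, 500, 10.0   # illustrative parameters only

def gillespie(V):
    """Exact simulation of: 0 -> X at rate k1*V, X -> 0 at rate k2*X."""
    t, x = 0.0, 0
    while t < T:
        r_birth, r_death = k1 * V, k2 * x
        total = r_birth + r_death
        t += random.expovariate(total)           # waiting time to next reaction
        if random.random() < r_birth / total:    # choose which reaction fires
            x += 1
        else:
            x -= 1
    return x

# Deterministic (Kurtz) limit dc/dt = k1 - k2*c, integrated with explicit Euler.
c, dt = 0.0, 0.001
for _ in range(int(T / dt)):
    c += dt * (k1 - k2 * c)

x_T = gillespie(V)
print("stochastic density X(T)/V :", x_T / V)
print("deterministic limit c(T)  :", c)          # both near k1/k2 = 2 for large V
```

As V grows, the density X(t)/V concentrates around the ODE solution, which is the content of Kurtz's theorem for density-dependent Markov jump processes.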
APA, Harvard, Vancouver, ISO, and other styles
4

Müller, Gustavo Henrique. "Uma introdução aos grandes desvios." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2016. http://hdl.handle.net/10183/150245.

Full text
Abstract:
In this master's thesis, we present a proof of large deviations for independent and identically distributed random variables with all moments finite, and for the empirical measure of Markov chains with finite state space in discrete time. Moreover, we address the theorems of Sanov and of Gärtner-Ellis.
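For orientation, the classical i.i.d. case referred to above can be summarised by Cramér's theorem, stated here as standard background rather than as a result of the dissertation:

```latex
% Cramér's theorem (i.i.d. case): for Borel sets A,
\[
-\inf_{x \in A^{\circ}} I(x)
\;\le\; \liminf_{n\to\infty} \tfrac{1}{n}\log \mathbb{P}\!\Big(\tfrac{1}{n}\textstyle\sum_{i=1}^{n} X_i \in A\Big)
\;\le\; \limsup_{n\to\infty} \tfrac{1}{n}\log \mathbb{P}\!\Big(\tfrac{1}{n}\textstyle\sum_{i=1}^{n} X_i \in A\Big)
\;\le\; -\inf_{x \in \overline{A}} I(x),
\]
\[
\text{with rate function } I(x) \;=\; \sup_{\lambda \in \mathbb{R}} \bigl(\lambda x - \Lambda(\lambda)\bigr),
\qquad \Lambda(\lambda) \;=\; \log \mathbb{E}\bigl[e^{\lambda X_1}\bigr].
\]
```

The Gärtner-Ellis theorem mentioned in the abstract replaces \Lambda by a limiting scaled cumulant generating function, which is what makes the finite-state Markov chain case tractable.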
APA, Harvard, Vancouver, ISO, and other styles
5

Hubbard, Rebecca Allana. "Modeling a non-homogeneous Markov process via time transformation /." Thesis, Connect to this title online; UW restricted, 2007. http://hdl.handle.net/1773/9607.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Damian, Doris. "A Bayesian approach to estimating heterogeneous spatial covariances /." Thesis, Connect to this title online; UW restricted, 2002. http://hdl.handle.net/1773/9563.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Haneuse, Sebastian J. P. A. "Ecological studies using supplemental case-control data /." Thesis, Connect to this title online; UW restricted, 2004. http://hdl.handle.net/1773/9595.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Asgeirsson, Agni. "On-line algorithms for bin-covering problems with known item distributions." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/53413.

Full text
Abstract:
This thesis focuses on algorithms solving the on-line Bin-Covering problem when the items are generated from a known, stationary distribution. We introduce the Prospect Algorithm. The main idea behind the Prospect Algorithm is to use information on the item distribution to estimate how easy it will be to fill a bin with small overfill as a function of the empty space left in it. This estimate is then used to determine where to place the items, so that all active bins either stay easily fillable, or are finished with small overfill. We test the performance of the algorithm by simulation, and discuss how it can be modified to cope with additional constraints and extended to solve the Bin-Packing problem as well. The Prospect Algorithm is then adapted to achieve perfect packing, yielding a new version, the Prospect+ Algorithm, which is a slight but consistent improvement. Next, a Markov Decision Process formulation is used to obtain an optimal Bin-Covering algorithm to compare with the Prospect Algorithm. Even though the optimal algorithm can only be applied to limited (small) cases, it gives useful insights that lead to another modification of the Prospect Algorithm. We also discuss two relaxations of the on-line constraint, and describe how algorithms that are based on solving the Subset-Sum problem are used to tackle these relaxed problems. Finally, several practical issues encountered when using the Prospect Algorithm in the real world are analyzed, a computationally efficient way of doing the background calculations needed for the Prospect Algorithm is described, and the three versions of the Prospect Algorithm developed in this thesis are compared.
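As a rough sketch of the "prospect" idea described in the abstract (a simplified heuristic written here for illustration, not the thesis's actual Prospect Algorithm), the code below precomputes, for a discrete item-size distribution, the probability that a bin with a given remaining gap will eventually be closed with overfill at most TOL if it keeps receiving items, and then routes each arriving item to the open bin whose resulting gap scores best. The item sizes, probabilities, bin target and all other parameters are invented for the example.

```python
import random

# Illustrative discrete item distribution (size -> probability), bin target and
# overfill tolerance; all values are made up for this sketch.
sizes = {2: 0.3, 3: 0.4, 5: 0.3}
TARGET, TOL, N_ITEMS, N_BINS = 12, 1, 10_000, 4
random.seed(1)

# prospect[g] = P(a bin needing g more units is eventually closed with
# overfill <= TOL, if all future items were sent to it).
prospect = [0.0] * (TARGET + 1)
for g in range(1, TARGET + 1):
    p = 0.0
    for s, ps in sizes.items():
        if s >= g:
            p += ps * (1.0 if s - g <= TOL else 0.0)   # closes the bin now
        else:
            p += ps * prospect[g - s]                   # keep filling
    prospect[g] = p

bins = [0] * N_BINS          # current content of each open bin
closed, overfill = 0, 0
for _ in range(N_ITEMS):
    s = random.choices(list(sizes), weights=list(sizes.values()))[0]

    def score(b):
        gap = TARGET - (bins[b] + s)
        if gap <= 0:                                    # bin would be closed
            return 1.0 if -gap <= TOL else 0.0
        return prospect[gap]                            # bin stays open

    b = max(range(N_BINS), key=score)                   # greedy placement
    bins[b] += s
    if bins[b] >= TARGET:
        closed += 1
        overfill += bins[b] - TARGET
        bins[b] = 0

print(f"closed bins: {closed}, average overfill: {overfill / closed:.3f}")
```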
APA, Harvard, Vancouver, ISO, and other styles
9

Zhou, Chuan. "A Bayesian model for curve clustering with application to gene expression data analysis /." Thesis, Connect to this title online; UW restricted, 2003. http://hdl.handle.net/1773/9576.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Martínez, Sosa José. "Optimal exposure strategies in insurance." Thesis, University of Manchester, 2018. https://www.research.manchester.ac.uk/portal/en/theses/optimal-exposure-strategies-in-insurance(3768eede-a363-475b-bf25-8eff039fe6b7).html.

Full text
Abstract:
Two optimisation problems are considered, in which market exposure is indirectly controlled. The first one models the capital of a company and an independent portfolio of new business, each represented by a Cramér-Lundberg process. The company can choose the proportion of new business it wants to take on and can alter this proportion over time. Here the objective is to find a strategy that maximises the survival probability. We use a point-process framework to deal with the impact of an adapted strategy on the intensity of the new business. We prove that, for Cramér-Lundberg processes with exponentially distributed claims, it is optimal to choose a threshold-type strategy, where the company switches between owning all new business or none depending on the capital level. For this type of process, which changes both drift and jump measure when crossing the constant threshold, we solve the one- and two-sided exit problems. This optimisation problem is also solved when the capital of the company and the new business are modelled by spectrally positive Lévy processes of bounded variation. Here the one-sided exit problem is solved and we prove optimality of the same type of threshold strategy for any jump distribution. The second problem is a stochastic variation of the work done by Taylor on underwriting in a competitive market. Taylor maximised discounted future cash flows over a finite time horizon in a discrete-time setting in which the change of exposure from one period to the next has a multiplicative form involving the company's premium and the market average premium. The control is the company's premium strategy over the mentioned finite time horizon. Taylor's work opened a rich line of research, and we discuss some of it. In contrast with Taylor's model, we consider the market average premium to be a Markov chain instead of a deterministic vector. This allows us to model uncertainty in future market conditions. We also consider an infinite time horizon instead of a finite one. This removes the time dependency in Taylor's optimal strategies, which was giving unrealistic results. Our main result is a formula to calculate explicitly the value function of a specific class of pricing strategies. We further explore concrete examples numerically. We find a mix of optimal strategies: in some examples the company should follow the market, while in other cases it should go against it.
APA, Harvard, Vancouver, ISO, and other styles
11

Aliou, Diallo Aoudi Mohamed Habib. "Local matching algorithms on the configuration model." Electronic Thesis or Diss., Compiègne, 2023. http://www.theses.fr/2023COMP2742.

Full text
Abstract:
This thesis constructs an alternative framework for online matching algorithms on large graphs. Using the configuration model to mimic the degree distributions of large networks, we are able to build algorithms based on local matching policies for nodes. Thus, we can predict and approximate the performance of a class of matching policies given the degree distribution of the initial network. Towards this goal, we use a generalization of Wormald's differential equation method to measure-valued processes. Throughout the text, we provide simulations and a comparison to the seminal work of Karp, Vazirani and Vazirani, based on the prevailing viewpoint in online bipartite matching.
APA, Harvard, Vancouver, ISO, and other styles
12

Tillman, Måns. "On-Line Market Microstructure Prediction Using Hidden Markov Models." Thesis, KTH, Matematisk statistik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-208312.

Full text
Abstract:
Over the last decades, financial markets have undergone dramatic changes. With the advent of arbitrage pricing theory, along with new technology, markets have become more efficient. In particular, the new high-frequency markets, with algorithmic trading operating on the microsecond level, make it possible to translate "information" into price almost instantaneously. Such phenomena are studied in the field of market microstructure theory, which aims to explain and predict them. In this thesis, we model the dynamics of high-frequency markets using non-linear hidden Markov models (HMMs). Such models feature an intuitive separation between observations and dynamics, and are therefore highly convenient tools in financial settings, where they allow a precise application of domain knowledge. HMMs can be formulated based on only a few parameters, yet their inherently dynamic nature can be used to capture well-known intra-day seasonality effects that many other models fail to explain. Due to recent breakthroughs in Monte Carlo methods, HMMs can now be efficiently estimated in real time. In this thesis, we develop a holistic framework for performing both real-time inference and learning of HMMs by combining several particle-based methods. Within this framework, we also provide methods for making accurate predictions from the model, as well as methods for assessing the model itself. In this framework, a sequential Monte Carlo bootstrap filter is adopted to make on-line inference and predictions. Coupled with a backward smoothing filter, this provides a forward filtering/backward smoothing scheme. This is then used in the sequential Monte Carlo expectation-maximization algorithm for finding the optimal hyper-parameters of the model. To design an HMM specifically for capturing information translation, we adapt the observable volume imbalance to a dynamic setting. Volume imbalance has previously been used in market microstructure theory to study, for example, price impact. Through careful selection of key model assumptions, we define a slightly modified observable as a process that we call scaled volume imbalance. The outcomes of this process retain the key features of volume imbalance (that is, its relationship to price impact and information) and allow an efficient evaluation of the framework, while providing a promising platform for future studies. This is demonstrated through a test on actual financial trading data, where we obtain high-performance predictions. Our results demonstrate that the proposed framework can successfully be applied to the field of market microstructure.
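A minimal bootstrap (sequential Monte Carlo) filter of the kind the thesis builds on, written here for a generic one-dimensional state-space model with Gaussian noise; the model, parameters and data are invented for illustration and have nothing to do with the scaled-volume-imbalance observable.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy non-linear state-space model (illustrative only):
#   x_t = 0.9 * x_{t-1} + 0.1 * sin(x_{t-1}) + v_t,  v_t ~ N(0, 0.5^2)
#   y_t = x_t + w_t,                                  w_t ~ N(0, 1.0^2)
T, N = 200, 1000
sig_v, sig_w = 0.5, 1.0

def f(x):
    return 0.9 * x + 0.1 * np.sin(x)

# Simulate synthetic data.
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = f(x_true[t - 1]) + sig_v * rng.standard_normal()
y = x_true + sig_w * rng.standard_normal(T)

# Bootstrap particle filter: propagate with the transition kernel,
# weight by the observation likelihood, then resample multinomially.
particles = rng.standard_normal(N)
est = np.zeros(T)
for t in range(T):
    particles = f(particles) + sig_v * rng.standard_normal(N)
    logw = -0.5 * ((y[t] - particles) / sig_w) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    est[t] = np.sum(w * particles)                  # filtering mean E[x_t | y_{1:t}]
    particles = rng.choice(particles, size=N, p=w)  # multinomial resampling

print("RMSE of filtered mean:", np.sqrt(np.mean((est - x_true) ** 2)))
```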
APA, Harvard, Vancouver, ISO, and other styles
13

Lin, Lijing. "Roots of stochastic matrices and fractional matrix powers." Thesis, University of Manchester, 2011. https://www.research.manchester.ac.uk/portal/en/theses/roots-of-stochastic-matrices-and-fractional-matrix-powers(3f7dbb69-7c22-4fe9-9461-429c25c0db85).html.

Full text
Abstract:
In Markov chain models in finance and healthcare a transition matrix over a certain time interval is needed but only a transition matrix over a longer time interval may be available. The problem arises of determining a stochastic $p$th root of a stochastic matrix (the given transition matrix). By exploiting the theory of functions of matrices, we develop results on the existence and characterization of stochastic $p$th roots. Our contributions include characterization of when a real matrix has a real $p$th root, a classification of $p$th roots of a possibly singular matrix, a sufficient condition for a $p$th root of a stochastic matrix to have unit row sums, and the identification of two classes of stochastic matrices that have stochastic $p$th roots for all $p$. We also delineate a wide variety of possible configurations as regards existence, nature (primary or nonprimary), and number of stochastic roots, and develop a necessary condition for existence of a stochastic root in terms of the spectrum of the given matrix. On the computational side, we emphasize finding an approximate stochastic root: perturb the principal root $A^{1/p}$ or the principal logarithm $\log(A)$ to the nearest stochastic matrix or the nearest intensity matrix, respectively, if they are not valid ones; minimize the residual $\|X^p-A\|_F$ over all stochastic matrices $X$ and also over stochastic matrices that are primary functions of $A$. For the first two nearness problems, the global minimizers are found in the Frobenius norm. For the last two nonlinear programming problems, we derive explicit formulae for the gradient and Hessian of the objective function $\|X^p-A\|_F^2$ and investigate Newton's method, a spectral projected gradient method (SPGM) and the sequential quadratic programming method to solve the problem, as well as various matrices to start the iteration. Numerical experiments show that SPGM starting with the perturbed $A^{1/p}$ to minimize $\|X^p-A\|_F$ over all stochastic matrices is the method of choice. Finally, a new algorithm is developed for computing arbitrary real powers $A^\alpha$ of a matrix $A\in\mathbb{C}^{n\times n}$. The algorithm starts with a Schur decomposition, takes $k$ square roots of the triangular factor $T$, evaluates an $[m/m]$ Padé approximant of $(1-x)^\alpha$ at $I - T^{1/2^k}$, and squares the result $k$ times. The parameters $k$ and $m$ are chosen to minimize the cost subject to achieving double precision accuracy in the evaluation of the Padé approximant, making use of a result that bounds the error in the matrix Padé approximant by the error in the scalar Padé approximant with argument the norm of the matrix. The Padé approximant is evaluated from the continued fraction representation in bottom-up fashion, which is shown to be numerically stable. In the squaring phase the diagonal and first superdiagonal are computed from explicit formulae for $T^{\alpha/2^j}$, yielding increased accuracy. Since the basic algorithm is designed for $\alpha\in(-1,1)$, a criterion for reducing an arbitrary real $\alpha$ to this range is developed, making use of bounds for the condition number of the $A^\alpha$ problem. How best to compute $A^k$ for a negative integer $k$ is also investigated. In numerical experiments the new algorithm is found to be superior in accuracy and stability to several alternatives, including the use of an eigendecomposition, a method based on the Schur-Parlett algorithm with our new algorithm applied to the diagonal blocks, and approaches based on the formula $A^\alpha = \exp(\alpha\log(A))$.
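A hedged sketch of the "perturb the principal root" idea mentioned in the abstract: compute $A^{1/p}$ with SciPy and project each row onto the probability simplex, which gives the nearest stochastic matrix in the Frobenius norm. This is only the first of the approaches the thesis compares, and the example matrix is invented.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

def project_row_to_simplex(v):
    """Euclidean projection of a vector onto {x >= 0, sum(x) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1))[0][-1]
    theta = (css[rho] - 1) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def approx_stochastic_root(A, p):
    """Principal p-th root of A, pushed to the nearest stochastic matrix."""
    X = fractional_matrix_power(A, 1.0 / p).real
    return np.vstack([project_row_to_simplex(row) for row in X])

# Invented yearly transition matrix; we want an approximate quarterly one.
A = np.array([[0.90, 0.08, 0.02],
              [0.10, 0.85, 0.05],
              [0.05, 0.15, 0.80]])
X = approx_stochastic_root(A, 4)
print("row sums:", X.sum(axis=1))                    # exactly 1 after projection
print("residual ||X^4 - A||_F:", np.linalg.norm(np.linalg.matrix_power(X, 4) - A))
```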
APA, Harvard, Vancouver, ISO, and other styles
14

Jonsson, Fredrik. "Physiologically based pharmacokinetic modeling in risk assessment - Development of Bayesian population methods." Doctoral thesis, Solna : National Institute for Working Life (Arbetslivsinstitutet), 2001. http://publications.uu.se/theses/91-7045-599-6/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Kreacic, Eleonora. "Some problems related to the Karp-Sipser algorithm on random graphs." Thesis, University of Oxford, 2017. http://ora.ox.ac.uk/objects/uuid:3b2eb52a-98f5-4af8-9614-e4909b8b9ffa.

Full text
Abstract:
We study certain questions related to the performance of the Karp-Sipser algorithm on the sparse Erdős-Rényi random graph. The Karp-Sipser algorithm, introduced by Karp and Sipser [34], is a greedy algorithm which aims to obtain a near-maximum matching on a given graph. The algorithm evolves through a sequence of steps. In each step, it picks an edge according to a certain rule, adds it to the matching and removes it from the remaining graph. The algorithm stops when the remaining graph is empty. In [34], the performance of the Karp-Sipser algorithm on the Erdős-Rényi random graphs G(n, M = [cn/2]) and G(n, p = c/n), c > 0, is studied. It is proved there that the algorithm behaves near-optimally, in the sense that the difference between the size of a matching obtained by the algorithm and a maximum matching is at most o(n), with high probability as n → ∞. The main result of [34] is a law of large numbers for the size of a maximum matching in G(n, M = cn/2) and G(n, p = c/n), c > 0. Aronson, Frieze and Pittel [2] further refine these results. In particular, they prove that for c < e, the Karp-Sipser algorithm obtains a maximum matching, with high probability as n → ∞; for c > e, the difference between the size of a matching obtained by the algorithm and the size of a maximum matching of G(n, M = cn/2) is of order Θ_{log n}(n^{1/5}), with high probability as n → ∞. They further conjecture a central limit theorem for the size of a maximum matching of G(n, M = cn/2) and G(n, p = c/n) for all c > 0. As noted in [2], the central limit theorem for c < 1 is a consequence of the result of Pittel [45]. In this thesis, we prove a central limit theorem for the size of a maximum matching of both G(n, M = cn/2) and G(n, p = c/n) for c > e. (We do not analyse the case 1 ≤ c ≤ e.) Our approach is based on further analysis of the Karp-Sipser algorithm. We use the results from [2] and refine them. For c > e, the difference between the size of a matching obtained by the algorithm and the size of a maximum matching is of order Θ_{log n}(n^{1/5}), with high probability as n → ∞, and the study [2] suggests that this difference is accumulated at the very end of the process. The question of how the Karp-Sipser algorithm evolves in its final stages for c > e motivated us to consider the following problem in this thesis. We study a model for the destruction of a random network by fire. Let us assume that we have a multigraph with minimum degree at least 2 with real-valued edge-lengths. We first choose a uniform random point from along the length and set it alight. The edges burn at speed 1. If the fire reaches a node of degree 2, it is passed on to the neighbouring edge. On the other hand, a node of degree at least 3 passes the fire either to all its neighbours or none, each with probability 1/2. If the fire extinguishes before the graph is burnt, we again pick a uniform point and set it alight. We study this model in the setting of a random multigraph with N nodes of degree 3 and α(N) nodes of degree 4, where α(N)/N → 0 as N → ∞. We assume the edges to have i.i.d. standard exponential lengths.
We are interested in the asymptotic behaviour of the number of fires we must set alight in order to burn the whole graph, and the number of points which are burnt from two different directions. Depending on whether α(N) » √N or not, we prove that after suitable rescaling these quantities converge jointly in distribution to either a pair of constants or to (complicated) functionals of Brownian motion. Our analysis supports the conjecture that the difference between the size of a matching obtained by the Karp-Sipser algorithm and the size of a maximum matching of the Erdős-Rényi random graph G(n, M = cn/2) for c > e, rescaled by n^{1/5}, converges in distribution.
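For reference, a plain-Python sketch of the Karp-Sipser rule as it is usually stated (match an edge at a degree-1 vertex whenever one exists, otherwise match a uniformly random surviving edge), run here on a G(n, M = cn/2) random graph with illustrative parameters.

```python
from collections import deque
import random

random.seed(0)
n, c = 20_000, 3.0                       # G(n, M = cn/2) with c > e, as in the abstract
M = int(c * n / 2)

adj = [set() for _ in range(n)]
edges = set()
while len(edges) < M:
    a, b = random.randrange(n), random.randrange(n)
    e = (min(a, b), max(a, b))
    if a != b and e not in edges:
        edges.add(e)
        adj[a].add(b)
        adj[b].add(a)
edge_list = list(edges)

def delete(v, pendants):
    """Remove vertex v from the graph; record any new degree-1 neighbours."""
    for w in adj[v]:
        adj[w].discard(v)
        if len(adj[w]) == 1:
            pendants.append(w)
    adj[v].clear()

matching = []
pendants = deque(v for v in range(n) if len(adj[v]) == 1)
while True:
    u = v = None
    # Karp-Sipser rule: match a pendant edge whenever one exists ...
    while pendants:
        x = pendants.popleft()
        if len(adj[x]) == 1:
            u, v = x, next(iter(adj[x]))
            break
    # ... otherwise match a uniformly random surviving edge.
    if u is None:
        while edge_list:
            i = random.randrange(len(edge_list))
            a, b = edge_list[i]
            if b in adj[a]:
                u, v = a, b
                break
            edge_list[i] = edge_list[-1]   # lazily drop edges that no longer exist
            edge_list.pop()
        if u is None:
            break                          # graph is empty
    matching.append((u, v))
    delete(u, pendants)
    delete(v, pendants)

print("Karp-Sipser matching size:", len(matching), "of at most", n // 2)
```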
APA, Harvard, Vancouver, ISO, and other styles
16

Ye, Yinna. "PROBABILITÉ DE SURVIE D'UN PROCESSUS DE BRANCHEMENT DANS UN ENVIRONNEMENT ALÉATOIRE MARKOVIEN." Phd thesis, Université François Rabelais - Tours, 2011. http://tel.archives-ouvertes.fr/tel-00605751.

Full text
Abstract:
The aim of this thesis is to study the survival probability of a branching process in a Markovian random environment and to extend to this setting the results known for independent and identically distributed random environments. The core of the study relies on local limit theorems for a centred random walk (S_n)_{n≥0} on R with Markovian increments and for (m_n)_{n≥0}, where m_n = min(0, S_1, ..., S_n). To treat the case of a Markovian random environment, we first develop local limit theorems for a real-valued semi-Markov chain, improving some results already known and originally developed by E. L. Presman (see [22] and [23]). We then use these results to study the asymptotic behaviour of the survival probability of a critical branching process in a Markovian random environment. The main results of this thesis were announced in the Comptes Rendus de l'Académie des Sciences ([21]). A more detailed article has been submitted for publication in the Journal of Theoretical Probability. In this thesis, we state these theorems precisely and give their proofs in detail.
APA, Harvard, Vancouver, ISO, and other styles
17

Ba, Mamadou. "Processus d'exploration, arbres binaires aléatoires avec ou sans interaction et théorème de Ray-Knight généralisé." Thesis, Aix-Marseille, 2012. http://www.theses.fr/2012AIXM4710.

Full text
Abstract:
In this thesis, we study connections between exploration processes and random trees, with or without interaction, from which we deduce extensions of the Ray-Knight theorem. In the first part, we describe a bijection between exploration processes and binary trees. We show that the tree obtained under the path of an exploration process whose local maxima and minima occur at rates lambda and mu, respectively, is a binary Galton-Watson tree with birth rate mu and death rate lambda. From this correspondence, we establish a discrete Ray-Knight representation of the population size process of a Galton-Watson tree in terms of the local time of the exploration process associated with this tree. After renormalizing the parameters, we deduce from this discrete approximation, by a limiting argument, a generalized Ray-Knight theorem giving a representation in law of a Feller branching process in terms of the local time of Brownian motion reflected at zero with a linear drift. In the second part, we consider a population model with competition defined by a polynomial function f(x) = x^alpha, alpha > 0, starting from m ancestors at time 0. We study the effect of the competition on the height and the length of the forest of genealogical trees as m tends to infinity. We show that the expectation of the height is finite if alpha > 1 and infinite otherwise, while the expectation of the length is finite if alpha > 2 and infinite otherwise. In the last part, we consider a population model with interaction defined by a more general nonlinear function f.
APA, Harvard, Vancouver, ISO, and other styles
18

Xu, Chongyuan. "Markov denumerable process and queue theory." Thesis, University of Liverpool, 2013. http://livrepository.liverpool.ac.uk/12573/.

Full text
Abstract:
In this thesis, we study a modified Markovian batch-arrival and bulk-service queue with finite-state-dependent control. We first consider the stopped batch-arrival and bulk-service queue process Q∗, which is the process with the state-dependent control restricted. After we obtain the expression of the Q∗-resolvent, the extinction probability and the mean extinction time are explored. Then, we apply a decomposition theorem to recover our initial queueing model from the stopped queue process, that is, to find the expression of the Q-resolvent. After that, the criteria for recurrence and ergodicity are explored, and the generating function of the equilibrium distribution is obtained. Additionally, the Laplace transform of the mean queue length is presented. The hitting time behaviour, including the hitting probability and the hitting time distribution, is also established. Furthermore, the busy period distribution is obtained through its Laplace transform. To conclude the discussion of the queue properties, a special case of our queueing model with m = 3 is discussed. Furthermore, we consider the decay parameter and decay properties of our initial queue process. First, we similarly consider the stopped queue process Q∗. Based on its q-matrix, the exact value of the decay parameter λC is obtained theoretically. Then, we apply this result back to our initial queue model and find the decay parameter of our initial queueing model. More specifically, we prove that the decay parameter can be expressed exactly. After that, under the assumption that Q is transient, the criteria for λC-recurrence are established. For λC-positive recurrent examples, the generating functions of the λC-invariant measure and vector are explored. Finally, a simple example is provided to end this thesis.
APA, Harvard, Vancouver, ISO, and other styles
19

Rezaie, Reza. "Gaussian Conditionally Markov Sequences: Theory with Application." ScholarWorks@UNO, 2019. https://scholarworks.uno.edu/td/2679.

Full text
Abstract:
Markov processes have been widely studied and used for modeling problems. A Markov process has two main components (i.e., an evolution law and an initial distribution). Markov processes are not suitable for modeling some problems, for example, the problem of predicting a trajectory with a known destination. Such a problem has three main components: an origin, an evolution law, and a destination. The conditionally Markov (CM) process is a powerful mathematical tool for generalizing the Markov process. One class of CM processes, called $CM_L$, fits the above components of trajectories with a destination. The CM process combines the Markov property and conditioning. The CM process has various classes that are more general and powerful than the Markov process, are useful for modeling various problems, and possess many Markov-like attractive properties. Reciprocal processes were introduced in connection to a problem in quantum mechanics and have been studied for years. But the existing viewpoint for studying reciprocal processes is not revealing and may lead to complicated results which are not necessarily easy to apply. We define and study various classes of Gaussian CM sequences, obtain their models and characterizations, study their relationships, demonstrate their applications, and provide general guidelines for applying Gaussian CM sequences. We develop various results about Gaussian CM sequences to provide a foundation and tools for general application of Gaussian CM sequences including trajectory modeling and prediction. We initiate the CM viewpoint to study reciprocal processes, demonstrate its significance, obtain simple and easy to apply results for Gaussian reciprocal sequences, and recommend studying reciprocal processes from the CM viewpoint. For example, we present a relationship between CM and reciprocal processes that provides a foundation for studying reciprocal processes from the CM viewpoint. Then, we obtain a model for nonsingular Gaussian reciprocal sequences with white dynamic noise, which is easy to apply. Also, this model is extended to the case of singular sequences and its application is demonstrated. A model for singular sequences has not been possible for years based on the existing viewpoint for studying reciprocal processes. This demonstrates the significance of studying reciprocal processes from the CM viewpoint.
APA, Harvard, Vancouver, ISO, and other styles
20

Longla, Martial. "Modeling dependence and limit theorems for Copula-based Markov chains." University of Cincinnati / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1367944672.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Baxter, Martin William. "Discounted functionals of Markov processes." Thesis, University of Cambridge, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.309008.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Möllering, Karin. "Inventory rationing : a new modeling approach using Markov chain theory /." Köln : Kölner Wiss.-Verl, 2007. http://deposit.d-nb.de/cgi-bin/dokserv?id=2942052&prov=M&dok_var=1&dok_ext=htm.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Petrides, Andreas. "Advances in the stochastic and deterministic analysis of multistable biochemical networks." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/279059.

Full text
Abstract:
This dissertation is concerned with the potential multistability of protein concentrations in the cell that can arise in biochemical networks. That is, situations where one protein, or a family of proteins, may sit at one of two or more different steady-state concentrations in otherwise identical cells, despite being in the same environment. Models of multisite protein phosphorylation have shown that this mechanism is able to exhibit unlimited multistability. Nevertheless, these models have not considered enzyme docking, the binding of the enzymes to one or more substrate docking sites, which are separate from the motif that is chemically modified. Enzyme docking is, however, increasingly being recognised as a method to achieve specificity in protein phosphorylation and dephosphorylation cycles. Most models in the literature for these systems are deterministic, i.e. based on ordinary differential equations, despite the fact that these are accurate only in the limit of large molecule numbers. For small molecule numbers, a discrete probabilistic (stochastic) approach is more suitable. However, when compared to the tools available in the deterministic framework, the tools available for stochastic analysis offer inadequate visualisation and intuition. We first try to bridge that gap by developing three tools: a) a discrete 'nullclines' construct applicable to stochastic systems, an analogue of the ODE nullclines; b) a stochastic tool based on a Weakly Chained Diagonally Dominant M-matrix formulation of the Chemical Master Equation; and c) an algorithm that is able to construct non-reversible Markov chains with desired stationary probability distributions. We subsequently prove that, for multisite protein phosphorylation and similar models, in the deterministic domain, enzyme docking and the consequent substrate enzyme-sequestration must inevitably limit the extent of multistability, ultimately to one steady state. In contrast, bimodality can be obtained in the stochastic domain even in situations where bistability is not possible for large molecule numbers. We finally extend our results to cases where we have an autophosphorylating kinase, as for example is the case with $Ca^{2+}$/calmodulin-dependent protein kinase II (CaMKII), a key enzyme in synaptic plasticity.
APA, Harvard, Vancouver, ISO, and other styles
24

Austin, Stephen Christopher. "Hidden Markov models for automatic speech recognition." Thesis, University of Cambridge, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.292913.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Simmons, Dayton C. (Dayton Cooper). "Applications of Rapidly Mixing Markov Chains to Problems in Graph Theory." Thesis, University of North Texas, 1993. https://digital.library.unt.edu/ark:/67531/metadc277740/.

Full text
Abstract:
In this dissertation, the results of Jerrum and Sinclair on the conductance of Markov chains are used to prove that random walks on almost all generalized Steinhaus graphs are rapidly mixing, and an algorithm for the uniform generation of 2-(4k+1, 4, 1) cyclic Mendelsohn designs is developed.
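For reference, the Jerrum-Sinclair machinery alluded to here bounds the spectral gap of a reversible, ergodic chain with stationary distribution $\pi$ in terms of its conductance (standard statement, not specific to this dissertation):

```latex
\[
\Phi \;=\; \min_{S \,:\, 0 < \pi(S) \le 1/2}
        \frac{\sum_{x \in S,\; y \notin S} \pi(x)\,P(x,y)}{\pi(S)},
\qquad
\frac{\Phi^2}{2} \;\le\; 1 - \lambda_2 \;\le\; 2\,\Phi,
\]
```

where $\lambda_2$ is the second-largest eigenvalue of the transition matrix $P$; a conductance bounded away from zero therefore yields rapid mixing (with the usual laziness assumption to control the smallest eigenvalue).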
APA, Harvard, Vancouver, ISO, and other styles
26

Lari, Karim. "Applications of Hidden Markov Grammars to speech recognition." Thesis, University of Cambridge, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.358676.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Huang, Xuedong. "Semi-continuous hidden Markov models for speech recognition." Thesis, University of Edinburgh, 1989. http://hdl.handle.net/1842/14125.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Zhu, Jinxia. "Ruin theory under Markovian regime-switching risk models." Click to view the E-thesis via HKUTO, 2008. http://sunzi.lib.hku.hk/hkuto/record/b40203980.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Levitz, Michael. "Separation, completeness, and Markov properties for AMP chain graph models /." Thesis, Connect to this title online; UW restricted, 2000. http://hdl.handle.net/1773/9564.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Cecchin, Alekos. "Finite State N-player and Mean Field Games." Doctoral thesis, Università degli studi di Padova, 2018. http://hdl.handle.net/11577/3424949.

Full text
Abstract:
Mean field games represent limit models for symmetric non-zero-sum dynamic games when the number N of players tends to infinity. In this thesis, we study mean field games and the corresponding N-player games in continuous time over a finite time horizon, where the position of each agent belongs to a finite state space. As opposed to previous works on finite state mean field games, we use a probabilistic representation of the system dynamics in terms of stochastic differential equations driven by Poisson random measures. Firstly, under mild assumptions, we prove existence of solutions to the mean field game in relaxed open-loop as well as relaxed feedback controls. Relying on the probabilistic representation and a coupling argument, we show that mean field game solutions provide symmetric ε_N-Nash equilibria for the N-player game, both in open-loop and in feedback strategies (not relaxed), with ε_N ≤ constant/√N. Under stronger assumptions, we also find solutions of the mean field game in ordinary feedback controls and prove uniqueness either in the case of a small time horizon or under monotonicity. Then, assuming that players control just their transition rates from state to state, we show the convergence, as N tends to infinity, of the N-player game to a limiting dynamics given by a finite state mean field game system made of two coupled forward-backward ODEs. We exploit the so-called master equation, which in this finite-dimensional framework is a first-order PDE on the simplex of probability measures. If the master equation possesses a unique regular solution, then such a solution can be used to prove the convergence of the value functions of the N players and of the feedback Nash equilibria, and a propagation of chaos property for the associated optimal trajectories. A sufficient condition for the required regularity of the master equation is given by the monotonicity assumptions. Further, we employ the convergence results to establish a Central Limit Theorem and a Large Deviation Principle for the evolution of the N-player optimal empirical measures. Finally, we analyze an example with {−1, 1} as state space and anti-monotonous cost, and show that the mean field game has exactly three solutions. The Nash equilibrium is always unique and we prove that the N-player game always admits a limit: it selects one mean field game solution, except in one critical case, so there is propagation of chaos. The value functions also converge and the limit is the entropy solution to the master equation, which for two-state models can be written as a scalar conservation law. Moreover, viewing the mean field game system as the necessary conditions for optimality of a deterministic control problem, we show that the N-player game selects the optimum of this problem when it is unique.
APA, Harvard, Vancouver, ISO, and other styles
31

Knoch, Fabian [Verfasser]. "Nonequilibrium Markov state modeling : theory and application / Fabian Knoch." Mainz : Universitätsbibliothek Mainz, 2017. http://d-nb.info/1145198554/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Meitz, Mika. "Five contributions to econometric theory and the econometrics of ultra-high-frequency data." Doctoral thesis, Stockholm : Economic Research Institute, Stockholm School of Economics [Ekonomiska forskningsinstitutet vid Handelshögskolan i Stockholm] (EFI), 2006. http://www2.hhs.se/EFI/summary/694.htm.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Huang, Wenzong. "Spatial queueing systems and reversible markov processes." Diss., Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/24871.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Martin, Russell Andrew. "Paths, sampling, and markov chain decomposition." Diss., Georgia Institute of Technology, 2001. http://hdl.handle.net/1853/29383.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Van, Gael Jurgen. "Bayesian nonparametric hidden Markov models." Thesis, University of Cambridge, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.610196.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Wortman, M. A. "Vacation queues with Markov schedules." Diss., Virginia Polytechnic Institute and State University, 1988. http://hdl.handle.net/10919/54468.

Full text
Abstract:
Vacation systems represent an important class of queueing models having application in both computer communication systems and integrated manufacturing systems. By specifying an appropriate server scheduling discipline, vacation systems are easily particularized to model many practical situations where the server's effort is divided between primary and secondary customers. A general stochastic framework that subsumes a wide variety of server scheduling disciplines for the M/GI/1/L vacation system is developed. Here, a class of server scheduling disciplines, called Markov schedules, is introduced. It is shown that the queueing behavior of M/GI/1/L vacation systems having Markov schedules is characterized by a queue length/server activity marked point process that is Markov renewal and a joint queue length/server activity process that is semi-regenerative. These processes allow characterization of both the transient and ergodic queueing behavior of vacation systems as seen immediately following customer service completions, immediately following server vacation completions, and at arbitrary times. The state space of the joint queue length/server activity process can be systematically particularized so as to model most server scheduling disciplines appearing in the literature and a number of disciplines that do not appear in the literature. The Markov renewal nature of the queue length/server activity marked point process yields important results that offer convenient computational formulae. These computational formulae are employed to investigate the ergodic queue length of several important vacation systems; a number of new results are introduced. In particular, the M/GI/1 vacation queue with limited batch service is investigated for the first time, and the probability generating functions for queue length as seen immediately following service completions, immediately following vacation completions, and at arbitrary times are developed.
APA, Harvard, Vancouver, ISO, and other styles
37

Möllering, Karin. "Inventory rationing a new modeling approach using Markov chain theory." Köln Kölner Wiss.-Verl, 2006. http://deposit.d-nb.de/cgi-bin/dokserv?id=2942052&prov=M&dok_var=1&dok_ext=htm.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Greenberg, Sam. "Random sampling of lattice configurations using local Markov chains." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/28090.

Full text
Abstract:
Thesis (M. S.)--Mathematics, Georgia Institute of Technology, 2009. Committee Chair: Randall, Dana; Committee Members: Heitsch, Christine; Mihail, Milena; Trotter, Tom; Vigoda, Eric.
APA, Harvard, Vancouver, ISO, and other styles
39

Wong, Georges. "Improved speech hidden Markov modelling via an expectation-maximization framework." Thesis, University of Cambridge, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.259544.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Hagemann, Klaas [Verfasser]. "Doob-Martin-Theorie diskreter Markov-Ketten : Struktur und Anwendungen / Klaas Hagemann." Hannover : Technische Informationsbibliothek (TIB), 2017. http://d-nb.info/1128666677/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Ladelli, Lucia. "Théorèmes limites pour les chaînes de Markov : application aux algorithmes stochastiques." Paris 6, 1989. http://www.theses.fr/1989PA066283.

Full text
Abstract:
This thesis consists of four articles and a note in the Comptes Rendus de l'Académie des Sciences concerning, on the one hand, the study of the behaviour of a class of stochastic algorithms and, on the other hand, a central limit theorem for Markovian stochastic processes arising in certain statistical problems. Concerning the algorithms, the focus is on adaptive algorithms with Markovian dynamics, and the results obtained follow the line of work of A. Benveniste, M. Métivier and P. Priouret. Concerning the asymptotic behaviour of Markov chains, for the models we studied we obtain: in the ergodic case, asymptotic normality; in the null recurrent case, a limit process which is a Brownian motion time-changed by a Mittag-Leffler process independent of the Brownian motion.
APA, Harvard, Vancouver, ISO, and other styles
42

Humala, Acuña Alberto. "Markov switching modelling of interest rate pass-through." Thesis, University of Warwick, 2005. http://wrap.warwick.ac.uk/34676/.

Full text
Abstract:
The first paper, "Interest rate pass-through and financial crises: do switching regimes matter? The case of Argentina", analyses the dynamic relationship between a money market (interbank) rate and different short-term lending rates by measuring their pass-through. Neither linear single-equation modelling nor linear multi-equation systems capture this relationship efficiently. Several financial crises alter the speed and degree of response to interbank rate shocks. Hence, a Markov switching VAR model shows that the pass-through increases considerably for all market interest rates in a high-volatility scenario. The model correctly identifies the periods in which regime shifts occur, and associates them with financial crises. The second paper, "Modelling interest rate pass-through with endogenous switching regimes in Argentina", extends the scope of the Markov switching modelling by including time-varying transition probabilities. Interest rate spreads are used as leading indicators. The model allows devaluation expectations and country risk (measured by rate spreads) to signal regime switching. Estimation results suggest that the pass-through tends to overshoot with financial instability, but to decrease if that condition is sufficiently large and long-lived. Likewise, results show a quite heterogeneous credit market, with a highly efficient transmission mechanism in the corporate segment, but considerably less in the consumer segment. The final paper, "Regime switching in interest rate pass-through and dynamic bank modelling with risks", builds a theoretical model of dynamic bank optimisation, which provides a rationale for regime-switching behaviour in the interest rate pass-through. It is shown that a regime-switching interbank rate induces a nonlinear behaviour in lending and deposit rates and (by further introducing interbank-like regime-switching risk premiums) in the pass-through. Thus, the pass-through process is consistent with a nonlinear behaviour even if there are no asymmetric adjustment costs in the response to interbank rate shocks. An empirical application to France and Germany provides results that support these conclusions.
APA, Harvard, Vancouver, ISO, and other styles
43

McGillivray, Ivor Edward. "Some applications of Dirichlet forms in probability theory." Thesis, University of Cambridge, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.241102.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Zhu, Jinxia, and 朱金霞. "Ruin theory under Markovian regime-switching risk models." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2008. http://hub.hku.hk/bib/B40203980.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Koh, You Beng, and 辜有明. "Bayesian analysis in Markov regime-switching models." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2012. http://hub.hku.hk/bib/B48521644.

Full text
Abstract:
van Norden and Schaller (1996) develop a standard regime-switching model to study stock market crashes. In their seminal paper, they use maximum likelihood estimation to estimate the model parameters and show that a two-regime speculative bubble model has significant explanatory power for stock market returns in some observed periods. However, it is well known that maximum likelihood estimation can lead to bias if the model contains multiple local maximum points or the estimation starts with poor initial values. Therefore, a better approach to estimating the parameters of regime-switching models is needed. One possible way is the Bayesian Gibbs-sampling approach, whose advantages are discussed in Albert and Chib (1993). In this thesis, Bayesian Gibbs-sampling estimation is examined using two U.S. stock datasets: the CRSP monthly value-weighted index from Jan 1926 to Dec 2010 and the S&P 500 index from Jan 1871 to Dec 2010. It is found that the Gibbs-sampling estimation explains the U.S. data better than the maximum likelihood estimation. Moreover, the existing standard regime-switching speculative behaviour model is extended by considering time-varying transition probabilities governed by a first-order Markov chain. It is shown that the time-varying first-order transition probabilities of Markov regime-switching speculative rational bubbles can lead stock market returns to have a second-order Markov regime. In addition, a Bayesian Gibbs-sampling algorithm is developed to estimate the parameters in the second-order two-state Markov regime-switching model.
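The thesis follows the multi-move Gibbs sampler of Albert and Chib; as a much smaller illustration of the same idea, the sketch below runs a single-move Gibbs sampler for a two-state Markov regime-switching model with Gaussian observations, known variance, conjugate normal priors on the regime means, and Beta priors on the transition probabilities. All data and parameter values are synthetic and unrelated to the thesis's stock datasets.

```python
import numpy as np

rng = np.random.default_rng(7)

# --- synthetic data from a two-regime Gaussian model (illustrative only) ---
T, sigma = 300, 1.0
P_true = np.array([[0.97, 0.03], [0.05, 0.95]])
mu_true = np.array([-1.0, 2.0])
s = np.zeros(T, dtype=int)
for t in range(1, T):
    s[t] = rng.choice(2, p=P_true[s[t - 1]])
y = mu_true[s] + sigma * rng.standard_normal(T)

# --- single-move Gibbs sampler (deliberately small run) ---
n_iter, tau2 = 300, 10.0                # prior: mu_k ~ N(0, tau2)
mu = np.array([-0.5, 0.5])
P = np.full((2, 2), 0.5)
z = rng.integers(0, 2, size=T)

for it in range(n_iter):
    # 1. sample each hidden state given its neighbours and the observation
    for t in range(T):
        logp = -0.5 * ((y[t] - mu) / sigma) ** 2
        if t > 0:
            logp += np.log(P[z[t - 1]])
        if t < T - 1:
            logp += np.log(P[:, z[t + 1]])
        w = np.exp(logp - logp.max())
        z[t] = rng.choice(2, p=w / w.sum())
    # 2. sample the regime means (conjugate normal update, known sigma)
    for k in range(2):
        n_k = np.sum(z == k)
        prec = 1.0 / tau2 + n_k / sigma**2
        mean = (y[z == k].sum() / sigma**2) / prec
        mu[k] = mean + rng.standard_normal() / np.sqrt(prec)
    # 3. sample transition probabilities from Beta posteriors of the counts
    for k in range(2):
        n_stay = np.sum((z[:-1] == k) & (z[1:] == k))
        n_move = np.sum((z[:-1] == k) & (z[1:] != k))
        P[k, k] = rng.beta(1 + n_stay, 1 + n_move)
        P[k, 1 - k] = 1 - P[k, k]

print("final draw of regime means:", mu, " true:", mu_true)
```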
APA, Harvard, Vancouver, ISO, and other styles
46

Ghenciu, Eugen Andrei. "Dimension spectrum and graph directed Markov systems." Thesis, University of North Texas, 2006. https://digital.library.unt.edu/ark:/67531/metadc5226/.

Full text
Abstract:
In this dissertation we study graph directed Markov systems (GDMS) and limit sets associated with these systems. Given a GDMS S, by the Hausdorff dimension spectrum of S we mean the set of all positive real numbers which are the Hausdorff dimension of the limit set generated by a subsystem of S. We say that S has full Hausdorff dimension spectrum (full HD spectrum) if the dimension spectrum is the interval [0, h], where h is the Hausdorff dimension of the limit set of S. We give necessary conditions for a finitely primitive conformal GDMS to have full HD spectrum. A GDMS is said to be regular if the Hausdorff dimension of its limit set is also the zero of the topological pressure function. We show that every number in the Hausdorff dimension spectrum is the Hausdorff dimension of a regular subsystem. In the particular case of a conformal iterated function system we show that the Hausdorff dimension spectrum is compact. We introduce several new systems: the nearest integer GDMS, the Gauss-like continued fraction system, and the Rényi-like continued fraction system. We prove that these systems have full HD spectrum. Special attention is given to the backward continued fraction system that we introduce, and we prove that it has full HD spectrum. This system turns out to be a parabolic iterated function system, which makes the analysis more involved. Several examples of systems not having full HD spectrum have been constructed in the past. We give an example of such a system whose limit set has positive Lebesgue measure.
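For orientation (a standard special case, not a result of the dissertation): for a finite iterated function system of similarities with contraction ratios $r_1,\dots,r_m$ satisfying the open set condition, the topological pressure reduces to Moran's equation, and the Hausdorff dimension $h$ of the limit set is its unique zero:

```latex
\[
P(t) \;=\; \log \sum_{i=1}^{m} r_i^{\,t},
\qquad
P(h) \;=\; 0
\;\Longleftrightarrow\;
\sum_{i=1}^{m} r_i^{\,h} \;=\; 1 .
\]
```

The regularity notion in the abstract generalizes exactly this situation: a GDMS is regular when the Hausdorff dimension of its limit set coincides with the zero of the associated pressure function.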
APA, Harvard, Vancouver, ISO, and other styles
47

Varenius, Malin. "Using Hidden Markov Models to Beat OMXS30." Thesis, Uppsala universitet, Tillämpad matematik och statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-409780.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Kargoll, Boris. "On the theory and application of model misspecification tests in geodesy /." Bonn : Igg, 2008. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=016737999&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Leppämäki, Mikko. "An economic theory of collusion, blackmail and whistle-blowing in organisations." Thesis, London School of Economics and Political Science (University of London), 1997. http://etheses.lse.ac.uk/2239/.

Full text
Abstract:
This thesis examines informal and corruptive activities agents may pursue within organisations. Chapter 1 is a brief introduction to the general theme and the related literature. Chapter 2 develops a simple theory of non-monetary collusion, where agents collude by exchanging favours. It examines the optimal use of supervisory information in a simple hierarchy under potential collusion. It is shown that when only the supervisor's information about the agent is used, collusion does not arise, since favours can not be exchanged. Secondly, it is analysed whether the agent's information about his superior should be used. In this case collusion is possible, and there is an interesting trade-off between the benefits of using additional information and the costs of collusion. It is then shown that sometimes the principal may be better off when using less than all available information. Chapter 3 considers task assignment and whistle-blowing as measures a principal may use to break collusion. The principal's response to potential collusion is to allocate less time to monitoring, and he breaks collusion with money. It is shown that the principal may also break collusion by hiring a third worker, and the decision of how to break collusion optimally is endogenously determined. Breaking collusion by task assignment is costly, and therefore we consider whistle-blowing as a collusion-breaking device. It provides the principal strictly higher welfare than the collusion-proof solution. It is also shown that under reasonable conditions, the collusion-free outcome will be achieved with no further cost. Chapter 4 develops a model of blackmail, where a piece of information an agent prefers to keep private may facilitate blackmail when another agent, namely a blackmailer, threatens to reveal that information. The crucial feature of the blackmail game is the commitment problem on the blackmailer's side. The blackmailer cannot commit not to come back in future to demand more despite the payments received in the past. The chapter outlines conditions under which successful extortion may arise, and shows that there is a unique Markov Perfect Equilibrium, which gives a precise prediction of how much money the blackmailer is able to extort from the victim. It is also shown that the blackmailer receives a blackmail premium that compensates the blackmailer for not taking money from the victim and revealing information anyway.
APA, Harvard, Vancouver, ISO, and other styles
50

Magalhaes, Marcos N. "Queues with a Markov renewal service process." Diss., Virginia Polytechnic Institute and State University, 1988. http://hdl.handle.net/10919/53582.

Full text
Abstract:
In the present work, we study a queue with a Markov renewal service process. The objective is to model systems where different customers request different services and there is a setup time required to adjust from one type of service to the next. The arrival is a Poisson process independent of the service. After arrival, all the customers will be attended in order of arrival. Immediately before a service starts, the type of the next customer is chosen using a finite, irreducible and aperiodic Markov chain P. There is only one server and the service time has a distribution function F_ij, where i and j are the types of the previous and current customer in service, respectively. This model will be called M/MR/1. Embedding at departure epochs, we characterize the queue length and the type of customer as a Markov renewal process. We study a special case where F_ij is exponential with parameter μ_ij. We prove that the departure is a renewal process if and only if μ_ij = μ for all i, j ∈ E. Furthermore, we show that this renewal process is a Poisson process. The type-departure process is extensively studied through the respective counting processes. The cross-covariance and the cross-correlation are computed and numerical results are shown. Finally, we introduce several expressions to study the interdependence among the type-departure processes in the general case, i.e. when the distribution function F_ij does not have any special form.
APA, Harvard, Vancouver, ISO, and other styles