Dissertations / Theses on the topic 'Continuous time Markov chain'

Consult the top 50 dissertations / theses for your research on the topic 'Continuous time Markov chain.'


1

Rao, V. A. P. "Markov chain Monte Carlo for continuous-time discrete-state systems." Thesis, University College London (University of London), 2012. http://discovery.ucl.ac.uk/1349490/.

Full text
Abstract:
A variety of phenomena are best described using dynamical models which operate on a discrete state space and in continuous time. Examples include Markov (and semi-Markov) jump processes, continuous-time Bayesian networks, renewal processes and other point processes. These continuous-time, discrete-state models are ideal building blocks for Bayesian models in fields such as systems biology, genetics, chemistry, computing networks, human-computer interactions etc. However, a challenge towards their more widespread use is the computational burden of posterior inference; this typically involves approximations like time discretization and can be computationally intensive. In this thesis, we describe a new class of Markov chain Monte Carlo methods that allow efficient computation while still being exact. The core idea is an auxiliary variable Gibbs sampler that alternately resamples a random discretization of time given the state-trajectory of the system, and then samples a new trajectory given this discretization. We introduce this idea by relating it to a classical idea called uniformization, and use it to develop algorithms that outperform the state-of-the-art for models based on the Markov jump process. We then extend the scope of these samplers to a wider class of models such as nonstationary renewal processes, and semi-Markov jump processes. By developing a more general framework beyond uniformization, we remedy various limitations of the original algorithms, allowing us to develop MCMC samplers for systems with infinite state spaces, unbounded rates, as well as systems indexed by more general continuous spaces than time.
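The uniformization construction that these samplers build on can be sketched in a few lines. The snippet below (a minimal illustration with a made-up 3-state generator, not the thesis' Gibbs sampler) simulates a Markov jump process by drawing candidate event times from a dominating Poisson process and thinning them with the discrete-time kernel B = I + Q/Ω.

```python
import numpy as np

def sample_mjp_uniformization(Q, x0, T, seed=0):
    """Simulate one Markov jump process path on [0, T] via uniformization:
    draw candidate event times from a Poisson process with rate Omega and
    thin them with the discrete-time kernel B = I + Q/Omega."""
    rng = np.random.default_rng(seed)
    n = Q.shape[0]
    omega = 1.1 * np.max(-np.diag(Q))            # dominating rate, Omega >= max_i |q_ii|
    B = np.eye(n) + Q / omega                     # rows of B sum to one
    k = rng.poisson(omega * T)                    # number of candidate event times
    cand = np.sort(rng.uniform(0.0, T, size=k))   # candidate times on [0, T]
    times, states, x = [0.0], [x0], x0
    for t in cand:
        x_new = rng.choice(n, p=B[x])             # self-transitions are "virtual" jumps
        if x_new != x:
            times.append(t)
            states.append(x_new)
            x = x_new
    return np.array(times), np.array(states)

# toy 3-state generator, purely illustrative
Q = np.array([[-1.0, 0.6, 0.4],
              [0.3, -0.8, 0.5],
              [0.2, 0.7, -0.9]])
print(sample_mjp_uniformization(Q, x0=0, T=5.0))
```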
2

Alharbi, Randa. "Bayesian inference for continuous time Markov chains." Thesis, University of Glasgow, 2019. http://theses.gla.ac.uk/40972/.

Full text
Abstract:
Continuous time Markov chains (CTMCs) are a flexible class of stochastic models that have been employed in a wide range of applications, from timing of computer protocols, through analysis of reliability in engineering, to models of biochemical networks in molecular biology. These models are defined as a system of states with continuous-time transitions between them. Extensive work has historically been performed to enable convenient and flexible definition, simulation, and analysis of continuous time Markov chains. This thesis considers the problem of Bayesian parameter inference on these models and investigates computational methodologies to enable such inference. Bayesian inference over continuous time Markov chains is particularly challenging as the likelihood cannot be evaluated in closed form. To overcome the statistical problems associated with evaluation of the likelihood, advanced algorithms based on Monte Carlo have been used to enable Bayesian inference without explicit evaluation of the likelihoods. An additional class of approximation methods, known as approximate Bayesian computation (ABC), has been suggested to handle such inference problems. Novel Markov chain Monte Carlo (MCMC) approaches were recently proposed to allow exact inference. The contribution of this thesis lies in a discussion of the techniques and challenges in implementing these inference methods and an extensive comparison of these approaches on two case studies in systems biology. We investigate how the algorithms can be designed and tuned to work on CTMC models and to achieve an accurate estimate of the posteriors with reasonable computational cost. Through this comparison, we investigate how to avoid some practical issues with accuracy and computational cost, for example by selecting an optimal proposal distribution and introducing a resampling step within the sequential Monte Carlo method. Within the implementation of the ABC methods we investigate the use of an adaptive tolerance schedule to maximise the efficiency of the algorithm and to reduce the computational cost.
3

Witte, Hugh Douglas. "Markov chain Monte Carlo and data augmentation methods for continuous-time stochastic volatility models." Diss., The University of Arizona, 1999. http://hdl.handle.net/10150/283976.

Full text
Abstract:
In this paper we exploit some recent computational advances in Bayesian inference, coupled with data augmentation methods, to estimate and test continuous-time stochastic volatility models. We augment the observable data with a latent volatility process which governs the evolution of the data's volatility. The level of the latent process is estimated at finer increments than the data are observed in order to derive a consistent estimator of the variance over each time period the data are measured. The latent process follows a law of motion which has either a known transition density or an approximation to the transition density that is an explicit function of the parameters characterizing the stochastic differential equation. We analyze several models which differ with respect to both their drift and diffusion components. Our results suggest that for two size-based portfolios of U.S. common stocks, a model in which the volatility process is characterized by nonstationarity and constant elasticity of instantaneous variance (with respect to the level of the process) greater than 1 best describes the data. We show how to estimate the various models, undertake the model selection exercise, update posterior distributions of parameters and functions of interest in real time, and calculate smoothed estimates of within sample volatility and prediction of out-of-sample returns and volatility. One nice aspect of our approach is that no transformations of the data or the latent processes, such as subtracting out the mean return prior to estimation, or formulating the model in terms of the natural logarithm of volatility, are required.
4

Dai Pra, Paolo, Pierre-Yves Louis, and Ida Minelli. "Monotonicity and complete monotonicity for continuous-time Markov chains." Universität Potsdam, 2006. http://opus.kobv.de/ubp/volltexte/2006/766/.

Full text
Abstract:
We analyze the notions of monotonicity and complete monotonicity for Markov chains in continuous time, taking values in a finite partially ordered set. As in discrete time, the two notions are not equivalent.
However, we show that there are partially ordered sets for which monotonicity and complete monotonicity coincide in continuous time but not in discrete time.
5

Keller, Peter, Sylvie Roelly, and Angelo Valleriani. "On time duality for quasi-birth-and-death processes." Universität Potsdam, 2012. http://opus.kobv.de/ubp/volltexte/2012/5697/.

Full text
Abstract:
We say that (weak/strong) time duality holds for continuous-time quasi-birth-and-death processes if, starting from a fixed level, the first hitting time of the next upper level and the first hitting time of the next lower level have the same distribution. We present a criterion for time duality in the case where transitions from one level to another have to pass through a given single state, the so-called bottleneck property. We also prove that a weaker form of reversibility, called balanced under permutation, is sufficient for time duality to hold. We then discuss the general case.
6

Ayana, Haimanot, and Sarah Al-Swej. "A review of two financial market models: the Black–Scholes–Merton and the Continuous-time Markov chain models." Thesis, Mälardalens högskola, Akademin för utbildning, kultur och kommunikation, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-55417.

Full text
Abstract:
The objective of this thesis is to review two popular mathematical models of the financial derivatives market: the classical Black–Scholes–Merton (BSM) model and the continuous-time Markov chain (CTMC) model. We study the CTMC model as presented by the mathematician Ragnar Norberg. The thesis demonstrates how the fundamental results of financial engineering work in both models. To review the two models, we consider the construction of the main financial market components and the approach used for pricing contingent claims. In addition, the steps used in solving the first-order partial differential equations in both models are explained. The main similarity between the models is that the financial market components are the same, their contingent claims are similar, and the driving processes of both models exploit the Markov property. One of the differences observed is that the driving process is Brownian motion in the BSM model and a Markov chain in the CTMC model. We believe that the thesis can motivate other students and researchers to undertake a deeper and more advanced comparative study of the two models.
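For context on the first of the two models, a minimal sketch of the standard Black–Scholes–Merton European call price (textbook formula with illustrative parameter values, not code from the thesis):

```python
from math import log, sqrt, exp
from statistics import NormalDist

def bs_call(S0, K, r, sigma, T):
    """Black-Scholes-Merton price of a European call on a non-dividend-paying asset."""
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = NormalDist().cdf
    return S0 * N(d1) - K * exp(-r * T) * N(d2)

# illustrative inputs: spot 100, strike 100, 2% rate, 25% volatility, 1 year
print(bs_call(S0=100, K=100, r=0.02, sigma=0.25, T=1.0))   # ~10.9
```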
7

Lo, Chia Chun. "Application of continuous time Markov chain models : option pricing, term structure of interest rates and stochastic filtering." Thesis, University of Essex, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.496255.

Full text
8

Sokolović, Sonja [Verfasser]. "Multigrid methods for high-dimensional, tensor structured continuous time Markov chains / Sonja Sokolović." Wuppertal : Universitätsbibliothek Wuppertal, 2017. http://d-nb.info/1135623945/34.

Full text
9

Levin, Pavel. "Computing Most Probable Sequences of State Transitions in Continuous-time Markov Systems." Thèse, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/22918.

Full text
Abstract:
Continuous-time Markov chains (CTMCs) form a convenient mathematical framework for analyzing random systems across many different disciplines. A specific research problem that is often of interest is to predict maximum-probability sequences of state transitions given initial or boundary conditions. This work shows how to solve this problem exactly through an efficient dynamic programming algorithm. We demonstrate our approach through two different applications: ranking mutational pathways of the HIV virus based on their probabilities, and determining the most probable failure sequences in complex fault-tolerant engineering systems. Even though CTMCs have been used extensively to realistically model many types of complex processes, it is often standard practice to eventually simplify the model in order to perform the state evolution analysis. As we show here, simplifying approaches can lead to inaccurate and often misleading solutions. Therefore we expect our algorithm to find a wide range of applications across different domains.
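A simplified version of the problem can be phrased as a shortest-path search: the most probable sequence of transitions of the embedded jump chain is the path minimising the sum of -log transition probabilities. The sketch below (toy generator, ignoring holding times and boundary conditions, and not the thesis' dynamic programming algorithm) illustrates this reduction.

```python
import heapq
import numpy as np

def most_probable_jump_sequence(Q, start, target):
    """Most probable transition sequence of a CTMC's embedded jump chain,
    found as a shortest path over -log transition probabilities (Dijkstra).
    A simplified sketch: holding times and boundary conditions are ignored."""
    n = Q.shape[0]
    P = Q / (-np.diag(Q))[:, None]      # embedded chain: P[i, j] = q_ij / (-q_ii)
    np.fill_diagonal(P, 0.0)
    dist, prev, heap = {start: 0.0}, {}, [(0.0, start)]
    while heap:
        d, i = heapq.heappop(heap)
        if i == target:
            break
        if d > dist.get(i, np.inf):
            continue
        for j in range(n):
            if P[i, j] > 0:
                nd = d - np.log(P[i, j])
                if nd < dist.get(j, np.inf):
                    dist[j], prev[j] = nd, i
                    heapq.heappush(heap, (nd, j))
    path, s = [target], target
    while s != start:                    # backtrack (assumes target is reachable)
        s = prev[s]
        path.append(s)
    return path[::-1], np.exp(-dist[target])

Q = np.array([[-1.0, 0.9, 0.1, 0.0],
              [0.2, -2.0, 1.3, 0.5],
              [0.1, 0.0, -1.1, 1.0],
              [0.5, 0.3, 0.2, -1.0]])    # illustrative generator, rows sum to 0
path, prob = most_probable_jump_sequence(Q, start=0, target=3)
print(path, prob)                        # e.g. [0, 1, 2, 3] with probability ~0.53
```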
10

Popp, Anton [Verfasser], and N. [Akademischer Betreuer] Bäuerle. "Risk-Sensitive Stopping Problems for Continuous-Time Markov Chains / Anton Popp. Betreuer: N. Bäuerle." Karlsruhe : KIT-Bibliothek, 2016. http://d-nb.info/1110969678/34.

Full text
11

Prezioso, Valentina. "Interest rate derivatives pricing when the short rate is a continuous time finite state Markov process." Doctoral thesis, Università degli studi di Padova, 2010. http://hdl.handle.net/11577/3421547.

Full text
Abstract:
The purpose of this work is to price interest rate derivatives under the assumption that the spot rate is a continuous-time Markov chain with a finite state space. Our model is inspired by Filipović–Zabczyk [1]: we extend their discrete-time structure to one with random times, thereby accounting for the random jumps that realistically occur in the market, and we use a technique based on a contraction operator. We are able to price zero-coupon bonds, caps and swaptions with the same approach; furthermore we present some numerical results for the pricing of these products. We finally extend the one-factor model to a multi-factor one.
12

Linzner, Dominik [Verfasser], Heinz [Akademischer Betreuer] Köppl, and Manfred [Akademischer Betreuer] Opper. "Scalable Inference in Graph-coupled Continuous-time Markov Chains / Dominik Linzner ; Heinz Köppl, Manfred Opper." Darmstadt : Universitäts- und Landesbibliothek, 2021. http://d-nb.info/1225040817/34.

Full text
13

Schuster, Johann [Verfasser]. "Towards faster numerical solution of Continuous Time Markov Chains stored by symbolic data structures / Johann Schuster." München : Verlag Dr. Hut, 2012. http://d-nb.info/1020299347/34.

Full text
14

Schneider, Olaf. "Krylov subspace methods and their generalizations for solving singular linear operator equations with applications to continuous time Markov chains." Doctoral thesis, Technische Universitaet Bergakademie Freiberg, Universitaetsbibliothek "Georgius Agricola", 2009. http://nbn-resolving.de/urn:nbn:de:bsz:105-1148840.

Full text
Abstract:
Many results on MR and OR methods for solving linear systems of equations remain valid (in slightly modified form) when the operator under consideration is not invertible. In addition to the termination behaviour characteristic of regular problems, a so-called singular breakdown can occur for a singular system; several characterisations are given for both cases. The subspace inverse, a special generalised inverse, describes the approximations produced by an MR subspace-correction method. For Krylov subspaces, the Drazin inverse plays a key role. For Krylov subspace methods it can be decided a priori whether a regular or a singular breakdown occurs. We show that a Krylov method yields a solution of the linear system for arbitrary initial guesses if and only if the index of the matrix is at most one and the system is consistent. The computation of stationary distributions of continuous-time Markov chains with finite state space is a practical task that requires the solution of such a singular linear system. The properties of the transition semigroup follow from simple assumptions by purely analytic and matrix-algebraic means. In particular, the generator matrix is a singular M-matrix of index 1. If the Markov chain is irreducible, the stationary distribution is uniquely determined.
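The application mentioned at the end, computing the stationary distribution of an irreducible finite CTMC, amounts to solving the singular system πQ = 0 together with the normalisation Σπ = 1. A dense-solve sketch (not the Krylov subspace methods developed in the thesis) might look as follows:

```python
import numpy as np

def stationary_distribution(Q):
    """Stationary distribution pi of an irreducible CTMC: solve pi Q = 0 with
    sum(pi) = 1 by appending the normalisation row to the otherwise singular
    system and solving in the least-squares sense."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])     # stack Q^T pi = 0 and 1^T pi = 1
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# illustrative 3-state generator
Q = np.array([[-0.5, 0.3, 0.2],
              [0.4, -0.9, 0.5],
              [0.1, 0.6, -0.7]])
print(stationary_distribution(Q))        # sums to 1, and pi @ Q is ~0
```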
15

Conforti, Giovanni [Verfasser], and Sylvie [Akademischer Betreuer] Roelly. "Reciprocal classes of continuous time Markov Chains / Giovanni Conforti ; Betreuer: Sylvie Roelly ; Universita degli Studi di Padova." Potsdam : Universität Potsdam, 2015. http://d-nb.info/1218399740/34.

Full text
16

Bondesson, Carl. "Modelling of Safety Concepts for Autonomous Vehicles using Semi-Markov Models." Thesis, Uppsala universitet, Signaler och System, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-353060.

Full text
Abstract:
Autonomous vehicles will soon be part of everyday life, but before they can be used commercially they need to be proven safe. The current standard for functional safety on roads, ISO 26262, does not at present cover autonomous vehicles, which is why in this project an approach using semi-Markov models is used to assess safety. A semi-Markov process is a stochastic process modelled by a state space model in which the sojourn times between state transitions can be arbitrarily distributed. The approach is realized as a MATLAB tool in which the user can apply a steady-state based analysis, a Loss and Risk based measure of safety. The tool can assess the safety of semi-Markov systems as long as they are irreducible and positive recurrent. For systems that fulfil these properties, it is possible to draw conclusions about the safety of the system through a risk analysis, and about which autonomous driving level the system is in through a sensitivity analysis. The developed tool, or the approach with the semi-Markov model, might be a good complement to ISO 26262.
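A steady-state analysis of an irreducible, positive recurrent semi-Markov process typically combines the stationary distribution of the embedded jump chain with the mean sojourn times. The sketch below (hypothetical three-state safety model and rates; the Loss and Risk measure itself is not reproduced) computes the long-run fraction of time spent in each state.

```python
import numpy as np

def smp_time_fractions(P, mean_sojourn):
    """Long-run fraction of time a semi-Markov process spends in each state:
    nu_i * m_i / sum_j nu_j * m_j, where nu is the stationary distribution of
    the embedded jump chain P and m_i the mean sojourn time in state i.
    Assumes irreducibility and positive recurrence."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])  # nu P = nu and sum(nu) = 1
    b = np.zeros(n + 1)
    b[-1] = 1.0
    nu, *_ = np.linalg.lstsq(A, b, rcond=None)
    w = nu * mean_sojourn
    return w / w.sum()

# illustrative 3-state model: nominal / degraded / safe-stop
P = np.array([[0.0, 0.9, 0.1],
              [0.6, 0.0, 0.4],
              [1.0, 0.0, 0.0]])
print(smp_time_fractions(P, mean_sojourn=np.array([10.0, 2.0, 0.5])))
```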
17

Charlou, Christophe. "Caractérisation et modélisation de l’écoulement de boues résiduaires dans un sécheur à palettes." Thesis, Ecole nationale des Mines d'Albi-Carmaux, 2014. http://www.theses.fr/2014EMAC0004/document.

Full text
Abstract:
Drying is an unavoidable operation prior to sludge valorization in incineration, pyrolysis or gasification. The flexibility to adapt the solid content of the dried sludge to the demand is a major requirement of any drying system. This objective is difficult to reach for paddle dryers. Modeling the process is thus essential. Unfortunately, sludge rheological behavior is complex and computational fluid dynamics is out of reach for the time being. The concept of Residence Time Distribution (RTD) is used here to investigate the sludge flow pattern in a paddle dryer. A reliable and reproducible protocol was established and implemented on a lab-scale continuous dryer. Pulse injections of titanium oxide and of metal salts, with X-ray fluorescence spectroscopy as the detection method, were used to characterize the RTD of the anhydrous solid and the wet sludge, respectively. Premixing the pasty sludge, for tracer powder dispersion for instance, changes the structure of the material. This was highlighted through measurements of particle size distributions and characterization of rheological properties. However, drying experiments performed in batch emphasized that premixing does not have any influence on the kinetics and the sticky phase. The RTD curves of the anhydrous solid are superimposed on those of the moist sludge. Consequently, a simpler protocol, based on pulse injection of sodium chloride and offline conductivity measurements, was established. Easier to implement in industry and cheaper, this method proves to be as reliable as the first one. The influence of storage duration prior to drying was assessed. The mean residence time doubles when the storage duration changes from 24h to 48h. Finally, a model based on the theory of Markov chains has been developed to represent the RTD. The flow of anhydrous solids is described by a chain of n perfectly mixed cells, n corresponding to the number of paddles. The transition probabilities between the cells are governed by two parameters: the ratio of internal recirculation, R, and the solids hold-up, MS. R is determined from Van der Laan's relation and MS is identified by fitting the model to the experimental RTD. The model describes the flow pattern with good accuracy. The computed hold-up is lower than the experimental one. Part of the sludge is stuck to the walls of the dryer and the rotor, acting as dead volumes in the process.
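The cell-chain idea can be illustrated with a small discrete-time sketch: n perfectly mixed cells, a forward/backward split controlled by a recycle ratio R, and an absorbing outlet, with the RTD read off as the exit-time distribution after a pulse injection. This is one plausible discretisation only; the thesis' exact parameterisation in terms of R and the hold-up MS is not reproduced here.

```python
import numpy as np

def rtd_from_cell_chain(n_cells, R, n_steps=300):
    """RTD of a chain of n perfectly mixed cells with internal recirculation.
    At each step material moves forward with probability 1/(1+R) and backward
    with probability R/(1+R); the last cell discharges to an absorbing outlet."""
    m = n_cells + 1                        # last index = outlet (absorbing)
    P = np.zeros((m, m))
    pf, pb = 1.0 / (1.0 + R), R / (1.0 + R)
    for i in range(n_cells):
        P[i, min(i + 1, n_cells)] += pf    # forward (last cell exits)
        P[i, max(i - 1, 0)] += pb          # backward (reflected at the inlet)
    P[n_cells, n_cells] = 1.0              # outlet is absorbing
    state = np.zeros(m)
    state[0] = 1.0                         # pulse injection in the first cell
    exited = []
    for _ in range(n_steps):
        new = state @ P
        exited.append(new[n_cells] - state[n_cells])   # mass exiting this step
        state = new
    return np.array(exited)

E = rtd_from_cell_chain(n_cells=8, R=0.5)
print(E.sum(), E.argmax())                 # captured mass and modal exit step
```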
18

Schuster, Johann [Verfasser], Markus [Akademischer Betreuer] Siegle, and Holger [Akademischer Betreuer] Hermanns. "Towards faster numerical solution of Continuous Time Markov Chains stored by symbolic data structures / Johann Schuster. Universität der Bundeswehr München, Fakultät für Informatik. Gutachter: Holger Hermanns. Betreuer: Markus Siegle." Neubiberg : Universitätsbibliothek der Universität der Bundeswehr, 2012. http://d-nb.info/102057920X/34.

Full text
19

Chaouch, Chakib. "Cyber-physical systems in the framework of audio song recognition and reliability engineering." Doctoral thesis, 2021. http://hdl.handle.net/11570/3210939.

Full text
Abstract:
Music Information Retrieval (MIR) is the interdisciplinary field of extracting information from music, and it is the topic of our research. An MIR system faces significant issues in dealing with various genres of music. Music retrieval aims at helping end-users search for and find a desired piece of music in an extensive database; in other words, it tries to make music information more accessible to listeners, musicians, and data scientists. The challenges that an audio recognition system faces in everyday use come in a variety of forms: robustness to near-identical original audio, noise, and spectral or temporal distortion, the minimal length of song track required for identification, retrieval speed, and processing load are all important factors. To overcome these problems, a Short-Time Power Spectral Density (ST-PSD) fingerprinting method is proposed as an innovative, efficient and highly accurate fingerprinting approach. To maintain high accuracy and specificity on hard datasets, we propose matching features based on an efficient Hamming-distance search on binary fingerprints, followed by a verification step for match hypotheses. We gradually improve this system by adding components such as a Mel-frequency filter bank and a progressive probability evaluation score. In addition, we introduce a new fingerprint generation method, present the fundamentals for generating fingerprints, and show that they are robust in the song recognition process. We then evaluate the performance of the proposed methods using a scoring measure based on classification accuracy over thousands of songs. Our purpose is to communicate the effectiveness of the fingerprints generated with the two proposed approaches; we show that, even without an optimized searching algorithm, the accuracy obtained in recognizing pieces of songs is very good, making the proposed approach a good candidate for use in an effective song recognition process. A second area of research, carried out during a period abroad at Duke University, USA, as part of an exchange program, concerns reliability engineering. The first part focuses on the reliability and interval reliability of phased-mission systems (PMS) with repairable components and disconnected phases, using analytical state-space-oriented modeling with continuous-time Markov chains (CTMCs). The second part focuses on PMS with non-repairable multi-state components, where a practical case study of a spacecraft satellite is used to demonstrate the proposed PMS-BDD method, implemented with the SHARPE tool on a fault-tree (FT) configuration, in order to assess the system's reliability and unreliability.
20

Tribastone, Mirco. "Scalable analysis of stochastic process algebra models." Thesis, University of Edinburgh, 2010. http://hdl.handle.net/1842/4629.

Full text
Abstract:
The performance modelling of large-scale systems using discrete-state approaches is fundamentally hampered by the well-known problem of state-space explosion, which causes exponential growth of the reachable state space as a function of the number of components which constitute the model. Because they are mapped onto continuous-time Markov chains (CTMCs), models described in the stochastic process algebra PEPA are no exception. This thesis presents a deterministic continuous-state semantics of PEPA which employs ordinary differential equations (ODEs) as the underlying mathematics for the performance evaluation. This is suitable for models consisting of large numbers of replicated components, as the ODE problem size is insensitive to the actual population levels of the system under study. Furthermore, the ODE is given an interpretation as the fluid limit of a properly defined CTMC model when the initial population levels go to infinity. This framework allows the use of existing results which give error bounds to assess the quality of the differential approximation. Performance indices such as throughput, utilisation, and average response time are interpreted deterministically as functions of the ODE solution and are related to corresponding reward structures in the Markovian setting. The differential interpretation of PEPA provides a framework that is conceptually analogous to established approximation methods in queueing networks based on mean-value analysis, as both approaches aim at reducing the computational cost of the analysis by providing estimates for the expected values of the performance metrics of interest. The relationship between these two techniques is examined in more detail in a comparison between PEPA and the Layered Queueing Network (LQN) model. General patterns of translation of LQN elements into corresponding PEPA components are applied to a substantial case study of a distributed computer system. This model is analysed using stochastic simulation to gauge the soundness of the translation. Furthermore, it is subjected to a series of numerical tests to compare execution runtimes and accuracy of the PEPA differential analysis against the LQN mean-value approximation method. Finally, this thesis discusses the major elements concerning the development of a software toolkit, the PEPA Eclipse Plug-in, which offers a comprehensive modelling environment for PEPA, including modules for static analysis, explicit state-space exploration, numerical solution of the steady-state equilibrium of the Markov chain, stochastic simulation, the differential analysis approach herein presented, and a graphical framework for model editing and visualisation of performance evaluation results.
21

Liechty, John Calder. "MCMC methods and continuous-time, hidden Markov models." Thesis, University of Cambridge, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.625002.

Full text
22

Rýzner, Zdeněk. "Využití teorie hromadné obsluhy při návrhu a optimalizaci paketových sítí." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2011. http://www.nusl.cz/ntk/nusl-219285.

Full text
Abstract:
This master's thesis deals with queueing theory and its application to designing node models in packet-switched networks. General principles of designing queueing theory models and their mathematical background are described. Further, a simulator of packet delay in a network was created. This application implements the two described models, M/M/1 and M/G/1, and can be used for simulating network nodes and obtaining basic network characteristics such as packet delay or packet loss. Finally, a lab exercise was created in which students familiarize themselves with basic concepts of queueing theory and examine both analytical and simulation approaches to solving queueing systems.
23

Robacker, Thomas C. "Comparison of Two Parameter Estimation Techniques for Stochastic Models." Digital Commons @ East Tennessee State University, 2015. https://dc.etsu.edu/etd/2567.

Full text
Abstract:
Parameter estimation techniques have been successfully and extensively applied to deterministic models based on ordinary differential equations but are in early development for stochastic models. In this thesis, we first investigate using parameter estimation techniques for a deterministic model to approximate parameters in a corresponding stochastic model. The basis behind this approach lies in the Kurtz limit theorem which implies that for large populations, the realizations of the stochastic model converge to the deterministic model. We show for two example models that this approach often fails to estimate parameters well when the population size is small. We then develop a new method, the MCR method, which is unique to stochastic models and provides significantly better estimates and smaller confidence intervals for parameter values. Initial analysis of the new MCR method indicates that this method might be a viable method for parameter estimation for continuous time Markov chain models.
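The Kurtz limit underlying the first approach can be illustrated with a toy immigration-death chain (not one of the thesis' example models): as the system size N grows, scaled Gillespie realisations concentrate around the solution of the limiting ODE.

```python
import numpy as np

def gillespie_immigration_death(a, mu, N, T, rng):
    """Gillespie simulation of an immigration-death chain with immigration
    rate a*N and per-capita death rate mu; returns the scaled density X(T)/N.
    For large N this approaches the ODE x' = a - mu*x (Kurtz limit)."""
    t, X = 0.0, 0
    while t < T:
        r_birth, r_death = a * N, mu * X
        total = r_birth + r_death
        t += rng.exponential(1.0 / total)       # time to next reaction
        if t >= T:
            break
        X += 1 if rng.uniform() < r_birth / total else -1
    return X / N

rng = np.random.default_rng(1)
a, mu, T = 2.0, 0.5, 10.0
ode_limit = (a / mu) * (1 - np.exp(-mu * T))    # ODE solution with x(0) = 0
for N in (10, 100, 1000):
    sims = [gillespie_immigration_death(a, mu, N, T, rng) for _ in range(200)]
    print(N, np.mean(sims), "vs ODE", ode_limit)
```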
24

Dittmer, Evelyn [Verfasser]. "Hidden Markov Models with time-continuous output behavior / Evelyn Dittmer." Berlin : Freie Universität Berlin, 2009. http://d-nb.info/1023498081/34.

Full text
25

Chotard, Alexandre. "Markov chain Analysis of Evolution Strategies." Thesis, Paris 11, 2015. http://www.theses.fr/2015PA112230/document.

Full text
Abstract:
In this dissertation an analysis of Evolution Strategies (ESs) using the theory of Markov chains is conducted. Proofs of divergence or convergence of these algorithms are obtained, and tools to achieve such proofs are developed. ESs are so-called "black-box" stochastic optimization algorithms, i.e. the information on the function to be optimized is limited to the values it associates to points. In particular, gradients are unavailable. Proofs of convergence or divergence of these algorithms can be obtained through the analysis of Markov chains underlying these algorithms. The proofs of log-linear convergence and of divergence obtained in this thesis in the context of a linear function with or without constraint are essential components for the proofs of convergence of ESs on wide classes of functions. This dissertation first gives an introduction to Markov chain theory, then a state of the art on ESs and on black-box continuous optimization, and presents already established links between ESs and Markov chains. The contributions of this thesis are then presented: o General mathematical tools that can be applied to a wider range of problems are developed. These tools allow us to easily prove specific Markov chain properties (irreducibility, aperiodicity and the fact that compact sets are small sets for the Markov chain) on the Markov chains studied. Obtaining these properties without these tools is an ad hoc, tedious and technical process that can be very difficult. o Then different ESs are analyzed on different problems. We study a (1,\lambda)-ES using cumulative step-size adaptation on a linear function and prove the log-linear divergence of the step-size; we also study the variation of the logarithm of the step-size, from which we establish a necessary condition for the stability of the algorithm with respect to the dimension of the search space. Then we study an ES with constant step-size and with cumulative step-size adaptation on a linear function with a linear constraint, using resampling to handle unfeasible solutions. We prove that with constant step-size the algorithm diverges, while with cumulative step-size adaptation, depending on parameters of the problem and of the ES, the algorithm converges or diverges log-linearly. We then investigate the dependence of the convergence or divergence rate of the algorithm on parameters of the problem and of the ES. Finally we study an ES with a sampling distribution that can be non-Gaussian and with constant step-size on a linear function with a linear constraint. We give sufficient conditions on the sampling distribution for the algorithm to diverge. We also show that different covariance matrices for the sampling distribution correspond to a change of norm of the search space, and that this implies that adapting the covariance matrix of the sampling distribution may allow an ES with cumulative step-size adaptation to successfully diverge on a linear function with any linear constraint. Finally, these results are summed up, discussed, and perspectives for future work are explored.
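The setting analysed in the second contribution can be mimicked numerically. The sketch below runs a simplified (1,λ)-ES with cumulative step-size adaptation on the linear function f(x) = x[0], with illustrative parameter choices rather than the exact algorithm variant and constants studied in the thesis; for λ > 2 the logarithm of the step-size is expected to grow roughly linearly.

```python
import numpy as np

def one_comma_lambda_csa(n=10, lam=6, iters=1000, seed=0):
    """Simplified (1,lambda)-ES with cumulative step-size adaptation (CSA)
    minimising the linear function f(x) = x[0]; returns log step-sizes."""
    rng = np.random.default_rng(seed)
    c_sigma, d_sigma = 4.0 / (n + 4), 1.0                      # cumulation, damping
    chi_n = np.sqrt(n) * (1 - 1 / (4 * n) + 1 / (21 * n**2))   # E||N(0, I_n)||
    x, sigma, path = np.zeros(n), 1.0, np.zeros(n)
    log_sigma = []
    for _ in range(iters):
        Z = rng.standard_normal((lam, n))
        best = Z[np.argmin(x[0] + sigma * Z[:, 0])]            # best of lam offspring
        x = x + sigma * best
        path = (1 - c_sigma) * path + np.sqrt(c_sigma * (2 - c_sigma)) * best
        sigma *= np.exp(c_sigma / d_sigma * (np.linalg.norm(path) / chi_n - 1))
        log_sigma.append(np.log(sigma))
    return np.array(log_sigma)

ls = one_comma_lambda_csa()
print("average log step-size increase per iteration:", (ls[-1] - ls[0]) / len(ls))
```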
26

Wan, Lijie. "CONTINUOUS TIME MULTI-STATE MODELS FOR INTERVAL CENSORED DATA." UKnowledge, 2016. http://uknowledge.uky.edu/statistics_etds/19.

Full text
Abstract:
Continuous-time multi-state models are widely used in modeling longitudinal data of disease processes with multiple transient states, yet the analysis is complex when subjects are observed periodically, resulting in interval censored data. Recently, most studies focused on modeling the true disease progression as a discrete time stationary Markov chain, and only a few studies have been carried out regarding non-homogenous multi-state models in the presence of interval-censored data. In this dissertation, several likelihood-based methodologies were proposed to deal with interval censored data in multi-state models. Firstly, a continuous time version of a homogenous Markov multi-state model with backward transitions was proposed to handle uneven follow-up assessments or skipped visits, resulting in the interval censored data. Simulations were used to compare the performance of the proposed model with the traditional discrete time stationary Markov chain under different types of observation schemes. We applied these two methods to the well-known Nun study, a longitudinal study of 672 participants aged ≥ 75 years at baseline and followed longitudinally with up to ten cognitive assessments per participant. Secondly, we constructed a non-homogenous Markov model for this type of panel data. The baseline intensity was assumed to be Weibull distributed to accommodate the non-homogenous property. The proportional hazards method was used to incorporate risk factors into the transition intensities. Simulation studies showed that the Weibull assumption does not affect the accuracy of the parameter estimates for the risk factors. We applied our model to data from the BRAiNS study, a longitudinal cohort of 531 subjects each cognitively intact at baseline. Last, we presented a parametric method of fitting semi-Markov models based on Weibull transition intensities with interval censored cognitive data with death as a competing risk. We relaxed the Markov assumption and took interval censoring into account by integrating out all possible unobserved transitions. The proposed model also allowed for incorporating time-dependent covariates. We provided a goodness-of-fit assessment for the proposed model by the means of prevalence counts. To illustrate the methods, we applied our model to the BRAiNS study.
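For the time-homogeneous case, the panel-data (interval-censored) likelihood reduces to products of entries of the matrix exponential of the generator. A minimal sketch with a toy illness-progression generator (covariates, Weibull intensities and the competing risk of death studied in the thesis are omitted):

```python
import numpy as np
from scipy.linalg import expm

def interval_censored_loglik(Q, times, states):
    """Log-likelihood of a panel-observed (interval-censored) CTMC path:
    states are seen only at visit times, so each contribution is an entry of
    the transition matrix P(dt) = expm(Q*dt) for the elapsed interval dt."""
    ll = 0.0
    for k in range(len(times) - 1):
        dt = times[k + 1] - times[k]
        P = expm(Q * dt)
        ll += np.log(P[states[k], states[k + 1]])
    return ll

# toy 3-state generator (e.g. intact, impaired, demented; last state absorbing)
Q = np.array([[-0.20, 0.15, 0.05],
              [0.05, -0.30, 0.25],
              [0.00, 0.00, 0.00]])
times = [0.0, 1.0, 2.5, 4.0]
states = [0, 0, 1, 2]
print(interval_censored_loglik(Q, times, states))
```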
27

Unwala, Ishaq Hasanali. "Pipelined processor modeling with finite homogeneous discrete-time Markov chain." 1998. Digital version accessible at http://wwwlib.umi.com/cr/utexas/main.

Full text
28

Veitch, John D. "Applications of Markov Chain Monte Carlo methods to continuous gravitational wave data analysis." Thesis, University of Glasgow, 2007. http://theses.gla.ac.uk/35/.

Full text
Abstract:
Thesis (Ph.D.) - University of Glasgow, 2007.
Ph.D. thesis submitted to Information and Mathematical Sciences Faculty, Department of Mathematics, University of Glasgow, 2007. Includes bibliographical references. Print version also available.
29

Shaikh, A. D. "Modelling data and voice traffic over IP networks using continuous-time Markov models." Thesis, Aston University, 2009. http://publications.aston.ac.uk/15385/.

Full text
Abstract:
Common approaches to IP-traffic modelling have featured the use of stochastic models, based on the Markov property, which can be classified into black box and white box models based on the approach used for modelling traffic. White box models are simple to understand, transparent and have a physical meaning attributed to each of the associated parameters. To exploit this key advantage, this thesis explores the use of simple classic continuous-time Markov models based on a white box approach, to model not only the network traffic statistics but also the source behaviour with respect to the network and application. The thesis is divided into two parts: The first part focuses on the use of simple Markov and semi-Markov traffic models, starting from the simplest two-state model and moving upwards to n-state models with Poisson and non-Poisson statistics. The thesis then introduces the convenient-to-use, mathematically derived Gaussian Markov models, which are used to model the measured network IP traffic statistics. As one of the most significant contributions, the thesis establishes the significance of the second-order density statistics, as it reveals that, in contrast to first-order density, they carry much more unique information on traffic sources and behaviour. The thesis then exploits the use of Gaussian Markov models to model these unique features and finally shows how the use of simple classic Markov models coupled with second-order density statistics provides an excellent tool for capturing maximum traffic detail, which in itself is the essence of good traffic modelling. The second part of the thesis studies the ON-OFF characteristics of VoIP traffic with reference to accurate measurements of the ON and OFF periods, made from a large multi-lingual database of over 100 hours' worth of VoIP call recordings. The impact of the language, prosodic structure and speech rate of the speaker on the statistics of the ON-OFF periods is analysed and relevant conclusions are presented. Finally, an ON-OFF VoIP source model with log-normal transitions is contributed as an ideal candidate to model VoIP traffic, and the results of this model are compared with those of previously published work.
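The contributed ON-OFF source with log-normal period lengths can be simulated directly; the snippet below uses illustrative mean ON/OFF durations and dispersion, not the values fitted in the thesis.

```python
import numpy as np

def simulate_on_off(mean_on=1.2, mean_off=1.8, sigma=0.6, T=60.0, seed=0):
    """Simulate an ON-OFF voice source with log-normal ON and OFF durations
    and return the fraction of time spent in the ON (talk-spurt) state."""
    rng = np.random.default_rng(seed)
    # pick log-normal mu so that the mean duration equals mean_on / mean_off
    mu_on = np.log(mean_on) - 0.5 * sigma**2
    mu_off = np.log(mean_off) - 0.5 * sigma**2
    t, on_time, state = 0.0, 0.0, True
    while t < T:
        d = rng.lognormal(mu_on if state else mu_off, sigma)
        d = min(d, T - t)                 # truncate the last period at T
        if state:
            on_time += d
        t += d
        state = not state
    return on_time / T

print(simulate_on_off())   # roughly mean_on / (mean_on + mean_off) ≈ 0.4
```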
30

Despain, Lynnae. "A Mathematical Model of Amoeboid Cell Motion as a Continuous-Time Markov Process." BYU ScholarsArchive, 2015. https://scholarsarchive.byu.edu/etd/5671.

Full text
Abstract:
Understanding cell motion facilitates the understanding of many biological processes such as wound healing and cancer growth. Constructing mathematical models that replicate amoeboid cell motion can help us understand and make predictions about real-world cell movement. We review a force-based model of cell motion that considers a cell as a nucleus and several adhesion sites connected to the nucleus by springs. In this model, the cell moves as the adhesion sites attach to and detach from a substrate. This model is then reformulated as a random process that tracks the attachment characteristic (attached or detached) of each adhesion site, the location of each adhesion site, and the centroid of the attached sites. It is shown that this random process is a continuous-time jump-type Markov process and that the sub-process that counts the number of attached adhesion sites is also a Markov process with an attracting invariant distribution. Under certain hypotheses, we derive a formula for the velocity of the expected location of the centroid.
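The counting sub-process mentioned at the end of the abstract is, in the simplest setting of independently attaching and detaching sites, a birth-death chain whose invariant distribution can be written down by detailed balance. A sketch under that simplifying assumption (the thesis' full model also tracks site locations and the centroid):

```python
import numpy as np
from scipy.stats import binom

def attached_sites_stationary(N, attach, detach):
    """Stationary distribution of the number of attached adhesion sites when
    each of N sites attaches at rate `attach` and detaches at rate `detach`
    independently (birth rate (N-k)*attach, death rate k*detach)."""
    pi = np.zeros(N + 1)
    pi[0] = 1.0
    for k in range(N):
        birth = (N - k) * attach            # rate of k -> k+1
        death = (k + 1) * detach            # rate of k+1 -> k
        pi[k + 1] = pi[k] * birth / death   # detailed balance for a birth-death chain
    return pi / pi.sum()

pi = attached_sites_stationary(N=5, attach=2.0, detach=1.0)
print(pi)
print(binom.pmf(np.arange(6), 5, 2.0 / 3.0))   # matches Binomial(N, a/(a+d))
```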
31

Wang, Yinglu. "A Markov Chain Based Method for Time Series Data Modeling and Prediction." University of Cincinnati / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1592395278430805.

Full text
32

Helbert, Zachary T. "Modeling Enrollment at a Regional University using a Discrete-Time Markov Chain." Digital Commons @ East Tennessee State University, 2015. https://dc.etsu.edu/honors/281.

Full text
Abstract:
A discrete time Markov Chain is used to model enrollment at a regional university. A preliminary analysis is conducted on the data set in order to determine the classes for the Markov chain model. The semester, yearly, and long term results of the model are examined thoroughly. A sensitivity analysis of the probability matrix entries is then conducted to determine the overall greatest influence on graduation rates.
33

Mamudu, Lohuwa. "Modeling Student Enrollment at ETSU Using a Discrete-Time Markov Chain Model." Digital Commons @ East Tennessee State University, 2017. https://dc.etsu.edu/etd/3310.

Full text
Abstract:
Discrete-time Markov chain models can be used to make future predictions in many important fields including education. Government and educational institutions today are concerned about college enrollment and what impacts the number of students enrolling. One challenge is how to make an accurate prediction about student enrollment so institutions can plan appropriately. In this thesis, we model student enrollment at East Tennessee State University (ETSU) with a discrete-time Markov chain model developed using ETSU student data from Fall 2008 to Spring 2017. We focus on the progression from one level to another within the university system, including graduation and dropout probabilities as indicated by the data. We further include the probability that a student will leave school for a limited period of time and then return to the institution. We conclude with a simulation of the model and a comparison to the trends seen in the data.
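The kind of computation such a model supports can be sketched with an absorbing discrete-time chain: with transient enrollment states and absorbing graduation/dropout states, the fundamental matrix gives graduation probabilities and expected time to absorption. The numbers below are hypothetical, not ETSU's fitted transition probabilities.

```python
import numpy as np

# Hypothetical per-semester transition matrix over transient states
# (freshman, sophomore, junior, senior, stopped-out).
Qt = np.array([[0.25, 0.55, 0.00, 0.00, 0.05],
               [0.00, 0.25, 0.55, 0.00, 0.05],
               [0.00, 0.00, 0.25, 0.60, 0.05],
               [0.00, 0.00, 0.00, 0.30, 0.05],
               [0.10, 0.10, 0.10, 0.10, 0.30]])
# Transitions into the absorbing states (graduated, dropped out).
R = np.array([[0.00, 0.15],
              [0.00, 0.15],
              [0.00, 0.10],
              [0.60, 0.05],
              [0.00, 0.30]])

# Fundamental matrix N = (I - Qt)^{-1}; B = N @ R gives, for each starting
# state, the probability of eventually graduating vs. dropping out.
N = np.linalg.inv(np.eye(5) - Qt)
B = N @ R
print("P(graduate | start as freshman) =", B[0, 0])
print("expected semesters to absorption:", N.sum(axis=1))
```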
34

Frühwirth-Schnatter, Sylvia, Stefan Pittner, Andrea Weber, and Rudolf Winter-Ebmer. "Analysing plant closure effects using time-varying mixture-of-experts Markov chain clustering." Institute of Mathematical Statistics, 2018. http://dx.doi.org/10.1214/17-AOAS1132.

Full text
Abstract:
In this paper we study data on discrete labor market transitions from Austria. In particular, we follow the careers of workers who experience a job displacement due to plant closure and observe - over a period of 40 quarters - whether these workers manage to return to a steady career path. To analyse these discrete-valued panel data, we apply a new method of Bayesian Markov chain clustering analysis based on inhomogeneous first order Markov transition processes with time-varying transition matrices. In addition, a mixture-of-experts approach allows us to model the probability of belonging to a certain cluster as depending on a set of covariates via a multinomial logit model. Our cluster analysis identifies five career patterns after plant closure and reveals that some workers cope quite easily with a job loss whereas others suffer large losses over extended periods of time.
35

Yildirak, Sahap Kasirga. "The Identification Of A Bivariate Markov Chain Market Model." PhD thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/1257898/index.pdf.

Full text
Abstract:
This work is an extension of the classical Cox-Ross-Rubinstein discrete-time market model, in which only one risky asset is considered. We introduce another risky asset into the model. Moreover, the random structure of the asset price sequence is generated by a bivariate finite-state Markov chain. The interest rate then varies over time as a function of the generating sequences. We discuss how the model can be adapted to real data. Finally, we illustrate sample implementations to give a better idea about the use of the model.
36

Gupta, Amrita. "Unsupervised learning of disease subtypes from continuous time Hidden Markov Models of disease progression." Thesis, Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/54364.

Full text
Abstract:
The detection of subtypes of complex diseases has important implications for diagnosis and treatment. Numerous prior studies have used data-driven approaches to identify clusters of similar patients, but it is not yet clear how to best specify what constitutes a clinically meaningful phenotype. This study explored disease subtyping on the basis of temporal development patterns. In particular, we attempted to differentiate infants with autism spectrum disorder into more fine-grained classes with distinctive patterns of early skill development. We modeled the progression of autism explicitly using a continuous-time hidden Markov model. Subsequently, we compared subjects on the basis of their trajectories through the model state space. Two approaches to subtyping were utilized, one based on time-series clustering with a custom distance function and one based on tensor factorization. A web application was also developed to facilitate the visual exploration of our results. Results suggested the presence of 3 developmental subgroups in the ASD outcome group. The two subtyping approaches are contrasted and possible future directions for research are discussed.
37

Rodrigues, Caio César Graciani. "Control and filtering for continuous-time Markov jump linear systems with partial mode information." Laboratório Nacional de Computação Científica, 2017. https://tede.lncc.br/handle/tede/267.

Full text
Abstract:
Over the past few decades, the study of systems subject to abrupt changes in their structures has consolidated as a significant area of research, due, in part, to the increasing importance of dealing with the occurrence of random failures in complex systems. In this context, the Markov jump linear system (MJLS) comes up as an approach of central interest as a means of representing these dynamics. Among the numerous works that seek to establish design methods for control and filtering for this class of systems, the scarcity of literature related to partial observation scenarios is noticeable. This thesis features contributions to H∞ control and filtering for continuous-time MJLS with partial mode information. In order to overcome the challenge regarding the lack of information on the current state of the Markov chain, we use a detector-based formulation. In this formulation, we assume the existence of a detector, available at all times, which provides partial information about the operating mode of the jump process. A favorable feature of this strategy is that it allows us to recover (without being limited to) some recent results for partial-information scenarios in which an explicit solution is available, such as the cases of complete information, mode-independent design, and cluster observations. Our results comprise a new bounded real lemma followed by the design of controllers and filters driven only by the information given by the detector. Both the H∞ analysis and the design methods presented are established through the solution of linear matrix inequalities. In addition, numerical simulations are presented encompassing the H∞ performance for particular structures of the detector process. From an application point of view, we highlight some examples related to the linearized dynamics of an unmanned aerial vehicle.
APA, Harvard, Vancouver, ISO, and other styles
38

Figueiredo, Danilo Zucolli. "Discrete-time jump linear systems with Markov chain in a general state space." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/3/3139/tde-18012017-115659/.

Full text
Abstract:
This thesis deals with discrete-time Markov jump linear systems (MJLS) with Markov chain in a general Borel space S. Several control issues have been addressed for this class of dynamic systems, including stochastic stability (SS), linear quadratic (LQ) optimal control synthesis, filter design, and a separation principle. Necessary and sufficient conditions for SS have been derived. It was shown that SS is equivalent to the spectral radius of an operator being less than 1, or to the existence of a solution to a "Lyapunov-like" equation. Based on the SS concept, the finite- and infinite-horizon LQ optimal control problems were tackled. The solution to the finite- (infinite-)horizon LQ optimal control problem was derived from the associated control S-coupled Riccati difference (algebraic) equations. By S-coupled it is meant that the equations are coupled via an integral over a transition probability kernel having a density with respect to a σ-finite measure on the Borel space S. The design of linear Markov jump filters was analyzed and a solution to the finite- (infinite-)horizon filtering problem was obtained based on the associated filtering S-coupled Riccati difference (algebraic) equations. Conditions for the existence and uniqueness of a stabilizing positive semi-definite solution to the control and filtering S-coupled algebraic Riccati equations have also been derived. Finally, a separation principle for discrete-time MJLS with Markov chain in a general state space was obtained. It was shown that the optimal controller for a partial-information optimal control problem separates the problem into two: one associated with a filtering problem and the other with an optimal control problem under complete information. It is expected that the results obtained in this thesis may motivate further research on discrete-time MJLS with Markov chain in a general state space.
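For a finite number of modes, the structure of the coupled control Riccati equations referred to above can be sketched as follows. The thesis works on a general Borel space S, so this finite-mode recursion, with invented matrices P, A, B, Q, R, only illustrates the coupling through the expectation of the cost-to-go over the next mode.

```python
# Hedged finite-mode illustration of the coupled control Riccati difference
# equations for a discrete-time MJLS; all matrices are assumed example data.
import numpy as np

P = np.array([[0.9, 0.1], [0.2, 0.8]])          # mode transition matrix (assumed)
A = [np.array([[1.0, 0.1], [0.0, 1.0]]), np.array([[1.0, 0.2], [0.0, 0.9]])]
B = [np.array([[0.0], [1.0]]), np.array([[0.0], [1.0]])]
Q = [np.eye(2), np.eye(2)]
R = [np.eye(1), np.eye(1)]

def coupled_riccati(n_iter=500):
    X = [np.zeros((2, 2)) for _ in range(2)]
    for _ in range(n_iter):
        X_new = []
        for i in range(2):
            # coupling term: expectation of X over the next mode, given mode i
            E = sum(P[i, j] * X[j] for j in range(2))
            G = R[i] + B[i].T @ E @ B[i]
            K = np.linalg.solve(G, B[i].T @ E @ A[i])     # mode-i feedback gain
            X_new.append(Q[i] + A[i].T @ E @ A[i] - A[i].T @ E @ B[i] @ K)
        X = X_new
    return X

X = coupled_riccati()
print(np.round(X[0], 3))
```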
APA, Harvard, Vancouver, ISO, and other styles
39

Manrique, Garcia Aurora. "Econometric analysis of limited dependent time series." Thesis, University of Oxford, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.389797.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

O'Ruanaidh, Joseph J. K. "Numerical Bayesian methods applied to signal processing." Thesis, University of Cambridge, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.339465.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Horký, Miroslav. "Modely hromadné obsluhy." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2015. http://www.nusl.cz/ntk/nusl-232033.

Full text
Abstract:
The master's thesis deals with models of queueing systems that exploit the Markov chain property. A queueing system is a system into which objects arrive at random moments and require service. The thesis specifically treats models in which the intervals between arrivals and the service times are exponentially distributed. In the theoretical part I cover stochastic processes, queueing theory, the classification of models, and the description of models with the Markovian property. In the practical part I describe the design and operation of a program that simulates the chosen M/M/m model. Finally, I compare results calculated analytically with those obtained by simulating the M/M/m model.
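As a rough illustration of the analytic side of such a comparison, the standard birth-death formulas for an M/M/m queue give the stationary quantities directly; the arrival rate, service rate, and number of servers below are example values, not those used in the thesis.

```python
# Analytic performance measures of an M/M/m queue from the standard formulas.
from math import factorial

def mmm_measures(lambda_, mu, m):
    a = lambda_ / mu                 # offered load
    rho = a / m                      # server utilisation, must be < 1
    p0 = 1.0 / (sum(a**k / factorial(k) for k in range(m))
                + a**m / (factorial(m) * (1 - rho)))      # P(system empty)
    Lq = p0 * a**m * rho / (factorial(m) * (1 - rho) ** 2)  # mean queue length
    Wq = Lq / lambda_                                       # mean waiting time
    L = Lq + a                                              # mean number in system
    W = Wq + 1 / mu                                         # mean time in system
    return {"p0": p0, "Lq": Lq, "Wq": Wq, "L": L, "W": W}

print(mmm_measures(lambda_=4.0, mu=1.5, m=3))
```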
APA, Harvard, Vancouver, ISO, and other styles
42

Spade, David Allen. "Investigating Convergence of Markov Chain Monte Carlo Methods for Bayesian Phylogenetic Inference." The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1372173121.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Yang, Linji. "Phase transitions in spin systems: uniqueness, reconstruction and mixing time." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/47593.

Full text
Abstract:
Spin systems are powerful mathematical models widely used and studied in Statistical Physics and Computer Science. This thesis focuses on the study of spin systems, specifically colorings and weighted independent sets (the hard-core model). In many spin systems, there exist phase transition phenomena: there is a threshold value of a parameter such that when the parameter is on one side of the threshold, the system exhibits the so-called spatial decay of correlation, i.e., the influence from a set of vertices on another set of vertices diminishes as the distance between the two sets grows; when the parameter is on the other side, long-range correlations persist. The uniqueness problem and the reconstruction problem are two major threshold problems concerned with the decay of correlations in the Gibbs measure from different perspectives. In Computer Science, the study of spin systems has mainly focused on finding an efficient algorithm that samples configurations from a distribution very close to the Gibbs measure. Glauber dynamics is a typical Markov chain algorithm for performing such sampling. In many systems, the convergence time of the Glauber dynamics also exhibits a threshold behavior: the speed of convergence experiences a dramatic change around the threshold of the parameter. The first two parts of this thesis focus on making connections between the phase transition of the convergence time of the dynamics and the phase transition of the reconstruction phenomenon in both colorings and the hard-core model on regular trees. A relatively sharp threshold is established for the change of the convergence time, which coincides with the reconstruction threshold. A general technique for upper bounding the conductance of the dynamics via analyzing the sensitivity of the reconstruction algorithm is proposed and proven to be very effective for lower bounding the convergence time of the dynamics. The third part of the thesis provides an innovative analytical method for establishing a strong version of the decay of correlation of the Gibbs distributions for many two-spin systems on various classes of graphs. In particular, the method is applied to the hard-core model on the square lattice, a very important graph that is of great interest in both Statistical Physics and Computer Science. As a result, we significantly improve the lower bound of the uniqueness threshold on the square lattice and extend the range of parameters for which the Glauber dynamics is rapidly mixing.
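For concreteness, the Markov chain whose mixing time is at issue can be sketched as single-site (heat-bath) Glauber dynamics for the hard-core model on a finite patch of the square lattice. The lattice size, fugacity, and number of steps below are illustrative assumptions; the sketch says nothing about the thesis' conductance or decay-of-correlation arguments.

```python
# Heat-bath Glauber dynamics for the hard-core model with fugacity lam on an
# n x n grid: pick a vertex, resample it conditionally on its neighborhood.
import random

def glauber_hardcore(n=20, lam=1.0, steps=100_000, seed=0):
    random.seed(seed)
    occupied = set()                       # current independent set
    vertices = [(i, j) for i in range(n) for j in range(n)]
    def neighbors(v):
        i, j = v
        return [(i + di, j + dj) for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= i + di < n and 0 <= j + dj < n]
    for _ in range(steps):
        v = random.choice(vertices)
        occupied.discard(v)                # resample the spin at v
        if all(u not in occupied for u in neighbors(v)):
            if random.random() < lam / (1 + lam):
                occupied.add(v)
    return occupied

ind_set = glauber_hardcore()
print("occupation density:", len(ind_set) / 400)
```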
APA, Harvard, Vancouver, ISO, and other styles
44

VILLA, SIMONE. "Continuous Time Bayesian Networks for Reasoning and Decision Making in Finance." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2015. http://hdl.handle.net/10281/69953.

Full text
Abstract:
The analysis of the huge amount of financial data, made available by electronic markets, calls for new models and techniques to effectively extract knowledge to be exploited in an informed decision-making process. The aim of this thesis is to introduce probabilistic graphical models that can be used to reason and to perform actions in such a context. In the first part of this thesis, we present a framework which exploits Bayesian networks to perform portfolio analysis and optimization in a holistic way. It leverages the compact and efficient representation of high-dimensional probability distributions offered by Bayesian networks and their ability to perform evidential reasoning in order to optimize the portfolio according to different economic scenarios. In many cases, we would like to reason about market change, i.e. we would like to express queries as probability distributions over time. Continuous time Bayesian networks can be used to address this issue. In the second part of the thesis, we show how it is possible to use this model to tackle real financial problems and we describe two notable extensions. The first one concerns classification, where we introduce an algorithm for learning these classifiers from Big Data, and we describe their straightforward application to the foreign exchange prediction problem in the high-frequency domain. The second one is related to non-stationary domains, where we explicitly model the presence of statistical dependencies in multivariate time series while allowing them to change over time. In the third part of the thesis, we describe the use of continuous time Bayesian networks within the Markov decision process framework, which provides a model for sequential decision-making under uncertainty. We introduce a method to control continuous time dynamic systems, based on this framework, that relies on additive and context-specific features to scale up to large state spaces. Finally, we show the performance of our method in a simplified but meaningful trading domain.
APA, Harvard, Vancouver, ISO, and other styles
45

Atamna, Asma. "Analysis of Randomized Adaptive Algorithms for Black-Box Continuous Constrained Optimization." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS010/document.

Full text
Abstract:
We investigate various aspects of adaptive randomized (or stochastic) algorithms for both constrained and unconstrained black-box continuous optimization. The first part of this thesis focuses on step-size adaptation in unconstrained optimization. We first present a methodology for efficiently assessing a step-size adaptation mechanism, which consists in testing a given algorithm on a minimal set of functions, each reflecting a particular difficulty that an efficient step-size adaptation algorithm should overcome. We then benchmark two step-size adaptation mechanisms on the well-known BBOB noiseless testbed and compare their performance to that of the state-of-the-art evolution strategy (ES), CMA-ES, with cumulative step-size adaptation. In the second part of this thesis, we investigate linear convergence of a (1+1)-ES and a general step-size adaptive randomized algorithm on a linearly constrained optimization problem, where an adaptive augmented Lagrangian approach is used to handle the constraints. To that end, we extend the Markov chain approach used to analyze randomized algorithms for unconstrained optimization to the constrained case. We prove that when the augmented Lagrangian associated with the problem, centered at the optimum and the corresponding Lagrange multipliers, is positive homogeneous of degree 2, then for algorithms enjoying some invariance properties there exists an underlying homogeneous Markov chain whose stability (typically positivity and Harris recurrence) leads to linear convergence to both the optimum and the corresponding Lagrange multipliers. We deduce linear convergence under the aforementioned stability assumptions by applying a law of large numbers for Markov chains. We also present a general framework to design an augmented-Lagrangian-based adaptive randomized algorithm for constrained optimization from an adaptive randomized algorithm for unconstrained optimization. In our case, stability is validated empirically.
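A minimal sketch of the kind of step-size adaptive randomized algorithm under study is a (1+1)-ES with the classical 1/5th success rule on an unconstrained test function; the augmented-Lagrangian constraint handling analysed in the thesis is omitted here, and the damping constant and sphere objective are illustrative choices.

```python
# (1+1)-ES with a 1/5th-success-rule step-size update on the sphere function.
import numpy as np

def one_plus_one_es(f, x0, sigma0=1.0, iters=2000, seed=1):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx, sigma = f(x), sigma0
    d = 1.0 + x.size                          # damping for the step-size update
    for _ in range(iters):
        y = x + sigma * rng.standard_normal(x.size)   # candidate offspring
        fy = f(y)
        if fy <= fx:                          # success: accept and enlarge sigma
            x, fx = y, fy
            sigma *= np.exp(1.0 / d)
        else:                                 # failure: shrink sigma (1/5 ratio)
            sigma *= np.exp(-0.25 / d)
    return x, fx, sigma

sphere = lambda z: float(np.sum(z ** 2))
x_best, f_best, sigma_final = one_plus_one_es(sphere, [5.0, -3.0, 2.0])
print(f_best, sigma_final)
```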
APA, Harvard, Vancouver, ISO, and other styles
46

Stettler, John. "The Discrete Threshold Regression Model." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1440369876.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Ma, Xinyuan. "Research on dynamic correlation based on stochastic time-varying beta and stochastic volatility." Thesis, Queensland University of Technology, 2018. https://eprints.qut.edu.au/120023/1/Xinyuan_Ma_Thesis.pdf.

Full text
Abstract:
This research used monthly returns of Chinese industry sectors that capture China's economic status to investigate the relationship between the returns of individual industry indices and a market portfolio. To understand the characteristics of these relationships and their response to market fluctuations, this study examined the impact of both good and bad news. Factors such as market value and market capitalization were incorporated to capture investors' expectations towards individual industries. These factors are essential, as they classify industries into popular ones that take a dominant position and unpopular ones that take a less dominant position in China's stock market.
APA, Harvard, Vancouver, ISO, and other styles
48

Singh, Jasdeep. "Schedulability Analysis of Probabilistic Real-Time Systems." Thesis, Toulouse, ISAE, 2020. http://www.theses.fr/2020ESAE0010.

Full text
Abstract:
The thesis is a study of probabilistic approaches for modelling and analyzing real-time systems. The objective is to understand and mitigate the pessimism that exists in the analysis of such systems. Real-time systems must produce results under real-world timing constraints. The execution of the tasks within the system is scheduled based on their worst-case execution time. In practice, there can be many possible execution times below the worst case. We use the probabilistic worst-case execution time, a worst-case probability distribution that upper-bounds all those possible execution times. We use a continuous-time Markov chain model to obtain the probability of missing a real-world timing constraint. We also study mixed-criticality (MC) systems, because MC systems likewise tend to cope with pessimism with safety in mind. MC systems consist of tasks with different importance, or criticalities. The system operates under different criticality modes, in which the execution of tasks of the same or higher criticality is ensured. We first approach MC systems using a discrete-time Markov chain to obtain the probability of the system entering higher criticality modes. We observe certain limitations of these approaches and proceed to model probabilistic MC systems using graph models. We question existing approaches in the literature and provide our own. We obtain schedules for MC systems that are optimized for resource usage. We also take a first step towards accounting for dependence among tasks due to their scheduling.
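The continuous-time Markov chain idea can be illustrated with a small absorbing chain over execution phases: the probability of finishing before a deadline is read off the matrix exponential of the generator. The generator entries and the deadline below are invented for illustration and do not come from the thesis' case studies.

```python
# Probability that a job modelled as an absorbing CTMC completes by deadline D.
import numpy as np
from scipy.linalg import expm

# states: 0 = phase 1, 1 = phase 2, 2 = completed (absorbing)
Q = np.array([[-3.0, 2.0, 1.0],
              [0.0, -4.0, 4.0],
              [0.0, 0.0, 0.0]])

def completion_probability(Q, deadline, start=0, done=2):
    # transient distribution at time `deadline`: rows of expm(Q * deadline)
    P = expm(Q * deadline)
    return P[start, done]

D = 1.0
p_done = completion_probability(Q, D)
print("P(miss deadline) =", 1.0 - p_done)
```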
APA, Harvard, Vancouver, ISO, and other styles
49

Wang, Chiying. "Contributions to Collective Dynamical Clustering-Modeling of Discrete Time Series." Digital WPI, 2016. https://digitalcommons.wpi.edu/etd-dissertations/198.

Full text
Abstract:
The analysis of sequential data is important in business, science, and engineering, for tasks such as signal processing, user behavior mining, and commercial transactions analysis. In this dissertation, we build upon the Collective Dynamical Modeling and Clustering (CDMC) framework for discrete time series modeling by making contributions to clustering initialization, dynamical modeling, and scaling. We first propose a modified Dynamic Time Warping (DTW) approach for clustering initialization within CDMC. The proposed approach provides DTW metrics that penalize deviations of the warping path from the path of constant slope. This reduces over-warping while retaining the efficiency advantages of global constraint approaches, and without relying on domain-dependent constraints. Second, we investigate the use of semi-Markov chains as dynamical models of temporal sequences in which state changes occur infrequently. Semi-Markov chains allow explicitly specifying the distribution of state visit durations. This makes them superior to traditional Markov chains, which implicitly assume an exponential state duration distribution. Third, we consider convergence properties of the CDMC framework. We establish convergence by viewing CDMC from an Expectation Maximization (EM) perspective. We investigate the effect on the time to convergence of our efficient DTW-based initialization technique and selected dynamical models. We also explore the convergence implications of various stopping criteria. Fourth, we consider scaling up CDMC to process big data, using Storm, an open source distributed real-time computation system that supports batch and distributed data processing. We performed experimental evaluation on human sleep data and on user web navigation data. Our results demonstrate the superiority of the strategies introduced in this dissertation over state-of-the-art techniques in terms of modeling quality and efficiency.
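One simple way to realize a DTW variant that penalizes deviation from the constant-slope path is to add a linear penalty on each cell's distance from the diagonal, as sketched below; the penalty weight and the pointwise distance are illustrative choices and not necessarily those used in the dissertation.

```python
# DTW with an added penalty for cells that stray from the constant-slope path.
import numpy as np

def dtw_slope_penalized(a, b, w=1.0):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # deviation of cell (i, j) from the constant-slope (diagonal) path
            deviation = abs(i / n - j / m)
            cost = abs(a[i - 1] - b[j - 1]) + w * deviation
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

x = np.sin(np.linspace(0, 6, 80))
y = np.sin(np.linspace(0, 6, 100) + 0.3)
print(dtw_slope_penalized(x, y))
```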
APA, Harvard, Vancouver, ISO, and other styles
50

Petersson, Mikael. "Perturbed discrete time stochastic models." Doctoral thesis, Stockholms universitet, Matematiska institutionen, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-128979.

Full text
Abstract:
In this thesis, nonlinearly perturbed stochastic models in discrete time are considered. We give algorithms for construction of asymptotic expansions with respect to the perturbation parameter for various quantities of interest. In particular, asymptotic expansions are given for solutions of renewal equations, quasi-stationary distributions for semi-Markov processes, and ruin probabilities for risk processes.

At the time of the doctoral defense, the following papers were unpublished and had a status as follows: Paper 4: Manuscript. Paper 5: Manuscript. Paper 6: Manuscript.

APA, Harvard, Vancouver, ISO, and other styles
