Journal articles on the topic 'Continuous time Markov chain'

Consult the journal articles listed below for your research on the topic 'Continuous time Markov chain'.

Next to each source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of each academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Lekgari, Mokaedi V. "Maximal Coupling Procedure and Stability of Continuous-Time Markov Chains." Bulletin of Mathematical Sciences and Applications 10 (November 2014): 30–37. http://dx.doi.org/10.18052/www.scipress.com/bmsa.10.30.

Abstract:
In this study we first investigate the stability of subsampled discrete Markov chains through the use of the maximal coupling procedure. This is an extension of the available results on Markov chains and is realized through the analysis of the subsampled chain ΦTn, where {Tn, n ∈ Z+} is an increasing sequence of random stopping times. Similar results are then obtained for the stability of countable-state continuous-time Markov processes by employing the skeleton-chain method.
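The skeleton-chain method mentioned in this abstract can be illustrated with a small numerical sketch (our own toy example, not the author's construction; the generator Q and step h below are hypothetical): sampling a continuous-time chain with generator Q at times h, 2h, ... yields a discrete-time chain with transition matrix exp(hQ).

import numpy as np
from scipy.linalg import expm

# Hypothetical generator of a continuous-time Markov chain on 3 states:
# off-diagonal entries are jump rates, rows sum to zero.
Q = np.array([[-2.0, 1.0, 1.0],
              [0.5, -1.0, 0.5],
              [1.0, 2.0, -3.0]])

h = 0.1            # skeleton step size
P_h = expm(h * Q)  # transition matrix of the h-skeleton chain

rng = np.random.default_rng(0)
state, path = 0, [0]
for _ in range(20):  # simulate the skeleton (subsampled) chain
    state = rng.choice(3, p=P_h[state])
    path.append(state)
print(np.round(P_h, 4))
print(path)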
2

Eriksson, B., and M. R. Pistorius. "American Option Valuation under Continuous-Time Markov Chains." Advances in Applied Probability 47, no. 2 (June 2015): 378–401. http://dx.doi.org/10.1239/aap/1435236980.

Abstract:
This paper is concerned with the solution of the optimal stopping problem associated to the value of American options driven by continuous-time Markov chains. The value-function of an American option in this setting is characterised as the unique solution (in a distributional sense) of a system of variational inequalities. Furthermore, with continuous and smooth fit principles not applicable in this discrete state-space setting, a novel explicit characterisation is provided of the optimal stopping boundary in terms of the generator of the underlying Markov chain. Subsequently, an algorithm is presented for the valuation of American options under Markov chain models. By application to a suitably chosen sequence of Markov chains, the algorithm provides an approximate valuation of an American option under a class of Markov models that includes diffusion models, exponential Lévy models, and stochastic differential equations driven by Lévy processes. Numerical experiments for a range of different models suggest that the approximation algorithm is flexible and accurate. A proof of convergence is also provided.
3

Yap, V. B. "Similar States in Continuous-Time Markov Chains." Journal of Applied Probability 46, no. 2 (June 2009): 497–506. http://dx.doi.org/10.1239/jap/1245676102.

Abstract:
In a homogeneous continuous-time Markov chain on a finite state space, two states that jump to every other state with the same rate are called similar. By partitioning states into similarity classes, the algebraic derivation of the transition matrix can be simplified, using hidden holding times and lumped Markov chains. When the rate matrix is reversible, the transition matrix is explicitly related in an intuitive way to that of the lumped chain. The theory provides a unified derivation for a whole range of useful DNA base substitution models, and a number of amino acid substitution models.
4

Kijima, Masaaki. "Hazard rate and reversed hazard rate monotonicities in continuous-time Markov chains." Journal of Applied Probability 35, no. 3 (September 1998): 545–56. http://dx.doi.org/10.1239/jap/1032265203.

Abstract:
A continuous-time Markov chain on the non-negative integers is called skip-free to the right (left) if only unit increments to the right (left) are permitted. If a Markov chain is skip-free both to the right and to the left, it is called a birth–death process. Karlin and McGregor (1959) showed that if a continuous-time Markov chain is monotone in the sense of likelihood ratio ordering then it must be an (extended) birth–death process. This paper proves that if an irreducible Markov chain in continuous time is monotone in the sense of hazard rate (reversed hazard rate) ordering then it must be skip-free to the right (left). A birth–death process is then characterized as a continuous-time Markov chain that is monotone in the sense of both hazard rate and reversed hazard rate orderings. As an application, the first-passage-time distributions of such Markov chains are also studied.
5

Ball, Frank, and Geoffrey F. Yeo. "Lumpability and marginalisability for continuous-time Markov chains." Journal of Applied Probability 30, no. 3 (September 1993): 518–28. http://dx.doi.org/10.2307/3214762.

Abstract:
We consider lumpability for continuous-time Markov chains and provide a simple probabilistic proof of necessary and sufficient conditions for strong lumpability, valid in circumstances not covered by known theory. We also consider the following marginalisability problem. Let {X(t)} = {(X1(t), X2(t), ···, Xm(t))} be a continuous-time Markov chain. Under what conditions are the marginal processes {X1(t)}, {X2(t)}, ···, {Xm(t)} also continuous-time Markov chains? We show that this is related to lumpability and, if no two of the marginal processes can jump simultaneously, then they are continuous-time Markov chains if and only if they are mutually independent. Applications to ion channel modelling and birth–death processes are discussed briefly.
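A minimal sketch of the strong lumpability condition discussed above (an illustration assuming the standard equal-aggregate-rate criterion, not the authors' proof; the generator and partition are hypothetical): a partition of the state space is strongly lumpable for a generator Q when every state in a block has the same total rate into each other block.

import numpy as np

def is_strongly_lumpable(Q, blocks):
    """For each ordered pair of distinct blocks (A, B), every state in A
    must have the same total rate into B."""
    for A in blocks:
        for B in blocks:
            if A is B:
                continue
            rates = [Q[i, list(B)].sum() for i in A]
            if not np.allclose(rates, rates[0]):
                return False
    return True

# Hypothetical 4-state generator, candidate partition {0,1} | {2,3}.
Q = np.array([[-3.0, 1.0, 1.0, 1.0],
              [2.0, -4.0, 1.5, 0.5],
              [1.0, 1.0, -2.5, 0.5],
              [0.5, 1.5, 2.0, -4.0]])
# True: every state puts total rate 2.0 into the other block.
print(is_strongly_lumpable(Q, [[0, 1], [2, 3]]))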
6

Rydén, Tobias. "On identifiability and order of continuous-time aggregated Markov chains, Markov-modulated Poisson processes, and phase-type distributions." Journal of Applied Probability 33, no. 3 (September 1996): 640–53. http://dx.doi.org/10.2307/3215346.

Abstract:
An aggregated Markov chain is a Markov chain for which some states cannot be distinguished from each other by the observer. In this paper we consider the identifiability problem for such processes in continuous time, i.e. the problem of determining whether two parameters induce identical laws for the observable process or not. We also study the order of a continuous-time aggregated Markov chain, which is the minimum number of states needed to represent it. In particular, we give a lower bound on the order. As a by-product, we obtain results of this kind also for Markov-modulated Poisson processes, i.e. doubly stochastic Poisson processes whose intensities are directed by continuous-time Markov chains, and phase-type distributions, which are hitting times in finite-state Markov chains.
7

Coolen-Schrijner, Pauline, Andrew Hart, and Phil Pollett. "Quasistationarity of continuous-time Markov chains with positive drift." Journal of the Australian Mathematical Society. Series B. Applied Mathematics 41, no. 4 (April 2000): 423–41. http://dx.doi.org/10.1017/s0334270000011735.

Abstract:
We shall study continuous-time Markov chains on the nonnegative integers which are both irreducible and transient, and which exhibit discernible stationarity before drift to infinity "sets in". We will show how this 'quasi' stationary behaviour can be modelled using a limiting conditional distribution: specifically, the limiting state probabilities conditional on not having left 0 for the last time. By way of a dual chain, obtained by killing the original process on last exit from 0, we invoke the theory of quasistationarity for absorbing Markov chains. We prove that the conditioned state probabilities of the original chain are equal to the state probabilities of its dual conditioned on non-absorption, thus allowing us to establish the simultaneous existence, and then the equivalence, of their limiting conditional distributions. Although a limiting conditional distribution for the dual chain is always a quasistationary distribution in the usual sense, a similar statement is not possible for the original chain.
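Readers who want to experiment with quasistationarity numerically can use the following sketch (our illustration for a finite absorbing chain, not the authors' dual-chain argument; the rates are hypothetical): the quasistationary distribution is the normalised left eigenvector of the sub-generator on the transient states corresponding to the eigenvalue with maximal real part.

import numpy as np

# Hypothetical absorbing chain: states 0,1,2 transient, state 3 absorbing.
# Q_T is the sub-generator restricted to the transient states
# (row sums are strictly negative, the deficit is the absorption rate).
Q_T = np.array([[-3.0, 2.0, 0.5],
                [1.0, -2.0, 0.8],
                [0.2, 1.0, -2.5]])

eigvals, left_vecs = np.linalg.eig(Q_T.T)  # left eigenvectors of Q_T
k = np.argmax(eigvals.real)                # eigenvalue with maximal real part
qsd = np.abs(left_vecs[:, k].real)
qsd /= qsd.sum()                           # normalise to a probability vector
print("decay parameter:", -eigvals[k].real)
print("quasistationary distribution:", np.round(qsd, 4))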
8

Elliott, Robert J., and John van der Hoek. "Default Times in a Continuous Time Markov Chain Economy." Applied Mathematical Finance 20, no. 5 (November 2013): 450–60. http://dx.doi.org/10.1080/1350486x.2012.755825.

9

Esquível, Manuel L., Nadezhda P. Krasii, and Gracinda R. Guerreiro. "Open Markov Type Population Models: From Discrete to Continuous Time." Mathematics 9, no. 13 (June 25, 2021): 1496. http://dx.doi.org/10.3390/math9131496.

Abstract:
We address the problem of finding a natural continuous time Markov type process—in open populations—that best captures the information provided by an open Markov chain in discrete time which is usually the sole possible observation from data. Given the open discrete time Markov chain, we single out two main approaches: In the first one, we consider a calibration procedure of a continuous time Markov process using a transition matrix of a discrete time Markov chain and we show that, when the discrete time transition matrix is embeddable in a continuous time one, the calibration problem has optimal solutions. In the second approach, we consider semi-Markov processes—and open Markov schemes—and we propose a direct extension from the discrete time theory to the continuous time one by using a known structure representation result for semi-Markov processes that decomposes the process as a sum of terms given by the products of the random variables of a discrete time Markov chain by time functions built from an adequate increasing sequence of stopping times.
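The embeddability question at the heart of the first approach can be probed numerically. The following is a rough sketch, not the authors' calibration procedure, and it only checks the principal matrix logarithm; the transition matrix P is hypothetical. P is embeddable in this simple sense when log(P) is a valid generator: non-negative off-diagonal entries and zero row sums.

import numpy as np
from scipy.linalg import logm

def embeddable_generator(P, tol=1e-8):
    """Return Q with expm(Q) = P if the principal logm(P) is a valid
    generator, else None."""
    Q = logm(P).real
    n = len(P)
    off_diag_ok = all(Q[i, j] >= -tol for i in range(n) for j in range(n) if i != j)
    rows_ok = np.allclose(Q.sum(axis=1), 0.0, atol=1e-6)
    return Q if (off_diag_ok and rows_ok) else None

# Hypothetical one-step transition matrix of an observed discrete-time chain.
P = np.array([[0.90, 0.08, 0.02],
              [0.05, 0.90, 0.05],
              [0.02, 0.08, 0.90]])
Q = embeddable_generator(P)
print("embeddable" if Q is not None else "not embeddable")
if Q is not None:
    print(np.round(Q, 4))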
10

Fill, James Allen. "Time to Stationarity for a Continuous-Time Markov Chain." Probability in the Engineering and Informational Sciences 5, no. 1 (January 1991): 61–76. http://dx.doi.org/10.1017/s0269964800001893.

Abstract:
Separation is one measure of distance from stationarity for Markov chains. Strong stationary times provide bounds on separation and so aid in the analysis of mixing rates. The precise connection between separation and strong stationary times was drawn by Aldous and Diaconis (1987) (Advances in Applied Mathematics 8: 69–97) for discrete time chains. We develop the corresponding foundational theory for continuous time chains; several new and interesting mathematical issues arise.
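To make the separation measure concrete, here is a short numerical sketch (our illustration with a hypothetical generator, not the paper's theory): for an ergodic chain with stationary distribution π, separation at time t is sep(t) = max over i, j of 1 − Pij(t)/πj.

import numpy as np
from scipy.linalg import expm, null_space

Q = np.array([[-1.0, 0.7, 0.3],
              [0.4, -0.9, 0.5],
              [0.2, 0.8, -1.0]])  # hypothetical generator

pi = null_space(Q.T)[:, 0]
pi /= pi.sum()                    # stationary distribution: pi Q = 0

for t in [0.5, 1.0, 2.0, 4.0]:
    P_t = expm(t * Q)
    sep = np.max(1.0 - P_t / pi)  # separation distance at time t
    print(f"t={t}: separation = {sep:.4f}")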
11

Zhao, Pan. "Strong Stationary Duality and Algebraic Duality for Continuous Time Möbius Monotone Markov Chains." International Journal of Applied Mathematics and Machine Learning 15, no. 2 (December 5, 2021): 69–86. http://dx.doi.org/10.18642/ijamml_710012241.

Abstract:
Under the assumption of Möbius monotonicity, we develop the theory of strong stationary duality for continuous time Markov chains on a finite partially ordered state space, and we also construct a nonexplosive algebraic duality for continuous time Markov chains. Finally, we present an application to the two-dimensional birth and death chain.
12

Ross, Sheldon M. "A Note on Approximating Mean Occupation Times of Continuous-Time Markov Chains." Probability in the Engineering and Informational Sciences 2, no. 2 (April 1988): 267–68. http://dx.doi.org/10.1017/s0269964800000796.

Abstract:
In [1] an approach to approximate the transition probabilities and mean occupation times of a continuous-time Markov chain is presented. For the chain under consideration, let Pij(t) and Tij(t) denote respectively the probability that it is in state j at time t, and the total time spent in j by time t, in both cases conditional on the chain starting in state i. Also, let Y1,…, Yn be independent exponential random variables each with rate λ = n/t, which are also independent of the Markov chain.
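One way to realise the approximation this note describes, sketched here from our reading with a hypothetical generator, uses the fact that evaluating the chain at the independent Erlang time Y1 + ··· + Yn replaces the matrix exponential by a matrix inverse: E[P(Y1 + ··· + Yn)] = (I − (t/n)Q)^(−n), which converges to P(t) = exp(tQ) as n grows.

import numpy as np
from scipy.linalg import expm

Q = np.array([[-2.0, 2.0, 0.0],
              [1.0, -3.0, 2.0],
              [0.5, 0.5, -1.0]])  # hypothetical generator
t = 1.5
I = np.eye(3)

exact = expm(t * Q)
for n in [1, 5, 25, 125]:
    # (I - (t/n)Q)^(-n): expectation of P(.) at an Erlang(n, n/t) time
    approx = np.linalg.matrix_power(np.linalg.inv(I - (t / n) * Q), n)
    err = np.abs(approx - exact).max()
    print(f"n={n}: max abs error = {err:.2e}")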
13

Geweke, John, Robert C. Marshall, and Gary A. Zarkin. "Exact Inference for Continuous Time Markov Chain Models." Review of Economic Studies 53, no. 4 (August 1986): 653. http://dx.doi.org/10.2307/2297610.

14

Zhang, Wei-feng, Xia Liu, Ying-zhou Zhang, and Guo-qiang Zhou. "Continuous time Markov chain based website navigability measure." Journal of China Universities of Posts and Telecommunications 18, no. 2 (April 2011): 45–52. http://dx.doi.org/10.1016/s1005-8885(10)60043-x.

15

Coolen-Schrijner, Pauline, and Erik A. van Doorn. "The Deviation Matrix of a Continuous-Time Markov Chain." Probability in the Engineering and Informational Sciences 16, no. 3 (May 22, 2002): 351–66. http://dx.doi.org/10.1017/s0269964802163066.

Abstract:
The deviation matrix of an ergodic, continuous-time Markov chain with transition probability matrix P(·) and ergodic matrix Π is the matrix D ≡ ∫₀^∞ (P(t) − Π) dt. We give conditions for D to exist and discuss properties and a representation of D. The deviation matrix of a birth–death process is investigated in detail. We also describe a new application of deviation matrices by showing that a measure for the convergence to stationarity of a stochastically increasing Markov chain can be expressed in terms of the elements of the deviation matrix of the chain.
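For a finite ergodic chain the deviation matrix admits a closed form, which gives a quick way to experiment with it (a sketch using the standard identity D = (Π − Q)^(−1) − Π for finite ergodic chains; the generator below is hypothetical, and this is our illustration rather than the paper's derivation).

import numpy as np
from scipy.linalg import null_space

Q = np.array([[-1.0, 1.0, 0.0],
              [0.5, -1.5, 1.0],
              [0.5, 0.5, -1.0]])  # hypothetical ergodic generator

pi = null_space(Q.T)[:, 0]
pi /= pi.sum()
Pi = np.tile(pi, (3, 1))          # ergodic matrix: every row equals pi

D = np.linalg.inv(Pi - Q) - Pi    # deviation matrix
print(np.round(D, 4))
print("D @ ones:", np.round(D @ np.ones(3), 8))  # rows of D sum to zero
print("pi @ D:  ", np.round(pi @ D, 8))          # pi D = 0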
16

Marwa, Yohana Maiga, Isambi Sailon Mbalawata, and Samuel Mwalili. "Continuous Time Markov Chain Model for Cholera Epidemic Transmission Dynamics." International Journal of Statistics and Probability 8, no. 3 (April 18, 2019): 32. http://dx.doi.org/10.5539/ijsp.v8n3p32.

Abstract:
This paper is concerned with modeling cholera epidemics. Despite the advances made in understanding this disease and its treatment, cholera continues to be a major public health problem in many countries. Deterministic and stochastic models have emerged in the modeling of cholera epidemics in order to understand the mechanism by which the disease spreads and the conditions under which it produces minor or major outbreaks. We formulate a continuous time Markov chain model for cholera epidemic transmission from the deterministic model. The basic reproduction number (R0) and the extinction thresholds of the corresponding cholera continuous time Markov chain model are derived under certain assumptions. We find that the probability of extinction (no outbreak) is 1 if R0 < 1, but less than 1 if R0 > 1. We also carry out numerical simulations using the Gillespie algorithm and the Runge–Kutta method to generate sample paths of the cholera continuous time Markov chain model and the solution of the ordinary differential equation, respectively. The results show that the sample path of the continuous time Markov chain model fluctuates within the solution of the ordinary differential equation.
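The Gillespie algorithm used for the simulations is easy to sketch for a generic continuous-time Markov chain (an illustrative toy version with a hypothetical generator, not the authors' cholera model): hold in the current state for an exponential time with the state's total exit rate, then jump according to the normalised off-diagonal rates.

import numpy as np

def gillespie(Q, x0, t_max, rng):
    """Simulate one sample path of a CTMC with generator Q up to time t_max."""
    t, x = 0.0, x0
    times, states = [0.0], [x0]
    while True:
        rate = -Q[x, x]                   # total exit rate of the current state
        if rate <= 0:                     # absorbing state
            break
        t += rng.exponential(1.0 / rate)  # exponential holding time
        if t > t_max:
            break
        probs = Q[x].clip(min=0.0) / rate  # jump probabilities from off-diagonal rates
        x = rng.choice(len(probs), p=probs)
        times.append(t)
        states.append(x)
    return times, states

Q = np.array([[-1.0, 1.0, 0.0],
              [0.5, -1.5, 1.0],
              [0.0, 2.0, -2.0]])  # hypothetical generator
times, states = gillespie(Q, 0, 10.0, np.random.default_rng(1))
print(list(zip(np.round(times, 3), states)))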
17

Xiang, Xuyan, Xiao Zhang, and Xiaoyun Mo. "Statistical Identification of Markov Chain on Trees." Mathematical Problems in Engineering 2018 (2018): 1–13. http://dx.doi.org/10.1155/2018/2036248.

Abstract:
The theoretical study of continuous-time homogeneous Markov chains is usually based on the natural assumption of a known transition rate matrix (TRM). However, the TRM of a Markov chain in realistic systems might be unknown and might even need to be identified from partially observable data. Thus, the question of how to identify the TRM of the underlying Markov chain from partially observable information is of great significance in applications. This is what we call the statistical identification of a Markov chain. The Markov chain inversion approach has been derived for basic Markov chains by partial observation at a few states. In the current letter, a more extensive class of Markov chains on trees is investigated. First, a more operable type of derivative constraint is developed. Then, it is shown that all Markov chains on trees can be identified only by such derivative constraints of the univariate distributions of sojourn time and/or hitting time at a few states. A numerical example is included to demonstrate the correctness of the proposed algorithms.
18

Kijima, Masaaki. "Quasi-limiting distributions of Markov chains that are skip-free to the left in continuous time." Journal of Applied Probability 30, no. 3 (September 1993): 509–17. http://dx.doi.org/10.2307/3214761.

Abstract:
A continuous-time Markov chain on the non-negative integers is called skip-free to the left (right) if the governing infinitesimal generator A = (aij) has the property that aij = 0 for j ≦ i ‒ 2 (i ≦ j – 2). If a Markov chain is skip-free both to the left and to the right, it is called a birth-death process. Quasi-limiting distributions of birth–death processes have been studied in detail in their own right and from the standpoint of finite approximations. In this paper, we generalize, to some extent, results for birth-death processes to Markov chains that are skip-free to the left in continuous time. In particular the decay parameter of skip-free Markov chains is shown to have a similar representation to the birth-death case and a result on convergence of finite quasi-limiting distributions is obtained.
19

Krak, Thomas, Jasper De Bock, and Arno Siebes. "Imprecise continuous-time Markov chains." International Journal of Approximate Reasoning 88 (September 2017): 452–528. http://dx.doi.org/10.1016/j.ijar.2017.06.012.

20

Guo, Xianping, and Onésimo Hernández-Lerma. "Continuous-time controlled Markov chains." Annals of Applied Probability 13, no. 1 (2003): 363–88. http://dx.doi.org/10.1214/aoap/1042765671.

21

Suchard, Marc A., Robert E. Weiss, and Janet S. Sinsheimer. "Bayesian Selection of Continuous-Time Markov Chain Evolutionary Models." Molecular Biology and Evolution 18, no. 6 (June 1, 2001): 1001–13. http://dx.doi.org/10.1093/oxfordjournals.molbev.a003872.

22

Huo, Yunzhang, and Ping Ji. "Continuous-Time Markov Chain–Based Flux Analysis in Metabolism." Journal of Computational Biology 21, no. 9 (September 2014): 691–98. http://dx.doi.org/10.1089/cmb.2014.0073.

23

Le, Hung V., and M. J. Tsatsomeros. "Matrix Analysis for Continuous-Time Markov Chains." Special Matrices 10, no. 1 (January 1, 2021): 219–33. http://dx.doi.org/10.1515/spma-2021-0157.

Abstract:
Continuous-time Markov chains have transition matrices that vary continuously in time. Classical theory of nonnegative matrices, M-matrices and matrix exponentials is used in the literature to study their dynamics, probability distributions and other stochastic properties. For the benefit of Perron-Frobenius cognoscentes, this theory is surveyed and further adapted to study continuous-time Markov chains on finite state spaces.
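The connection to nonnegative-matrix theory can be made concrete through uniformisation (a sketch we add for illustration, with a hypothetical generator): for any λ ≥ max |Qii|, the matrix P = I + Q/λ is stochastic, so Perron-Frobenius theory applies to it, and P(t) = Σ_{k≥0} e^(−λt) (λt)^k / k! · P^k.

import numpy as np
from scipy.linalg import expm
from scipy.stats import poisson

Q = np.array([[-2.0, 1.5, 0.5],
              [1.0, -1.0, 0.0],
              [0.5, 2.5, -3.0]])  # hypothetical generator
lam = np.abs(np.diag(Q)).max()    # uniformisation rate
P = np.eye(3) + Q / lam           # a stochastic (nonnegative) matrix

# Truncated uniformisation series vs the matrix exponential.
t, K = 0.8, 60
P_t = sum(poisson.pmf(k, lam * t) * np.linalg.matrix_power(P, k) for k in range(K))
print("max deviation from expm:", np.abs(P_t - expm(t * Q)).max())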
24

Kijima, Masaaki. "On passage and conditional passage times for Markov chains in continuous time." Journal of Applied Probability 25, no. 2 (June 1988): 279–90. http://dx.doi.org/10.2307/3214436.

Abstract:
Let X(t) be a temporally homogeneous irreducible Markov chain in continuous time defined on the non-negative integers. For k < i < j, let H = {k + 1, ···, j − 1} and let kTij (jTik) be the upward (downward) conditional first-passage time of X(t) from i to j (k) given no visit to the states outside H. These conditional passage times are studied through first-passage times of a modified chain HX(t) constructed by making the states outside H absorbing. It will be shown that the densities of kTij and jTik for any birth–death process are unimodal and the modes kmij (jmik) of the unimodal densities are non-increasing (non-decreasing) with respect to i. Some distribution properties of kTij and jTik for a time-reversible Markov chain are presented. Symmetry among kTij, jTik and the corresponding conditional passage times of the reversed process of X(t) is also discussed.
25

Hahn, Markus, and Jörn Sass. "Parameter estimation in continuous time Markov switching models: a semi-continuous Markov chain Monte Carlo approach." Bayesian Analysis 4, no. 1 (March 2009): 63–84. http://dx.doi.org/10.1214/09-ba402.

26

Miller, A. B., B. M. Miller, and K. V. Stepanyan. "Simultaneous Impulse and Continuous Control of a Markov Chain in Continuous Time." Automation and Remote Control 81, no. 3 (March 2020): 469–82. http://dx.doi.org/10.1134/s0005117920030066.

27

Norberg, Ragnar. "The Markov Chain Market." ASTIN Bulletin 33, no. 2 (November 2003): 265–87. http://dx.doi.org/10.2143/ast.33.2.503693.

Abstract:
We consider a financial market driven by a continuous time homogeneous Markov chain. Conditions for absence of arbitrage and for completeness are spelled out, non-arbitrage pricing of derivatives is discussed, and details are worked out for some cases. Closed form expressions are obtained for interest rate derivatives. Computations typically amount to solving a set of first order partial differential equations. An excursion into risk minimization in the incomplete case illustrates the matrix techniques that are instrumental in the model.
28

Sharma, Vinod. "Approximations of general discrete time queues by discrete time queues with arrivals modulated by finite chains." Advances in Applied Probability 29, no. 4 (December 1997): 1039–59. http://dx.doi.org/10.2307/1427853.

Abstract:
Recently, Asmussen and Koole (Journal of Applied Probability 30, pp. 365–372) showed that any discrete or continuous time marked point process can be approximated by a sequence of arrival streams modulated by finite state continuous time Markov chains. If the original process is customer (time) stationary then so are the approximating processes. Also, the moments in the stationary case converge. For discrete marked point processes we construct a sequence of discrete processes modulated by discrete time finite state Markov chains. All the above features of approximating sequences of Asmussen and Koole continue to hold. For discrete arrival sequences (to a queue) which are modulated by a countable state Markov chain we form a different sequence of approximating arrival streams by which, unlike in the Asmussen and Koole case, even the stationary moments of waiting times can be approximated. Explicit constructions for the output process of a queue and the total input process of a discrete time Jackson network with these characteristics are obtained.
29

Ball, Frank, Robin K. Milne, Ian D. Tame, and Geoffrey F. Yeo. "Superposition of Interacting Aggregated Continuous-Time Markov Chains." Advances in Applied Probability 29, no. 1 (March 1997): 56–91. http://dx.doi.org/10.2307/1427861.

Abstract:
Consider a system of interacting finite Markov chains in continuous time, where each subsystem is aggregated by a common partitioning of the state space. The interaction is assumed to arise from dependence of some of the transition rates for a given subsystem at a specified time on the states of the other subsystems at that time. With two subsystem classes, labelled 0 and 1, the superposition process arising from a system counts the number of subsystems in the latter class. Key structure and results from the theory of aggregated Markov processes are summarized. These are then applied also to superposition processes. In particular, we consider invariant distributions for the level m entry process, marginal and joint distributions for sojourn-times of the superposition process at its various levels, and moments and correlation functions associated with these distributions. The distributions are obtained mainly by using matrix methods, though an approach based on point process methods and conditional probability arguments is outlined. Conditions under which an interacting aggregated Markov chain is reversible are established. The ideas are illustrated with simple examples for which numerical results are obtained using Matlab. Motivation for this study has come from stochastic modelling of the behaviour of ion channels; another application is in reliability modelling.
30

Pritchard, Geoffrey, and David J. Scott. "Empirical convergence rates for continuous-time Markov chains." Journal of Applied Probability 38, no. 1 (March 2001): 262–69. http://dx.doi.org/10.1239/jap/996986661.

Abstract:
We consider the problem of estimating the rate of convergence to stationarity of a continuous-time, finite-state Markov chain. This is done via an estimator of the second-largest eigenvalue of the transition matrix, which in turn is based on conventional inference in a parametric model. We obtain a limiting distribution for the eigenvalue estimator. As an example we treat an M/M/c/c queue, and show that the method allows us to estimate the time to stationarity τ within a time comparable to τ.
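The quantity being estimated has a simple numerical counterpart (a sketch with a hypothetical generator, not the authors' statistical estimator): the rate of convergence to stationarity is governed by the spectral gap, the negative of the second-largest real part among the eigenvalues of the generator.

import numpy as np

Q = np.array([[-1.0, 0.6, 0.4],
              [0.3, -0.8, 0.5],
              [0.4, 0.6, -1.0]])  # hypothetical generator

# The largest real part is 0; the second-largest gives the spectral gap.
eig = np.sort(np.linalg.eigvals(Q).real)[::-1]
gap = -eig[1]
print("spectral gap:", round(gap, 4))
print("time-to-stationarity scale ~", round(1.0 / gap, 4))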
31

Böttcher, Björn. "Embedded Markov chain approximations in Skorokhod topologies." Probability and Mathematical Statistics 39, no. 2 (December 19, 2019): 259–77. http://dx.doi.org/10.19195/0208-4147.39.2.2.

Abstract:
We prove a J1-tightness condition for embedded Markov chains and discuss four Skorokhod topologies in a unified manner. To approximate a continuous time stochastic process by discrete time Markov chains, one has several options to embed the Markov chains into continuous time processes. On the one hand, there is a Markov embedding which uses exponential waiting times. On the other hand, each Skorokhod topology naturally suggests a certain embedding. These are the step function embedding for J1, the linear interpolation embedding for M1, the multistep embedding for J2 and a more general embedding for M2. We show that the convergence of the step function embedding in J1 implies the convergence of the other embeddings in the corresponding topologies. For the converse statement, a J1-tightness condition for embedded time-homogeneous Markov chains is given. Additionally, it is shown that J1 convergence is equivalent to the joint convergence in M1 and J2.
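The two embeddings contrasted above are easy to visualise in code (an illustrative sketch of our own, not the paper's constructions; the path is hypothetical): the step function embedding jumps at deterministic integer times, while the Markov embedding jumps after i.i.d. exponential waiting times.

import numpy as np

rng = np.random.default_rng(2)
path = rng.integers(0, 3, size=10)  # a hypothetical discrete-time path

def step_embedding(path, t):
    """Step function embedding: value of the path at deterministic time floor(t)."""
    return path[min(int(t), len(path) - 1)]

def markov_embedding_times(path, rng):
    """Markov embedding: jump times are partial sums of Exp(1) waiting times."""
    waits = rng.exponential(1.0, size=len(path))
    return np.cumsum(waits)

jump_times = markov_embedding_times(path, rng)
print("path:      ", path)
print("jump times:", np.round(jump_times, 3))
print("step value at t=4.7:", step_embedding(path, 4.7))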
32

Magazev, A. A., A. S. Melnikova, and V. F. Tsyrulnik. "Evaluating mean time to security failure based on continuous-time Markov chains." Mathematical Structures and Modeling, no. 4 (56) (December 18, 2020): 112–25. http://dx.doi.org/10.24147/2222-8772.2020.4.112-125.

Abstract:
In the article, we consider a Markov model of computer attacks, in the framework of which attacks and the system's responses are modelled by homogeneous Poisson point processes. We describe a method of solving the corresponding Kolmogorov system of equations by calculating the eigenvalues and eigenvectors of a certain matrix. An important random variable associated with the corresponding Markov chain, called the time to security failure, is explored in detail. A comparison of the results obtained with the results of simulation modelling is presented.
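When the failure state is absorbing, the mean time to security failure has a standard closed form. The following sketch works under that assumption, with hypothetical rates rather than the article's model: if Q_T is the sub-generator on the non-failure states, the vector m of mean times to failure solves −Q_T m = 1.

import numpy as np

# Hypothetical model: states 0-2 are operational, state 3 is security failure.
Q = np.array([[-1.2, 0.8, 0.3, 0.1],
              [0.5, -1.5, 0.6, 0.4],
              [0.2, 0.3, -1.0, 0.5],
              [0.0, 0.0, 0.0, 0.0]])

Q_T = Q[:3, :3]                        # sub-generator on non-failure states
m = np.linalg.solve(-Q_T, np.ones(3))  # mean time to failure from each state
print("mean time to security failure:", np.round(m, 4))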
33

Bäuerle, Nicole, Igor Gilitschenski, and Uwe Hanebeck. "Exact and approximate hidden Markov chain filters based on discrete observations." Statistics & Risk Modeling 32, no. 3-4 (December 1, 2015): 159–76. http://dx.doi.org/10.1515/strm-2015-0004.

Abstract:
We consider a Hidden Markov Model (HMM) where the integrated continuous-time Markov chain can be observed at discrete time points perturbed by a Brownian motion. The aim is to derive a filter for the underlying continuous-time Markov chain. The recursion formula for the discrete-time filter is easy to derive; however, it involves densities which are very hard to obtain. In this paper we derive exact formulas for the necessary densities in the case where the state space of the HMM consists of two elements only. This is done by relating the underlying integrated continuous-time Markov chain to the so-called asymmetric telegraph process and by using recent results on this process. In the case where the state space consists of more than two elements we present three different ways to approximate the densities for the filter. The first approach is based on the continuous filter problem. The second approach is to derive a PDE for the densities and solve it numerically. The third approach is a crude discrete time approximation of the Markov chain. All three approaches are compared in a numerical study.
34

Aziz, Adnan, Kumud Sanwal, Vigyan Singhal, and Robert Brayton. "Model-checking continuous-time Markov chains." ACM Transactions on Computational Logic 1, no. 1 (July 2000): 162–70. http://dx.doi.org/10.1145/343369.343402.

35

Pollett, P. K. "Integrals for continuous-time Markov chains." Mathematical Biosciences 182, no. 2 (April 2003): 213–25. http://dx.doi.org/10.1016/s0025-5564(02)00161-x.

36

Li, Pei-Sen. "Perturbations of continuous-time Markov chains." Statistics & Probability Letters 125 (June 2017): 17–24. http://dx.doi.org/10.1016/j.spl.2017.01.018.

37

Johnson, Jean T. "Continuous-time, constant causative Markov chains." Stochastic Processes and their Applications 26 (1987): 161–71. http://dx.doi.org/10.1016/0304-4149(87)90057-3.

38

Aggoun, L., L. Benkherouf, and L. Tadj. "Filtering of continuous-time Markov chains." Mathematical and Computer Modelling 26, no. 12 (December 1997): 73–83. http://dx.doi.org/10.1016/s0895-7177(97)00241-0.

39

Burini, Diletta, Elena De Angelis, and Miroslaw Lachowicz. "A Continuous–Time Markov Chain Modeling Cancer–Immune System Interactions." Communications in Applied and Industrial Mathematics 9, no. 2 (December 1, 2018): 106–18. http://dx.doi.org/10.2478/caim-2018-0018.

Abstract:
In the present paper we propose two mathematical models describing, respectively at the microscopic level and at the mesoscopic level, a system of interacting tumor cells and cells of the immune system. The microscopic model is given in terms of a Markov chain defined by its generator, while the mesoscopic model is developed in the framework of the kinetic theory of active particles. The main result is to prove the transition from the microscopic to the mesoscopic level of description.