Journal articles on the topic 'Markov chain simulation'




Consult the top 50 journal articles for your research on the topic 'Markov chain simulation.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Boucher, Thomas R., and Daren B. H. Cline. "Piggybacking Threshold Processes with a Finite State Markov Chain." Stochastics and Dynamics 9, no. 2 (June 2009): 187–204. http://dx.doi.org/10.1142/s0219493709002622.

Abstract:
The state-space representations of certain nonlinear autoregressive time series are general state Markov chains. The transitions of a general state Markov chain among regions in its state-space can be modeled with the transitions among states of a finite state Markov chain. Stability of the time series is then informed by the stationary distributions of the finite state Markov chain. This approach generalizes some previous results.
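A minimal illustration of the idea of tracking a general-state chain through finitely many regions (my own toy construction, with arbitrary regimes and region boundaries, not the authors' piggybacking argument): simulate a threshold AR(1) process, estimate the transition matrix among four state-space regions, and read off the stationary distribution of the induced finite-state chain.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy threshold AR(1): the autoregressive coefficient depends on the sign
# of the previous value (hypothetical regime parameters).
n = 100_000
x = np.zeros(n)
for t in range(1, n):
    phi = 0.3 if x[t - 1] < 0 else 0.8
    x[t] = phi * x[t - 1] + rng.normal()

# Partition the state space into four regions and count region-to-region
# transitions to get an empirical finite-state transition matrix.
region = np.digitize(x, [-1.0, 0.0, 1.0])      # region index 0..3
P = np.zeros((4, 4))
np.add.at(P, (region[:-1], region[1:]), 1.0)
P /= P.sum(axis=1, keepdims=True)

# Stationary distribution of the finite chain: left Perron eigenvector.
w, v = np.linalg.eig(P.T)
pi = np.abs(np.real(v[:, np.argmax(np.real(w))]))
pi /= pi.sum()
print("region transition matrix:\n", np.round(P, 3))
print("stationary distribution over regions:", np.round(pi, 3))
```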
2

Bucklew, James A., Peter Ney, and John S. Sadowsky. "Monte Carlo simulation and large deviations theory for uniformly recurrent Markov chains." Journal of Applied Probability 27, no. 1 (March 1990): 44–59. http://dx.doi.org/10.2307/3214594.

Abstract:
Importance sampling is a Monte Carlo simulation technique in which the simulation distribution is different from the true underlying distribution. In order to obtain an unbiased Monte Carlo estimate of the desired parameter, simulated events are weighted to reflect their true relative frequency. In this paper, we consider the estimation via simulation of certain large deviations probabilities for time-homogeneous Markov chains. We first demonstrate that when the simulation distribution is also a homogeneous Markov chain, the estimator variance will vanish exponentially as the sample size n tends to ∞. We then prove that the estimator variance is asymptotically minimized by the same exponentially twisted Markov chain which arises in large deviation theory, and furthermore, this optimization is unique among uniformly recurrent homogeneous Markov chain simulation distributions.
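The exponential-twisting recipe is concrete enough to sketch. The toy example below (a two-state chain with illustrative parameters, not taken from the paper) estimates P(S_n/n ≥ a) for an additive functional S_n by simulating the exponentially twisted chain and reweighting each path with its likelihood ratio λ(θ)^n e^{−θS_n} r(X_0)/r(X_n), where λ and r are the Perron root and right eigenvector of the tilted kernel.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-state chain with additive reward f; estimate P(S_n/n >= a), where
# S_n = f(X_1) + ... + f(X_n). All parameters are illustrative.
P = np.array([[0.7, 0.3],
              [0.3, 0.7]])
f = np.array([0.0, 1.0])          # reward: time spent in state 1
n, a = 100, 0.7

def twist(theta):
    """Tilted kernel A(i,j) = P(i,j) e^{theta f(j)}, its Perron root lam and
    right eigenvector r, and the twisted kernel A(i,j) r(j) / (lam r(i))."""
    A = P * np.exp(theta * f)[None, :]
    w, v = np.linalg.eig(A)
    k = np.argmax(w.real)
    lam, r = w.real[k], np.abs(v.real[:, k])
    Pt = A * r[None, :] / (lam * r[:, None])
    return Pt / Pt.sum(axis=1, keepdims=True), lam, r

def drift(theta):
    """Long-run mean of f under the twisted chain."""
    Pt, _, _ = twist(theta)
    w, v = np.linalg.eig(Pt.T)
    pi = np.abs(v.real[:, np.argmax(w.real)]); pi /= pi.sum()
    return pi @ f

# Pick theta so the twisted chain drifts at rate ~ a (crude grid search).
grid = np.linspace(0.0, 3.0, 301)
theta = grid[np.argmin([abs(drift(t) - a) for t in grid])]
Pt, lam, r = twist(theta)

m = 10_000
est = np.zeros(m)
for i in range(m):
    x, s = 0, 0.0                       # chain started in state 0
    for _ in range(n):
        x = rng.choice(2, p=Pt[x])
        s += f[x]
    if s >= n * a:
        est[i] = lam**n * np.exp(-theta * s) * r[0] / r[x]   # likelihood ratio
print(f"IS estimate: {est.mean():.3e} +/- {1.96 * est.std() / np.sqrt(m):.1e}")
```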
3

Bucklew, James A., Peter Ney, and John S. Sadowsky. "Monte Carlo simulation and large deviations theory for uniformly recurrent Markov chains." Journal of Applied Probability 27, no. 1 (March 1990): 44–59. http://dx.doi.org/10.1017/s0021900200038419.

Abstract:
Importance sampling is a Monte Carlo simulation technique in which the simulation distribution is different from the true underlying distribution. In order to obtain an unbiased Monte Carlo estimate of the desired parameter, simulated events are weighted to reflect their true relative frequency. In this paper, we consider the estimation via simulation of certain large deviations probabilities for time-homogeneous Markov chains. We first demonstrate that when the simulation distribution is also a homogeneous Markov chain, the estimator variance will vanish exponentially as the sample size n tends to ∞. We then prove that the estimator variance is asymptotically minimized by the same exponentially twisted Markov chain which arises in large deviation theory, and furthermore, this optimization is unique among uniformly recurrent homogeneous Markov chain simulation distributions.
4

Chung, Gunhui, Kyu Bum Sim, Deok Jun Jo, and Eung Seok Kim. "Hourly Precipitation Simulation Characteristic Analysis Using Markov Chain Model." Journal of Korean Society of Hazard Mitigation 16, no. 3 (June 30, 2016): 351–57. http://dx.doi.org/10.9798/kosham.2016.16.3.351.

5

Glynn, Peter W., and Chang-Han Rhee. "Exact estimation for Markov chain equilibrium expectations." Journal of Applied Probability 51, A (December 2014): 377–89. http://dx.doi.org/10.1239/jap/1417528487.

Abstract:
We introduce a new class of Monte Carlo methods, which we call exact estimation algorithms. Such algorithms provide unbiased estimators for equilibrium expectations associated with real-valued functionals defined on a Markov chain. We provide easily implemented algorithms for the class of positive Harris recurrent Markov chains, and for chains that are contracting on average. We further argue that exact estimation in the Markov chain setting provides a significant theoretical relaxation relative to exact simulation methods.
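One member of this class is easy to sketch for a chain that contracts on average (the chain, coupling, and truncation variable below are my illustrative choices, in the spirit of the paper rather than its generality): two copies of an AR(1) chain share innovations offset by one step, so the differences f(X_k) − f(X̃_{k−1}) telescope in expectation to the equilibrium mean and shrink geometrically; truncating at a random N and weighting by 1/P(N ≥ k) keeps the estimator unbiased.

```python
import numpy as np

rng = np.random.default_rng(2)

rho, x0 = 0.5, 1.0
f = lambda x: x * x        # equilibrium mean of f is 1/(1 - rho^2) for N(0,1) noise
p = 0.5                    # N geometric; need (1 - p) > rho^2 for finite variance

def one_estimate():
    N = rng.geometric(p) - 1            # support {0,1,...}, P(N >= k) = (1-p)^k
    est = f(x0)                         # k = 0 term
    X = Xt = x0
    for k in range(1, N + 1):
        e = rng.normal()
        X = rho * X + e                 # X_k, driven by eps_1, ..., eps_k
        if k >= 2:
            Xt = rho * Xt + e           # Xt_{k-1}, driven by eps_2, ..., eps_k
        est += (f(X) - f(Xt)) / (1 - p) ** k
    return est

Y = np.array([one_estimate() for _ in range(200_000)])
print("unbiased estimate:", round(Y.mean(), 4), " exact value:", round(4 / 3, 4))
```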
6

Glynn, Peter W., and Chang-Han Rhee. "Exact estimation for Markov chain equilibrium expectations." Journal of Applied Probability 51, A (December 2014): 377–89. http://dx.doi.org/10.1017/s0021900200021392.

Abstract:
We introduce a new class of Monte Carlo methods, which we call exact estimation algorithms. Such algorithms provide unbiased estimators for equilibrium expectations associated with real-valued functionals defined on a Markov chain. We provide easily implemented algorithms for the class of positive Harris recurrent Markov chains, and for chains that are contracting on average. We further argue that exact estimation in the Markov chain setting provides a significant theoretical relaxation relative to exact simulation methods.
7

Jasra, Ajay, Kody J. H. Law, and Yaxian Xu. "Markov chain simulation for multilevel Monte Carlo." Foundations of Data Science 3, no. 1 (2021): 27. http://dx.doi.org/10.3934/fods.2021004.

8

Li, Weidong, Baoguo Li, and Yuanchun Shi. "Markov-chain simulation of soil textural profiles." Geoderma 92, no. 1-2 (September 1999): 37–53. http://dx.doi.org/10.1016/s0016-7061(99)00024-5.

9

Milios, Dimitrios, and Stephen Gilmore. "Markov Chain Simulation with Fewer Random Samples." Electronic Notes in Theoretical Computer Science 296 (August 2013): 183–97. http://dx.doi.org/10.1016/j.entcs.2013.07.012.

10

Skeel, Robert, and Youhan Fang. "Comparing Markov Chain Samplers for Molecular Simulation." Entropy 19, no. 10 (October 21, 2017): 561. http://dx.doi.org/10.3390/e19100561.

11

Krajčí, M. "Markov chain algorithms for canonical ensemble simulation." Computer Physics Communications 42, no. 1 (September 1986): 29–35. http://dx.doi.org/10.1016/0010-4655(86)90227-4.

12

Hajihashemi, Mahdi, and Keivan Aghababaei Samani. "Multi-strategy evolutionary games: A Markov chain approach." PLOS ONE 17, no. 2 (February 17, 2022): e0263979. http://dx.doi.org/10.1371/journal.pone.0263979.

Abstract:
Interacting strategies in evolutionary games are studied analytically in a well-mixed population using a Markov chain method. By establishing a correspondence between an evolutionary game and Markov chain dynamics, we show that results obtained from the fundamental matrix method in Markov chain dynamics are equivalent to corresponding ones in the evolutionary game. In the conventional fundamental matrix method, quantities like fixation probability and fixation time are calculable. Using a theorem in the fundamental matrix method, the conditional fixation time in the absorbing Markov chain is calculable. Also, in the ergodic Markov chain, the stationary probability distribution that describes the Markov chain’s stationary state is calculable analytically. Finally, the rock–scissors–paper evolutionary game is evaluated as an example, and the results of the analytical method and simulations are compared. Using this analytical method saves time and computational resources compared to prevalent simulation methods.
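The fundamental matrix computation mentioned here is a few lines of linear algebra. A sketch on a generic absorbing birth–death chain (my toy example, not the paper's game): with the transient-to-transient block Q and transient-to-absorbing block R, the fundamental matrix N = (I − Q)^{−1} gives expected absorption times as N·1 and fixation probabilities as N·R.

```python
import numpy as np

# Random walk on {0,...,4}; states 0 and 4 absorb, 1-3 are transient
# (illustrative step probabilities).
up, down = 0.6, 0.4
Q = np.array([[0.0, up, 0.0],      # transient -> transient
              [down, 0.0, up],
              [0.0, down, 0.0]])
R = np.array([[down, 0.0],         # transient -> absorbing {0, 4}
              [0.0, 0.0],
              [0.0, up]])

N = np.linalg.inv(np.eye(3) - Q)   # fundamental matrix
print("expected steps to absorption from 1,2,3:", np.round(N @ np.ones(3), 3))
print("absorption (fixation) probabilities:\n", np.round(N @ R, 3))
```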
13

Nishio, Yoshifumi, Yuta Komatsu, Yoko Uwate, and Martin Hasler. "Markov Chain Modeling and Analysis of Complicated Phenomena in Coupled Chaotic Oscillators." Journal of Circuits, Systems and Computers 19, no. 4 (June 2010): 801–18. http://dx.doi.org/10.1142/s0218126610006451.

Abstract:
In this paper, we propose a Markov chain model of complicated phenomena observed from coupled chaotic oscillators. Once the transition probability matrix is obtained from computer simulation results, various statistical quantities can be easily calculated from the model. Features derived from the Markov chain models of chaotic wandering of synchronization states and of switching of clustering states are compared with those obtained from computer simulations of the original circuit equations.
14

Andradóttir, Sigrún, James M. Calvin, and Peter W. Glynn. "Accelerated Regeneration for Markov Chain Simulations." Probability in the Engineering and Informational Sciences 9, no. 4 (October 1995): 497–523. http://dx.doi.org/10.1017/s0269964800004022.

Abstract:
This paper describes a generalization of the classical regenerative method of simulation output analysis. Instead of blocking a generated sample path on returns to a fixed return state, a more general scheme to randomly decompose the path is used. In some cases, this decomposition scheme results in regeneration times that are a supersequence of the classical regeneration times. This “accelerated” regeneration is advantageous in several simulation contexts. It is shown that when this decomposition scheme accelerates regeneration relative to the classical regenerative method, it also yields a smaller asymptotic variance of the regenerative variance estimator than the classical method. Several other contexts in which increased regeneration frequency is beneficial are also discussed.
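The classical regenerative method being generalized here is easy to sketch (on a toy reflected random walk of my choosing): block the path at returns to a fixed state, then form the ratio estimator from i.i.d. cycle sums together with the regenerative variance estimator.

```python
import numpy as np

rng = np.random.default_rng(3)

p_up = 0.3                           # reflected walk; exact steady-state mean
f = lambda x: x                      # here is (p/q)/(1 - p/q) = 0.75

x, cycles = 0, []
Y = tau = 0
for _ in range(500_000):
    x = x + 1 if rng.random() < p_up else max(x - 1, 0)
    Y += f(x); tau += 1
    if x == 0:                       # return to the regeneration state
        cycles.append((Y, tau)); Y = tau = 0

Ys, taus = map(np.array, zip(*cycles))
n = len(cycles)
r_hat = Ys.sum() / taus.sum()                 # ratio estimator of the mean
s2 = np.mean((Ys - r_hat * taus) ** 2)        # regenerative variance estimate
half = 1.96 * np.sqrt(s2 / n) / taus.mean()
print(f"{n} cycles; steady-state mean {r_hat:.4f} +/- {half:.4f}")
```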
15

Liu, Peidong, and Yan Zheng. "Markov Chain Perturbations of a Class of Partially Expanding Attractors." Stochastics and Dynamics 6, no. 3 (September 2006): 341–54. http://dx.doi.org/10.1142/s0219493706001761.

Abstract:
In this paper Markov chain perturbations of a class of partially expanding attractors of a diffeomorphism are considered. We show that, under some regularity conditions on the transition probabilities, the zero-noise limits of stationary measures of the Markov chains are Sinai–Ruelle–Bowen measures of the diffeomorphism on the attractors.
16

Jones, Galin L., and Qian Qin. "Markov Chain Monte Carlo in Practice." Annual Review of Statistics and Its Application 9, no. 1 (March 7, 2022): 557–78. http://dx.doi.org/10.1146/annurev-statistics-040220-090158.

Abstract:
Markov chain Monte Carlo (MCMC) is an essential set of tools for estimating features of probability distributions commonly encountered in modern applications. For MCMC simulation to produce reliable outcomes, it needs to generate observations representative of the target distribution, and it must be long enough so that the errors of Monte Carlo estimates are small. We review methods for assessing the reliability of the simulation effort, with an emphasis on those most useful in practically relevant settings. Both strengths and weaknesses of these methods are discussed. The methods are illustrated in several examples and in a detailed case study.
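One widely used reliability check of the kind this review covers can be sketched in a few lines: the batch-means estimate of Monte Carlo standard error and effective sample size, here applied to an autocorrelated AR(1) chain standing in for MCMC output (my toy example, not one from the paper).

```python
import numpy as np

rng = np.random.default_rng(4)

rho, n = 0.9, 100_000                 # slowly mixing stand-in for MCMC output
x = np.empty(n); x[0] = 0.0
for t in range(1, n):
    x[t] = rho * x[t - 1] + rng.normal()

b = int(n ** 0.5)                     # batch size ~ sqrt(n)
m = n // b
means = x[: m * b].reshape(m, b).mean(axis=1)
var_asym = b * means.var(ddof=1)      # batch-means asymptotic variance estimate
mcse = np.sqrt(var_asym / n)          # Monte Carlo standard error of the mean
ess = n * x.var(ddof=1) / var_asym    # effective sample size
print(f"mean {x.mean():.4f}, MCSE {mcse:.4f}, ESS {ess:.0f} of {n} draws")
```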
17

Jones, Galin L., and Qian Qin. "Markov Chain Monte Carlo in Practice." Annual Review of Statistics and Its Application 9, no. 1 (March 7, 2022): 557–78. http://dx.doi.org/10.1146/annurev-statistics-040220-090158.

Abstract:
Markov chain Monte Carlo (MCMC) is an essential set of tools for estimating features of probability distributions commonly encountered in modern applications. For MCMC simulation to produce reliable outcomes, it needs to generate observations representative of the target distribution, and it must be long enough so that the errors of Monte Carlo estimates are small. We review methods for assessing the reliability of the simulation effort, with an emphasis on those most useful in practically relevant settings. Both strengths and weaknesses of these methods are discussed. The methods are illustrated in several examples and in a detailed case study.
18

Kuo, Lynn. "Markov Chain Monte Carlo." Technometrics 42, no. 2 (May 2000): 216. http://dx.doi.org/10.1080/00401706.2000.10486017.

19

Roberts, Gareth O., Jeffrey S. Rosenthal, and Peter O. Schwartz. "Convergence Properties of Perturbed Markov Chains." Journal of Applied Probability 35, no. 1 (March 1998): 1–11. http://dx.doi.org/10.1239/jap/1032192546.

Abstract:
In this paper, we consider the question of which convergence properties of Markov chains are preserved under small perturbations. Properties considered include geometric ergodicity and rates of convergence. Perturbations considered include roundoff error from computer simulation. We are motivated primarily by interest in Markov chain Monte Carlo algorithms.
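A toy numerical illustration of the theme (my example; the paper's results are theoretical): perturbing a transition matrix slightly, as roundoff in computer simulation would, moves the stationary distribution only slightly for a well-behaved chain, while a larger perturbation moves it more.

```python
import numpy as np

rng = np.random.default_rng(5)

P = np.array([[0.8, 0.1, 0.1],        # a strictly positive, fast-mixing chain
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])

def stationary(M):
    w, v = np.linalg.eig(M.T)
    pi = np.abs(np.real(v[:, np.argmax(np.real(w))]))
    return pi / pi.sum()

for eps in (1e-6, 1e-2):
    E = rng.uniform(-eps, eps, size=P.shape)
    E -= E.mean(axis=1, keepdims=True)       # keep row sums equal to 1
    tv = 0.5 * np.abs(stationary(P + E) - stationary(P)).sum()
    print(f"perturbation size {eps:.0e}: total-variation shift {tv:.2e}")
```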
20

Roberts, Gareth O., Jeffrey S. Rosenthal, and Peter O. Schwartz. "Convergence Properties of Perturbed Markov Chains." Journal of Applied Probability 35, no. 1 (March 1998): 1–11. http://dx.doi.org/10.1017/s0021900200014625.

Abstract:
In this paper, we consider the question of which convergence properties of Markov chains are preserved under small perturbations. Properties considered include geometric ergodicity and rates of convergence. Perturbations considered include roundoff error from computer simulation. We are motivated primarily by interest in Markov chain Monte Carlo algorithms.
21

Glasserman, Paul, and Pirooz Vakili. "Comparing Markov Chains Simulated in Parallel." Probability in the Engineering and Informational Sciences 8, no. 3 (July 1994): 309–26. http://dx.doi.org/10.1017/s0269964800003430.

Abstract:
We investigate the dependence induced among multiple Markov chains when they are simulated in parallel using a shared Poisson stream of potential event occurrences. One expects this dependence to facilitate comparisons among systems; our results support this intuition. We give conditions on the transition structure of the individual chains implying that the coupled process is an associated Markov chain. Association implies that variance is reduced in comparing increasing functions of the chains, relative to independent simulations, through a routine argument. We also give an apparently new application of association to the problem of selecting the better of two systems from limited data. Under conditions, the probability of incorrect selection is asymptotically smaller when the systems compared are associated than when they are independent. This suggests a further advantage to linking multiple systems through parallel simulation.
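The setup can be sketched with two uniformized queueing chains driven by one shared stream of potential events versus independent streams (illustrative M/M/1-type parameters of my choosing); the shared stream couples the chains and sharply reduces the variance of their comparison.

```python
import numpy as np

rng = np.random.default_rng(6)

lam, mu1, mu2, cap = 0.5, 0.8, 1.0, 50
Lambda = lam + max(mu1, mu2)              # uniformization rate

def run(mu, U):
    """Queue length after len(U) potential events driven by uniforms U."""
    x = 0
    for u in U:
        if u < lam / Lambda:
            x = min(x + 1, cap)           # arrival
        elif u < (lam + mu) / Lambda:
            x = max(x - 1, 0)             # potential departure
    return x                              # otherwise: null event

n_events, reps = 1_000, 1_000
d_common, d_indep = [], []
for _ in range(reps):
    U = rng.random(n_events)              # one shared stream for both chains
    d_common.append(run(mu1, U) - run(mu2, U))
    d_indep.append(run(mu1, rng.random(n_events)) - run(mu2, rng.random(n_events)))
print("variance of difference, shared stream:", round(np.var(d_common), 3))
print("variance of difference, independent:  ", round(np.var(d_indep), 3))
```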
22

Nakagawa, K. "On the optimal Markov chain of IS simulation." IEEE Transactions on Information Theory 47, no. 1 (2001): 442–46. http://dx.doi.org/10.1109/18.904558.

23

Müller, Peter, Bruno Sansó, and Maria De Iorio. "Optimal Bayesian Design by Inhomogeneous Markov Chain Simulation." Journal of the American Statistical Association 99, no. 467 (September 2004): 788–98. http://dx.doi.org/10.1198/016214504000001123.

24

Chib, Siddhartha, and Edward Greenberg. "Markov Chain Monte Carlo Simulation Methods in Econometrics." Econometric Theory 12, no. 3 (August 1996): 409–31. http://dx.doi.org/10.1017/s0266466600006794.

Abstract:
We present several Markov chain Monte Carlo simulation methods that have been widely used in recent years in econometrics and statistics. Among these is the Gibbs sampler, which has been of particular interest to econometricians. Although the paper summarizes some of the relevant theoretical literature, its emphasis is on the presentation and explanation of applications to important models that are studied in econometrics. We include a discussion of some implementation issues, the use of the methods in connection with the EM algorithm, and how the methods can be helpful in model specification questions. Many of the applications of these methods are of particular interest to Bayesians, but we also point out ways in which frequentist statisticians may find the techniques useful.
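The Gibbs sampler highlighted here reduces to a two-line loop on a toy target (a bivariate normal with correlation rho, my illustrative example): alternate draws from the exact full conditionals.

```python
import numpy as np

rng = np.random.default_rng(7)

rho, n = 0.8, 50_000
sd = np.sqrt(1 - rho**2)
x = y = 0.0
draws = np.empty((n, 2))
for i in range(n):
    x = rng.normal(rho * y, sd)      # x | y ~ N(rho*y, 1 - rho^2)
    y = rng.normal(rho * x, sd)      # y | x ~ N(rho*x, 1 - rho^2)
    draws[i] = x, y
print("sample correlation:", round(np.corrcoef(draws[5_000:].T)[0, 1], 3))
```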
25

Fifield, Benjamin, Michael Higgins, Kosuke Imai, and Alexander Tarr. "Automated Redistricting Simulation Using Markov Chain Monte Carlo." Journal of Computational and Graphical Statistics 29, no. 4 (May 7, 2020): 715–28. http://dx.doi.org/10.1080/10618600.2020.1739532.

26

Leimkuhler, Benedict, Charles Matthews, and Jonathan Weare. "Ensemble preconditioning for Markov chain Monte Carlo simulation." Statistics and Computing 28, no. 2 (February 27, 2017): 277–90. http://dx.doi.org/10.1007/s11222-017-9730-1.

27

Alinovi, Davide, Gianluigi Ferrari, Francesco Pisani, and Riccardo Raheli. "Markov chain modeling and simulation of breathing patterns." Biomedical Signal Processing and Control 33 (March 2017): 245–54. http://dx.doi.org/10.1016/j.bspc.2016.12.002.

28

Qi, Xiao-Hui, Dian-Qing Li, Kok-Kwang Phoon, Zi-Jun Cao, and Xiao-Song Tang. "Simulation of geologic uncertainty using coupled Markov chain." Engineering Geology 207 (June 2016): 129–40. http://dx.doi.org/10.1016/j.enggeo.2016.04.017.

29

Mucha, Vladimír, Ivana Faybíková, and Ingrid Krčová. "Use of Markov Chain Simulation in Long Term Care Insurance." Statistika: Statistics and Economy Journal 102, no. 4 (December 16, 2022): 409–25. http://dx.doi.org/10.54694/stat.2022.20.

Abstract:
The aim of this paper is to present the use of simulations of non-homogeneous Markov chains in discrete time in the context of long-term care delivery. The object of investigation is to model the distribution of clients across different states over specified time steps, to estimate the average time a client stays in a given state, and to estimate the insurance premiums. Within the Monte Carlo simulation method, the focus is on approaches that ensure more accurate results for a given number of simulations. Based on statistical processing of the data obtained from the simulations, it is possible to obtain the information necessary for provisioning health-care resources and for determining the aforementioned premiums. The techniques and their graphical presentation were implemented with available R packages such as markovchain and ggplot2, together with custom code written in R.
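The simulation scheme described can be sketched in Python (the paper itself works in R with packages such as markovchain and ggplot2; the care states and age-dependent transition matrices below are hypothetical, not the paper's calibration): a non-homogeneous discrete-time chain whose transition probabilities worsen with each time step.

```python
import numpy as np

rng = np.random.default_rng(8)

states = ["healthy", "light care", "heavy care", "dead"]

def P(t):
    q = min(0.01 + 0.002 * t, 0.2)          # mortality grows with time step t
    return np.array([
        [0.95 - 2 * q, 0.04 + q, 0.01, q],  # from healthy
        [0.0, 0.93 - q, 0.05, 0.02 + q],    # from light care
        [0.0, 0.0, 0.95 - q, 0.05 + q],     # from heavy care
        [0.0, 0.0, 0.0, 1.0],               # dead is absorbing
    ])

n_clients, horizon = 10_000, 30
Ps = [P(t) for t in range(horizon)]         # precompute the per-step matrices
time_in = np.zeros(len(states))
for _ in range(n_clients):
    s = 0
    for t in range(horizon):
        s = rng.choice(4, p=Ps[t][s])
        time_in[s] += 1
for name, v in zip(states, time_in / n_clients):
    print(f"mean periods spent in '{name}': {v:.2f}")
```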
30

Ge, Yuan, Yan Zhang, Wengen Gao, Fanyong Cheng, Nuo Yu, and Jincenzi Wu. "Modelling and Prediction of Random Delays in NCSs Using Double-Chain HMMs." Discrete Dynamics in Nature and Society 2020 (October 29, 2020): 1–16. http://dx.doi.org/10.1155/2020/6848420.

Abstract:
This paper is concerned with the modelling and prediction of random delays in networked control systems. The stochastic distribution of the random delay in the current sampling period is assumed to be affected by the network state in the current sampling period as well as the random delay in the previous sampling period. Based on this assumption, the double-chain hidden Markov model (DCHMM) is proposed in this paper to model the delays. There are two Markov chains in this model. One is the hidden Markov chain, which consists of the network states, and the other is the observable Markov chain, which consists of the delays. Moreover, the delays are also affected by the hidden network states, which constructs the DCHMM-based delay model. The initialization and optimization problems of the model parameters are solved by using the segmental K-mean clustering algorithm and the expectation maximization algorithm, respectively. Based on the model, the prediction of the controller-to-actuator (CA) delay in the current sampling period is obtained. The prediction can be used to design a controller that compensates for the CA delay in future research. Some comparative experiments are carried out to demonstrate the effectiveness and superiority of the proposed method.
31

Melnik, Roderick V. Nicholas. "Dynamic system evolution and Markov chain approximation." Discrete Dynamics in Nature and Society 2, no. 1 (1998): 7–39. http://dx.doi.org/10.1155/s1026022698000028.

Abstract:
In this paper computational aspects of the mathematical modelling of dynamic system evolution have been considered as a problem in information theory. The construction of mathematical models is treated as a decision making process with limited available information. The solution of the problem is associated with a computational model based on heuristics of a Markov chain in a discrete space–time of events. A stable approximation of the chain has been derived and the limiting cases are discussed. An intrinsic interconnection of constructive, sequential, and evolutionary approaches in related optimization problems provides new challenges for future work.
32

Mannion, David. "A Markov chain of triangle shapes." Advances in Applied Probability 20, no. 2 (June 1988): 348–70. http://dx.doi.org/10.2307/1427394.

Abstract:
The process of choosing a random triangle inside a compact convex region, K, may be iterated when K itself is a triangle. In this way successive generations of random triangles are created. Properties of scale, location and orientation are filtered out, leaving only the shapes of the triangles as the objects of study. Various simulation investigations indicate quite clearly that, as n increases, the nth-generation triangle shape converges to collinearity. In this paper we attempt to establish such convergence; our results fall slightly short of a complete proof.
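The shape chain is simple to simulate, which is how the convergence to collinearity is observed. A sketch (using 4√3·area/Σ(side²) as the shape statistic, my choice; it equals 1 for equilateral triangles and 0 at collinearity):

```python
import numpy as np

rng = np.random.default_rng(9)

def uniform_point(A, B, C):
    """Uniform point in triangle ABC via the reflection trick."""
    u, v = rng.random(2)
    if u + v > 1:
        u, v = 1 - u, 1 - v
    return A + u * (B - A) + v * (C - A)

def shape(T):
    A, B, C = T
    ab, ac = B - A, C - A
    area = 0.5 * abs(ab[0] * ac[1] - ab[1] * ac[0])
    s = ((A - B)**2).sum() + ((B - C)**2).sum() + ((C - A)**2).sum()
    return 4 * np.sqrt(3) * area / s

T = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])  # equilateral
for gen in range(31):
    if gen % 5 == 0:
        print(f"generation {gen:2d}: shape statistic {shape(T):.6f}")
    T = np.array([uniform_point(*T) for _ in range(3)])        # next generation
```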
33

Mannion, David. "A Markov chain of triangle shapes." Advances in Applied Probability 20, no. 2 (June 1988): 348–70. http://dx.doi.org/10.1017/s0001867800017018.

Abstract:
The process of choosing a random triangle inside a compact convex region, K, may be iterated when K itself is a triangle. In this way successive generations of random triangles are created. Properties of scale, location and orientation are filtered out, leaving only the shapes of the triangles as the objects of study. Various simulation investigations indicate quite clearly that, as n increases, the nth-generation triangle shape converges to collinearity. In this paper we attempt to establish such convergence; our results fall slightly short of a complete proof.
34

Azizah, Azizah. "Pemodelan Klaim Asuransi Menggunakan Pendekatan Bayesian dan Markov Chain Monte Carlo" [Modeling insurance claims using a Bayesian approach and Markov chain Monte Carlo]. Jurnal Kajian Matematika dan Aplikasinya (JKMA) 2, no. 2 (June 11, 2021): 7. http://dx.doi.org/10.17977/um055v2i22021p7-13.

Abstract:
The determination of correct predictions of claims frequency and claims severity is very important in the insurance business for determining the outstanding claims reserve which should be prepared by an insurance company. One approach which may be used to predict a future value is the Bayesian approach. This approach combines the sample and the prior information. The information is used to construct the posterior distribution and to determine the estimate of the parameters. However, in this approach, integrations of functions with high dimensions are often encountered. In this paper, a Markov chain Monte Carlo (MCMC) simulation with the Gibbs sampling algorithm is used to solve the problem. The MCMC simulation uses the ergodic chain property of Markov chains. In an ergodic Markov chain, a stationary distribution, which is the target distribution, is obtained. The MCMC simulation is applied to a hierarchical Poisson model. The OpenBUGS software is used to carry out the tasks. The MCMC simulation in the hierarchical Poisson model can predict the claims frequency.
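A plain-Python sketch of Gibbs sampling for a hierarchical Poisson model (a generic claims-frequency setup with conjugate full conditionals of my choosing; the paper itself uses OpenBUGS): y_i ~ Poisson(λ_i), λ_i ~ Gamma(α, β), β ~ Gamma(a, b).

```python
import numpy as np

rng = np.random.default_rng(10)

alpha, a, b = 2.0, 1.0, 1.0
y = rng.poisson(3.0, size=50)               # synthetic claim counts

n_iter, burn = 10_000, 2_000
beta, keep = 1.0, []
for it in range(n_iter):
    # lambda_i | y_i, beta ~ Gamma(alpha + y_i, rate = beta + 1)
    lam = rng.gamma(alpha + y, 1.0 / (beta + 1.0))
    # beta | lambda ~ Gamma(a + n*alpha, rate = b + sum(lambda))
    beta = rng.gamma(a + len(y) * alpha, 1.0 / (b + lam.sum()))
    if it >= burn:
        keep.append((beta, lam.mean()))
beta_draws, rate_draws = np.array(keep).T
print("posterior mean of beta:", round(beta_draws.mean(), 3))
print("posterior mean claim rate:", round(rate_draws.mean(), 3))
```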
35

Strawderman, Robert L., and Dani Gamerman. "Markov Chain Monte Carlo: Stochastic Simulation for Bayesian Inference." Journal of the American Statistical Association 95, no. 449 (March 2000): 346. http://dx.doi.org/10.2307/2669581.

36

Au, S. K. "Probabilistic Failure Analysis by Importance Sampling Markov Chain Simulation." Journal of Engineering Mechanics 130, no. 3 (March 2004): 303–11. http://dx.doi.org/10.1061/(asce)0733-9399(2004)130:3(303).

37

Ahmed, S. E. "Markov Chain Monte Carlo: Stochastic Simulation for Bayesian Inference." Technometrics 50, no. 1 (February 2008): 97. http://dx.doi.org/10.1198/tech.2008.s542.

38

Brockwell, A. E. "Parallel Markov chain Monte Carlo Simulation by Pre-Fetching." Journal of Computational and Graphical Statistics 15, no. 1 (March 2006): 246–61. http://dx.doi.org/10.1198/106186006x100579.

39

Melas, V. B. "Branching Technique for Markov Chain Simulation (Finite State Case)." Statistics 25, no. 2 (January 1994): 159–71. http://dx.doi.org/10.1080/02331889408802441.

40

Athreya, Krishna B., Hani Doss, and Jayaram Sethuraman. "On the convergence of the Markov chain simulation method." Annals of Statistics 24, no. 1 (February 1996): 69–100. http://dx.doi.org/10.1214/aos/1033066200.

41

Kemp, Bob, and Hilbert A. C. Kamphuisen. "Simulation of Human Hypnograms Using a Markov Chain Model." Sleep 9, no. 3 (September 1986): 405–14. http://dx.doi.org/10.1093/sleep/9.3.405.

42

Pelkowitz, L. "The general markov chain disorder problem." Stochastics 21, no. 2 (June 1987): 113–30. http://dx.doi.org/10.1080/17442508708833454.

43

Glynn, Peter W., and Donald L. Iglehart. "Conditions Under Which a Markov Chain Converges to its Steady State in Finite Time." Probability in the Engineering and Informational Sciences 2, no. 3 (July 1988): 377–82. http://dx.doi.org/10.1017/s0269964800000917.

Abstract:
Analysis of the initial transient problem of Monte Carlo steady-state simulation motivates the following question for Markov chains: when does there exist a deterministic T such that P{X(T) = y | X(0) = x} = ρ(y), where ρ is the stationary distribution of X? We show that this can essentially never happen for a continuous-time Markov chain; in discrete time, such processes are i.i.d. provided the transition matrix is diagonalizable.
44

Cerqueira, Andressa, Aurélien Garivier, and Florencia Leonardi. "A note on perfect simulation for Exponential Random Graph Models." ESAIM: Probability and Statistics 24 (2020): 138–47. http://dx.doi.org/10.1051/ps/2019024.

Abstract:
In this paper, we propose a perfect simulation algorithm for the Exponential Random Graph Model, based on the Coupling from the past method of Propp and Wilson (1996). We use a Glauber dynamics to construct the Markov Chain and we prove the monotonicity of the ERGM for a subset of the parametric space. We also obtain an upper bound on the running time of the algorithm that depends on the mixing time of the Markov chain.
45

Kalashnikov, Vladimir V. "Regeneration and general Markov chains." Journal of Applied Mathematics and Stochastic Analysis 7, no. 3 (January 1, 1994): 357–71. http://dx.doi.org/10.1155/s1048953394000304.

Abstract:
Ergodicity, continuity, finite approximations and rare visits of general Markov chains are investigated. The obtained results permit further quantitative analysis of characteristics, such as, rates of convergence, continuity (measured as a distance between perturbed and non-perturbed characteristics), deviations between Markov chains, accuracy of approximations and bounds on the distribution function of the first visit time to a chosen subset, etc. The underlying techniques use the embedding of the general Markov chain into a wide sense regenerative process with the help of splitting construction.
46

Jiang, Gao Yang, Jie Ning Wang, Chun Feng Zhang, and Mei Dong. "Runway Utilization Analysis Based on the Stochastic Petri Net Model." Applied Mechanics and Materials 411-414 (September 2013): 1750–56. http://dx.doi.org/10.4028/www.scientific.net/amm.411-414.1750.

Abstract:
Runway utilization is one of the key indicators of airport operational efficiency. Firstly, a stochastic Petri net was introduced to build a runway system operational model; we then analyzed the reachability graph of this model, which not only proves the reachability and boundedness of the model but can also be transformed into a homogeneous Markov chain. Secondly, the system steady-state probability expressions in various states were established based on the homogeneous Markov chain. Thirdly, runway utilization was calculated based on the steady-state probability expressions. During simulation, runway utilization in various conditions was analyzed by changing the firing rates of some transitions. Both the Markov chain method and the Petri net simulation method are useful for runway utilization improvement.
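The steady-state step in such an analysis is a small linear solve. A sketch with a hypothetical 3-state generator (not the runway model itself): the stationary distribution of a homogeneous continuous-time chain solves πQ = 0 with Σπ = 1.

```python
import numpy as np

Q = np.array([[-0.5, 0.5, 0.0],     # e.g. idle -> occupied (illustrative rates)
              [0.3, -0.7, 0.4],     # occupied -> idle or blocked
              [0.0, 0.6, -0.6]])    # blocked -> occupied

A = np.vstack([Q.T, np.ones(3)])    # pi Q = 0 plus the normalization row
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print("steady-state probabilities:", np.round(pi, 4))
print("utilization (fraction not idle):", round(1 - pi[0], 4))
```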
47

Jang, Yoonsun, and Allan S. Cohen. "The Impact of Markov Chain Convergence on Estimation of Mixture IRT Model Parameters." Educational and Psychological Measurement 80, no. 5 (January 9, 2020): 975–94. http://dx.doi.org/10.1177/0013164419898228.

Abstract:
A nonconverged Markov chain can potentially lead to invalid inferences about model parameters. The purpose of this study was to assess the effect of a nonconverged Markov chain on the estimation of parameters for mixture item response theory models using a Markov chain Monte Carlo algorithm. A simulation study was conducted to investigate the accuracy of model parameters estimated with different degrees of convergence. Results indicated that the accuracy of the estimated model parameters for the mixture item response theory models decreased as the number of iterations of the Markov chain decreased. In particular, increasing the number of burn-in iterations resulted in more accurate estimation of mixture IRT model parameters. In addition, the different methods for monitoring convergence of a Markov chain resulted in different degrees of convergence despite almost identical accuracy of estimation.
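One standard way to monitor convergence across chains, of the kind compared in such studies, is the Gelman–Rubin potential scale reduction factor. A sketch on slow-mixing AR(1) stand-ins started from overdispersed points (my toy chains, not the study's mixture IRT models); values near 1 suggest convergence.

```python
import numpy as np

rng = np.random.default_rng(11)

def r_hat(chains):
    """Gelman-Rubin statistic for an (m chains) x (n draws) array."""
    m, n = chains.shape
    W = chains.var(axis=1, ddof=1).mean()       # within-chain variance
    B = n * chains.mean(axis=1).var(ddof=1)     # between-chain variance
    return np.sqrt(((n - 1) / n * W + B / n) / W)

rho, m, n = 0.99, 4, 2_000
chains = np.empty((m, n))
chains[:, 0] = rng.normal(0, 20, size=m)        # overdispersed starting points
for t in range(1, n):
    chains[:, t] = rho * chains[:, t - 1] + rng.normal(size=m)

print("R-hat after 200 draws: ", round(r_hat(chains[:, :200]), 3))
print("R-hat after 2000 draws:", round(r_hat(chains), 3))
```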
48

Nasroallah, Abdelaziz, and Mohamed Yasser Bounnite. "A kind of dual form for coupling from the past algorithm, to sample from Markov chain steady-state probability." Monte Carlo Methods and Applications 25, no. 4 (December 1, 2019): 317–27. http://dx.doi.org/10.1515/mcma-2019-2050.

Abstract:
The standard coupling from the past (CFTP) algorithm is an interesting tool for sampling exactly from a Markov chain's steady-state probability. The CFTP algorithm detects, with probability one, the end of the transient phase (called the burn-in period) of the chain and consequently the beginning of its stationary phase. For large and/or stiff Markov chains, the burn-in period is expensive in time consumption. In this work, we propose a kind of dual form of CFTP, called D-CFTP, that in many situations reduces the Monte Carlo simulation time and does not need to store the history of the used random numbers from one iteration to another. A performance comparison of CFTP and D-CFTP is discussed, and some numerical Monte Carlo simulations are carried out to show the smooth running of the proposed D-CFTP.
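For reference, the standard CFTP algorithm that D-CFTP dualizes fits in a few lines for a monotone chain (a birth–death walk on {0,…,K} with illustrative parameters): run coupled paths from the minimal and maximal states from time −T to 0 with shared uniforms, doubling T until they coalesce; the common value at time 0 is an exact stationary draw.

```python
import numpy as np

rng = np.random.default_rng(12)

K, p_up = 10, 0.4

def step(x, u):
    """Monotone update: if x <= y then step(x, u) <= step(y, u)."""
    return min(x + 1, K) if u < p_up else max(x - 1, 0)

def cftp():
    T, U = 1, []
    while True:
        U = list(rng.random(T - len(U))) + U   # fresh uniforms for earlier times,
        lo, hi = 0, K                          # reusing the old ones (essential!)
        for u in U:                            # run both extremes from -T to 0
            lo, hi = step(lo, u), step(hi, u)
        if lo == hi:
            return lo                          # coalesced: exact stationary draw
        T *= 2

draws = np.array([cftp() for _ in range(5_000)])
vals, counts = np.unique(draws, return_counts=True)
print(dict(zip(vals.tolist(), np.round(counts / len(draws), 3).tolist())))
```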
49

Xu, Zhixin, Dingqing Guo, Jinkai Wang, Xueli Li, and Daochuan Ge. "A numerical simulation method for a repairable dynamic fault tree." Eksploatacja i Niezawodnosc - Maintenance and Reliability 23, no. 1 (January 2, 2021): 34–41. http://dx.doi.org/10.17531/ein.2021.1.4.

Abstract:
Dynamic fault trees are important tools for modeling systems with sequence failure behaviors. The Markov chain state space method is the only analytical approach for a repairable dynamic fault tree (DFT). However, this method suffers from state space explosion and is not suitable for analyzing a large-scale repairable DFT. Furthermore, the Markov chain state space method requires the components’ times-to-failure to follow exponential distributions, which limits its application. In this study, motivated by the need to analyze repairable DFTs efficiently, a Monte Carlo simulation method based on the coupling of a minimal cut sequence set (MCSS) and its sequential failure region (SFR) is proposed. To validate the proposed method, a numerical case was studied. The results demonstrated that the proposed approach is more efficient than other methods and applicable to repairable DFTs with components having arbitrarily distributed times-to-failure. In contrast to the Markov chain state space method, the proposed method is straightforward, simple, and efficient.
50

Imkeller, Peter, and Peter Kloeden. "On the Computation of Invariant Measures in Random Dynamical Systems." Stochastics and Dynamics 3, no. 2 (June 2003): 247–65. http://dx.doi.org/10.1142/s0219493703000711.

Abstract:
Invariant measures of dynamical systems generated e.g. by difference equations can be computed by discretizing the originally continuous state space, and replacing the action of the generator by the transition mechanism of a Markov chain. In fact they are approximated by stationary vectors of these Markov chains. Here we extend this well-known approximation result and the underlying algorithm to the setting of random dynamical systems, i.e. dynamical systems on the skew product of a probability space carrying the underlying stationary stochasticity and the state space, a particular non-autonomous framework. The systems are generated by difference equations driven by stationary random processes modelled on a metric dynamical system. The approximation algorithm involves spatial discretizations and the definition of appropriate random Markov chains with stationary vectors converging to the random invariant measure of the system.
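The deterministic-case algorithm being extended here is Ulam's method, which is short to sketch (a noisy logistic map and all discretization parameters are my illustrative choices): estimate cell-to-cell transition probabilities by sampling images of points in each cell, then take the stationary vector of the resulting Markov chain as the approximate invariant measure.

```python
import numpy as np

rng = np.random.default_rng(13)

m, samples = 200, 500                          # number of cells, samples per cell
F = lambda x: np.clip(3.9 * x * (1 - x)        # noisy logistic map on [0, 1]
                      + rng.normal(0, 0.01, size=x.shape), 0.0, 1.0 - 1e-12)

edges = np.linspace(0.0, 1.0, m + 1)
P = np.zeros((m, m))
for i in range(m):
    x = rng.uniform(edges[i], edges[i + 1], size=samples)  # points in cell i
    j = (F(x) * m).astype(int)                             # cells of their images
    np.add.at(P[i], j, 1.0 / samples)

w, v = np.linalg.eig(P.T)                      # stationary vector of the chain
pi = np.abs(np.real(v[:, np.argmax(np.real(w))]))
pi /= pi.sum()
print("cells carrying the most invariant mass:",
      np.round(np.sort(np.argsort(pi)[-5:]) / m, 3))
```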
