Academic literature on the topic 'Markov chain simulation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Markov chain simulation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Markov chain simulation":

1

Boucher, Thomas R., and Daren B. H. Cline. "Piggybacking Threshold Processes with a Finite State Markov Chain." Stochastics and Dynamics 9, no. 2 (June 2009): 187–204. http://dx.doi.org/10.1142/s0219493709002622.

Abstract:
The state-space representations of certain nonlinear autoregressive time series are general state Markov chains. The transitions of a general state Markov chain among regions in its state-space can be modeled with the transitions among states of a finite state Markov chain. Stability of the time series is then informed by the stationary distributions of the finite state Markov chain. This approach generalizes some previous results.
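The finite-state machinery this abstract relies on is straightforward to illustrate. A minimal sketch (the 3-state transition matrix `P` below is invented for the example, not taken from the paper): simulate the chain and check that its long-run occupation frequencies match the stationary distribution.

```python
import numpy as np

def simulate_chain(P, x0, n, rng):
    """Simulate n transitions of a finite-state Markov chain with
    transition matrix P, started in state x0."""
    states = [x0]
    for _ in range(n):
        states.append(rng.choice(len(P), p=P[states[-1]]))
    return np.array(states)

def stationary(P):
    """Stationary distribution: the left eigenvector of P for eigenvalue 1,
    normalized to sum to 1."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    return pi / pi.sum()

# Hypothetical 3-state transition matrix (rows sum to 1).
P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.1, 0.3, 0.6]])

rng = np.random.default_rng(0)
path = simulate_chain(P, 0, 100_000, rng)
empirical = np.bincount(path, minlength=3) / len(path)
pi = stationary(P)   # long-run occupation frequencies approach pi
```

The empirical occupation frequencies converge to `pi` by the ergodic theorem; the paper's contribution is relating such a finite chain to the regions of a general state-space chain.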
2

Bucklew, James A., Peter Ney, and John S. Sadowsky. "Monte Carlo simulation and large deviations theory for uniformly recurrent Markov chains." Journal of Applied Probability 27, no. 1 (March 1990): 44–59. http://dx.doi.org/10.2307/3214594.

Abstract:
Importance sampling is a Monte Carlo simulation technique in which the simulation distribution is different from the true underlying distribution. In order to obtain an unbiased Monte Carlo estimate of the desired parameter, simulated events are weighted to reflect their true relative frequency. In this paper, we consider the estimation via simulation of certain large deviations probabilities for time-homogeneous Markov chains. We first demonstrate that when the simulation distribution is also a homogeneous Markov chain, the estimator variance will vanish exponentially as the sample size n tends to ∞. We then prove that the estimator variance is asymptotically minimized by the same exponentially twisted Markov chain which arises in large deviation theory, and furthermore, this optimization is unique among uniformly recurrent homogeneous Markov chain simulation distributions.
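The exponential-twisting idea summarized above can be sketched in the simplest Markov-chain setting, a Gaussian random walk (a toy instance of my own choosing, not the paper's general uniformly recurrent setup): shift the increment mean by theta = a/n and reweight by the likelihood ratio to obtain an unbiased, low-variance estimate of P(S_n >= a).

```python
import math

import numpy as np

def twisted_estimate(n, a, reps, rng):
    """Importance sampling for P(S_n >= a), S_n = X_1 + ... + X_n,
    X_i ~ N(0,1) i.i.d. (a random walk, the simplest Markov chain).

    Sample from the exponentially twisted chain (increments shifted to
    mean theta = a/n) and reweight by the likelihood ratio
    exp(-theta*S_n + n*theta**2/2) to keep the estimator unbiased.
    """
    theta = a / n
    s = (rng.standard_normal((reps, n)) + theta).sum(axis=1)
    weights = np.exp(-theta * s + n * theta**2 / 2.0)
    return (weights * (s >= a)).mean()

rng = np.random.default_rng(1)
n, a = 50, 25.0
exact = 0.5 * math.erfc(a / math.sqrt(2.0 * n))   # S_n ~ N(0, n), exact tail
est = twisted_estimate(n, a, 100_000, rng)
# est should be close to exact (a rare event of order 2e-4)
```

Plain Monte Carlo would see essentially no hits of the event at this sample size; the twisted chain makes the event typical and corrects with the weights, which is the mechanism whose optimality the paper establishes.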
3

Bucklew, James A., Peter Ney, and John S. Sadowsky. "Monte Carlo simulation and large deviations theory for uniformly recurrent Markov chains." Journal of Applied Probability 27, no. 01 (March 1990): 44–59. http://dx.doi.org/10.1017/s0021900200038419.

Abstract:
Importance sampling is a Monte Carlo simulation technique in which the simulation distribution is different from the true underlying distribution. In order to obtain an unbiased Monte Carlo estimate of the desired parameter, simulated events are weighted to reflect their true relative frequency. In this paper, we consider the estimation via simulation of certain large deviations probabilities for time-homogeneous Markov chains. We first demonstrate that when the simulation distribution is also a homogeneous Markov chain, the estimator variance will vanish exponentially as the sample size n tends to ∞. We then prove that the estimator variance is asymptotically minimized by the same exponentially twisted Markov chain which arises in large deviation theory, and furthermore, this optimization is unique among uniformly recurrent homogeneous Markov chain simulation distributions.
4

Chung, Gunhui, Kyu Bum Sim, Deok Jun Jo, and Eung Seok Kim. "Hourly Precipitation Simulation Characteristic Analysis Using Markov Chain Model." Journal of Korean Society of Hazard Mitigation 16, no. 3 (June 30, 2016): 351–57. http://dx.doi.org/10.9798/kosham.2016.16.3.351.

5

Glynn, Peter W., and Chang-Han Rhee. "Exact estimation for Markov chain equilibrium expectations." Journal of Applied Probability 51, A (December 2014): 377–89. http://dx.doi.org/10.1239/jap/1417528487.

Abstract:
We introduce a new class of Monte Carlo methods, which we call exact estimation algorithms. Such algorithms provide unbiased estimators for equilibrium expectations associated with real-valued functionals defined on a Markov chain. We provide easily implemented algorithms for the class of positive Harris recurrent Markov chains, and for chains that are contracting on average. We further argue that exact estimation in the Markov chain setting provides a significant theoretical relaxation relative to exact simulation methods.
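A rough sketch of what such an exact (unbiased) estimator can look like, using a randomized-truncation construction in the spirit of the paper on a contracting AR(1) chain (the chain, the functional f(x) = x², and the geometric truncation rate r are my own illustrative choices, not the authors' examples):

```python
import numpy as np

def unbiased_x2(a=0.5, r=0.8, rng=None):
    """One draw of an unbiased estimator of the equilibrium mean of
    f(X) = X**2 for the contracting AR(1) chain X_k = a*X_{k-1} + Z_k,
    Z_k ~ N(0,1); the true equilibrium value is 1/(1 - a**2).

    Randomized truncation: Delta_k couples two copies of the chain driven
    by the same innovations but started one step apart, so Delta_k -> 0
    geometrically; dividing by P(N >= k) = r**k removes truncation bias.
    """
    N = int(rng.geometric(1 - r)) - 1      # P(N >= k) = r**k, k = 0, 1, ...
    Z = rng.standard_normal(N + 1)
    x, xs, y = 0.0, 0.0, 0.0               # X_k, shifted copy, running sum
    for k in range(1, N + 1):
        x = a * x + Z[k - 1]               # X_k driven by Z_1, ..., Z_k
        if k >= 2:
            xs = a * xs + Z[k - 1]         # shifted copy driven by Z_2, ..., Z_k
        y += (x**2 - xs**2) / r**k
    return y

rng = np.random.default_rng(2)
est = np.mean([unbiased_x2(rng=rng) for _ in range(200_000)])
# the true equilibrium mean is 1/(1 - 0.25) = 4/3
```

Each draw uses only finitely many transitions yet has no truncation bias in expectation, which is the "significant theoretical relaxation relative to exact simulation" the abstract refers to.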
6

Glynn, Peter W., and Chang-Han Rhee. "Exact estimation for Markov chain equilibrium expectations." Journal of Applied Probability 51, A (December 2014): 377–89. http://dx.doi.org/10.1017/s0021900200021392.

Abstract:
We introduce a new class of Monte Carlo methods, which we call exact estimation algorithms. Such algorithms provide unbiased estimators for equilibrium expectations associated with real-valued functionals defined on a Markov chain. We provide easily implemented algorithms for the class of positive Harris recurrent Markov chains, and for chains that are contracting on average. We further argue that exact estimation in the Markov chain setting provides a significant theoretical relaxation relative to exact simulation methods.
7

Jasra, Ajay, Kody J. H. Law, and Yaxian Xu. "Markov chain simulation for multilevel Monte Carlo." Foundations of Data Science 3, no. 1 (2021): 27. http://dx.doi.org/10.3934/fods.2021004.

8

Li, Weidong, Baoguo Li, and Yuanchun Shi. "Markov-chain simulation of soil textural profiles." Geoderma 92, no. 1-2 (September 1999): 37–53. http://dx.doi.org/10.1016/s0016-7061(99)00024-5.

9

Milios, Dimitrios, and Stephen Gilmore. "Markov Chain Simulation with Fewer Random Samples." Electronic Notes in Theoretical Computer Science 296 (August 2013): 183–97. http://dx.doi.org/10.1016/j.entcs.2013.07.012.

10

Skeel, Robert, and Youhan Fang. "Comparing Markov Chain Samplers for Molecular Simulation." Entropy 19, no. 10 (October 21, 2017): 561. http://dx.doi.org/10.3390/e19100561.


Dissertations / Theses on the topic "Markov chain simulation":

1

Suzuki, Yuya. "Rare-event Simulation with Markov Chain Monte Carlo." Thesis, KTH, Matematisk statistik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-138950.

Abstract:
In this thesis, we consider random sums with heavy-tailed increments. By the term random sum, we mean a sum of random variables where the number of summands is also random. Our interest is to analyse the tail behaviour of random sums and to construct an efficient method to calculate quantiles. For the sake of efficiency, we simulate rare events (tail events) using a Markov chain Monte Carlo (MCMC) method. The asymptotic behaviour of the sum and the maximum of a heavy-tailed random sum is identical. Therefore we compare the random sum and the maximum value for various distributions, to investigate from which point one can use the asymptotic approximation. Furthermore, we propose a new method to estimate quantiles, and the estimator is shown to be efficient.
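The sum-versus-maximum comparison described here is easy to reproduce empirically (the Pareto increments, Poisson number of terms, and threshold below are illustrative choices, not the thesis's exact setup): for heavy-tailed increments, the tail of the random sum and the tail of its largest increment become comparable, the "one big jump" principle.

```python
import numpy as np

def random_sum_tails(alpha=1.5, lam=5.0, reps=200_000, seed=3):
    """Simulate random sums S = Y_1 + ... + Y_N with Pareto-tailed
    increments (P(Y > x) = x**-alpha) and N ~ Poisson(lam), recording
    both the sum and the largest increment of each replication."""
    rng = np.random.default_rng(seed)
    n = rng.poisson(lam, reps)
    sums = np.zeros(reps)
    maxes = np.zeros(reps)
    for i in range(reps):
        if n[i] == 0:
            continue
        y = rng.pareto(alpha, n[i]) + 1.0
        sums[i] = y.sum()
        maxes[i] = y.max()
    return sums, maxes

sums, maxes = random_sum_tails()
x = 200.0
p_sum = (sums > x).mean()    # tail of the random sum
p_max = (maxes > x).mean()   # tail of the maximum increment
# for heavy tails the two probabilities are of the same order for large x
```

Plotting the ratio `p_sum / p_max` over a range of thresholds shows "from which point one can use the asymptotic approximation," the question the thesis investigates.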
2

Gudmundsson, Thorbjörn. "Rare-event simulation with Markov chain Monte Carlo." Doctoral thesis, KTH, Matematisk statistik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-157522.

Abstract:
Stochastic simulation is a popular method for computing probabilities or expectations where analytical answers are difficult to derive. It is well known that standard methods of simulation are inefficient for computing rare-event probabilities, and therefore more advanced methods are needed for those problems. This thesis presents a new method based on a Markov chain Monte Carlo (MCMC) algorithm to effectively compute the probability of a rare event. The conditional distribution of the underlying process given that the rare event occurs has the probability of the rare event as its normalising constant. Using the MCMC methodology, a Markov chain is simulated with that conditional distribution as its invariant distribution, and information about the normalising constant is extracted from its trajectory. In the first two papers of the thesis, the algorithm is described in full generality and applied to four problems of computing rare-event probabilities in the context of heavy-tailed distributions. The assumption of heavy tails allows us to propose distributions which approximate the conditional distribution conditioned on the rare event. The first problem considers a random walk Y1 + · · · + Yn exceeding a high threshold, where the increments Y are independent and identically distributed and heavy-tailed. The second problem is an extension of the first one to a heavy-tailed random sum Y1 + · · · + YN exceeding a high threshold, where the number of increments N is random and independent of Y1, Y2, . . .. The third problem considers the solution Xm to a stochastic recurrence equation, Xm = AmXm−1 + Bm, exceeding a high threshold, where the innovations B are independent and identically distributed and heavy-tailed and the multipliers A satisfy a moment condition. The fourth problem is closely related to the third and considers the ruin probability for an insurance company with risky investments.
In the last two papers of this thesis, the algorithm is extended to the context of light-tailed distributions and applied to four problems. The light-tail assumption ensures the existence of a large deviation principle or Laplace principle, which in turn allows us to propose distributions which approximate the conditional distribution conditioned on the rare event. The first problem considers a random walk Y1 + · · · + Yn exceeding a high threshold, where the increments Y are independent and identically distributed and light-tailed. The second problem considers a discrete-time Markov chain and the computation of a general expectation, of its sample path, related to rare events. The third problem extends the discrete-time setting to Markov chains in continuous time. The fourth problem is closely related to the third and considers a birth-and-death process with spatial intensities and the computation of first passage probabilities. An unbiased estimator of the reciprocal probability for each corresponding problem is constructed with efficient rare-event properties. The algorithms are illustrated numerically and compared to existing importance sampling algorithms.


3

Fan, Yanan. "Efficient implementation of Markov chain Monte Carlo." Thesis, University of Bristol, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.343307.

4

Cheal, Ryan. "Markov Chain Monte Carlo methods for simulation in pedigrees." Thesis, University of Bath, 1996. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.362254.

5

BALDIOTI, HUGO RIBEIRO. "MARKOV CHAIN MONTE CARLO FOR NATURAL INFLOW ENERGY SCENARIOS SIMULATION." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2018. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=36058@1.

Abstract:
Consisting of a predominantly hydroelectric electro-energetic matrix and a territory of continental proportions, Brazil presents unique characteristics, being able to make use of the abundant water resources in the national territory. Approximately 65 percent of the electricity generation capacity comes from hydropower, while 28 percent comes from thermoelectric plants. It is known that hydrological regimes of natural streamflows are stochastic in nature, and they must be treated so that the operation of the system can be planned; the hydrothermal dispatch is therefore extremely important and characterized by its stochastic dependence. From the natural streamflows it is possible to calculate the Natural Inflow Energy (NIE), which is used directly in the synthetic series simulation process; these series are in turn used in the optimization process responsible for computing the optimal policy that minimizes the system's operational costs. Studies concerning the simulation of synthetic NIE scenarios have developed new methodological proposals over the years. Such developments often presuppose Gaussianity of the data, so that a parametric distribution can be fitted to them. It was noticed that in the majority of real cases, in the context of the Brazilian Electric Sector, the data cannot be treated this way, since their densities exhibit relevant tail behavior and marked skewness. For the operational planning of the National Interconnected System (SIN), this intrinsic skewness must be reproducible. Thus, this work proposes two non-parametric approaches to scenario simulation. The first refers to sampling the residuals of the NIE series, using the Markov Chain Monte Carlo (MCMC) technique and Kernel Density Estimation. The second applies the MCMC periodically and directly to the NIE series to simulate synthetic scenarios, using an innovative approach for transitions between matrices and periods.
The results of implementing the methodologies, observed graphically and through statistical tests of adherence to the historical data, indicate that the proposals reproduce the asymmetric characteristics with greater accuracy without losing the ability to reproduce basic statistics. One can therefore conclude that the proposed models are good alternatives to the current model used by the Brazilian Electric Sector.
6

Mehl, Christopher. "Bayesian Hierarchical Modeling and Markov Chain Simulation for Chronic Wasting Disease." Diss., University of Colorado at Denver, 2004. http://hdl.handle.net/10919/71563.

Abstract:
In this thesis, a dynamic spatial model for the spread of Chronic Wasting Disease in Colorado mule deer is derived from a system of differential equations that captures the qualitative spatial and temporal behaviour of the disease. These differential equations are incorporated into an empirical Bayesian hierarchical model through the unusual step of deterministic autoregressive updates. Spatial effects in the model are described directly in the differential equations rather than through the use of correlations in the data. The use of deterministic updates is a simplification that reduces the number of parameters that must be estimated, yet still provides a flexible model that gives reasonable predictions for the disease. The posterior distribution generated by the data model hierarchy possesses characteristics that are atypical for many Markov chain Monte Carlo simulation techniques. To address these difficulties, a new MCMC technique is developed that has qualities similar to recently introduced tempered Langevin type algorithms. The methodology is used to fit the CWD model, and posterior parameter estimates are then used to obtain predictions about Chronic Wasting Disease.
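As context for the tempered Langevin-type methods mentioned above, here is a minimal sketch of the standard Metropolis-adjusted Langevin algorithm (MALA), applied to a standard normal target (a toy target of my own choosing; the thesis's actual algorithm and posterior differ):

```python
import numpy as np

def mala(logpi, grad_logpi, x0, eps, n, rng):
    """Metropolis-adjusted Langevin algorithm: a Langevin proposal
    x + eps * grad(log pi)(x) + sqrt(2*eps) * noise, corrected by a
    Metropolis accept/reject step so that pi remains invariant."""
    x = x0
    out = np.empty(n)
    for i in range(n):
        fwd_mean = x + eps * grad_logpi(x)
        prop = fwd_mean + np.sqrt(2.0 * eps) * rng.standard_normal()
        back_mean = prop + eps * grad_logpi(prop)
        log_q_fwd = -(prop - fwd_mean) ** 2 / (4.0 * eps)     # q(prop | x)
        log_q_back = -(x - back_mean) ** 2 / (4.0 * eps)      # q(x | prop)
        if np.log(rng.random()) < logpi(prop) - logpi(x) + log_q_back - log_q_fwd:
            x = prop
        out[i] = x
    return out

# Toy target: standard normal, log pi(x) = -x**2/2 up to a constant.
rng = np.random.default_rng(5)
samples = mala(lambda x: -x**2 / 2, lambda x: -x, 0.0, 0.5, 50_000, rng)
```

Gradient-informed proposals of this kind are the starting point that tempering modifies to cope with the awkward posteriors the abstract describes.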
7

Zhou, Yi. "Simulation and Performance Analysis of Strategic Air Traffic Management under Weather Uncertainty." Thesis, University of North Texas, 2011. https://digital.library.unt.edu/ark:/67531/metadc68071/.

Abstract:
In this thesis, I introduce a promising framework for representing an air traffic flow (stream) and flow-management action operating under weather uncertainty. I propose to use a meshed queuing and Markov-chain model---specifically, a queuing model whose service-rates are modulated by an underlying Markov chain describing weather-impact evolution---to capture traffic management in an uncertain environment. Two techniques for characterizing flow-management performance using the model are developed, namely 1) a master-Markov-chain representation technique that yields accurate results but at relatively high computational cost, and 2) a jump-linear system-based approximation that has promising scalability. The model formulation and two analysis techniques are illustrated with numerous examples. Based on this initial study, I believe that the interfaced weather-impact and traffic-flow model analyzed here holds promise to inform strategic flow contingency management in NextGen.
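The modulated-queue idea can be sketched with a toy discrete-time model (the two weather states, transition matrix, and arrival/service probabilities below are invented for illustration, not taken from the thesis): the per-slot service probability depends on an underlying weather Markov chain.

```python
import numpy as np

# Hypothetical two-state weather chain: 0 = clear, 1 = storm.
WEATHER_P = np.array([[0.95, 0.05],
                      [0.20, 0.80]])
SERVICE_PROB = (0.6, 0.2)   # per-slot departure probability in each weather state
ARRIVAL_PROB = 0.3          # per-slot arrival probability

def simulate_modulated_queue(T, rng):
    """Discrete-time single-server queue whose service probability is
    modulated by an underlying weather Markov chain."""
    q, w = 0, 0
    lengths = np.empty(T)
    for t in range(T):
        w = rng.choice(2, p=WEATHER_P[w])            # weather evolves
        if rng.random() < ARRIVAL_PROB:
            q += 1                                   # arrival
        if q > 0 and rng.random() < SERVICE_PROB[w]:
            q -= 1                                   # weather-dependent service
        lengths[t] = q
    return lengths

rng = np.random.default_rng(4)
lengths = simulate_modulated_queue(200_000, rng)
mean_queue = lengths.mean()   # finite: the average service rate exceeds 0.3
```

Even this toy version shows the qualitative behavior the model captures: storms depress the service rate below the arrival rate and queues build up until the weather chain returns to the clear state.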
8

Gudmundsson, Thorbjörn. "Markov chain Monte Carlo for rare-event simulation in heavy-tailed settings." Licentiate thesis, KTH, Matematisk statistik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-134624.

9

Ren, Ruichao. "Accelerating Markov chain Monte Carlo simulation through sequential updating and parallel computing." Diss., Restricted to subscribing institutions, 2007. http://proquest.umi.com/pqdweb?did=1428844711&sid=1&Fmt=2&clientId=1564&RQT=309&VName=PQD.

10

Pitt, Michael K. "Bayesian inference for non-Gaussian state space model using simulation." Thesis, University of Oxford, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.389211.


Books on the topic "Markov chain simulation":

1

Gamerman, Dani. Markov chain Monte Carlo: Stochastic simulation for Bayesian inference. London: Chapman & Hall, 1997.

2

Gamerman, Dani. Markov chain Monte Carlo: Stochastic simulation for Bayesian inference. 2nd ed. Boca Raton: Taylor & Francis, 2006.

3

Gamerman, D. Markov chain Monte Carlo: Stochastic simulation for Bayesian inference. London: Chapman & Hall, 1997.

4

Jerrum, Mark. Uniform sampling modulo a group of symmetries using Markov chain simulation. Edinburgh: LFCS, Dept. of Computer Science, University of Edinburgh, 1993.

5

Cowles, Mary Kathryn. A simulation approach to convergence rates for Markov chain Monte Carlo algorithms. [Toronto]: University of Toronto, Dept. of Statistics, 1996.

6

Yücesan, Enver. Analysis of Markov chains using simulation graph models. Fontainebleau: INSEAD, 1990.

7

Brémaud, Pierre. Markov chains: Gibbs fields, Monte Carlo simulation, and queues. New York: Springer, 1999.

8

Stewart, William J. Probability, Markov chains, queues and simulation: The mathematical basis of performance modeling. Princeton: Princeton University Press, 2009.

9

Berg, Bernd A. Markov chain Monte Carlo simulations and their statistical analysis: With web-based Fortran code. Hackensack, NJ: World Scientific, 2004.

10

Berg, Bernd A. Markov chain Monte Carlo simulations and their statistical analysis: With web-based Fortran code. Singapore: World Scientific Publishing, 2004.


Book chapters on the topic "Markov chain simulation":

1

Li, Rongpeng, and Aiichiro Nakano. "Markov Chain, a Peek into the Future." In Simulation with Python, 19–37. Berkeley, CA: Apress, 2022. http://dx.doi.org/10.1007/978-1-4842-8185-7_2.

2

DasGupta, Anirban. "Simulation and Markov Chain Monte Carlo." In Springer Texts in Statistics, 613–87. New York, NY: Springer New York, 2011. http://dx.doi.org/10.1007/978-1-4419-9634-3_19.

3

Hautsch, Nikolaus, and Yangguoyi Ou. "Stochastic Volatility Estimation Using Markov Chain Simulation." In Applied Quantitative Finance, 249–74. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-540-69179-2_12.

4

El Haddad, Rami, Joseph El Maalouf, Christian Lécot, and Pierre L’Ecuyer. "Sudoku Latin Square Sampling for Markov Chain Simulation." In Springer Proceedings in Mathematics & Statistics, 207–30. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-43465-6_10.

5

Meenakshisundaram, Swaminathan, Anirudh Srikanth, Viswanath Kumar Ganesan, Natarajan Vijayarangan, and Ananda Padmanaban Srinivas. "Forecasting: Bayesian Inference Using Markov Chain Monte Carlo Simulation." In Smart Innovation, Systems and Technologies, 215–28. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-5974-3_19.

6

Bhattacharya, Rabi, Lizhen Lin, and Victor Patrangenaru. "Markov Chain Monte Carlo (MCMC) Simulation and Bayes Theory." In Springer Texts in Statistics, 325–32. New York, NY: Springer New York, 2016. http://dx.doi.org/10.1007/978-1-4939-4032-5_14.

7

Andrieu, Christophe, Arnaud Doucet, and Roman Holenstein. "Particle Markov Chain Monte Carlo for Efficient Numerical Simulation." In Monte Carlo and Quasi-Monte Carlo Methods 2008, 45–60. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04107-5_3.

8

Saini, Gurdeep, Naveen Yadav, Biju R. Mohan, and Nagaraj Naik. "Time Series Forecasting Using Markov Chain Probability Transition Matrix with Genetic Algorithm Optimisation." In Modeling, Simulation and Optimization, 439–51. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-15-9829-6_34.

9

Rochani, Haresh, and Daniel F. Linder. "Markov Chain Monte-Carlo Methods for Missing Data Under Ignorability Assumptions." In Monte-Carlo Simulation-Based Statistical Modeling, 129–42. Singapore: Springer Singapore, 2017. http://dx.doi.org/10.1007/978-981-10-3307-0_7.

10

Glynn, Peter W., and Shane G. Henderson. "A Central Limit Theorem For Empirical Quantiles in the Markov Chain Setting." In Advances in Modeling and Simulation, 211–38. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-10193-9_11.


Conference papers on the topic "Markov chain simulation":

1

Li, Liang, Qi-sheng Guo, and Xiu-yue Yang. "Evaluation method based on Markov chain model." In 2008 Asia Simulation Conference - 7th International Conference on System Simulation and Scientific Computing (ICSC). IEEE, 2008. http://dx.doi.org/10.1109/asc-icsc.2008.4675615.

2

Chan, Lay Guat, and Adriana Irawati Nur Binti Ibrahim. "Markov chain Monte Carlo simulation for Bayesian Hidden Markov Models." In THE 4TH INTERNATIONAL CONFERENCE ON QUANTITATIVE SCIENCES AND ITS APPLICATIONS (ICOQSIA 2016). Author(s), 2016. http://dx.doi.org/10.1063/1.4966059.

3

Glushkov, A. N., V. V. Menshikh, N. S. Khohlov, O. I. Bokova, and M. Y. Kalinin. "Gaussian signals simulation using biconnected Markov chain." In 2017 2nd International Ural Conference on Measurements (UralCon). IEEE, 2017. http://dx.doi.org/10.1109/uralcon.2017.8120718.

4

Zhang, Limao, Ronald Ekyalimpa, Stephen Hague, Michael Werner, and Simaan AbouRizk. "Updating geological conditions using Bayes theorem and Markov chain." In 2015 Winter Simulation Conference (WSC). IEEE, 2015. http://dx.doi.org/10.1109/wsc.2015.7408498.

5

Bazargan, Hamid, Mike Christie, and Hamdi Tchelepi. "Efficient Markov Chain Monte Carlo Sampling Using Polynomial Chaos Expansion." In SPE Reservoir Simulation Symposium. Society of Petroleum Engineers, 2013. http://dx.doi.org/10.2118/163663-ms.

6

Suzuki, Yuya, and Thorbjörn Gudmundsson. "Markov Chain Monte Carlo for Risk Measures." In 4th International Conference on Simulation and Modeling Methodologies, Technologies and Applications. SCITEPRESS - Science and Technology Publications, 2014. http://dx.doi.org/10.5220/0005035303300338.

7

Ruessink, B. G. "Application of Markov Chain simulation for model calibration." In The 2006 IEEE International Joint Conference on Neural Network Proceedings. IEEE, 2006. http://dx.doi.org/10.1109/ijcnn.2006.247007.

8

Israel, Wescott B., and John B. Ferris. "Developing Markov chain models for road surface simulation." In Defense and Security Symposium, edited by Kevin Schum and Dawn A. Trevisani. SPIE, 2007. http://dx.doi.org/10.1117/12.720066.

9

Chen, Chun, Chao-hsin Lin, and Qingyan Chen. "Predicting Transient Particle Transport in Enclosed Environments Based on Markov Chain." In 2013 Building Simulation Conference. IBPSA, 2013. http://dx.doi.org/10.26868/25222708.2013.1202.

10

Buist, Eric, Wyean Chan, and Pierre L'Ecuyer. "Speeding up call center simulation and optimization by Markov chain uniformization." In 2008 Winter Simulation Conference (WSC). IEEE, 2008. http://dx.doi.org/10.1109/wsc.2008.4736250.


Reports on the topic "Markov chain simulation":

1

Calvin, James M. Markov Chain Moment Formulas for Regenerative Simulation. Fort Belvoir, VA: Defense Technical Information Center, June 1989. http://dx.doi.org/10.21236/ada210684.

2

Athreya, Krishna B., Hani Doss, and Jayaram Sethuraman. A Proof of Convergence of the Markov Chain Simulation Method. Fort Belvoir, VA: Defense Technical Information Center, July 1992. http://dx.doi.org/10.21236/ada255456.

3

Glaser, R., G. Johannesson, S. Sengupta, B. Kosovic, S. Carle, G. Franz, R. Aines, et al. Stochastic Engine Final Report: Applying Markov Chain Monte Carlo Methods with Importance Sampling to Large-Scale Data-Driven Simulation. Office of Scientific and Technical Information (OSTI), March 2004. http://dx.doi.org/10.2172/15009813.

4

Krebs, William B. Markov Chain Simulations of Binary Matrices. Fort Belvoir, VA: Defense Technical Information Center, January 1992. http://dx.doi.org/10.21236/ada249265.

