Academic literature on the topic 'Markov chain Monte Carlo (MCMC)'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Markov chain Monte Carlo (MCMC).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Markov chain Monte Carlo (MCMC)"

1

Borkar, Vivek S. "Markov Chain Monte Carlo (MCMC)." Resonance 27, no. 7 (July 2022): 1107–15. http://dx.doi.org/10.1007/s12045-022-1407-1.

2

Roy, Vivekananda. "Convergence Diagnostics for Markov Chain Monte Carlo." Annual Review of Statistics and Its Application 7, no. 1 (March 9, 2020): 387–412. http://dx.doi.org/10.1146/annurev-statistics-031219-041300.

Abstract:
Markov chain Monte Carlo (MCMC) is one of the most useful approaches to scientific computing because of its flexible construction, ease of use, and generality. Indeed, MCMC is indispensable for performing Bayesian analysis. Two critical questions that MCMC practitioners need to address are where to start and when to stop the simulation. Although a great amount of research has gone into establishing convergence criteria and stopping rules with sound theoretical foundation, in practice, MCMC users often decide convergence by applying empirical diagnostic tools. This review article discusses the most widely used MCMC convergence diagnostic tools. Some recently proposed stopping rules with firm theoretical footing are also presented. The convergence diagnostics and stopping rules are illustrated using three detailed examples.
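Roy's review centers on empirical diagnostics computed from simulation output; the best known is the Gelman-Rubin potential scale reduction factor, which compares between-chain and within-chain variability across parallel chains. A minimal sketch of that statistic (my own illustration, not code from the article):

```python
import numpy as np

def gelman_rubin(chains):
    """Basic Gelman-Rubin R-hat for m parallel chains of equal length n."""
    chains = np.asarray(chains, dtype=float)   # shape (m, n)
    m, n = chains.shape
    B = n * chains.mean(axis=1).var(ddof=1)    # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()      # within-chain variance
    var_hat = (n - 1) / n * W + B / n          # pooled variance estimate
    return np.sqrt(var_hat / W)

rng = np.random.default_rng(0)
mixed = rng.normal(size=(4, 1000))  # four chains sampling the same target
rhat = gelman_rubin(mixed)          # close to 1 when the chains have mixed
```

Values far above 1 indicate that the chains have not yet converged to a common distribution.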
3

Jones, Galin L., and Qian Qin. "Markov Chain Monte Carlo in Practice." Annual Review of Statistics and Its Application 9, no. 1 (March 7, 2022): 557–78. http://dx.doi.org/10.1146/annurev-statistics-040220-090158.

Abstract:
Markov chain Monte Carlo (MCMC) is an essential set of tools for estimating features of probability distributions commonly encountered in modern applications. For MCMC simulation to produce reliable outcomes, it needs to generate observations representative of the target distribution, and it must be long enough so that the errors of Monte Carlo estimates are small. We review methods for assessing the reliability of the simulation effort, with an emphasis on those most useful in practically relevant settings. Both strengths and weaknesses of these methods are discussed. The methods are illustrated in several examples and in a detailed case study.
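One way to check that "the errors of Monte Carlo estimates are small", as the abstract puts it, is to attach a Monte Carlo standard error to the chain average; the batch-means estimator is a standard tool for this. The following sketch is illustrative only (the batch count is an arbitrary choice, not a recommendation from the article):

```python
import numpy as np

def batch_means_se(chain, n_batches=30):
    """Monte Carlo standard error of the chain mean via batch means."""
    chain = np.asarray(chain, dtype=float)
    batch_size = len(chain) // n_batches
    batches = chain[: n_batches * batch_size].reshape(n_batches, batch_size)
    return batches.mean(axis=1).std(ddof=1) / np.sqrt(n_batches)

rng = np.random.default_rng(3)
iid = rng.normal(size=30_000)  # stand-in for MCMC output
se = batch_means_se(iid)       # for iid N(0,1) draws, roughly 1/sqrt(30000)
```

For correlated MCMC output the batch size must be large enough that batch means are approximately independent.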
5

Siems, Tobias. "Markov Chain Monte Carlo on finite state spaces." Mathematical Gazette 104, no. 560 (June 18, 2020): 281–87. http://dx.doi.org/10.1017/mag.2020.51.

Abstract:
We elaborate the idea behind Markov chain Monte Carlo (MCMC) methods in a mathematically coherent, yet simple and understandable way. To this end, we prove a pivotal convergence theorem for finite Markov chains and a minimal version of the Perron-Frobenius theorem. Subsequently, we briefly discuss two fundamental MCMC methods, the Gibbs and Metropolis-Hastings sampler. Only very basic knowledge about matrices, convergence of real sequences and probability theory is required.
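Siems works on finite state spaces, where a sampler is just a transition matrix and stationarity can be checked by matrix algebra. As an illustration in that spirit (the three-state target is my own toy example), the Metropolis-Hastings construction with a uniform proposal yields a matrix that leaves the target distribution invariant:

```python
import numpy as np

pi = np.array([0.2, 0.3, 0.5])   # toy target distribution on {0, 1, 2}
n = len(pi)
Q = np.full((n, n), 1.0 / n)     # symmetric proposal: uniform over states

# Metropolis-Hastings transition matrix: move i -> j with acceptance
# probability min(1, pi[j] / pi[i]); rejected mass stays on the diagonal.
P = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j:
            P[i, j] = Q[i, j] * min(1.0, pi[j] / pi[i])
    P[i, i] = 1.0 - P[i].sum()

stationary_check = pi @ P        # equals pi, by detailed balance
```

Detailed balance holds because pi[i] * P[i, j] = Q[i, j] * min(pi[i], pi[j]) is symmetric in i and j.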
6

Liu, Qiaomu. "Brief Introduction to Markov Chain Monte Carlo and Its Algorithms." Theoretical and Natural Science 92, no. 1 (April 17, 2025): 108–15. https://doi.org/10.54254/2753-8818/2025.22031.

Abstract:
The Markov Chain Monte Carlo (MCMC) methods have become indispensable tools in modern statistical computation, enabling researchers to approximate complex probability distributions that are otherwise intractable. This paper focuses on MCMC as used in statistics and probability to draw samples from a probability distribution. To introduce the algorithm in a relatively light and straightforward way, the paper breaks the content into two parts, Markov chains and MCMC, and brings in stochastic processes, the Markov property, ordinary Monte Carlo, and Monte Carlo integration in succession. On this foundation, the paper formally introduces the two most famous and significant MCMC algorithms: the Metropolis-Hastings algorithm and Gibbs sampling. It then summarizes the advantages and disadvantages of these two algorithms, laying a foundation for further study and deeper applications of MCMC and, hopefully, providing some assistance to scholars.
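Both algorithms named in the abstract fit in a few lines of code. As an illustrative sketch (not from the paper), here is a Gibbs sampler for a bivariate normal with correlation rho, alternating draws from the two exact conditional distributions:

```python
import numpy as np

rng = np.random.default_rng(1)
rho, n_samples = 0.8, 20_000
cond_sd = np.sqrt(1.0 - rho ** 2)  # std dev of each conditional

x = y = 0.0
samples = np.empty((n_samples, 2))
for t in range(n_samples):
    x = rng.normal(rho * y, cond_sd)  # draw x | y ~ N(rho*y, 1 - rho^2)
    y = rng.normal(rho * x, cond_sd)  # draw y | x ~ N(rho*x, 1 - rho^2)
    samples[t] = x, y

emp_rho = np.corrcoef(samples[1000:].T)[0, 1]  # near 0.8 after burn-in
```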
7

Chaudhary, A. K. "Bayesian Analysis of Two Parameter Complementary Exponential Power Distribution." NCC Journal 3, no. 1 (June 14, 2018): 1–23. http://dx.doi.org/10.3126/nccj.v3i1.20244.

Abstract:
In this paper, the Markov chain Monte Carlo (MCMC) method is used to estimate the parameters of the CEP distribution based on a complete sample. A procedure is developed to obtain Bayes estimates of the parameters of the CEP distribution using the MCMC simulation method in OpenBUGS, an established software package for Bayesian analysis using MCMC methods. The MCMC methods have been shown to be easier to implement computationally, the estimates always exist and are statistically consistent, and their probability intervals are convenient to construct. R functions are developed to study the statistical properties, model validation and comparison tools of the distribution, and the output analysis of MCMC samples generated from OpenBUGS. A real data set is considered for illustration under uniform and gamma sets of priors.
8

Chaudhary, Arun Kumar, and Vijay Kumar. "A Bayesian Estimation and Prediction of Gompertz Extension Distribution Using the MCMC Method." Nepal Journal of Science and Technology 19, no. 1 (July 1, 2020): 142–60. http://dx.doi.org/10.3126/njst.v19i1.29795.

Abstract:
In this paper, the Markov chain Monte Carlo (MCMC) method is used to estimate the parameters of the Gompertz extension distribution based on a complete sample. We have developed a procedure to obtain Bayes estimates of the parameters of the Gompertz extension distribution using the MCMC simulation method in OpenBUGS, an established software package for Bayesian analysis using MCMC methods. We have obtained the Bayes estimates of the parameters and of the hazard and reliability functions, and their probability intervals are also presented. We have applied the predictive check method to discuss the issue of model compatibility. A real data set is considered for illustration under uniform and gamma priors.
9

Chaudhary, A. K. "A Study of Perks-II Distribution via Bayesian Paradigm." Pravaha 24, no. 1 (June 12, 2018): 1–17. http://dx.doi.org/10.3126/pravaha.v24i1.20221.

Abstract:
In this paper, the Markov chain Monte Carlo (MCMC) method is used to estimate the parameters of the Perks-II distribution based on a complete sample. Procedures are developed to perform a full Bayesian analysis of the Perks-II distribution using the MCMC simulation method in OpenBUGS, an established software package for Bayesian analysis using MCMC methods. We have obtained the Bayes estimates of the parameters and of the hazard and reliability functions, and their probability intervals are also presented. We have also discussed the issue of model compatibility for the given data set. A real data set is considered for illustration under gamma sets of priors.
10

Shadare, A. E., M. N. O. Sadiku, and S. M. Musa. "Markov Chain Monte Carlo Solution of Poisson’s Equation in Axisymmetric Regions." Advanced Electromagnetics 8, no. 5 (December 17, 2019): 29–36. http://dx.doi.org/10.7716/aem.v8i5.1255.

Abstract:
The advent of Monte Carlo methods in the field of EM has seen floating random walk, fixed random walk and Exodus methods deployed to solve Poisson's equation in rectangular-coordinate and axisymmetric solution regions. However, when considering large EM domains, classical Monte Carlo methods can be time-consuming because they calculate the potential one point at a time. Thus, Markov Chain Monte Carlo (MCMC) is generally preferred to other Monte Carlo methods when considering whole-field computation. In this paper, MCMC has been applied to solve Poisson's equation in homogeneous and inhomogeneous axisymmetric regions. The MCMC results are compared with the analytical and finite difference solutions.
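For contrast with the whole-field MCMC approach the paper advocates, the classical fixed random walk it compares against computes the potential one point at a time: walkers wander on the grid until they hit a boundary, and the boundary values they land on are averaged. This sketch (grid size and boundary values are my own choices, and it solves Laplace's equation, i.e. Poisson's equation with zero source) estimates the potential at the center of a square with one edge held at 1 V:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 8  # grid nodes 0..N per axis; interior nodes satisfy 0 < i, j < N
steps = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])

def boundary_value(i, j):
    return 1.0 if i == 0 else 0.0  # edge i == 0 held at 1 V, others at 0 V

def walk_potential(start, n_walks=20_000):
    """Fixed random walk estimate of the potential at one interior node."""
    total = 0.0
    for _ in range(n_walks):
        i, j = start
        while 0 < i < N and 0 < j < N:   # wander until a boundary is hit
            di, dj = steps[rng.integers(4)]
            i, j = i + di, j + dj
        total += boundary_value(i, j)
    return total / n_walks

center = walk_potential((N // 2, N // 2))  # true value is 0.25 by symmetry
```

Repeating this per point is exactly the cost that whole-field MCMC formulations aim to avoid.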

Dissertations / Theses on the topic "Markov chain Monte Carlo (MCMC)"

1

Guha, Subharup. "Benchmark estimation for Markov Chain Monte Carlo samplers." The Ohio State University, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=osu1085594208.

2

Angelino, Elaine Lee. "Accelerating Markov chain Monte Carlo via parallel predictive prefetching." Thesis, Harvard University, 2014. http://nrs.harvard.edu/urn-3:HUL.InstRepos:13070022.

Abstract:
We present a general framework for accelerating a large class of widely used Markov chain Monte Carlo (MCMC) algorithms. This dissertation demonstrates that MCMC inference can be accelerated in a model of parallel computation that uses speculation to predict and complete computational work ahead of when it is known to be useful. By exploiting fast, iterative approximations to the target density, we can speculatively evaluate many potential future steps of the chain in parallel. In Bayesian inference problems, this approach can accelerate sampling from the target distribution, without compromising exactness, by exploiting subsets of data. It takes advantage of whatever parallel resources are available, but produces results exactly equivalent to standard serial execution. In the initial burn-in phase of chain evaluation, it achieves speedup over serial evaluation that is close to linear in the number of available cores.
3

Browne, William J. "Applying MCMC methods to multi-level models." Thesis, University of Bath, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.268210.

4

Durmus, Alain. "High dimensional Markov chain Monte Carlo methods : theory, methods and applications." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLT001/document.

Abstract:
The subject of this thesis is the analysis of Markov chain Monte Carlo (MCMC) methods and the development of new methodologies to sample from high-dimensional distributions. Our work is divided into three main topics. The first problem addressed in this manuscript is the convergence of Markov chains in Wasserstein distance. Geometric and sub-geometric convergence bounds with explicit constants are derived under appropriate conditions. These results are then applied to the study of MCMC algorithms. The first algorithm analyzed is an alternative scheme to the Metropolis Adjusted Langevin Algorithm (MALA), for which explicit geometric convergence bounds are established. The second method is the pre-conditioned Crank-Nicolson algorithm; it is shown that, under mild assumptions, the Markov chain associated with this algorithm is sub-geometrically ergodic in an appropriate Wasserstein distance. The second topic of this thesis is the study of the Unadjusted Langevin Algorithm (ULA). We are first interested in explicit convergence bounds in total variation under different kinds of assumptions on the potential associated with the target distribution. In particular, we pay attention to the dependence of the algorithm on the dimension of the state space. The case of fixed step sizes as well as the case of nonincreasing sequences of step sizes are dealt with. When the target density is strongly log-concave, explicit bounds in Wasserstein distance are established. These results are then used to derive new bounds in the total variation distance which improve on those previously derived under weaker conditions on the target density. The last part tackles new optimal scaling results for Metropolis-Hastings-type algorithms. First, we extend the pioneering result on the optimal scaling of the random walk Metropolis algorithm to target densities that are differentiable in Lp mean for p ≥ 2. Then, we derive new Metropolis-Hastings-type algorithms which have a better optimal scaling compared to MALA. Finally, the stability and convergence in total variation of these new algorithms are studied.
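The Unadjusted Langevin Algorithm studied in the thesis is simple to state: discretize the Langevin diffusion driven by the gradient of the target's potential, and skip the Metropolis correction. A minimal sketch (standard normal target and step size of my own choosing); the stationary distribution carries the step-size-dependent bias that the thesis quantifies:

```python
import numpy as np

grad_U = lambda x: x  # standard normal target: U(x) = x^2 / 2

rng = np.random.default_rng(2)
h, n_steps = 0.05, 50_000  # step size and chain length (arbitrary choices)
x, trace = 0.0, np.empty(n_steps)
for k in range(n_steps):
    # Euler-Maruyama step of the Langevin diffusion, with no accept/reject:
    x = x - h * grad_U(x) + np.sqrt(2.0 * h) * rng.normal()
    trace[k] = x

sample_std = trace[1000:].std()  # near 1, up to an O(h) discretization bias
```

Adding a Metropolis accept/reject step to this update gives MALA, which removes the discretization bias at the cost of possible rejections.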
5

Harkness, Miles Adam. "Parallel simulation, delayed rejection and reversible jump MCMC for object recognition." Thesis, University of Bristol, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.324266.

6

Smith, Corey James. "Exact Markov Chain Monte Carlo with Likelihood Approximations for Functional Linear Models." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1531833318013379.

7

Walker, Neil Rawlinson. "A Bayesian approach to the job search model and its application to unemployment durations using MCMC methods." Thesis, University of Newcastle Upon Tyne, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.299053.

9

Jeon, Juncheol. "Deterioration model for ports in the Republic of Korea using Markov chain Monte Carlo with multiple imputation." Thesis, University of Dundee, 2019. https://discovery.dundee.ac.uk/en/studentTheses/1cc538ea-1468-4d51-bcf8-711f8b9912f9.

Abstract:
The condition of infrastructure deteriorates over time as it ages, and a deterioration model predicts how and when facilities will deteriorate. In most infrastructure management systems, the deterioration model is a crucial element: it helps estimate when repairs will be carried out, how much will be needed for the maintenance of the entire facility stock, and what maintenance costs will be required over a facility's life cycle. However, the study of deterioration models for the civil infrastructure of ports is still in its infancy, and there is almost no related research in South Korea. Thus, this study aims to develop a deterioration model for the civil infrastructure of ports in South Korea. Various approaches, such as deterministic, stochastic, and artificial-intelligence methods, exist for developing deterioration models; in this research, a Markov model based on Markov chain theory, one of the stochastic methods, is used. A Markov chain is a probabilistic process among states: transitions between states follow probabilities known as transition probabilities. The key step in developing a Markov model is to find these transition probabilities, a process called calibration. In this study, the existing methods, optimization and Markov Chain Monte Carlo (MCMC), are reviewed, and improvements to each are presented. In addition, only a small amount of data was available, which distorts the model, so supplementary techniques are presented to overcome the small data size. To address the problems of the existing methods and the lack of data, deterioration models developed by four calibration methods are proposed: optimization, optimization with bootstrap, MCMC, and MCMC with multiple imputation. The four models are compared and the best-performing model is proposed. This research provides a deterioration model for ports in South Korea, suggests a more accurate calibration technique, and combines methods for supplementing insufficient data with existing calibration techniques.
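The Markov deterioration model described above propagates a distribution over condition states through an annual transition matrix; calibration means estimating that matrix's entries. A hedged sketch of the forward step only, with a made-up four-state matrix (the probabilities below are illustrative, not Jeon's calibrated values):

```python
import numpy as np

# Hypothetical annual transition matrix over condition states A (best) to D
# (worst): each row gives probabilities of staying or moving to worse states.
P = np.array([
    [0.90, 0.10, 0.00, 0.00],
    [0.00, 0.85, 0.15, 0.00],
    [0.00, 0.00, 0.80, 0.20],
    [0.00, 0.00, 0.00, 1.00],  # worst state is absorbing until repair
])

state = np.array([1.0, 0.0, 0.0, 0.0])  # every asset starts in state A
for year in range(10):
    state = state @ P                   # condition distribution each year

prob_worst_after_10y = state[3]
```

Calibration methods such as optimization or MCMC would fit the entries of P to observed condition-rating histories.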
10

Fu, Jianlin. "A Markov Chain Monte Carlo Method for Inverse Stochastic Modeling and Uncertainty Assessment." Doctoral thesis, Universitat Politècnica de València, 2008. http://hdl.handle.net/10251/1969.

Abstract:
Unlike traditional two-stage methods, a conditional and inverse-conditional simulation approach may directly generate independent, identically distributed realizations that honor both static data and state data in one step. The Markov chain Monte Carlo (McMC) method has proved a powerful tool for performing this type of stochastic simulation. One of the main advantages of McMC over traditional sensitivity-based optimization methods for inverse problems is its power, flexibility and well-posedness in incorporating observation data from different sources. In this work, an improved version of the McMC method is presented to perform the stochastic simulation of reservoirs and aquifers in the framework of multi-Gaussian geostatistics. First, a blocking scheme is proposed to overcome the limitations of the classic single-component Metropolis-Hastings-type McMC. One of the main characteristics of the blocking McMC (BMcMC) scheme is that, depending on the inconsistency between the prior model and the reality, it can preserve the prior spatial structure and statistics as the user specifies. At the same time, it improves the mixing of the Markov chain and hence enhances the computational efficiency of the McMC. Furthermore, the exploration ability and the mixing speed of McMC are efficiently improved by coupling multiscale proposals, i.e., the coupled multiscale McMC method. To make the BMcMC method capable of dealing with high-dimensional cases, a multiscale scheme is introduced to accelerate the computation of the likelihood, which greatly improves the computational efficiency of the McMC, since most of the computational effort is spent on the forward simulations. To this end, a flexible-grid full-tensor finite-difference simulator, which is widely compatible with the outputs from various upscaling subroutines, is developed to solve the flow equations, along with a constant-displacement random-walk particle-tracking method, which enhances the com…

Books on the topic "Markov chain Monte Carlo (MCMC)"

1

Gianola, Daniel, ed. Likelihood, Bayesian and MCMC Methods in Quantitative Genetics. New York: Springer-Verlag, 2002.

2

Cowles, Mary Kathryn. Possible biases induced by MCMC convergence diagnostics. Toronto: University of Toronto, Dept. of Statistics, 1997.

3

Liang, Faming, Chuanhai Liu, and Raymond J. Carroll. Advanced Markov Chain Monte Carlo Methods. Chichester, UK: John Wiley & Sons, Ltd, 2010. http://dx.doi.org/10.1002/9780470669723.

4

Gilks, W. R., S. Richardson, and D. J. Spiegelhalter, eds. Markov Chain Monte Carlo in Practice. London: Chapman & Hall, 1996.

5

Gilks, W. R., S. Richardson, and D. J. Spiegelhalter, eds. Markov Chain Monte Carlo in Practice. Boca Raton, FL: Chapman & Hall, 1998.

6

Kendall, W. S., F. Liang, and J. S. Wang, eds. Markov Chain Monte Carlo: Innovations and Applications. Singapore: World Scientific, 2005.

7

Joseph, Anosh. Markov Chain Monte Carlo Methods in Quantum Field Theories. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-46044-0.

8

Gamerman, Dani. Markov chain Monte Carlo: Stochastic simulation for Bayesian inference. London: Chapman & Hall, 1997.

9

Lopes, Hedibert Freitas, ed. Markov Chain Monte Carlo: Stochastic Simulation for Bayesian Inference. 2nd ed. Boca Raton: Taylor & Francis, 2006.

10

Liang, F. Advanced Markov chain Monte Carlo methods: Learning from past samples. Hoboken, NJ: Wiley, 2010.


Book chapters on the topic "Markov chain Monte Carlo (MCMC)"

1

Robert, Christian P., and Sylvia Richardson. "Markov Chain Monte Carlo Methods." In Discretization and MCMC Convergence Assessment, 1–25. New York, NY: Springer New York, 1998. http://dx.doi.org/10.1007/978-1-4612-1716-9_1.

2

Hanada, Masanori, and So Matsuura. "Applications of Markov Chain Monte Carlo." In MCMC from Scratch, 113–68. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-2715-7_6.

3

Hanada, Masanori, and So Matsuura. "General Aspects of Markov Chain Monte Carlo." In MCMC from Scratch, 27–38. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-2715-7_3.

4

Zhang, Yan. "Markov Chain Monte Carlo (MCMC) Simulations." In Encyclopedia of Systems Biology, 1176. New York, NY: Springer New York, 2013. http://dx.doi.org/10.1007/978-1-4419-9863-7_403.

5

van Oijen, Marcel. "Markov Chain Monte Carlo Sampling (MCMC)." In Bayesian Compendium, 35–40. Cham: Springer International Publishing, 2024. http://dx.doi.org/10.1007/978-3-031-66085-6_6.

6

Bhattacharya, Rabi, Lizhen Lin, and Victor Patrangenaru. "Markov Chain Monte Carlo (MCMC) Simulation and Bayes Theory." In Springer Texts in Statistics, 325–32. New York, NY: Springer New York, 2016. http://dx.doi.org/10.1007/978-1-4939-4032-5_14.

7

Walgama Wellalage, N. K., Tieling Zhang, Richard Dwight, and Khaled El-Akruti. "Bridge Deterioration Modeling by Markov Chain Monte Carlo (MCMC) Simulation Method." In Lecture Notes in Mechanical Engineering, 545–56. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-09507-3_47.

8

Lundén, Daniel, Gizem Çaylak, Fredrik Ronquist, and David Broman. "Automatic Alignment in Higher-Order Probabilistic Programming Languages." In Programming Languages and Systems, 535–63. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-30044-8_20.

Abstract:
Probabilistic Programming Languages (PPLs) allow users to encode statistical inference problems and automatically apply an inference algorithm to solve them. Popular inference algorithms for PPLs, such as sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC), are built around checkpoints, relevant events for the inference algorithm during the execution of a probabilistic program. Deciding the location of checkpoints is, in current PPLs, not done optimally. To solve this problem, we present a static analysis technique that automatically determines checkpoints in programs, relieving PPL users of this task. The analysis identifies a set of checkpoints that execute in the same order in every program run; they are aligned. We formalize alignment, prove the correctness of the analysis, and implement the analysis as part of the higher-order functional PPL Miking CorePPL. By utilizing the alignment analysis, we design two novel inference algorithm variants: aligned SMC and aligned lightweight MCMC. We show, through real-world experiments, that they significantly improve inference execution time and accuracy compared to standard PPL versions of SMC and MCMC.
APA, Harvard, Vancouver, ISO, and other styles
9

Wüthrich, Mario V., and Michael Merz. "Bayesian Methods, Regularization and Expectation-Maximization." In Springer Actuarial, 207–66. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-12409-9_6.

Full text
Abstract:
This chapter summarizes some techniques that use Bayes’ theorem. These are classical Bayesian statistical models using, e.g., the Markov chain Monte Carlo (MCMC) method for model fitting. We discuss regularization of regression models such as ridge and LASSO regularization, which has a Bayesian interpretation, and we consider the Expectation-Maximization (EM) algorithm. The EM algorithm is a general purpose tool that can handle incomplete data settings. We illustrate this for different examples coming from mixture distributions, censored and truncated claims data.
APA, Harvard, Vancouver, ISO, and other styles
10

Lundén, Daniel, Johannes Borgström, and David Broman. "Correctness of Sequential Monte Carlo Inference for Probabilistic Programming Languages." In Programming Languages and Systems, 404–31. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-72019-3_15.

Full text
Abstract:
Probabilistic programming is an approach to reasoning under uncertainty by encoding inference problems as programs. In order to solve these inference problems, probabilistic programming languages (PPLs) employ different inference algorithms, such as sequential Monte Carlo (SMC), Markov chain Monte Carlo (MCMC), or variational methods. Existing research on such algorithms mainly concerns their implementation and efficiency, rather than the correctness of the algorithms themselves when applied in the context of expressive PPLs. To remedy this, we give a correctness proof for SMC methods in the context of an expressive PPL calculus, representative of popular PPLs such as WebPPL, Anglican, and Birch. Previous work has studied correctness of MCMC using an operational semantics, and correctness of SMC and MCMC in a denotational setting without term recursion. However, for SMC inference—one of the most commonly used algorithms in PPLs as of today—no formal correctness proof exists in an operational setting. In particular, an open question is whether the resample locations in a probabilistic program affect the correctness of SMC. We solve this fundamental problem, and make four novel contributions: (i) we extend an untyped PPL lambda calculus and operational semantics to include explicit resample terms, expressing synchronization points in SMC inference; (ii) we prove, for the first time, that subject to mild restrictions, any placement of the explicit resample terms is valid for a generic form of SMC inference; (iii) as a result of (ii), our calculus benefits from classic results from the SMC literature: a law of large numbers and an unbiased estimate of the model evidence; and (iv) we formalize the bootstrap particle filter for the calculus and discuss how our results can be further extended to other SMC algorithms.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Markov chain Monte Carlo (MCMC)"

1

Vaiciulyte, Ingrida. "Adaptive Monte-Carlo Markov chain for multivariate statistical estimation." In International Workshop of "Stochastic Programming for Implementation and Advanced Applications". The Association of Lithuanian Serials, 2012. http://dx.doi.org/10.5200/stoprog.2012.21.

Full text
Abstract:
The estimation of the multivariate skew t-distribution by the Monte-Carlo Markov Chain (MCMC) method is considered in the paper. Thus, the MCMC procedure is constructed for recurrent estimation of the skew t-distribution, following the maximum likelihood method, where the Monte-Carlo sample size is regulated to ensure the convergence and to decrease the total amount of Monte-Carlo trials required for estimation. The confidence intervals of Monte-Carlo estimators are introduced because of their asymptotic normality. The termination rule is also implemented by testing statistical hypotheses on an insignificant change of estimates in two steps of the procedure. The algorithm developed has been tested by computer simulation with a test example. The test sample, following from the skew t-distribution, has been simulated by computer and the parameters of the skew t-distribution have been estimated by MathCAD. Next, the chi-squared criterion confirmed the hypothesis of the distribution of statistics with respect to the underlying distribution function. Thus, computer simulation confirmed the applicability of the Monte-Carlo Markov chain approach with adaptively regulated sample size for estimation of the parameters of the skew t-distribution with acceptable accuracy.
APA, Harvard, Vancouver, ISO, and other styles
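
The adaptive procedure above builds on the standard Metropolis step. As a point of reference, here is a minimal random-walk Metropolis sampler in Python; note that the target density is a stand-in standard normal rather than the paper's skew t-distribution, and the adaptive sample-size regulation is omitted:

```python
import math
import random

def log_target(x):
    # Stand-in log-density: a standard normal. The paper's target, a skew
    # t-likelihood, would slot in here instead.
    return -0.5 * x * x

def metropolis(n_samples, step=1.0, x0=0.0, seed=1):
    """Random-walk Metropolis: propose x' = x + N(0, step^2), accept with
    probability min(1, target(x') / target(x))."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)
        if math.log(rng.random()) < log_target(proposal) - log_target(x):
            x = proposal
        samples.append(x)   # on rejection, the current state is repeated
    return samples

draws = metropolis(20000)
mean = sum(draws) / len(draws)   # near 0 for this target
```

Swapping `log_target` for an unnormalized log-posterior turns the same loop into a parameter-estimation routine, since Metropolis only ever needs density ratios.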
2

Zhang, Zhen, Xupeng He, Yiteng Li, Marwa AlSinan, Hyung Kwak, and Hussein Hoteit. "Parameter Inversion in Geothermal Reservoir Using Markov Chain Monte Carlo and Deep Learning." In SPE Reservoir Simulation Conference. SPE, 2023. http://dx.doi.org/10.2118/212185-ms.

Full text
Abstract:
Traditional history-matching process suffers from non-uniqueness solutions, subsurface uncertainties, and high computational cost. This work proposes a robust history-matching workflow utilizing the Bayesian Markov Chain Monte Carlo (MCMC) and Bidirectional Long-Short Term Memory (BiLSTM) network to perform history matching under uncertainties for geothermal resource development efficiently. There are mainly four steps. Step 1: Identifying uncertainty parameters. Step 2: The BiLSTM is built to map the nonlinear relationship between the key uncertainty parameters (e.g., injection rates, reservoir temperature, etc.) and time series outputs (temperature of producer). Bayesian optimization is used to automate the tuning process of the hyper-parameters. Step 3: The Bayesian MCMC is performed to inverse the uncertainty parameters. The BiLSTM is served as the forward model to reduce the computational expense. Step 4: If the errors of the predicted response between the high-fidelity model and Bayesian MCMC are high, we need to revisit the accuracy of the BiLSTM and the prior information on the uncertainty parameters. We demonstrate the proposed method using a 3D fractured geothermal reservoir, where the cold water is injected into a geothermal reservoir, and the energy is extracted by producing hot water in a producer. Results show that the proposed Bayesian MCMC and BiLSTM method can successfully inverse the uncertainty parameters with narrow uncertainties by comparing the inversed parameters and the ground truth. We then compare its superiority with models like PCE, Kriging, and SVR, and our method achieves the highest accuracy. We propose a Bayesian MCMC and BiLSTM-based history matching method for uncertainty parameters inversion and demonstrate its accuracy and robustness compared with other models. This approach provides an efficient and practical history-matching method for geothermal extraction with significant uncertainties.
APA, Harvard, Vancouver, ISO, and other styles
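
The core idea of the workflow, running MCMC against a cheap surrogate instead of the expensive simulator, can be sketched in a few lines. Everything below is a toy stand-in: a linear "surrogate" replaces the BiLSTM, and a single scalar parameter replaces the reservoir uncertainties:

```python
import math
import random

def surrogate(theta):
    # Pretend emulator of the forward model; in the paper this role is played
    # by a trained BiLSTM mapping uncertainty parameters to producer temperature.
    return 2.0 * theta + 1.0

rng = random.Random(7)
theta_true, sigma = 1.5, 0.2
# Synthetic "observed history": surrogate output plus measurement noise.
data = [surrogate(theta_true) + rng.gauss(0.0, sigma) for _ in range(50)]

def log_posterior(theta):
    # Flat prior on [0, 3]; Gaussian likelihood around the surrogate prediction.
    if not 0.0 <= theta <= 3.0:
        return -math.inf
    pred = surrogate(theta)
    return -sum((y - pred) ** 2 for y in data) / (2.0 * sigma ** 2)

theta, chain = 0.5, []
for _ in range(10000):
    prop = theta + rng.gauss(0.0, 0.1)
    if math.log(rng.random()) < log_posterior(prop) - log_posterior(theta):
        theta = prop
    chain.append(theta)

burned = chain[2000:]                      # discard burn-in
theta_hat = sum(burned) / len(burned)      # posterior mean, near theta_true
```

Because each likelihood evaluation calls only the surrogate, the MCMC loop is cheap even when the underlying reservoir simulation is not; step 4 of the workflow then checks the surrogate against the high-fidelity model.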
3

Auvinen, Harri, Tuomo Raitio, Samuli Siltanen, and Paavo Alku. "Utilizing Markov chain Monte Carlo (MCMC) method for improved glottal inverse filtering." In Interspeech 2012. ISCA: ISCA, 2012. http://dx.doi.org/10.21437/interspeech.2012-450.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Guzman, Rel. "Monte Carlo Methods on High Dimensional Data." In LatinX in AI at Neural Information Processing Systems Conference 2018. Journal of LatinX in AI Research, 2018. http://dx.doi.org/10.52591/lxai2018120314.

Full text
Abstract:
Markov Chain Monte Carlo (MCMC) simulation is a family of stochastic algorithms that are commonly used to approximate probability distributions by generating samples. The aim of this proposal is to deal with the problem of doing that job on a large scale: given the increasing computational demands of tall or wide data, a study that combines statistical and engineering expertise can be made in order to achieve hardware-accelerated MCMC inference. In this work, I attempt to advance the theory and practice of approximate MCMC methods by developing a toolbox of distributed MCMC algorithms; then a new method for dealing with large-scale problems will be proposed, or else a framework for choosing the most appropriate method will be established. Papers like [1] provide a comprehensive review of the existing literature regarding methods to tackle big data problems. My idea is to tackle divide-and-conquer approaches since they can run distributed across several machines or on Graphics Processing Units (GPUs), so I cover the theory behind these methods; then, exhaustive experimental tests will help me compare and categorize them according to their limitations on wide and tall data by considering the dataset size n, sample dimension d, and number of samples T to produce.
APA, Harvard, Vancouver, ISO, and other styles
5

Emery, A. F., and E. Valenti. "Estimating Parameters of a Packed Bed by Least Squares and Markov Chain Monte Carlo." In ASME 2005 International Mechanical Engineering Congress and Exposition. ASMEDC, 2005. http://dx.doi.org/10.1115/imece2005-82086.

Full text
Abstract:
Most parameter estimation is based upon the assumption of normally distributed errors using least squares and the confidence intervals are computed from the sensitivities and the statistics of the residuals. For nonlinear problems, the assumption of a normal distribution of the parameters may not be valid. Determining the probability density distribution can be difficult, particularly when there is more than one parameter to be estimated or there is uncertainty about other parameters. An alternative approach is Bayesian inference, but the numerical computations can be expensive. Markov Chain Monte Carlo (MCMC) may alleviate some of the expense. The paper describes the application of MCMC to estimate the mass flow rate, the heat transfer coefficient, and the specific heat of a packed bed regenerator.
APA, Harvard, Vancouver, ISO, and other styles
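
The point made in the abstract above, that MCMC sidesteps the normality assumption by summarizing the posterior directly from draws, can be illustrated with a toy sketch. The exponential "model", the noise level, and the parameter below are invented for illustration; the paper's packed-bed regenerator model is far richer:

```python
import math
import random

rng = random.Random(11)
k_true = 0.8
# Synthetic measurements from a made-up decay model y(t) = exp(-k t) + noise.
data = [math.exp(-k_true * t) + rng.gauss(0.0, 0.05) for t in range(1, 11)]

def log_post(k):
    # Positivity constraint: the posterior need not be normal, which is
    # exactly when least-squares confidence intervals become suspect.
    if k <= 0.0:
        return -math.inf
    return -sum((y - math.exp(-k * t)) ** 2
                for t, y in zip(range(1, 11), data)) / (2 * 0.05 ** 2)

k, draws = 0.5, []
for _ in range(20000):
    prop = k + rng.gauss(0.0, 0.05)
    if math.log(rng.random()) < log_post(prop) - log_post(k):
        k = prop
    draws.append(k)

# Credible interval straight from the empirical posterior quantiles.
post = sorted(draws[5000:])
lo, hi = post[int(0.025 * len(post))], post[int(0.975 * len(post))]
```

The interval `[lo, hi]` is read off the sampled posterior itself, so it remains valid even when the posterior is skewed or otherwise non-Gaussian.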
6

ur Rehman, M. Javvad, Sarat Chandra Dass, and Vijanth Sagayan Asirvadam. "Markov chain Monte Carlo (MCMC) method for parameter estimation of nonlinear dynamical systems." In 2015 IEEE International Conference on Signal and Image Processing Applications (ICSIPA). IEEE, 2015. http://dx.doi.org/10.1109/icsipa.2015.7412154.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Hassan, Badreldin G. H., Isameldin A. Atiem, and Ping Feng. "Rainfall Frequency Analysis of Sudan by Using Bayesian Markov chain Monte Carlo (MCMC) methods." In 2013 International Conference on Information Science and Technology Applications. Paris, France: Atlantis Press, 2013. http://dx.doi.org/10.2991/icista.2013.21.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Wiese, Jonas Gregor, Lisa Wimmer, Theodore Papamarkou, Bernd Bischl, Stephan Guennemann, and David Ruegamer. "Towards Efficient MCMC Sampling in Bayesian Neural Networks by Exploiting Symmetry (Extended Abstract)." In Thirty-Third International Joint Conference on Artificial Intelligence {IJCAI-24}. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/943.

Full text
Abstract:
Bayesian inference in deep neural networks is challenging due to the high-dimensional, strongly multi-modal parameter posterior density landscape. Markov chain Monte Carlo approaches asymptotically recover the true posterior but are considered prohibitively expensive for large modern architectures. We argue that the dilemma between exact-but-unaffordable and cheap-but-inexact approaches can be mitigated by exploiting symmetries in the posterior landscape. We show theoretically that the posterior predictive density in Bayesian neural networks can be restricted to a symmetry-free parameter reference set. By further deriving an upper bound on the number of Monte Carlo chains required to capture the functional diversity, we propose a straightforward approach for feasible Bayesian inference.
APA, Harvard, Vancouver, ISO, and other styles
9

Niaki, Farbod Akhavan, Durul Ulutan, and Laine Mears. "Parameter Estimation Using Markov Chain Monte Carlo Method in Mechanistic Modeling of Tool Wear During Milling." In ASME 2015 International Manufacturing Science and Engineering Conference. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/msec2015-9357.

Full text
Abstract:
Several models have been proposed to describe the relationship between cutting parameters and machining outputs such as cutting forces and tool wear. However, these models usually cannot be generalized, due to the inherent uncertainties that exist in the process. These uncertainties may originate from machining, workpiece material composition, and measurements. A stochastic approach can be utilized to compensate for the lack of certainty in machining, particularly for tool wear evolution. The Markov Chain Monte Carlo (MCMC) method is a powerful tool for addressing uncertainties in machining parameter estimation. The Hybrid Metropolis-Gibbs algorithm has been chosen in this work to estimate the unknown parameters in a mechanistic tool wear model for end milling of difficult-to-machine alloys. The results show the good potential of Markov Chain Monte Carlo modeling for predicting parameters in the presence of uncertainties.
APA, Harvard, Vancouver, ISO, and other styles
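
The Metropolis-within-Gibbs strategy named in the abstract above cycles through the parameters, updating each with its own Metropolis test while the others are held fixed. A minimal sketch on a toy two-parameter target follows; the correlated Gaussian here is an arbitrary stand-in for the tool-wear posterior, not the paper's model:

```python
import math
import random

def log_post(a, b):
    # Toy target: bivariate normal with unit variances and correlation 0.5,
    # i.e. log-density proportional to -(a^2 - ab + b^2) / 1.5.
    return -(a * a - a * b + b * b) / 1.5

rng = random.Random(3)
a, b = 0.0, 0.0
chain = []
for _ in range(20000):
    # Update a, holding b fixed (one Metropolis test per coordinate).
    prop = a + rng.gauss(0.0, 1.0)
    if math.log(rng.random()) < log_post(prop, b) - log_post(a, b):
        a = prop
    # Update b, holding a fixed.
    prop = b + rng.gauss(0.0, 1.0)
    if math.log(rng.random()) < log_post(a, prop) - log_post(a, b):
        b = prop
    chain.append((a, b))
```

Cycling coordinate by coordinate is what makes the hybrid scheme practical for multi-parameter models: each sub-step is a cheap one-dimensional move, yet the chain still targets the joint posterior.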
10

M, Avinash Subramaniam, and Yash Vasavada. "A Markov Chain Monte Carlo (MCMC) Gibbs Sampler Augmented with Zero Forcing Detection for OTFS Reception." In 2023 IEEE Wireless Antenna and Microwave Symposium (WAMS). IEEE, 2023. http://dx.doi.org/10.1109/wams57261.2023.10242942.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Markov chain Monte Carlo (MCMC)"

1

Pasupuleti, Murali Krishna. Stochastic Computation for AI: Bayesian Inference, Uncertainty, and Optimization. National Education Services, March 2025. https://doi.org/10.62311/nesx/rriv325.

Full text
Abstract:
Stochastic computation is a fundamental approach in artificial intelligence (AI) that enables probabilistic reasoning, uncertainty quantification, and robust decision-making in complex environments. This research explores the theoretical foundations, computational techniques, and real-world applications of stochastic methods, focusing on Bayesian inference, Monte Carlo methods, stochastic optimization, and uncertainty-aware AI models. Key topics include probabilistic graphical models, Markov Chain Monte Carlo (MCMC), variational inference, stochastic gradient descent (SGD), and Bayesian deep learning. These techniques enhance AI's ability to handle uncertain, noisy, and high-dimensional data while ensuring scalability, interpretability, and trustworthiness in applications such as robotics, financial modeling, autonomous systems, and healthcare AI. Case studies demonstrate how stochastic computation improves self-driving car navigation, financial risk assessment, personalized medicine, and reinforcement learning-based automation. The findings underscore the importance of integrating probabilistic modeling with deep learning, reinforcement learning, and optimization techniques to develop AI systems that are more adaptable, scalable, and uncertainty-aware. Keywords: stochastic computation, Bayesian inference, probabilistic AI, Monte Carlo methods, Markov Chain Monte Carlo (MCMC), variational inference, uncertainty quantification, stochastic optimization, Bayesian deep learning, reinforcement learning, probabilistic graphical models, stochastic gradient descent (SGD), uncertainty-aware AI, probabilistic reasoning, risk assessment, AI in robotics, AI in finance, AI in healthcare, decision-making under uncertainty, trustworthiness in AI, scalable AI, interpretable AI.
APA, Harvard, Vancouver, ISO, and other styles
2

Oskolkov, Nikolay. Machine Learning for Computational Biology. Instats Inc., 2024. http://dx.doi.org/10.61700/l01vi14ohm8en1490.

Full text
Abstract:
This one-day workshop, led by Nikolay Oskolkov from Lund University, provides a comprehensive introduction to machine learning techniques in computational biology, focusing on both theoretical knowledge and practical coding skills in R and Python. Participants will learn to implement from scratch and optimize algorithms such as neural networks, random forest, k-means clustering, and Markov Chain Monte Carlo (MCMC), making it an essential resource for advancing research in biostatistics, genetics, and data science.
APA, Harvard, Vancouver, ISO, and other styles
3

Zang, Emma. Bayesian Statistics for Social and Health Scientists in R and Python. Instats Inc., 2023. http://dx.doi.org/10.61700/obtt1o65iw3ui469.

Full text
Abstract:
This seminar will introduce you to Bayesian statistics, which are increasingly popular and offer a powerful alternative to more traditional forms of statistical analysis. Targeted at a social and health science audience, the seminar will cover the fundamentals of Bayesian inference and illustrate a variety of techniques with applied examples of Bayesian regressions and hierarchical models. You will gain an understanding of Markov chain Monte Carlo (MCMC) methods and learn how to develop and validate Bayesian models so that you can apply them in your daily research, with the kinds of intuitive inferences that Bayesian methods allow. An official Instats certificate of completion is provided at the conclusion of the seminar. For European PhD students, the seminar offers 2 ECTS Equivalent points.
APA, Harvard, Vancouver, ISO, and other styles
4

Zang, Emma. Bayesian Statistics for Social and Health Scientists in R and Python + 2 Free Seminars. Instats Inc., 2022. http://dx.doi.org/10.61700/bgfpomu3wdhe5469.

Full text
Abstract:
This seminar will introduce you to Bayesian statistics, which are increasingly popular and offer a powerful alternative to more traditional forms of statistical analysis. Targeted at a social and health science audience, the seminar will cover the fundamentals of Bayesian inference and illustrate a variety of techniques with applied examples of Bayesian regressions and hierarchical models. You will gain an understanding of Markov chain Monte Carlo (MCMC) methods and learn how to develop and validate Bayesian models so that you can apply them in your daily research, with the kinds of intuitive inferences that Bayesian methods allow. When purchasing the seminar you will be freely enrolled in two on-demand seminars for Path Analysis in R and CFA/SEM in R with Bayesian estimation by Professor Zyphur, helping you to extend your learning and offering a substantial value. An official Instats certificate of completion is provided at the conclusion of the seminar. For European PhD students, the seminar offers 2 ECTS Equivalent points.
APA, Harvard, Vancouver, ISO, and other styles
5

Gelfand, Alan E., and Sujit K. Sahu. On Markov Chain Monte Carlo Acceleration. Fort Belvoir, VA: Defense Technical Information Center, April 1994. http://dx.doi.org/10.21236/ada279393.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Safta, Cosmin, Mohammad Khalil, and Habib N. Najm. Transitional Markov Chain Monte Carlo Sampler in UQTk. Office of Scientific and Technical Information (OSTI), March 2020. http://dx.doi.org/10.2172/1606084.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Warnes, Gregory R. HYDRA: A Java Library for Markov Chain Monte Carlo. Fort Belvoir, VA: Defense Technical Information Center, March 2002. http://dx.doi.org/10.21236/ada459649.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Reddy, S., and A. Crisp. Deep Neural Network Informed Markov Chain Monte Carlo Methods. Office of Scientific and Technical Information (OSTI), November 2023. http://dx.doi.org/10.2172/2283285.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Bates, Cameron Russell, and Edward Allen Mckigney. Metis: A Pure Metropolis Markov Chain Monte Carlo Bayesian Inference Library. Office of Scientific and Technical Information (OSTI), January 2018. http://dx.doi.org/10.2172/1417145.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Small, Matthew. Determining the Mass Function of Planetesimals Using Markov Chain Monte Carlo Simulations. Ames (Iowa): Iowa State University, May 2022. http://dx.doi.org/10.31274/cc-20240624-524.

Full text
APA, Harvard, Vancouver, ISO, and other styles