Journal articles on the topic 'MCMC algoritmus'

Consult the top 50 journal articles for your research on the topic 'MCMC algoritmus.'


1

Drugan, Mădălina M., and Dirk Thierens. "Geometrical Recombination Operators for Real-Coded Evolutionary MCMCs." Evolutionary Computation 18, no. 2 (2010): 157–98. http://dx.doi.org/10.1162/evco.2010.18.2.18201.

Abstract:
Markov chain Monte Carlo (MCMC) algorithms are sampling methods for intractable distributions. In this paper, we propose and investigate algorithms that improve the sampling process from multi-dimensional real-coded spaces. We present MCMC algorithms that run a population of samples and apply recombination operators in order to exchange useful information and preserve commonalities in highly probable individual states. We call this class of algorithms Evolutionary MCMCs (EMCMCs). We introduce and analyze various recombination operators which generate new samples by use of linear transformations, for instance, by translation or rotation. These recombination methods discover specific structures in the search space and adapt the population samples to the proposal distribution. We investigate how to integrate recombination in the MCMC framework to sample from a desired distribution. The recombination operators generate individuals with a computational effort that scales linearly in the number of dimensions and the number of parents. We present results from experiments conducted on a mixture of multivariate normal distributions. These results show that the recombinative EMCMCs outperform the standard MCMCs for target distributions that have a nontrivial structural relationship between the dimensions.
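The recombination idea above can be illustrated with a minimal, hypothetical sketch (not the authors' operators): a population of chains in which each chain proposes a translation along the difference of two other members, a symmetric move in the spirit of differential-evolution MCMC, so the plain Metropolis ratio applies.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Example target: standard bivariate normal (a placeholder for any density).
    return -0.5 * np.sum(x**2)

def population_mcmc(n_chains=10, dim=2, n_iters=3000, gamma=0.7, eps=1e-3):
    """Population MCMC with a translation-style recombination proposal:
    chain i moves along the difference of two other chains. The move is
    symmetric given the rest of the population, so the plain Metropolis
    acceptance ratio is valid for the product target."""
    pop = rng.normal(size=(n_chains, dim))
    logp = np.array([log_target(x) for x in pop])
    samples = []
    for _ in range(n_iters):
        for i in range(n_chains):
            j, k = rng.choice([c for c in range(n_chains) if c != i],
                              size=2, replace=False)
            prop = pop[i] + gamma * (pop[j] - pop[k]) + eps * rng.normal(size=dim)
            lp = log_target(prop)
            if np.log(rng.uniform()) < lp - logp[i]:
                pop[i], logp[i] = prop, lp
        samples.append(pop.copy())
    return np.concatenate(samples[n_iters // 2:])  # discard burn-in

draws = population_mcmc()
print(draws.mean(axis=0))  # close to (0, 0) for the normal target
```

The difference-vector move adapts proposal directions to the population's spread, which is the intuition behind recombination operators that "discover specific structures in the search space".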
2

Liang, Faming, and Ick-Hoon Jin. "A Monte Carlo Metropolis-Hastings Algorithm for Sampling from Distributions with Intractable Normalizing Constants." Neural Computation 25, no. 8 (2013): 2199–234. http://dx.doi.org/10.1162/neco_a_00466.

Abstract:
Simulating from distributions with intractable normalizing constants has been a long-standing problem in machine learning. In this letter, we propose a new algorithm, the Monte Carlo Metropolis-Hastings (MCMH) algorithm, for tackling this problem. The MCMH algorithm is a Monte Carlo version of the Metropolis-Hastings algorithm. It replaces the unknown normalizing constant ratio by a Monte Carlo estimate in simulations, while still converging, as shown in the letter, to the desired target distribution under mild conditions. The MCMH algorithm is illustrated with spatial autologistic models and exponential random graph models. Unlike other auxiliary variable Markov chain Monte Carlo (MCMC) algorithms, such as the Møller and exchange algorithms, the MCMH algorithm avoids the requirement for perfect sampling and thus can be applied to many statistical models for which perfect sampling is unavailable or very expensive. The MCMH algorithm can also be applied to Bayesian inference for random effect models and missing data problems that involve simulations from a distribution with intractable integrals.
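The key step of the MCMH idea, replacing the unknown normalizing-constant ratio with a Monte Carlo estimate, can be sketched on a toy exponential-family model (a hypothetical example, not the paper's implementation; here exact sampling by enumeration stands in for the auxiliary simulation step):

```python
import numpy as np

rng = np.random.default_rng(1)
support = np.arange(21)  # toy model: f(x | theta) ∝ exp(theta * x), x = 0..20

def sample_model(theta, m):
    # Exact sampling by enumeration; with a genuinely intractable model
    # this auxiliary step would be a short simulation run instead.
    w = np.exp(theta * support)
    return rng.choice(support, size=m, p=w / w.sum())

def log_ratio_estimate(theta_new, theta_old, m=2000):
    # Monte Carlo estimate of log(Z(theta_new) / Z(theta_old)) via the identity
    # Z(theta_new)/Z(theta_old) = E_{x ~ f(.|theta_old)}[exp((theta_new - theta_old) x)].
    aux = sample_model(theta_old, m)
    return np.log(np.mean(np.exp((theta_new - theta_old) * aux)))

def mcmh(x_obs, n_iters=2000, step=0.1):
    theta, chain = 0.0, []
    for _ in range(n_iters):
        prop = theta + step * rng.normal()
        # Log acceptance ratio: likelihood kernel + N(0,1) prior, with the
        # intractable log Z(prop) - log Z(theta) replaced by its estimate.
        log_r = ((prop - theta) * x_obs
                 - log_ratio_estimate(prop, theta)
                 - 0.5 * (prop**2 - theta**2))
        if np.log(rng.uniform()) < log_r:
            theta = prop
        chain.append(theta)
    return np.array(chain)

chain = mcmh(x_obs=15)
print(chain[1000:].mean())  # posterior mean of theta, pulled above 0 by x_obs = 15
```

Because x_obs = 15 exceeds the model mean at theta = 0, the posterior concentrates on positive theta, which the chain recovers despite the noisy acceptance ratio.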
3

Robert, Christian P., Víctor Elvira, Nick Tawn, and Changye Wu. "Accelerating MCMC algorithms." Wiley Interdisciplinary Reviews: Computational Statistics 10, no. 5 (2018): e1435. http://dx.doi.org/10.1002/wics.1435.

4

Holden, Lars. "Mixing of MCMC algorithms." Journal of Statistical Computation and Simulation 89, no. 12 (2019): 2261–79. http://dx.doi.org/10.1080/00949655.2019.1615064.

5

Papaioannou, Iason, Wolfgang Betz, Kilian Zwirglmaier, and Daniel Straub. "MCMC algorithms for Subset Simulation." Probabilistic Engineering Mechanics 41 (July 2015): 89–103. http://dx.doi.org/10.1016/j.probengmech.2015.06.006.

6

Nguyen, Dao, Perry de Valpine, Yves Atchade, Daniel Turek, Nicholas Michaud, and Christopher Paciorek. "Nested Adaptation of MCMC Algorithms." Bayesian Analysis 15, no. 4 (2020): 1323–43. http://dx.doi.org/10.1214/19-ba1190.

7

Rosenthal, Jeffrey S., and Jinyoung Yang. "Ergodicity of Combocontinuous Adaptive MCMC Algorithms." Methodology and Computing in Applied Probability 20, no. 2 (2017): 535–51. http://dx.doi.org/10.1007/s11009-017-9574-3.

8

Browne, William J. "MCMC algorithms for constrained variance matrices." Computational Statistics & Data Analysis 50, no. 7 (2006): 1655–77. http://dx.doi.org/10.1016/j.csda.2005.02.008.

9

Sinharay, Sandip. "Experiences With Markov Chain Monte Carlo Convergence Assessment in Two Psychometric Examples." Journal of Educational and Behavioral Statistics 29, no. 4 (2004): 461–88. http://dx.doi.org/10.3102/10769986029004461.

Abstract:
There is an increasing use of Markov chain Monte Carlo (MCMC) algorithms for fitting statistical models in psychometrics, especially in situations where the traditional estimation techniques are very difficult to apply. One of the disadvantages of using an MCMC algorithm is that it is not straightforward to determine the convergence of the algorithm. Using the output of an MCMC algorithm that has not converged may lead to incorrect inferences on the problem at hand. The convergence is not one to a point, but that of the distribution of a sequence of generated values to another distribution, and hence is not easy to assess; there is no guaranteed diagnostic tool to determine convergence of an MCMC algorithm in general. This article examines the convergence of MCMC algorithms using a number of convergence diagnostics for two real data examples from psychometrics. Findings from this research have the potential to be useful to researchers using the algorithms. For both the examples, the number of iterations required (suggested by the diagnostics) to be reasonably confident that the MCMC algorithm has converged may be larger than what many practitioners consider to be safe.
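A standard diagnostic of the kind discussed here is the Gelman-Rubin potential scale reduction factor, which compares within-chain and between-chain variance across parallel chains; a minimal sketch:

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor (R-hat) for m chains of length n.
    chains: array of shape (m, n). Values near 1 suggest the chains have
    mixed; values well above 1 indicate non-convergence."""
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()       # within-chain variance
    B = n * chain_means.var(ddof=1)             # between-chain variance
    var_plus = (n - 1) / n * W + B / n          # pooled variance estimate
    return np.sqrt(var_plus / W)

rng = np.random.default_rng(2)
mixed = rng.normal(size=(4, 1000))                      # 4 chains, same target
stuck = mixed + np.array([[0.0], [0.0], [5.0], [5.0]])  # 2 chains stuck in another mode
print(gelman_rubin(mixed))  # close to 1
print(gelman_rubin(stuck))  # substantially greater than 1
```

As the abstract cautions, a diagnostic value near 1 is necessary but not sufficient evidence of convergence; in practice several diagnostics are run together.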
10

Karagiannis, Georgios, and Christophe Andrieu. "Annealed Importance Sampling Reversible Jump MCMC Algorithms." Journal of Computational and Graphical Statistics 22, no. 3 (2013): 623–48. http://dx.doi.org/10.1080/10618600.2013.805651.

11

Andrieu, Christophe, and Yves Atchade. "On the efficiency of adaptive MCMC algorithms." Electronic Communications in Probability 12 (2007): 336–49. http://dx.doi.org/10.1214/ecp.v12-1320.

12

Liang, Faming. "Trajectory averaging for stochastic approximation MCMC algorithms." Annals of Statistics 38, no. 5 (2010): 2823–56. http://dx.doi.org/10.1214/10-aos807.

13

Neal, Peter, and Gareth Roberts. "Optimal scaling for partially updating MCMC algorithms." Annals of Applied Probability 16, no. 2 (2006): 475–515. http://dx.doi.org/10.1214/105051605000000791.

14

Mora Poblete, Freddy Luis, Sandra Perret, Carlos Alberto Scapim, Elias Nunes Martins, and María Paz Molina Brand. "Estimación de componentes de varianza y predicción de valores genéticos en poblaciones de Acacia azul usando el algoritmo de cadenas independientes." Ciencia & Investigación Forestal 13 (July 3, 2007): 133–42. http://dx.doi.org/10.52904/0718-4646.2007.81.

Abstract:
Several studies have emphasized the advantages of estimation and prediction methods based on Restricted Maximum Likelihood (REML) and Best Linear Unbiased Prediction (BLUP) in the genetic analysis of forest species. Other methodologies, however, also allow robust estimation of variance components and prediction of breeding values, for example Bayesian inference based on Markov chain Monte Carlo (MCMC) algorithms. The present study examines the feasibility of obtaining genetic parameters and predicted values in forest tree breeding using a variant of the MCMC methodology called the Independent Chains (IC) algorithm. To illustrate the Bayesian genetic analysis, a provenance trial of Acacia saligna established in northern Chile was used, within the framework of a project financed by the Fondo de Desarrollo e Innovación and carried out by the Instituto Forestal.
15

Roberts, Gareth O., and Jeffrey S. Rosenthal. "General state space Markov chains and MCMC algorithms." Probability Surveys 1 (2004): 20–71. http://dx.doi.org/10.1214/154957804100000024.

16

Liu, Shun Lan, and Lin Wang. "New Hybrid Blind Equalization Algorithms." Applied Mechanics and Materials 182-183 (June 2012): 1810–15. http://dx.doi.org/10.4028/www.scientific.net/amm.182-183.1810.

Abstract:
A novel decision-directed Modified Constant Modulus Algorithm (DD-MCMA) is first proposed. A constellation matched error (CME) function is then added to the cost function of DD-MCMA, yielding the CME-DD-MCMA algorithm. Furthermore, CME-DD-MCMA is improved by replacing the fixed step with a variable step size, giving the VSS-CME-DD-MCMA algorithm. The simulation results show that the proposed blind equalization algorithms converge considerably faster and achieve lower residual inter-symbol interference (ISI) than MCMA; among the three proposed algorithms, VSS-CME-DD-MCMA has the fastest convergence and the lowest residual ISI, but also the largest computational complexity.
17

Müller, Christian, Holger Diedam, Thomas Mrziglod, and Andreas Schuppert. "A neural network assisted Metropolis adjusted Langevin algorithm." Monte Carlo Methods and Applications 26, no. 2 (2020): 93–111. http://dx.doi.org/10.1515/mcma-2020-2060.

Abstract:
In this paper, we derive a Markov chain Monte Carlo (MCMC) algorithm supported by a neural network. In particular, we use the neural network to substitute derivative calculations made during a Metropolis adjusted Langevin algorithm (MALA) step with inexpensive neural network evaluations. Using a complex, high-dimensional blood coagulation model and a set of measurements, we define a likelihood function on which we evaluate the new MCMC algorithm. The blood coagulation model is a dynamic model, where derivative calculations are expensive and hence limit the efficiency of derivative-based MCMC algorithms. The MALA adaptation greatly reduces the time per iteration, while only slightly affecting the sample quality. We also test the new algorithm on a 2-dimensional example with a non-convex shape, a case where the MALA algorithm has a clear advantage over other state-of-the-art MCMC algorithms. To assess the impact of the new algorithm, we compare the results to previously generated results of the MALA and the random walk Metropolis-Hastings (RWMH).
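A bare-bones MALA step (with exact gradients on a toy Gaussian target; in the paper's setting, a neural-network surrogate would take the place of grad_log_target) might look like:

```python
import numpy as np

rng = np.random.default_rng(3)

def log_target(x):       # standard normal, a stand-in for a real posterior
    return -0.5 * x @ x

def grad_log_target(x):  # the derivative a surrogate network would replace
    return -x

def mala(x0, n_iters=20000, h=0.5):
    """Metropolis adjusted Langevin algorithm: a Langevin-drift Gaussian
    proposal plus a Metropolis-Hastings correction (the proposal is not
    symmetric, so the q-ratio must enter the acceptance probability)."""
    x = np.asarray(x0, float)
    lp = log_target(x)
    out = []
    for _ in range(n_iters):
        mu = x + 0.5 * h * grad_log_target(x)
        prop = mu + np.sqrt(h) * rng.normal(size=x.shape)
        mu_back = prop + 0.5 * h * grad_log_target(prop)
        lp_prop = log_target(prop)
        # log q(x | prop) - log q(prop | x) for the Gaussian proposals
        log_q = (-np.sum((x - mu_back) ** 2) + np.sum((prop - mu) ** 2)) / (2 * h)
        if np.log(rng.uniform()) < lp_prop - lp + log_q:
            x, lp = prop, lp_prop
        out.append(x.copy())
    return np.array(out)

draws = mala(np.zeros(2))[5000:]
print(draws.mean(axis=0), draws.var(axis=0))  # near (0, 0) and (1, 1)
```

Each iteration calls grad_log_target twice, which is exactly the cost the paper's neural-network substitution targets for expensive dynamic models.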
18

Rong, Teng Zhong, and Zhi Xiao. "MCMC Sampling Statistical Method to Solve the Optimization." Applied Mechanics and Materials 121-126 (October 2011): 937–41. http://dx.doi.org/10.4028/www.scientific.net/amm.121-126.937.

Abstract:
This paper designs a class of generalized density functions and, based on them, proposes a solution method for the multivariable nonlinear optimization problem using MCMC statistical sampling. Theoretical analysis proves that the maximum statistic converges to the maximum point of the probability density, which establishes a link between optimization and MCMC sampling. This statistical computation algorithm demonstrates the convergence of the maximum statistic in large samples, and its global search design avoids restriction to locally optimal solutions. The MCMC optimization algorithm keeps few iteration variables, so its computing speed is relatively high. Finally, the MCMC sampling optimization algorithm is applied to the TSP and compared with genetic algorithms.
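The sample-the-density idea behind MCMC optimization can be sketched as follows (a hypothetical one-dimensional toy with a simple annealing schedule, not the paper's generalized density construction): sampling from a density proportional to exp(beta * f(x)) concentrates the samples near the maximizer of f as beta grows, so the best sample tracks the optimum.

```python
import numpy as np

rng = np.random.default_rng(4)

def f(x):
    # Objective to maximize; global maximum at x = 3.
    return -(x - 3.0) ** 2

def mcmc_maximize(n_iters=5000, step=0.5):
    """Metropolis sampling from exp(beta * f(x)) with slowly increasing
    beta, keeping the best state seen; the maximum of the samples tracks
    the maximum point of the density."""
    x, best_x, best_f = 0.0, 0.0, f(0.0)
    for t in range(1, n_iters + 1):
        beta = 0.01 * t  # annealing schedule (an illustrative choice)
        prop = x + step * rng.normal()
        if np.log(rng.uniform()) < beta * (f(prop) - f(x)):
            x = prop
        if f(x) > best_f:
            best_x, best_f = x, f(x)
    return best_x

best = mcmc_maximize()
print(best)  # close to 3
```

Because early iterations use a small beta, the chain explores globally before the growing beta sharpens the density around the maximizer, which is the global-search property the abstract emphasizes.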
19

Padilla, Luis E., Luis O. Tellez, Luis A. Escamilla, and Jose Alberto Vazquez. "Cosmological Parameter Inference with Bayesian Statistics." Universe 7, no. 7 (2021): 213. http://dx.doi.org/10.3390/universe7070213.

Abstract:
Bayesian statistics and Markov Chain Monte Carlo (MCMC) algorithms have found their place in the field of Cosmology. They have become important mathematical and numerical tools, especially in parameter estimation and model comparison. In this paper, we review some fundamental concepts to understand Bayesian statistics and then introduce MCMC algorithms and samplers that allow us to perform the parameter inference procedure. We also introduce a general description of the standard cosmological model, known as the ΛCDM model, along with several alternatives, and current datasets coming from astrophysical and cosmological observations. Finally, with the tools acquired, we use an MCMC algorithm implemented in python to test several cosmological models and find out the combination of parameters that best describes the Universe.
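A minimal example of the parameter-inference workflow the review describes, using a random-walk Metropolis sampler on a toy linear model (the model, mock data, and step size are illustrative assumptions, not the paper's cosmological setup):

```python
import numpy as np

rng = np.random.default_rng(7)

# Mock data from a hypothetical linear model y = a*x + b (a stand-in for a
# cosmological observable); the sampler recovers a and b from noisy data.
true_a, true_b, sigma = 2.0, 1.0, 0.5
xs = np.linspace(0, 5, 40)
ys = true_a * xs + true_b + sigma * rng.normal(size=xs.size)

def log_like(theta):
    # Gaussian (chi-squared) log-likelihood with known noise level sigma.
    a, b = theta
    return -0.5 * np.sum((ys - (a * xs + b)) ** 2) / sigma**2

def metropolis(n_iters=20000, step=0.05):
    theta = np.zeros(2)          # flat prior; start away from the truth
    lp = log_like(theta)
    chain = np.empty((n_iters, 2))
    for i in range(n_iters):
        prop = theta + step * rng.normal(size=2)
        lp_prop = log_like(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

chain = metropolis()[5000:]      # discard burn-in
print(chain.mean(axis=0))        # posterior means, near (2.0, 1.0)
```

The same structure, a likelihood over parameters plus a proposal-accept loop, underlies the cosmological samplers the review surveys; only the model and data change.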
20

Gao, Wei, Hengyi Lv, Qiang Zhang, and Dunbo Cai. "Estimating the Volume of the Solution Space of SMT(LIA) Constraints by a Flat Histogram Method." Algorithms 11, no. 9 (2018): 142. http://dx.doi.org/10.3390/a11090142.

Abstract:
The satisfiability modulo theories (SMT) problem is to decide the satisfiability of a logical formula with respect to a given background theory. This work studies the counting version of SMT with respect to linear integer arithmetic (LIA), termed SMT(LIA). Specifically, the purpose of this paper is to count the number of solutions (volume) of an SMT(LIA) formula, which has many important applications and is computationally hard. To solve the counting problem, an approximate method that employs a recent Markov Chain Monte Carlo (MCMC) sampling strategy called “flat histogram” is proposed. Furthermore, two refinement strategies are proposed for the sampling process and result in two algorithms, MCMC-Flat1/2 and MCMC-Flat1/t, respectively. In MCMC-Flat1/t, a pseudo sampling strategy is introduced to evaluate the flatness of histograms. Experimental results show that our MCMC-Flat1/t method can achieve good accuracy on both structured and random instances, and our MCMC-Flat1/2 is scalable for instances of convex bodies with up to 7 variables.
21

Finke, Axel, Arnaud Doucet, and Adam M. Johansen. "Limit theorems for sequential MCMC methods." Advances in Applied Probability 52, no. 2 (2020): 377–403. http://dx.doi.org/10.1017/apr.2020.9.

Abstract:
Both sequential Monte Carlo (SMC) methods (a.k.a. ‘particle filters’) and sequential Markov chain Monte Carlo (sequential MCMC) methods constitute classes of algorithms which can be used to approximate expectations with respect to (a sequence of) probability distributions and their normalising constants. While SMC methods sample particles conditionally independently at each time step, sequential MCMC methods sample particles according to a Markov chain Monte Carlo (MCMC) kernel. Introduced over twenty years ago in [6], sequential MCMC methods have attracted renewed interest recently as they empirically outperform SMC methods in some applications. We establish an $\mathbb{L}_r$-inequality (which implies a strong law of large numbers) and a central limit theorem for sequential MCMC methods and provide conditions under which errors can be controlled uniformly in time. In the context of state-space models, we also provide conditions under which sequential MCMC methods can indeed outperform standard SMC methods in terms of asymptotic variance of the corresponding Monte Carlo estimators.
22

Roberts, Gareth O., and Jeffrey S. Rosenthal. "Complexity bounds for Markov chain Monte Carlo algorithms via diffusion limits." Journal of Applied Probability 53, no. 2 (2016): 410–20. http://dx.doi.org/10.1017/jpr.2016.9.

Abstract:
We connect known results about diffusion limits of Markov chain Monte Carlo (MCMC) algorithms to the computer science notion of algorithm complexity. Our main result states that any weak limit of a Markov process implies a corresponding complexity bound (in an appropriate metric). We then combine this result with previously known MCMC diffusion limit results to prove that under appropriate assumptions, the random-walk Metropolis algorithm in d dimensions takes O(d) iterations to converge to stationarity, while the Metropolis-adjusted Langevin algorithm takes O(d^(1/3)) iterations to converge to stationarity.
23

Atchadé, Yves, and Yizao Wang. "On the convergence rates of some adaptive Markov chain Monte Carlo algorithms." Journal of Applied Probability 52, no. 03 (2015): 811–25. http://dx.doi.org/10.1017/s0021900200113452.

Abstract:
In this paper we study the mixing time of certain adaptive Markov chain Monte Carlo (MCMC) algorithms. Under some regularity conditions, we show that the convergence rate of importance resampling MCMC algorithms, measured in terms of the total variation distance, is O(n^(-1)). By means of an example, we establish that, in general, this algorithm does not converge at a faster rate. We also study the interacting tempering algorithm, a simplified version of the equi-energy sampler, and establish that its mixing time is of order O(n^(-1/2)).
24

Atchadé, Yves, and Yizao Wang. "On the convergence rates of some adaptive Markov chain Monte Carlo algorithms." Journal of Applied Probability 52, no. 3 (2015): 811–25. http://dx.doi.org/10.1239/jap/1445543848.

Abstract:
In this paper we study the mixing time of certain adaptive Markov chain Monte Carlo (MCMC) algorithms. Under some regularity conditions, we show that the convergence rate of importance resampling MCMC algorithms, measured in terms of the total variation distance, is O(n^(-1)). By means of an example, we establish that, in general, this algorithm does not converge at a faster rate. We also study the interacting tempering algorithm, a simplified version of the equi-energy sampler, and establish that its mixing time is of order O(n^(-1/2)).
25

Septier, F., A. Carmi, S. K. Pang, and S. J. Godsill. "Multiple Object Tracking Using Evolutionary MCMC-Based Particle Algorithms." IFAC Proceedings Volumes 42, no. 10 (2009): 798–803. http://dx.doi.org/10.3182/20090706-3-fr-2004.00132.

26

Łatuszyński, Krzysztof, Błażej Miasojedow, and Wojciech Niemiro. "Nonasymptotic bounds on the estimation error of MCMC algorithms." Bernoulli 19, no. 5A (2013): 2033–66. http://dx.doi.org/10.3150/12-bej442.

27

Mossel, E. "Phylogenetic MCMC Algorithms Are Misleading on Mixtures of Trees." Science 309, no. 5744 (2005): 2207–9. http://dx.doi.org/10.1126/science.1115493.

28

Gao, Chuanming, and Kajal Lahiri. "MCMC algorithms for two recent Bayesian limited information estimators." Economics Letters 66, no. 2 (2000): 121–26. http://dx.doi.org/10.1016/s0165-1765(99)00204-9.

29

Andrieu, Christophe, and Éric Moulines. "On the ergodicity properties of some adaptive MCMC algorithms." Annals of Applied Probability 16, no. 3 (2006): 1462–505. http://dx.doi.org/10.1214/105051606000000286.

30

Yuan, Ke, Mark Girolami, and Mahesan Niranjan. "Markov Chain Monte Carlo Methods for State-Space Models with Point Process Observations." Neural Computation 24, no. 6 (2012): 1462–86. http://dx.doi.org/10.1162/neco_a_00281.

Abstract:
This letter considers how a number of modern Markov chain Monte Carlo (MCMC) methods can be applied for parameter estimation and inference in state-space models with point process observations. We quantified the efficiencies of these MCMC methods on synthetic data, and our results suggest that the Riemannian manifold Hamiltonian Monte Carlo method offers the best performance. We further compared such a method with a previously tested variational Bayes method on two experimental data sets. Results indicate similar performance on the large data sets and superior performance on small ones. The work offers an extensive suite of MCMC algorithms evaluated on an important class of models for physiological signal analysis.
31

Sun, Wenbo, and Ivona Bezáková. "Sampling Random Chordal Graphs by MCMC (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 10 (2020): 13929–30. http://dx.doi.org/10.1609/aaai.v34i10.7237.

Abstract:
Chordal graphs are a widely studied graph class, with applications in several areas of computer science, including structural learning of Bayesian networks. Many problems that are hard on general graphs become solvable on chordal graphs. The random generation of instances of chordal graphs for testing these algorithms is often required. Nevertheless, there are only a few known algorithms that generate random chordal graphs, and, as far as we know, none of them generate chordal graphs uniformly at random (where each chordal graph appears with equal probability). In this paper we propose a Markov chain Monte Carlo (MCMC) method to sample connected chordal graphs uniformly at random. Additionally, we propose a Markov chain that generates connected chordal graphs with a bounded treewidth uniformly at random. Bounding the treewidth parameter (which bounds the largest clique) has direct implications on the running time of various algorithms on chordal graphs. For each of the proposed Markov chains we prove that they are ergodic and therefore converge to the uniform distribution. Finally, as initial evidence that the Markov chains have the potential to mix rapidly, we prove that the chain on graphs with bounded treewidth mixes rapidly for trees (chordal graphs with treewidth bound of one).
32

Uimari, Pekka, and Ina Hoeschele. "Mapping-Linked Quantitative Trait Loci Using Bayesian Analysis and Markov Chain Monte Carlo Algorithms." Genetics 146, no. 2 (1997): 735–43. http://dx.doi.org/10.1093/genetics/146.2.735.

Abstract:
A Bayesian method for mapping linked quantitative trait loci (QTL) using multiple linked genetic markers is presented. Parameter estimation and hypothesis testing were implemented via Markov chain Monte Carlo (MCMC) algorithms. Parameters included were allele frequencies and substitution effects for two biallelic QTL, map positions of the QTL and markers, allele frequencies of the markers, and polygenic and residual variances. Missing data were polygenic effects and multi-locus marker-QTL genotypes. Three different MCMC schemes for testing the presence of a single or two linked QTL on the chromosome were compared. The first approach includes a model indicator variable representing two unlinked QTL affecting the trait, one linked and one unlinked QTL, or both QTL linked with the markers. The second approach incorporates an indicator variable for each QTL into the model for phenotype, allowing or not allowing for a substitution effect of a QTL on phenotype, and the third approach is based on model determination by reversible jump MCMC. Methods were evaluated empirically by analyzing simulated granddaughter designs. All methods identified correctly a second, linked QTL and did not reject the one-QTL model when there was only a single QTL and no additional or an unlinked QTL.
33

Ahmadian, Yashar, Jonathan W. Pillow, and Liam Paninski. "Efficient Markov Chain Monte Carlo Methods for Decoding Neural Spike Trains." Neural Computation 23, no. 1 (2011): 46–96. http://dx.doi.org/10.1162/neco_a_00059.

Abstract:
Stimulus reconstruction or decoding methods provide an important tool for understanding how sensory and motor information is represented in neural activity. We discuss Bayesian decoding methods based on an encoding generalized linear model (GLM) that accurately describes how stimuli are transformed into the spike trains of a group of neurons. The form of the GLM likelihood ensures that the posterior distribution over the stimuli that caused an observed set of spike trains is log concave so long as the prior is. This allows the maximum a posteriori (MAP) stimulus estimate to be obtained using efficient optimization algorithms. Unfortunately, the MAP estimate can have a relatively large average error when the posterior is highly nongaussian. Here we compare several Markov chain Monte Carlo (MCMC) algorithms that allow for the calculation of general Bayesian estimators involving posterior expectations (conditional on model parameters). An efficient version of the hybrid Monte Carlo (HMC) algorithm was significantly superior to other MCMC methods for gaussian priors. When the prior distribution has sharp edges and corners, on the other hand, the “hit-and-run” algorithm performed better than other MCMC methods. Using these algorithms, we show that for this latter class of priors, the posterior mean estimate can have a considerably lower average error than MAP, whereas for gaussian priors, the two estimators have roughly equal efficiency. We also address the application of MCMC methods for extracting nonmarginal properties of the posterior distribution. For example, by using MCMC to calculate the mutual information between the stimulus and response, we verify the validity of a computationally efficient Laplace approximation to this quantity for gaussian priors in a wide range of model parameters; this makes direct model-based computation of the mutual information tractable even in the case of large observed neural populations, where methods based on binning the spike train fail. Finally, we consider the effect of uncertainty in the GLM parameters on the posterior estimators.
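The hit-and-run sampler mentioned above is easiest to see on a uniform target over a box, where the move reduces to sampling uniformly along a random chord (a toy sketch, not the paper's decoding setup):

```python
import numpy as np

rng = np.random.default_rng(5)

def hit_and_run_box(n_iters=20000, lo=0.0, hi=1.0, dim=2):
    """Hit-and-run for the uniform distribution over a box: pick a random
    direction, compute the chord through the current point, then draw the
    next point uniformly on that chord. For a uniform target on a convex
    set no accept/reject step is needed."""
    x = np.full(dim, 0.5 * (lo + hi))
    out = []
    for _ in range(n_iters):
        d = rng.normal(size=dim)
        d /= np.linalg.norm(d)
        # For each coordinate, the range of t keeping lo <= x + t*d <= hi.
        with np.errstate(divide="ignore"):
            t1 = (lo - x) / d
            t2 = (hi - x) / d
        t_min = np.max(np.minimum(t1, t2))
        t_max = np.min(np.maximum(t1, t2))
        x = x + rng.uniform(t_min, t_max) * d
        out.append(x.copy())
    return np.array(out)

draws = hit_and_run_box()
print(draws.mean(axis=0))  # close to (0.5, 0.5)
```

Because each move can jump the full width of the set, hit-and-run handles the sharp edges and corners of box-like priors well, which matches the comparison reported in the abstract.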
34

Brooks, S. P., P. Dellaportas, and G. O. Roberts. "An Approach to Diagnosing Total Variation Convergence of MCMC Algorithms." Journal of Computational and Graphical Statistics 6, no. 3 (1997): 251–65. http://dx.doi.org/10.2307/1390732.

35

Atchadé, Yves, and Gersende Fort. "Limit theorems for some adaptive MCMC algorithms with subgeometric kernels." Bernoulli 16, no. 1 (2010): 116–54. http://dx.doi.org/10.3150/09-bej199.

36

Brooks, S. P., P. Dellaportas, and G. O. Roberts. "An Approach to Diagnosing Total Variation Convergence of MCMC Algorithms." Journal of Computational and Graphical Statistics 6, no. 3 (1997): 251–65. http://dx.doi.org/10.1080/10618600.1997.10474741.

37

Martino, Luca. "A review of multiple try MCMC algorithms for signal processing." Digital Signal Processing 75 (April 2018): 134–52. http://dx.doi.org/10.1016/j.dsp.2018.01.004.

38

Saibaba, Arvind K., Pranjal Prasad, Eric de Sturler, Eric Miller, and Misha E. Kilmer. "Randomized approaches to accelerate MCMC algorithms for Bayesian inverse problems." Journal of Computational Physics 440 (September 2021): 110391. http://dx.doi.org/10.1016/j.jcp.2021.110391.

39

van den Berg, Stéphanie M., Leo Beem, and Dorret I. Boomsma. "Fitting Genetic Models Using Markov Chain Monte Carlo Algorithms With BUGS." Twin Research and Human Genetics 9, no. 3 (2006): 334–42. http://dx.doi.org/10.1375/twin.9.3.334.

Abstract:
Maximum likelihood estimation techniques are widely used in twin and family studies, but soon reach computational boundaries when applied to highly complex models (e.g., models including gene-by-environment interaction and gene–environment correlation, item response theory measurement models, repeated measures, longitudinal structures, extended pedigrees). Markov Chain Monte Carlo (MCMC) algorithms are very well suited to fit complex models with hierarchically structured data. This article introduces the key concepts of Bayesian inference and MCMC parameter estimation and provides a number of scripts describing relatively simple models to be estimated by the freely obtainable BUGS software. In addition, inference using BUGS is illustrated using a data set on follicle-stimulating hormone and luteinizing hormone levels with repeated measures. The examples provided can serve as stepping stones for more complicated models, tailored to the specific needs of the individual researcher.
40

Guozhen, Wei, Chi Zhang, Yu Li, Liu Haixing, and Huicheng Zhou. "Source identification of sudden contamination based on the parameter uncertainty analysis." Journal of Hydroinformatics 18, no. 6 (2016): 919–27. http://dx.doi.org/10.2166/hydro.2016.002.

Abstract:
It is important to identify the source information after a sudden water contamination incident occurs in a water supply system. The accuracy of the simulation model's parameters determines the accuracy of the source information. However, it is difficult to obtain the true values of these parameters by existing methods, so reducing the errors caused by the uncertainty of these parameters is a crucial problem. A source identification framework is established which considers the uncertainty of the model's sensitive parameters and combines Bayesian inference with Markov chain Monte Carlo (MCMC) simulation; the South-to-North Water Diversion Project is taken as the case study in this paper. Compared with a framework which does not consider the uncertainty of the model's parameters, the proposed framework avoids the error caused by a wrong choice of model parameters and obtains more accurate results. In addition, the proposed framework based on traditional MCMC is compared with one based on Delayed Rejection and Adaptive Metropolis (DRAM-MCMC) to show that DRAM-MCMC is more convergent and accurate. Lastly, the proposed framework based on DRAM-MCMC is shown to solve the problem with high practicality and generality in the studied long-distance water diversion project.
41

Spade, David A. "Estimating drift and minorization coefficients for Gibbs sampling algorithms." Monte Carlo Methods and Applications 27, no. 3 (2021): 195–209. http://dx.doi.org/10.1515/mcma-2021-2093.

Full text
Abstract:
Gibbs samplers are common Markov chain Monte Carlo (MCMC) algorithms that are used to sample from intractable probability distributions when sampling directly from the full conditional distributions is possible. These types of MCMC algorithms come up frequently in many applications, and because of their popularity it is important to have a sense of how long it takes for the Gibbs sampler to become close to its stationary distribution. To this end, it is common to rely on the values of drift and minorization coefficients to bound the mixing time of the Gibbs sampler. This manuscript provides a computational method for estimating these coefficients. Herein, we detail the several advantages of the proposed methods, as well as the limitations of this approach. These limitations are primarily related to the “curse of dimensionality”, which for these methods is caused by necessary increases in the number of initial states from which chains need to be run and the need for an exponentially increasing number of grid points for estimation of minorization coefficients.
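The Gibbs-sampler setting this abstract analyzes, where the joint is treated as unavailable but each full conditional is easy to sample, can be illustrated with a textbook bivariate normal example. This is a generic sketch of the sampler itself, not the authors' method for estimating drift and minorization coefficients.

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_iter=10000, seed=0):
    """Gibbs sampler for a standard bivariate normal with correlation rho.
    Each full conditional X | Y=y is N(rho*y, 1 - rho**2), so sampling
    directly from the conditionals is easy even if we pretend the joint
    is unavailable -- the situation the abstract describes."""
    rng = np.random.default_rng(seed)
    x = y = 0.0
    out = np.empty((n_iter, 2))
    cond_sd = np.sqrt(1.0 - rho ** 2)
    for i in range(n_iter):
        x = rng.normal(rho * y, cond_sd)  # draw x | y
        y = rng.normal(rho * x, cond_sd)  # draw y | x
        out[i] = (x, y)
    return out

draws = gibbs_bivariate_normal(rho=0.9)
```

The mixing-time question the paper addresses is visible even here: as rho approaches 1 the conditionals become nearly degenerate and the chain moves in ever smaller steps, so the time to reach stationarity grows.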
APA, Harvard, Vancouver, ISO, and other styles
42

Zhang, Chi, John P. Huelsenbeck, and Fredrik Ronquist. "Using Parsimony-Guided Tree Proposals to Accelerate Convergence in Bayesian Phylogenetic Inference." Systematic Biology 69, no. 5 (2020): 1016–32. http://dx.doi.org/10.1093/sysbio/syaa002.

Full text
Abstract:
Sampling across tree space is one of the major challenges in Bayesian phylogenetic inference using Markov chain Monte Carlo (MCMC) algorithms. Standard MCMC tree moves consider small random perturbations of the topology, and select from candidate trees at random or based on the distance between the old and new topologies. MCMC algorithms using such moves tend to get trapped in tree space, making them slow in finding the globally most probable trees (known as “convergence”) and in estimating the correct proportions of the different trees (known as “mixing”). Here, we introduce a new class of moves, which propose trees based on their parsimony scores. The proposal distribution derived from the parsimony scores is a quickly computable albeit rough approximation of the conditional posterior distribution over candidate trees. We demonstrate with simulations that parsimony-guided moves correctly sample the uniform distribution of topologies from the prior. We then evaluate their performance against standard moves using six challenging empirical data sets, for which we were able to obtain accurate reference estimates of the posterior using long MCMC runs, a mix of topology proposals, and Metropolis coupling. On these data sets, ranging in size from 357 to 934 taxa and from 1740 to 5681 sites, we find that single chains using parsimony-guided moves usually converge an order of magnitude faster than chains using standard moves. They also exhibit better mixing, that is, they cover the most probable trees more quickly. Our results show that tree moves based on quick and dirty estimates of the posterior probability can significantly outperform standard moves. Future research will have to show to what extent the performance of such moves can be improved further by finding better ways of approximating the posterior probability, taking the trade-off between accuracy and speed into account. [Bayesian phylogenetic inference; MCMC; parsimony; tree proposal.]
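The key idea of guiding proposals by a cheap score while still targeting the exact posterior can be sketched in a generic discrete setting. The states, scores, and target below are hypothetical stand-ins for trees and parsimony scores; the essential ingredient, as in the paper, is the Metropolis-Hastings correction for the asymmetric, score-weighted proposal.

```python
import numpy as np

def score_guided_mh(log_target, score, n_states, n_iter=50000, seed=0):
    """Generic discrete analogue of a score-guided MCMC move: from state x,
    a candidate y != x is proposed with probability proportional to
    exp(-score(y)) (a cheap surrogate for the posterior), and the
    Metropolis-Hastings correction keeps the exact target invariant."""
    rng = np.random.default_rng(seed)
    states = np.arange(n_states)
    weights = np.exp(-np.array([score(s) for s in states]))

    def proposal_probs(x):
        w = weights.copy()
        w[x] = 0.0                      # never propose staying put
        return w / w.sum()

    x = 0
    counts = np.zeros(n_states)
    for _ in range(n_iter):
        q_from_x = proposal_probs(x)
        y = rng.choice(states, p=q_from_x)
        q_from_y = proposal_probs(y)
        # MH ratio includes the asymmetric proposal probabilities
        log_ratio = (log_target(y) - log_target(x)
                     + np.log(q_from_y[x]) - np.log(q_from_x[y]))
        if np.log(rng.uniform()) < log_ratio:
            x = y
        counts[x] += 1
    return counts / n_iter

# toy target proportional to (1 + s) over four states, arbitrary scores
freq = score_guided_mh(lambda s: np.log(1.0 + s), lambda s: 0.5 * s, 4)
```

Even when the score is a poor surrogate, the correction term guarantees the right stationary distribution; the score only affects how quickly the chain finds and revisits high-probability states, which is exactly the convergence/mixing trade-off studied in the paper.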
APA, Harvard, Vancouver, ISO, and other styles
43

Łatuszyński, Krzysztof, and Jeffrey S. Rosenthal. "The Containment Condition and Adapfail Algorithms." Journal of Applied Probability 51, no. 04 (2014): 1189–95. http://dx.doi.org/10.1017/s0021900200012055.

Full text
Abstract:
This short note investigates convergence of adaptive Markov chain Monte Carlo algorithms, i.e. algorithms which modify the Markov chain update probabilities on the fly. We focus on the containment condition introduced by Roberts and Rosenthal (2007). We show that if the containment condition is not satisfied, then the algorithm will perform very poorly. Specifically, with positive probability, the adaptive algorithm will be asymptotically less efficient than any nonadaptive ergodic MCMC algorithm. We call such algorithms AdapFail, and conclude that they should not be used.
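A minimal sketch of the kind of adaptive MCMC algorithm the note studies: a random-walk Metropolis sampler whose proposal scale is tuned on the fly toward a target acceptance rate. The 1/sqrt(t) adaptation schedule gives diminishing adaptation, but, as the note emphasizes, that alone does not establish containment; the schedule and the 44% target rate here are illustrative choices, not prescriptions from the paper.

```python
import numpy as np

def adaptive_metropolis(log_target, x0=0.0, n_iter=20000, seed=0):
    """Toy adaptive random-walk Metropolis: the proposal scale is tuned
    on the fly toward a ~44% acceptance rate.  The adaptation step decays
    as 1/sqrt(t) ("diminishing adaptation"); whether containment also
    holds must be checked separately for a given target."""
    rng = np.random.default_rng(seed)
    x, lp = x0, log_target(x0)
    log_scale = 0.0
    samples = np.empty(n_iter)
    for t in range(1, n_iter + 1):
        prop = x + np.exp(log_scale) * rng.standard_normal()
        lp_prop = log_target(prop)
        accepted = np.log(rng.uniform()) < lp_prop - lp
        if accepted:
            x, lp = prop, lp_prop
        # Robbins-Monro style update of the proposal scale
        log_scale += ((1.0 if accepted else 0.0) - 0.44) / np.sqrt(t)
        samples[t - 1] = x
    return samples, np.exp(log_scale)

# run on a standard normal target
samples, final_scale = adaptive_metropolis(lambda v: -0.5 * v * v)
```

For well-behaved targets like this one the adaptation settles down and the chain samples correctly; the AdapFail phenomenon arises when the adaptation drives the kernel toward ever-worse mixing, which no decay schedule by itself rules out.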
APA, Harvard, Vancouver, ISO, and other styles
44

Łatuszyński, Krzysztof, and Jeffrey S. Rosenthal. "The Containment Condition and Adapfail Algorithms." Journal of Applied Probability 51, no. 4 (2014): 1189–95. http://dx.doi.org/10.1239/jap/1421763335.

Full text
Abstract:
This short note investigates convergence of adaptive Markov chain Monte Carlo algorithms, i.e. algorithms which modify the Markov chain update probabilities on the fly. We focus on the containment condition introduced by Roberts and Rosenthal (2007). We show that if the containment condition is not satisfied, then the algorithm will perform very poorly. Specifically, with positive probability, the adaptive algorithm will be asymptotically less efficient than any nonadaptive ergodic MCMC algorithm. We call such algorithms AdapFail, and conclude that they should not be used.
APA, Harvard, Vancouver, ISO, and other styles
45

Song, Qifan, Mingqi Wu, and Faming Liang. "Weak Convergence Rates of Population Versus Single-Chain Stochastic Approximation MCMC Algorithms." Advances in Applied Probability 46, no. 04 (2014): 1059–83. http://dx.doi.org/10.1017/s0001867800007540.

Full text
Abstract:
In this paper we establish the theory of weak convergence (toward a normal distribution) for both single-chain and population stochastic approximation Markov chain Monte Carlo (MCMC) algorithms (SAMCMC algorithms). Based on the theory, we give an explicit ratio of convergence rates for the population SAMCMC algorithm and the single-chain SAMCMC algorithm. Our results provide a theoretical guarantee that the population SAMCMC algorithms are asymptotically more efficient than the single-chain SAMCMC algorithms when the gain factor sequence decreases more slowly than O(1 / t), where t indexes the number of iterations. This is of interest for practical applications.
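The gain factor sequence central to this result can be illustrated with the classical Robbins-Monro recursion. The root-finding problem and noise below are toy choices, not the SAMCMC setting of the paper, but they show a gain sequence gamma_t = t**(-alpha) with alpha < 1, i.e. one that decreases more slowly than O(1/t).

```python
import numpy as np

def robbins_monro(alpha, n_iter=20000, seed=0):
    """Robbins-Monro recursion theta <- theta + gamma_t * (X_t - theta)
    for the root of h(theta) = E[X] - theta with X ~ N(2, 1), using the
    gain sequence gamma_t = t**(-alpha).  Taking alpha < 1 gives gains
    that decrease more slowly than O(1/t), the regime in which the
    paper's population-versus-single-chain comparison applies."""
    rng = np.random.default_rng(seed)
    theta = 0.0
    for t in range(1, n_iter + 1):
        x = rng.normal(2.0, 1.0)
        theta += t ** (-alpha) * (x - theta)
    return theta

est = robbins_monro(alpha=0.7)
```

With alpha = 1 the recursion reduces to a plain running average; the slower schedules (alpha < 1) are the ones for which averaging over a population of chains, as in the paper, pays off asymptotically.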
APA, Harvard, Vancouver, ISO, and other styles
46

Song, Qifan, Mingqi Wu, and Faming Liang. "Weak Convergence Rates of Population Versus Single-Chain Stochastic Approximation MCMC Algorithms." Advances in Applied Probability 46, no. 4 (2014): 1059–83. http://dx.doi.org/10.1239/aap/1418396243.

Full text
Abstract:
In this paper we establish the theory of weak convergence (toward a normal distribution) for both single-chain and population stochastic approximation Markov chain Monte Carlo (MCMC) algorithms (SAMCMC algorithms). Based on the theory, we give an explicit ratio of convergence rates for the population SAMCMC algorithm and the single-chain SAMCMC algorithm. Our results provide a theoretical guarantee that the population SAMCMC algorithms are asymptotically more efficient than the single-chain SAMCMC algorithms when the gain factor sequence decreases more slowly than O(1 / t), where t indexes the number of iterations. This is of interest for practical applications.
APA, Harvard, Vancouver, ISO, and other styles
47

Zhao, Qian, Yan Zhang, Shichun Shao, Yeqing Sun, and Zhengkui Lin. "Identification of hub genes and biological pathways in hepatocellular carcinoma by integrated bioinformatics analysis." PeerJ 9 (January 19, 2021): e10594. http://dx.doi.org/10.7717/peerj.10594.

Full text
Abstract:
Background Hepatocellular carcinoma (HCC), the main type of liver cancer in humans, is one of the most prevalent and deadly malignancies in the world. The present study aimed to identify hub genes and key biological pathways by integrated bioinformatics analysis. Methods A bioinformatics pipeline based on gene co-expression network (GCN) analysis was built to analyze the gene expression profile of HCC. First, differentially expressed genes (DEGs) were identified and a GCN was constructed with Pearson correlation analysis. Then, gene modules were identified with three different community detection algorithms, and correlation analysis between gene modules and clinical indicators was performed. Moreover, we used the Search Tool for the Retrieval of Interacting Genes (STRING) database to construct a protein-protein interaction (PPI) network of the key gene module, and we identified the hub genes using nine topology analysis algorithms based on this PPI network. Further, we used Oncomine analysis, survival analysis, a GEO data set and a random forest algorithm to verify the important roles of the hub genes in HCC. Lastly, we explored the methylation changes of the hub genes using another GEO data set (GSE73003). Results First, among the expression profiles, 4,130 up-regulated genes and 471 down-regulated genes were identified. Next, the multi-level algorithm, which had the highest modularity, divided the GCN into nine gene modules, and a key gene module (m1) was identified. The biological processes of GO enrichment of m1 mainly included the processes of mitosis and meiosis and the functions of catalytic and exodeoxyribonuclease activity. Besides, these genes were enriched in the cell cycle and mitotic pathway. Furthermore, we identified 11 hub genes, MCM3, TRMT6, AURKA, CDC20, TOP2A, ECT2, TK1, MCM2, FEN1, NCAPD2 and KPNA2, which played key roles in HCC. The results of multiple verification methods indicated that the 11 hub genes had high diagnostic efficiency in distinguishing tumors from normal tissues. Lastly, the methylation changes of the genes CDC20, TOP2A, TK1 and FEN1 in HCC samples were statistically significant (P-value < 0.05). Conclusion MCM3, TRMT6, AURKA, CDC20, TOP2A, ECT2, TK1, MCM2, FEN1, NCAPD2 and KPNA2 could be potential biomarkers or therapeutic targets for HCC. Meanwhile, the metabolic pathway and the cell cycle and mitotic pathway might play vital roles in the progression of HCC.
APA, Harvard, Vancouver, ISO, and other styles
48

MEN, ZHONGXIAN, TONY S. WIRJANTO, and ADAM W. KOLKIEWICZ. "A MULTISCALE STOCHASTIC CONDITIONAL DURATION MODEL." Annals of Financial Economics 11, no. 04 (2016): 1650020. http://dx.doi.org/10.1142/s2010495216500202.

Full text
Abstract:
This paper studies a stochastic conditional duration model running on multiple time scales with the aim of better capturing the dynamics of a duration process of financial transaction data. New Markov chain Monte Carlo (MCMC) algorithms are developed for the model under three distributional assumptions about the innovation of the measurement equation for a two-component model. Simulation results suggest that the proposed model and MCMC method improve in-sample fits and duration forecasts. Most importantly, applications to FIAT and IBM duration datasets indicate the existence of at least two factors (or components) governing the dynamics of the financial duration process.
APA, Harvard, Vancouver, ISO, and other styles
49

Choi, Hee Min, and James P. Hobert. "Analysis of MCMC algorithms for Bayesian linear regression with Laplace errors." Journal of Multivariate Analysis 117 (May 2013): 32–40. http://dx.doi.org/10.1016/j.jmva.2013.02.004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Robert, Christian P., and Kerrie L. Mengersen. "Reparameterisation Issues in Mixture Modelling and their bearing on MCMC algorithms." Computational Statistics & Data Analysis 29, no. 3 (1999): 325–43. http://dx.doi.org/10.1016/s0167-9473(98)00058-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles