Academic literature on the topic 'Particle marginal Metropolis-Hastings sampler'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Particle marginal Metropolis-Hastings sampler.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Particle marginal Metropolis-Hastings sampler"

1

Murray, Lawrence M., Emlyn M. Jones, and John Parslow. "On Disturbance State-Space Models and the Particle Marginal Metropolis-Hastings Sampler." SIAM/ASA Journal on Uncertainty Quantification 1, no. 1 (January 2013): 494–521. http://dx.doi.org/10.1137/130915376.

2

He, Zhangyi, Xiaoyang Dai, Mark Beaumont, and Feng Yu. "Detecting and Quantifying Natural Selection at Two Linked Loci from Time Series Data of Allele Frequencies with Forward-in-Time Simulations." Genetics 216, no. 2 (August 21, 2020): 521–41. http://dx.doi.org/10.1534/genetics.120.303463.

Abstract:
Recent advances in DNA sequencing techniques have made it possible to monitor genomes in great detail over time. This improvement provides an opportunity for us to study natural selection based on time serial samples of genomes while accounting for genetic recombination effect and local linkage information. Such time series genomic data allow for more accurate estimation of population genetic parameters and hypothesis testing on the recent action of natural selection. In this work, we develop a novel Bayesian statistical framework for inferring natural selection at a pair of linked loci by capitalising on the temporal aspect of DNA data with the additional flexibility of modeling the sampled chromosomes that contain unknown alleles. Our approach is built on a hidden Markov model where the underlying process is a two-locus Wright-Fisher diffusion with selection, which enables us to explicitly model genetic recombination and local linkage. The posterior probability distribution for selection coefficients is computed by applying the particle marginal Metropolis-Hastings algorithm, which allows us to efficiently calculate the likelihood. We evaluate the performance of our Bayesian inference procedure through extensive simulations, showing that our approach can deliver accurate estimates of selection coefficients, and the addition of genetic recombination and local linkage brings about significant improvement in the inference of natural selection. We also illustrate the utility of our method on real data with an application to ancient DNA data associated with white spotting patterns in horses.
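The particle marginal Metropolis-Hastings algorithm applied in this entry can be illustrated independently of the Wright-Fisher model. Below is a minimal sketch in Python, assuming a toy linear-Gaussian state-space model in place of the paper's two-locus diffusion: a bootstrap particle filter supplies an unbiased likelihood estimate, which is plugged into an ordinary Metropolis-Hastings accept/reject step.

```python
import numpy as np

def pf_loglik(theta, y, n_particles=200, rng=None):
    """Bootstrap particle filter estimate of log p(y | theta) for the toy
    model x_t = theta * x_{t-1} + w_t, y_t = x_t + v_t with standard normal
    noise (a placeholder for the two-locus Wright-Fisher diffusion)."""
    rng = rng or np.random.default_rng()
    x = rng.normal(size=n_particles)                       # initial particles
    loglik = 0.0
    for yt in y:
        x = theta * x + rng.normal(size=n_particles)       # propagate
        logw = -0.5 * (yt - x) ** 2 - 0.5 * np.log(2 * np.pi)
        c = logw.max()
        w = np.exp(logw - c)
        loglik += c + np.log(w.mean())                     # unbiased increment
        x = x[rng.choice(n_particles, n_particles, p=w / w.sum())]  # resample
    return loglik

def pmmh(y, n_iter=2000, step=0.1, rng=None):
    """Particle marginal Metropolis-Hastings for a scalar theta with a flat
    prior on (-1, 1); the particle filter estimate stands in for the
    intractable likelihood in the acceptance ratio."""
    rng = rng or np.random.default_rng()
    theta = 0.0
    ll = pf_loglik(theta, y, rng=rng)
    chain = []
    for _ in range(n_iter):
        prop = theta + step * rng.normal()
        if -1.0 < prop < 1.0:
            ll_prop = pf_loglik(prop, y, rng=rng)
            if np.log(rng.uniform()) < ll_prop - ll:       # estimated ratio
                theta, ll = prop, ll_prop
        chain.append(theta)
    return np.array(chain)
```

Run on data simulated with theta = 0.7, the chain mean settles near 0.7. Note that the current estimate `ll` is kept fixed between proposals rather than recomputed; this recycling is what makes the chain exact despite the noisy likelihood.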
3

Pinski, Francis J. "A Novel Hybrid Monte Carlo Algorithm for Sampling Path Space." Entropy 23, no. 5 (April 22, 2021): 499. http://dx.doi.org/10.3390/e23050499.

Abstract:
To sample from complex, high-dimensional distributions, one may choose algorithms based on the Hybrid Monte Carlo (HMC) method. HMC-based algorithms generate nonlocal moves alleviating diffusive behavior. Here, I build on an already defined HMC framework, hybrid Monte Carlo on Hilbert spaces (Beskos, et al. Stoch. Proc. Applic. 2011), that provides finite-dimensional approximations of measures π, which have density with respect to a Gaussian measure on an infinite-dimensional Hilbert (path) space. In all HMC algorithms, one has some freedom to choose the mass operator. The novel feature of the algorithm described in this article lies in the choice of this operator. This new choice defines a Markov Chain Monte Carlo (MCMC) method that is well defined on the Hilbert space itself. As before, the algorithm described herein uses an enlarged phase space Π having the target π as a marginal, together with a Hamiltonian flow that preserves Π. In the previous work, the authors explored a method where the phase space π was augmented with Brownian bridges. With this new choice, π is augmented by Ornstein–Uhlenbeck (OU) bridges. The covariance of Brownian bridges grows with its length, which has negative effects on the acceptance rate in the MCMC method. This contrasts with the covariance of OU bridges, which is independent of the path length. The ingredients of the new algorithm include the definition of the mass operator, the equations for the Hamiltonian flow, the (approximate) numerical integration of the evolution equations, and finally, the Metropolis–Hastings acceptance rule. Taken together, these constitute a robust method for sampling the target distribution in an almost dimension-free manner. The behavior of this novel algorithm is demonstrated by computer experiments for a particle moving in two dimensions, between two free-energy basins separated by an entropic barrier.
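For orientation, here is a sketch of a generic finite-dimensional HMC update, not the Hilbert-space construction of the paper, showing where the mass operator (here reduced to a simple diagonal mass matrix) enters the momentum draw, the leapfrog flow, and the Metropolis-Hastings rule.

```python
import numpy as np

def hmc_step(x, log_prob, grad_log_prob, mass_diag, eps=0.1, n_leap=20, rng=None):
    """One generic HMC update in R^d. The diagonal mass matrix `mass_diag`
    stands in for the mass operator discussed in the paper; the infinite-
    dimensional Hilbert-space construction itself is not reproduced here."""
    rng = rng or np.random.default_rng()
    p = rng.normal(size=x.shape) * np.sqrt(mass_diag)      # momentum ~ N(0, M)
    x_new, p_new = x.copy(), p.copy()
    p_new = p_new + 0.5 * eps * grad_log_prob(x_new)       # leapfrog: half kick
    for _ in range(n_leap - 1):
        x_new = x_new + eps * p_new / mass_diag            # drift
        p_new = p_new + eps * grad_log_prob(x_new)         # full kick
    x_new = x_new + eps * p_new / mass_diag
    p_new = p_new + 0.5 * eps * grad_log_prob(x_new)       # closing half kick
    h = lambda q, mom: -log_prob(q) + 0.5 * np.sum(mom ** 2 / mass_diag)
    # Metropolis-Hastings rule on the (approximately conserved) Hamiltonian
    return x_new if np.log(rng.uniform()) < h(x, p) - h(x_new, p_new) else x
```

For a standard normal target, for example: `hmc_step(np.zeros(2), lambda q: -0.5 * q @ q, lambda q: -q, mass_diag=np.ones(2))`.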
4

Vu, Tuyet, Ba-Ngu Vo, and Rob Evans. "A Particle Marginal Metropolis-Hastings Multi-Target Tracker." IEEE Transactions on Signal Processing 62, no. 15 (August 2014): 3953–64. http://dx.doi.org/10.1109/tsp.2014.2329270.

5

Sherlock, Chris, Alexandre H. Thiery, and Anthony Lee. "Pseudo-marginal Metropolis–Hastings sampling using averages of unbiased estimators." Biometrika 104, no. 3 (June 21, 2017): 727–34. http://dx.doi.org/10.1093/biomet/asx031.

Abstract:
We consider a pseudo-marginal Metropolis–Hastings kernel ${\mathbb{P}}_m$ that is constructed using an average of $m$ exchangeable random variables, and an analogous kernel ${\mathbb{P}}_s$ that averages $s<m$ of these same random variables. Using an embedding technique to facilitate comparisons, we provide a lower bound for the asymptotic variance of any ergodic average associated with ${\mathbb{P}}_m$ in terms of the asymptotic variance of the corresponding ergodic average associated with ${\mathbb{P}}_s$. We show that the bound is tight and disprove a conjecture that when the random variables to be averaged are independent, the asymptotic variance under ${\mathbb{P}}_m$ is never less than $s/m$ times the variance under ${\mathbb{P}}_s$. The conjecture does, however, hold for continuous-time Markov chains. These results imply that if the computational cost of the algorithm is proportional to $m$, it is often better to set $m=1$. We provide intuition as to why these findings differ so markedly from recent results for pseudo-marginal kernels employing particle filter approximations. Our results are exemplified through two simulation studies; in the first the computational cost is effectively proportional to $m$ and in the second there is a considerable start-up cost at each iteration.
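The kernel ${\mathbb{P}}_m$ studied here can be sketched with a toy unbiased estimator; everything below (the lognormal noise model, the flat prior, the random-walk proposal) is a placeholder chosen only to make the sketch self-contained.

```python
import numpy as np

def pm_mh(log_est, m, x0, n_iter=5000, step=0.5, rng=None):
    """Pseudo-marginal MH in which the target density at x is estimated by
    averaging m unbiased estimates (the kernel P_m of the paper). The current
    estimate is recycled until the next accepted move."""
    rng = rng or np.random.default_rng()

    def avg_log_est(x):
        logs = np.array([log_est(x, rng) for _ in range(m)])
        c = logs.max()
        return c + np.log(np.exp(logs - c).mean())         # log of the average

    x, lp = x0, avg_log_est(x0)
    chain = []
    for _ in range(n_iter):
        xp = x + step * rng.normal()                       # random-walk proposal
        lpp = avg_log_est(xp)
        if np.log(rng.uniform()) < lpp - lp:               # flat-prior acceptance
            x, lp = xp, lpp
        chain.append(x)
    return np.array(chain)

def noisy_log_density(x, rng, noise_sd=1.0):
    """Toy unbiased estimator of a N(0,1) density: the exact density times
    lognormal noise with mean one."""
    return -0.5 * x ** 2 - 0.5 * np.log(2 * np.pi) + rng.normal(-0.5 * noise_sd ** 2, noise_sd)

chain = pm_mh(noisy_log_density, m=4, x0=0.0)
```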
6

Dolgov, Sergey, Karim Anaya-Izquierdo, Colin Fox, and Robert Scheichl. "Approximation and sampling of multivariate probability distributions in the tensor train decomposition." Statistics and Computing 30, no. 3 (November 2, 2019): 603–25. http://dx.doi.org/10.1007/s11222-019-09910-z.

Abstract:
General multivariate distributions are notoriously expensive to sample from, particularly the high-dimensional posterior distributions in PDE-constrained inverse problems. This paper develops a sampler for arbitrary continuous multivariate distributions that is based on low-rank surrogates in the tensor train format, a methodology that has been exploited for many years for scalable, high-dimensional density function approximation in quantum physics and chemistry. We build upon recent developments of the cross approximation algorithms in linear algebra to construct a tensor train approximation to the target probability density function using a small number of function evaluations. For sufficiently smooth distributions, the storage required for accurate tensor train approximations is moderate, scaling linearly with dimension. In turn, the structure of the tensor train surrogate allows sampling by an efficient conditional distribution method since marginal distributions are computable with linear complexity in dimension. Expected values of non-smooth quantities of interest, with respect to the surrogate distribution, can be estimated using transformed independent uniformly-random seeds that provide Monte Carlo quadrature or transformed points from a quasi-Monte Carlo lattice to give more efficient quasi-Monte Carlo quadrature. Unbiased estimates may be calculated by correcting the transformed random seeds using a Metropolis–Hastings accept/reject step, while the quasi-Monte Carlo quadrature may be corrected either by a control-variate strategy or by importance weighting. We show that the error in the tensor train approximation propagates linearly into the Metropolis–Hastings rejection rate and the integrated autocorrelation time of the resulting Markov chain; thus, the integrated autocorrelation time may be made arbitrarily close to 1, implying that, asymptotic in sample size, the cost per effectively independent sample is one target density evaluation plus the cheap tensor train surrogate proposal that has linear cost with dimension. These methods are demonstrated in three computed examples: fitting failure time of shock absorbers; a PDE-constrained inverse diffusion problem; and sampling from the Rosenbrock distribution. The delayed rejection adaptive Metropolis (DRAM) algorithm is used as a benchmark. In all computed examples, the importance weight-corrected quasi-Monte Carlo quadrature performs best and is more efficient than DRAM by orders of magnitude across a wide range of approximation accuracies and sample sizes. Indeed, all the methods developed here significantly outperform DRAM in all computed examples.
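The Metropolis-Hastings correction step mentioned in this abstract is ordinary independence MH: proposals are drawn from the surrogate and accepted or rejected against the exact target. A sketch, with a plain Gaussian standing in for the tensor-train surrogate:

```python
import numpy as np

def mh_correct(surrogate_draws, log_pi, log_q, rng=None):
    """Independence Metropolis-Hastings correction: draws from the surrogate
    q are accepted or rejected so that the chain targets the exact density
    pi; the rejection rate grows with the surrogate error."""
    rng = rng or np.random.default_rng()
    out = [surrogate_draws[0]]
    for prop in surrogate_draws[1:]:
        cur = out[-1]
        log_ratio = log_pi(prop) - log_pi(cur) + log_q(cur) - log_q(prop)
        out.append(prop if np.log(rng.uniform()) < log_ratio else cur)
    return np.array(out)

# placeholder surrogate: N(0, 1.1^2) draws corrected to the exact N(0, 1) target
rng = np.random.default_rng(0)
draws = rng.normal(scale=1.1, size=5000)
corrected = mh_correct(draws,
                       log_pi=lambda x: -0.5 * x ** 2,
                       log_q=lambda x: -0.5 * (x / 1.1) ** 2)
```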
7

Silva, Edilson Marcelino, Thais Destefani Ribeiro Furtado, Ariana Campos Frühauf, Joel Augusto Muniz, and Tales Jesus Fernandes. "Bayesian approach to the zinc extraction curve of soil with sewage sludge." Acta Scientiarum. Technology 42 (November 29, 2019): e46893. http://dx.doi.org/10.4025/actascitechnol.v42i1.46893.

Abstract:
Zinc uptake is essential for crop development; thus, knowledge about soil zinc availability is fundamental for fertilization in periods of higher crop demand. A nonlinear first-order kinetic model has been employed to evaluate zinc availability. Studies usually employ few observations; however, inference in nonlinear models is only valid for sufficiently large samples. An alternative is the Bayesian method, where inferences are made in terms of probability, which is effective even with small samples. The aim of this study was to use Bayesian methodology to evaluate the fitness of a nonlinear first-order kinetic model to describe zinc extraction from soil with sewage sludge using seven different extraction solutions. The analysed data were obtained from an experiment using a completely randomized design and three replicates. Fifteen zinc extractions were evaluated for each extraction solution. Posterior distributions of a study that evaluated the nonlinear first-order kinetic model were used as prior distributions in the present study. Using the full conditionals, samples of posterior marginal distributions were generated using the Gibbs sampler and Metropolis-Hastings algorithms and implemented in R. The Bayesian method allowed the use of posterior distributions of another study that evaluated the model used as prior distributions for parameters in the present study. The posterior full conditional distributions for the parameters were normal distributions and gamma distributions, respectively. The Bayesian method was efficient for the study of the first-order kinetic model to describe zinc extraction from soil with sewage sludge using seven extraction solutions.
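A Metropolis-within-Gibbs sketch of this setup: conditional on the rate, the amplitude of the first-order kinetic curve has a normal full conditional and the error precision a gamma full conditional, while the rate is updated by a Metropolis-Hastings step. The priors and proposal scale below are placeholders, not the informative priors carried over from the earlier study.

```python
import numpy as np

def gibbs_mh_kinetic(t, y, n_iter=5000, rng=None):
    """Metropolis-within-Gibbs for y_i = A * (1 - exp(-k * t_i)) + e_i with
    e_i ~ N(0, 1/tau). Placeholder priors: N(0, 100^2) on A, Gamma(0.01, 0.01)
    on tau, flat on k; the paper instead uses priors built from an earlier
    study's posteriors."""
    rng = rng or np.random.default_rng()
    A, k, tau = float(y.max()), 0.1, 1.0
    draws = []
    for _ in range(n_iter):
        g = 1.0 - np.exp(-k * t)
        # A | k, tau, y: conjugate normal full conditional
        prec = tau * g @ g + 1.0 / 100.0 ** 2
        A = tau * g @ y / prec + rng.normal() / np.sqrt(prec)
        # tau | A, k, y: gamma full conditional (numpy parameterises by scale)
        resid = y - A * g
        tau = rng.gamma(0.01 + len(y) / 2.0, 1.0 / (0.01 + resid @ resid / 2.0))
        # k | A, tau, y: lognormal random-walk Metropolis-Hastings step
        kp = k * np.exp(0.1 * rng.normal())
        rp = y - A * (1.0 - np.exp(-kp * t))
        log_acc = -0.5 * tau * (rp @ rp - resid @ resid) + np.log(kp / k)
        if np.log(rng.uniform()) < log_acc:   # kp/k corrects for the proposal
            k = kp
        draws.append((A, k, tau))
    return np.array(draws)
```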
8

Mejari, Manas, and Dario Piga. "Maximum-a-Posteriori Estimation of Linear Time-Invariant State-Space Models via Efficient Monte-Carlo Sampling." ASME Letters in Dynamic Systems and Control 2, no. 1 (July 14, 2021). http://dx.doi.org/10.1115/1.4051491.

Abstract:
This article addresses maximum-a-posteriori (MAP) estimation of linear time-invariant state-space (LTI-SS) models. The joint posterior distribution of the model matrices and the unknown state sequence is approximated by using Rao-Blackwellized Monte-Carlo sampling algorithms. Specifically, the conditional distribution of the state sequence given the model parameters is derived analytically, while only the marginal posterior distribution of the model matrices is approximated using a Metropolis-Hastings Markov Chain Monte-Carlo sampler. From the joint distribution, MAP estimates of the unknown model matrices as well as the state sequence are computed. The performance of the proposed algorithm is demonstrated on a numerical example and on a real laboratory benchmark dataset of a hair dryer process.
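The marginalization the authors exploit can be sketched in the scalar case: a Kalman filter integrates out the state sequence exactly, so Metropolis-Hastings needs to explore only the model parameter. This is a simplified illustration of the Rao-Blackwellized idea, not the authors' full sampler.

```python
import numpy as np

def kalman_loglik(a, y, q=1.0, r=1.0):
    """Exact log-likelihood of the scalar model x_t = a x_{t-1} + w_t,
    y_t = x_t + v_t: the Kalman filter integrates the states out analytically."""
    m, p, ll = 0.0, 1.0, 0.0
    for yt in y:
        m, p = a * m, a * a * p + q                        # predict
        s = p + r                                          # innovation variance
        ll += -0.5 * (np.log(2 * np.pi * s) + (yt - m) ** 2 / s)
        gain = p / s
        m, p = m + gain * (yt - m), (1.0 - gain) * p       # update
    return ll

def marginal_mh(y, n_iter=3000, step=0.05, rng=None):
    """Metropolis-Hastings on the transition parameter only (flat prior);
    no state sampling is needed because the likelihood is exact."""
    rng = rng or np.random.default_rng()
    a, ll = 0.0, kalman_loglik(0.0, y)
    chain = []
    for _ in range(n_iter):
        ap = a + step * rng.normal()
        llp = kalman_loglik(ap, y)
        if np.log(rng.uniform()) < llp - ll:
            a, ll = ap, llp
        chain.append(a)
    return np.array(chain)
```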

Dissertations / Theses on the topic "Particle marginal Metropolis-Hastings sampler"

1

Dahlin, Johan. "Accelerating Monte Carlo methods for Bayesian inference in dynamical models." Doctoral thesis, Linköpings universitet, Reglerteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-125992.

Abstract:
Making decisions and predictions from noisy observations are two important and challenging problems in many areas of society. Some examples of applications are recommendation systems for online shopping and streaming services, connecting genes with certain diseases and modelling climate change. In this thesis, we make use of Bayesian statistics to construct probabilistic models given prior information and historical data, which can be used for decision support and predictions. The main obstacle with this approach is that it often results in mathematical problems lacking analytical solutions. To cope with this, we make use of statistical simulation algorithms known as Monte Carlo methods to approximate the intractable solution. These methods enjoy well-understood statistical properties but are often computationally prohibitive to employ. The main contribution of this thesis is the exploration of different strategies for accelerating inference methods based on sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC). That is, strategies for reducing the computational effort while keeping or improving the accuracy. A major part of the thesis is devoted to proposing such strategies for the MCMC method known as the particle Metropolis-Hastings (PMH) algorithm. We investigate two strategies: (i) introducing estimates of the gradient and Hessian of the target to better tailor the algorithm to the problem and (ii) introducing a positive correlation between the point-wise estimates of the target. Furthermore, we propose an algorithm based on the combination of SMC and Gaussian process optimisation, which can provide reasonable estimates of the posterior but with a significant decrease in computational effort compared with PMH. Moreover, we explore the use of sparseness priors for approximate inference in over-parametrised mixed effects models and autoregressive processes. This can potentially be a practical strategy for inference in the big data era. Finally, we propose a general method for increasing the accuracy of the parameter estimates in non-linear state space models by applying a designed input signal.
Popular-science summary (translated from Swedish): Should the Riksbank raise or lower the repo rate at its next meeting in order to reach the inflation target? Which genes are associated with a certain disease? How can Netflix and Spotify know which films and which music I will want next? These three problems are examples of questions where statistical models can be useful for providing guidance and support for decisions. Statistical models combine theoretical knowledge about, for example, the Swedish economic system with historical data to produce forecasts of future events. These forecasts can then be used to evaluate, for example, what would happen to inflation in Sweden if unemployment falls, or how the value of my pension savings changes when the Stockholm stock exchange crashes. Applications like these and many others make statistical models important for many parts of society. One way to develop statistical models is to continuously update a model as more information is collected. This approach is called Bayesian statistics and is particularly useful when one has good prior insight into the model or access to only a small amount of historical data from which to build it. A drawback of Bayesian statistics is that the computations required to update the model with the new information are often very complicated. In such situations, one can instead simulate the outcomes of millions of variants of the model and compare these against the historical observations at hand. One can then average over the variants that gave the best results to arrive at a final model. It can therefore sometimes take days or weeks to develop a model. The problem becomes particularly acute when using more advanced models that could give better forecasts but take too long to build. In this thesis, we use a number of different strategies to facilitate or improve these simulations. For example, we propose taking more insights about the system into account and thereby reducing the number of model variants that need to be examined. We can thus rule out certain models from the start, because we have a good idea of roughly what a good model should look like. We can also modify the simulation so that it moves more easily between different types of models, so that the space of all possible models is explored more efficiently. We propose a number of combinations and modifications of existing methods to speed up the fitting of the model to the observations. We show that the computation time can in some cases be reduced from a few days to about an hour. Hopefully, this will in the future make it possible to use more advanced models in practice, which in turn will lead to better forecasts and decisions.
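Strategy (ii) above, positively correlating the point-wise likelihood estimates, is typically implemented by recycling the auxiliary random numbers of the particle filter through a Crank-Nicolson move. A sketch, with the likelihood estimator left abstract:

```python
import numpy as np

def correlated_pmh(loglik_from_seed, theta0, u_dim, n_iter=2000,
                   step=0.1, rho=0.99, rng=None):
    """Correlated particle Metropolis-Hastings sketch. The auxiliary
    randomness u that drives the likelihood estimator is refreshed by a
    Crank-Nicolson move, so successive estimates are positively correlated
    and their noise partially cancels in the acceptance ratio. The function
    loglik_from_seed(theta, u) must deterministically map a parameter and a
    standard-normal seed vector to a log-likelihood estimate; it is left
    abstract here."""
    rng = rng or np.random.default_rng()
    theta, u = theta0, rng.normal(size=u_dim)
    ll = loglik_from_seed(theta, u)
    chain = []
    for _ in range(n_iter):
        theta_p = theta + step * rng.normal()
        # Crank-Nicolson refresh: reversible w.r.t. N(0, I), so the seed
        # terms cancel from the acceptance ratio
        u_p = rho * u + np.sqrt(1.0 - rho ** 2) * rng.normal(size=u_dim)
        ll_p = loglik_from_seed(theta_p, u_p)
        if np.log(rng.uniform()) < ll_p - ll:              # flat prior on theta
            theta, u, ll = theta_p, u_p, ll_p
        chain.append(theta)
    return np.array(chain)
```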
2

De Freitas, Allan. "A Monte-Carlo approach to dominant scatterer tracking of a single extended target in high range-resolution radar." Diss., 2013. http://hdl.handle.net/2263/33372.

Abstract:
In high range-resolution (HRR) radar systems, the returns from a single target may fall in multiple adjacent range bins which individually vary in amplitude. A target following this representation is commonly referred to as an extended target and results in more information about the target. However, extracting this information from the radar returns is challenging due to several complexities. These complexities include the single dimensional nature of the radar measurements, complexities associated with the scattering of electromagnetic waves, and complex environments in which radar systems are required to operate. There are several applications of HRR radar systems which extract target information with varying levels of success. A commonly used application is that of imaging referred to as synthetic aperture radar (SAR) and inverse SAR (ISAR) imaging. These techniques combine multiple single dimension measurements in order to obtain a single two dimensional image. These techniques rely on rotational motion between the target and the radar occurring during the collection of the single dimension measurements. In the case of ISAR, the radar is stationary while motion is induced by the target. There are several difficulties associated with the unknown motion of the target when standard Doppler processing techniques are used to synthesise ISAR images. In this dissertation, a non-standard Doppler approach, based on Bayesian inference techniques, was considered to address the difficulties. The target and observations were modelled with a non-linear state space model. Several different Bayesian techniques were implemented to infer the hidden states of the model, which coincide with the unknown characteristics of the target. A simulation platform was designed in order to analyse the performance of the implemented techniques. The implemented techniques were capable of successfully tracking a randomly generated target in a controlled environment. The influence of varying several parameters, related to the characteristics of the target and the implemented techniques, was explored. Finally, a comparison was made between standard Doppler processing and the Bayesian methods proposed.
Dissertation (MEng), Department of Electrical, Electronic and Computer Engineering, University of Pretoria, 2013.
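The building block behind the Bayesian techniques compared in this dissertation is the bootstrap particle filter. A generic sketch, with placeholder dynamics rather than the HRR radar model:

```python
import numpy as np

def bootstrap_pf(y, f, log_g, n_particles=500, rng=None):
    """Generic bootstrap particle filter: propagate particles through the
    transition f, weight them by the observation log-density log_g, resample,
    and return the filtered posterior mean of the hidden state."""
    rng = rng or np.random.default_rng()
    x = rng.normal(size=n_particles)                       # initial state draw
    means = []
    for yt in y:
        x = f(x, rng)                                      # propagate
        logw = log_g(yt, x)                                # weight
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(w @ x)
        x = x[rng.choice(n_particles, n_particles, p=w)]   # resample
    return np.array(means)

# placeholder model, not the HRR radar likelihood: slowly drifting state
# observed in Gaussian noise
f = lambda x, rng: x + 0.1 * rng.normal(size=x.shape)
log_g = lambda yt, x: -0.5 * (yt - x) ** 2
```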