To see the other types of publications on this topic, follow the link: Approximate posterior distribution.

Journal articles on the topic 'Approximate posterior distribution'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Approximate posterior distribution.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Asensio Ramos, A., C. J. Díaz Baso, and O. Kochukhov. "Approximate Bayesian neural Doppler imaging." Astronomy & Astrophysics 658 (February 2022): A162. http://dx.doi.org/10.1051/0004-6361/202142027.

Abstract:
Aims. The non-uniform surface temperature distribution of rotating active stars is routinely mapped with the Doppler imaging technique. Inhomogeneities on the surface produce features in high-resolution spectroscopic observations that shift in wavelength because of the Doppler effect, depending on their position on the visible hemisphere. The inversion problem has been systematically solved using maximum a posteriori regularized methods assuming smoothness or maximum entropy. Our aim in this work is to solve the full Bayesian inference problem by providing access to the posterior distribution of the stellar surface temperature that is compatible with the observations. Methods. We use amortized neural posterior estimation to produce a model that approximates the high-dimensional posterior distribution for spectroscopic observations of selected spectral ranges sampled at arbitrary rotation phases. The posterior distribution is approximated with conditional normalizing flows, which are flexible, tractable, and easy-to-sample approximations to arbitrary distributions. When conditioned on the spectroscopic observations, these normalizing flows provide a very efficient way of obtaining samples from the posterior distribution. The conditioning on observations is achieved through the use of Transformer encoders, which can deal with arbitrary wavelength sampling and rotation phases. Results. Our model can produce thousands of posterior samples per second, each one accompanied by an estimation of the log-probability. Our exhaustive validation of the model for very high-signal-to-noise observations shows that it correctly approximates the posterior, albeit with some overestimation of the broadening. We apply the model to the moderately fast rotator II Peg, producing the first Bayesian map of its temperature inhomogeneities. We conclude that conditional normalizing flows are a very promising tool for carrying out approximate Bayesian inference in more complex problems in stellar physics, such as constraining the magnetic properties using polarimetry.
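To make the flow construction concrete, here is a minimal sketch (not the authors' code) of a single conditional affine normalizing flow in numpy: a toy linear "encoder" maps the observation to a mean and log-scale, giving cheap samples together with exact log-probabilities, which is the property the abstract highlights. The weights `W_mu` and `W_s` are hypothetical stand-ins for the paper's Transformer encoder.

```python
import numpy as np

rng = np.random.default_rng(0)
obs_dim, lat_dim = 8, 3  # toy sizes; the real problem is much larger

# Hypothetical linear "encoder" standing in for the Transformer conditioning.
W_mu = rng.normal(size=(lat_dim, obs_dim)) * 0.1
W_s = rng.normal(size=(lat_dim, obs_dim)) * 0.01

def sample_posterior(y, n):
    """Draw n samples x = mu(y) + exp(s(y)) * z with z ~ N(0, I),
    returning each sample's log-density via the change of variables."""
    mu, log_s = W_mu @ y, W_s @ y
    z = rng.normal(size=(n, lat_dim))
    x = mu + np.exp(log_s) * z
    # log q(x | y) = log N(z; 0, I) - sum(log_s)   (affine Jacobian)
    log_q = -0.5 * (z**2).sum(1) - 0.5 * lat_dim * np.log(2 * np.pi) - log_s.sum()
    return x, log_q

y = rng.normal(size=obs_dim)  # stand-in for a spectroscopic observation
samples, log_q = sample_posterior(y, 1000)
```

A practical implementation would stack many such conditioned layers and train the encoder by maximizing the flow's likelihood on simulated (temperature map, spectrum) pairs, which is what "amortized" refers to here.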
2

Karabatsos, George. "Copula Approximate Bayesian Computation Using Distribution Random Forests." Stats 7, no. 3 (2024): 1002–50. http://dx.doi.org/10.3390/stats7030061.

Abstract:
Ongoing modern computational advancements continue to make it easier to collect increasingly large and complex datasets, which can often only be realistically analyzed using models defined by intractable likelihood functions. This Stats invited feature article introduces and provides an extensive simulation study of a new approximate Bayesian computation (ABC) framework for estimating the posterior distribution and the maximum likelihood estimate (MLE) of the parameters of models defined by intractable likelihoods, which unifies and extends previous ABC methods that were proposed separately. This framework, copulaABCdrf, aims to accurately estimate and describe possibly skewed, high-dimensional posterior distributions by a novel multivariate copula-based meta-t distribution built on univariate marginal posterior distributions that can be accurately estimated by distribution random forests (drf), while performing automatic selection of summary statistics (covariates) based on robustly estimated copula dependence parameters. The copulaABCdrf framework also provides a novel multivariate mode estimator to perform MLE and posterior mode estimation and an optional step to perform model selection from a given set of models using posterior probabilities estimated by drf. The posterior distribution estimation accuracy of the ABC framework is illustrated and compared with previous standard ABC methods through several simulation studies involving low- and high-dimensional models with computable posterior distributions, which are either unimodal, skewed, or multimodal, and exponential random graph and mechanistic network models, each defined by an intractable likelihood from which it is costly to simulate large network datasets. This paper also proposes and studies a new solution to the simulation cost problem in ABC, involving the posterior estimation of parameters from simulated datasets that are smaller than the potentially large dataset being analyzed. This proposal is motivated by the fact that, for many models defined by intractable likelihoods, such as network models applied to massive networks, the repeated simulation of large datasets (networks) for posterior-based parameter estimation can be too computationally costly and can vastly slow down or prohibit the use of standard ABC methods. The copulaABCdrf framework and standard ABC methods are further illustrated through analyses of large real-life networks of sizes ranging between 28,000 and 65.6 million nodes (between 3 million and 1.8 billion edges), including a large multilayer network with weighted directed edges. The results of the simulation studies show that, in settings where the true posterior distribution is not highly multimodal, copulaABCdrf usually produced point estimates from the posterior distribution of low-dimensional parametric models similar to those of previous ABC methods, but the copula-based method can produce more accurate estimates from the posterior distribution for high-dimensional models and, in both dimensionality cases, usually produced more accurate estimates of the univariate marginal posterior distributions of parameters. Posterior estimation accuracy was also usually improved when the important summary statistics were pre-selected using drf, compared with ABC employing no pre-selection of the subset of important summaries. For all ABC methods studied, accurate estimation of a highly multimodal posterior distribution was challenging. In light of the results of all the simulation studies, this article concludes by discussing how the copulaABCdrf framework can be improved for future research.
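The meta-t construction can be sketched in a few lines, assuming the univariate marginal posterior draws are already in hand (random draws stand in for drf output below) and using a toy copula correlation matrix; this illustrates the copula step only, not the copulaABCdrf pipeline.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Stand-ins for drf output: univariate marginal posterior draws per parameter.
marg = [rng.gamma(2.0, 1.0, 5000), rng.normal(0.0, 1.0, 5000)]
R = np.array([[1.0, 0.6], [0.6, 1.0]])  # toy copula correlation
nu = 5.0                                # t-copula degrees of freedom

def sample_meta_t(n):
    """Joint posterior draws: t-copula dependence plus given marginals."""
    L = np.linalg.cholesky(R)
    z = rng.normal(size=(n, 2)) @ L.T
    w = rng.chisquare(nu, size=(n, 1)) / nu
    t = z / np.sqrt(w)                  # correlated multivariate t draws
    u = stats.t.cdf(t, df=nu)           # map to the uniform copula scale
    # Push each uniform through the empirical quantile function of its marginal.
    return np.column_stack([np.quantile(m, u[:, j]) for j, m in enumerate(marg)])

joint = sample_meta_t(2000)
```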
3

Posselt, Derek J., Daniel Hodyss, and Craig H. Bishop. "Errors in Ensemble Kalman Smoother Estimates of Cloud Microphysical Parameters." Monthly Weather Review 142, no. 4 (2014): 1631–54. http://dx.doi.org/10.1175/mwr-d-13-00290.1.

Abstract:
If forecast or observation error distributions are non-Gaussian, the true posterior mean and covariance depend on the distribution of observation errors and the observed values. The posterior distribution of analysis errors obtained from ensemble Kalman filters and smoothers is independent of observed values. Hence, the error in ensemble Kalman smoother (EnKS) state estimates is closely linked to the sensitivity of the true posterior to observed values. Here a Markov chain Monte Carlo (MCMC) algorithm is used to document the dependence of the errors in EnKS-based estimates of cloud microphysical parameters on observed values. It is shown that EnKS analysis distributions are grossly inaccurate for nonnegative microphysical parameters when parameter values are close to zero. Furthermore, numerical analysis is presented that shows that, by design, the posterior distributions given by EnKS and even nonlinear extensions of these smoothers approximate the average of all possible posterior analysis distributions associated with all possible observations given the prior. Multiple runs of the MCMC are made to approximate this distribution. This empirically derived average of Bayesian posterior analysis errors is shown to be qualitatively similar to the EnKS posterior. In this way, it is demonstrated that, in the presence of nonlinearity, EnKS algorithms do not estimate the true posterior error distribution given the specific values of the observations. Instead, they produce an error distribution that is consistent with an average of the true posterior variance, weighted by the probability of obtaining each possible observation. This seemingly subtle distinction gives rise to fundamental differences between the approximate EnKS posterior and the true Bayesian posterior distribution.
4

Lele, Subhash R., C. George Glen, and José Miguel Ponciano. "Practical Consequences of the Bias in the Laplace Approximation to Marginal Likelihood for Hierarchical Models." Entropy 27, no. 3 (2025): 289. https://doi.org/10.3390/e27030289.

Abstract:
Due to the high-dimensional integration over latent variables, computing marginal likelihood and posterior distributions for the parameters of a general hierarchical model is a difficult task. Markov chain Monte Carlo (MCMC) algorithms are commonly used to approximate the posterior distributions. These algorithms, though effective, are computationally intensive and can be slow for large, complex models. As an alternative to the MCMC approach, the Laplace approximation (LA) has been successfully used to obtain fast and accurate approximations to the posterior mean and other derived quantities related to the posterior distribution. In the last couple of decades, LA has also been used to approximate the marginal likelihood function and the posterior distribution. In this paper, we show that the bias in the Laplace approximation to the marginal likelihood has substantial practical consequences.
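For intuition, the sketch below applies the Laplace approximation to the marginal likelihood of a single toy hierarchical cell (a hypothetical Poisson count with a Gaussian latent log-rate, our choice, not the paper's models) and compares it with numerical integration; the gap between the two numbers is exactly the kind of bias the paper studies.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.integrate import quad
from scipy.special import gammaln

# Toy cell: y | b ~ Poisson(exp(b)), latent b ~ N(mu, 1).
y, mu = 3, 0.0

def neg_log_joint(b):
    return -(y * b - np.exp(b) - gammaln(y + 1)
             - 0.5 * (b - mu)**2 - 0.5 * np.log(2 * np.pi))

# Laplace: expand the integrand around its mode; curvature sets the width.
opt = minimize_scalar(neg_log_joint)
b_hat = opt.x
h = np.exp(b_hat) + 1.0  # negative second derivative of the log-joint
log_ml_laplace = -opt.fun + 0.5 * np.log(2 * np.pi / h)

log_ml_exact = np.log(quad(lambda b: np.exp(-neg_log_joint(b)), -10, 10)[0])
print(log_ml_laplace, log_ml_exact)  # the difference is the Laplace bias
```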
5

Burr, Tom, and Alexei Skurikhin. "Selecting Summary Statistics in Approximate Bayesian Computation for Calibrating Stochastic Models." BioMed Research International 2013 (2013): 1–10. http://dx.doi.org/10.1155/2013/210646.

Abstract:
Approximate Bayesian computation (ABC) is an approach for using measurement data to calibrate stochastic computer models, which are common in biology applications. ABC is becoming the “go-to” option when the data and/or parameter dimension is large because it relies on user-chosen summary statistics rather than the full data and is therefore computationally feasible. One technical challenge with ABC is that the quality of the approximation to the posterior distribution of model parameters depends on the user-chosen summary statistics. In this paper, the user requirement to choose effective summary statistics in order to accurately estimate the posterior distribution of model parameters is investigated and illustrated by example, using a model and corresponding real data of mitochondrial DNA population dynamics. We show that for some choices of summary statistics, the posterior distribution of model parameters is closely approximated and for other choices of summary statistics, the posterior distribution is not closely approximated. A strategy to choose effective summary statistics is suggested in cases where the stochastic computer model can be run at many trial parameter settings, as in the example.
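The sensitivity to the chosen summary is easy to reproduce with a toy rejection-ABC sketch (ours, not the paper's mitochondrial DNA model): with a sufficient summary the ABC posterior for the mean is tight, while a poor summary leaves it much wider.

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(1.0, 1.0, 100)  # "observed" data with true mean 1

def abc(summary, n_sim=50_000, q=0.002):
    """Rejection ABC: keep the thetas whose simulated summary is closest."""
    theta = rng.uniform(-5, 5, n_sim)                     # prior draws
    sims = rng.normal(theta[:, None], 1.0, (n_sim, 100))  # model runs
    dist = np.abs(summary(sims, axis=1) - summary(data))
    return theta[dist <= np.quantile(dist, q)]            # approx. posterior

post_good = abc(np.mean)  # sufficient summary: concentrated near 1
post_poor = abc(np.max)   # weak summary: much more diffuse
print(post_good.std(), post_poor.std())
```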
6

MacKay, David J. C. "Comparison of Approximate Methods for Handling Hyperparameters." Neural Computation 11, no. 5 (1999): 1035–68. http://dx.doi.org/10.1162/089976699300016331.

Abstract:
I examine two approximate methods for computational implementation of Bayesian hierarchical models, that is, models that include unknown hyperparameters such as regularization constants and noise levels. In the evidence framework, the model parameters are integrated over, and the resulting evidence is maximized over the hyperparameters. The optimized hyperparameters are used to define a gaussian approximation to the posterior distribution. In the alternative MAP method, the true posterior probability is found by integrating over the hyperparameters. The true posterior is then maximized over the model parameters, and a gaussian approximation is made. The similarities of the two approaches and their relative merits are discussed, and comparisons are made with the ideal hierarchical Bayesian solution. In moderately ill-posed problems, integration over hyperparameters yields a probability distribution with a skew peak, which causes significant biases to arise in the MAP method. In contrast, the evidence framework is shown to introduce negligible predictive error under straightforward conditions. General lessons are drawn concerning inference in many dimensions.
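As a concrete instance of the evidence framework, here is a standard sketch of the evidence-approximation updates for Bayesian ridge regression (the textbook re-estimation formulas, not MacKay's own code): the weights are integrated out analytically, and the hyperparameters alpha and beta are re-estimated until the evidence is maximized.

```python
import numpy as np

rng = np.random.default_rng(3)
N, D = 50, 5
X = rng.normal(size=(N, D))
y = X @ rng.normal(size=D) + rng.normal(0, 0.5, N)

alpha, beta = 1.0, 1.0  # prior precision (weight decay) and noise precision
for _ in range(100):
    A = alpha * np.eye(D) + beta * X.T @ X  # posterior precision of w
    m = beta * np.linalg.solve(A, X.T @ y)  # posterior mean of w
    lam = np.linalg.eigvalsh(beta * X.T @ X)
    gamma = np.sum(lam / (lam + alpha))     # effective number of parameters
    alpha = gamma / (m @ m)                 # evidence re-estimation updates
    beta = (N - gamma) / np.sum((y - X @ m)**2)

# N(m, inv(A)) at the optimized hyperparameters is the gaussian
# approximation to the posterior described in the abstract.
print(alpha, beta)
```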
7

Chi, Jinjin, Zhichao Zhang, Zhiyao Yang, Jihong Ouyang, and Hongbin Pei. "Generalized Variational Inference via Optimal Transport." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 10 (2024): 11534–42. http://dx.doi.org/10.1609/aaai.v38i10.29035.

Abstract:
Variational Inference (VI) has gained popularity as a flexible approximate inference scheme for computing posterior distributions in Bayesian models. Original VI methods use Kullback-Leibler (KL) divergence to construct variational objectives. However, KL divergence has zero-forcing behavior and is completely agnostic to the metric of the underlying data distribution, resulting in poor approximations. To alleviate this issue, we propose a new variational objective that uses Optimal Transport (OT) distance, a metric-aware divergence, to measure the difference between approximate posteriors and priors. The superior performance of OT distance enables us to learn more accurate approximations. We further enhance the objective by gradually including the OT term using a hyperparameter λ for over-parameterized models. We develop a variational inference method with OT (VOT), which presents a gradient-based black-box framework for solving Bayesian models, even when the density function of the approximate distribution is not available. We provide a consistency analysis of the approximate posteriors and demonstrate the practical effectiveness on Bayesian neural networks and variational autoencoders.
8

Dean, Thomas A., Sumeetpal S. Singh, and Ajay Jasra. "Asymptotic behaviour of the posterior distribution in approximate Bayesian computation." Stochastic Analysis and Applications 39, no. 5 (2021): 944–79. http://dx.doi.org/10.1080/07362994.2020.1859386.

9

Ba, Yuming, Jana de Wiljes, Dean S. Oliver, and Sebastian Reich. "Randomized maximum likelihood based posterior sampling." Computational Geosciences 26, no. 1 (2021): 217–39. http://dx.doi.org/10.1007/s10596-021-10100-y.

Abstract:
Minimization of a stochastic cost function is commonly used for approximate sampling in high-dimensional Bayesian inverse problems with Gaussian prior distributions and multimodal posterior distributions. The density of the samples generated by minimization is not the desired target density, unless the observation operator is linear, but the distribution of samples is useful as a proposal density for importance sampling or for Markov chain Monte Carlo methods. In this paper, we focus on applications to sampling from multimodal posterior distributions in high dimensions. We first show that sampling from multimodal distributions is improved by computing all critical points instead of only minimizers of the objective function. For applications to high-dimensional geoscience inverse problems, we demonstrate an efficient approximate weighting that uses a low-rank Gauss-Newton approximation of the determinant of the Jacobian. The method is applied to two toy problems with known posterior distributions and a Darcy flow problem with multiple modes in the posterior.
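The core randomized-maximum-likelihood move is easy to show on a linear-Gaussian toy problem (our sketch; in this linear case minimization even samples the posterior exactly, whereas in the paper's nonlinear setting the minimizers only supply weighted proposals).

```python
import numpy as np

rng = np.random.default_rng(4)
G = np.array([[1.0, 0.5], [0.0, 1.0], [1.0, -1.0]])  # linear forward operator
sig_d, sig_p = 0.3, 1.0
y = G @ np.array([1.0, -0.5]) + rng.normal(0, sig_d, 3)

def rml_sample():
    """One draw: perturb the prior mean and the data, then minimize the
    stochastic quadratic cost; exact for linear G, a proposal otherwise."""
    x_pr = rng.normal(0, sig_p, 2)                 # prior draw
    y_pert = y + rng.normal(0, sig_d, 3)           # perturbed observations
    A = G.T @ G / sig_d**2 + np.eye(2) / sig_p**2  # Hessian of the cost
    b = G.T @ y_pert / sig_d**2 + x_pr / sig_p**2
    return np.linalg.solve(A, b)                   # the minimizer

samples = np.array([rml_sample() for _ in range(2000)])
print(samples.mean(0), np.cov(samples.T))  # matches the Gaussian posterior
```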
10

Zhang, Jinwei, Hang Zhang, Mert Sabuncu, Pascal Spincemaille, Thanh Nguyen, and Yi Wang. "Probabilistic dipole inversion for adaptive quantitative susceptibility mapping." Machine Learning for Biomedical Imaging 1, MIDL 2020 (2021): 1–19. http://dx.doi.org/10.59275/j.melba.2021-bbf2.

Abstract:
A learning-based posterior distribution estimation method, Probabilistic Dipole Inversion (PDI), is proposed to solve the quantitative susceptibility mapping (QSM) inverse problem in MRI with uncertainty estimation. In PDI, a deep convolutional neural network (CNN) is used to represent the multivariate Gaussian distribution as the approximate posterior distribution of susceptibility given the input measured field. The CNN is first trained on healthy subjects via posterior density estimation, where the training dataset contains samples from the true posterior distribution. Domain adaptations are then deployed on patient datasets with new pathologies not included in pre-training, where PDI updates the pre-trained CNN’s weights in an unsupervised fashion by minimizing the Kullback-Leibler divergence between the approximate posterior distribution represented by the CNN and the true posterior distribution obtained from the likelihood of a known physical model and a pre-defined prior distribution. Based on our experiments, PDI provides additional uncertainty estimation compared to the conventional MAP approach, while addressing the potential issue of the pre-trained CNN when the test data deviate from the training data. Our code is available at https://github.com/Jinwei1209/Bayesian_QSM.
11

Pandey, Ranjita. "Posterior Analysis of State Space Model with Spherical Symmetricity." Journal of Probability and Statistics 2015 (2015): 1–7. http://dx.doi.org/10.1155/2015/612024.

Abstract:
The present work investigates a state space model with nonnormal disturbances when the deviation from normality has been observed only with respect to kurtosis and the distribution of disturbances continues to follow a symmetric family of distributions. A spherically symmetric distribution is used to approximate the behavior of symmetric nonnormal disturbances for discrete time series. The conditional posterior densities of the involved parameters are derived, which are further utilized in a Gibbs sampler scheme for estimating the marginal posterior densities. The state space model with disturbances following a multivariate t-distribution, which is a particular case of the spherically symmetric distribution, is discussed.
12

Percival, Will J., Oliver Friedrich, Elena Sellentin, and Alan Heavens. "Matching Bayesian and frequentist coverage probabilities when using an approximate data covariance matrix." Monthly Notices of the Royal Astronomical Society 510, no. 3 (2021): 3207–21. http://dx.doi.org/10.1093/mnras/stab3540.

Abstract:
Observational astrophysics consists of making inferences about the Universe by comparing data and models. The credible intervals placed on model parameters are often as important as the maximum a posteriori probability values, as the intervals indicate concordance or discordance between models and with measurements from other data. Intermediate statistics (e.g. the power spectrum) are usually measured and inferences are made by fitting models to these rather than the raw data, assuming that the likelihood for these statistics has multivariate Gaussian form. The covariance matrix used to calculate the likelihood is often estimated from simulations, such that it is itself a random variable. This is a standard problem in Bayesian statistics, which requires a prior to be placed on the true model parameters and covariance matrix, influencing the joint posterior distribution. As an alternative to the commonly used independence Jeffreys prior, we introduce a prior that leads to a posterior that has approximately frequentist matching coverage. This is achieved by matching the covariance of the posterior to that of the distribution of true values of the parameters around the maximum likelihood values in repeated trials, under certain assumptions. Using this prior, credible intervals derived from a Bayesian analysis can be interpreted approximately as confidence intervals, containing the truth a certain proportion of the time for repeated trials. Linking frequentist and Bayesian approaches that have previously appeared in the astronomical literature, this offers a consistent and conservative approach for credible intervals quoted on model parameters for problems where the covariance matrix is itself an estimate.
13

Zoubeidi, Toufik. "Asymptotic approximations to the Bayes posterior risk." Journal of Applied Mathematics and Stochastic Analysis 3, no. 2 (1990): 99–116. http://dx.doi.org/10.1155/s1048953390000090.

Abstract:
Suppose that, given ω = (ω₁, ω₂) ∈ ℝ², X₁, X₂, … and Y₁, Y₂, … are independent random variables whose respective distribution functions Gω₁ and Gω₂ belong to a one-parameter exponential family of distributions. We derive approximations to the posterior probabilities of ω lying in closed convex subsets of the parameter space under a general prior density. Using this, we then approximate the Bayes posterior risk for testing the hypotheses H₀: ω ∈ Ω₁ versus H₁: ω ∈ Ω₂ using a zero-one loss function, where Ω₁ and Ω₂ are disjoint closed convex subsets of the parameter space.
14

Turkkan, N., and T. Pham-Gia. "Exact Bayesian Estimation of System Reliability with Potential Misclassifications in Sampling." International Journal of Reliability, Quality and Safety Engineering 11, no. 3 (2004): 223–41. http://dx.doi.org/10.1142/s0218539304001488.

Abstract:
We provide the exact expression of the reliability of a system under a Bayesian approach, using beta distributions as both native and induced priors at the system level, and allowing uncertainties in sampling, expressed in the form of misclassifications, or noises, that can affect the final posterior distribution. Exact 100(1-α)% highest posterior density credible intervals for system reliability are computed, and comparisons are made with results from approximate methods proposed in the literature.
15

De Santis, Fulvio, and Stefania Gubbiotti. "Sample Size Requirements for Calibrated Approximate Credible Intervals for Proportions in Clinical Trials." International Journal of Environmental Research and Public Health 18, no. 2 (2021): 595. http://dx.doi.org/10.3390/ijerph18020595.

Abstract:
In Bayesian analysis of clinical trials data, credible intervals are widely used for inference on unknown parameters of interest, such as treatment effects or differences in treatment effects. Highest Posterior Density (HPD) sets are often used because they guarantee the shortest length. In most standard problems, closed-form expressions for exact HPD intervals do not exist, but they are available for intervals based on the normal approximation of the posterior distribution. For small sample sizes, approximate intervals may not be calibrated in terms of posterior probability, but for increasing sample sizes their posterior probability tends to the correct credible level and they become closer and closer to exact sets. The article proposes a predictive analysis to select appropriate sample sizes needed to have approximate intervals calibrated at a pre-specified level. Examples are given for interval estimation of proportions and log-odds.
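A small sketch of the calibration idea for a single proportion, under a uniform Beta prior (our toy choice): build the normal-approximation interval, then evaluate its exact posterior probability under the Beta posterior and watch it approach the nominal level as n grows.

```python
import numpy as np
from scipy import stats

def calibration(n, x, level=0.95):
    """Exact posterior probability covered by the approximate interval."""
    a, b = 1 + x, 1 + n - x                          # Beta posterior parameters
    p_hat = a / (a + b)                              # posterior mean
    se = np.sqrt(p_hat * (1 - p_hat) / (a + b + 1))  # posterior std. dev.
    z = stats.norm.ppf(0.5 + level / 2)
    lo, hi = p_hat - z * se, p_hat + z * se          # normal-approx. interval
    return stats.beta.cdf(hi, a, b) - stats.beta.cdf(lo, a, b)

for n in (10, 50, 200, 1000):
    print(n, round(calibration(n, x=n // 5), 4))     # tends to 0.95 with n
```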
16

Shui, Yongtao, Xiaogang Wang, Wutao Qin, Yu Wang, Baojun Pang, and Naigang Cui. "A Novel Robust Student’s t-Based Cubature Information Filter with Heavy-Tailed Noises." International Journal of Aerospace Engineering 2020 (July 17, 2020): 1–11. http://dx.doi.org/10.1155/2020/7075037.

Abstract:
In this paper, a novel robust Student’s t-based cubature information filter is proposed for a nonlinear multisensor system with heavy-tailed process and measurement noises. First, the predictive probability density function (PDF) and the likelihood PDF are approximated as two different Student’s t distributions. To avoid the process uncertainty induced by the heavy-tailed process noise, the scale matrix of the predictive PDF is modeled as an inverse Wishart distribution and estimated dynamically. Then, the predictive PDF and the likelihood PDF are transformed into a hierarchical Gaussian form to obtain the approximate solution of the posterior PDF. Based on the variational Bayesian approximation method, the posterior PDF is approximated iteratively by minimizing the Kullback-Leibler divergence function. Based on the posterior PDF of the auxiliary parameters, the predicted covariance and measurement noise covariance are modified, and the information matrix and information state are then updated by summing the local information contributions, which are computed based on the modified covariance. Finally, the state, scale matrix, and posterior densities are estimated after fixed-point iterations. Simulation results for a target tracking example demonstrate the superiority of the proposed filter.
17

Anumasa, Srinivas, and P. K. Srijith. "Latent Time Neural Ordinary Differential Equations." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (2022): 6010–18. http://dx.doi.org/10.1609/aaai.v36i6.20547.

Abstract:
Neural ordinary differential equations (NODE) have been proposed as a continuous depth generalization to popular deep learning models such as Residual networks (ResNets). They provide parameter efficiency and automate the model selection process in deep learning models to some extent. However, they lack the much-required uncertainty modelling and robustness capabilities which are crucial for their use in several real-world applications such as autonomous driving and healthcare. We propose a novel and unique approach to model uncertainty in NODE by considering a distribution over the end-time T of the ODE solver. The proposed approach, latent time NODE (LT-NODE), treats T as a latent variable and applies Bayesian learning to obtain a posterior distribution over T from the data. In particular, we use variational inference to learn an approximate posterior and the model parameters. Prediction is done by considering the NODE representations from different samples of the posterior and can be done efficiently using a single forward pass. As T implicitly defines the depth of a NODE, the posterior distribution over T would also help in model selection in NODE. We also propose adaptive latent time NODE (ALT-NODE), which allows each data point to have a distinct posterior distribution over end-times. ALT-NODE uses amortized variational inference to learn an approximate posterior using inference networks. We demonstrate the effectiveness of the proposed approaches in modelling uncertainty and robustness through experiments on synthetic and several real-world image classification datasets.
18

Da, Longchao, Porter Jenkins, Trevor Schwantes, Jeffrey Dotson, and Hua Wei. "Probabilistic Offline Policy Ranking with Approximate Bayesian Computation." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 18 (2024): 20370–78. http://dx.doi.org/10.1609/aaai.v38i18.30019.

Abstract:
In practice, it is essential to compare and rank candidate policies offline before real-world deployment for safety and reliability. Prior work seeks to solve this offline policy ranking (OPR) problem through value-based methods, such as off-policy evaluation (OPE). However, they fail to analyze special-case performance (e.g., worst or best cases), due to the lack of a holistic characterization of policies’ performance. It is even more difficult to estimate precise policy values when the reward is not fully accessible under sparse settings. In this paper, we present Probabilistic Offline Policy Ranking (POPR), a framework to address OPR problems by leveraging expert data to characterize the probability of a candidate policy behaving like experts, and approximating its entire performance posterior distribution to help with ranking. POPR does not rely on value estimation, and the derived performance posterior can be used to distinguish candidates in worst, best, and average cases. To estimate the posterior, we propose POPR-EABC, an Energy-based Approximate Bayesian Computation (ABC) method conducting likelihood-free inference. POPR-EABC reduces the heuristic nature of ABC with a smooth energy function and improves sampling efficiency with a pseudo-likelihood. We empirically demonstrate that POPR-EABC is adequate for evaluating policies in both discrete and continuous action spaces across various experiment environments, and facilitates probabilistic comparisons of candidate policies before deployment.
19

Crisan, Dan, and Joaquín Míguez. "Uniform convergence over time of a nested particle filtering scheme for recursive parameter estimation in state-space Markov models." Advances in Applied Probability 49, no. 4 (2017): 1170–200. http://dx.doi.org/10.1017/apr.2017.38.

Abstract:
We analyse the performance of a recursive Monte Carlo method for the Bayesian estimation of the static parameters of a discrete-time state-space Markov model. The algorithm employs two layers of particle filters to approximate the posterior probability distribution of the model parameters. In particular, the first layer yields an empirical distribution of samples on the parameter space, while the filters in the second layer are auxiliary devices to approximate the (analytically intractable) likelihood of the parameters. This approach relates the novel algorithm to the recent sequential Monte Carlo square method, which provides a nonrecursive solution to the same problem. In this paper we investigate the approximation of integrals of real bounded functions with respect to the posterior distribution of the system parameters. Under assumptions related to the compactness of the parameter support and the stability and continuity of the sequence of posterior distributions for the state-space model, we prove that the Lₚ norms of the approximation errors vanish asymptotically (as the number of Monte Carlo samples generated by the algorithm increases) and uniformly over time. We also prove that, under the same assumptions, the proposed scheme can asymptotically identify the parameter values for a class of models. We conclude the paper with a numerical example that illustrates the uniform convergence results by exploring the accuracy and stability of the proposed algorithm operating with long sequences of observations.
20

Singh, Kundan, Yogesh Mani Tripathi, Liang Wang, and Shuo-Jye Wu. "Analysis of Block Adaptive Type-II Progressive Hybrid Censoring with Weibull Distribution." Mathematics 12, no. 24 (2024): 4026. https://doi.org/10.3390/math12244026.

Abstract:
The estimation of unknown model parameters and reliability characteristics is considered under a block adaptive progressive hybrid censoring scheme, where data are observed from a Weibull model. This censoring scheme enhances experimental efficiency by conducting experiments across different testing facilities. Point and interval estimates for parameters and reliability assessments are derived using both classical and Bayesian approaches. The existence and uniqueness of maximum likelihood estimates are established. Consequently, reliability performance and differences across different testing facilities are analyzed. In addition, a Metropolis–Hastings sampling algorithm is developed to approximate complex posterior computations. Approximate confidence intervals and highest posterior density credible intervals are obtained for the parametric functions. The performance of all estimators is evaluated through an extensive simulation study, and observations are discussed. A cancer dataset is analyzed to illustrate the findings under the block adaptive censoring scheme.
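As a simplified illustration of the Metropolis–Hastings machinery behind such posterior computations, the sketch below targets a Weibull posterior for complete (uncensored) data with vague priors; the paper's block adaptive progressive hybrid censoring would change only the likelihood term.

```python
import numpy as np

rng = np.random.default_rng(5)
data = rng.weibull(2.0, 80) * 1.5  # toy complete sample (the paper: censored)

def log_post(k, lam):
    """Weibull log-likelihood plus vague priors p(k, lam) proportional to 1/(k*lam)."""
    if k <= 0 or lam <= 0:
        return -np.inf
    z = data / lam
    loglik = len(data) * np.log(k / lam) + (k - 1) * np.log(z).sum() - (z**k).sum()
    return loglik - np.log(k) - np.log(lam)

theta, chain = np.array([1.0, 1.0]), []
for _ in range(20000):
    prop = theta * np.exp(0.1 * rng.normal(size=2))  # log-scale random walk
    # The Hastings ratio includes the Jacobian of the multiplicative proposal.
    log_r = (log_post(*prop) - log_post(*theta)
             + np.log(prop).sum() - np.log(theta).sum())
    if np.log(rng.uniform()) < log_r:
        theta = prop
    chain.append(theta)
chain = np.array(chain[5000:])  # drop burn-in
print(chain.mean(0))            # posterior means of (shape, scale)
```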
21

Wang, Hongqiao, and Jinglai Li. "Adaptive Gaussian Process Approximation for Bayesian Inference with Expensive Likelihood Functions." Neural Computation 30, no. 11 (2018): 3072–94. http://dx.doi.org/10.1162/neco_a_01127.

Abstract:
We consider Bayesian inference problems with computationally intensive likelihood functions. We propose a Gaussian process (GP)–based method to approximate the joint distribution of the unknown parameters and the data, built on recent work (Kandasamy, Schneider, & Póczos, 2015). In particular, we write the joint density approximately as a product of an approximate posterior density and an exponentiated GP surrogate. We then provide an adaptive algorithm to construct such an approximation, where an active learning method is used to choose the design points. With numerical examples, we illustrate that the proposed method has competitive performance against existing approaches for Bayesian computation.
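A bare-bones version of the surrogate idea (our sketch, with a fixed design in place of the paper's active learning and a flat prior): fit a GP to a handful of evaluations of an expensive log-likelihood, then exponentiate the GP mean to obtain an approximate posterior.

```python
import numpy as np

def expensive_loglik(t):  # stand-in for a costly simulator-based likelihood
    return -0.5 * ((t - 1.2) / 0.4)**2

def k(a, b, ell=0.5):     # squared-exponential covariance
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

X = np.linspace(-2, 3, 11)           # design points (paper: chosen adaptively)
y = expensive_loglik(X)
grid = np.linspace(-2, 3, 400)
K = k(X, X) + 1e-8 * np.eye(len(X))  # jitter for numerical stability
mu = k(grid, X) @ np.linalg.solve(K, y)  # GP posterior mean of the log-lik

post = np.exp(mu - mu.max())         # flat prior: posterior from the surrogate
post /= post.sum() * (grid[1] - grid[0])
print(grid[post.argmax()])           # close to the true mode at 1.2
```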
22

Yang, Dezhi, Guoxian Yu, Jun Wang, Zhengtian Wu, and Maozu Guo. "Reinforcement Causal Structure Learning on Order Graph." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (2023): 10737–44. http://dx.doi.org/10.1609/aaai.v37i9.26274.

Abstract:
Learning a directed acyclic graph (DAG) that describes the causality of observed data is a very challenging but important task. Due to the limited quantity and quality of observed data, and the non-identifiability of the causal graph, it is almost impossible to infer a single precise DAG. Some methods approximate the posterior distribution of DAGs to explore the DAG space via Markov chain Monte Carlo (MCMC), but the DAG space grows super-exponentially with the number of nodes, so accurately characterizing the whole distribution over DAGs is intractable. In this paper, we propose Reinforcement Causal Structure Learning on Order Graph (RCL-OG) that uses an order graph instead of MCMC to model different DAG topological orderings and to reduce the problem size. RCL-OG first defines reinforcement learning with a new reward mechanism to approximate the posterior distribution of orderings in an efficient way, and uses deep Q-learning to update and transfer rewards between nodes. Next, it obtains the probability transition model of nodes on the order graph, and computes the posterior probability of different orderings. In this way, we can sample from this model to obtain orderings with high probability. Experiments on synthetic and benchmark datasets show that RCL-OG provides accurate posterior probability approximation and achieves better results than competitive causal discovery algorithms.
23

Lu, Yunan, Liang He, Fan Min, Weiwei Li, and Xiuyi Jia. "Generative Label Enhancement with Gaussian Mixture and Partial Ranking." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 7 (2023): 8975–83. http://dx.doi.org/10.1609/aaai.v37i7.26078.

Abstract:
Label distribution learning (LDL) is an effective learning paradigm for dealing with label ambiguity. When applying LDL, datasets annotated with label distributions (i.e., real-valued vectors like a probability distribution) are typically required. Unfortunately, most existing datasets only contain logical labels, and manual annotation with label distributions is costly. To address this problem, we treat the label distribution as a latent vector and infer its posterior by variational Bayes. Specifically, we propose a generative label enhancement model to encode the process of generating feature vectors and logical label vectors from label distributions in a principled way. In terms of features, we assume that the feature vector is generated by a Gaussian mixture dominated by the label distribution, which captures the one-to-many relationship from the label distribution to the feature vector and thus reduces the feature generation error. In terms of logical labels, we design a probability distribution to generate the logical label vector from a label distribution, which captures partial label ranking in the logical label vector and thus provides more accurate guidance for inferring the label distribution. Besides, to approximate the posterior of the label distribution, we design an inference model and derive the variational learning objective. Finally, extensive experiments on real-world datasets validate our proposal.
24

Zhao, Zeyu, and Mrinal K. Sen. "A gradient-based Markov chain Monte Carlo method for full-waveform inversion and uncertainty analysis." GEOPHYSICS 86, no. 1 (2020): R15–R30. http://dx.doi.org/10.1190/geo2019-0585.1.

Abstract:
Traditional full-waveform inversion (FWI) methods only render a “best-fit” model that cannot account for uncertainties of the ill-posed inverse problem. Additionally, local optimization-based FWI methods cannot always converge to a geologically meaningful solution unless the inversion starts with an accurate background model. We seek the solution for FWI in the Bayesian inference framework to address those two issues. In Bayesian inference, the model space is directly probed by sampling methods such that we obtain a reliable uncertainty appraisal, determine optimal models, and avoid entrapment in a small local region of the model space. The solution of such a statistical inverse method is completely described by the posterior distribution, which quantifies the distributions for parameters and inversion uncertainties. To efficiently sample the posterior distribution, we introduce a sampling algorithm in which the proposal distribution is constructed by the local gradient and the diagonal approximate Hessian of the local log posterior. Our algorithm is called the gradient-based Markov chain Monte Carlo (GMCMC) method. The GMCMC FWI method can quantify inversion uncertainties with estimated posterior distribution given sufficiently long Markov chains. By directly sampling the posterior distribution, we obtain a global view of the model space. Theoretically speaking, statistical assessments do not depend on starting models. Our method is applied to the 2D Marmousi model with the frequency-domain FWI setting. Numerical results suggest that our method can be readily applied to 2D cases with affordable computational efforts.
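The proposal mechanism is in the spirit of Metropolis-adjusted Langevin sampling (our reading of the abstract; the paper additionally preconditions with a diagonal approximate Hessian). A minimal MALA sketch on a toy banana-shaped posterior:

```python
import numpy as np

rng = np.random.default_rng(6)

def log_post(x):  # toy banana-shaped 2-D posterior
    return -0.5 * (x[0]**2 + 4.0 * (x[1] - x[0]**2)**2)

def grad(x):
    return np.array([-x[0] + 8.0 * x[0] * (x[1] - x[0]**2),
                     -4.0 * (x[1] - x[0]**2)])

eps, x, chain = 0.05, np.zeros(2), []
for _ in range(20000):
    mean_x = x + 0.5 * eps * grad(x)  # gradient drift toward high posterior
    prop = mean_x + np.sqrt(eps) * rng.normal(size=2)
    mean_p = prop + 0.5 * eps * grad(prop)
    # Metropolis correction with the asymmetric Langevin proposal densities.
    log_q_fwd = -((prop - mean_x)**2).sum() / (2 * eps)
    log_q_rev = -((x - mean_p)**2).sum() / (2 * eps)
    if np.log(rng.uniform()) < log_post(prop) - log_post(x) + log_q_rev - log_q_fwd:
        x = prop
    chain.append(x)
print(np.mean(chain, axis=0))
```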
25

Gershman, Samuel J., Edward Vul, and Joshua B. Tenenbaum. "Multistability and Perceptual Inference." Neural Computation 24, no. 1 (2012): 1–24. http://dx.doi.org/10.1162/neco_a_00226.

Abstract:
Ambiguous images present a challenge to the visual system: How can uncertainty about the causes of visual inputs be represented when there are multiple equally plausible causes? A Bayesian ideal observer should represent uncertainty in the form of a posterior probability distribution over causes. However, in many real-world situations, computing this distribution is intractable and requires some form of approximation. We argue that the visual system approximates the posterior over underlying causes with a set of samples and that this approximation strategy produces perceptual multistability—stochastic alternation between percepts in consciousness. Under our analysis, multistability arises from a dynamic sample-generating process that explores the posterior through stochastic diffusion, implementing a rational form of approximate Bayesian inference known as Markov chain Monte Carlo (MCMC). We examine in detail the most extensively studied form of multistability, binocular rivalry, showing how a variety of experimental phenomena—gamma-like stochastic switching, patchy percepts, fusion, and traveling waves—can be understood in terms of MCMC sampling over simple graphical models of the underlying perceptual tasks. We conjecture that the stochastic nature of spiking neurons may lend itself to implementing sample-based posterior approximations in the brain.
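The sampling account is easy to caricature in code: a random-walk MCMC chain on a toy bimodal posterior over a latent "percept" dwells in one mode and stochastically switches to the other, mimicking perceptual alternation (a toy illustration only, not the paper's graphical models of rivalry).

```python
import numpy as np

rng = np.random.default_rng(7)

def log_post(x):
    # Two equally plausible interpretations, at x = -2 and x = +2.
    return np.logaddexp(-0.5 * (x - 2.0)**2, -0.5 * (x + 2.0)**2)

x, chain = 0.0, []
for _ in range(50000):
    prop = x + 0.5 * rng.normal()  # small local diffusion step
    if np.log(rng.uniform()) < log_post(prop) - log_post(x):
        x = prop
    chain.append(x)
chain = np.array(chain)

percept = chain > 0                # which interpretation currently dominates
switches = np.count_nonzero(percept[1:] != percept[:-1])
print(switches, percept.mean())    # alternation count; about 0.5 occupancy
```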
26

Kijsason, Sasipong, Sa-Aat Niwitpong, and Suparat Niwitpong. "Confidence Intervals for the Parameter Mean of Zero-Inflated Two-Parameter Rayleigh Distribution." Symmetry 17, no. 7 (2025): 1019. https://doi.org/10.3390/sym17071019.

Abstract:
The Rayleigh distribution is a continuous probability distribution that is inherently asymmetric and commonly used to model right-skewed data. It holds significant importance across a wide range of scientific and engineering disciplines and exhibits structural relationships with several other asymmetric probability distributions, such as the Weibull and exponential distributions. This research proposes techniques for establishing credible intervals and confidence intervals for the single mean of the zero-inflated two-parameter Rayleigh distribution. The study introduces methods such as the percentile bootstrap, generalized confidence interval, standard confidence interval, approximate normal using the delta method, Bayesian credible interval, and Bayesian highest posterior density. The effectiveness of the proposed methods is assessed by evaluating coverage probability and expected length through Monte Carlo simulations. The results indicate that the Bayesian highest posterior density method outperforms the other approaches. Finally, the study applies the proposed methods to construct confidence intervals for the single mean using real-world data on COVID-19 total deaths in Singapore during October 2022.
27

Durante, Daniele. "Conjugate Bayes for probit regression via unified skew-normal distributions." Biometrika 106, no. 4 (2019): 765–79. http://dx.doi.org/10.1093/biomet/asz034.

Abstract:
Regression models for dichotomous data are ubiquitous in statistics. Besides being useful for inference on binary responses, these methods serve as building blocks in more complex formulations, such as density regression, nonparametric classification and graphical models. Within the Bayesian framework, inference proceeds by updating the priors for the coefficients, typically taken to be Gaussians, with the likelihood induced by probit or logit regressions for the responses. In this updating, the apparent absence of a tractable posterior has motivated a variety of computational methods, including Markov chain Monte Carlo routines and algorithms that approximate the posterior. Despite being implemented routinely, Markov chain Monte Carlo strategies have mixing or time-inefficiency issues in large-p and small-n studies, whereas approximate routines fail to capture the skewness typically observed in the posterior. In this article it is proved that the posterior distribution for the probit coefficients has a unified skew-normal kernel under Gaussian priors. This result allows efficient Bayesian inference for a wide class of applications, especially in large-p and small-to-moderate-n settings where state-of-the-art computational methods face notable challenges. These advances are illustrated in a genetic study, and further motivate the development of a wider class of conjugate priors for probit models, along with methods for obtaining independent and identically distributed samples from the unified skew-normal posterior.
28

Singh, Sanjay Kumar, Umesh Singh, and Vikas Kumar Sharma. "Bayesian Estimation and Prediction for Flexible Weibull Model under Type-II Censoring Scheme." Journal of Probability and Statistics 2013 (2013): 1–16. http://dx.doi.org/10.1155/2013/146140.

Abstract:
We have developed the Bayesian estimation procedure for the flexible Weibull distribution under a Type-II censoring scheme assuming Jeffreys's scale-invariant (noninformative) and Gamma (informative) priors for the model parameters. The interval estimation for the model parameters has been performed through normal approximation, bootstrap, and highest posterior density (HPD) procedures. Further, we have also derived the predictive posteriors and the corresponding predictive survival functions for the future observations based on Type-II censored data from the flexible Weibull distribution. Since the predictive posteriors are not in closed form, we propose to use Markov chain Monte Carlo (MCMC) methods to approximate the posteriors of interest. The performance of the Bayes estimators has also been compared with the classical estimators of the model parameters through a Monte Carlo simulation study. A real data set representing the time between failures of secondary reactor pumps has been analysed for illustration purposes.
29

Jiang, Wenxin. "Some Simple Formulas for Posterior Convergence Rates." International Scholarly Research Notices 2014 (October 29, 2014): 1–8. http://dx.doi.org/10.1155/2014/469340.

Abstract:
We derive some simple relations that demonstrate how the posterior convergence rate is related to two driving factors: a “penalized divergence” of the prior, which measures the ability of the prior distribution to propose a nonnegligible set of working models to approximate the true model, and a “norm complexity” of the prior, which measures the complexity of the prior support, weighted by the prior probability masses. These formulas are explicit, involve no essential assumptions, and are easy to apply. We apply this approach to the case with model averaging and derive some useful oracle inequalities that can optimize the performance adaptively without knowing the true model.
30

Li, Qirui, Cuixian Li, Zhiping Peng, Delong Cui, and Jieguang He. "Mixed Student’s T-Distribution Regression Soft Measurement Model and Its Application Based on VI and MCMC." Processes 13, no. 3 (2025): 861. https://doi.org/10.3390/pr13030861.

Abstract:
The conventional diagnostic techniques for ethylene cracker furnace tube coking rely on manual expertise, offline analysis and on-site inspection. However, these methods have inherent limitations, including prolonged inspection times, low accuracy and poor real-time performance, which makes it challenging to meet the requirements of chemical production. The need for high efficiency, high reliability and high safety, coupled with the inherent complexity of the production process, results in data that are multimodal, nonlinear, non-Gaussian and strongly noisy, rendering traditional data processing and analysis methods ineffective. In order to address these issues, this paper puts forth a novel soft measurement approach, namely the ‘Mixed Student’s t-distribution regression soft measurement model based on Variational Inference (VI) and Markov Chain Monte Carlo (MCMC)’. An initial variational distribution is selected during the initialization step of VI, and VI then iteratively refines this distribution to more closely approximate the true posterior. The VI result is then used to initialize the MCMC, placing the starting point of its iterations in a region that is closer to the true posterior distribution and thereby accelerating MCMC convergence. The model integrates the efficiency of VI with the accuracy of MCMC, enhancing the precision of the posterior distribution approximation while preserving computational efficiency. The experimental results demonstrate that the model exhibits enhanced accuracy and robustness in the diagnosis of ethylene cracker tube coking compared to the conventional Partial Least Squares Regression (PLSR), Gaussian Process Regression (GPR), Gaussian Mixture Regression (GMR), Bayesian Student’s T-Distribution Mixture Regression (STMR) and Semi-supervised Bayesian T-Distribution Mixture Regression (SsSMM). This method provides a scientific basis for optimizing and maintaining the ethylene cracker, enhancing its production efficiency and reliability, and effectively addressing the multimodal, non-Gaussian distribution and uncertainty of the coking data of the ethylene cracker furnace tube.
31

Zhang, Huaguo, Wenting Luo, Xu Zhou, Hao Mu, Lin Gao, and Xiaodong Wang. "A Robust Trajectory Multi-Bernoulli Filter for Superpositional Sensors." Electronics 13, no. 20 (2024): 4001. http://dx.doi.org/10.3390/electronics13204001.

Abstract:
This paper proposes a trajectory multi-Bernoulli filter applied to the superpositional sensor model for multi-target tracking in the presence of unknown measurement noise. This filter can provide a multi-Bernoulli approximation of the posterior density on the set of alive trajectories at the current time step. We also provide a Gaussian mixture (GM) implementation of this filter, employing a mixture of Gaussian and inverse Wishart distributions to represent the combined state of measurement noise and target information. Subsequently, the variational Bayesian (VB) method is employed to approximate the posterior distribution, ensuring its form remains consistent with the prior distribution. This method is capable of directly generating trajectory estimates and can jointly estimate both the multi-object states and the measurement noise covariance. The performance of this algorithm is verified through simulation. Finally, a computationally more efficient L-scan approximation is provided. The simulation results indicate that the filter can achieve robust tracking performance, adapting to unknown measurement noise.
32

Beaumont, Mark A., Wenyang Zhang, and David J. Balding. "Approximate Bayesian Computation in Population Genetics." Genetics 162, no. 4 (2002): 2025–35. http://dx.doi.org/10.1093/genetics/162.4.2025.

Abstract:
We propose a new method for approximate Bayesian statistical inference on the basis of summary statistics. The method is suited to complex problems that arise in population genetics, extending ideas developed in this setting by earlier authors. Properties of the posterior distribution of a parameter, such as its mean or density curve, are approximated without explicit likelihood calculations. This is achieved by fitting a local-linear regression of simulated parameter values on simulated summary statistics, and then substituting the observed summary statistics into the regression equation. The method combines many of the advantages of Bayesian statistical inference with the computational efficiency of methods based on summary statistics. A key advantage of the method is that the nuisance parameters are automatically integrated out in the simulation step, so that the large numbers of nuisance parameters that arise in population genetics problems can be handled without difficulty. Simulation results indicate computational and statistical efficiency that compares favorably with those of alternative methods previously proposed in the literature. We also compare the relative efficiency of inferences obtained using methods based on summary statistics with those obtained directly from the data using MCMC.
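The regression-adjustment step can be sketched in a few lines (an unweighted toy version; the paper uses local-linear regression with an Epanechnikov kernel and a population-genetics simulator rather than this normal toy model):

```python
import numpy as np

rng = np.random.default_rng(8)
obs = rng.normal(0.8, 1.0, 60)
s_obs = np.array([obs.mean(), obs.std()])

# Rejection step: simulate from the prior and keep the parameters whose
# summaries fall closest to the observed summaries.
theta = rng.uniform(-3, 3, 50_000)
sims = rng.normal(theta[:, None], 1.0, (50_000, 60))
S = np.column_stack([sims.mean(1), sims.std(1)])
d = np.sqrt((((S - s_obs) / S.std(0))**2).sum(1))
keep = d <= np.quantile(d, 0.01)
th, Sk = theta[keep], S[keep]

# Local-linear adjustment: regress theta on the summaries and shift each
# accepted value to what it would have been at the observed summaries.
Xr = np.column_stack([np.ones(keep.sum()), Sk - s_obs])
beta, *_ = np.linalg.lstsq(Xr, th, rcond=None)
th_adj = th - (Sk - s_obs) @ beta[1:]
print(th.mean(), th_adj.mean())  # the adjusted draws concentrate better
```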
33

Muhammad, Isyaku, Xingang Wang, Changyou Li, Mingming Yan, and Miaoxin Chang. "Estimation of the Reliability of a Stress–Strength System from Poisson Half Logistic Distribution." Entropy 22, no. 11 (2020): 1307. http://dx.doi.org/10.3390/e22111307.

Abstract:
This paper discusses the estimation of the stress-strength reliability parameter R=P(Y<X) based on complete samples when the stress and strength are two independent Poisson half logistic distributed (PHLD) random variables. We address the estimation of R in the general case and when the scale parameter is common. The classical and Bayesian estimation (BE) techniques of R are studied. The maximum likelihood estimator (MLE) and its asymptotic distributions are obtained; an approximate asymptotic confidence interval of R is computed using the asymptotic distribution. The non-parametric percentile bootstrap and Student’s bootstrap confidence intervals of R are discussed. The Bayes estimators of R are computed using a gamma prior and discussed under various loss functions such as the squared error loss function (SEL), absolute error loss function (AEL), linear exponential error loss function (LINEX), generalized entropy error loss function (GEL) and maximum a posteriori (MAP). The Metropolis–Hastings algorithm is used to estimate the posterior distributions of the estimators of R. The highest posterior density (HPD) credible interval is constructed based on the SEL. Monte Carlo simulations are used to numerically analyze the performance of the MLE and Bayes estimators; the results were quite satisfactory in terms of their mean square error (MSE) and confidence intervals. Finally, we use two real data studies to demonstrate the performance of the proposed estimation techniques in practice and to illustrate how the PHLD is a good candidate in reliability studies.
34

Feroze, Navid, Ali Al-Alwan, Muhammad Noor-ul-Amin, Shajib Ali, and R. Alshenawy. "Bayesian Estimation for the Doubly Censored Topp Leone Distribution using Approximate Methods and Fuzzy Type of Priors." Journal of Function Spaces 2022 (March 19, 2022): 1–15. http://dx.doi.org/10.1155/2022/4816748.

Abstract:
The Topp Leone distribution (TLD) is a lifetime model having finite support and a U-shaped hazard rate; these features distinguish it from famous lifetime models such as the gamma, Weibull, or log-normal distribution. Bayesian methods are closely linked to fuzzy sets, and fuzzy priors can be used as prior information in Bayesian models. This paper considers the posterior analysis of the TLD when the samples are doubly censored. Independent informative priors (IPs), which are very close to fuzzy priors, have been proposed for the analysis. Symmetric and asymmetric loss functions have also been assumed for the analysis. As the marginal posterior distributions are not available in closed form, we have used a quadrature method (QM), Lindley’s approximation (LA), Tierney and Kadane’s approximation (TKA), and a Gibbs sampler (GS) for the approximate estimation of the parameters. A simulation study has been conducted to assess and compare the performance of the various posterior estimators. In addition, a real dataset has been analyzed to illustrate the applicability of the results obtained in the study. The study suggests that the TKA performs better than its counterparts.
35

Zhang, Chen, Riccardo Barbano, and Bangti Jin. "Conditional Variational Autoencoder for Learned Image Reconstruction." Computation 9, no. 11 (2021): 114. http://dx.doi.org/10.3390/computation9110114.

Abstract:
Learned image reconstruction techniques using deep neural networks have recently gained popularity and have delivered promising empirical results. However, most approaches focus on one single recovery for each observation, and thus neglect uncertainty information. In this work, we develop a novel computational framework that approximates the posterior distribution of the unknown image at each query observation. The proposed framework is very flexible: it handles implicit noise models and priors, it incorporates the data formation process (i.e., the forward operator), and the learned reconstructive properties are transferable between different datasets. Once the network is trained using the conditional variational autoencoder loss, it provides a computationally efficient sampler for the approximate posterior distribution via feed-forward propagation, and the summarizing statistics of the generated samples are used for both point estimation and uncertainty quantification. We illustrate the proposed framework with extensive numerical experiments on positron emission tomography (with both moderate and low count levels), showing that the framework generates high-quality samples when compared with state-of-the-art methods.
36

White, S. R., T. Kypraios, and S. P. Preston. "Piecewise Approximate Bayesian Computation: fast inference for discretely observed Markov models using a factorised posterior distribution." Statistics and Computing 25, no. 2 (2013): 289–301. http://dx.doi.org/10.1007/s11222-013-9432-2.

37

Havasi, Marton, Jasper Snoek, Dustin Tran, Jonathan Gordon, and José Miguel Hernández-Lobato. "Sampling the Variational Posterior with Local Refinement." Entropy 23, no. 11 (2021): 1475. http://dx.doi.org/10.3390/e23111475.

Abstract:
Variational inference is an optimization-based method for approximating the posterior distribution of the parameters in Bayesian probabilistic models. A key challenge of variational inference is to approximate the posterior with a distribution that is computationally tractable yet sufficiently expressive. We propose a novel method for generating samples from a highly flexible variational approximation. The method starts with a coarse initial approximation and generates samples by refining it in selected, local regions. This allows the samples to capture dependencies and multi-modality in the posterior, even when these are absent from the initial approximation. We demonstrate theoretically that our method always improves the quality of the approximation (as measured by the evidence lower bound). In experiments, our method consistently outperforms recent variational inference methods in terms of log-likelihood and ELBO across three example tasks: the Eight-Schools example (an inference task in a hierarchical model), training a ResNet-20 (Bayesian inference in a large neural network), and the Mushroom task (posterior sampling in a contextual bandit problem).
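The local-refinement intuition can be conveyed with a toy sketch: each draw from a broad initial Gaussian is pushed by a few gradient steps toward a nearby mode of a bimodal target. The paper's method refines the variational distribution itself, with provable ELBO improvement; this simplification (step size, step count, and target all assumed here) only illustrates how refined samples capture multi-modality absent from the initial approximation.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_post(x):   # bimodal target that a single coarse Gaussian cannot capture
    return np.logaddexp(-0.5 * (x + 2) ** 2, -0.5 * (x - 2) ** 2)

def grad(x, h=1e-5):                         # finite-difference score
    return (log_post(x + h) - log_post(x - h)) / (2 * h)

samples = []
for _ in range(2000):
    x = 3.0 * rng.standard_normal()          # draw from the coarse q = N(0, 3^2)
    for _ in range(50):                      # local refinement of this draw
        x += 0.1 * grad(x)
    samples.append(x)
# the refined draws now concentrate near the two modes at -2 and +2
```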
38

Li, Rongfan, Fan Zhou, Goce Trajcevski, Kunpeng Zhang, and Ting Zhong. "A Probabilistic Framework for Land Deformation Prediction (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (2022): 13001–2. http://dx.doi.org/10.1609/aaai.v36i11.21637.

Abstract:
The development of InSAR (satellite Interferometric Synthetic Aperture Radar) enables accurate monitoring of land surface deformations and has led to advances in deformation forecasting for preventing landslides, one of the most severe geological disasters. Despite this unparalleled success, existing spatio-temporal models typically make predictions on static adjacency relationships, simplifying the conditional dependencies and neglecting the distributions of the variables. To overcome these limitations, we propose a Distribution Aware Probabilistic Framework (DAPF), which learns manifold embeddings while maintaining the distribution of deformations. We obtain a dynamic adjacency matrix upon which we approximate the true posterior while emphasizing the spatio-temporal characteristics. Experimental results on a real-world dataset validate the superior performance of our method.
39

Feng, Jie, Xuguang Wang, and Jonathan Poterjoy. "A Comparison of Two Local Moment-Matching Nonlinear Filters: Local Particle Filter (LPF) and Local Nonlinear Ensemble Transform Filter (LNETF)." Monthly Weather Review 148, no. 11 (2020): 4377–95. http://dx.doi.org/10.1175/mwr-d-19-0368.1.

Abstract:
The local particle filter (LPF) and the local nonlinear ensemble transform filter (LNETF) are two moment-matching nonlinear filters to approximate the classical particle filter (PF). They adopt different strategies to alleviate filter degeneracy. LPF and LNETF localize observational impact but use different localization functions. They assimilate observations in a partially sequential and a simultaneous manner, respectively. In addition, LPF applies the resampling step, whereas LNETF applies the deterministic square root transformation to update particles. Both methods preserve the posterior mean and variance of the PF. LNETF additionally preserves the posterior correlation of the PF for state variables within a local volume. These differences lead to their differing performance in filter stability and posterior moment estimation. LPF and LNETF are systematically compared and analyzed here through a set of experiments with a Lorenz model. Strategies to improve the LNETF are proposed. The original LNETF is inferior to the original LPF in filter stability and analysis accuracy, particularly for small particle numbers. This is attributed to both the localization function and particle update differences. The LNETF localization function imposes a stronger observation impact than the LPF for remote grids and thus is more susceptible to filter degeneracy. The LNETF update causes an overall narrower range of posteriors that excludes true states more frequently. After applying the same localization function as the LPF and additional posterior inflation to the LNETF, the two filters reach similar filter stability and analysis accuracy for all particle numbers. The improved LNETF shows more accurate posterior probability distribution but slightly worse spatial correlation of posteriors than the LPF.
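For context, a single analysis step of the classical PF that both filters approximate is sketched below: importance weighting against one observation, the posterior mean and variance that LPF and LNETF both preserve, and an LPF-style resampling update (LNETF would instead apply a deterministic square-root transform). The ensemble size, observation, and error variance are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

particles = 2.0 * rng.standard_normal(100)   # prior ensemble, one state variable
obs, obs_var = 1.2, 0.5 ** 2                 # observation and its error variance

w = np.exp(-0.5 * (obs - particles) ** 2 / obs_var)
w /= w.sum()                                 # normalized importance weights

post_mean = np.sum(w * particles)            # moments preserved by both filters
post_var = np.sum(w * (particles - post_mean) ** 2)

# LPF-style stochastic update: resample particles according to their weights;
# LNETF would instead map the ensemble with a deterministic square-root transform.
resampled = rng.choice(particles, size=particles.size, p=w)
```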
40

Zhao, Yuanying, Dengke Xu, Xingde Duan, and Yicheng Pang. "Bayesian Subset Selection for Reproductive Dispersion Linear Models." Journal of Systems Science and Information 2, no. 1 (2014): 77–85. http://dx.doi.org/10.1515/jssi-2014-0077.

Abstract:
We propose a full Bayesian subset selection method for reproductive dispersion linear models, based on expanding the usual link function to a function that incorporates all possible subsets of predictors by adding indicators as parameters. The vector of indicator variables dictates which predictors to delete. An efficient MCMC procedure combining the Gibbs sampler and the Metropolis–Hastings algorithm is presented to approximate the posterior distribution of the indicator variables. The promising subsets of predictors can be identified as those with higher posterior probability. Several numerical examples illustrate the newly developed methodology.
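The indicator-variable mechanism can be sketched for an ordinary linear model; note that the paper treats the more general reproductive dispersion family, and the BIC score below is only a stand-in for the marginal likelihood entering the full conditionals.

```python
import numpy as np

rng = np.random.default_rng(0)

n, p = 100, 4
X = rng.standard_normal((n, p))
y = X[:, 0] - 2 * X[:, 2] + 0.5 * rng.standard_normal(n)  # truth uses x1 and x3

def log_score(gamma):            # BIC stand-in for the log marginal likelihood
    Xg = X[:, gamma.astype(bool)]
    rss = np.sum(y ** 2) if Xg.shape[1] == 0 else np.sum(
        (y - Xg @ np.linalg.lstsq(Xg, y, rcond=None)[0]) ** 2)
    return -0.5 * n * np.log(rss / n) - 0.5 * gamma.sum() * np.log(n)

gamma, counts = np.ones(p), np.zeros(p)
for it in range(2000):           # Gibbs sweep over the vector of indicators
    for j in range(p):
        g1, g0 = gamma.copy(), gamma.copy()
        g1[j], g0[j] = 1, 0
        d = np.clip(log_score(g0) - log_score(g1), -50, 50)
        gamma[j] = rng.uniform() < 1 / (1 + np.exp(d))    # P(gamma_j = 1 | rest)
    counts += gamma
print("posterior inclusion probabilities:", counts / 2000)
```

Predictors with high posterior inclusion probability form the promising subsets described in the abstract.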
41

Zaiser, Fabian, Andrzej S. Murawski, and C. H. Luke Ong. "Guaranteed Bounds on Posterior Distributions of Discrete Probabilistic Programs with Loops." Proceedings of the ACM on Programming Languages 9, POPL (2025): 1104–35. https://doi.org/10.1145/3704874.

Abstract:
We study the problem of bounding the posterior distribution of discrete probabilistic programs with unbounded support, loops, and conditioning. Loops pose the main difficulty in this setting: even if exact Bayesian inference is possible, the state of the art requires user-provided loop invariant templates. By contrast, we aim to find guaranteed bounds, which sandwich the true distribution. They are fully automated, applicable to more programs and provide more provable guarantees than approximate sampling-based inference. Since lower bounds can be obtained by unrolling loops, the main challenge is upper bounds, and we attack it in two ways. The first is called residual mass semantics, which is a flat bound based on the residual probability mass of a loop. The approach is simple, efficient, and has provable guarantees. The main novelty of our work is the second approach, called geometric bound semantics. It operates on a novel family of distributions, called eventually geometric distributions (EGDs), and can bound the distribution of loops with a new form of loop invariants called contraction invariants. The invariant synthesis problem reduces to a system of polynomial inequality constraints, which is a decidable problem with automated solvers. If a solution exists, it yields an exponentially decreasing bound on the whole distribution, and can therefore bound moments and tail asymptotics as well, not just probabilities as in the first approach. Both semantics enjoy desirable theoretical properties. In particular, we prove soundness and convergence, i.e. the bounds converge to the exact posterior as loops are unrolled further. We also investigate sufficient and necessary conditions for the existence of geometric bounds. On the practical side, we describe Diabolo, a fully-automated implementation of both semantics, and evaluate them on a variety of benchmarks from the literature, demonstrating their general applicability and the utility of the resulting bounds.
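The residual mass idea is easy to see on a toy geometric loop: unrolling K iterations yields exact probabilities for the terminated paths (lower bounds), and the probability mass still inside the loop gives a flat upper-bound increment. A hypothetical sketch:

```python
# Program: n = 0; while flip(q): n += 1   -- the distribution of n is geometric.
q, K = 0.3, 8                                  # continue-probability, unroll depth

lower = [(1 - q) * q ** n for n in range(K)]   # mass of paths terminated by depth K
residual = q ** K                              # mass still inside the loop

# Residual mass semantics: for i < K the true mass is sandwiched as
#   lower[i] <= P(n = i) <= lower[i] + residual
for i, lo in enumerate(lower):
    print(f"P(n={i}) in [{lo:.4f}, {lo + residual:.4f}]")
```

As K grows the residual q^K vanishes, mirroring the convergence property proved in the paper; the geometric bound semantics instead bounds the entire tail at once.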
42

Hansen, Thomas M. "Efficient probabilistic inversion using the rejection sampler—exemplified on airborne EM data." Geophysical Journal International 224, no. 1 (2020): 543–57. http://dx.doi.org/10.1093/gji/ggaa491.

Abstract:
Probabilistic inversion methods, typically based on Markov chain Monte Carlo, exist that allow exploring the full uncertainty of geophysical inverse problems. The use of such methods is, however, limited by significant computational demands and the non-trivial analysis of the obtained set of dependent models. Here, a novel approach for sampling the posterior distribution is suggested, based on using pre-calculated lookup tables with the extended rejection sampler. The method (1) is fast, (2) generates independent realizations of the posterior, and (3) does not get stuck in local minima. It can be applied to any inverse problem (sampling an approximate posterior distribution) but is most promising for problems with informed prior information and/or localized inverse problems. The method is tested on the inversion of airborne electromagnetic data and shows an increase in computational efficiency of many orders of magnitude compared to the extended Metropolis algorithm.
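A minimal sketch of the lookup-table rejection scheme, with a toy forward model in place of an airborne EM solver: forward responses of prior realizations are tabulated once, and independent posterior realizations are obtained by accepting each entry with probability proportional to its likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(m):                       # toy forward model g(m)
    return np.array([m, m ** 2])

d_obs, sigma = np.array([0.8, 0.7]), 0.2

# Lookup table: forward responses for prior realizations, computed once up front.
prior = rng.uniform(-2, 2, 50000)
table = np.stack([forward(m) for m in prior])

loglike = -0.5 * np.sum((table - d_obs) ** 2, axis=1) / sigma ** 2
accept = np.log(rng.uniform(size=prior.size)) < loglike - loglike.max()
posterior = prior[accept]             # independent realizations of the posterior
print(posterior.size, "independent posterior realizations")
```

Since the acceptance test needs only table lookups, the same table can be reused across many localized inversions, in line with the efficiency gains reported above.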
43

Ribeiro, Fabiano, and Manfred Opper. "Expectation Propagation with Factorizing Distributions: A Gaussian Approximation and Performance Results for Simple Models." Neural Computation 23, no. 4 (2011): 1047–69. http://dx.doi.org/10.1162/neco_a_00104.

Abstract:
We discuss the expectation propagation (EP) algorithm for approximate Bayesian inference using a factorizing posterior approximation. For neural network models, we use a central limit theorem argument to make EP tractable when the number of parameters is large. For two types of models, we show that EP can achieve optimal generalization performance when data are drawn from a simple distribution.
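Concretely, the generic EP iteration behind this scheme removes one likelihood factor to form a cavity distribution and then matches moments within the approximating family:

$$q^{\setminus i}(\theta) \propto \frac{q(\theta)}{\tilde{f}_i(\theta)}, \qquad q^{\mathrm{new}}(\theta) = \operatorname{proj}\left[f_i(\theta)\, q^{\setminus i}(\theta)\right], \qquad \tilde{f}_i^{\mathrm{new}}(\theta) \propto \frac{q^{\mathrm{new}}(\theta)}{q^{\setminus i}(\theta)},$$

where $\operatorname{proj}$ denotes moment matching onto the factorizing family; the central limit theorem argument of the paper is what makes the required moments tractable for large neural network models.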
44

Culpepper, Steven Andrew, and Aaron Hudson. "An Improved Strategy for Bayesian Estimation of the Reduced Reparameterized Unified Model." Applied Psychological Measurement 42, no. 2 (2017): 99–115. http://dx.doi.org/10.1177/0146621617707511.

Abstract:
A Bayesian formulation for a popular conjunctive cognitive diagnosis model, the reduced reparameterized unified model (rRUM), is developed. The new Bayesian formulation of the rRUM employs a latent response data augmentation strategy that yields tractable full conditional distributions. A Gibbs sampling algorithm is described to approximate the posterior distribution of the rRUM parameters. A Monte Carlo study supports accurate parameter recovery and provides evidence that the Gibbs sampler tended to converge in fewer iterations and had a larger effective sample size than a commonly employed Metropolis–Hastings algorithm. The developed method is disseminated for applied researchers as an R package titled “rRUM.”
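The tractable-full-conditional strategy can be sketched on a toy two-class Bernoulli mixture with Beta priors; the conditionals below are illustrative stand-ins, not the rRUM conditionals derived in the paper, and all dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 60 respondents, 10 items, two latent classes with success
# probabilities 0.8 and 0.3 (a stand-in for the rRUM's latent structure).
truth = rng.uniform(size=60) < 0.5
Y = (rng.uniform(size=(60, 10)) < np.where(truth[:, None], 0.8, 0.3)).astype(int)

z = rng.integers(0, 2, 60)                    # latent class memberships
for it in range(1000):
    # 1. item parameters | z: Beta-Bernoulli conjugacy, class by class
    p = np.empty((2, 10))
    for c in (0, 1):
        Yc = Y[z == c]
        p[c] = rng.beta(1 + Yc.sum(0), 1 + (1 - Yc).sum(0))
    # 2. z | item parameters: Bernoulli draws from likelihood-ratio probabilities
    ll = Y @ np.log(p).T + (1 - Y) @ np.log(1 - p).T      # shape (60, 2)
    z = (rng.uniform(size=60) < 1 / (1 + np.exp(ll[:, 0] - ll[:, 1]))).astype(int)
# post-burn-in draws of z and p approximate their joint posterior
```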
45

Alotaibi, Refah, Mazen Nassar, and Ahmed Elshahhat. "Computational Analysis of XLindley Parameters Using Adaptive Type-II Progressive Hybrid Censoring with Applications in Chemical Engineering." Mathematics 10, no. 18 (2022): 3355. http://dx.doi.org/10.3390/math10183355.

Abstract:
This work addresses the estimation of the XLindley distribution under an adaptive Type-II progressive hybrid censoring scheme. Maximum likelihood and Bayesian approaches are used to estimate the unknown parameter, reliability, and hazard rate functions. Bayesian estimators are explored under the assumption of independent gamma priors and a symmetric loss function. The approximate confidence intervals and the highest posterior density credible intervals are also computed. An extensive simulation study covering various sample sizes and censoring schemes is implemented to evaluate the estimation methods. Finally, two real datasets from the field of chemical engineering are analyzed to show that the XLindley distribution is a better model than several competitors for the same data. The Bayesian paradigm, using the Metropolis–Hastings algorithm to generate samples from the posterior distribution, is recommended for estimating any lifetime parameter of the XLindley distribution when data are obtained from an adaptive Type-II progressively hybrid censored sample.
46

Huang, Yanping, and Rajesh P. N. Rao. "Bayesian Inference and Online Learning in Poisson Neuronal Networks." Neural Computation 28, no. 8 (2016): 1503–26. http://dx.doi.org/10.1162/neco_a_00851.

Abstract:
Motivated by the growing evidence for Bayesian computation in the brain, we show how a two-layer recurrent network of Poisson neurons can perform both approximate Bayesian inference and learning for any hidden Markov model. The lower-layer sensory neurons receive noisy measurements of hidden world states. The higher-layer neurons infer a posterior distribution over world states via Bayesian inference from inputs generated by sensory neurons. We demonstrate how such a neuronal network with synaptic plasticity can implement a form of Bayesian inference similar to Monte Carlo methods such as particle filtering. Each spike in a higher-layer neuron represents a sample of a particular hidden world state. The spiking activity across the neural population approximates the posterior distribution over hidden states. In this model, variability in spiking is regarded not as a nuisance but as an integral feature that provides the variability necessary for sampling during inference. We demonstrate how the network can learn the likelihood model, as well as the transition probabilities underlying the dynamics, using a Hebbian learning rule. We present results illustrating the ability of the network to perform inference and learning for arbitrary hidden Markov models.
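A toy version of the sampling interpretation: spike counts generated with Poisson rates proportional to the exact posterior approximate that posterior. The two-state world, likelihood values, and gain of 200 are all assumptions of this sketch, not quantities from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

prior = np.array([0.5, 0.5])                # two hidden world states
lik = np.array([0.9, 0.2])                  # p(observation | state)
post = prior * lik / np.sum(prior * lik)    # exact posterior by Bayes' rule

# Higher-layer "neurons": Poisson spike counts with rates proportional to the
# posterior, so each spike represents one sample of a hidden world state.
spikes = rng.poisson(200 * post)
print("exact posterior:          ", post)
print("spike-count approximation:", spikes / spikes.sum())
```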
47

El-Sagheer, Rashad M. "Bayesian Estimation Based on Record Values From Exponentiated Weibull Distribution: A Markov Chain Monte Carlo Approach." American Based Research Journal 3, no. 10 (2014): 01–11. https://doi.org/10.5281/zenodo.3413398.

Abstract:
In this paper, we consider the Bayes estimators of the unknown parameters of the exponentiated Weibull distribution (EWD) under the assumptions of gamma priors on both shape parameters. Point estimation and confidence intervals based on maximum likelihood and bootstrap methods are proposed. The Bayes estimators cannot be obtained in explicit forms, so we propose Markov chain Monte Carlo (MCMC) techniques to generate samples from the posterior distributions and in turn compute the Bayes estimators. The approximate Bayes estimators obtained under the assumptions of non-informative priors are compared with the maximum likelihood estimators using Monte Carlo simulations. A numerical example is also presented for illustrative purposes.
48

Tsai, Tzong-Ru, Hua Xin, Ya-Yen Fan, and Yuhlong Lio. "Bias-Corrected Maximum Likelihood Estimation and Bayesian Inference for the Process Performance Index Using Inverse Gaussian Distribution." Stats 5, no. 4 (2022): 1079–96. http://dx.doi.org/10.3390/stats5040064.

Abstract:
In this study, bias-corrected maximum likelihood (BCML), bootstrap BCML (B-BCML) and Bayesian estimation using Jeffreys' prior were proposed for the inverse Gaussian distribution in small-sample cases, to obtain the ML and Bayes estimators of the model parameters and of the process performance index based on the lower specification limit. Moreover, an approximate confidence interval and the highest posterior density interval of the process performance index were established via the delta method and Bayesian inference, respectively. To overcome the computational difficulty of sampling from the posterior distribution in Bayesian inference, the Markov chain Monte Carlo approach was used to implement the proposed Bayesian inference procedures. Monte Carlo simulations were conducted to evaluate the performance of the proposed BCML, B-BCML and Bayesian estimation methods. An example of the active repair times for an airborne communication transceiver is used for illustration.
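A sketch of the bootstrap flavor of the bias correction (the B-BCML route) for the inverse Gaussian MLEs; the analytic BCML correction used in the paper is different, and the sample size and parameter values here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.wald(2.0, 4.0, size=15)              # small IG(mu = 2, lambda = 4) sample

def mle(sample):                             # closed-form inverse Gaussian MLEs
    mu = sample.mean()
    lam = sample.size / np.sum(1 / sample - 1 / mu)
    return np.array([mu, lam])

theta_hat = mle(x)
boot = np.stack([mle(rng.choice(x, size=x.size, replace=True))
                 for _ in range(2000)])
theta_bc = 2 * theta_hat - boot.mean(axis=0)  # bootstrap bias correction
print("MLE:", theta_hat, "  bias-corrected:", theta_bc)
```

The correction 2*theta_hat - mean(bootstrap estimates) removes the first-order bootstrap estimate of the bias, which matters most in exactly the small-sample settings the study targets.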
49

EL-Sagheer, Rashad M., Mohamed S. Eliwa, Khaled M. Alqahtani, and Mahmoud EL-Morshedy. "Asymmetric Randomly Censored Mortality Distribution: Bayesian Framework and Parametric Bootstrap with Application to COVID-19 Data." Journal of Mathematics 2022 (March 22, 2022): 1–14. http://dx.doi.org/10.1155/2022/8300753.

Abstract:
This article investigates survival analysis under a randomly censored mortality distribution. From the frequentist perspective, we derive point estimates via maximum likelihood estimation, and approximate confidence intervals for the parameters are constructed from the asymptotic distribution of the maximum likelihood estimators. In addition, two parametric bootstrap methods are implemented to construct approximate confidence intervals for the unknown parameters. In the Bayesian framework, the Bayes estimates of the unknown parameters are evaluated by applying the Markov chain Monte Carlo technique, and highest posterior density credible intervals are also obtained. Bayes inference is considered under both symmetric and asymmetric loss functions. Finally, a Monte Carlo simulation is performed to observe the behavior of the proposed methods, and a real dataset of COVID-19 mortality rates is analyzed for illustration.
50

Birolleau, Alexandre, Gaël Poëtte, and Didier Lucor. "Adaptive Bayesian Inference for Discontinuous Inverse Problems, Application to Hyperbolic Conservation Laws." Communications in Computational Physics 16, no. 1 (2014): 1–34. http://dx.doi.org/10.4208/cicp.240113.071113a.

Abstract:
Various works from the literature have aimed at accelerating Bayesian inference in inverse problems. Stochastic spectral methods have recently been proposed as surrogate approximations of the forward uncertainty propagation model over the support of the prior distribution. These representations are efficient because they allow affordable simulation of a large number of samples from the posterior distribution. Unfortunately, they do not perform well when the forward model exhibits strong nonlinear behavior with respect to its input. In this work, we first relate the fast (exponential) L2-convergence of the forward approximation to the fast (exponential) convergence (in terms of Kullback-Leibler divergence) of the approximate posterior. In particular, we prove that in case the prior distribution is uniform, the posterior converges at least twice as fast as the forward model in those norms. The Bayesian inference strategy is developed in the framework of a stochastic spectral projection method. The predicted convergence rates are then demonstrated for simple nonlinear inverse problems of varying smoothness. We then propose an efficient numerical approach for the Bayesian solution of inverse problems presenting strongly nonlinear or discontinuous system responses. This comes with an improvement of the forward model, which is adaptively approximated by an iterative generalized Polynomial Chaos-based representation. The numerical approximations and predicted convergence rates of the former approach are compared to the new iterative numerical method for nonlinear time-dependent test cases of varying dimension and complexity, which are relevant regarding our hydrodynamics motivations and therefore regarding hyperbolic conservation laws and the apparition of discontinuities in finite time.
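For the smooth case, the surrogate-accelerated inference described above can be sketched with a Legendre fit, the gPC basis associated with a uniform prior on [-1, 1]; the toy forward model, noise level, and polynomial degree are assumptions, and it is precisely the discontinuous responses that such a single global fit mishandles and that the paper's adaptive iterative representation addresses.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(m):                     # "expensive" forward model (smooth toy case)
    return np.sin(np.pi * m) + 0.3 * m ** 2

# Surrogate: degree-8 Legendre expansion (gPC for a uniform prior on [-1, 1]),
# fitted once from a handful of forward solves.
nodes = np.linspace(-1, 1, 40)
coef = np.polynomial.legendre.legfit(nodes, forward(nodes), deg=8)

def surrogate(m):
    return np.polynomial.legendre.legval(m, coef)

d_obs, sigma = forward(0.4) + 0.05, 0.1      # one noisy observation
grid = np.linspace(-1, 1, 2001)              # posterior needs only the surrogate
log_post = -0.5 * ((d_obs - surrogate(grid)) / sigma) ** 2
post = np.exp(log_post - log_post.max())
post /= np.trapz(post, grid)                 # normalized posterior density on grid
```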