Academic literature on the topic 'Prior distribution and approximate posterior distribution'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Prior distribution and approximate posterior distribution.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Prior distribution and approximate posterior distribution"

1

Posselt, Derek J., Daniel Hodyss, and Craig H. Bishop. "Errors in Ensemble Kalman Smoother Estimates of Cloud Microphysical Parameters." Monthly Weather Review 142, no. 4 (2014): 1631–54. http://dx.doi.org/10.1175/mwr-d-13-00290.1.

Full text
Abstract:
If forecast or observation error distributions are non-Gaussian, the true posterior mean and covariance depend on the distribution of observation errors and the observed values. The posterior distribution of analysis errors obtained from ensemble Kalman filters and smoothers is independent of observed values. Hence, the error in ensemble Kalman smoother (EnKS) state estimates is closely linked to the sensitivity of the true posterior to observed values. Here a Markov chain Monte Carlo (MCMC) algorithm is used to document the dependence of the errors in EnKS-based estimates of cloud m
APA, Harvard, Vancouver, ISO, and other styles
2

Araveeporn, Autcha. "Bayesian Approach for Confidence Intervals of Variance on the Normal Distribution." International Journal of Mathematics and Mathematical Sciences 2022 (August 27, 2022): 1–15. http://dx.doi.org/10.1155/2022/8043260.

Full text
Abstract:
This research aims to compare estimating the confidence intervals of variance based on the normal distribution with the primary method and the Bayesian approach. The maximum likelihood is the well-known method to approximate variance, and the Chi-squared distribution performs the confidence interval. The central Bayesian approach forms the posterior distribution that makes the variance estimator, which depends on the probability and prior distributions. Most introductory prior information looks for the availability of the prior distribution, informative prior distribution, and noninformative p
APA, Harvard, Vancouver, ISO, and other styles
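For readers who want a concrete sense of the comparison this article describes, here is a minimal Python sketch (assuming NumPy and SciPy are available) that contrasts a classical chi-squared confidence interval for the variance of a normal sample with a Bayesian credible interval under a conjugate inverse-gamma prior, with the mean treated as known. The data, the prior parameters a0 and b0, and the assumed-known mean are invented for illustration and are not taken from the article, which studies several priors and interval methods.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    x = rng.normal(loc=10.0, scale=2.0, size=30)   # synthetic normal sample
    n = len(x)

    # Classical 95% confidence interval for the variance via the chi-squared distribution.
    s2 = x.var(ddof=1)
    lo = (n - 1) * s2 / stats.chi2.ppf(0.975, df=n - 1)
    hi = (n - 1) * s2 / stats.chi2.ppf(0.025, df=n - 1)

    # Bayesian 95% credible interval with a conjugate inverse-gamma prior on the variance
    # (mean assumed known here for simplicity).
    mu = 10.0                      # assumed-known mean
    a0, b0 = 2.0, 2.0              # illustrative inverse-gamma prior (shape, scale)
    a_n = a0 + n / 2
    b_n = b0 + 0.5 * np.sum((x - mu) ** 2)
    posterior = stats.invgamma(a_n, scale=b_n)
    cred = posterior.ppf([0.025, 0.975])

    print("chi-squared CI:", (lo, hi))
    print("posterior credible interval:", tuple(cred))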
3

Ba, Yuming, Jana de Wiljes, Dean S. Oliver, and Sebastian Reich. "Randomized maximum likelihood based posterior sampling." Computational Geosciences 26, no. 1 (2021): 217–39. http://dx.doi.org/10.1007/s10596-021-10100-y.

Full text
Abstract:
Minimization of a stochastic cost function is commonly used for approximate sampling in high-dimensional Bayesian inverse problems with Gaussian prior distributions and multimodal posterior distributions. The density of the samples generated by minimization is not the desired target density, unless the observation operator is linear, but the distribution of samples is useful as a proposal density for importance sampling or for Markov chain Monte Carlo methods. In this paper, we focus on applications to sampling from multimodal posterior distributions in high dimensions. We first show t
APA, Harvard, Vancouver, ISO, and other styles
4

Zhang, Jinwei, Hang Zhang, Mert Sabuncu, Pascal Spincemaille, Thanh Nguyen, and Yi Wang. "Probabilistic dipole inversion for adaptive quantitative susceptibility mapping." Machine Learning for Biomedical Imaging 1, MIDL 2020 (2021): 1–19. http://dx.doi.org/10.59275/j.melba.2021-bbf2.

Full text
Abstract:
A learning-based posterior distribution estimation method, Probabilistic Dipole Inversion (PDI), is proposed to solve the quantitative susceptibility mapping (QSM) inverse problem in MRI with uncertainty estimation. In PDI, a deep convolutional neural network (CNN) is used to represent the multivariate Gaussian distribution as the approximate posterior distribution of susceptibility given the input measured field. Such CNN is first trained on healthy subjects via posterior density estimation, where the training dataset contains samples from the true posterior distribution. Domain adaptations a
APA, Harvard, Vancouver, ISO, and other styles
5

Nakagawa, Tomoyuki, and Shintaro Hashimoto. "On Default Priors for Robust Bayesian Estimation with Divergences." Entropy 23, no. 1 (2020): 29. http://dx.doi.org/10.3390/e23010029.

Full text
Abstract:
This paper presents objective priors for robust Bayesian estimation against outliers based on divergences. The minimum γ-divergence estimator is well-known to work well in estimation against heavy contamination. Robust Bayesian methods using quasi-posterior distributions based on divergences have also been proposed in recent years. In the objective Bayesian framework, the selection of default prior distributions under such quasi-posterior distributions is an important problem. In this study, we provide some properties of reference and moment matching priors under the quasi-posterior dis
APA, Harvard, Vancouver, ISO, and other styles
6

Hong, Junhyeok, Kipum Kim, and Seong W. Kim. "Default Priors in a Zero-Inflated Poisson Distribution: Intrinsic Versus Integral Priors." Mathematics 13, no. 5 (2025): 773. https://doi.org/10.3390/math13050773.

Full text
Abstract:
Prior elicitation is an important issue in both subjective and objective Bayesian frameworks, where prior distributions impose certain information on parameters before data are observed. Caution is warranted when utilizing noninformative priors for hypothesis testing or model selection. Since noninformative priors are often improper, the Bayes factor, i.e., the ratio of two marginal distributions, is not properly determined due to unspecified constants contained in the Bayes factor. An adjusted Bayes factor using a data-splitting idea, which is called the intrinsic Bayes factor, can often be u
APA, Harvard, Vancouver, ISO, and other styles
7

Zoubeidi, Toufik. "Asymptotic approximations to the Bayes posterior risk." Journal of Applied Mathematics and Stochastic Analysis 3, no. 2 (1990): 99–116. http://dx.doi.org/10.1155/s1048953390000090.

Full text
Abstract:
Suppose that, given ω = (ω₁, ω₂) ∈ ℝ², X₁, X₂, … and Y₁, Y₂, … are independent random variables and their respective distribution functions Gω₁ and Gω₂ belong to a one parameter exponential family of distributions. We derive approximations to the posterior probabilities of ω lying in closed convex subsets of the parameter space under a general prior density. Using this, we then approximate the Bayes posterior risk for testing the hypotheses H₀: ω ∈ Ω₁ versus H₁: ω ∈ Ω₂ using a zero-one loss function, where Ω₁ and Ω₂ are disjoint closed convex subsets of the parameter space.
APA, Harvard, Vancouver, ISO, and other styles
8

Percival, Will J., Oliver Friedrich, Elena Sellentin, and Alan Heavens. "Matching Bayesian and frequentist coverage probabilities when using an approximate data covariance matrix." Monthly Notices of the Royal Astronomical Society 510, no. 3 (2021): 3207–21. http://dx.doi.org/10.1093/mnras/stab3540.

Full text
Abstract:
Observational astrophysics consists of making inferences about the Universe by comparing data and models. The credible intervals placed on model parameters are often as important as the maximum a posteriori probability values, as the intervals indicate concordance or discordance between models and with measurements from other data. Intermediate statistics (e.g. the power spectrum) are usually measured and inferences are made by fitting models to these rather than the raw data, assuming that the likelihood for these statistics has multivariate Gaussian form. The covariance matrix used
APA, Harvard, Vancouver, ISO, and other styles
9

Geweke, John. "Priors for Macroeconomic Time Series and Their Application." Econometric Theory 10, no. 3-4 (1994): 609–32. http://dx.doi.org/10.1017/s0266466600008690.

Full text
Abstract:
This paper takes up Bayesian inference in a general trend stationary model for macroeconomic time series with independent Student-t disturbances. The model is linear in the data, but nonlinear in parameters. An informative but nonconjugate family of prior distributions for the parameters is introduced, indexed by a single parameter that can be readily elicited. The main technical contribution is the construction of posterior moments, densities, and odds ratios by using a six-step Gibbs sampler. Mappings from the index parameter of the family of prior distributions to posterior moments, densities
APA, Harvard, Vancouver, ISO, and other styles
10

Fudenberg, Drew, Giacomo Lanzani, and Philipp Strack. "Pathwise concentration bounds for Bayesian beliefs." Theoretical Economics 18, no. 4 (2023): 1585–622. http://dx.doi.org/10.3982/te5206.

Full text
Abstract:
We show that Bayesian posteriors concentrate on the outcome distributions that approximately minimize the Kullback–Leibler divergence from the empirical distribution, uniformly over sample paths, even when the prior does not have full support. This generalizes Diaconis and Freedman's (1990) uniform convergence result to, e.g., priors that have finite support, are constrained by independence assumptions, or have a parametric form that cannot match some probability distributions. The concentration result lets us provide a rate of convergence for Berk's (1966) result on the limiting behavior of p
APA, Harvard, Vancouver, ISO, and other styles
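The concentration result described in this abstract can be illustrated with a toy calculation: a prior with finite support over a few candidate outcome distributions, none of which matches the data-generating process, and a posterior that piles up on the candidate closest in Kullback–Leibler divergence to the empirical distribution. The Python sketch below (NumPy only) is a stylized illustration, not the paper's construction; the candidate distributions, the prior, and the sample size are invented.

    import numpy as np

    rng = np.random.default_rng(1)

    # Three candidate distributions over outcomes {0, 1, 2}; the prior has finite support.
    candidates = np.array([[0.6, 0.3, 0.1],
                           [0.3, 0.4, 0.3],
                           [0.1, 0.3, 0.6]])
    prior = np.array([1 / 3, 1 / 3, 1 / 3])

    # Data come from a distribution that none of the candidates matches exactly.
    truth = np.array([0.25, 0.45, 0.30])
    data = rng.choice(3, size=500, p=truth)
    counts = np.bincount(data, minlength=3)
    emp = counts / counts.sum()

    # Posterior over candidates: prior times likelihood, computed on the log scale.
    log_post = np.log(prior) + np.log(candidates) @ counts
    post = np.exp(log_post - log_post.max())
    post /= post.sum()

    # KL divergence from the empirical distribution to each candidate.
    kl = np.array([np.sum(emp * np.log(emp / q)) for q in candidates])

    print("posterior over candidates:", np.round(post, 4))
    print("KL(empirical || candidate):", np.round(kl, 4))
    # The candidate with the smallest KL divergence carries almost all posterior mass.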

Dissertations / Theses on the topic "Prior distribution and approximate posterior distribution"

1

Ozbozkurt, Pelin. "Bayesian Inference in ANOVA Models." PhD thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/3/12611532/index.pdf.

Full text
Abstract:
Estimation of location and scale parameters from a random sample of size n is of paramount importance in Statistics. An estimator is called fully efficient if it attains the Cramer-Rao minimum variance bound besides being unbiased. The method that yields such estimators, at any rate for large n, is the method of modified maximum likelihood estimation. Apparently, such estimators cannot be made more efficient by using sample based classical methods. That makes room for Bayesian method of estimation which engages prior distributions and likelihood functions. A formal combination of the prior kno
APA, Harvard, Vancouver, ISO, and other styles
2

Ruli, Erlis. "Recent Advances in Approximate Bayesian Computation Methods." Doctoral thesis, Università degli studi di Padova, 2014. http://hdl.handle.net/11577/3423529.

Full text
Abstract:
The Bayesian approach to statistical inference is fundamentally probabilistic. Exploiting the internal consistency of the probability framework, the posterior distribution extracts the relevant information in the data, and provides a complete and coherent summary of post-data uncertainty. However, summarising the posterior distribution often requires the calculation of awkward multidimensional integrals. A further complication with the Bayesian approach arises when the likelihood function is unavailable. In this respect, promising advances have been made by the theory of Approximate Bayesian Comp
APA, Harvard, Vancouver, ISO, and other styles
3

Hornik, Kurt, and Bettina Grün. "On standard conjugate families for natural exponential families with bounded natural parameter space." Elsevier, 2014. http://dx.doi.org/10.1016/j.jmva.2014.01.003.

Full text
Abstract:
Diaconis and Ylvisaker (1979) give necessary conditions for conjugate priors for distributions from the natural exponential family to be proper as well as to have the property of linear posterior expectation of the mean parameter of the family. Their conditions for propriety and linear posterior expectation are also sufficient if the natural parameter space is equal to the set of all d-dimensional real numbers. In this paper their results are extended to characterize when conjugate priors are proper if the natural parameter space is bounded. For the special case where the natural exponential
APA, Harvard, Vancouver, ISO, and other styles
4

Frühwirth-Schnatter, Sylvia. "On Fuzzy Bayesian Inference." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 1990. http://epub.wu.ac.at/384/1/document.pdf.

Full text
Abstract:
In the paper at hand we apply it to Bayesian statistics to obtain "Fuzzy Bayesian Inference". In the subsequent sections we will discuss a fuzzy-valued likelihood function, Bayes' theorem for both fuzzy data and fuzzy priors, a fuzzy Bayes' estimator, fuzzy predictive densities and distributions, and fuzzy H.P.D. regions. (author's abstract) Series: Forschungsberichte / Institut für Statistik
APA, Harvard, Vancouver, ISO, and other styles
5

Novotová, Simona. "Bayesovské přístupy ve stochastickém rezervování" [Bayesian approaches in stochastic reserving]. Master's thesis, 2014. http://www.nusl.cz/ntk/nusl-335061.

Full text
Abstract:
This master's thesis addresses the Bayesian approach to stochastic reserving, a problem widely discussed in the insurance industry. The text introduces the basic actuarial notation and terminology and explains Bayesian inference and estimation. The main part of the thesis is devoted to the description of particular Bayesian models, focusing on the derivation of estimators for the reserves and ultimate claims. The aim of the thesis is to show the practical uses of the models and the relations between them. For this purpose the methods are applied on a rea
APA, Harvard, Vancouver, ISO, and other styles
6

Tuyl, Frank Adrianus Wilhelmus Maria. "Estimation of the Binomial parameter: in defence of Bayes (1763)." Thesis, 2007. http://hdl.handle.net/1959.13/25730.

Full text
Abstract:
Research Doctorate - Doctor of Philosophy (PhD). Interval estimation of the Binomial parameter θ, representing the true probability of a success, is a problem of long standing in statistical inference. The landmark work is by Bayes (1763) who applied the uniform prior to derive the Beta posterior that is the normalised Binomial likelihood function. It is not well known that Bayes favoured this ‘noninformative’ prior as a result of considering the observable random variable x as opposed to the unknown parameter θ, which is an important difference. In this thesis we develop additional argument
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Prior distribution and approximate posterior distribution"

1

Mitros, John, Arjun Pakrashi, and Brian Mac Namee. "Ramifications of Approximate Posterior Inference for Bayesian Deep Learning in Adversarial and Out-of-Distribution Settings." In Computer Vision – ECCV 2020 Workshops. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-66415-2_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Badings, Thom S., Nils Jansen, Sebastian Junges, Marielle Stoelinga, and Matthias Volk. "Sampling-Based Verification of CTMCs with Uncertain Rates." In Computer Aided Verification. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-13188-2_2.

Full text
Abstract:
We employ uncertain parametric CTMCs with parametric transition rates and a prior on the parameter values. The prior encodes uncertainty about the actual transition rates, while the parameters allow dependencies between transition rates. Sampling the parameter values from the prior distribution then yields a standard CTMC, for which we may compute relevant reachability probabilities. We provide a principled solution, based on a technique called scenario-optimization, to the following problem: From a finite set of parameter samples and a user-specified confidence level, compute prediction regions on the reachability probabilities. The prediction regions should (with high probability) contain the reachability probabilities of a CTMC induced by any additional sample. To boost the scalability of the approach, we employ standard abstraction techniques and adapt our methodology to support approximate reachability probabilities. Experiments with various well-known benchmarks show the applicability of the approach.
APA, Harvard, Vancouver, ISO, and other styles
3

Reich, Sebastian. "Data Assimilation: A Dynamic Homotopy-Based Coupling Approach." In Mathematics of Planet Earth. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-40094-0_12.

Full text
Abstract:
Homotopy approaches to Bayesian inference have found widespread use especially if the Kullback–Leibler divergence between the prior and the posterior distribution is large. Here we extend one of these homotopy approaches to include an underlying stochastic diffusion process. The underlying mathematical problem is closely related to the Schrödinger bridge problem for given marginal distributions. We demonstrate that the proposed homotopy approach provides a computationally tractable approximation to the underlying bridge problem. In particular, our implementation builds upon the widely used ensemble Kalman filter methodology and extends it to Schrödinger bridge problems within the context of sequential data assimilation.
APA, Harvard, Vancouver, ISO, and other styles
4

Graziani, Rebecca. "Stochastic Population Forecasting: A Bayesian Approach Based on Evaluation by Experts." In Developments in Demographic Forecasting. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-42472-5_2.

Full text
Abstract:
We suggest a procedure for deriving expert based stochastic population forecasts within the Bayesian approach. According to the traditional and commonly used cohort-component model, the inputs of the forecasting procedures are the fertility and mortality age schedules along with the distribution of migrants by age. Age schedules and distributions are derived from summary indicators, such as total fertility rates, male and female life expectancy at birth, and male and female number of immigrants and emigrants. The joint distributions of all summary indicators are obtained based on evaluations by experts, elicited according to a conditional procedure that makes it possible to derive information on the centres of the indicators, their variability, their across-time correlations, and the correlations between the indicators. The forecasting method is based on a mixture model within the Supra-Bayesian approach that treats the evaluations by experts as data and the summary indicators as parameters. The derived posterior distributions are used as forecast distributions of the summary indicators of interest. A Markov Chain Monte Carlo algorithm is designed to approximate such posterior distributions.
APA, Harvard, Vancouver, ISO, and other styles
5

Donovan, Therese M., and Ruth M. Mickey. "The Shark Attack Problem: The Gamma-Poisson Conjugate." In Bayesian Statistics for Beginners. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780198841296.003.0011.

Full text
Abstract:
This chapter introduces the gamma-Poisson conjugate. Many Bayesian analyses consider alternative parameter values as hypotheses. The prior distribution for an unknown parameter can be represented by a continuous probability density function when the number of hypotheses is infinite. There are special cases where a Bayesian prior probability distribution for an unknown parameter of interest can be quickly updated to a posterior distribution of the same form as the prior. In the “Shark Attack Problem,” a gamma distribution is used as the prior distribution of λ, the mean number of shark attacks in a given year. Poisson data are then collected to determine the number of attacks in a given year. The prior distribution is updated to the posterior distribution in light of this new information. In short, a gamma prior distribution + Poisson data → gamma posterior distribution. The gamma distribution is said to be “conjugate to” the Poisson distribution.
APA, Harvard, Vancouver, ISO, and other styles
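The gamma-Poisson update summarized in this abstract amounts to two additions, as the Python sketch below shows (using SciPy). The prior parameters a0, b0 and the yearly attack counts are invented placeholders, not the book's numbers; note that SciPy parameterizes the gamma distribution by shape and scale, so the rate b_n enters as scale=1/b_n.

    from scipy import stats

    # Illustrative gamma prior on the Poisson rate lambda (shape a0, rate b0).
    a0, b0 = 2.0, 1.0
    attacks = [5, 3, 4]            # hypothetical counts of attacks in three years

    # Conjugate update: gamma prior + Poisson data -> gamma posterior.
    a_n = a0 + sum(attacks)        # shape: a0 + sum of the counts
    b_n = b0 + len(attacks)        # rate:  b0 + number of observations

    posterior = stats.gamma(a_n, scale=1 / b_n)
    print("posterior mean of lambda:", posterior.mean())
    print("95% credible interval:", posterior.ppf([0.025, 0.975]))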
6

Moreau M., Mahood J., Moreau K., Berg D., Hill D., and Raso J. "Assessing the Impact of Pelvic Obliquity in Post-operative Neuromuscular Scoliosis." In Studies in Health Technology and Informatics. IOS Press, 2002. https://doi.org/10.3233/978-1-60750-935-6-481.

Full text
Abstract:
The goal of this pilot study was to explore the relationship between pelvic obliquity and patient pain, sitting tolerance, pressure sores, and function. Five neuromuscular patients who underwent spinal surgery 6-26 weeks prior to assessment took part in this on-going study (4F; 1M); age at surgery (14.6 ± 2.6 years). Pelvic obliquity was measured from pre- and post-operative anterior-posterior radiographs. A force-sensing pad with a grid of sensors was placed on a flat surface and the weight distribution pattern was recorded. The pressures were divided into left and right sides and peak levels were noted on each side. The parents or caregivers completed a questionnaire on their child's pain, sitting tolerance, pressure sores, and functional abilities. Pelvic obliquity was reduced after surgery by approximately 50% depending on the method used to assess pelvic obliquity. The major curve was reduced from 64° (10°) to 39° (10°). Post operatively, the average pressure (left/right side) ranged from 1.2 to 2.0 (average 1.6). The peak pressure ratio ranged from 1.1 to 1.9 (average 1.4). The ratio of left/right pressure correlated with improvement in pelvic obliquity (r²=0.9). Pain was moderate/severe in the 2 patients with the least correction as measured with the Cobb angle from surgery; both improved following surgery. Two patients suffered pressure sores pre-operatively and one post-operatively. Only 3/5 felt sitting endurance had increased. All parents felt their child sat straighter after surgery. The outcome measures of pain, pressure sores, sitting tolerance, and function were not well related to the amount of pelvic obliquity. More candidates and a longer follow-up may shed light on the many relationships.
APA, Harvard, Vancouver, ISO, and other styles
7

Donovan, Therese M., and Ruth M. Mickey. "The White House Problem: The Beta-Binomial Conjugate." In Bayesian Statistics for Beginners. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780198841296.003.0010.

Full text
Abstract:
This chapter introduces the beta-binomial conjugate. There are special cases where a Bayesian prior probability distribution for an unknown parameter of interest can be quickly updated to a posterior distribution of the same form as the prior. In the “White House Problem,” a beta distribution is used to set the priors for all hypotheses of p, the probability that a famous person can get into the White House without an invitation. Binomial data are then collected, and provide the number of times a famous person gained entry out of a fixed number of attempts. The prior distribution is updated to a posterior distribution (also a beta distribution) in light of this new information. In short, a beta prior distribution for the unknown parameter + binomial data → beta posterior distribution for the unknown parameter, p. The beta distribution is said to be “conjugate to” the binomial distribution.
APA, Harvard, Vancouver, ISO, and other styles
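Like the gamma-Poisson sketch above, the beta-binomial update described in this abstract is a one-line computation. The Python sketch below (using SciPy) is a generic illustration; the prior parameters a0, b0 and the counts k and n are invented, not taken from the book.

    from scipy import stats

    # Illustrative beta prior on p, the probability of gaining entry.
    a0, b0 = 1.0, 1.0              # uniform prior, an arbitrary choice
    k, n = 2, 10                   # hypothetical data: 2 successes in 10 attempts

    # Conjugate update: beta prior + binomial data -> beta posterior.
    posterior = stats.beta(a0 + k, b0 + n - k)

    print("posterior mean of p:", posterior.mean())
    print("95% credible interval:", posterior.ppf([0.025, 0.975]))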
8

Neuhäuser, Markus, and Graeme D. Ruxton. "Bayesian analysis." In The Statistical Analysis of Small Data Sets. Oxford University PressOxford, 2024. http://dx.doi.org/10.1093/oso/9780198872979.003.0011.

Full text
Abstract:
The book finishes with a chapter on Bayesian statistics. In a Bayesian analysis a prior distribution and a sample of data are combined to provide a refined posterior distribution. Hence, the posterior distribution can be seen as a fusion between the prior distribution and the observations. In this chapter, ways to obtain the posterior distribution but also some general comments on the particular issues of small samples for Bayesian approaches (particularly enhanced importance on selection of an appropriate prior) are discussed, and entry points into the relevant literatures are given. The highest density interval and the region of practical equivalence are introduced as useful concepts.
APA, Harvard, Vancouver, ISO, and other styles
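The highest density interval mentioned in this abstract can be approximated directly from posterior draws by finding the shortest interval containing the desired mass. The Python sketch below is a generic illustration, not code from the book; the simulated gamma-shaped "posterior" and the 95% mass are arbitrary choices.

    import numpy as np

    def hdi(samples, mass=0.95):
        # Shortest interval containing `mass` of the sorted posterior samples.
        s = np.sort(samples)
        m = int(np.ceil(mass * len(s)))
        widths = s[m - 1:] - s[:len(s) - m + 1]
        start = np.argmin(widths)
        return s[start], s[start + m - 1]

    rng = np.random.default_rng(2)
    draws = rng.gamma(shape=3.0, scale=1.0, size=10_000)   # a skewed posterior, for illustration
    print("95% HDI:", hdi(draws))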
9

Donovan, Therese M., and Ruth M. Mickey. "The Maple Syrup Problem: The Normal-Normal Conjugate." In Bayesian Statistics for Beginners. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780198841296.003.0012.

Full text
Abstract:
In this chapter, Bayesian methods are used to estimate the two parameters that identify a normal distribution, μ and σ. Many Bayesian analyses consider alternative parameter values as hypotheses. The prior distribution for an unknown parameter can be represented by a continuous probability density function when the number of hypotheses is infinite. In the “Maple Syrup Problem,” a normal distribution is used as the prior distribution of μ, the mean number of millions of gallons of maple syrup produced in Vermont in a year. The amount of syrup produced in multiple years is determined, and assumed to follow a normal distribution with known σ. The prior distribution is updated to the posterior distribution in light of this new information. In short, a normal prior distribution + normally distributed data → normal posterior distribution.
APA, Harvard, Vancouver, ISO, and other styles
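The normal-normal update described here combines the prior and the data through their precisions. The Python sketch below shows the closed-form posterior mean and variance; the prior parameters, the assumed-known σ, and the production figures are invented for illustration and are not the book's numbers.

    import numpy as np

    # Illustrative normal prior on the mean mu (prior mean m0, prior sd t0); sigma is assumed known.
    m0, t0 = 2.0, 0.5                     # hypothetical prior, in millions of gallons
    sigma = 0.3                           # assumed-known data standard deviation
    y = np.array([1.9, 2.2, 2.1, 1.8])    # made-up yearly production figures

    # Conjugate update: normal prior + normal data (known sigma) -> normal posterior.
    n = len(y)
    post_var = 1.0 / (1.0 / t0**2 + n / sigma**2)
    post_mean = post_var * (m0 / t0**2 + y.sum() / sigma**2)

    print("posterior mean:", post_mean)
    print("posterior sd:", np.sqrt(post_var))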
10

Barron, Andrew R. "Information-theoretic Characterization of Bayes Performance and the Choice of Priors in Parametric and Nonparametric Problems." In Bayesian Statistics 6. Oxford University PressOxford, 1999. http://dx.doi.org/10.1093/oso/9780198504856.003.0002.

Full text
Abstract:
The risk of Bayes procedures (predictive densities) with Kullback-Leibler loss and the asymptotics of the posterior distribution are examined for densities with Kullback neighborhoods assigned positive prior probability. Necessary and sufficient conditions for consistency of the posterior distribution are proved. Examples reveal that the posterior distribution can behave quite peculiarly, while, in contrast, the cumulative risk of predictive densities has nice properties that can be used in advance of observing the data to help choose the prior.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Prior distribution and approximate posterior distribution"

1

Rust, Steven W., Patrick H. Vieth, Elden R. Johnson, and Michael L. Cox. "Quantitative Corrosion Risk Assessment Based on Pig Data." In CORROSION 1996. NACE International, 1996. https://doi.org/10.5006/c1996-96051.

Full text
Abstract:
In-line inspection of underground pipelines for corrosion damage using “smart” pigs is now quite common. With the advent of high-resolution pigs that can identify large numbers of potential anomalies, more sophisticated methodologies are required for interpreting the results of an in-line inspection. Of particular interest is the probability that the depth of corrosion in a particular location exceeds a critical depth defined by the local pipe characteristics and maximum operating pressure. In this paper, a Bayesian statistical methodology for determining the probability that corrosio
APA, Harvard, Vancouver, ISO, and other styles
2

Gao, Qinghe, Daniel C. Miedema, Yidong Zhao, Jana M. Weber, Qian Tao, and Artur M. Schweidtmann. "Bayesian uncertainty quantification of graph neural networks using stochastic gradient Hamiltonian Monte Carlo." In The 35th European Symposium on Computer Aided Process Engineering. PSE Press, 2025. https://doi.org/10.69997/sct.111298.

Full text
Abstract:
Graph neural networks (GNNs) have proven state-of-the-art performance in molecular property prediction tasks. However, a significant challenge with GNNs is the reliability of their predictions, particularly in critical domains where quantifying model confidence is essential. Therefore, assessing uncertainty in GNN predictions is crucial to improving their robustness. Existing uncertainty quantification methods, such as Deep ensembles and Monte Carlo Dropout, have been applied to GNNs with some success, but these methods are limited to approximate the full posterior distribution. In this work,
APA, Harvard, Vancouver, ISO, and other styles
3

Lian, Rongzhong, Min Xie, Fan Wang, Jinhua Peng, and Hua Wu. "Learning to Select Knowledge for Response Generation in Dialog Systems." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/706.

Full text
Abstract:
End-to-end neural models for intelligent dialogue systems suffer from the problem of generating uninformative responses. Various methods were proposed to generate more informative responses by leveraging external knowledge. However, few previous works have focused on selecting appropriate knowledge in the learning process. The inappropriate selection of knowledge could prohibit the model from learning to make full use of the knowledge. Motivated by this, we propose an end-to-end neural model which employs a novel knowledge selection mechanism where both prior and posterior distributions over kno
APA, Harvard, Vancouver, ISO, and other styles
4

Shen, Gehui, Xi Chen, and Zhihong Deng. "Variational Learning of Bayesian Neural Networks via Bayesian Dark Knowledge." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/282.

Full text
Abstract:
Bayesian neural networks (BNNs) have received more and more attention because they are capable of modeling epistemic uncertainty which is hard for conventional neural networks. Markov chain Monte Carlo (MCMC) methods and variational inference (VI) are two mainstream methods for Bayesian deep learning. The former is effective but its storage cost is prohibitive since it has to save many samples of neural network parameters. The latter method is more time and space efficient, however the approximate variational posterior limits its performance. In this paper, we aim to combine the advantages of
APA, Harvard, Vancouver, ISO, and other styles
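The variational side of the trade-off discussed in this abstract rests on fitting a tractable approximate posterior by maximizing the ELBO. The PyTorch sketch below fits a mean-field Gaussian to the posterior of a single normal mean using the reparameterization trick and compares it with the exact conjugate posterior; it is a generic one-parameter illustration of variational inference, not the paper's Bayesian Dark Knowledge method, and all numbers are invented.

    import torch

    torch.manual_seed(0)

    # Toy model: x_i ~ N(theta, sigma^2) with sigma known, prior theta ~ N(0, 1).
    sigma = 1.0
    x = torch.randn(20) + 2.0                      # synthetic data centred near 2
    n = x.numel()

    # Exact conjugate posterior, for comparison.
    exact_var = 1.0 / (1.0 + n / sigma**2)
    exact_mean = exact_var * x.sum() / sigma**2

    # Mean-field Gaussian approximate posterior q(theta) = N(mu, exp(log_std)^2).
    mu = torch.zeros(1, requires_grad=True)
    log_std = torch.zeros(1, requires_grad=True)
    opt = torch.optim.Adam([mu, log_std], lr=0.05)

    for step in range(2000):
        opt.zero_grad()
        eps = torch.randn(64)
        theta = mu + eps * log_std.exp()           # reparameterization trick
        log_lik = (-0.5 * ((x[None, :] - theta[:, None]) / sigma) ** 2).sum(dim=1)
        log_prior = -0.5 * theta ** 2
        log_q = -0.5 * ((theta - mu) / log_std.exp()) ** 2 - log_std
        elbo = (log_lik + log_prior - log_q).mean()   # additive constants dropped
        (-elbo).backward()
        opt.step()

    print("variational mean / exact mean:", mu.item(), exact_mean.item())
    print("variational sd   / exact sd:  ", log_std.exp().item(), exact_var**0.5)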
5

Zhang, Yunhao, Junchi Yan, Xiaolu Zhang, Jun Zhou, and Xiaokang Yang. "Learning Mixture of Neural Temporal Point Processes for Multi-dimensional Event Sequence Clustering." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/523.

Full text
Abstract:
Multi-dimensional event sequence clustering applies to many scenarios e.g. e-Commerce and electronic health. Traditional clustering models fail to characterize complex real-world processes due to the strong parametric assumption. While Neural Temporal Point Processes (NTPPs) mainly focus on modeling similar sequences instead of clustering. To fill the gap, we propose Mixture of Neural Temporal Point Processes (NTPP-MIX), a general framework that can utilize many existing NTPPs for multi-dimensional event sequence clustering. In NTPP-MIX, the prior distribution of coefficients for cluster assig
APA, Harvard, Vancouver, ISO, and other styles
6

Chou, Yi, and Sriram Sankaranarayanan. "Bayesian Parameter Estimation for Nonlinear Dynamics Using Sensitivity Analysis." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/791.

Full text
Abstract:
We investigate approximate Bayesian inference techniques for nonlinear systems described by ordinary differential equation (ODE) models. In particular, the approximations will be based on set-valued reachability analysis approaches, yielding approximate models for the posterior distribution. Nonlinear ODEs are widely used to mathematically describe physical and biological models. However, these models are often described by parameters that are not directly measurable and have an impact on the system behaviors. Often, noisy measurement data combined with physical/biological intuition serve as t
APA, Harvard, Vancouver, ISO, and other styles
7

Zhang, Yunhao, and Junchi Yan. "Neural Relation Inference for Multi-dimensional Temporal Point Processes via Message Passing Graph." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/469.

Full text
Abstract:
Relation discovery for multi-dimensional temporal point processes (MTPP) has received increasing interest for its importance in prediction and interpretability of the underlying dynamics. Traditional statistical MTPP models like Hawkes Process have difficulty in capturing complex relation due to their limited parametric form of the intensity function. While recent neural-network-based models suffer poor interpretability. In this paper, we propose a neural relation inference model namely TPP-NRI. Given MTPP data, it adopts a variational inference framework to model the posterior relation of MTP
APA, Harvard, Vancouver, ISO, and other styles
8

Sun, Yinbo, Lintao Ma, Yu Liu, et al. "Memory Augmented State Space Model for Time Series Forecasting." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/479.

Full text
Abstract:
State space model (SSM) provides a general and flexible forecasting framework for time series. Conventional SSM with fixed-order Markovian assumption often falls short in handling the long-range temporal dependencies and/or highly non-linear correlation in time-series data, which is crucial for accurate forecasting. To this end, we present External Memory Augmented State Space Model (EMSSM) within the sequential Monte Carlo (SMC) framework. Unlike the common fixed-order Markovian SSM, our model features an external memory system, in which we store informative latent state experience, whereb
APA, Harvard, Vancouver, ISO, and other styles
9

IGEA, FELIPE, MANOLIS N. CHATZIS, and ALICE CICIRELLO. "STRUCTURAL MODEL UPDATING USING VARIATIONAL INFERENCE." In Structural Health Monitoring 2021. Destech Publications, Inc., 2022. http://dx.doi.org/10.12783/shm2021/36282.

Full text
Abstract:
Monte Carlo sampling approaches are frequently used for probabilistic model updating of physics-based models under parametric uncertainty due to their high accuracy. The model updating framework produces a model that represents the real system more accurately than the prior knowledge or assumptions. This statistically updated model may prove useful if Structural Health Monitoring (SHM) techniques are to be applied. However, the updating of the models requires the use of a high number of samples, implying a high computational cost. Another additional disadvantage of these methods is that most o
APA, Harvard, Vancouver, ISO, and other styles
10

Mishra, Mayank, Dhiraj Madan, Gaurav Pandey, and Danish Contractor. "Variational Learning for Unsupervised Knowledge Grounded Dialogs." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/597.

Full text
Abstract:
Recent methods for knowledge grounded dialogs generate responses by incorporating information from an external textual document. These methods do not require the exact document to be known during training and rely on the use of a retrieval system to fetch relevant documents from a large index. The documents used to generate the responses are modeled as latent variables whose prior probabilities need to be estimated. Models such as RAG, marginalize the document probabilities over the documents retrieved from the index to define the log-likelihood loss function which is optimized end-to-end. In
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Prior distribution and approximate posterior distribution"

1

Read, Matthew, and Dan Zhu. Fast Posterior Sampling in Tightly Identified SVARs Using 'Soft' Sign Restrictions. Reserve Bank of Australia, 2025. https://doi.org/10.47688/rdp2025-03.

Full text
Abstract:
We propose algorithms for conducting Bayesian inference in structural vector autoregressions identified using sign restrictions. The key feature of our approach is a sampling step based on 'soft' sign restrictions. This step draws from a target density that smoothly penalises parameter values violating the restrictions, facilitating the use of computationally efficient Markov chain Monte Carlo sampling algorithms. An importance-sampling step yields draws from the desired distribution conditional on the 'hard' sign restrictions. Relative to standard accept-reject sampling, the method substantia
APA, Harvard, Vancouver, ISO, and other styles
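The two-stage logic sketched in this abstract (sample under a smooth penalty, then reweight to recover the 'hard' restriction) can be illustrated in one dimension. The Python sketch below is not the paper's SVAR algorithm: it uses a standard normal as a stand-in for the unrestricted posterior, a sigmoid as the soft penalty for the restriction theta >= 0, accept/reject in place of the paper's MCMC sampling step, and a self-normalized importance-sampling correction to recover a moment under the hard restriction. The smoothness parameter lam and the target quantity E[theta | theta >= 0] = sqrt(2/pi) are arbitrary illustrative choices that provide an exact answer to check against.

    import numpy as np

    rng = np.random.default_rng(3)
    lam = 0.2                                   # smoothness of the soft penalty

    def soft(theta):
        # Smooth penalty approximating the indicator of theta >= 0.
        return 1.0 / (1.0 + np.exp(-theta / lam))

    # Step 1: draws whose density is proportional to N(0,1) times the soft penalty,
    # obtained here by accepting each draw with probability soft(theta).
    theta = rng.standard_normal(200_000)
    keep = rng.uniform(size=theta.size) < soft(theta)
    draws = theta[keep]

    # Step 2: importance-sampling correction from the soft to the hard restriction.
    w = (draws >= 0).astype(float) / soft(draws)
    w /= w.sum()

    print("estimate of E[theta | theta >= 0]:", np.sum(w * draws))
    print("exact value sqrt(2/pi):           ", np.sqrt(2 / np.pi))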