Academic literature on the topic 'Approximate posterior distribution'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Approximate posterior distribution.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Approximate posterior distribution"

1

Asensio Ramos, A., C. J. Díaz Baso, and O. Kochukhov. "Approximate Bayesian neural Doppler imaging." Astronomy & Astrophysics 658 (February 2022): A162. http://dx.doi.org/10.1051/0004-6361/202142027.

Full text
Abstract:
Aims. The non-uniform surface temperature distribution of rotating active stars is routinely mapped with the Doppler imaging technique. Inhomogeneities in the surface produce features in high-resolution spectroscopic observations that shift in wavelength because of the Doppler effect, depending on their position on the visible hemisphere. The inversion problem has been systematically solved using maximum a posteriori regularized methods assuming smoothness or maximum entropy. Our aim in this work is to solve the full Bayesian inference problem by providing access to the posterior distribution of the surface temperature in the star compatible with the observations. Methods. We use amortized neural posterior estimation to produce a model that approximates the high-dimensional posterior distribution for spectroscopic observations of selected spectral ranges sampled at arbitrary rotation phases. The posterior distribution is approximated with conditional normalizing flows, which are flexible, tractable, and easy-to-sample approximations to arbitrary distributions. When conditioned on the spectroscopic observations, these normalizing flows provide a very efficient way of obtaining samples from the posterior distribution. The conditioning on observations is achieved through the use of Transformer encoders, which can deal with arbitrary wavelength sampling and rotation phases. Results. Our model can produce thousands of posterior samples per second, each one accompanied by an estimation of the log-probability. Our exhaustive validation of the model for very high-signal-to-noise observations shows that it correctly approximates the posterior, albeit with some overestimation of the broadening. We apply the model to the moderately fast rotator II Peg, producing the first Bayesian map of its temperature inhomogeneities. We conclude that conditional normalizing flows are a very promising tool for carrying out approximate Bayesian inference in more complex problems in stellar physics, such as constraining the magnetic properties using polarimetry.
APA, Harvard, Vancouver, ISO, and other styles
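The entry above describes drawing posterior samples from a conditional normalizing flow amortized over observations. As a rough, hypothetical illustration of that idea (not the paper's Transformer-conditioned flow), the following NumPy sketch uses a single conditional affine transform; the stand-in functions `shift_net` and `log_scale_net` play the role of trained conditioning networks and are pure assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for trained conditioning networks (assumptions, not from the paper):
# they map an observation vector to the shift and log-scale of an affine flow.
def shift_net(obs):
    return 0.5 * obs.mean() * np.ones(2)

def log_scale_net(obs):
    return -0.5 * np.ones(2)

def sample_posterior(obs, n_samples):
    """Draw samples and log-probabilities from the conditional affine flow."""
    z = rng.standard_normal((n_samples, 2))           # base N(0, I) samples
    shift, log_scale = shift_net(obs), log_scale_net(obs)
    theta = shift + np.exp(log_scale) * z             # invertible affine map
    # Change of variables: log q(theta | obs) = log N(z; 0, I) - sum(log_scale).
    log_q = (-0.5 * (z ** 2).sum(axis=1)
             - 0.5 * z.shape[1] * np.log(2 * np.pi)
             - log_scale.sum())
    return theta, log_q

obs = rng.standard_normal(16)                         # toy stand-in for an observed spectrum
samples, log_probs = sample_posterior(obs, 1000)
print(samples.mean(axis=0), log_probs[:3])
```

A real flow would stack many learned invertible layers, but the sampling pattern and the change-of-variables log-probability are the same.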
2

Karabatsos, George. "Copula Approximate Bayesian Computation Using Distribution Random Forests." Stats 7, no. 3 (2024): 1002–50. http://dx.doi.org/10.3390/stats7030061.

Full text
Abstract:
Ongoing modern computational advancements continue to make it easier to collect increasingly large and complex datasets, which can often only be realistically analyzed using models defined by intractable likelihood functions. This Stats invited feature article introduces and provides an extensive simulation study of a new approximate Bayesian computation (ABC) framework for estimating the posterior distribution and the maximum likelihood estimate (MLE) of the parameters of models defined by intractable likelihoods, that unifies and extends previous ABC methods proposed separately. This framework, copulaABCdrf, aims to accurately estimate and describe the possibly skewed and high-dimensional posterior distribution by a novel multivariate copula-based meta-t distribution based on univariate marginal posterior distributions that can be accurately estimated by distribution random forests (drf), while performing automatic summary statistics (covariates) selection, based on robustly estimated copula dependence parameters. The copulaABCdrf framework also provides a novel multivariate mode estimator to perform MLE and posterior mode estimation and an optional step to perform model selection from a given set of models using posterior probabilities estimated by drf. The posterior distribution estimation accuracy of the ABC framework is illustrated and compared with previous standard ABC methods through several simulation studies involving low- and high-dimensional models with computable posterior distributions, which are either unimodal, skewed, or multimodal; and exponential random graph and mechanistic network models, each defined by an intractable likelihood from which it is costly to simulate large network datasets. This paper also proposes and studies a new solution to the simulation cost problem in ABC involving the posterior estimation of parameters from datasets simulated from the given model that are smaller compared to the potentially large size of the dataset being analyzed. This proposal is motivated by the fact that, for many models defined by intractable likelihoods, such as the network models when they are applied to analyze massive networks, the repeated simulation of large datasets (networks) for posterior-based parameter estimation can be too computationally costly and vastly slow down or prohibit the use of standard ABC methods. The copulaABCdrf framework and standard ABC methods are further illustrated through analyses of large real-life networks of sizes ranging between 28,000 and 65.6 million nodes (between 3 million and 1.8 billion edges), including a large multilayer network with weighted directed edges. The results of the simulation studies show that, in settings where the true posterior distribution is not highly multimodal, copulaABCdrf usually produced similar point estimates from the posterior distribution for low-dimensional parametric models as previous ABC methods, but the copula-based method can produce more accurate estimates from the posterior distribution for high-dimensional models, and, in both dimensionality cases, usually produced more accurate estimates of univariate marginal posterior distributions of parameters. Also, posterior estimation accuracy was usually improved when pre-selecting the important summary statistics using drf compared to ABC employing no pre-selection of the subset of important summaries. For all ABC methods studied, accurate estimation of a highly multimodal posterior distribution was challenging. 
In light of the results of all the simulation studies, this article concludes by discussing how the copulaABCdrf framework can be improved for future research.
APA, Harvard, Vancouver, ISO, and other styles
3

Posselt, Derek J., Daniel Hodyss, and Craig H. Bishop. "Errors in Ensemble Kalman Smoother Estimates of Cloud Microphysical Parameters." Monthly Weather Review 142, no. 4 (2014): 1631–54. http://dx.doi.org/10.1175/mwr-d-13-00290.1.

Full text
Abstract:
Abstract If forecast or observation error distributions are non-Gaussian, the true posterior mean and covariance depend on the distribution of observation errors and the observed values. The posterior distribution of analysis errors obtained from ensemble Kalman filters and smoothers is independent of observed values. Hence, the error in ensemble Kalman smoother (EnKS) state estimates is closely linked to the sensitivity of the true posterior to observed values. Here a Markov chain Monte Carlo (MCMC) algorithm is used to document the dependence of the errors in EnKS-based estimates of cloud microphysical parameters on observed values. It is shown that EnKS analysis distributions are grossly inaccurate for nonnegative microphysical parameters when parameter values are close to zero. Furthermore, numerical analysis is presented that shows that, by design, the posterior distributions given by EnKS and even nonlinear extensions of these smoothers approximate the average of all possible posterior analysis distributions associated with all possible observations given the prior. Multiple runs of the MCMC are made to approximate this distribution. This empirically derived average of Bayesian posterior analysis errors is shown to be qualitatively similar to the EnKS posterior. In this way, it is demonstrated that, in the presence of nonlinearity, EnKS algorithms do not estimate the true posterior error distribution given the specific values of the observations. Instead, they produce an error distribution that is consistent with an average of the true posterior variance, weighted by the probability of obtaining each possible observation. This seemingly subtle distinction gives rise to fundamental differences between the approximate EnKS posterior and the true Bayesian posterior distribution.
APA, Harvard, Vancouver, ISO, and other styles
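For readers unfamiliar with the ensemble update discussed above, here is a minimal stochastic ensemble Kalman analysis step for a scalar parameter with a nonlinear observation operator, written in NumPy under assumed toy settings (the prior, observation, and operator are not from the paper). Because the update uses only ensemble covariances, the analysis ensemble can drift negative for a nonnegative parameter near zero, one facet of the behaviour the abstract documents.

```python
import numpy as np

rng = np.random.default_rng(1)

def enkf_update(theta_ens, obs, obs_err_sd, h):
    """One stochastic EnKF analysis step for a scalar parameter."""
    y_ens = h(theta_ens)                              # predicted observations
    cov_ty = np.cov(theta_ens, y_ens)[0, 1]           # parameter-observation covariance
    var_y = y_ens.var(ddof=1) + obs_err_sd ** 2
    gain = cov_ty / var_y                             # Kalman gain from ensemble statistics
    perturbed_obs = obs + obs_err_sd * rng.standard_normal(theta_ens.size)
    return theta_ens + gain * (perturbed_obs - y_ens)

prior = np.abs(rng.normal(0.2, 0.3, size=500))        # nonnegative toy "microphysical" prior
posterior = enkf_update(prior, obs=0.05, obs_err_sd=0.02, h=lambda t: t ** 2)
print(posterior.mean(), posterior.std(), (posterior < 0).mean())
```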
4

Lele, Subhash R., C. George Glen, and José Miguel Ponciano. "Practical Consequences of the Bias in the Laplace Approximation to Marginal Likelihood for Hierarchical Models." Entropy 27, no. 3 (2025): 289. https://doi.org/10.3390/e27030289.

Full text
Abstract:
Due to the high dimensional integration over latent variables, computing marginal likelihood and posterior distributions for the parameters of a general hierarchical model is a difficult task. The Markov Chain Monte Carlo (MCMC) algorithms are commonly used to approximate the posterior distributions. These algorithms, though effective, are computationally intensive and can be slow for large, complex models. As an alternative to the MCMC approach, the Laplace approximation (LA) has been successfully used to obtain fast and accurate approximations to the posterior mean and other derived quantities related to the posterior distribution. In the last couple of decades, LA has also been used to approximate the marginal likelihood function and the posterior distribution. In this paper, we show that the bias in the Laplace approximation to the marginal likelihood has substantial practical consequences.
APA, Harvard, Vancouver, ISO, and other styles
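As a concrete reminder of the approximation discussed above, the sketch below computes a Laplace approximation to the marginal likelihood of a toy one-parameter model with SciPy; the Gaussian likelihood and prior are illustrative assumptions, not the hierarchical models studied in the paper (for this conjugate toy case the approximation happens to be exact, which makes it a convenient sanity check).

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

y = np.array([1.2, 0.7, 1.9, 1.1])                    # toy data (assumed)

def log_joint(theta):
    # Gaussian likelihood with unit variance and a N(0, 2^2) prior on theta.
    return norm.logpdf(y, loc=theta, scale=1.0).sum() + norm.logpdf(theta, 0.0, 2.0)

theta_hat = minimize_scalar(lambda t: -log_joint(t)).x  # posterior mode (expansion point)

# Curvature of the negative log joint at the mode, by central finite differences.
eps = 1e-4
neg = lambda t: -log_joint(t)
hess = (neg(theta_hat + eps) - 2.0 * neg(theta_hat) + neg(theta_hat - eps)) / eps ** 2

# Laplace approximation: log p(y) ≈ log p(y, theta_hat) + (1/2) log(2*pi) - (1/2) log H.
log_marginal = log_joint(theta_hat) + 0.5 * np.log(2 * np.pi) - 0.5 * np.log(hess)
print(theta_hat, log_marginal)
```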
5

Burr, Tom, and Alexei Skurikhin. "Selecting Summary Statistics in Approximate Bayesian Computation for Calibrating Stochastic Models." BioMed Research International 2013 (2013): 1–10. http://dx.doi.org/10.1155/2013/210646.

Full text
Abstract:
Approximate Bayesian computation (ABC) is an approach for using measurement data to calibrate stochastic computer models, which are common in biology applications. ABC is becoming the “go-to” option when the data and/or parameter dimension is large because it relies on user-chosen summary statistics rather than the full data and is therefore computationally feasible. One technical challenge with ABC is that the quality of the approximation to the posterior distribution of model parameters depends on the user-chosen summary statistics. In this paper, the user requirement to choose effective summary statistics in order to accurately estimate the posterior distribution of model parameters is investigated and illustrated by example, using a model and corresponding real data of mitochondrial DNA population dynamics. We show that for some choices of summary statistics, the posterior distribution of model parameters is closely approximated and for other choices of summary statistics, the posterior distribution is not closely approximated. A strategy to choose effective summary statistics is suggested in cases where the stochastic computer model can be run at many trial parameter settings, as in the example.
APA, Harvard, Vancouver, ISO, and other styles
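Because the abstract turns on how the choice of summary statistics drives the quality of the ABC posterior, here is a minimal rejection-ABC sketch in NumPy; the stochastic model, summary statistics, prior, and tolerance are toy assumptions rather than the mitochondrial DNA model of the paper. Swapping `summary` for a less informative statistic visibly degrades the accepted posterior.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(theta, n=200):
    """Toy stochastic model: data drawn from N(theta, 1)."""
    return rng.normal(theta, 1.0, size=n)

def summary(x):
    return np.array([x.mean(), x.std()])              # an informative choice
    # e.g. return np.array([x.max()])                 # a weaker choice, for comparison

observed = simulate(1.5)
s_obs = summary(observed)

def rejection_abc(n_draws=20000, tol=0.1):
    thetas = rng.uniform(-5, 5, size=n_draws)          # draws from the prior
    accepted = [t for t in thetas
                if np.linalg.norm(summary(simulate(t)) - s_obs) < tol]
    return np.array(accepted)

post = rejection_abc()
print(len(post), post.mean(), post.std())
```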
6

MacKay, David J. C. "Comparison of Approximate Methods for Handling Hyperparameters." Neural Computation 11, no. 5 (1999): 1035–68. http://dx.doi.org/10.1162/089976699300016331.

Full text
Abstract:
I examine two approximate methods for computational implementation of Bayesian hierarchical models, that is, models that include unknown hyperparameters such as regularization constants and noise levels. In the evidence framework, the model parameters are integrated over, and the resulting evidence is maximized over the hyperparameters. The optimized hyperparameters are used to define a gaussian approximation to the posterior distribution. In the alternative MAP method, the true posterior probability is found by integrating over the hyperparameters. The true posterior is then maximized over the model parameters, and a gaussian approximation is made. The similarities of the two approaches and their relative merits are discussed, and comparisons are made with the ideal hierarchical Bayesian solution. In moderately ill-posed problems, integration over hyperparameters yields a probability distribution with a skew peak, which causes significant biases to arise in the MAP method. In contrast, the evidence framework is shown to introduce negligible predictive error under straightforward conditions. General lessons are drawn concerning inference in many dimensions.
APA, Harvard, Vancouver, ISO, and other styles
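To make the evidence framework concrete, the sketch below applies MacKay-style evidence maximization to the hyperparameters of a toy Bayesian linear regression: the prior precision alpha and noise precision beta are re-estimated by the usual fixed-point updates. The data and basis are assumptions made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy linear-regression data (assumed): y = Phi @ w_true + noise.
N, D = 50, 5
Phi = rng.standard_normal((N, D))
w_true = rng.standard_normal(D)
y = Phi @ w_true + 0.3 * rng.standard_normal(N)

alpha, beta = 1.0, 1.0                                 # prior precision, noise precision
eigvals = np.linalg.eigvalsh(Phi.T @ Phi)

for _ in range(100):
    # Gaussian posterior over weights given the current hyperparameters.
    A = alpha * np.eye(D) + beta * Phi.T @ Phi
    m = beta * np.linalg.solve(A, Phi.T @ y)
    # Effective number of well-determined parameters.
    gamma = np.sum(beta * eigvals / (alpha + beta * eigvals))
    # Evidence-framework fixed-point updates.
    alpha = gamma / (m @ m)
    beta = (N - gamma) / np.sum((y - Phi @ m) ** 2)

print(alpha, 1.0 / np.sqrt(beta))                      # learned prior precision and noise sd
```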
7

Chi, Jinjin, Zhichao Zhang, Zhiyao Yang, Jihong Ouyang, and Hongbin Pei. "Generalized Variational Inference via Optimal Transport." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 10 (2024): 11534–42. http://dx.doi.org/10.1609/aaai.v38i10.29035.

Full text
Abstract:
Variational Inference (VI) has gained popularity as a flexible approximate inference scheme for computing posterior distributions in Bayesian models. Original VI methods use Kullback-Leibler (KL) divergence to construct variational objectives. However, KL divergence has zero-forcing behavior and is completely agnostic to the metric of the underlying data distribution, resulting in bad approximations. To alleviate this issue, we propose a new variational objective by using Optimal Transport (OT) distance, which is a metric-aware divergence, to measure the difference between approximate posteriors and priors. The superior performance of OT distance enables us to learn more accurate approximations. We further enhance the objective by gradually including the OT term using a hyperparameter λ for over-parameterized models. We develop a Variational inference method with OT (VOT) which presents a gradient-based black-box framework for solving Bayesian models, even when the density function of approximate distribution is not available. We provide the consistency analysis of approximate posteriors and demonstrate the practical effectiveness on Bayesian neural networks and variational autoencoders.
APA, Harvard, Vancouver, ISO, and other styles
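As a small illustration of replacing KL with a metric-aware optimal-transport distance when judging an approximate posterior, the sketch below compares two candidate approximations to a target using SciPy's one-dimensional Wasserstein distance on samples. This is only a toy diagnostic under assumed distributions, not the gradient-based VOT framework of the paper.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(4)

# Samples from a toy target posterior and two candidate approximations (assumed).
target = rng.normal(2.0, 1.0, size=5000)
approx_good = rng.normal(2.1, 1.1, size=5000)
approx_bad = rng.normal(0.0, 0.3, size=5000)           # badly placed and overconfident

# The OT distance accounts for how far mass must move, so the second fit scores much worse.
print(wasserstein_distance(target, approx_good))
print(wasserstein_distance(target, approx_bad))
```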
8

Dean, Thomas A., Sumeetpal S. Singh, and Ajay Jasra. "Asymptotic behaviour of the posterior distribution in approximate Bayesian computation." Stochastic Analysis and Applications 39, no. 5 (2021): 944–79. http://dx.doi.org/10.1080/07362994.2020.1859386.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Ba, Yuming, Jana de Wiljes, Dean S. Oliver, and Sebastian Reich. "Randomized maximum likelihood based posterior sampling." Computational Geosciences 26, no. 1 (2021): 217–39. http://dx.doi.org/10.1007/s10596-021-10100-y.

Full text
Abstract:
Abstract Minimization of a stochastic cost function is commonly used for approximate sampling in high-dimensional Bayesian inverse problems with Gaussian prior distributions and multimodal posterior distributions. The density of the samples generated by minimization is not the desired target density, unless the observation operator is linear, but the distribution of samples is useful as a proposal density for importance sampling or for Markov chain Monte Carlo methods. In this paper, we focus on applications to sampling from multimodal posterior distributions in high dimensions. We first show that sampling from multimodal distributions is improved by computing all critical points instead of only minimizers of the objective function. For applications to high-dimensional geoscience inverse problems, we demonstrate an efficient approximate weighting that uses a low-rank Gauss-Newton approximation of the determinant of the Jacobian. The method is applied to two toy problems with known posterior distributions and a Darcy flow problem with multiple modes in the posterior.
APA, Harvard, Vancouver, ISO, and other styles
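The randomized maximum likelihood idea summarized above can be stated in a few lines: each approximate posterior sample is the minimizer of a stochastic cost in which both the prior mean and the data are perturbed. The sketch below does this for a scalar nonlinear forward model with SciPy; the forward model, prior, and noise levels are assumptions, and no importance weighting is applied.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(5)

def forward(theta):
    return theta ** 3 - theta                          # toy nonlinear observation operator

prior_mean, prior_sd = 0.0, 1.0
obs, obs_sd = 0.4, 0.1

def rml_sample():
    """One approximate posterior draw via minimization of a perturbed cost."""
    theta_pert = rng.normal(prior_mean, prior_sd)       # perturbed prior mean
    d_pert = rng.normal(obs, obs_sd)                    # perturbed observation
    cost = lambda t: (0.5 * ((t - theta_pert) / prior_sd) ** 2
                      + 0.5 * ((forward(t) - d_pert) / obs_sd) ** 2)
    return minimize_scalar(cost).x

samples = np.array([rml_sample() for _ in range(500)])
print(samples.mean(), samples.std())                    # inspect a histogram to see the modes
```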
10

Zhang, Jinwei, Hang Zhang, Mert Sabuncu, Pascal Spincemaille, Thanh Nguyen, and Yi Wang. "Probabilistic dipole inversion for adaptive quantitative susceptibility mapping." Machine Learning for Biomedical Imaging 1, MIDL 2020 (2021): 1–19. http://dx.doi.org/10.59275/j.melba.2021-bbf2.

Full text
Abstract:
A learning-based posterior distribution estimation method, Probabilistic Dipole Inversion (PDI), is proposed to solve the quantitative susceptibility mapping (QSM) inverse problem in MRI with uncertainty estimation. In PDI, a deep convolutional neural network (CNN) is used to represent the multivariate Gaussian distribution as the approximate posterior distribution of susceptibility given the input measured field. Such CNN is first trained on healthy subjects via posterior density estimation, where the training dataset contains samples from the true posterior distribution. Domain adaptations are then deployed on patient datasets with new pathologies not included in pre-training, where PDI updates the pre-trained CNN’s weights in an unsupervised fashion by minimizing the Kullback-Leibler divergence between the approximate posterior distribution represented by CNN and the true posterior distribution from the likelihood distribution of a known physical model and pre-defined prior distribution. Based on our experiments, PDI provides additional uncertainty estimation compared to the conventional MAP approach, while addressing the potential issue of the pre-trained CNN when test data deviates from training. Our code is available at https://github.com/Jinwei1209/Bayesian_QSM.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Approximate posterior distribution"

1

Ruli, Erlis. "Recent Advances in Approximate Bayesian Computation Methods." Doctoral thesis, Università degli studi di Padova, 2014. http://hdl.handle.net/11577/3423529.

Full text
Abstract:
The Bayesian approach to statistical inference is fundamentally probabilistic. Exploiting the internal consistency of the probability framework, the posterior distribution extracts the relevant information in the data, and provides a complete and coherent summary of post-data uncertainty. However, summarising the posterior distribution often requires the calculation of awkward multidimensional integrals. A further complication with the Bayesian approach arises when the likelihood function is unavailable. In this respect, promising advances have been made by the theory of Approximate Bayesian Computation (ABC). This thesis focuses on computational methods for the approximation of posterior distributions, and it discusses six original contributions. The first contribution concerns the approximation of marginal posterior distributions for scalar parameters. By combining higher-order tail area approximation with inverse transform sampling, we define the HOTA algorithm, which draws independent random samples from the approximate marginal posterior. The second discusses the HOTA algorithm with pseudo-posterior distributions, e.g., posterior distributions obtained by the combination of a pseudo-likelihood with a prior within Bayes' rule. The third contribution extends the use of tail-area approximations to contexts with multidimensional parameters, and proposes a method which gives approximate Bayesian credible regions with good sampling coverage properties. The fourth presents an improved Laplace approximation which can be used for computing marginal likelihoods. The fifth contribution discusses a model-based procedure for choosing good summary statistics for ABC, by using composite score functions. Lastly, the sixth contribution discusses the choice of a default proposal distribution for ABC that is based on the notion of quasi-likelihood.
APA, Harvard, Vancouver, ISO, and other styles
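The first contribution above (the HOTA algorithm) pairs an analytic approximation of a marginal posterior with inverse transform sampling. The sketch below shows only the inverse-transform step in NumPy, using an assumed logistic-shaped stand-in for the approximate CDF; the higher-order tail-area formula itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(6)

# Grid and an assumed approximate marginal posterior CDF (a stand-in, not the tail-area formula).
grid = np.linspace(-5, 5, 2001)
approx_cdf = 1.0 / (1.0 + np.exp(-(grid - 0.8) / 0.6))

def hota_style_draws(n):
    """Independent draws by numerically inverting the approximate CDF on the grid."""
    u = rng.uniform(size=n)
    return np.interp(u, approx_cdf, grid)               # inverse transform sampling

theta = hota_style_draws(10000)
print(theta.mean(), np.quantile(theta, [0.025, 0.975]))
```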
2

Nembot, Simo Annick Joëlle. "Approximation de la distribution a posteriori d'un modèle Gamma-Poisson hiérarchique à effets mixtes." Thèse, 2011. http://hdl.handle.net/1866/4872.

Full text
Abstract:
We propose a method for analysing count or Poisson data based on the procedure called Poisson Regression Interactive Multilevel Modeling (PRIMM) introduced by Christiansen and Morris (1997). The Poisson regression in the PRIMM method has fixed effects only, whereas our model incorporates random effects. As in Christiansen and Morris (1997), the model studied aims at doing inference based on adequate analytical approximations of posterior distributions of the parameters. This avoids the use of computationally expensive methods such as Markov chain Monte Carlo (MCMC) methods. The approximations are based on Laplace's method and asymptotic theory. Estimates of Poisson mixed effects regression parameters are obtained through the maximization of their joint posterior density via the Newton-Raphson algorithm. This study also provides the first two posterior moments of the Poisson parameters involved. The posterior distribution of these parameters is approximated by a gamma distribution. Applications to two datasets show that our model can to some extent be considered a generalization of the PRIMM method since it also allows clustered count data. Finally, the model is applied to data involving many types of adverse events recorded by the participants of a drug clinical trial which involved a quadrivalent vaccine containing measles, mumps, rubella and varicella. The Poisson regression incorporates the fixed effect corresponding to the covariate treatment/control as well as a random effect associated with the biological system of the body affected by the adverse events.
APA, Harvard, Vancouver, ISO, and other styles
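The Newton-Raphson maximization of a log-posterior described above can be illustrated on the simplest Poisson case, a single rate with a gamma prior, where the gamma posterior gives a closed-form mode to check against. The counts and prior below are assumed for illustration only.

```python
import numpy as np

y = np.array([3, 5, 2, 4, 6, 1])                       # toy Poisson counts (assumed)
a, b = 2.0, 1.0                                         # gamma prior shape and rate (assumed)

def grad(lam):
    # Derivative of log p(lambda | y) for a Poisson likelihood with a Gamma(a, b) prior.
    return (y.sum() + a - 1) / lam - (len(y) + b)

def hess(lam):
    return -(y.sum() + a - 1) / lam ** 2

lam = 1.0
for _ in range(20):                                     # Newton-Raphson on the log-posterior
    lam = lam - grad(lam) / hess(lam)

# The exact posterior is Gamma(a + sum(y), b + n); its mode matches the Newton iterate.
print(lam, (a + y.sum() - 1) / (b + len(y)))
```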

Books on the topic "Approximate posterior distribution"

1

Delaney, Anthony. Physiology of body fluids. Oxford University Press, 2016. http://dx.doi.org/10.1093/med/9780199600830.003.0068.

Full text
Abstract:
An understanding of the physiology of body fluids is essential when considering appropriate fluid resuscitation and fluid replacement therapy in critically ill patients. In healthy humans, the body is composed of approximately 60% water, distributed between intracellular and extracellular compartments. The extracellular compartment is divided into intravascular, interstitial and transcellular compartments. The movement of fluids between the intravascular and interstitial compartments is classically described as being governed by Starling forces, leading to a small net efflux of fluid from the intravascular to the interstitial compartment. More recent evidence suggests that a model incorporating the effect of the endothelial glycocalyx layer, a web of glycoproteins and proteoglycans that are bound on the luminal side of the vascular endothelium, better explains the observed distribution of fluids. The movement of fluid to and from the intracellular compartment and the interstitial fluid compartment is governed by the relative osmolarities of the two compartments. Body fluid status is governed by the difference between fluid inputs and outputs; fluid input is regulated by the thirst mechanism, with fluid outputs consisting of gastrointestinal, renal, and insensible losses. The regulation of intracellular fluid status is largely governed by the regulation of the interstitial fluid osmolarity, which is controlled by the secretion of antidiuretic hormone from the posterior pituitary gland. Extracellular volume status is regulated by a complex neuro-endocrine mechanism designed to regulate sodium in the extracellular fluid.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Approximate posterior distribution"

1

Graziani, Rebecca. "Stochastic Population Forecasting: A Bayesian Approach Based on Evaluation by Experts." In Developments in Demographic Forecasting. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-42472-5_2.

Full text
Abstract:
Abstract We suggest a procedure for deriving expert based stochastic population forecasts within the Bayesian approach. According to the traditional and commonly used cohort-component model, the inputs of the forecasting procedures are the fertility and mortality age schedules along with the distribution of migrants by age. Age schedules and distributions are derived from summary indicators, such as total fertility rates, male and female life expectancy at birth, and male and female number of immigrants and emigrants. The joint distributions of all summary indicators are obtained based on evaluations by experts, elicited according to a conditional procedure that makes it possible to derive information on the centres of the indicators, their variability, their across-time correlations, and the correlations between the indicators. The forecasting method is based on a mixture model within the Supra-Bayesian approach that treats the evaluations by experts as data and the summary indicators as parameters. The derived posterior distributions are used as forecast distributions of the summary indicators of interest. A Markov Chain Monte Carlo algorithm is designed to approximate such posterior distributions.
APA, Harvard, Vancouver, ISO, and other styles
2

Mitros, John, Arjun Pakrashi, and Brian Mac Namee. "Ramifications of Approximate Posterior Inference for Bayesian Deep Learning in Adversarial and Out-of-Distribution Settings." In Computer Vision – ECCV 2020 Workshops. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-66415-2_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Abbey, Craig K., Eric Clarkson, Harrison H. Barrett, Stefan P. Müller, and Frank J. Rybicki. "Approximate distributions for Maximum Likelihood and maximum a posteriori estimates under a Gaussian noise model." In Lecture Notes in Computer Science. Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/3-540-63046-5_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Sweeting, T. J. "Approximate Bayesian Computation Based on Signed Roots of Log-Density Ratios." In Bayesian Statistics 5. Oxford University PressOxford, 1996. http://dx.doi.org/10.1093/oso/9780198523567.003.0022.

Full text
Abstract:
Abstract We discuss approximate Bayesian computation based on the asymptotic theory of signed root log-likelihood, or log-posterior density ratios. Third-order correct formulae for univariate posterior distribution functions are reviewed and some new fourth-order correct formulae for Bartlett corrections, posterior expectations and predictive distributions presented. All the approximations described in the paper are available at little additional computational cost over simple first-order approximations and are particularly useful for sensitivity and influence analyses. Some illustrative examples are given. It is argued that analytic approximation still has an important role to play in Bayesian statistics.
APA, Harvard, Vancouver, ISO, and other styles
5

Gilbert, Hugo, Mohamed Ouaguenouni, Meltem Öztürk, and Olivier Spanjaard. "A Hybrid Approach to Preference Learning with Interaction Terms." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2023. http://dx.doi.org/10.3233/faia230351.

Full text
Abstract:
Preference learning is an essential component in numerous applications, such as recommendation systems, decision-making processes, and personalized services. We propose here a novel approach to preference learning that interleaves Gaussian Processes (GP) and Robust Ordinal Regression (ROR). A Gaussian process gives a probability distribution on the latent function values that generate users’ preferences. Our method extends the traditional non-parametric Gaussian process framework by approximating the latent function by a very flexible parameterized function, that we call θ-additive function, where θ is the parameter set. The set θ reflects the degree of sophistication of the generalized additive model that can potentially represent the user’s preferences. To learn what are the components of θ, we update a probability distribution on the space of all possible sets θ, depending on the ability of the parameterized function to approximate the latent function. We predict pairwise preferences by using the parameter set θ that maximizes the posterior distribution and by performing robust ordinal regression based on this parameter set. Experimental results on synthetic data demonstrate the effectiveness and robustness of our proposed methodology.
APA, Harvard, Vancouver, ISO, and other styles
6

West, Mike. "Modelling with Mixtures." In Bayesian Statistics 4. Oxford University PressOxford, 1992. http://dx.doi.org/10.1093/oso/9780198522669.003.0028.

Full text
Abstract:
Abstract Discrete mixtures of distributions of standard parametric forms commonly arise in statistical modelling and with methods of analysis that exploit mixture structure. This paper discusses general issues of modelling with mixtures that arise in fitting mixtures to data distributions, using mixtures to approximate functional forms, such as posterior distributions in parametric models, and development of mixture pruning methods useful for reducing the number of components of large mixtures. These issues arise in problems of density estimation using mixtures of Dirichlet processes, adaptive importance sampling function design in Monte Carlo integration, and Bayesian discrimination and cluster analysis.
APA, Harvard, Vancouver, ISO, and other styles
7

Schurz, Gerhard. "From Optimal Inductive Methods to Optimal Beliefs." In Optimality Justifications. Oxford University PressOxford, 2024. http://dx.doi.org/10.1093/oso/9780198887546.003.0008.

Full text
Abstract:
Abstract Chapter 7 begins with the application of meta-induction to the prediction of probability distributions, in the form of probabilistic prediction games. The attractivity-weighted aggregation of candidate distributions yields a predictively optimal probability distribution. The adoption of this distribution as one’s rational degree of belief is justified by the optimality principle, since acting according to this distribution maximizes expected utility. By tracking the success of distributions over games with varying event sequences a meta-inductive a posteriori justification of inductive generalizations in the form of exchangeability assumptions is possible. Section 7.3 is devoted to the optimality of the principle of total evidence as an important complement to methods of induction. It is shown that conditionalizing degrees of belief on one’s total relevant evidence may only increase, not decrease, one’s expected success. The two final sections deal with the justificational transition from rational degrees of beliefs to qualitative (yes-or-no) beliefs. The question of when it is rational to accept a highly probable proposition as a qualitative belief leads into difficult problems involving a clash between different rationality principles, such as Locke’s condition and the principle of conjunctive closure. This clash is exemplified in the lottery paradox and the paradox of the preface. It is argued that the optimal strategy for extracting qualitative beliefs out of rational degrees of belief depends on the context: while for success-essential beliefs a high probability of strict truth is mandatory, for global theories a high probability of approximate truth is sufficient.
APA, Harvard, Vancouver, ISO, and other styles
8

Stephens, D. A., and P. Dellaportas. "Bayesian Analysis of Generalised Linear Models with Covariate Measurement Error." In Bayesian Statistics 4. Oxford University PressOxford, 1992. http://dx.doi.org/10.1093/oso/9780198522669.003.0058.

Full text
Abstract:
Abstract Use of generalised linear models when covariates are masked by measurement errors is appropriate in many practical epidemiological problems. However, inference based on such models is by no means straightforward. In previous analyses, simplifying assumptions were made. In this paper, we analyse such models in full generality under a Bayesian formulation. In order to compute the necessary posterior distributions, we utilize various numerical and sampling-based approximate and exact techniques. A specific example involving a logistic regression is considered.
APA, Harvard, Vancouver, ISO, and other styles
9

Schetinin, V., and L. Jakaite. "Assessment and Confidence Estimates of Newborn Brain Maturity from Sleep EEG." In E-Health Technologies and Improving Patient Safety: Exploring Organizational Factors. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-2657-7.ch014.

Full text
Abstract:
Electroencephalograms (EEGs) recorded from sleeping newborns contain information about their brain maturity. Although these EEGs are very weak and distorted by artifacts, and widely vary during sleep hours as well as between patients, the main maturity-related patterns are recognizable by experts. However, experts are typically incapable of quantitatively providing accurate estimates of confidence in assessments. The most accurate estimates are, in theory, provided with the Bayesian methodology of probabilistic inference which has been practically implemented with Markov Chain Monte Carlo (MCMC) integration over a model parameter space. Typically this technique aims to approximate the integral by sampling areas of interests with high likelihood of the true model. In practice, the likelihood distributions are typically multimodal, and for this reason, the existing MCMC techniques have been shown incapable of providing the proportional sampling of multiple areas of interest. Besides, the lack of prior information increases this problem especially for a large model parameter space, making its detailed exploration impossible within a reasonable time. Specifically, the absence of information about EEG features has been shown affecting the results of the Bayesian assessment of EEG maturity. In this chapter, authors discuss how the posterior information can be used in order to mitigate the problem of disproportional sampling in order to improve the accuracy of assessments. Having analyzed the posterior information, they found that the MCMC integration tends to oversample the areas in which a model parameter space includes EEG features making a weak contribution to the assessment. This observation has motivated the authors to cure the results of MCMC integration, and when they tested the proposed method on the EEG recordings, they found an increase in the accuracy of assessment.
APA, Harvard, Vancouver, ISO, and other styles
10

Verdoorn, Todd A., J. Riley McCarten, David B. Arcienegas, et al. "Evaluation and Tracking of Alzheimer's Disease Severity Using Resting-State Magnetoencephalography." In Advances in Alzheimer’s Disease. IOS Press, 2011. https://doi.org/10.3233/978-1-60750-793-2-445.

Full text
Abstract:
We have conducted multicenter clinical studies in which brain function was evaluated with brief, resting-state magnetoencephalography (MEG) scans. A study cohort of 117 AD patients and 123 elderly cognitively normal volunteers was recruited from community neurology clinics in Denver, Colorado and Minneapolis, Minnesota. Each subject was evaluated through neurological examination, medical history, and a modest battery of standard neuropsychological tests. Brain function was measured by a one-minute, resting-state, eyes-open MEG scan. Cross-sectional analysis of MEG scans revealed global changes in the distribution of relative spectral power (centroid frequency of healthy subjects = 8.24 ± 0.2 Hz and AD patients = 6.78 ± 0.25 Hz) indicative of generalized slowing of brain signaling. Functional connectivity patterns were measured using the synchronous neural interactions (SNI) test, which showed a global increase in the strength of functional connectivity (cO2 value of healthy subjects = 0.059 ± 0.0007 versus AD patients = 0.066 ± 0.001) associated with AD. The largest magnitude disease-associated changes were localized to sensors near posterior and lateral cortical regions. Part of the cohort (31 AD and 46 cognitively normal) was evaluated in an identical fashion approximately 10 months after the first assessments. Follow-up scans revealed multiple MEG scan features that correlated significantly with changes in neuropsychological test scores. Linear combinations of these MEG scan features generated an accurate multivariate model of disease progression over 10 months. Our results demonstrate the utility of brief resting-state tests based on MEG. The non-invasive, rapid and convenient nature of these scans offers a new tool for translational AD research and early phase development of novel treatments for AD.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Approximate posterior distribution"

1

Lian, Rongzhong, Min Xie, Fan Wang, Jinhua Peng, and Hua Wu. "Learning to Select Knowledge for Response Generation in Dialog Systems." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/706.

Full text
Abstract:
End-to-end neural models for intelligent dialogue systems suffer from the problem of generating uninformative responses. Various methods were proposed to generate more informative responses by leveraging external knowledge. However, little previous work has focused on selecting appropriate knowledge during the learning process. The inappropriate selection of knowledge could prohibit the model from learning to make full use of the knowledge. Motivated by this, we propose an end-to-end neural model which employs a novel knowledge selection mechanism where both prior and posterior distributions over knowledge are used to facilitate knowledge selection. Specifically, a posterior distribution over knowledge is inferred from both utterances and responses, and it ensures the appropriate selection of knowledge during the training process. Meanwhile, a prior distribution, which is inferred from utterances only, is used to approximate the posterior distribution so that appropriate knowledge can be selected even without responses during the inference process. Compared with the previous work, our model can better incorporate appropriate knowledge in response generation. Experiments on both automatic and human evaluation verify the superiority of our model over previous baselines.
APA, Harvard, Vancouver, ISO, and other styles
2

Shen, Gehui, Xi Chen, and Zhihong Deng. "Variational Learning of Bayesian Neural Networks via Bayesian Dark Knowledge." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/282.

Full text
Abstract:
Bayesian neural networks (BNNs) have received more and more attention because they are capable of modeling epistemic uncertainty, which is hard for conventional neural networks. Markov chain Monte Carlo (MCMC) methods and variational inference (VI) are two mainstream methods for Bayesian deep learning. The former is effective but its storage cost is prohibitive since it has to save many samples of neural network parameters. The latter method is more time- and space-efficient; however, the approximate variational posterior limits its performance. In this paper, we aim to combine the advantages of the above two methods by distilling MCMC samples into an approximate variational posterior. On the basis of an existing distillation technique we first propose the variational Bayesian dark knowledge method. Moreover, we propose Bayesian dark prior knowledge, a novel distillation method which considers the MCMC posterior as the prior of a variational BNN. Both proposed methods not only reduce the space overhead of the teacher model, making them scalable, but also maintain a distilled posterior distribution capable of modeling epistemic uncertainty. Experimental results show that our methods outperform the existing distillation method in terms of predictive accuracy and uncertainty modeling.
APA, Harvard, Vancouver, ISO, and other styles
3

Dresdner, Gideon, Saurav Shekhar, Fabian Pedregosa, Francesco Locatello, and Gunnar Rätsch. "Boosting Variational Inference With Locally Adaptive Step-Sizes." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/322.

Full text
Abstract:
Variational Inference makes a trade-off between the capacity of the variational family and the tractability of finding an approximate posterior distribution. Instead, Boosting Variational Inference allows practitioners to obtain increasingly good posterior approximations by spending more compute. The main obstacle to widespread adoption of Boosting Variational Inference is the amount of resources necessary to improve over a strong Variational Inference baseline. In our work, we trace this limitation back to the global curvature of the KL-divergence. We characterize how the global curvature impacts time and memory consumption, address the problem with the notion of local curvature, and provide a novel approximate backtracking algorithm for estimating local curvature. We give new theoretical convergence rates for our algorithms and provide experimental validation on synthetic and real-world datasets.
APA, Harvard, Vancouver, ISO, and other styles
4

Zhang, Yunhao, Junchi Yan, Xiaolu Zhang, Jun Zhou, and Xiaokang Yang. "Learning Mixture of Neural Temporal Point Processes for Multi-dimensional Event Sequence Clustering." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/523.

Full text
Abstract:
Multi-dimensional event sequence clustering applies to many scenarios e.g. e-Commerce and electronic health. Traditional clustering models fail to characterize complex real-world processes due to the strong parametric assumption. While Neural Temporal Point Processes (NTPPs) mainly focus on modeling similar sequences instead of clustering. To fill the gap, we propose Mixture of Neural Temporal Point Processes (NTPP-MIX), a general framework that can utilize many existing NTPPs for multi-dimensional event sequence clustering. In NTPP-MIX, the prior distribution of coefficients for cluster assignment is modeled by a Dirichlet distribution. When the assignment is given, the conditional probability of a sequence is modeled by the mixture of a series of NTPPs. We combine variational EM algorithm and Stochastic Gradient Descent (SGD) to efficiently train the framework. In E-step, we fix parameters for NTPPs and approximate the true posterior with variational distributions. In M-step, we fix variational distributions and use SGD to update parameters of NTPPs. Extensive experimental results on four synthetic datasets and three real-world datasets show the effectiveness of NTPP-MIX against state-of-the-arts.
APA, Harvard, Vancouver, ISO, and other styles
5

Zhang, Yunhao, and Junchi Yan. "Neural Relation Inference for Multi-dimensional Temporal Point Processes via Message Passing Graph." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/469.

Full text
Abstract:
Relation discovery for multi-dimensional temporal point processes (MTPP) has received increasing interest for its importance in prediction and interpretability of the underlying dynamics. Traditional statistical MTPP models like the Hawkes process have difficulty capturing complex relations due to the limited parametric form of their intensity function, while recent neural-network-based models suffer from poor interpretability. In this paper, we propose a neural relation inference model namely TPP-NRI. Given MTPP data, it adopts a variational inference framework to model the posterior relation of MTPP data for probabilistic estimation. Specifically, assuming the prior of the relation is known, the conditional probability of the MTPP conditional on a sampled relation is captured by a message passing graph neural network (GNN) based MTPP model. A variational distribution is introduced to approximate the true posterior. Experiments on synthetic and real-world data show that our model outperforms baseline methods on both inference capability and scalability for high-dimensional data.
APA, Harvard, Vancouver, ISO, and other styles
6

Gang, Jinhyuk, Jooho Choi, Bonghee Lee, and Jinwon Joo. "Material Parameter Identification of Viscoplastic Model for Solder Alloy in Electronics Package Using Bayesian Calibration." In ASME 2010 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2010. http://dx.doi.org/10.1115/detc2010-28603.

Full text
Abstract:
In this study, a method of computer model calibration is applied to quantify the uncertainties arising in the material characterization of the solder joint in a microelectronics package subject to a thermal cycle. All uncertainties are addressed by using a Bayesian calibration approach. A special specimen that characterizes the solder property under shear deformation is prepared, from which the Moiré fringe is measured by running a thermal cycle. A viscoplastic finite element analysis procedure is constructed for the specimen based on the Anand model. A Gaussian process model known as Kriging is employed to approximate the original finite element analysis (FEA) model. The posterior distribution for the unknown Anand parameters is formulated from the likelihood function for the joint full-field displacements of computation and experiment. A Markov Chain Monte Carlo (MCMC) method is employed to simulate the posterior distribution. As a result, the displacements are predicted in the form of confidence intervals. The results show that the proposed approach can be a useful tool for estimating the unknown material parameters in a probabilistic manner by effectively accounting for the uncertainties due to the experimental and computational models.
APA, Harvard, Vancouver, ISO, and other styles
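The calibration workflow above replaces the expensive finite element model with a Kriging surrogate and then samples the posterior of the unknown parameters by MCMC. The sketch below shows that pattern with a random-walk Metropolis sampler and a deliberately cheap stand-in for the surrogate; the surrogate function, synthetic data, noise level, and positivity prior are all assumptions, not the Anand-model setup of the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

def surrogate(theta):
    """Cheap stand-in for a Kriging emulator of the finite element model."""
    return theta[0] * np.exp(-theta[1] * np.linspace(0.0, 1.0, 20))

observed = surrogate(np.array([2.0, 3.0])) + 0.05 * rng.standard_normal(20)

def log_post(theta):
    if np.any(theta <= 0):
        return -np.inf                                  # positivity constraint on the parameters
    resid = observed - surrogate(theta)
    return -0.5 * np.sum((resid / 0.05) ** 2)           # Gaussian likelihood, flat prior otherwise

theta = np.array([1.0, 1.0])
chain = []
for _ in range(5000):                                   # random-walk Metropolis
    prop = theta + 0.05 * rng.standard_normal(2)
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    chain.append(theta.copy())

chain = np.array(chain[1000:])                          # discard burn-in
print(chain.mean(axis=0), np.percentile(chain, [2.5, 97.5], axis=0))
```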
7

Chou, Yi, and Sriram Sankaranarayanan. "Bayesian Parameter Estimation for Nonlinear Dynamics Using Sensitivity Analysis." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/791.

Full text
Abstract:
We investigate approximate Bayesian inference techniques for nonlinear systems described by ordinary differential equation (ODE) models. In particular, the approximations will be based on set-valued reachability analysis approaches, yielding approximate models for the posterior distribution. Nonlinear ODEs are widely used to mathematically describe physical and biological models. However, these models are often described by parameters that are not directly measurable and have an impact on the system behaviors. Often, noisy measurement data combined with physical/biological intuition serve as the means for finding appropriate values of these parameters. Our approach operates under a Bayesian framework, given prior distribution over the parameter space and noisy observations under a known sampling distribution. We explore subsets of the space of model parameters, computing bounds on the likelihood for each subset. This is performed using nonlinear set-valued reachability analysis that is made faster by means of linearization around a reference trajectory. The tiling of the parameter space can be adaptively refined to make bounds on the likelihood tighter. We evaluate our approach on a variety of nonlinear benchmarks and compare our results with Markov Chain Monte Carlo and Sequential Monte Carlo approaches.
APA, Harvard, Vancouver, ISO, and other styles
8

Sun, Yinbo, Lintao Ma, Yu Liu, et al. "Memory Augmented State Space Model for Time Series Forecasting." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/479.

Full text
Abstract:
State space model (SSM) provides a general and flexible forecasting framework for time series. Conventional SSM with fixed-order Markovian assumption often falls short in handling the long-range temporal dependencies and/or highly non-linear correlation in time-series data, which is crucial for accurate forecasting. To this end, we present the External Memory Augmented State Space Model (EMSSM) within the sequential Monte Carlo (SMC) framework. Unlike the common fixed-order Markovian SSM, our model features an external memory system in which we store informative latent state experience to create "memoryful" latent dynamics modeling complex long-term dependencies. Moreover, conditional normalizing flows are incorporated in our emission model, enabling the adaptation to a broad class of underlying data distributions. We further propose a Monte Carlo Objective that employs an efficient variational proposal distribution, which fuses the filtering and the dynamic prior information, to approximate the posterior state with proper particles. Our results demonstrate the competitive forecasting performance of our proposed model compared with other state-of-the-art SSMs.
APA, Harvard, Vancouver, ISO, and other styles
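Since the model above is trained inside a sequential Monte Carlo framework, a minimal bootstrap particle filter for a toy linear-Gaussian state space model is sketched below in NumPy as background; the transition, emission, and bootstrap proposal used here are assumptions, and the memory-augmented dynamics and variational proposal of EMSSM are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(9)

# Toy linear-Gaussian SSM (assumed): x_t = 0.9 x_{t-1} + w_t, y_t = x_t + v_t.
T, N = 100, 500
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = 0.9 * x[t - 1] + 0.3 * rng.standard_normal()
    y[t] = x[t] + 0.2 * rng.standard_normal()

particles = rng.standard_normal(N)
filtered_means = []
for t in range(1, T):
    particles = 0.9 * particles + 0.3 * rng.standard_normal(N)   # propagate with the prior
    log_w = -0.5 * ((y[t] - particles) / 0.2) ** 2                # weight by the likelihood
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    particles = particles[rng.choice(N, N, p=w)]                  # multinomial resampling
    filtered_means.append(particles.mean())

print(np.mean(np.abs(np.array(filtered_means) - x[1:])))          # mean filtering error
```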
9

Gao, Guohua, Hao Lu, and Carl Blom. "Characterizing Joint Distribution of Uncertainty Parameters and Production Forecasts Using Gaussian Mixture Model and a Two-Loop Expectation-Maximization Algorithm." In SPE Annual Technical Conference and Exhibition. SPE, 2024. http://dx.doi.org/10.2118/220846-ms.

Full text
Abstract:
Abstract Uncertainty quantification of reservoirs with multiple geological concepts and robust optimization are key technologies for oil/gas field development planning, which require properly characterizing the joint distribution of model parameters and/or production forecasts after conditioning to historical production data. In this work, an ensemble of conditional realizations is generated by a multi-realization history-matching (MHM) workflow. The posterior probability-density-function (PDF) of model parameters and/or production forecasts is non-Gaussian and we approximate it by a Gaussian-mixture-model (GMM) using an expectation-maximization (EM) algorithm. This paper first discusses major limitations of the traditional EM algorithm: it is not robust and can converge to suboptimal solutions. We develop a two-loop EM algorithm (EM-EVD-TL) using the compact form of eigenvalue decomposition (EVD) and propose new strategies to overcome these limitations: (1) Reduce the dimension of a Gaussian component if its covariance matrix becomes singular; (2) introduce an inner EM-loop in which only the diagonal matrix in EVD of the covariance matrix is updated. The first strategy improves the stability and convergence of the EM algorithm in dealing with degeneration of Gaussian components. The second strategy reduces the computational cost and further improves the convergence rate. The proposed EM-EVD-TL algorithm was validated on an analytical testing example, and its performance is compared against the single-loop, traditional EM algorithms which use either Cholesky decomposition (EM-CD) or EVD (EM-EVD). An ensemble of conditional realizations is generated from sampling the actual PDF using the Markov-Chain-Monte-Carlo (MCMC) approach. For the analytical example, the GMMs approximated by the three EM algorithms are very close to the actual distribution with negligible difference. Finally, we applied the proposed EM-EVD-TL algorithm to realistic history matching problems with different numbers of uncertainty parameters and production forecasts. We first generate an ensemble of conditional realizations using either the MCMC method or the distributed Gauss-Newton (DGN) optimization method. Then, we construct GMMs using different EM algorithms by fitting the conditional realizations, starting from different initial configurations and settings. Our numerical results confirm that the proposed EM-EVD and EM-EVD-TL algorithms perform robustly. In contrast, the traditional EM-CD algorithm without regularization fails to converge for most testing cases. The EM-EVD-TL algorithm converges faster to better solutions than the EM-CD algorithm. The proposed two-loop EM-EVD-TL algorithm has many potential applications and thus helps make better decisions: (1) Close gaps between theoretical formulations of history matching and real applications; (2) characterize the posterior distribution of reservoir models having multiple geological concepts or categories; (3) select high-quality P10-P50-P90 representative models; (4) reparametrize gridblock-based properties; and (5) conduct robust well-location and well-control optimization (WLO/WCO) under uncertainty, e.g., through seamless integration of EM-GMM with our advanced multi-objective optimization techniques.
APA, Harvard, Vancouver, ISO, and other styles
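The two-loop algorithm above builds on the standard EM updates for a Gaussian mixture, with each covariance handled through its eigenvalue decomposition. The sketch below shows only that standard EM baseline in NumPy, with an EVD-based floor on the eigenvalues to guard against degenerate components; the data, floor value, and fixed iteration count are assumptions, and the inner loop of EM-EVD-TL is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy "conditional realizations": two clusters in 2-D (assumed data).
X = np.vstack([rng.normal([0, 0], 0.5, (200, 2)), rng.normal([3, 3], 0.8, (200, 2))])
n, d, K = X.shape[0], X.shape[1], 2

w = np.full(K, 1.0 / K)
mu = X[rng.choice(n, K, replace=False)]
cov = np.array([np.eye(d) for _ in range(K)])

def gauss_logpdf(X, m, C):
    vals, vecs = np.linalg.eigh(C)                     # EVD of the component covariance
    vals = np.clip(vals, 1e-6, None)                   # floor degenerate eigenvalues
    Z = (X - m) @ vecs
    return (-0.5 * np.sum(Z ** 2 / vals, axis=1)
            - 0.5 * np.sum(np.log(vals)) - 0.5 * d * np.log(2 * np.pi))

for _ in range(100):
    # E-step: responsibilities of each component for each realization.
    log_r = np.log(w) + np.column_stack([gauss_logpdf(X, mu[k], cov[k]) for k in range(K)])
    log_r -= log_r.max(axis=1, keepdims=True)
    r = np.exp(log_r)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: update weights, means, and covariances.
    Nk = r.sum(axis=0)
    w = Nk / n
    mu = (r.T @ X) / Nk[:, None]
    for k in range(K):
        Xc = X - mu[k]
        cov[k] = (r[:, k, None] * Xc).T @ Xc / Nk[k]

print(w, mu)
```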
10

Masada, Tomonari. "Document Modeling with Implicit Approximate Posterior Distributions." In the International Conference. ACM Press, 2018. http://dx.doi.org/10.1145/3224207.3224214.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Approximate posterior distribution"

1

Maldonado, Karelys, Juan Espinoza, Daniela Astudillo, and Wilson Bravo. Fatigue and fracture resistance and survival of occlusal veneers of composite resin and ceramics blocks in posterior teeth with occlusal wear: A protocol for a systematic review. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, 2021. http://dx.doi.org/10.37766/inplasy2021.10.0036.

Full text
Abstract:
Review question / Objective: The aim of this systematic review is to synthesize the scientific evidence that evaluates fatigue and fracture resistance, survival, and stress distribution of composite resin CAD/CAM and ceramic CAD/CAM occlusal veneers in posterior teeth with severe occlusal wear. Condition being studied: Currently there is an increase in cases of dental wear, due to several factors such as excessive consumption of carbonated drinks, a diet high in acids, gastric diseases, anorexia, bulimia, dental grinding, use of highly abrasive toothpastes, or a combination of these(9) (10) (11) (12), which affect the patient in several aspects: loss of vertical dimension, sensitivity due to the exposure of dentin, esthetics, and involvement of the neuromuscular system(11) (13) (14). With the advent of minimally invasive dentistry, occlusal veneers have been found to be a valid option to rehabilitate this type of case and thus avoid the greater loss of dental structure required by full-coverage restorations. Sometimes, when performing a tabletop restoration, no preparation is necessary, thus preserving the maximum amount of dental tissue(3) (6) (15). Because the maximum masticatory force is approximately 424 N for women and 630 N for men in patients without parafunction, and the maximum bite force can vary from 780 to 1120 N in those who present parafunction(7), occlusal veneers must withstand such loads, which makes a compilation of studies investigating both fatigue and fracture resistance and the survival rate of occlusal veneers in different materials and thicknesses indispensable.
APA, Harvard, Vancouver, ISO, and other styles