Academic literature on the topic 'Bayesian models of generalization'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Bayesian models of generalization.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Bayesian models of generalization"

1

Zhu, Lin, Xinbing Wang, Chenghu Zhou, and Nanyang Ye. "Bayesian Cross-Modal Alignment Learning for Few-Shot Out-of-Distribution Generalization." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (2023): 11461–69. http://dx.doi.org/10.1609/aaai.v37i9.26355.

Full text
Abstract:
Recent advances in large pre-trained models showed promising results in few-shot learning. However, their generalization ability on two-dimensional Out-of-Distribution (OoD) data, i.e., correlation shift and diversity shift, has not been thoroughly investigated. Research has shown that even with a significant amount of training data, few methods can achieve better performance than the standard empirical risk minimization method (ERM) in OoD generalization. This few-shot OoD generalization dilemma emerges as a challenging direction in deep neural network generalization research, where the performance suffers from overfitting on few-shot examples and OoD generalization errors. In this paper, leveraging a broader supervision source, we explore a novel Bayesian cross-modal image-text alignment learning method (Bayes-CAL) to address this issue. Specifically, the model is designed so that only text representations are fine-tuned, via a Bayesian modelling approach with gradient orthogonalization loss and invariant risk minimization (IRM) loss. The Bayesian approach is essentially introduced to avoid overfitting the base classes observed during training and to improve generalization to broader unseen classes. The dedicated loss is introduced to achieve better image-text alignment by disentangling the causal and non-causal parts of image features. Numerical experiments demonstrate that Bayes-CAL achieves state-of-the-art OoD generalization performance on two-dimensional distribution shifts. Moreover, compared with CLIP-like models, Bayes-CAL yields more stable generalization performance on unseen classes. Our code is available at https://github.com/LinLLLL/BayesCAL.
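A minimal sketch of the invariant risk minimization ingredient mentioned in this abstract, in its common IRMv1 form: the squared gradient of the per-environment risk with respect to a dummy scaling of the classifier output. The logits and labels below are random placeholders; the paper's Bayesian cross-modal model itself is not reproduced.

```python
import torch
import torch.nn.functional as F

def irm_penalty(logits, labels):
    # IRMv1 penalty: squared gradient of the risk w.r.t. a dummy
    # scale multiplying the classifier output.
    scale = torch.ones(1, requires_grad=True)
    loss = F.cross_entropy(logits * scale, labels)
    (grad,) = torch.autograd.grad(loss, scale, create_graph=True)
    return (grad ** 2).sum()

# Placeholder batch standing in for one training environment.
logits = torch.randn(8, 4)
labels = torch.randint(0, 4, (8,))
print(float(irm_penalty(logits, labels)))
```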
APA, Harvard, Vancouver, ISO, and other styles
2

Tenenbaum, Joshua B., and Thomas L. Griffiths. "Generalization, similarity, and Bayesian inference." Behavioral and Brain Sciences 24, no. 4 (2001): 629–40. http://dx.doi.org/10.1017/s0140525x01000061.

Full text
Abstract:
Shepard has argued that a universal law should govern generalization across different domains of perception and cognition, as well as across organisms from different species or even different planets. Starting with some basic assumptions about natural kinds, he derived an exponential decay function as the form of the universal generalization gradient, which accords strikingly well with a wide range of empirical data. However, his original formulation applied only to the ideal case of generalization from a single encountered stimulus to a single novel stimulus, and for stimuli that can be represented as points in a continuous metric psychological space. Here we recast Shepard's theory in a more general Bayesian framework and show how this naturally extends his approach to the more realistic situation of generalizing from multiple consequential stimuli with arbitrary representational structure. Our framework also subsumes a version of Tversky's set-theoretic model of similarity, which is conventionally thought of as the primary alternative to Shepard's continuous metric space model of similarity and generalization. This unification allows us not only to draw deep parallels between the set-theoretic and spatial approaches, but also to significantly advance the explanatory power of set-theoretic models.
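A minimal sketch of the framework's size principle, assuming a toy hypothesis space of integer intervals and a uniform prior (both placeholders): hypotheses consistent with the observed examples are weighted by (1/|h|)^n, and generalization to a novel item sums the posterior over hypotheses containing it.

```python
import numpy as np

# Toy hypothesis space: all integer intervals [a, b] within 1..10.
hypotheses = [set(range(a, b + 1)) for a in range(1, 11) for b in range(a, 11)]
prior = np.ones(len(hypotheses)) / len(hypotheses)   # uniform prior (assumption)
examples = [4, 5, 6]                                  # observed consequential stimuli

def likelihood(h, X):
    # Size principle: strong sampling from the true consequential set.
    return (1.0 / len(h)) ** len(X) if all(x in h for x in X) else 0.0

post = prior * np.array([likelihood(h, examples) for h in hypotheses])
post /= post.sum()

def p_generalize(y):
    # Probability that novel item y belongs to the consequential set:
    # posterior mass of all hypotheses containing y.
    return sum(p for h, p in zip(hypotheses, post) if y in h)

for y in [5, 7, 9]:
    print(f"p({y} in C | X) = {p_generalize(y):.3f}")   # decays with distance
```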
APA, Harvard, Vancouver, ISO, and other styles
3

San Martín, Ernesto, Alejandro Jara, Jean-Marie Rolin, and Michel Mouchart. "On the Bayesian Nonparametric Generalization of IRT-Type Models." Psychometrika 76, no. 3 (2011): 385–409. http://dx.doi.org/10.1007/s11336-011-9213-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Linares Cedeño, Francisco X., Gabriel German, Juan Carlos Hidalgo, and Ariadna Montiel. "Bayesian analysis for a class of α-attractor inflationary models." Journal of Cosmology and Astroparticle Physics 2023, no. 03 (2023): 038. http://dx.doi.org/10.1088/1475-7516/2023/03/038.

Full text
Abstract:
We perform a Bayesian study of a generalization of the basic α-attractor T model given by the potential V(ϕ) = V₀[1 − sech^p(ϕ/√(6α)M_pl)], where ϕ is the inflaton field and the parameter α corresponds to the inverse curvature of the scalar manifold in the conformal or superconformal realizations of the attractor models. Such generalization is characterized by the power p, which includes the basic or base model for p = 2. Once the priors for the parameters of the α-attractor potential are set by numerical exploration, we perform the corresponding statistical analysis for the cases p = 1, 2, 3, 4, and derive posteriors. Considering the original α-attractor potential as the base model, we calculate the evidence for our generalization, and conclude that the p = 4 model is preferred by the CMB data. We also present constraints for the parameter α. Interestingly, all the cases studied prefer a specific value for the tensor-to-scalar ratio given by r ≃ 0.0025.
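A short sketch that evaluates the generalized potential for p = 1, ..., 4 in units where M_pl = 1; V0 = α = 1 are placeholder values, not the posterior constraints reported in the paper.

```python
import numpy as np

# Generalized alpha-attractor T-model potential
# V(phi) = V0 * (1 - sech(phi / sqrt(6*alpha))**p), with M_pl = 1.
V0, alpha = 1.0, 1.0   # placeholder values (assumption)

def V(phi, p):
    return V0 * (1.0 - 1.0 / np.cosh(phi / np.sqrt(6.0 * alpha)) ** p)

phi = np.linspace(0.0, 10.0, 5)
for p in (1, 2, 3, 4):
    print(p, np.round(V(phi, p), 4))  # flattens toward V0 at large phi
```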
APA, Harvard, Vancouver, ISO, and other styles
5

Ahuja, Kabir, Vidhisha Balachandran, Madhur Panwar, et al. "Learning Syntax Without Planting Trees: Understanding Hierarchical Generalization in Transformers." Transactions of the Association for Computational Linguistics 13 (February 12, 2024): 121–41. https://doi.org/10.1162/tacl_a_00733.

Full text
Abstract:
Transformers trained on natural language data have been shown to exhibit hierarchical generalization without explicitly encoding any structural bias. In this work, we investigate sources of inductive bias in transformer models and their training that could cause such preference for hierarchical generalization. We extensively experiment with transformers trained on five synthetic, controlled datasets using several training objectives and show that, while objectives such as sequence-to-sequence modeling, classification, etc., often fail to lead to hierarchical generalization, the language modeling objective consistently leads to transformers generalizing hierarchically. We then study how different generalization behaviors emerge during the training by conducting pruning experiments that reveal the joint existence of subnetworks within the model implementing different generalizations. Finally, we take a Bayesian perspective to understand transformers’ preference for hierarchical generalization: We establish a correlation between whether transformers generalize hierarchically on a dataset and if the simplest explanation of that dataset is provided by a hierarchical grammar compared to regular grammars exhibiting linear generalization. Overall, our work presents new insights on the origins of hierarchical generalization in transformers and provides a theoretical framework for studying generalization in language models.
APA, Harvard, Vancouver, ISO, and other styles
6

Gentner, Dedre. "Exhuming similarity." Behavioral and Brain Sciences 24, no. 4 (2001): 669. http://dx.doi.org/10.1017/s0140525x01350082.

Full text
Abstract:
Tenenbaum and Griffiths' paper attempts to subsume theories of similarity – including spatial models, featural models, and structure-mapping models – into a framework based on Bayesian generalization. But in so doing it misses significant phenomena of comparison. It would be more fruitful to examine how comparison processes suggest hypotheses than to try to derive similarity from Bayesian reasoning. [Shepard; Tenenbaum & Griffiths]
APA, Harvard, Vancouver, ISO, and other styles
7

Tenenbaum, Joshua B., and Thomas L. Griffiths. "Some specifics about generalization." Behavioral and Brain Sciences 24, no. 4 (2001): 762–78. http://dx.doi.org/10.1017/s0140525x01780089.

Full text
Abstract:
We address two kinds of criticisms of our Bayesian framework for generalization: those that question the correctness or the coverage of our analysis, and those that question its intrinsic value. Speaking to the first set, we clarify the origins and scope of our size principle for weighting hypotheses or features, focusing on its potential status as a cognitive universal; outline several variants of our framework to address additional phenomena of generalization raised in the commentaries; and discuss the subtleties of our claims about the relationship between similarity and generalization. Speaking to the second set, we identify the unique contributions that a rational statistical approach to generalization offers over traditional models that focus on mental representation and cognitive processes.
APA, Harvard, Vancouver, ISO, and other styles
8

Shalaeva, Vera, Alireza Fakhrizadeh Esfahani, Pascal Germain, and Mihaly Petreczky. "Improved PAC-Bayesian Bounds for Linear Regression." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 5660–67. http://dx.doi.org/10.1609/aaai.v34i04.6020.

Full text
Abstract:
In this paper, we improve the PAC-Bayesian error bound for linear regression derived in Germain et al. (2016). The improvements are two-fold. First, the proposed error bound is tighter, and converges to the generalization loss with a well-chosen temperature parameter. Second, the error bound also holds for training data that are not independently sampled. In particular, the error bound applies to certain time series generated by well-known classes of dynamical models, such as ARX models.
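For orientation, a sketch of the classical McAllester-style PAC-Bayesian bound (not the tighter, temperature-parameterized bound derived in this paper), with an assumed diagonal-Gaussian posterior and prior over predictor weights.

```python
import numpy as np

# McAllester-style PAC-Bayes bound: with probability 1 - delta, the
# generalization gap of the Gibbs predictor is at most
# sqrt((KL(Q||P) + ln(2*sqrt(n)/delta)) / (2n)).

def kl_diag_gaussians(mu_q, var_q, mu_p, var_p):
    # KL(Q||P) between diagonal Gaussians over predictor weights.
    return 0.5 * np.sum(np.log(var_p / var_q)
                        + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

n, delta = 10_000, 0.05
mu_q, var_q = np.array([0.3, -0.1]), np.array([0.05, 0.05])  # learned posterior (assumed)
mu_p, var_p = np.zeros(2), np.ones(2)                        # fixed prior (assumed)

kl = kl_diag_gaussians(mu_q, var_q, mu_p, var_p)
bound = np.sqrt((kl + np.log(2 * np.sqrt(n) / delta)) / (2 * n))
print(f"KL(Q||P) = {kl:.3f}, gap bound = {bound:.4f}")
```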
APA, Harvard, Vancouver, ISO, and other styles
9

MacKay, David J. C. "A Practical Bayesian Framework for Backpropagation Networks." Neural Computation 4, no. 3 (1992): 448–72. http://dx.doi.org/10.1162/neco.1992.4.3.448.

Full text
Abstract:
A quantitative and practical Bayesian framework is described for learning of mappings in feedforward networks. The framework makes possible (1) objective comparisons between solutions using alternative network architectures, (2) objective stopping rules for network pruning or growing procedures, (3) objective choice of magnitude and type of weight decay terms or additive regularizers (for penalizing large weights, etc.), (4) a measure of the effective number of well-determined parameters in a model, (5) quantified estimates of the error bars on network parameters and on network output, and (6) objective comparisons with alternative learning and interpolation models such as splines and radial basis functions. The Bayesian "evidence" automatically embodies "Occam's razor," penalizing overflexible and overcomplex models. The Bayesian approach helps detect poor underlying assumptions in learning models. For learning models well matched to a problem, a good correlation between generalization ability and the Bayesian evidence is obtained.
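The evidence framework is concrete for linear-in-parameters models, where the marginal likelihood has a standard closed form. The sketch below compares polynomial feature sets on synthetic linear data; unlike MacKay's full framework, the precision hyperparameters alpha and beta are fixed by hand rather than optimized.

```python
import numpy as np

# Evidence comparison for Bayesian linear models: the marginal
# likelihood automatically penalizes over-flexible polynomial bases.
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 30)
y = 1.0 - 2.0 * x + 0.1 * rng.standard_normal(30)  # truly linear data

def log_evidence(degree, alpha=1.0, beta=100.0):
    # Closed-form log marginal likelihood with prior w ~ N(0, I/alpha)
    # and noise precision beta (alpha, beta fixed here by assumption).
    X = np.vander(x, degree + 1, increasing=True)
    n, k = X.shape
    A = alpha * np.eye(k) + beta * X.T @ X
    m = beta * np.linalg.solve(A, X.T @ y)
    E = 0.5 * beta * np.sum((y - X @ m) ** 2) + 0.5 * alpha * m @ m
    return (0.5 * k * np.log(alpha) + 0.5 * n * np.log(beta) - E
            - 0.5 * np.linalg.slogdet(A)[1] - 0.5 * n * np.log(2 * np.pi))

for d in range(5):
    print(f"degree {d}: log evidence = {log_evidence(d):.1f}")  # peaks at 1
```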
APA, Harvard, Vancouver, ISO, and other styles
10

Hinton, Geoffrey E., and Zoubin Ghahramani. "Generative models for discovering sparse distributed representations." Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences 352, no. 1358 (1997): 1177–90. http://dx.doi.org/10.1098/rstb.1997.0101.

Full text
Abstract:
We describe a hierarchical, generative model that can be viewed as a nonlinear generalization of factor analysis and can be implemented in a neural network. The model uses bottom–up, top–down and lateral connections to perform Bayesian perceptual inference correctly. Once perceptual inference has been performed the connection strengths can be updated using a very simple learning rule that only requires locally available information. We demonstrate that the network learns to extract sparse, distributed, hierarchical representations.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Bayesian models of generalization"

1

Tang, Yun. "Hierarchical Generalization Models for Cognitive Decision-making Processes." The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1370560139.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Schoergendorfer, Angela. "BAYESIAN SEMIPARAMETRIC GENERALIZATIONS OF LINEAR MODELS USING POLYA TREES." UKnowledge, 2011. http://uknowledge.uky.edu/gradschool_diss/214.

Full text
Abstract:
In a Bayesian framework, prior distributions on a space of nonparametric continuous distributions may be defined using Polya trees. This dissertation addresses statistical problems for which the Polya tree idea can be utilized to provide efficient and practical methodological solutions. One problem considered is the estimation of risks, odds ratios, or other similar measures that are derived by specifying a threshold for an observed continuous variable. It has been previously shown that fitting a linear model to the continuous outcome under the assumption of a logistic error distribution leads to more efficient odds ratio estimates. We will show that deviations from the assumption of logistic error can result in great bias in odds ratio estimates. A one-step approximation to the Savage-Dickey ratio will be presented as a Bayesian test for distributional assumptions in the traditional logistic regression model. The approximation utilizes least-squares estimates in the place of a full Bayesian Markov Chain simulation, and the equivalence of inferences based on the two implementations will be shown. A framework for flexible, semiparametric estimation of risks in the case that the assumption of logistic error is rejected will be proposed. A second application deals with regression scenarios in which residuals are correlated and their distribution evolves over an ordinal covariate such as time. In the context of prediction, such complex error distributions need to be modeled carefully and flexibly. The proposed model introduces dependent, but separate Polya tree priors for each time point, thus pooling information across time points to model gradual changes in distributional shapes. Theoretical properties of the proposed model will be outlined, and its potential predictive advantages in simulated scenarios and real data will be demonstrated.
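To make the Polya tree idea concrete, the sketch below draws one random density from a canonical Polya tree prior on [0, 1], using the common Beta(c·m², c·m²) choice at level m; this is a textbook construction, not the dissertation's dependent-tree model.

```python
import numpy as np

# Sample a random piecewise-constant density from a Polya tree prior:
# dyadic partition to depth M, branch probabilities Beta(c*m^2, c*m^2).
rng = np.random.default_rng(1)
M, c = 8, 1.0
probs = np.ones(1)
for m in range(1, M + 1):
    left = rng.beta(c * m**2, c * m**2, size=probs.size)
    # Each interval splits in two; children get prob*left and prob*(1-left).
    probs = np.column_stack([probs * left, probs * (1 - left)]).ravel()

density = probs * 2**M   # density value on each of the 2^M dyadic bins
print(density[:8].round(3), "integral =", probs.sum().round(6))  # integrates to 1
```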
APA, Harvard, Vancouver, ISO, and other styles
3

Nearing, Grey Stephen. "Diagnostics and Generalizations for Parametric State Estimation." Diss., The University of Arizona, 2013. http://hdl.handle.net/10150/293533.

Full text
Abstract:
This dissertation is comprised of a collection of five distinct research projects which apply, evaluate and extend common methods for land surface data assimilation. The introduction of novel diagnostics and extensions of existing algorithms is motivated by an example, related to estimating agricultural productivity, of failed application of current methods. We subsequently develop methods, based on Shannon's theory of communication, to quantify the contributions from all possible factors to the residual uncertainty in state estimates after data assimilation, and to measure the amount of information contained in observations which is lost due to erroneous assumptions in the assimilation algorithm. Additionally, we discuss an appropriate interpretation of Shannon information which allows us to measure the amount of information contained in a model, and use this interpretation to measure the amount of information introduced during data assimilation-based system identification. Finally, we propose a generalization of the ensemble Kalman filter designed to alleviate one of the primary assumptions - that the observation function is linear.
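The baseline being generalized is the stochastic ensemble Kalman filter analysis step, sketched below with a linear observation operator H (the very assumption the dissertation relaxes); all numbers are synthetic.

```python
import numpy as np

# Stochastic EnKF analysis step with a linear observation operator.
rng = np.random.default_rng(2)
n_ens, n_state = 50, 3
X = rng.standard_normal((n_state, n_ens)) + np.array([[1.0], [0.0], [-1.0]])
H = np.array([[1.0, 0.0, 0.0]])          # observe first state component
R = np.array([[0.1]])                     # observation error covariance
y = np.array([1.5])                       # observation

Xm = X.mean(axis=1, keepdims=True)
A = X - Xm                                # ensemble anomalies
Pf = A @ A.T / (n_ens - 1)                # forecast covariance estimate
K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain

# Perturbed observations preserve the correct analysis spread.
Y = y[:, None] + rng.multivariate_normal(np.zeros(1), R, n_ens).T
Xa = X + K @ (Y - H @ X)
print("analysis mean:", Xa.mean(axis=1).round(3))
```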
APA, Harvard, Vancouver, ISO, and other styles
4

Schustek, Philipp. "Probabilistic models for human judgments about uncertainty in intuitive inference tasks." Doctoral thesis, Universitat Pompeu Fabra, 2018. http://hdl.handle.net/10803/586057.

Full text
Abstract:
Updating beliefs to maintain coherence with observational evidence is a cornerstone of rationality. This entails the compliance with probabilistic principles which acknowledge that real-world observations are consistent with several possible interpretations. This work presents two novel experimental paradigms and computational analyses of how human participants quantify uncertainty in perceptual inference tasks. Their behavioral responses feature non-trivial patterns of probabilistic inference such as reliability-based belief updating over hierarchical state representations of the environment. Despite characteristic generalization biases, behavior cannot be explained well by alternative heuristic accounts. These results suggest that uncertainty is an integral part of our inferences and that we indeed have the potential to resort to rational inference mechanisms that adhere to probabilistic principles. Furthermore, they appear consistent with ubiquitous representations of uncertainty posited by framework theories such as Bayesian hierarchical modeling and predictive coding.
APA, Harvard, Vancouver, ISO, and other styles
5

Matsuoka, Yoky. "Models of generalization in motor control." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/9634.

Full text
Abstract:
Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. Includes bibliographical references (p. 207–215).
Motor learning for humans is based on the capacity of the central nervous system (CNS) to perform computation and build an internal model for a task. This thesis investigates the CNS's ability to generalize a learned motor skill throughout neighboring spatial locations, its ability to divide the spatial generalization with variation of context, and proposes models of how these generalizations might be implemented. The investigation involved human psychophysics and simulations. The experimental paradigm was to study human neuromuscular adaptation to viscous force perturbation. When external perturbations were applied to the hand during a reaching task, the movement became distorted. This distortion motivated the CNS to produce counterbalancing forces, which resulted in the modification of the internal model for the task. Experimental results indicated that the introduction of interfering perturbations near the trained location disturbed the learned skill. In addition, if the same movement was perturbed in two opposite directions in sequence, neither of the forces was learned. Conversely, the adaptation to two opposite forces was possible within the same space when the forces were applied to two contextually distinguished movements. This was possible only when these movements were interleaved fairly regularly. During the adaptation to a difficult task, such as contextual distinction in the same spatial location, humans often used other strategies to avoid learning the actual paradigm. These strategies allowed subjects to perform the task without changing their internal models appropriately, and thus this was also investigated as a part of the learning process. Finally, a multiple function model was constructed which allowed multiple contextually dependent functions to co-exist within one state space. The sensory feedback affected all functions; however, only one function was active to output a motor command. This model supported the experimental data presented. The results of the psychophysical experiments as well as an explanation of the simulations and models that were developed will be presented in this thesis.
APA, Harvard, Vancouver, ISO, and other styles
6

Kim, Yong Ku. "Bayesian multiresolution dynamic models." Columbus, Ohio : Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1180465799.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Quintana, José Mario. "Multivariate Bayesian forecasting models." Thesis, University of Warwick, 1987. http://wrap.warwick.ac.uk/34805/.

Full text
Abstract:
This thesis concerns theoretical and practical Bayesian modelling of multivariate time series. Our main goal is to introduce useful, flexible, and tractable multivariate forecasting models and provide the necessary theory for their practical implementation. After a brief review of the dynamic linear model, we formulate a new matrix-variate generalization in which a significant part of the variance-covariance structure is unknown, and a new general algorithm, based on the sweep operator, is provided for its recursive implementation. This enables important advances to be made in long-standing problems related to the specification of the variances. We address the problem of plug-in estimation and apply our results in the context of dynamic linear models. We extend our matrix-variate model by considering the unknown part of the variance-covariance structure to be dynamic. Furthermore, we formulate the dynamic recursive model, which is a general counterpart of fully recursive econometric models. The latter part of the dissertation is devoted to modelling aspects. The usefulness of the methods proposed is illustrated with several examples involving real and simulated data.
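The scalar building block of such models is the standard dynamic linear model recursion in West and Harrison's {F, G, V, W} notation, sketched below for a local-level model; the matrix-variate extensions developed in the thesis are not reproduced.

```python
import numpy as np

# One forecast/update recursion of a univariate dynamic linear model.
F, G = np.array([[1.0]]), np.array([[1.0]])   # local-level DLM (assumption)
V, W = 0.5, 0.1                                # observation / evolution variances
m, C = np.array([0.0]), np.array([[1.0]])      # prior mean and variance

def dlm_step(m, C, y):
    a = G @ m                          # prior mean at t
    R = G @ C @ G.T + W                # prior variance at t
    f = F @ a                          # one-step forecast mean
    Q = F @ R @ F.T + V                # forecast variance
    K = R @ F.T / Q                    # adaptive coefficient
    m_new = a + (K * (y - f)).ravel()  # posterior mean
    C_new = R - K @ F @ R              # posterior variance
    return m_new, C_new

for y in [1.2, 0.9, 1.4]:
    m, C = dlm_step(m, C, y)
    print(f"y={y}: posterior mean {m[0]:.3f}, var {C[0, 0]:.3f}")
```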
APA, Harvard, Vancouver, ISO, and other styles
8

McDonald, Daniel J. "Generalization Error Bounds for Time Series." Research Showcase @ CMU, 2012. http://repository.cmu.edu/dissertations/184.

Full text
Abstract:
In this thesis, I derive generalization error bounds — bounds on the expected inaccuracy of the predictions — for time series forecasting models. These bounds allow forecasters to select among competing models, and to declare that, with high probability, their chosen model will perform well — without making strong assumptions about the data generating process or appealing to asymptotic theory. Expanding upon results from statistical learning theory, I demonstrate how these techniques can help time series forecasters to choose models which behave well under uncertainty. I also show how to estimate the β-mixing coefficients for dependent data so that my results can be used empirically. I use the bound explicitly to evaluate different predictive models for the volatility of IBM stock and for a standard set of macroeconomic variables. Taken together my results show how to control the generalization error of time series models with fixed or growing memory.
APA, Harvard, Vancouver, ISO, and other styles
9

Ridgeway, Gregory Kirk. "Generalization of boosting algorithms and applications of Bayesian inference for massive datasets /." Thesis, Connect to this title online; UW restricted, 1999. http://hdl.handle.net/1773/8986.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Michel, Axel. "Personalising privacy constraints in Generalization-based Anonymization Models." Thesis, Bourges, INSA Centre Val de Loire, 2019. http://www.theses.fr/2019ISAB0001/document.

Full text
Abstract:
The benefit of performing Big Data computations over individuals' microdata is manifold, in the medical, energy, or transportation fields to cite only a few, and this interest is growing with the emergence of smart-disclosure initiatives around the world. However, these computations often expose microdata to privacy leakages, explaining the reluctance of individuals to participate in studies despite the privacy guarantees promised by statistical institutes. To regain individuals' trust, it becomes essential to propose user-empowerment solutions, that is to say, allowing individuals to control the privacy parameters used to make computations over their microdata. This work proposes a novel concept of personalized anonymisation based on data generalization and user empowerment. Firstly, this manuscript proposes a novel approach to push personalized privacy guarantees into the processing of database queries so that individuals can disclose different amounts of information (i.e., data at different levels of accuracy) depending on their own perception of the risk. Moreover, we propose a decentralized computing infrastructure based on secure hardware enforcing these personalized privacy guarantees all along the query execution process. Secondly, this manuscript studies the personalization of anonymity guarantees when publishing data. We propose the adaptation of existing heuristics and a new approach based on constraint programming. Experiments have been done to show the impact of such personalization on data quality. Individuals' privacy constraints have been built and simulated realistically, based on the results of sociological studies.
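A toy sketch of the core idea of generalization with personalized constraints: coarsen a quasi-identifier until every record's equivalence class is at least as large as its owner's chosen k. The records, attributes, and k values are invented for illustration.

```python
from collections import Counter

# Each record carries its own anonymity requirement k (user empowerment).
records = [  # (age, zip_prefix, personal k) -- fabricated toy rows
    (34, "750", 2), (36, "750", 3), (35, "750", 2),
    (52, "751", 2), (54, "751", 4), (53, "751", 2), (55, "751", 2),
]

def generalize(age, width):
    # Replace an exact age by its bucket of the given width.
    lo = (age // width) * width
    return f"{lo}-{lo + width - 1}"

for width in (1, 5, 10, 20):
    classes = Counter((generalize(a, width), z) for a, z, _ in records)
    ok = all(classes[(generalize(a, width), z)] >= k for a, z, k in records)
    print(f"age bucket width {width}: satisfies all personal k? {ok}")
```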
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Bayesian models of generalization"

1

Kempthorne, Peter J. Bayesian parametric models. Alfred P. Sloan School of Management, Massachusetts Institute of Technology, 1989.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Quintana, Jose Mario. Multivariate Bayesian forecasting models. typescript, 1987.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Barber, David, A. Taylan Cemgil, and Silvia Chiappa, eds. Bayesian Time Series Models. Cambridge University Press, 2009. http://dx.doi.org/10.1017/cbo9780511984679.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Barber, David. Bayesian time series models. Cambridge University Press, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Jansen, Paulus Gerardus Wilhelmus, ed. Validity generalization revisited. Delft University Press, 1986.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Hooten, Mevin B., and Trevor J. Hefley. Bringing Bayesian Models to Life. CRC Press, 2019. http://dx.doi.org/10.1201/9780429243653.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Young, Simon Christopher. Bayesian models and repeated games. typescript, 1989.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

West, Mike, and Jeff Harrison. Bayesian Forecasting and Dynamic Models. Springer New York, 1989. http://dx.doi.org/10.1007/978-1-4757-9365-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Congdon, Peter. Bayesian Models for Categorical Data. John Wiley & Sons, Ltd, 2005. http://dx.doi.org/10.1002/0470092394.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Weber, Philippe, and Christophe Simon. Benefits of Bayesian Network Models. John Wiley & Sons, Inc., 2016. http://dx.doi.org/10.1002/9781119347316.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Bayesian models of generalization"

1

Zhou, Wan-Huan, Zhen-Yu Yin, and Ka-Veng Yuen. "Model Class Selection for Sand with Generalization Ability Evaluation." In Practice of Bayesian Probability Theory in Geotechnical Engineering. Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-9105-1_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Huang, Huateng, Tianxing Chen, Jianfu Huang, Ziyou Feng, Zhenjie Mo, and Tao Wu. "The Generalization Ability of the Tire Model Based on Bayesian Regularized Artificial Neural Network." In Proceedings of China SAE Congress 2020: Selected Papers. Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-2090-4_16.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Otter, Thomas. "Bayesian Models." In Handbook of Market Research. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-319-05542-8_24-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Hooten, Mevin B., and Trevor J. Hefley. "Bayesian Models." In Bringing Bayesian Models to Life. CRC Press, 2019. http://dx.doi.org/10.1201/9780429243653-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Chakraborty, Ashis Kumar, Soumen Dey, Poulami Chakraborty, and Aleena Chanda. "Bayesian Models." In Springer Handbook of Engineering Statistics. Springer London, 2012. http://dx.doi.org/10.1007/978-1-4471-7503-2_37.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Gulati, Sneh, and William J. Padgett. "Bayesian Models." In Parametric and Nonparametric Inference from Record-Breaking Data. Springer New York, 2003. http://dx.doi.org/10.1007/978-0-387-21549-5_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Otter, Thomas. "Bayesian Models." In Handbook of Market Research. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-319-57413-4_24.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Sucar, Luis Enrique. "Bayesian Classifiers." In Probabilistic Graphical Models. Springer London, 2015. http://dx.doi.org/10.1007/978-1-4471-6699-3_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Sucar, Luis Enrique. "Bayesian Classifiers." In Probabilistic Graphical Models. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-61943-5_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Zwanzig, Silvelyn, and Rauf Ahmad. "Normal Linear Models." In Bayesian Inference. Chapman and Hall/CRC, 2024. http://dx.doi.org/10.1201/9781003221623-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Bayesian models of generalization"

1

Wold, Sondre, Étienne Simon, Lucas Charpentier, Egor Kostylev, Erik Velldal, and Lilja Øvrelid. "Compositional Generalization with Grounded Language Models." In Findings of the Association for Computational Linguistics ACL 2024. Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.findings-acl.205.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ismayilzada, Mete, Defne Circi, Jonne Sälevä, et al. "Evaluating Morphological Compositional Generalization in Large Language Models." In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers). Association for Computational Linguistics, 2025. https://doi.org/10.18653/v1/2025.naacl-long.59.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Fang, Yanbo, Zuohui Fu, Xin Dong, Yongfeng Zhang, and Gerard de Melo. "Assessing Combinational Generalization of Language Models in Biased Scenarios." In Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 2: Short Papers). Association for Computational Linguistics, 2022. http://dx.doi.org/10.18653/v1/2022.aacl-short.48.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Li, Chuanhao, Chenchen Jing, Zhen Li, Mingliang Zhai, Yuwei Wu, and Yunde Jia. "In-Context Compositional Generalization for Large Vision-Language Models." In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.emnlp-main.996.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Choudhary, Ashok, Cornelius Thiels, and Hojjat Salehinejad. "Synthetic Feature Augmentation Improves Generalization Performance of Language Models." In 2025 IEEE Symposium on Computational Intelligence in Natural Language Processing and Social Media (CI-NLPSoMe Companion). IEEE, 2025. https://doi.org/10.1109/ci-nlpsomecompanion65206.2025.10977896.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Bhaskar, Adithya, Dan Friedman, and Danqi Chen. "The Heuristic Core: Understanding Subnetwork Generalization in Pretrained Language Models." In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.acl-long.774.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Hosseini, Mohammad Javad, Andrey Petrov, Alex Fabrikant, and Annie Louis. "A synthetic data approach for domain generalization of NLI models." In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.acl-long.120.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Wang, Suyuchen, Ivan Kobyzev, Peng Lu, Mehdi Rezagholizadeh, and Bang Liu. "Resonance RoPE: Improving Context Length Generalization of Large Language Models." In Findings of the Association for Computational Linguistics ACL 2024. Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.findings-acl.32.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Lee, Gyuseong, Wooseok Jang, Jinhyeon Kim, Jaewoo Jung, and Seungryong Kim. "Domain Generalization using Large Pretrained Models with Mixture-of-Adapters." In 2025 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV). IEEE, 2025. https://doi.org/10.1109/wacv61041.2025.00801.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Ahsan, Muhammad, Guy Ben-Yosef, and Gemma Roig. "Beyond Data Augmentations: Generalization Abilities of Few-Shot Segmentation Models." In 20th International Conference on Computer Vision Theory and Applications. SCITEPRESS - Science and Technology Publications, 2025. https://doi.org/10.5220/0013179200003912.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Bayesian models of generalization"

1

Pasupuleti, Murali Krishna. Decision Theory and Model-Based AI: Probabilistic Learning, Inference, and Explainability. National Education Services, 2025. https://doi.org/10.62311/nesx/rriv525.

Full text
Abstract:
Decision theory and model-based AI provide the foundation for probabilistic learning, optimal inference, and explainable decision-making, enabling AI systems to reason under uncertainty, optimize long-term outcomes, and provide interpretable predictions. This research explores Bayesian inference, probabilistic graphical models, reinforcement learning (RL), and causal inference, analyzing their role in AI-driven decision systems across various domains, including healthcare, finance, robotics, and autonomous systems. The study contrasts model-based and model-free approaches in decision-making, emphasizing the trade-offs between sample efficiency, generalization, and computational complexity. Special attention is given to uncertainty quantification, explainability techniques, and ethical AI, ensuring AI models remain transparent, accountable, and risk-aware. By integrating probabilistic reasoning, deep learning, and structured decision models, this research highlights how AI can make reliable, interpretable, and human-aligned decisions in complex, high-stakes environments. The findings underscore the importance of hybrid AI frameworks, explainable probabilistic models, and uncertainty-aware optimization, shaping the future of trustworthy, scalable, and ethically responsible AI-driven decision-making.
Keywords: decision theory, model-based AI, probabilistic learning, Bayesian inference, probabilistic graphical models, reinforcement learning, Markov decision processes, uncertainty quantification, explainable AI, causal inference, model-free learning, Monte Carlo methods, variational inference, hybrid AI frameworks, ethical AI, risk-aware decision-making, optimal control, trust in AI, interpretable machine learning, autonomous systems, financial AI, healthcare AI, AI governance, explainability techniques, real-world AI applications.
APA, Harvard, Vancouver, ISO, and other styles
2

Howland, Scott, Jessica Yaros, and Noriaki Kono. MetaText: Compositional Generalization in Deep Language Models. Office of Scientific and Technical Information (OSTI), 2022. http://dx.doi.org/10.2172/1987883.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Nadiga, Balasubramanya T., and Daniel Livescu. Bayesian Analysis of RANS Models. Office of Scientific and Technical Information (OSTI), 2016. http://dx.doi.org/10.2172/1257091.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

George, Edward I., and Christian P. Robert. Capture-Recapture Models and Bayesian Sampling. Defense Technical Information Center, 1990. http://dx.doi.org/10.21236/ada226853.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Wayne, Martin. Hierarchical Bayesian Models for Assessing Reliability. DEVCOM Army Research Laboratory, 2023. http://dx.doi.org/10.21236/ad1209640.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Gelfand, Alan E., and Bani K. Mallick. Bayesian Analysis of Semiparametric Proportional Hazards Models. Defense Technical Information Center, 1994. http://dx.doi.org/10.21236/ada279394.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Griffiths, Thomas L. Theory-based Bayesian Models of Inductive Inference. Defense Technical Information Center, 2010. http://dx.doi.org/10.21236/ada566965.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Tenenbaum, Joshua B. Theory-Based Bayesian Models of Inductive Inference. Defense Technical Information Center, 2010. http://dx.doi.org/10.21236/ada567195.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Kitagawa, Toru, and Raffaella Giacomini. Robust Bayesian inference for set-identified models. The IFS, 2020. http://dx.doi.org/10.1920/wp.cem.2020.1220.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Giacomini, Raffaella, and Toru Kitagawa. Robust Bayesian inference for set-identified models. The IFS, 2018. http://dx.doi.org/10.1920/wp.cem.2018.6118.

Full text
APA, Harvard, Vancouver, ISO, and other styles