Academic literature on the topic 'Latent variable models'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Latent variable models.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Latent variable models"

1

Ziegel, Eric R., and J. Loehlin. "Latent Variable Models." Technometrics 35, no. 4 (1993): 465. http://dx.doi.org/10.2307/1270304.

2

Sarle, Warren S. "Latent Variable Models." Technometrics 31, no. 4 (1989): 484–85. http://dx.doi.org/10.1080/00401706.1989.10488603.

3

Song, Xinyuan, Zhaohua Lu, and Xiangnan Feng. "Latent variable models with nonparametric interaction effects of latent variables." Statistics in Medicine 33, no. 10 (2013): 1723–37. http://dx.doi.org/10.1002/sim.6065.

4

Bartolucci, Francesco, Silvia Pandolfi, and Fulvia Pennoni. "Discrete Latent Variable Models." Annual Review of Statistics and Its Application 9, no. 1 (2022): 425–52. http://dx.doi.org/10.1146/annurev-statistics-040220-091910.

Abstract:
We review the discrete latent variable approach, which is very popular in statistics and related fields. It allows us to formulate interpretable and flexible models that can be used to analyze complex datasets in the presence of articulated dependence structures among variables. Specific models including discrete latent variables are illustrated, such as finite mixture, latent class, hidden Markov, and stochastic block models. Algorithms for maximum likelihood and Bayesian estimation of these models are reviewed, focusing, in particular, on the expectation–maximization algorithm and the Markov chain Monte Carlo method with data augmentation. Model selection, particularly concerning the number of support points of the latent distribution, is discussed. The approach is illustrated by summarizing applications available in the literature; a brief review of the main software packages to handle discrete latent variable models is also provided. Finally, some possible developments in this literature are suggested.
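The EM algorithm highlighted in the abstract above is easiest to see for the simplest discrete latent variable model, a latent class model with binary items. The sketch below is a minimal NumPy illustration written for this listing, not code from the cited review; the parameter names (pi for class weights, theta for item probabilities) are our own choices.

```python
# Minimal EM for a latent class model (binary items, K discrete classes).
# Illustrative sketch only; it is not code from the cited review.
import numpy as np

def latent_class_em(X, K, n_iter=200, seed=0, eps=1e-9):
    """X: (n, J) binary array. Returns class weights pi (K,), item
    probabilities theta (K, J), and responsibilities r (n, K)."""
    rng = np.random.default_rng(seed)
    n, J = X.shape
    pi = np.full(K, 1.0 / K)                      # P(class = k)
    theta = rng.uniform(0.25, 0.75, size=(K, J))  # P(x_j = 1 | class = k)
    for _ in range(n_iter):
        # E-step: log P(x_i, class = k) for every observation and class.
        log_lik = (X @ np.log(theta + eps).T
                   + (1 - X) @ np.log(1 - theta + eps).T
                   + np.log(pi + eps))
        log_norm = np.logaddexp.reduce(log_lik, axis=1, keepdims=True)
        r = np.exp(log_lik - log_norm)            # responsibilities
        # M-step: weighted maximum-likelihood updates.
        nk = r.sum(axis=0)
        pi = nk / n
        theta = (r.T @ X) / nk[:, None]
    return pi, theta, r

# Toy usage: two well-separated classes of binary response patterns.
rng = np.random.default_rng(1)
z = rng.integers(0, 2, size=400)
probs = np.where(z[:, None] == 0, 0.2, 0.8)
X = rng.binomial(1, probs, size=(400, 6))
pi_hat, theta_hat, _ = latent_class_em(X, K=2)
print(np.round(pi_hat, 2), np.round(theta_hat, 2))
```

The E-step computes class responsibilities in log space for numerical stability; the M-step re-estimates the class weights and item probabilities from those responsibilities.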
5

Clogg, Clifford C., and Ton Heinen. "Discrete Latent Variable Models." Journal of the American Statistical Association 89, no. 427 (1994): 1141. http://dx.doi.org/10.2307/2290950.

6

Liao, Tim Futing, and Ton Heinen. "Discrete Latent Variable Models." Contemporary Sociology 23, no. 6 (1994): 895. http://dx.doi.org/10.2307/2076117.

7

Molenaar, Peter C. M. "Latent variable models are network models." Behavioral and Brain Sciences 33, no. 2-3 (2010): 166. http://dx.doi.org/10.1017/s0140525x10000798.

Abstract:
Cramer et al. present an original and interesting network perspective on comorbidity and contrast this perspective with a more traditional interpretation of comorbidity in terms of latent variable theory. My commentary focuses on the relationship between the two perspectives; that is, it aims to qualify the presumed contrast between interpretations in terms of networks and latent variables.
8

Irincheeva, Irina, Eva Cantoni, and Marc G. Genton. "Generalized Linear Latent Variable Models with Flexible Distribution of Latent Variables." Scandinavian Journal of Statistics 39, no. 4 (2012): 663–80. http://dx.doi.org/10.1111/j.1467-9469.2011.00777.x.

9

Eickhoff, Jens C., and Yasuo Amemiya. "Latent variable models for misclassified polytomous outcome variables." British Journal of Mathematical and Statistical Psychology 58, no. 2 (2005): 359–75. http://dx.doi.org/10.1348/000711005x64970.

10

Kvalheim, Olav M., Reidar Arneberg, Olav Bleie, Tarja Rajalahti, Age K. Smilde, and Johan A. Westerhuis. "Variable importance in latent variable regression models." Journal of Chemometrics 28, no. 8 (2014): 615–22. http://dx.doi.org/10.1002/cem.2626.


Dissertations / Theses on the topic "Latent variable models"

1

Moustaki, Irini. "Latent variable models for mixed manifest variables." Thesis, London School of Economics and Political Science (University of London), 1996. http://etheses.lse.ac.uk/78/.

Abstract:
Latent variable models are widely used in social sciences in which interest is centred on entities such as attitudes, beliefs or abilities for which there exist no direct measuring instruments. Latent modelling tries to extract these entities, here described as latent (unobserved) variables, from measurements on related manifest (observed) variables. Methodology already exists for fitting a latent variable model to manifest data that is either categorical (latent trait and latent class analysis) or continuous (factor analysis and latent profile analysis). In this thesis a latent trait and a latent class model are presented for analysing the relationships among a set of mixed manifest variables using one or more latent variables. The set of manifest variables contains metric (continuous or discrete) and binary items. The latent dimension is continuous for the latent trait model and discrete for the latent class model. Scoring methods for allocating individuals on the identified latent dimensions based on their responses to the mixed manifest variables are discussed. Item nonresponse is also discussed in attitude scales with a mixture of binary and metric variables using the latent trait model. The estimation and the scoring methods for the latent trait model have been generalized for conditional distributions of the observed variables given the vector of latent variables other than the normal and the Bernoulli in the exponential family. To illustrate the use of the mixed model four data sets have been analyzed. Two of the data sets contain five memory questions, the first on Thatcher's resignation and the second on the Hillsborough football disaster; these five questions were included in BMRBI's August 1993 face-to-face omnibus survey. The third and the fourth data sets are from the 1990 and 1991 British Social Attitudes surveys; the questions which have been analyzed are from the sexual attitudes sections and the environment section respectively.
2

Xiong, Hao. "Diversified Latent Variable Models." Thesis, The University of Sydney, 2018. http://hdl.handle.net/2123/18512.

Abstract:
A latent variable model is a common probabilistic framework which aims to estimate the hidden states of observations. More specifically, the hidden states can be the position of a robot or the low-dimensional representation of an observation. Various latent variable models have been explored, such as hidden Markov models (HMM), Gaussian mixture models (GMM), and the Bayesian Gaussian process latent variable model (BGPLVM). Moreover, these latent variable models have been successfully applied to a wide range of fields, such as robotic navigation, image and video compression, and natural language processing. To make the learning of latent variables more efficient and robust, some approaches seek to integrate latent variables with related priors. For instance, a dynamic prior can be incorporated so that the learned latent variables take the time sequence into account. Besides, some methods introduce inducing points as a small set representing the full set of latent variables to enhance the optimization speed of the model. Though those priors effectively improve the robustness of latent variable models, the learned latent variables are inclined to be dense rather than diverse; that is to say, there is significant overlap between the generated latent variables. Consequently, the latent variable model will be ambiguous after optimization. Clearly, a proper diversity prior plays a pivotal role in having latent variables capture more diverse features of the observation data. In this thesis, we propose diversified latent variable models incorporating different types of diversity priors, such as single/dual diversity encouraging priors, a multi-layered DPP prior, and a shared diversity prior. Furthermore, we also illustrate how to formulate the diversity priors in different latent variable models and perform learning and inference on the reformulated latent variable models.
3

Creagh-Osborne, Jane. "Latent variable generalized linear models." Thesis, University of Plymouth, 1998. http://hdl.handle.net/10026.1/1885.

Abstract:
Generalized Linear Models (GLMs) (McCullagh and Nelder, 1989) provide a unified framework for fixed effect models where response data arise from exponential family distributions. Much recent research has attempted to extend the framework to include random effects in the linear predictors. Different methodologies have been employed to solve different motivating problems, for example Generalized Linear Mixed Models (Clayton, 1994) and Multilevel Models (Goldstein, 1995). A thorough review and classification of this and related material is presented. In Item Response Theory (IRT) subjects are tested using banks of pre-calibrated test items. A useful model is based on the logistic function with a binary response dependent on the unknown ability of the subject. Item parameters contribute to the probability of a correct response. Within the framework of the GLM, a latent variable, the unknown ability, is introduced as a new component of the linear predictor. This approach affords the opportunity to structure intercept and slope parameters so that item characteristics are represented. A methodology for fitting such GLMs with latent variables, based on the EM algorithm (Dempster, Laird and Rubin, 1977) and using standard Generalized Linear Model fitting software GLIM (Payne, 1987) to perform the expectation step, is developed and applied to a model for binary response data. Accurate numerical integration to evaluate the likelihood functions is a vital part of the computational process. A study of the comparative benefits of two different integration strategies is undertaken and leads to the adoption, unusually, of Gauss-Legendre rules. It is shown how the fitting algorithms are implemented with GLIM programs which incorporate FORTRAN subroutines. Examples from IRT are given. A simulation study is undertaken to investigate the sampling distributions of the estimators and the effect of certain numerical attributes of the computational process. Finally a generalized latent variable model is developed for responses from any exponential family distribution.
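The thesis above fits latent-variable GLMs for item response theory by EM, evaluating the marginal likelihood with Gauss-Legendre quadrature. The following sketch shows how the marginal log-likelihood of a two-parameter logistic (2PL) IRT model can be evaluated by quadrature over the latent ability. It is an illustrative NumPy/SciPy approximation under our own parameterisation (discrimination a, easiness b), not code from the thesis.

```python
# Sketch: marginal log-likelihood of a 2PL IRT model, integrating the latent
# ability out numerically with a Gauss-Legendre rule on [-6, 6].
import numpy as np
from scipy.stats import norm
from scipy.special import expit, logsumexp

def irt_2pl_loglik(X, a, b, n_nodes=41, lo=-6.0, hi=6.0):
    """X: (n, J) binary responses; a, b: (J,) item parameters."""
    t, w = np.polynomial.legendre.leggauss(n_nodes)            # rule on [-1, 1]
    theta = 0.5 * (hi - lo) * t + 0.5 * (hi + lo)               # rescaled nodes
    log_w = np.log(0.5 * (hi - lo) * w) + norm.logpdf(theta)    # fold in N(0,1) prior
    p = expit(theta[:, None] * a + b)                           # (n_nodes, J)
    log_px = X @ np.log(p).T + (1 - X) @ np.log(1 - p).T        # (n, n_nodes)
    return logsumexp(log_px + log_w, axis=1).sum()

# Toy usage with simulated item responses.
rng = np.random.default_rng(0)
n, J = 500, 8
a_true, b_true = rng.uniform(0.8, 2.0, J), rng.normal(0, 1, J)
ability = rng.normal(size=n)
X = rng.binomial(1, expit(ability[:, None] * a_true + b_true))
print(irt_2pl_loglik(X, a_true, b_true))
```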
4

Dallaire, Patrick. "Bayesian nonparametric latent variable models." Doctoral thesis, Université Laval, 2016. http://hdl.handle.net/20.500.11794/26848.

Abstract:
One of the important problems in machine learning is determining the complexity of the model to learn. Too much complexity leads to overfitting, which finds structures that do not actually exist in the data, while too low complexity leads to underfitting, which means that the expressiveness of the model is insufficient to capture all the structures present in the data. For some probabilistic models, the complexity depends on the introduction of one or more latent variables whose role is to explain the generative process of the data. There are various approaches to identify the appropriate number of latent variables of a model. This thesis covers various Bayesian nonparametric methods capable of determining the number of latent variables to be used and their dimensionality. The popularization of Bayesian nonparametric statistics in the machine learning community is fairly recent. Their main attraction is the fact that they offer highly flexible models and their complexity scales appropriately with the amount of available data. In recent years, research on Bayesian nonparametric learning methods has focused on three main aspects: the construction of new models, the development of inference algorithms and new applications. This thesis presents our contributions to these three topics of research in the context of learning latent variable models. Firstly, we introduce the Pitman-Yor process mixture of Gaussians, a model for learning infinite mixtures of Gaussians. We also present an inference algorithm to discover the latent components of the model and we evaluate it on two practical robotics applications. Our results demonstrate that the proposed approach outperforms, both in performance and flexibility, the traditional learning approaches. Secondly, we propose the extended cascading Indian buffet process, a Bayesian nonparametric probability distribution on the space of directed acyclic graphs. In the context of Bayesian networks, this prior is used to identify the presence of latent variables and the network structure among them. A Markov chain Monte Carlo inference algorithm is presented and evaluated on structure identification problems as well as on density estimation problems. Lastly, we propose the Indian chefs process, a model more general than the extended cascading Indian buffet process for learning graphs and orders. The advantage of the new model is that it accepts connections among observable variables and it takes into account the order of the variables. We also present a reversible jump Markov chain Monte Carlo inference algorithm which jointly learns graphs and orders. Experiments are conducted on density estimation problems and testing independence hypotheses. This model is the first Bayesian nonparametric model capable of learning Bayesian networks with completely arbitrary graph structures.
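The Pitman-Yor process mixture introduced in the thesis above rests on the stick-breaking construction of the Pitman-Yor process. The sketch below draws a truncated set of mixture weights from PY(d, alpha) and samples cluster assignments from them; it is a minimal illustration under an assumed truncation level K, not the author's implementation.

```python
# Truncated stick-breaking construction of a Pitman-Yor process (sketch).
import numpy as np

def pitman_yor_weights(alpha, d, K, rng):
    """Draw K truncated stick-breaking weights from PY(d, alpha)."""
    # V_k ~ Beta(1 - d, alpha + k * d), k = 1..K
    v = rng.beta(1.0 - d, alpha + d * np.arange(1, K + 1))
    v[-1] = 1.0                                    # close the stick at level K
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    w = v * remaining
    return w / w.sum()                             # guard against round-off

rng = np.random.default_rng(0)
w = pitman_yor_weights(alpha=1.0, d=0.5, K=50, rng=rng)
z = rng.choice(len(w), size=1000, p=w)             # cluster assignments
print(w[:5].round(3), "clusters used:", len(np.unique(z)))
```

In a full mixture model each retained component would also receive Gaussian parameters; the weights alone already show the power-law cluster growth that distinguishes the Pitman-Yor prior from the Dirichlet process.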
5

Christmas, Jacqueline. "Robust spatio-temporal latent variable models." Thesis, University of Exeter, 2011. http://hdl.handle.net/10036/3051.

Abstract:
Principal Component Analysis (PCA) and Canonical Correlation Analysis (CCA) are widely-used mathematical models for decomposing multivariate data. They capture spatial relationships between variables, but ignore any temporal relationships that might exist between observations. Probabilistic PCA (PPCA) and Probabilistic CCA (ProbCCA) are versions of these two models that explain the statistical properties of the observed variables as linear mixtures of an alternative, hypothetical set of hidden, or latent, variables and explicitly model noise. Both the noise and the latent variables are assumed to be Gaussian distributed. This thesis introduces two new models, named PPCA-AR and ProbCCA-AR, that augment PPCA and ProbCCA respectively with autoregressive processes over the latent variables to additionally capture temporal relationships between the observations. To make PPCA-AR and ProbCCA-AR robust to outliers and able to model leptokurtic data, the Gaussian assumptions are replaced with infinite scale mixtures of Gaussians, using the Student-t distribution. Bayesian inference calculates posterior probability distributions for each of the parameter variables, from which we obtain a measure of confidence in the inference. It avoids the pitfalls associated with the maximum likelihood method: integrating over all possible values of the parameter variables guards against overfitting. For these new models the integrals required for exact Bayesian inference are intractable; instead a method of approximation, the variational Bayesian approach, is used. This enables the use of automatic relevance determination to estimate the model orders. PPCA-AR and ProbCCA-AR can be viewed as linear dynamical systems, so the forward-backward algorithm, also known as the Baum-Welch algorithm, is used as an efficient method for inferring the posterior distributions of the latent variables. The exact algorithm is tractable because Gaussian assumptions are made regarding the distribution of the latent variables. This thesis introduces a variational Bayesian forward-backward algorithm based on Student-t assumptions. The new models are demonstrated on synthetic datasets and on real remote sensing and EEG data.
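The thesis above extends probabilistic PCA (PPCA) with autoregressive latent processes and Student-t noise. As background, the following sketch computes the standard closed-form maximum-likelihood PPCA solution of Tipping and Bishop (loadings from the top eigenvectors, noise variance from the discarded eigenvalues); it is illustrative only and does not implement the robust spatio-temporal models of the thesis.

```python
# Closed-form maximum-likelihood estimate for probabilistic PCA (sketch).
import numpy as np

def ppca_ml(X, q):
    """X: (n, p) data; q: latent dimension. Returns W (p, q), sigma2, mu."""
    n, p = X.shape
    mu = X.mean(axis=0)
    S = np.cov(X - mu, rowvar=False)               # (p, p) sample covariance
    evals, evecs = np.linalg.eigh(S)
    order = np.argsort(evals)[::-1]                # sort eigenvalues descending
    evals, evecs = evals[order], evecs[:, order]
    sigma2 = evals[q:].mean()                      # average discarded variance
    W = evecs[:, :q] * np.sqrt(np.maximum(evals[:q] - sigma2, 0.0))
    return W, sigma2, mu

# Toy usage: 5-dimensional data generated from 2 latent factors plus noise.
rng = np.random.default_rng(0)
Z = rng.normal(size=(300, 2))
A = rng.normal(size=(2, 5))
X = Z @ A + 0.1 * rng.normal(size=(300, 5))
W, sigma2, mu = ppca_ml(X, q=2)
print(W.shape, round(sigma2, 4))
```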
6

Paquet, Ulrich. "Bayesian inference for latent variable models." Thesis, University of Cambridge, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.613111.

7

O'Sullivan, Aidan Michael. "Bayesian latent variable models with applications." Thesis, Imperial College London, 2013. http://hdl.handle.net/10044/1/19191.

Abstract:
The massive increases in computational power that have occurred over the last two decades have contributed to the increasing prevalence of Bayesian reasoning in statistics. The often intractable integrals required as part of the Bayesian approach to inference can be approximated or estimated using intensive sampling or optimisation routines. This has extended the realm of applications beyond simple models for which fully analytic solutions are possible. Latent variable models are ideally suited to this approach as it provides a principled method for resolving one of the more difficult issues associated with this class of models, the question of the appropriate number of latent variables. This thesis explores the use of latent variable models in a number of different settings employing Bayesian methods for inference. The first strand of this research focusses on the use of a latent variable model to perform simultaneous clustering and latent structure analysis of multivariate data. In this setting the latent variables are of key interest providing information on the number of sub-populations within a heterogeneous data set and also the differences in latent structure that define them. In the second strand latent variable models are used as a tool to study relational or network data. The analysis of this type of data, which describes the interconnections between different entities or nodes, is complicated due to the dependencies between nodes induced by these connections. The conditional independence assumptions of the latent variable framework provide a means of taking these dependencies into account, the nodes are independent conditioned on an associated latent variable. This allows us to perform model based clustering of a network making inference on the number of clusters. Finally the latent variable representation of the network, which captures the structure of the network in a different form, can be studied as part of a latent variable framework for detecting differences between networks. Approximation schemes are required as part of the Bayesian approach to model estimation. The two methods that are considered in this thesis are stochastic Markov chain Monte Carlo methods and deterministic variational approximations. Where possible these are extended to incorporate model selection over the number of latent variables and a comparison, the first of its kind in this setting, of their relative performance in unsupervised model selection for a range of different settings is presented. The findings of the study help to ascertain in which settings one method may be preferred to the other.
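The network strand of the thesis above relies on the conditional-independence idea that edges become independent once each node's latent variable is known. A stochastic block model makes this concrete: the sketch below samples a network given latent block assignments. It is illustrative only; the block matrix B and the number of clusters are assumptions, not values from the thesis.

```python
# Sampling a network from a stochastic block model (latent block per node).
import numpy as np

def sample_sbm(n, pi, B, rng):
    """n nodes; pi: (K,) block probabilities; B: (K, K) edge probabilities."""
    z = rng.choice(len(pi), size=n, p=pi)          # latent block per node
    P = B[z[:, None], z[None, :]]                  # (n, n) edge probabilities
    A = np.triu(rng.binomial(1, P), k=1)           # independent edges, no self-loops
    return z, A + A.T                              # undirected adjacency matrix

rng = np.random.default_rng(0)
B = np.array([[0.25, 0.02],
              [0.02, 0.20]])
z, A = sample_sbm(n=100, pi=np.array([0.6, 0.4]), B=B, rng=rng)
print("within-block edge rate:", A[z == 0][:, z == 0].mean().round(3))
```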
8

Zhang, Cheng. "Structured Representation Using Latent Variable Models." Doctoral thesis, KTH, Datorseende och robotik, CVAP, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-191455.

Abstract:
Over the past two centuries the industrial revolution automated a great part of work that involved human muscles. Recently, since the beginning of the 21st century, the focus has shifted towards automating work that involves our brains to further improve our lives. This is accomplished by establishing human-level intelligence through machines, which has led to the growth of the field of artificial intelligence. Machine learning is a core component of artificial intelligence. While artificial intelligence focuses on constructing an entire intelligence system, machine learning focuses on the learning ability and the ability to further use the learned knowledge for different tasks. This thesis targets the field of machine learning, especially structured representation learning, which is key for various machine learning approaches. Humans sense the environment, extract information and make action decisions based on abstracted information. Similarly, machines receive data, abstract information from data through models and make decisions about the unknown through inference. Thus, models provide a mechanism for machines to abstract information. This commonly involves learning useful representations which are desirably compact, interpretable and useful for different tasks. In this thesis, the contribution relates to the design of efficient representation models with latent variables. To make the models useful, efficient inference algorithms are derived to fit the models to data. We apply our models to various applications from different domains, namely E-health, robotics, text mining, computer vision and recommendation systems. The main contribution of this thesis relates to advancing latent variable models and deriving associated inference schemes for representation learning. This is pursued in three different directions. Firstly, through supervised models, where better representations can be learned knowing the tasks, corresponding to situated knowledge of humans. Secondly, through structured representation models, with which different structures, such as factorized ones, are used for latent variable models to form more efficient representations. Finally, through non-parametric models, where the representation is determined completely by the data. Specifically, we propose several new models combining supervised learning and factorized representation as well as a further model combining non-parametric modeling and supervised approaches. Evaluations show that these new models provide generally more efficient representations and a higher degree of interpretability. Moreover, this thesis contributes by applying these proposed models in different practical scenarios, demonstrating that these models can provide efficient latent representations. Experimental results show that our models improve the performance for classical tasks, such as image classification and annotation, and robotic scene and action understanding. Most notably, one of our models is applied to a novel problem in E-health, namely diagnostic prediction using discomfort drawings. Experimental investigation shows that our model can achieve significant results in automatic diagnosis and provides a profound understanding of typical symptoms. This motivates novel decision support systems for healthcare personnel.
9

Surian, Didi. "Novel Applications Using Latent Variable Models." Thesis, The University of Sydney, 2015. http://hdl.handle.net/2123/14014.

Abstract:
Latent variable models have achieved great success in many research communities, including machine learning, information retrieval, data mining, natural language processing, etc. Latent variable models use an assumption that the data, which is observable, has an affinity to some hidden/latent variables. In this thesis, we present a suite of novel applications using latent variable models. In particular, we (i) extend topic models using directional distributions, (ii) propose novel solutions using latent variable models to detect outliers (anomalies), and (iii) answer the cross-modal retrieval problem. We present a study of directional distributions in modeling data. Specifically, we implement the von Mises-Fisher (vMF) distribution and develop latent variable models which are based on directed graphical models. The directed graphical models are commonly used to represent the conditional dependency among the variables. Under Bayesian treatment, we propose approximate posterior inference algorithms using variational methods for the models. We show that by incorporating the vMF distribution, the quality of clustering is improved over that of word count-based topic models. Furthermore, with the properties of directional distributions in hand, we extend the applications to detect outliers in various data sets and settings. Finally, we present latent variable models that are based on supervised learning to answer the cross-modal retrieval problem. In the cross-modal retrieval problem, the objective is to find matching content across different modalities such as text and image. We explore various approaches such as using one-class learning methods, generating negative instances and using ranking methods. We show that our models outperform generic approaches such as Canonical Correlation Analysis (CCA) and its variants.
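The topic models in the thesis above are built on the von Mises-Fisher (vMF) distribution for directional data. The snippet below evaluates the vMF log-density on the unit sphere using a numerically stable Bessel term; it is a small self-contained illustration, not the author's model code.

```python
# von Mises-Fisher log-density on the unit sphere S^{p-1} (sketch).
import numpy as np
from scipy.special import ive

def vmf_logpdf(X, mu, kappa):
    """X: (n, p) unit vectors; mu: (p,) unit mean direction; kappa > 0."""
    p = mu.shape[0]
    v = p / 2.0 - 1.0
    # log C_p(kappa) = (p/2 - 1) log kappa - (p/2) log(2 pi) - log I_{p/2-1}(kappa)
    log_norm = (v * np.log(kappa)
                - (p / 2.0) * np.log(2.0 * np.pi)
                - (np.log(ive(v, kappa)) + kappa))   # log I_v via scaled Bessel
    return log_norm + kappa * (X @ mu)

# Toy usage: score a few random directions against a mean direction.
rng = np.random.default_rng(0)
mu = np.array([1.0, 0.0, 0.0])
X = rng.normal(size=(4, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)        # project onto the sphere
print(vmf_logpdf(X, mu, kappa=10.0).round(2))
```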
10

Parsons, S. "Approximation methods for latent variable models." Thesis, University College London (University of London), 2016. http://discovery.ucl.ac.uk/1513250/.

Abstract:
Modern statistical models are often intractable, and approximation methods can be required to perform inference on them. Many different methods can be employed in most contexts, but not all are fully understood. The current thesis is an investigation into the use of various approximation methods for performing inference on latent variable models. Composite likelihoods are used as surrogates for the likelihood function of state space models (SSM). In chapter 3, variational approximations to their evaluation are investigated, and the interaction of biases as composite structure changes is observed. The bias effect of increasing the block size in composite likelihoods is found to balance the statistical benefit of including more data in each component. Predictions and smoothing estimates are made using approximate Expectation-Maximisation (EM) techniques. Variational EM estimators are found to produce predictions and smoothing estimates of a lesser quality than stochastic EM estimators, but at a massively reduced computational cost. Surrogate latent marginals are introduced in chapter 4 into a non-stationary SSM with i.i.d. replicates. They are cheap to compute, and break functional dependencies on parameters for previous time points, giving estimation algorithms linear computational complexity. Gaussian variational approximations are integrated with the surrogate marginals to produce an approximate EM algorithm. Using these Gaussians as proposal distributions in importance sampling is found to offer a positive trade-off in terms of the accuracy of predictions and smoothing estimates made using estimators. A cheap-to-compute, model-based hierarchical clustering algorithm is proposed in chapter 5. A cluster dissimilarity measure based on method-of-moments estimators is used to avoid likelihood function evaluation. Computation time for hierarchical clustering sequences is further reduced with the introduction of short-lists that are linear in the number of clusters at each iteration. The resulting clustering sequences are found to have plausible characteristics in both real and synthetic datasets.

Books on the topic "Latent variable models"

1

Marcoulides, George A., and Irini Moustaki, eds. Latent variable and latent structure models. Lawrence Erlbaum Associates, 2002.

2

Heinen, Ton. Discrete latent variable models. Tilburg University Press, 1993.

3

Bartholomew, David J. Latent variable models and factor analysis. 2nd ed. Arnold, 1999.

4

Bartholomew, David J. Latent variable models and factor analysis. C. Griffin, 1987.

5

Bartholomew, David, Martin Knott, and Irini Moustaki. Latent Variable Models and Factor Analysis. John Wiley & Sons, Ltd, 2011. http://dx.doi.org/10.1002/9781119970583.

6

Hancock, Gregory R., and Karen M. Samuelsen, eds. Advances in latent variable mixture models. Information Age Pub., 2008.

7

Lee, Sik-Yum, ed. Handbook of latent variable and related models. North-Holland, 2007.

8

Fang, Guanhua. Latent Variable Models in Measurement: Theory and Application. [publisher not identified], 2020.

9

Ward, Owen Gerard. Latent Variable Models for Events on Social Networks. [publisher not identified], 2022.

10

Bartholomew, David J. Latent variable models and factor analysis: A unified approach. 3rd ed. Wiley, 2011.


Book chapters on the topic "Latent variable models"

1

Beaujean, A. Alexander, and Grant B. Morgan. "Latent Variable Models." In Human–Computer Interaction Series. Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-26633-6_10.

2

McGrath, Robert E. "Latent-variable models." In Quantitative models in psychology. American Psychological Association, 2011. http://dx.doi.org/10.1037/12316-007.

3

Bishop, Christopher M. "Latent Variable Models." In Learning in Graphical Models. Springer Netherlands, 1998. http://dx.doi.org/10.1007/978-94-011-5014-9_13.

4

Tomczak, Jakub M. "Latent Variable Models." In Deep Generative Modeling. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-93158-2_4.

5

Tomczak, Jakub M. "Latent Variable Models." In Deep Generative Modeling. Springer International Publishing, 2024. http://dx.doi.org/10.1007/978-3-031-64087-2_5.

6

Hsiao, Cheng. "Nonlinear Latent Variable Models." In Advanced Studies in Theoretical and Applied Econometrics. Springer Netherlands, 1992. http://dx.doi.org/10.1007/978-94-009-0375-3_12.

7

Fong, Daniel Yee Tak. "Latent Variable Path Models." In Encyclopedia of Quality of Life and Well-Being Research. Springer Netherlands, 2014. http://dx.doi.org/10.1007/978-94-007-0753-5_1606.

8

Keith, Timothy Z. "Latent Variable Models II." In Multiple Regression and Beyond. Routledge, 2019. http://dx.doi.org/10.4324/9781315162348-18.

9

Fong, Daniel Yee Tak. "Latent Variable Path Models." In Encyclopedia of Quality of Life and Well-Being Research. Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-17299-1_1606.

10

Finch, W. Holmes, and Jocelyn E. Bolin. "Multilevel Latent Variable Models." In Multilevel Modeling Using R, 3rd ed. Chapman and Hall/CRC, 2024. http://dx.doi.org/10.1201/b23166-10.


Conference papers on the topic "Latent variable models"

1

Gaebert, Carl, and Ulrike Thomas. "Generating Dual-Arm Inverse Kinematics Solutions using Latent Variable Models." In 2024 IEEE-RAS 23rd International Conference on Humanoid Robots (Humanoids). IEEE, 2024. https://doi.org/10.1109/humanoids58906.2024.10769854.

2

Saha, Surojit, Sarang Joshi, and Ross Whitaker. "Disentanglement Analysis in Deep Latent Variable Models Matching Aggregate Posterior Distributions." In ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2025. https://doi.org/10.1109/icassp49660.2025.10889788.

3

Ma, Xiangdong, Xiaoling Zhang, Xu Zhan, Tianjiao Zeng, Jun Shi, and Shunjun Wei. "Unsupervised Near-Field Array SAR Imaging Method Based on Latent Variable Generative Models." In IGARSS 2024 - 2024 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2024. http://dx.doi.org/10.1109/igarss53475.2024.10641621.

4

Pérez-Gonzalo, Raül, Andreas Espersen, and Antonio Agudo. "Generalized Nested Latent Variable Models For Lossy Coding Applied To Wind Turbine Scenarios." In 2024 IEEE International Conference on Image Processing (ICIP). IEEE, 2024. http://dx.doi.org/10.1109/icip51287.2024.10648110.

5

Serez, Dario, Marco Cristani, Alessio Del Bue, Vittorio Murino, and Pietro Morerio. "Pre-trained Multiple Latent Variable Generative Models are Good Defenders Against Adversarial Attacks." In 2025 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV). IEEE, 2025. https://doi.org/10.1109/wacv61041.2025.00634.

6

Jiang, Xiubao, Xinge You, Yi Mou, Shujian Yu, and Wu Zeng. "Gaussian latent variable models for variable selection." In 2014 International Conference on Security, Pattern Analysis, and Cybernetics (SPAC). IEEE, 2014. http://dx.doi.org/10.1109/spac.2014.6982714.

7

Urtasun, Raquel, David J. Fleet, Andreas Geiger, Jovan Popović, Trevor J. Darrell, and Neil D. Lawrence. "Topologically-constrained latent variable models." In the 25th international conference. ACM Press, 2008. http://dx.doi.org/10.1145/1390156.1390292.

8

Willems, J. C., and J. W. Nieuwenhuis. "Continuity of latent variable models." In 29th IEEE Conference on Decision and Control. IEEE, 1990. http://dx.doi.org/10.1109/cdc.1990.203519.

9

"Initialization Framework for Latent Variable Models." In International Conference on Pattern Recognition Applications and Methods. SCITEPRESS - Science and and Technology Publications, 2014. http://dx.doi.org/10.5220/0004826302270232.

10

Ahmed, Amr, Mohamed Aly, Joseph Gonzalez, Shravan Narayanamurthy, and Alexander J. Smola. "Scalable inference in latent variable models." In the fifth ACM international conference. ACM Press, 2012. http://dx.doi.org/10.1145/2124295.2124312.


Reports on the topic "Latent variable models"

1

Mislevy, Robert J., and Kathleen M. Sheehan. The Information Matrix in Latent-Variable Models. Defense Technical Information Center, 1988. http://dx.doi.org/10.21236/ada196609.

2

Anandkumar, Anima, Rong Ge, Daniel Hsu, Sham M. Kakade, and Matus Telgarsky. Tensor Decompositions for Learning Latent Variable Models. Defense Technical Information Center, 2012. http://dx.doi.org/10.21236/ada604494.

3

Collins, David H. Jr. Latent Variable Models for Quantification of Margins and Uncertainties. Office of Scientific and Technical Information (OSTI), 2013. http://dx.doi.org/10.2172/1088891.

4

Zhang, Zhen. From CFA to SEM with Moderated Mediation in Mplus. Instats Inc., 2022. http://dx.doi.org/10.61700/e6lwwzg27rqsr469.

Abstract:
This seminar introduces the Mplus latent variable modeling framework and explores measurement models including bi-factor and hierarchical factor models and scale reliability in CFA, as well as SEMs with latent variable interactions (moderation), indirect effects (mediation), latent conditional indirect effects (moderated mediation), and latent instrumental variable methods in an SEM framework (IV-SEM).
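The CFA measurement models that this and the following seminars build on reduce to a simple implied-covariance structure. The sketch below fits a one-factor CFA by minimising the standard maximum-likelihood discrepancy function; it is a plain NumPy/SciPy illustration, not tied to Mplus, lavaan, or any course material, and the starting values are arbitrary assumptions.

```python
# One-factor CFA via the ML fit function F = ln|Sigma| + tr(S Sigma^-1) - ln|S| - p,
# with implied covariance Sigma = lambda lambda' + diag(psi). Sketch only.
import numpy as np
from scipy.optimize import minimize

def cfa_fit_one_factor(S):
    """S: (p, p) sample covariance. Returns loadings and unique variances."""
    p = S.shape[0]

    def fml(theta):
        lam, psi = theta[:p], np.exp(theta[p:])    # log-parameterisation keeps psi > 0
        sigma = np.outer(lam, lam) + np.diag(psi)  # implied covariance matrix
        _, logdet = np.linalg.slogdet(sigma)
        return (logdet + np.trace(np.linalg.solve(sigma, S))
                - np.linalg.slogdet(S)[1] - p)     # ML discrepancy function

    x0 = np.concatenate([np.sqrt(np.diag(S)) / 2, np.log(np.diag(S) / 2)])
    res = minimize(fml, x0, method="L-BFGS-B")
    return res.x[:p], np.exp(res.x[p:])

# Toy usage: data generated from a single latent factor with 4 indicators.
rng = np.random.default_rng(0)
lam_true = np.array([0.8, 0.7, 0.9, 0.6])
eta = rng.normal(size=(500, 1))
X = eta * lam_true + rng.normal(scale=0.5, size=(500, 4))
S = np.cov(X, rowvar=False)
lam_hat, psi_hat = cfa_fit_one_factor(S)
print(np.round(np.abs(lam_hat), 2), np.round(psi_hat, 2))  # loadings up to sign
```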
5

Zyphur, Michael. From CFA to SEM with Moderated Mediation in R. Instats Inc., 2022. http://dx.doi.org/10.61700/75sjvfs0ve1d4469.

Abstract:
This seminar introduces the Lavaan latent variable modeling framework and explores measurement models including bi-factor and hierarchical factor models and scale reliability in CFA, as well as SEMs with latent variable interactions (moderation), indirect effects (mediation), latent conditional indirect effects (moderated mediation), and latent instrumental variable methods in an SEM framework (IV-SEM). An official Instats certificate of completion is provided at the conclusion of the seminar. For European PhD students, the seminar offers 2 ECTS Equivalent points.
6

Zyphur, Michael. From CFA to SEM with Moderated Mediation in Mplus. Instats Inc., 2022. http://dx.doi.org/10.61700/a6tru90pc9miu469.

Abstract:
This seminar introduces the Mplus latent variable modeling framework and explores measurement models including bi-factor and hierarchical factor models and scale reliability in CFA, as well as SEMs with latent variable interactions (moderation), indirect effects (mediation), latent conditional indirect effects (moderated mediation), and latent instrumental variable methods in an SEM framework (IV-SEM). An official Instats certificate of completion is provided at the conclusion of the seminar. For European PhD students, the seminar offers 2 ECTS Equivalent points.
7

Zyphur, Michael. From CFA to SEM with Moderated Mediation in R (Free On-Demand Seminar). Instats Inc., 2022. http://dx.doi.org/10.61700/xria1if8u3nip469.

Abstract:
This seminar introduces the Lavaan latent variable modeling framework and explores measurement models including bi-factor and hierarchical factor models and scale reliability in CFA, as well as SEMs with latent variable interactions (moderation), indirect effects (mediation), latent conditional indirect effects (moderated mediation), and latent instrumental variable methods in an SEM framework (IV-SEM). An official Instats certificate of completion is provided at the conclusion of the seminar. For European PhD students, the seminar offers 2 ECTS Equivalent points.
8

Zyphur, Michael. Intermediate SEM in Stata: From CFA to SEM. Instats Inc., 2022. http://dx.doi.org/10.61700/9qo0ssbbzp4nl469.

Abstract:
This seminar introduces the Stata ‘sem’ latent variable modeling framework and explores measurement models including bi-factor and hierarchical factor models and scale reliability in CFA, as well as SEMs with latent variable interactions (moderation), indirect effects (mediation), latent conditional indirect effects (moderated mediation), and latent instrumental variable methods in an SEM framework (IV-SEM). An official Instats certificate of completion is provided at the conclusion of the seminar. For European PhD students, each seminar offers 2 ECTS Equivalent points.
9

Banerjee, Souvik, Anirban Basu, and Shubham Das. Choosing Wisely: Evaluating Latent Factor Models in the Presence of a Contaminated Instrumental Variable with Varying Strength. National Bureau of Economic Research, 2025. https://doi.org/10.3386/w33620.

10

Zhang, Zhen. Multilevel SEM in Mplus. Instats Inc., 2022. http://dx.doi.org/10.61700/p80oftrbgz4z3469.

Abstract:
This seminar introduces the Mplus multilevel latent variable modeling framework and describes topics including multilevel variance and effect decomposition, and random and fixed effects (including random slopes), and then proceeds to explore multilevel path analysis, multilevel CFA including multilevel bi-factor models, and multilevel SEM, including approaches for handling strong correlations at the between-group level, indirect effects (multilevel mediation), interaction effects (multilevel moderation), and conditional indirect effects (multilevel moderated mediation). An official Instats certificate of completion is provided at the conclusion of the seminar.