
Journal articles on the topic "Gaussian mixture models"

Create a correct reference in APA, MLA, Chicago, Harvard, and many other citation styles


Consult the 50 best scholarly journal articles on the topic "Gaussian mixture models".

An "Add to bibliography" button appears next to every work in the list. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, whenever such details are available in the metadata.

Browse journal articles from a wide range of disciplines and compile your bibliography correctly.

1

Ju, Zhaojie, and Honghai Liu. "Fuzzy Gaussian Mixture Models." Pattern Recognition 45, no. 3 (March 2012): 1146–58. http://dx.doi.org/10.1016/j.patcog.2011.08.028.

2

McNicholas, Paul David, and Thomas Brendan Murphy. "Parsimonious Gaussian mixture models." Statistics and Computing 18, no. 3 (April 19, 2008): 285–96. http://dx.doi.org/10.1007/s11222-008-9056-0.

3

Viroli, Cinzia, and Geoffrey J. McLachlan. "Deep Gaussian mixture models." Statistics and Computing 29, no. 1 (December 1, 2017): 43–51. http://dx.doi.org/10.1007/s11222-017-9793-z.

4

Verbeek, J. J., N. Vlassis, and B. Kröse. "Efficient Greedy Learning of Gaussian Mixture Models." Neural Computation 15, no. 2 (February 1, 2003): 469–85. http://dx.doi.org/10.1162/089976603762553004.

Abstract:
This article concerns the greedy learning of gaussian mixtures. In the greedy approach, mixture components are inserted into the mixture one after the other. We propose a heuristic for searching for the optimal component to insert. In a randomized manner, a set of candidate new components is generated. For each of these candidates, we find the locally optimal new component and insert it into the existing mixture. The resulting algorithm resolves the sensitivity to initialization of state-of-the-art methods, like expectation maximization, and has running time linear in the number of data points and quadratic in the (final) number of mixture components. Due to its greedy nature, the algorithm can be particularly useful when the optimal number of mixture components is unknown. Experimental results comparing the proposed algorithm to other methods on density estimation and texture segmentation are provided.
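Where this abstract sketches the algorithm in words, a toy version is easy to write down. The block below is a deliberately simplified Python sketch of the greedy idea only: grow the mixture one component at a time, seed each candidate insertion at a random data point, refit with scikit-learn's EM, and finally pick the model size by BIC. The function name `greedy_gmm`, the toy data, and the BIC selection are illustrative assumptions; the paper's randomized candidate search and locally optimal insertion are more refined than this.

```python
# Simplified greedy growth of a Gaussian mixture (illustration only; the
# paper's candidate generation and partial-EM steps are more sophisticated).
import numpy as np
from sklearn.mixture import GaussianMixture

def greedy_gmm(X, max_components=6, n_candidates=10, seed=0):
    rng = np.random.default_rng(seed)
    current = GaussianMixture(n_components=1, random_state=seed).fit(X)
    fitted = [current]
    for k in range(2, max_components + 1):
        candidates = []
        for _ in range(n_candidates):
            # Seed the inserted component at a random data point, keeping
            # the means of the existing mixture as initialisation.
            extra = X[rng.integers(len(X))]
            means = np.vstack([current.means_, extra])
            candidates.append(GaussianMixture(n_components=k, means_init=means,
                                              random_state=seed).fit(X))
        current = max(candidates, key=lambda g: g.score(X))
        fitted.append(current)
    return min(fitted, key=lambda g: g.bic(X))  # pick the model size by BIC

# Toy usage on a three-cluster sample:
X = np.vstack([np.random.randn(150, 2) + c for c in ([0, 0], [5, 5], [10, 0])])
print(greedy_gmm(X).n_components)  # expect 3 on this toy sample
```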
5

Kunkel, Deborah, and Mario Peruggia. "Anchored Bayesian Gaussian mixture models." Electronic Journal of Statistics 14, no. 2 (2020): 3869–913. http://dx.doi.org/10.1214/20-ejs1756.

6

Chassagnol, Bastien, Antoine Bichat, Cheïma Boudjeniba, Pierre-Henri Wuillemin, Mickaël Guedj, David Gohel, Gregory Nuel, and Etienne Becht. "Gaussian Mixture Models in R." R Journal 15, no. 2 (November 1, 2023): 56–76. http://dx.doi.org/10.32614/rj-2023-043.

7

Chassagnol, Bastien. "Gaussian Mixture Models in R." R Journal 15, no. 2 (November 14, 2023): 56–76. https://doi.org/10.32614/RJ-2023-043.

Abstract:
Gaussian mixture models (GMMs) are widely used for modelling stochastic problems. Indeed, a wide diversity of packages have been developed in R. However, no recent review describing the main features offered by these packages and comparing their performances has been performed. In this article, we first introduce GMMs and the EM algorithm used to retrieve the parameters of the model and analyse the main features implemented among seven of the most widely used R packages. We then empirically compare their statistical and computational performances in relation with the choice of the initialisation algorithm and the complexity of the mixture. We demonstrate that the best estimation with well-separated components, or with a small number of components with distinguishable modes, is obtained with the REBMIX initialisation implemented in the rebmix package, while the best estimation with highly overlapping components is obtained with k-means or random initialisation. Importantly, we show that implementation details in the EM algorithm yield differences in the parameters' estimation. In particular, the packages mixtools and Rmixmod estimate the parameters of the mixture with smaller bias, while the RMSE and variability of the estimates are smaller with the packages bgmm, EMCluster, GMKMcharlie, flexmix and mclust. The comparison of these packages provides R users with useful recommendations for improving the computational and statistical performance of their clustering and for identifying common deficiencies. Additionally, we propose several improvements in the development of a future, unified mixture model package.
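The abstract's central practical point, that EM results depend on the initialisation, is easy to reproduce outside R. A minimal sketch, assuming scikit-learn as a Python stand-in for the R packages compared in the article (its GaussianMixture exposes k-means and random initialisation; the REBMIX initialiser discussed in the article has no counterpart here):

```python
# Compare k-means vs. random initialisation of EM for a GMM
# (Python stand-in for the R package comparison in the article).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two overlapping 1-D components: the regime where the article found the
# choice of initialisation to matter most.
X = np.concatenate([rng.normal(0.0, 1.0, 500),
                    rng.normal(1.5, 1.0, 500)]).reshape(-1, 1)

for init in ("kmeans", "random"):
    gmm = GaussianMixture(n_components=2, init_params=init,
                          n_init=1, random_state=0).fit(X)
    print(init, "means:", gmm.means_.ravel(),
          "log-lik:", round(gmm.score(X), 4))
```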
8

Ruzgas, Tomas, and Indrė Drulytė. "Kernel Density Estimators for Gaussian Mixture Models." Lietuvos statistikos darbai 52, no. 1 (December 20, 2013): 14–21. http://dx.doi.org/10.15388/ljs.2013.13919.

Abstract:
The problem of nonparametric estimation of a probability density function is considered. The performance of kernel estimators based on various common kernels and a new kernel K (see (14)) with both fixed and adaptive smoothing bandwidth is compared in terms of the symmetric mean absolute percentage error using the Monte Carlo method. The kernel K is everywhere positive but has lighter tails than the Gaussian density. Gaussian mixture models from a collection introduced by Marron and Wand (1992) are taken for Monte Carlo simulations. The adaptive kernel method outperforms the smoothing with a fixed bandwidth in the majority of models. The kernel K shows better performance for Gaussian mixtures with considerably overlapping components and multiple peaks (double claw distribution).
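For orientation, the fixed-versus-adaptive comparison can be prototyped in a few lines. The sketch below is a stand-in rather than the authors' setup: it pits a fixed-bandwidth Gaussian KDE against a simple Abramson-style adaptive variant on a two-component Gaussian mixture in the spirit of the Marron and Wand (1992) test densities. The paper's kernel K and its symmetric-MAPE metric are not reproduced.

```python
# Fixed vs. adaptive-bandwidth kernel density estimation on a Gaussian
# mixture sample (illustrative; not the paper's kernel K or error metric).
import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(2, 1.0, 300)])
grid = np.linspace(-5, 6, 400)
true_pdf = 0.5 * norm.pdf(grid, -2, 0.5) + 0.5 * norm.pdf(grid, 2, 1.0)

fixed = gaussian_kde(x)                  # fixed bandwidth (Scott's rule)
pilot = gaussian_kde(x)(x)               # pilot density at the data points
# Abramson's adaptive rule: local bandwidth factor ~ pilot^(-1/2),
# normalised by the geometric mean of the pilot values.
lam = (pilot / np.exp(np.mean(np.log(pilot)))) ** -0.5
h = fixed.factor * x.std(ddof=1)
adaptive = np.mean(norm.pdf(grid[:, None], loc=x[None, :],
                            scale=h * lam[None, :]), axis=1)

for name, est in [("fixed", fixed(grid)), ("adaptive", adaptive)]:
    print(name, "mean abs error:", np.mean(np.abs(est - true_pdf)).round(5))
```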
9

Chen, Yongxin, Tryphon T. Georgiou, and Allen Tannenbaum. "Optimal Transport for Gaussian Mixture Models." IEEE Access 7 (2019): 6269–78. http://dx.doi.org/10.1109/access.2018.2889838.

10

Nasios, N., and A. G. Bors. "Variational learning for Gaussian mixture models." IEEE Transactions on Systems, Man and Cybernetics, Part B (Cybernetics) 36, no. 4 (August 2006): 849–62. http://dx.doi.org/10.1109/tsmcb.2006.872273.

11

Zhang, Baibo, Changshui Zhang, and Xing Yi. "Active curve axis Gaussian mixture models." Pattern Recognition 38, no. 12 (December 2005): 2351–62. http://dx.doi.org/10.1016/j.patcog.2005.01.017.

12

Zeng, Jia, Lei Xie, and Zhi-Qiang Liu. "Type-2 fuzzy Gaussian mixture models." Pattern Recognition 41, no. 12 (December 2008): 3636–43. http://dx.doi.org/10.1016/j.patcog.2008.06.006.

13

Bolin, David, Jonas Wallin, and Finn Lindgren. "Latent Gaussian random field mixture models." Computational Statistics & Data Analysis 130 (February 2019): 80–93. http://dx.doi.org/10.1016/j.csda.2018.08.007.

14

Maebashi, K., N. Suematsu, and A. Hayashi. "Component Reduction for Gaussian Mixture Models." IEICE Transactions on Information and Systems E91-D, no. 12 (December 1, 2008): 2846–53. http://dx.doi.org/10.1093/ietisy/e91-d.12.2846.

15

Nowakowska, Ewa, Jacek Koronacki, and Stan Lipovetsky. "Clusterability assessment for Gaussian mixture models." Applied Mathematics and Computation 256 (April 2015): 591–601. http://dx.doi.org/10.1016/j.amc.2014.12.038.

16

Di Zio, Marco, Ugo Guarnera, and Orietta Luzi. "Imputation through finite Gaussian mixture models." Computational Statistics & Data Analysis 51, no. 11 (July 2007): 5305–16. http://dx.doi.org/10.1016/j.csda.2006.10.002.

17

Yin, Jian Jun, and Jian Qiu Zhang. "Convolution PHD Filtering for Nonlinear Non-Gaussian Models." Advanced Materials Research 213 (February 2011): 344–48. http://dx.doi.org/10.4028/www.scientific.net/amr.213.344.

Abstract:
A novel probability hypothesis density (PHD) filter, called the Gaussian mixture convolution PHD (GMCPHD) filter, is proposed. The PHD within the filter is approximated by a Gaussian sum, as in the Gaussian mixture PHD (GMPHD) filter, but the model may be non-Gaussian and nonlinear. This is implemented by a bank of convolution filters with Gaussian approximations to the predicted and posterior densities. The analysis shows that the GMCPHD filter has lower complexity and is more amenable to parallel implementation than the convolution PHD (CPHD) filter, and that it handles complex observation models, small observation noise and non-Gaussian noise better than the existing Gaussian mixture particle PHD (GMPPHD) filter. Multi-target tracking simulations verify the effectiveness of the proposed method.
18

Maleki, Mohsen, and A. R. Nematollahi. "Autoregressive Models with Mixture of Scale Mixtures of Gaussian Innovations." Iranian Journal of Science and Technology, Transactions A: Science 41, no. 4 (April 21, 2017): 1099–107. http://dx.doi.org/10.1007/s40995-017-0237-6.

19

Meinicke, Peter, and Helge Ritter. "Resolution-Based Complexity Control for Gaussian Mixture Models." Neural Computation 13, no. 2 (February 2001): 453–75. http://dx.doi.org/10.1162/089976601300014600.

Abstract:
In the domain of unsupervised learning, mixtures of gaussians have become a popular tool for statistical modeling. For this class of generative models, we present a complexity control scheme, which provides an effective means for avoiding the problem of overfitting usually encountered with unconstrained (mixtures of) gaussians in high dimensions. According to some prespecified level of resolution as implied by a fixed variance noise model, the scheme provides an automatic selection of the dimensionalities of some local signal subspaces by maximum likelihood estimation. Together with a resolution-based control scheme for adjusting the number of mixture components, we arrive at an incremental model refinement procedure within a common deterministic annealing framework, which enables an efficient exploration of the model space. The advantages of the resolution-based framework are illustrated by experimental results on synthetic and high-dimensional real-world data.
20

Wei, Hui, and Wei Zheng. "Image Denoising Based on Improved Gaussian Mixture Model." Scientific Programming 2021 (September 22, 2021): 1–8. http://dx.doi.org/10.1155/2021/7982645.

Abstract:
An image denoising method based on an improved Gaussian mixture model is proposed to reduce noise and enhance image quality. Unlike traditional denoising methods, the proposed method models the pixel information in the neighborhood around each pixel. A Gaussian mixture model is fitted to this neighborhood information, capturing statistics such as the local mean and variance, and the similarity between two pixels is measured by the L2 norm between their corresponding Gaussian mixture models. This norm reflects differences in local grayscale intensity and in the richness of detail around the two pixels, so it measures pixel similarity more accurately. Experimental results show that the proposed method improves denoising performance while retaining the detailed information of the image.
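The similarity measure at the heart of this method is available in closed form, because the integral of a product of two Gaussian densities is itself a Gaussian evaluation: the integral of N(x; m1, v1) N(x; m2, v2) over x equals N(m1 − m2; 0, v1 + v2). A small sketch of the resulting L2 distance for generic one-dimensional mixtures (the paper's neighbourhood modelling is not reproduced; the helper names are illustrative):

```python
# Closed-form L2 distance between two 1-D Gaussian mixtures, using
#   integral of N(x; m1, v1) * N(x; m2, v2) dx = N(m1 - m2; 0, v1 + v2).
import numpy as np
from scipy.stats import norm

def gmm_cross_term(w_a, mu_a, var_a, w_b, mu_b, var_b):
    """Sum_ij w_a[i] * w_b[j] * integral of the (i, j) component product."""
    total = 0.0
    for wi, mi, vi in zip(w_a, mu_a, var_a):
        for wj, mj, vj in zip(w_b, mu_b, var_b):
            total += wi * wj * norm.pdf(mi - mj, 0.0, np.sqrt(vi + vj))
    return total

def gmm_l2_distance(a, b):
    """a and b are (weights, means, variances) triples."""
    return np.sqrt(gmm_cross_term(*a, *a) - 2 * gmm_cross_term(*a, *b)
                   + gmm_cross_term(*b, *b))

p = ([0.5, 0.5], [0.0, 3.0], [1.0, 1.0])
q = ([0.5, 0.5], [0.2, 3.1], [1.0, 1.2])
print(gmm_l2_distance(p, q))  # small value: p and q are similar mixtures
```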
21

Ueda, Naonori, Ryohei Nakano, Zoubin Ghahramani, and Geoffrey E. Hinton. "SMEM Algorithm for Mixture Models." Neural Computation 12, no. 9 (September 1, 2000): 2109–28. http://dx.doi.org/10.1162/089976600300015088.

Abstract:
We present a split-and-merge expectation-maximization (SMEM) algorithm to overcome the local maxima problem in parameter estimation of finite mixture models. In the case of mixture models, local maxima often involve having too many components of a mixture model in one part of the space and too few in another, widely separated part of the space. To escape from such configurations, we repeatedly perform simultaneous split-and-merge operations using a new criterion for efficiently selecting the split-and-merge candidates. We apply the proposed algorithm to the training of gaussian mixtures and mixtures of factor analyzers using synthetic and real data and show the effectiveness of using the split-and-merge operations to improve the likelihood of both the training data and of held-out test data. We also show the practical usefulness of the proposed algorithm by applying it to image compression and pattern recognition problems.
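One ingredient of SMEM that is easy to illustrate is the ranking of merge candidates: components whose posterior responsibility vectors over the data have a large inner product are natural candidates to merge. The sketch below computes such a score from scikit-learn responsibilities on a deliberately over-fitted toy mixture; this is a simplification under my own assumptions rather than the paper's exact procedure, and the split criterion and partial EM steps of the full algorithm are omitted.

```python
# Rank merge candidates, SMEM-style: score pairs of mixture components by
# the inner product of their posterior responsibility vectors over the data.
import itertools
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Three true clusters, deliberately over-fitted with four components.
X = np.vstack([rng.normal(c, 0.7, (150, 2)) for c in ([0, 0], [4, 0], [2, 4])])
gmm = GaussianMixture(n_components=4, random_state=0).fit(X)

resp = gmm.predict_proba(X)            # responsibilities, shape (n, 4)
scores = {(i, j): resp[:, i] @ resp[:, j]
          for i, j in itertools.combinations(range(4), 2)}
best_pair = max(scores, key=scores.get)
print("most mergeable pair of components:", best_pair)
```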
22

Krasilnikov, A. I. "Classification of Models of Two-component Mixtures of Symmetrical Distributions with Zero Kurtosis Coefficient." Èlektronnoe modelirovanie 45, no. 5 (October 10, 2023): 20–38. http://dx.doi.org/10.15407/emodel.45.05.020.

Abstract:
On the basis of a family of two-component mixtures of distributions, a class K of symmetric non-Gaussian distributions with a zero kurtosis coefficient is defined and divided into two groups and five types. The dependence of the fourth-order cumulant on the weight coefficient of the mixture is studied, and the conditions under which the kurtosis coefficient of the mixture equals zero are determined. The use of a two-component mixture of Subbotin distributions for modeling unimodal symmetric distributions with a zero kurtosis coefficient is justified, and examples of symmetric non-Gaussian distributions with a zero kurtosis coefficient are given. Class K models make it possible, already at the design stage, to compare the effectiveness of methods and systems developed for processing non-Gaussian signals with zero skewness and kurtosis coefficients.
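As background for why this class cannot be populated by Gaussian components alone (which is what motivates the Subbotin mixture above), here is a short calculation added for orientation, not taken from the paper. For a zero-mean symmetric variable the kurtosis coefficient vanishes exactly when the fourth cumulant κ4 = E[X⁴] − 3(E[X²])² does:

```latex
% Symmetric two-component Gaussian scale mixture (both components centred):
%   f(x) = w\,\mathcal{N}(x;0,\sigma_1^2) + (1-w)\,\mathcal{N}(x;0,\sigma_2^2)
\kappa_4 = 3\left[w\sigma_1^4 + (1-w)\sigma_2^4
           - \bigl(w\sigma_1^2 + (1-w)\sigma_2^2\bigr)^2\right]
         = 3\,w(1-w)\,\bigl(\sigma_1^2-\sigma_2^2\bigr)^2 \;\ge\; 0.
% Symmetric two-component Gaussian location mixture:
%   f(x) = \tfrac12\mathcal{N}(x;\mu,\sigma^2) + \tfrac12\mathcal{N}(x;-\mu,\sigma^2)
\kappa_4 = \mu^4 + 6\mu^2\sigma^2 + 3\sigma^4 - 3\bigl(\mu^2+\sigma^2\bigr)^2 = -2\mu^4 \;\le\; 0.
```

In both Gaussian cases κ4 vanishes only when the mixture degenerates to a single Gaussian (σ1² = σ2², respectively μ = 0), so non-trivial zero-kurtosis members of the class require non-Gaussian components.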
23

Masmoudi, Khalil, and Afif Masmoudi. "An EM algorithm for singular Gaussian mixture models." Filomat 33, no. 15 (2019): 4753–67. http://dx.doi.org/10.2298/fil1915753m.

Abstract:
In this paper, we introduce finite mixture models with singular multivariate normal components. These models are useful when the observed data involves collinearities, that is when the covariance matrices are singular. They are also useful when the covariance matrices are ill-conditioned. In the latter case, the classical approaches may lead to numerical instabilities and give inaccurate estimations. Hence, an extension of the Expectation Maximization algorithm, with complete proof, is proposed to derive the maximum likelihood estimators and cluster the data instances for mixtures of singular multivariate normal distributions. The accuracy of the proposed algorithm is then demonstrated on the grounds of several numerical experiments. Finally, we discuss the application of the proposed distribution to financial asset returns modeling and portfolio selection.
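The numerical difficulty the authors address is that the usual Gaussian density is undefined when the covariance matrix is singular. As background rather than the paper's EM extension, one standard device is to evaluate the density on the support of the covariance, replacing the inverse and determinant with their pseudo-variants; `singular_gaussian_logpdf` below is an illustrative helper, not code from the paper.

```python
# Log-density of a (possibly singular) multivariate Gaussian, evaluated on
# the support of its covariance via eigendecomposition. Background sketch,
# not the EM extension developed in the paper.
import numpy as np

def singular_gaussian_logpdf(x, mean, cov, tol=1e-10):
    vals, vecs = np.linalg.eigh(cov)
    keep = vals > tol                      # nonzero eigenvalues span the support
    d = x - mean
    z = vecs[:, keep].T @ d                # coordinates within the support
    # Pseudo-determinant and pseudo-inverse quadratic form. Points with a
    # component outside the support have zero density; this sketch assumes
    # x - mean lies in the support subspace.
    logdet = np.sum(np.log(vals[keep]))
    quad = np.sum(z**2 / vals[keep])
    return -0.5 * (keep.sum() * np.log(2 * np.pi) + logdet + quad)

# Rank-1 covariance in R^2 (perfectly collinear data):
cov = np.array([[1.0, 1.0], [1.0, 1.0]])
print(singular_gaussian_logpdf(np.array([0.5, 0.5]), np.zeros(2), cov))
```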
24

Røge, Rasmus E., Kristoffer H. Madsen, Mikkel N. Schmidt, and Morten Mørup. "Infinite von Mises–Fisher Mixture Modeling of Whole Brain fMRI Data." Neural Computation 29, no. 10 (October 2017): 2712–41. http://dx.doi.org/10.1162/neco_a_01000.

Abstract:
Cluster analysis of functional magnetic resonance imaging (fMRI) data is often performed using gaussian mixture models, but when the time series are standardized such that the data reside on a hypersphere, this modeling assumption is questionable. The consequences of ignoring the underlying spherical manifold are rarely analyzed, in part due to the computational challenges imposed by directional statistics. In this letter, we discuss a Bayesian von Mises–Fisher (vMF) mixture model for data on the unit hypersphere and present an efficient inference procedure based on collapsed Markov chain Monte Carlo sampling. Comparing the vMF and gaussian mixture models on synthetic data, we demonstrate that the vMF model has a slight advantage inferring the true underlying clustering when compared to gaussian-based models on data generated from both a mixture of vMFs and a mixture of gaussians subsequently normalized. Thus, when performing model selection, the two models are not in agreement. Analyzing multisubject whole brain resting-state fMRI data from healthy adult subjects, we find that the vMF mixture model is considerably more reliable than the gaussian mixture model when comparing solutions across models trained on different groups of subjects, and again we find that the two models disagree on the optimal number of components. The analysis indicates that the fMRI data support more than a thousand clusters, and we confirm this is not a result of overfitting by demonstrating better prediction on data from held-out subjects. Our results highlight the utility of using directional statistics to model standardized fMRI data and demonstrate that whole brain segmentation of fMRI data requires a very large number of functional units in order to adequately account for the discernible statistical patterns in the data.
25

Yamada, Makoto, and Masashi Sugiyama. "Direct Importance Estimation with Gaussian Mixture Models." IEICE Transactions on Information and Systems E92-D, no. 10 (2009): 2159–62. http://dx.doi.org/10.1587/transinf.e92.d.2159.

26

Ye, Peng, Fang Liu, and Zhiyong Zhao. "Multiple Gaussian Mixture Models for Image Registration." IEICE Transactions on Information and Systems E97.D, no. 7 (2014): 1927–29. http://dx.doi.org/10.1587/transinf.e97.d.1927.

27

Yang, Jianbo, Xin Yuan, Xuejun Liao, Patrick Llull, David J. Brady, Guillermo Sapiro, and Lawrence Carin. "Video Compressive Sensing Using Gaussian Mixture Models." IEEE Transactions on Image Processing 23, no. 11 (November 2014): 4863–78. http://dx.doi.org/10.1109/tip.2014.2344294.

28

Lu, Zhiwu, and H. H. S. Ip. "Generalized Competitive Learning of Gaussian Mixture Models." IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 39, no. 4 (August 2009): 901–9. http://dx.doi.org/10.1109/tsmcb.2008.2012119.

29

Akaho, Shotaro, and Hilbert J. Kappen. "Nonmonotonic Generalization Bias of Gaussian Mixture Models." Neural Computation 12, no. 6 (June 1, 2000): 1411–27. http://dx.doi.org/10.1162/089976600300015439.

Abstract:
Theories of learning and generalization hold that the generalization bias, defined as the difference between the training error and the generalization error, increases on average with the number of adaptive parameters. This article, however, shows that this general tendency is violated for a gaussian mixture model. For temperatures just below the first symmetry breaking point, the effective number of adaptive parameters increases and the generalization bias decreases. We compute the dependence of the neural information criterion on temperature around the symmetry breaking. Our results are confirmed by numerical cross-validation experiments.
30

Yu, Guoshen, and Guillermo Sapiro. "Statistical Compressed Sensing of Gaussian Mixture Models." IEEE Transactions on Signal Processing 59, no. 12 (December 2011): 5842–58. http://dx.doi.org/10.1109/tsp.2011.2168521.

31

Matza, Avi, and Yuval Bistritz. "Skew Gaussian mixture models for speaker recognition." IET Signal Processing 8, no. 8 (October 2014): 860–67. http://dx.doi.org/10.1049/iet-spr.2013.0270.

32

Arellano, Claudia, and Rozenn Dahyot. "Robust ellipse detection with Gaussian mixture models." Pattern Recognition 58 (October 2016): 12–26. http://dx.doi.org/10.1016/j.patcog.2016.01.017.

33

Lee, Kevin H., and Lingzhou Xue. "Nonparametric Finite Mixture of Gaussian Graphical Models." Technometrics 60, no. 4 (October 2, 2018): 511–21. http://dx.doi.org/10.1080/00401706.2017.1408497.

34

Jones, Daniel M., and Alan F. Heavens. "Gaussian mixture models for blended photometric redshifts." Monthly Notices of the Royal Astronomical Society 490, no. 3 (September 27, 2019): 3966–86. http://dx.doi.org/10.1093/mnras/stz2687.

Abstract:
Future cosmological galaxy surveys such as the Large Synoptic Survey Telescope (LSST) will photometrically observe very large numbers of galaxies. Without spectroscopy, the redshifts required for the analysis of these data will need to be inferred using photometric redshift techniques that are scalable to large sample sizes. The high number density of sources will also mean that around half are blended. We present a Bayesian photometric redshift method for blended sources that uses Gaussian mixture models to learn the joint flux–redshift distribution from a set of unblended training galaxies, and Bayesian model comparison to infer the number of galaxies comprising a blended source. The use of Gaussian mixture models renders both of these applications computationally efficient and therefore suitable for upcoming galaxy surveys.
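The computational convenience claimed in the abstract rests on Gaussian conditioning: once a GMM is fitted to the joint flux–redshift space, the redshift posterior for an observed flux is again a Gaussian mixture with reweighted components. A toy numpy sketch of that conditioning step, with one flux dimension and invented parameters; the paper's training data, photometric bands, and blending model are not reproduced.

```python
# Condition a joint (flux, redshift) Gaussian mixture on an observed flux:
# each component conditions analytically, and its weight is re-scaled by the
# marginal flux likelihood. Toy parameters, not a trained survey model.
import numpy as np
from scipy.stats import norm

weights = np.array([0.6, 0.4])
means = np.array([[1.0, 0.5], [2.5, 1.2]])          # (flux, redshift)
covs = np.array([[[0.20, 0.05], [0.05, 0.04]],
                 [[0.30, 0.08], [0.08, 0.09]]])

def redshift_posterior(flux):
    w, mu, var = [], [], []
    for k in range(len(weights)):
        (mf, mz), S = means[k], covs[k]
        # Conditional of a bivariate Gaussian given its first coordinate:
        mu_k = mz + S[1, 0] / S[0, 0] * (flux - mf)
        var_k = S[1, 1] - S[1, 0] ** 2 / S[0, 0]
        w_k = weights[k] * norm.pdf(flux, mf, np.sqrt(S[0, 0]))
        w.append(w_k); mu.append(mu_k); var.append(var_k)
    w = np.array(w) / np.sum(w)
    return w, np.array(mu), np.array(var)   # p(z | flux) as a 1-D GMM

w, mu, var = redshift_posterior(flux=2.0)
print("posterior mean redshift:", float(w @ mu))
```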
35

Wallet, Bradley C., and Robert Hardisty. "Unsupervised seismic facies using Gaussian mixture models." Interpretation 7, no. 3 (August 1, 2019): SE93–SE111. http://dx.doi.org/10.1190/int-2018-0119.1.

Abstract:
As the use of seismic attributes becomes more widespread, multivariate seismic analysis has become more commonplace for seismic facies analysis. Unsupervised machine-learning techniques provide methods of automatically finding patterns in data with minimal user interaction. When using unsupervised machine-learning techniques, such as k-means or Kohonen self-organizing maps (SOMs), the number of clusters can often be ambiguously defined and there is no measure of how confident the algorithm is in the classification of data vectors. The model-based probabilistic formulation of Gaussian mixture models (GMMs) allows for the number and shape of clusters to be determined in a more objective manner using a Bayesian framework that considers a model's likelihood and complexity. Furthermore, the development of alternative expectation-maximization (EM) algorithms has allowed GMMs to be more tailored to unsupervised seismic facies analysis. The classification EM algorithm classifies data vectors according to their posterior probabilities, which provide a measurement of uncertainty and ambiguity (often called a soft classification). The neighborhood EM (NEM) algorithm allows for spatial correlations to be considered to make classification volumes more realistic by enforcing spatial continuity. Corendering the classification with the uncertainty and ambiguity measurements produces an intuitive map of unsupervised seismic facies. We apply a model-based classification approach using GMMs to a turbidite system in Canterbury Basin, New Zealand, to clarify results from an initial SOM and highlight areas of uncertainty and ambiguity. Special focus on a channel feature in the turbidite system using an NEM algorithm shows it to be more realistic by considering spatial correlations within the data.
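The soft classification the authors exploit falls out of any fitted mixture model: posterior component probabilities per sample, whose entropy provides the kind of uncertainty measure corendered in their maps. A generic scikit-learn sketch using ordinary EM (the classification EM and neighborhood EM variants discussed in the abstract are not implemented here):

```python
# Soft GMM classification with a per-sample uncertainty (entropy) measure,
# in the spirit of the facies maps described in the abstract.
import numpy as np
from scipy.stats import entropy
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(c, 1.0, (200, 2)) for c in ([0, 0], [3, 3])])

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
resp = gmm.predict_proba(X)          # soft class memberships
uncertainty = entropy(resp.T)        # one entropy value per sample
labels = resp.argmax(axis=1)         # hard labels, if needed
print("most ambiguous sample:", X[uncertainty.argmax()].round(2))
```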
36

Silva, Diogo S. F., and Clayton V. Deutsch. "Multivariate data imputation using Gaussian mixture models." Spatial Statistics 27 (October 2018): 74–90. http://dx.doi.org/10.1016/j.spasta.2016.11.002.

37

Hedelin, P., and J. Skoglund. "Vector quantization based on Gaussian mixture models." IEEE Transactions on Speech and Audio Processing 8, no. 4 (July 2000): 385–401. http://dx.doi.org/10.1109/89.848220.

38

Zhang, J., and D. Ma. "Nonlinear Prediction for Gaussian Mixture Image Models." IEEE Transactions on Image Processing 13, no. 6 (June 2004): 836–47. http://dx.doi.org/10.1109/tip.2004.828197.

39

Hao, Jiucang, Te-Won Lee, and Terrence J. Sejnowski. "Speech Enhancement Using Gaussian Scale Mixture Models." IEEE Transactions on Audio, Speech, and Language Processing 18, no. 6 (August 2010): 1127–36. http://dx.doi.org/10.1109/tasl.2009.2030012.

40

Fiez, Tanner, and Lillian J. Ratliff. "Gaussian Mixture Models for Parking Demand Data." IEEE Transactions on Intelligent Transportation Systems 21, no. 8 (August 2020): 3571–80. http://dx.doi.org/10.1109/tits.2019.2939499.

41

Burges, Christopher John. "Discriminative Gaussian mixture models for speaker verification." Journal of the Acoustical Society of America 113, no. 5 (2003): 2393. http://dx.doi.org/10.1121/1.1584172.

42

Morgan, Grant B. "Generating Nonnormal Distributions via Gaussian Mixture Models." Structural Equation Modeling: A Multidisciplinary Journal 27, no. 6 (February 5, 2020): 964–74. http://dx.doi.org/10.1080/10705511.2020.1718502.

43

Reynolds, Douglas A., Thomas F. Quatieri, and Robert B. Dunn. "Speaker Verification Using Adapted Gaussian Mixture Models." Digital Signal Processing 10, no. 1-3 (January 2000): 19–41. http://dx.doi.org/10.1006/dspr.1999.0361.

44

Chuan, Ching-Hua. "Audio Classification and Retrieval Using Wavelets and Gaussian Mixture Models." International Journal of Multimedia Data Engineering and Management 4, no. 1 (January 2013): 1–20. http://dx.doi.org/10.4018/jmdem.2013010101.

Abstract:
This paper presents an audio classification and retrieval system that uses wavelets to extract low-level acoustic features. The author performs multiple-level decomposition using the discrete wavelet transform to extract acoustic features from audio recordings at different scales and times, and translates the extracted features into a compact vector representation. Gaussian mixture models with the expectation-maximization algorithm are used to build models for audio classes and individual audio examples. The system is evaluated on three audio classification tasks: speech/music, male/female speech, and music genre. The paper also shows how wavelets and Gaussian mixture models are used for class-based audio retrieval in two approaches: indexing using only wavelets versus indexing by Gaussian components. Evaluating the system through 10-fold cross-validation demonstrates the promising capability of wavelets and Gaussian mixture models for audio classification and retrieval, and the paper compares how parameters including frame size, wavelet level, Gaussian components, and sampling size affect performance.
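The pipeline described above (wavelet decomposition, compact feature vector, per-class GMMs scored by likelihood) can be sketched generically. The block below uses PyWavelets and scikit-learn with log subband-energy features on synthetic signals; the paper's actual frame-based features, parameter choices, and datasets differ, and the two "classes" here are invented stand-ins.

```python
# Sketch of wavelet-feature + per-class GMM classification: decompose each
# signal with a discrete wavelet transform, use log subband energies as the
# feature vector, fit one GMM per class, classify by highest likelihood.
import numpy as np
import pywt
from sklearn.mixture import GaussianMixture

def features(sig, wavelet="db4", level=4):
    coeffs = pywt.wavedec(sig, wavelet, level=level)
    return np.log([np.mean(c ** 2) for c in coeffs])  # energy per subband

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 1024)
make = {  # two synthetic "audio" classes: clean low tone vs. noisy high tone
    "low": lambda: np.sin(2 * np.pi * 20 * t) + 0.1 * rng.standard_normal(1024),
    "high": lambda: np.sin(2 * np.pi * 200 * t) + 0.5 * rng.standard_normal(1024),
}
train = {c: np.array([features(f()) for _ in range(50)]) for c, f in make.items()}
models = {c: GaussianMixture(n_components=2, random_state=0).fit(F)
          for c, F in train.items()}

test = features(make["high"]())
print(max(models, key=lambda c: models[c].score(test.reshape(1, -1))))
```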
45

Améndola, Carlos, Alexander Engström, and Christian Haase. "Maximum number of modes of Gaussian mixtures." Information and Inference: A Journal of the IMA 9, no. 3 (June 29, 2019): 587–600. http://dx.doi.org/10.1093/imaiai/iaz013.

Abstract:
Gaussian mixture models are widely used in statistics. A fundamental aspect of these distributions is the study of the local maxima of the density, or modes. In particular, it is not known how many modes a mixture of k Gaussians in d dimensions can have. We give a brief account of this problem's history. Then, we give improved lower bounds and the first upper bound on the maximum number of modes, provided it is finite.
46

Zeng, Xiaoying, and Eugene Pinsky. "Elliptical Mixture Models Improve the Accuracy of Gaussian Mixture Models with Expectation-maximization Algorithm." International Journal on Cybernetics & Informatics 14, no. 2 (March 28, 2025): 87–106. https://doi.org/10.5121/ijci.2025.140206.

Abstract:
This study addresses the limitations of Gaussian Mixture Models (GMMs) in clustering complex datasets and proposes Elliptical Mixture Models (EMMs) as a robust and flexible alternative. By adapting the Expectation-Maximization (EM) algorithm to handle elliptical distributions, the study introduces a novel computational framework that enhances clustering performance for data with irregular shapes and heavy tails. Leveraging the integration of R's advanced statistical tools into Python workflows, this approach enables practical implementation of EMMs. Empirical evaluations on three datasets (Rice, Customer Churn, and Glass Identification) demonstrate the superiority of EMMs over GMMs across multiple metrics, including Weighted Average Purity, Dunn Index, Rand Index, and Silhouette Score. The research highlights EMMs as a valuable tool for advanced clustering tasks and provides insights into their potential applications in handling real-world datasets with complex covariance structures.
47

Zhao, Mingyang, Xiaohong Jia, Lubin Fan, Yuan Liang, and Dong-Ming Yan. "Robust Ellipse Fitting Using Hierarchical Gaussian Mixture Models." IEEE Transactions on Image Processing 30 (2021): 3828–43. http://dx.doi.org/10.1109/tip.2021.3065799.

48

Belton, D., S. Moncrieff, and J. Chapman. "Processing tree point clouds using Gaussian Mixture Models." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences II-5/W2 (October 16, 2013): 43–48. http://dx.doi.org/10.5194/isprsannals-ii-5-w2-43-2013.

49

Nishimoto, Hiroki, Renyuan Zhang, and Yasuhiko Nakashima. "GPGPU Implementation of Variational Bayesian Gaussian Mixture Models." IEICE Transactions on Information and Systems E105.D, no. 3 (March 1, 2022): 611–22. http://dx.doi.org/10.1587/transinf.2021edp7121.

50

Liang, Xi-Long, Yu-Qin Chen, Jing-Kun Zhao, and Gang Zhao. "Partitioning the Galactic halo with Gaussian Mixture Models." Research in Astronomy and Astrophysics 21, no. 5 (June 1, 2021): 128. http://dx.doi.org/10.1088/1674-4527/21/5/128.
