A selection of scientific literature on the topic "Mixture of Gaussian"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Browse the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Mixture of Gaussian."

Next to every work in the bibliography, an "Add to bibliography" option is available. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication in PDF format and read an online annotation of the work, provided the relevant parameters are available in the metadata.

Journal articles on the topic "Mixture of Gaussian"

1

Zickert, Gustav, and Can Evren Yarman. "Gaussian mixture model decomposition of multivariate signals." Signal, Image and Video Processing 16, no. 2 (2021): 429–36. http://dx.doi.org/10.1007/s11760-021-01961-y.

Annotation:
We propose a greedy variational method for decomposing a non-negative multivariate signal as a weighted sum of Gaussians, which, borrowing the terminology from statistics, we refer to as a Gaussian mixture model. Notably, our method has the following features: (1) It accepts multivariate signals, i.e., sampled multivariate functions, histograms, time series, images, etc., as input. (2) The method can handle general (i.e., ellipsoidal) Gaussians. (3) No prior assumption on the number of mixture components is needed. To the best of our knowledge, no previous method for Gaussian mixture model decomposition simultaneously enjoys all these features. We also prove an upper bound, which cannot be improved by a global constant, for the distance from any mode of a Gaussian mixture model to the set of corresponding means. For mixtures of spherical Gaussians with common variance $\sigma^2$, the bound takes the simple form $\sqrt{n}\sigma$. We evaluate our method on one- and two-dimensional signals. Finally, we discuss the relation between clustering and signal decomposition, and compare our method to the baseline expectation-maximization algorithm.
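As a point of reference for the baseline mentioned at the end of this abstract, the decomposition can be emulated with ordinary EM: treat the non-negative signal as an unnormalized density, draw pseudo-samples from it, and fit a standard Gaussian mixture. The sketch below does that with scikit-learn; it is not the authors' greedy variational method, and the synthetic two-bump signal and all parameter choices are assumptions.

```python
# Baseline EM decomposition of a synthetic 1-D signal (not the paper's
# greedy variational method): sample locations in proportion to signal
# amplitude, then fit a Gaussian mixture to the pseudo-samples.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 1000)
signal = (2.0 * np.exp(-0.5 * ((x - 3.0) / 0.5) ** 2)
          + 1.0 * np.exp(-0.5 * ((x - 7.0) / 1.0) ** 2))

p = signal / signal.sum()                      # signal as a discrete density
samples = rng.choice(x, size=20_000, p=p).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(samples)
print("means:", gmm.means_.ravel())                   # ~3.0 and ~7.0
print("stddevs:", np.sqrt(gmm.covariances_).ravel())  # ~0.5 and ~1.0
print("weights:", gmm.weights_)                       # ~0.5 each (equal areas)
```

Note that fixing n_components in advance is precisely the assumption the paper's method avoids (its feature 3).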
2

Ma, Jinwen, and Taijun Wang. "Entropy Penalized Automated Model Selection on Gaussian Mixture." International Journal of Pattern Recognition and Artificial Intelligence 18, no. 08 (2004): 1501–12. http://dx.doi.org/10.1142/s0218001404003812.

Annotation:
Gaussian mixture modeling is a powerful approach to data analysis, and determining the number of Gaussians, or clusters, is the problem of Gaussian mixture model selection, which has been investigated from several perspectives. This paper proposes a new kind of automated model selection algorithm for Gaussian mixture modeling via entropy-penalized maximum-likelihood estimation. Experiments demonstrate that the proposed algorithm can carry out model selection automatically during parameter estimation, with the mixing proportions of the extra Gaussians attenuating to zero. Compared with the BYY automated model selection algorithms, it converges more stably and accurately as the number of samples becomes large.
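The behavior described here, with surplus mixing proportions attenuating to zero during estimation, can be reproduced with an off-the-shelf analogue. The sketch below uses scikit-learn's variational BayesianGaussianMixture, which suppresses extra components through a Dirichlet weight prior rather than the paper's entropy penalty; the synthetic data and the 0.05 pruning threshold are arbitrary illustrative choices.

```python
# Automated selection of the number of components by weight attenuation:
# start with deliberately many components and keep only those whose mixing
# proportions stay away from zero. Not the paper's entropy-penalized
# estimator, but qualitatively the same behavior.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-4, 1, 500),
                       rng.normal(0, 1, 500),
                       rng.normal(5, 1, 500)]).reshape(-1, 1)

bgmm = BayesianGaussianMixture(
    n_components=10,                  # deliberately too many
    weight_concentration_prior=0.01,  # favors few active components
    max_iter=500,
    random_state=1,
).fit(data)

active = bgmm.weights_ > 0.05
print("effective components:", active.sum())          # typically 3
print("their means:", np.sort(bgmm.means_[active].ravel()))
```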
3

Daemi, Atefeh, Hariprasad Kodamana, and Biao Huang. "Gaussian process modelling with Gaussian mixture likelihood." Journal of Process Control 81 (September 2019): 209–20. http://dx.doi.org/10.1016/j.jprocont.2019.06.007.

4

Ma, Jinwen, Lei Xu, and Michael I. Jordan. "Asymptotic Convergence Rate of the EM Algorithm for Gaussian Mixtures." Neural Computation 12, no. 12 (2000): 2881–907. http://dx.doi.org/10.1162/089976600300014764.

Annotation:
It is well known that the convergence rate of the expectation-maximization (EM) algorithm can be faster than those of conventional first-order iterative algorithms when the overlap in the given mixture is small, but this argument had not been mathematically proved. This article studies the problem asymptotically in the setting of Gaussian mixtures under the theoretical framework of Xu and Jordan (1996). It is proved that the asymptotic convergence rate of the EM algorithm for Gaussian mixtures locally around the true solution $\Theta^*$ is $o(e^{0.5-\varepsilon}(\Theta^*))$, where $\varepsilon > 0$ is an arbitrarily small number, $o(x)$ means that it is a higher-order infinitesimal as $x \to 0$, and $e(\Theta^*)$ is a measure of the average overlap of the Gaussians in the mixture. In other words, the large-sample local convergence rate of the EM algorithm tends to be asymptotically superlinear as $e(\Theta^*)$ tends to zero.
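The claimed overlap dependence is easy to observe numerically: EM stops after a handful of iterations on a well-separated mixture but needs many more when the components overlap heavily. A small check with scikit-learn's iteration counter (synthetic data; an illustration, not a reproduction of the paper's analysis):

```python
# Fit the same two-component model to a well-separated and a heavily
# overlapping mixture, and compare how many EM iterations were needed.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

def em_iterations(separation):
    data = np.concatenate([rng.normal(0.0, 1.0, 2000),
                           rng.normal(separation, 1.0, 2000)]).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, tol=1e-6, max_iter=1000,
                          random_state=0).fit(data)
    return gmm.n_iter_

print("small overlap (separation 8):", em_iterations(8.0), "iterations")
print("large overlap (separation 1):", em_iterations(1.0), "iterations")
```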
5

Fabisch, Alexander. "gmr: Gaussian Mixture Regression." Journal of Open Source Software 6, no. 62 (2021): 3054. http://dx.doi.org/10.21105/joss.03054.

6

Ali-Loytty, Simo Sakari. "Box Gaussian Mixture Filter." IEEE Transactions on Automatic Control 55, no. 9 (2010): 2165–69. http://dx.doi.org/10.1109/tac.2010.2051486.

7

Maragakis, Paul, Arjan van der Vaart, and Martin Karplus. "Gaussian-Mixture Umbrella Sampling." Journal of Physical Chemistry B 113, no. 14 (2009): 4664–73. http://dx.doi.org/10.1021/jp808381s.

8

Freitas, Breno L., Renato M. Silva, and Tiago A. Almeida. "Gaussian Mixture Descriptors Learner." Knowledge-Based Systems 188 (January 2020): 105039. http://dx.doi.org/10.1016/j.knosys.2019.105039.

9

Ju, Zhaojie, and Honghai Liu. "Fuzzy Gaussian Mixture Models." Pattern Recognition 45, no. 3 (2012): 1146–58. http://dx.doi.org/10.1016/j.patcog.2011.08.028.

10

McNicholas, Paul David, and Thomas Brendan Murphy. "Parsimonious Gaussian mixture models." Statistics and Computing 18, no. 3 (2008): 285–96. http://dx.doi.org/10.1007/s11222-008-9056-0.


Dissertations on the topic "Mixture of Gaussian"

1

Kunkel, Deborah Elizabeth. "Anchored Bayesian Gaussian Mixture Models." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1524134234501475.

2

Nkadimeng, Calvin. "Language identification using Gaussian mixture models." Thesis, Stellenbosch : University of Stellenbosch, 2010. http://hdl.handle.net/10019.1/4170.

Annotation:
Thesis (MScEng (Electrical and Electronic Engineering))--University of Stellenbosch, 2010. The importance of language identification for African languages is increasing dramatically due to the development of telecommunication infrastructure and the resulting growth in the volume of data and speech traffic in public networks. By automatically processing raw speech data, the vital assistance given to people in distress can be sped up by referring their calls to a person knowledgeable in that language. To this end, a speech corpus was developed, and various algorithms were implemented and tested on raw telephone speech data. These algorithms entailed data preparation, signal processing, and statistical analysis aimed at discriminating between languages. Gaussian mixture models (GMMs) were chosen as the statistical model for this research due to their ability to represent an entire language with a single stochastic model that does not require phonetic transcription. Language identification for African languages using GMMs is feasible, although a few challenges remain to be overcome, such as proper classification and an accurate study of the relationships between languages. Other methods that make use of phonetically transcribed data need to be explored and tested with the new corpus for the research to be more rigorous.
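In outline, the GMM recipe this thesis applies is: train one mixture per language on acoustic feature vectors and label an utterance with the language whose model assigns the highest total log-likelihood. A minimal sketch, with random arrays standing in for real MFCC features (feature extraction and the actual corpus are out of scope here):

```python
# One GMM per language; classify an utterance by the model with the
# highest summed per-frame log-likelihood. Random arrays stand in for
# real acoustic features.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
train = {  # language -> (n_frames, n_features) feature matrix
    "lang_a": rng.normal(0.0, 1.0, (5000, 13)),
    "lang_b": rng.normal(0.5, 1.2, (5000, 13)),
}

models = {lang: GaussianMixture(n_components=16, covariance_type="diag",
                                random_state=0).fit(feats)
          for lang, feats in train.items()}

def identify(utterance_feats):
    # Sum of per-frame log-likelihoods under each language model.
    scores = {lang: m.score_samples(utterance_feats).sum()
              for lang, m in models.items()}
    return max(scores, key=scores.get)

print(identify(rng.normal(0.5, 1.2, (300, 13))))   # expected: lang_b
```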
3

Gundersen, Terje. "Voice Transformation based on Gaussian mixture models." Thesis, Norwegian University of Science and Technology, Department of Electronics and Telecommunications, 2010. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-10878.

Annotation:
In this thesis, a probabilistic model for transforming a voice to sound like another specific voice is tested. The model is fully automatic and only requires some 100 training sentences from both speakers with the same acoustic content. The classical source-filter decomposition allows prosodic and spectral transformation to be performed independently. The transformations are based on a Gaussian mixture model and a transformation function suggested by Y. Stylianou. Feature vectors of the same content from the source and target speaker, aligned in time by dynamic time warping, are fitted to a GMM. The short-time spectra, represented as cepstral coefficients derived from LPC, and the pitch periods, represented as the fundamental frequency estimated with the RAPT algorithm, are transformed with the same probabilistic transformation function. Several techniques of spectrum and pitch transformation were assessed, in addition to some novel smoothing techniques for the fundamental frequency contour. The pitch transform was implemented on the excitation signal from the inverse LP filtering by time-domain PSOLA. The transformed spectrum parameters were used in the synthesis filter with the transformed excitation as input to yield the transformed voice. A listening test was performed with the best setup from the objective tests, and the results indicate that it is possible to recognise the transformed voice as the target speaker with 72% probability. However, the synthesised voice was affected by a muffling effect due to incorrect frequency transformation, and the prosody sounded somewhat robotic.
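The transformation function attributed to Stylianou that this abstract relies on is commonly written as a GMM-conditioned piecewise-linear map; the notation below is reconstructed from the voice-conversion literature, not quoted from the thesis:

```latex
% GMM-based conversion function (Stylianou-style), reconstructed notation.
F(x) = \sum_{i=1}^{m} P(c_i \mid x)\,\bigl[\nu_i + \Gamma_i \Sigma_i^{-1}(x - \mu_i)\bigr],
\qquad
P(c_i \mid x) = \frac{\alpha_i\, \mathcal{N}(x; \mu_i, \Sigma_i)}
                     {\sum_{j=1}^{m} \alpha_j\, \mathcal{N}(x; \mu_j, \Sigma_j)}
```

Here $(\alpha_i, \mu_i, \Sigma_i)$ are the weights, means, and covariances of the GMM fitted to the source features, while $\nu_i$ and $\Gamma_i$ are trained so that $F(x)$ maps source feature vectors onto their time-aligned target counterparts.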
4

Subramaniam, Anand D. "Gaussian mixture models in compression and communication /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2003. http://wwwlib.umi.com/cr/ucsd/fullcit?p3112847.

5

Marek, Petr. "Gaussian mixtures in R." Master's thesis, Vysoká škola ekonomická v Praze, 2015. http://www.nusl.cz/ntk/nusl-193077.

Annotation:
Using Gaussian mixtures is a popular and very flexible approach to statistical modelling. The standard approach of maximum likelihood estimation cannot be used for some of these models. The estimates are, however, obtainable by iterative solutions, such as the EM (Expectation-Maximization) algorithm. The aim of this thesis is to present Gaussian mixture models and their implementation in R. The non-trivial case of having to use the EM algorithm is assumed. Existing methods and packages are presented, investigated and compared. Some of them are extended by custom R code. Several exhaustive simulations are run and some of the interesting results are presented. For these simulations, a notion of usual fit is presented.
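The thesis works in R; as a language-agnostic reference for the "non-trivial case" it assumes, the EM iteration for a univariate two-component Gaussian mixture fits in a dozen lines. A Python sketch with made-up data and initial values:

```python
# Minimal EM for a univariate two-component Gaussian mixture.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 400), rng.normal(3, 1.5, 600)])

# Initial guesses for weights, means, and standard deviations.
w, mu, sigma = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])

for _ in range(200):
    # E-step: posterior responsibility of each component for each point.
    dens = w * norm.pdf(x[:, None], mu, sigma)          # shape (n, 2)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: weighted maximum-likelihood updates.
    nk = resp.sum(axis=0)
    w = nk / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

print(w, mu, sigma)   # approximately [0.4, 0.6], [-2, 3], [1, 1.5]
```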
6

Dahmen, Jörg. "Invariant image object recognition using Gaussian mixture densities." [S.l.] : [s.n.], 2001. http://deposit.ddb.de/cgi-bin/dokserv?idn=964586940.

7

Cilliers, Francois Dirk. "Tree-based Gaussian mixture models for speaker verification." Thesis, Link to the online version, 2005. http://hdl.handle.net/10019.1/1639.

8

Robbiati, Stefano Andrea. "Sequential Gaussian mixture techniques for target tracking applications." Thesis, Imperial College London, 2006. http://hdl.handle.net/10044/1/11886.

9

Lu, Liang. "Subspace Gaussian mixture models for automatic speech recognition." Thesis, University of Edinburgh, 2013. http://hdl.handle.net/1842/8065.

Annotation:
In most state-of-the-art speech recognition systems, Gaussian mixture models (GMMs) are used to model the density of the emitting states in the hidden Markov models (HMMs). In a conventional system, the model parameters of each GMM are estimated directly and independently given the alignment. This results in a large number of model parameters to be estimated and, consequently, a large amount of training data is required to fit the model. In addition, the different sources of acoustic variability that impact the accuracy of a recogniser, such as pronunciation variation, accent, speaker factors and environmental noise, are only weakly modelled and factorized by adaptation techniques such as maximum likelihood linear regression (MLLR), maximum a posteriori adaptation (MAP) and vocal tract length normalisation (VTLN). In this thesis, we discuss an alternative acoustic modelling approach, the subspace Gaussian mixture model (SGMM), which is expected to deal with these two issues better. In an SGMM, the model parameters are derived from low-dimensional model and speaker subspaces that can capture phonetic and speaker correlations. Given these subspaces, only a small number of state-dependent parameters are required to derive the corresponding GMMs. Hence, the total number of model parameters can be reduced, which allows acoustic modelling with a limited amount of training data. In addition, the SGMM-based acoustic model factorizes the phonetic and speaker factors, and within this framework other sources of acoustic variability may also be explored. In this thesis, we propose a regularised model estimation for SGMMs, which avoids overtraining when the training data is sparse. We also take advantage of the structure of SGMMs to explore cross-lingual acoustic modelling for low-resource speech recognition. Here, the model subspace is estimated from out-of-domain data and ported to the target-language system; in this case, only the state-dependent parameters need to be estimated, which relaxes the requirement on the amount of training data. To improve the robustness of SGMMs against environmental noise, we propose to apply the joint uncertainty decoding (JUD) technique, which is shown to be efficient and effective. We report experimental results on the Wall Street Journal (WSJ) database and GlobalPhone corpora to evaluate the regularisation and cross-lingual modelling of SGMMs. Noise compensation using JUD for SGMM acoustic models is evaluated on the Aurora 4 database.
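For reference, the SGMM parameterization this abstract summarizes is usually written as follows (reconstructed from the SGMM literature, e.g. Povey et al., rather than quoted from the thesis); $j$ indexes HMM states and $i$ the shared pool of Gaussians:

```latex
% Core SGMM parameterization; only v_j is state-specific.
p(\mathbf{x} \mid j) = \sum_{i=1}^{I} w_{ji}\,
    \mathcal{N}\!\left(\mathbf{x};\, \boldsymbol{\mu}_{ji}, \boldsymbol{\Sigma}_i\right),
\qquad
\boldsymbol{\mu}_{ji} = \mathbf{M}_i \mathbf{v}_j,
\qquad
w_{ji} = \frac{\exp\!\left(\mathbf{w}_i^{\top} \mathbf{v}_j\right)}
              {\sum_{i'=1}^{I} \exp\!\left(\mathbf{w}_{i'}^{\top} \mathbf{v}_j\right)}
```

Only the low-dimensional state vector $\mathbf{v}_j$ is state-specific; the subspace matrices $\mathbf{M}_i$, weight projections $\mathbf{w}_i$, and covariances $\boldsymbol{\Sigma}_i$ are shared globally, which is what reduces the state-dependent parameter count and enables the cross-lingual porting described above.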
10

Pinto, Rafael Coimbra. "Continuous reinforcement learning with incremental Gaussian mixture models." Biblioteca Digital de Teses e Dissertações da UFRGS, 2017. http://hdl.handle.net/10183/157591.

Annotation:
This thesis' original contribution is a novel algorithm which integrates a data-efficient function approximator with reinforcement learning in continuous state spaces. The complete research includes the development of a scalable online and incremental algorithm capable of learning from a single pass through data. This algorithm, called the Fast Incremental Gaussian Mixture Network (FIGMN), was employed as a sample-efficient function approximator for the state space of continuous reinforcement learning tasks, which, combined with linear Q-learning, results in competitive performance. Then, this same function approximator was employed to model the joint state and Q-values space, all in a single FIGMN, resulting in a concise and data-efficient algorithm, i.e., a reinforcement learning algorithm that learns from very few interactions with the environment. A single episode is enough to learn the investigated tasks in most trials. The results are analysed in order to explain the properties of the obtained algorithm, and it is observed that the use of the FIGMN function approximator brings some important advantages to reinforcement learning in relation to conventional neural networks.
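The joint-space idea in this abstract, learning one mixture over (state, Q-value) pairs and reading Q-values off by conditioning, is essentially Gaussian mixture regression. A sketch using the gmr package cited among the journal articles above; note that FIGMN itself is online and incremental, which this batch fit is not, and the toy data is an assumption:

```python
# Gaussian mixture regression over a joint (state, Q-value) space:
# fit a mixture to the joint samples, then predict Q by conditioning
# on the state dimension.
import numpy as np
from gmr import GMM

rng = np.random.default_rng(0)
states = rng.uniform(-1.0, 1.0, (1000, 1))
q_values = np.sin(3.0 * states) + rng.normal(0.0, 0.05, (1000, 1))

joint = np.hstack([states, q_values])       # model the joint (s, Q) space
gmm = GMM(n_components=5, random_state=0)
gmm.from_samples(joint)

# Predict Q for a new state by conditioning on dimension 0 (the state).
q_pred = gmm.predict(np.array([0]), np.array([[0.5]]))
print("Q(0.5) ~", q_pred.ravel()[0])        # near sin(1.5) ~ 1.0
```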

Books on the topic "Mixture of Gaussian"

1

Krishna, M. Vamsi. Brain Tumor Segmentation Using Bivariate Gaussian Mixture Models. Selfypage Developers Pvt Ltd, 2022.

2

Gaussian Mixture Reduction for Tracking Multiple Maneuvering Targets in Clutter. Storming Media, 2003.

3

Speaker Verification in the Presence of Channel Mismatch Using Gaussian Mixture Models. Storming Media, 1997.

4

Cheng, Russell. Finite Mixture Examples; MAPIS Details. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198505044.003.0018.

Annotation:
Two detailed numerical examples are given in this chapter illustrating and comparing mainly the reversible jump Markov chain Monte Carlo (RJMCMC) and the maximum a posteriori/importance sampling (MAPIS) methods. The numerical examples are the well-known galaxy data set with sample size 82, and the Hidalgo stamp issues thickness data with sample size 485. A comparison is made of the estimates obtained by the RJMCMC and MAPIS methods for (i) the posterior k-distribution of the number of components, k, (ii) the predictive finite mixture distribution itself, and (iii) the posterior distributions of the component parameters and weights. The estimates obtained by MAPIS are shown to be more satisfactory and meaningful. Details are given of the practical implementation of MAPIS for five non-normal mixture models, namely: the extreme value, gamma, inverse Gaussian, lognormal, and Weibull. Mathematical details are also given of the acceptance-rejection importance sampling used in MAPIS.
5

Wilson, Roland. Multiresolution Gaussian Mixtures in Image Analysis (Synthesis Lectures on Image, Video, and Multimedia Processing). Morgan & Claypool Publishers, 2007.


Book chapters on the topic "Mixture of Gaussian"

1

Yu, Dong, and Li Deng. "Gaussian Mixture Models." In Automatic Speech Recognition. Springer London, 2014. http://dx.doi.org/10.1007/978-1-4471-5779-3_2.

2

Reynolds, Douglas. "Gaussian Mixture Models." In Encyclopedia of Biometrics. Springer US, 2009. http://dx.doi.org/10.1007/978-0-387-73003-5_196.

3

Reynolds, Douglas. "Gaussian Mixture Models." In Encyclopedia of Biometrics. Springer US, 2015. http://dx.doi.org/10.1007/978-1-4899-7488-4_196.

4

Sarang, Poornachandra. "Gaussian Mixture Model." In Thinking Data Science. Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-02363-7_11.

5

Aylward, Stephen, and Stephen Pizer. "Continuous Gaussian mixture modeling." In Lecture Notes in Computer Science. Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/3-540-63046-5_14.

6

Liu, Honghai, Zhaojie Ju, Xiaofei Ji, Chee Seng Chan, and Mehdi Khoury. "Fuzzy Gaussian Mixture Models." In Human Motion Sensing and Recognition. Springer Berlin Heidelberg, 2017. http://dx.doi.org/10.1007/978-3-662-53692-6_5.

7

Bourrier, Anthony, Rémi Gribonval, and Patrick Pérez. "Compressive Gaussian Mixture Estimation." In Compressed Sensing and its Applications. Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-16042-9_8.

8

Lee, Hyoung-joo, and Sungzoon Cho. "Combining Gaussian Mixture Models." In Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-28651-6_98.

9

Oliva, Diego, Mohamed Abd Elaziz, and Salvador Hinojosa. "Image Segmentation by Gaussian Mixture." In Metaheuristic Algorithms for Image Segmentation: Theory and Applications. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-12931-6_12.

10

Nickisch, Hannes, and Carl Edward Rasmussen. "Gaussian Mixture Modeling with Gaussian Process Latent Variable Models." In Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15986-2_28.


Conference papers on the topic "Mixture of Gaussian"

1

Sulam, Jeremias, Yaniv Romano, and Michael Elad. "Gaussian mixture diffusion." In 2016 IEEE International Conference on the Science of Electrical Engineering (ICSEE). IEEE, 2016. http://dx.doi.org/10.1109/icsee.2016.7806173.

2

Garcia, Vincent, Frank Nielsen, and Richard Nock. "Hierarchical Gaussian mixture model." In 2010 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2010. http://dx.doi.org/10.1109/icassp.2010.5495750.

3

Bourrier, Anthony, Remi Gribonval, and Patrick Perez. "Compressive Gaussian Mixture estimation." In ICASSP 2013 - 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2013. http://dx.doi.org/10.1109/icassp.2013.6638821.

4

Pohjalainen, Jouni, and Paavo Alku. "Gaussian mixture linear prediction." In ICASSP 2014 - 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2014. http://dx.doi.org/10.1109/icassp.2014.6854813.

5

Psiaki, Mark, Jonathan Schoenberg, and Isaac Miller. "Gaussian Mixture Approximation by Another Gaussian Mixture for "Blob" Filter Re-Sampling." In AIAA Guidance, Navigation, and Control Conference. American Institute of Aeronautics and Astronautics, 2010. http://dx.doi.org/10.2514/6.2010-7747.

6

Zhao, Yanpeng, Liwen Zhang, and Kewei Tu. "Gaussian Mixture Latent Vector Grammars." In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, 2018. http://dx.doi.org/10.18653/v1/p18-1109.

7

Jia, Bin, Ming Xin, and Tao Sun. "Distributed Cubature Gaussian Mixture Filters." In AIAA Guidance, Navigation, and Control Conference. American Institute of Aeronautics and Astronautics, 2017. http://dx.doi.org/10.2514/6.2017-1262.

8

Gong, Dayong, and Zhihua Wang. "An improved Gaussian mixture model." In 2012 International Conference on Graphic and Image Processing, edited by Zeng Zhu. SPIE, 2013. http://dx.doi.org/10.1117/12.2010876.

9

Maas, Ryan, Jeremy Hyrkas, Olivia Grace Telford, Magdalena Balazinska, Andrew Connolly, and Bill Howe. "Gaussian Mixture Models Use-Case." In the 3rd VLDB Workshop. ACM Press, 2015. http://dx.doi.org/10.1145/2803140.2803143.

10

Zhang, Hongwei, Qi Hu, and Liyan Chen. "Minimax Gaussian Mixture Particle Filtering." In 2022 16th IEEE International Conference on Signal Processing (ICSP). IEEE, 2022. http://dx.doi.org/10.1109/icsp56322.2022.9965297.


Reports by organizations on the topic "Mixture of Gaussian"

1

Yu, Guoshen, and Guillermo Sapiro. Statistical Compressive Sensing of Gaussian Mixture Models. Defense Technical Information Center, 2010. http://dx.doi.org/10.21236/ada540728.

2

De Leon, Phillip L., and Richard D. McClanahan. Efficient speaker verification using Gaussian mixture model component clustering. Office of Scientific and Technical Information (OSTI), 2012. http://dx.doi.org/10.2172/1039402.

3

Hogden, J., and J. C. Scovel. MALCOM X: Combining maximum likelihood continuity mapping with Gaussian mixture models. Office of Scientific and Technical Information (OSTI), 1998. http://dx.doi.org/10.2172/677150.

4

Gerlach, K. R. Detection Performance of Signals in Dependent Noise From a Gaussian Mixture Uncertainty Class. Defense Technical Information Center, 1998. http://dx.doi.org/10.21236/ada352456.

5

Yu, Guoshen, Guillermo Sapiro, and Stephane Mallat. Solving Inverse Problems with Piecewise Linear Estimators: From Gaussian Mixture Models to Structured Sparsity. Defense Technical Information Center, 2010. http://dx.doi.org/10.21236/ada540722.

6

Lee, Jhong S., Leonard E. Miller, Robert H. French, and Young K. Kim. Ocean Surveillance Detection Studies. Part 1. Detection in Gaussian Mixture Noise. Part 2. An Investigation of Canonical Correlation as an Automatic Detection and Beamforming Technique. Defense Technical Information Center, 1985. http://dx.doi.org/10.21236/ada160931.

7

Baker, C. R., and A. F. Gualtierotti. Absolute Continuity and Mutual Information for Gaussian Mixtures. Defense Technical Information Center, 1988. http://dx.doi.org/10.21236/ada215185.

8

Jordan, Michael, and Lei Xu. On Convergence Properties of the EM Algorithm for Gaussian Mixtures. Defense Technical Information Center, 1995. http://dx.doi.org/10.21236/ada295637.

9

Ramakrishnan, Aravind, Ashraf Alrajhi, Egemen Okte, Hasan Ozer, and Imad Al-Qadi. Truck-Platooning Impacts on Flexible Pavements: Experimental and Mechanistic Approaches. Illinois Center for Transportation, 2021. http://dx.doi.org/10.36501/0197-9191/21-038.

Annotation:
Truck platoons are expected to improve safety and reduce fuel consumption. However, their use is projected to accelerate pavement damage due to channelized load application (lack of wander) and a potentially reduced duration between truck-loading applications (reduced rest period). The effect of wander on pavement damage is well documented, while relatively few studies are available on the effect of rest period on pavement permanent deformation. Therefore, the main objective of this study was to quantify the impact of rest period theoretically, using a numerical method, and experimentally, using laboratory testing. A 3-D finite-element (FE) pavement model was developed and run to quantify the effect of rest period. Strain recovery and accumulation were predicted by fitting Gaussian mixture models to the strain values computed from the FE model. The effect of rest period was found to be insignificant for truck spacings greater than 10 ft. An experimental program was conducted, and several asphalt concrete (AC) mixes were tested at various stress levels, temperatures, and rest periods. Test results showed that AC deformation increased with rest period, irrespective of AC-mix type, stress level, and/or temperature. This observation was attributed to a well-documented hardening-relaxation mechanism that occurs during AC plastic deformation. Hence, the experimental and FE-model results conflict, because the FE model treats AC as viscoelastic and because of the difference in loading mechanism. A shift model was developed by extending the time-temperature superposition concept to incorporate rest period, using the experimental data. The shift factors were used to compute the equivalent number of cycles for various platoon scenarios (truck spacings or rest periods). The shift model was implemented in the AASHTOware pavement mechanistic-empirical design (PMED) guidelines for the calculation of rutting using the equivalent number of cycles.
10

Wilson, D., Matthew Kamrath, Caitlin Haedrich, Daniel Breton, and Carl Hart. Urban noise distributions and the influence of geometric spreading on skewness. Engineer Research and Development Center (U.S.), 2021. http://dx.doi.org/10.21079/11681/42483.

Annotation:
Statistical distributions of urban noise levels are influenced by many complex phenomena, including spatial and temporal variations in the source level, multisource mixtures, propagation losses, and random fading from multipath reflections. This article provides a broad perspective on the varying impacts of these phenomena. Distributions incorporating random fading and averaging (e.g., gamma and noncentral Erlang) tend to be negatively skewed on logarithmic (decibel) axes but can be positively skewed if the fading process is strongly modulated by source power variations (e.g., compound gamma). In contrast, distributions incorporating randomly positioned sources and explicit geometric spreading [e.g., exponentially modified Gaussian (EMG)] tend to be positively skewed with exponential tails on logarithmic axes. To evaluate the suitability of the various distributions, one-third octave band sound-level data were measured at 37 locations in the North End of Boston, MA. Based on the Kullback-Leibler divergence as calculated across all of the locations and frequencies, the EMG provides the most consistently good agreement with the data, which were generally positively skewed. The compound gamma also fits the data well and even outperforms the EMG for the small minority of cases exhibiting negative skew. The lognormal provides a suitable fit in cases in which particular non-traffic noise sources dominate.
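The EMG distribution favored in this report is available in SciPy as exponnorm, which makes the reported positive skew easy to reproduce. A sketch on synthetic stand-in data (not the Boston measurements):

```python
# Fit an exponentially modified Gaussian (EMG) to decibel-scale data and
# check its skewness; the "sound levels" here are synthetic stand-ins.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
levels_db = stats.exponnorm.rvs(K=2.0, loc=55.0, scale=3.0,
                                size=5000, random_state=rng)

K, loc, scale = stats.exponnorm.fit(levels_db)
print(f"fitted EMG: K={K:.2f}, loc={loc:.1f} dB, scale={scale:.1f} dB")
print("sample skewness:", stats.skew(levels_db))   # positive, as reported
```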