Selected scientific literature on the topic "Hidden markov model training"

Cite a source in APA, MLA, Chicago, Harvard, and many other citation styles

Choose the source type:

Browse the list of current articles, books, theses, conference proceedings, and other scholarly sources on the topic "Hidden markov model training".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf and read the work's abstract online, if it is available in the metadata.

Journal articles on the topic "Hidden markov model training"

1

Tarnas, C., and R. Hughey. "Reduced space hidden Markov model training". Bioinformatics 14, no. 5 (June 1, 1998): 401–6. http://dx.doi.org/10.1093/bioinformatics/14.5.401.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Yi, Jie. "Method and apparatus for training hidden Markov model". Journal of the Acoustical Society of America 107, no. 5 (2000): 2324. http://dx.doi.org/10.1121/1.428600.

Full text
3

Grewal, Jasleen K., Martin Krzywinski, and Naomi Altman. "Markov models — training and evaluation of hidden Markov models". Nature Methods 17, no. 2 (February 2020): 121–22. http://dx.doi.org/10.1038/s41592-019-0702-6.

Full text
4

Kwong, S., Q. H. He, and K. F. Man. "Training approach for hidden Markov models". Electronics Letters 32, no. 17 (1996): 1554. http://dx.doi.org/10.1049/el:19961080.

Full text
5

Yulin, Sergey S., and Irina N. Palamar. "Probability Model Based on Cluster Analysis to Classify Sequences of Observations for Small Training Sets". Statistics, Optimization & Information Computing 8, no. 1 (February 18, 2020): 296–303. http://dx.doi.org/10.19139/soic-2310-5070-690.

Full text
Abstract:
The problem of recognizing patterns when few training data are available is particularly relevant and arises in cases where the collection of training data is expensive or essentially impossible. The work proposes a new probability model, MC&CL (Markov Chain and Clusters), based on a combination of a Markov chain and a clustering algorithm (Kohonen self-organizing map, k-means method), to solve the problem of classifying sequences of observations when the amount of training data is low. An experimental comparison is made between the developed model (MC&CL) and a number of other popular models for classifying sequences: HMM (Hidden Markov Model), HCRF (Hidden Conditional Random Fields), LSTM (Long Short-Term Memory), and kNN+DTW (k-Nearest Neighbors algorithm + Dynamic Time Warping algorithm). The comparison uses synthetic random sequences generated from a hidden Markov model, with noise added to the training specimens. The suggested model is shown to achieve the best classification accuracy among those under review when the amount of training data is low.
6

Kan-Wing Mak, B., and E. Bocchieri. "Direct training of subspace distribution clustering hidden Markov model". IEEE Transactions on Speech and Audio Processing 9, no. 4 (May 2001): 378–87. http://dx.doi.org/10.1109/89.917683.

Full text
7

Yuan, Shenfang, Jinjin Zhang, Jian Chen, Lei Qiu, and Weibo Yang. "A uniform initialization Gaussian mixture model–based guided wave–hidden Markov model with stable damage evaluation performance". Structural Health Monitoring 18, no. 3 (June 29, 2018): 853–68. http://dx.doi.org/10.1177/1475921718783652.

Full text
Abstract:
In practical applications, time-varying service conditions usually lead to difficulties in properly interpreting structural health monitoring signals. The guided wave–hidden Markov model–based damage evaluation method is a promising approach to addressing the uncertainties caused by time-varying service conditions. However, the research performed to date is not comprehensive. Most studies did not introduce serious time-varying factors, such as those that exist in reality, and the hidden Markov model was applied directly without deep consideration of improving the performance of the hidden Markov model itself. In this article, the training stability problem that arises when the guided wave–hidden Markov model is initialized by the commonly adopted k-means clustering method, and its influence on damage evaluation, were first investigated by applying the model to fatigue crack propagation evaluation of an attachment lug. After illustrating the stability problem induced by k-means clustering, a novel uniform initialization Gaussian mixture model–based guided wave–hidden Markov model is proposed that provides steady and reliable construction of the guided wave–hidden Markov model. The advantage of the proposed method is demonstrated by lug fatigue crack propagation evaluation experiments.
8

Gales, M. J. F. "Cluster adaptive training of hidden Markov models". IEEE Transactions on Speech and Audio Processing 8, no. 4 (July 2000): 417–28. http://dx.doi.org/10.1109/89.848223.

Full text
9

Rabiner, Lawrence R., Chin‐Hui Lee, Biing‐Hwang Juang, David B. Roe, and Jay G. Wilpon. "Improved training procedures for hidden Markov models". Journal of the Acoustical Society of America 84, S1 (November 1988): S61. http://dx.doi.org/10.1121/1.2026404.

Full text
10

Narwal, Priti, Deepak Kumar, and Shailendra N. Singh. "A Hidden Markov Model Combined With Markov Games for Intrusion Detection in Cloud". Journal of Cases on Information Technology 21, no. 4 (October 2019): 14–26. http://dx.doi.org/10.4018/jcit.2019100102.

Full text
Abstract:
Cloud computing has evolved as a new paradigm for infrastructure management and has gained ample consideration in both industrial and academic research. A hidden Markov model (HMM) combined with Markov games can give a solution that may act as a countermeasure for many cyber security threats and malicious intrusions in a network or in a cloud. An HMM can be trained using training sequences obtained by analyzing the file traces of a packet analyzer such as the Wireshark network analyzer. In this article, the authors propose a model in which an HMM is built using a set of training examples obtained with a network analyzer (i.e., Wireshark). As it is not an intrusion detection system, the obtained file traces may be used as training examples to test an HMM model. The model also predicts a probability value for each tested sequence and states whether the sequence is anomalous or not. A numerical example is also shown that computes the most likely sequence of observations for both HMM and state sequence probabilities in the case where an HMM model is already given.
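The anomaly-scoring idea described in the abstract above can be sketched in a few lines: score each test sequence's likelihood under a trained HMM with the forward algorithm, and flag sequences whose per-symbol log-likelihood falls below a threshold. This is a minimal illustration assuming NumPy and a discrete-observation HMM; the matrices and threshold are hypothetical and not taken from the cited article.

```python
import numpy as np

def sequence_loglik(A, B, pi, obs):
    """Log-likelihood of an observation sequence under a discrete HMM,
    computed with the scaled forward algorithm.

    A: (N, N) transition matrix, B: (N, M) emission matrix,
    pi: (N,) initial state distribution, obs: list of symbol indices.
    """
    alpha = pi * B[:, obs[0]]          # unnormalized forward vector at t=0
    ll = np.log(alpha.sum())
    alpha /= alpha.sum()               # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # forward recursion
        s = alpha.sum()
        ll += np.log(s)                # accumulate log of scaling factors
        alpha /= s
    return ll

def is_anomalous(A, B, pi, obs, threshold):
    """Flag a sequence whose per-symbol log-likelihood is below threshold."""
    return bool(sequence_loglik(A, B, pi, obs) / len(obs) < threshold)
```

The per-symbol normalization makes the threshold comparable across sequences of different lengths, which matters when traces vary in size.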
More sources

Theses on the topic "Hidden markov model training"

1

McKee, Bill Frederick. "Optimal hidden Markov models". Thesis, University of Plymouth, 1999. http://hdl.handle.net/10026.1/1698.

Full text
Abstract:
In contrast with training algorithms such as Baum-Welch, which produce solutions that are a local optimum of the objective function, this thesis describes the attempt to develop a training algorithm which delivers the globally optimum Discrete Hidden Markov Model for a given training sequence. A total of four different methods of attack upon the problem are presented. First, after building the necessary analytical tools, the thesis presents a direct, calculus-based assault featuring Matrix Derivatives. Next, the dual analytic approach known as Geometric Programming is examined and then adapted to the task. After that, a hill-climbing formula is developed and applied. These first three methods reveal a number of interesting and useful insights into the problem. However, it is the fourth method which produces an algorithm that is then used for direct comparison with the Baum-Welch algorithm: examples of global optima are collected, examined for common features and patterns, and then a rule is induced. The resulting rule is implemented in 'C' and tested against a battery of Baum-Welch based programs. In the limited range of tests carried out to date, the models produced by the new algorithm yield optima which have not been surpassed by (and are typically much better than) the Baum-Welch models. However, far more analysis and testing is required, and in its current form the algorithm is not fast enough for realistic application.
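For readers unfamiliar with the baseline this thesis compares against: one Baum-Welch iteration computes posterior state occupancies with the scaled forward-backward recursions and re-estimates the parameters from them. The sketch below is a generic textbook version in NumPy, not the thesis's algorithm; as the abstract notes, it converges only to a local optimum of the likelihood.

```python
import numpy as np

def forward_backward(A, B, pi, obs):
    """Scaled forward-backward pass for a discrete-observation HMM.

    A: (N, N) transitions, B: (N, M) emissions, pi: (N,) initial
    distribution, obs: list of symbol indices.
    Returns scaled alpha, beta, and the sequence log-likelihood.
    """
    T, N = len(obs), A.shape[0]
    alpha, beta, scale = np.zeros((T, N)), np.zeros((T, N)), np.zeros(T)
    alpha[0] = pi * B[:, obs[0]]
    scale[0] = alpha[0].sum()
    alpha[0] /= scale[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        scale[t] = alpha[t].sum()
        alpha[t] /= scale[t]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1]) / scale[t + 1]
    return alpha, beta, np.log(scale).sum()

def baum_welch_step(A, B, pi, obs):
    """One Baum-Welch (EM) re-estimation step.

    Returns updated (A, B, pi) and the log-likelihood of obs under the
    INPUT parameters; EM guarantees this value never decreases across steps.
    """
    T = len(obs)
    alpha, beta, ll = forward_backward(A, B, pi, obs)
    gamma = alpha * beta                       # posterior state occupancies
    gamma /= gamma.sum(axis=1, keepdims=True)
    xi = np.zeros_like(A)                      # expected transition counts
    for t in range(T - 1):
        x = np.outer(alpha[t], B[:, obs[t + 1]] * beta[t + 1]) * A
        xi += x / x.sum()
    A_new = xi / xi.sum(axis=1, keepdims=True)
    o = np.asarray(obs)
    B_new = np.stack([gamma[o == k].sum(axis=0) for k in range(B.shape[1])],
                     axis=1)                   # expected emission counts
    B_new /= B_new.sum(axis=1, keepdims=True)
    return A_new, B_new, gamma[0].copy(), ll
```

Iterating `baum_welch_step` from different random initializations and keeping the best run is the usual (heuristic) workaround for the local-optimum problem the thesis attacks directly.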
2

Kapadia, Sadik. "Discriminative training of hidden Markov models". Thesis, University of Cambridge, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.624997.

Full text
3

Wilhelmsson, Anna, e Sofia Bedoire. "Driving Behavior Prediction by Training a Hidden Markov Model". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-291656.

Full text
Abstract:
When introducing automated vehicles into traffic with human drivers, human behavior prediction is essential to obtain operational safety. In this study, a human behavior estimation model has been developed. The estimations are based on a Hidden Markov Model (HMM), using observations to determine the driving style of surrounding vehicles. The model is trained using two different methods, Baum-Welch training and Viterbi training, to improve its performance. Both training methods are evaluated with respect to time complexity and convergence. The model is implemented with and without training and tested for different driving styles. Results show that training is essential for accurate human behavior prediction. Viterbi training is faster but more noise sensitive compared to Baum-Welch training. Also, Viterbi training produces good results if the training data reflect the currently observed driver, which is not always the case. Baum-Welch training is more robust in such situations. Lastly, Baum-Welch training is recommended to obtain operational safety when introducing automated vehicles into traffic.
Bachelor's degree project in electrical engineering, 2020, KTH, Stockholm
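The two training schemes this thesis compares differ in one step: Baum-Welch uses soft posterior state occupancies, while Viterbi training (hard EM, sometimes called segmental k-means) commits to the single best state path and re-estimates parameters by counting along it. Below is a minimal NumPy sketch of the Viterbi-training variant, with hypothetical toy parameters and add-constant smoothing to avoid zero probabilities; it is an illustration of the general technique, not the thesis's implementation.

```python
import numpy as np

def viterbi_path(A, B, pi, obs):
    """Most likely state path for a discrete HMM (log-space Viterbi)."""
    T, N = len(obs), A.shape[0]
    logA, logB = np.log(A), np.log(B)
    delta = np.log(pi) + logB[:, obs[0]]     # best log-score ending in each state
    psi = np.zeros((T, N), dtype=int)        # best predecessor pointers
    for t in range(1, T):
        cand = delta[:, None] + logA         # cand[i, j]: come from i, go to j
        psi[t] = cand.argmax(axis=0)
        delta = cand.max(axis=0) + logB[:, obs[t]]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):            # backtrack
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

def viterbi_training_step(A, B, pi, obs, smooth=1e-3):
    """One hard-EM step: decode the best path, then re-estimate by counting."""
    N, M = B.shape
    path = viterbi_path(A, B, pi, obs)
    A_c = np.full((N, N), smooth)            # smoothed transition counts
    B_c = np.full((N, M), smooth)            # smoothed emission counts
    for t in range(len(obs) - 1):
        A_c[path[t], path[t + 1]] += 1
    for t, o in enumerate(obs):
        B_c[path[t], o] += 1
    pi_new = np.full(N, smooth)
    pi_new[path[0]] += 1
    return (A_c / A_c.sum(axis=1, keepdims=True),
            B_c / B_c.sum(axis=1, keepdims=True),
            pi_new / pi_new.sum())
```

Counting along a single hard path is what makes Viterbi training faster, and also why it is more sensitive to noisy or unrepresentative training data, matching the trade-off reported in the abstract.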
4

Davis, Richard I. A. "Training Hidden Markov Models for spatio-temporal pattern recognition". [St. Lucia, Qld.], 2004. http://www.library.uq.edu.au/pdfserve.php?image=thesisabs/absthe18500.pdf.

Full text
5

Combrink, Jan Hendrik. "Discriminative training of hidden Markov Models for gesture recognition". Master's thesis, University of Cape Town, 2018. http://hdl.handle.net/11427/29267.

Full text
Abstract:
As homes and workplaces become increasingly automated, an efficient, inclusive and language-independent human-computer interaction mechanism will become more necessary. Isolated gesture recognition can be used to this end. Gesture recognition is a problem of modelling temporal data. Non-temporal models can be used for gesture recognition, but require that the signals be adapted to the models: for example, the requirement of fixed-length inputs for support-vector machine classification. Hidden Markov models are probabilistic graphical models that were designed to operate on time-series data and are sequence-length invariant. However, in traditional hidden Markov modelling, models are trained via the maximum likelihood criterion and cannot perform as well as a discriminative classifier. This study employs minimum classification error training to produce a discriminative HMM classifier. The classifier is then applied to an isolated gesture recognition problem, using skeletal features. The Montalbano gesture dataset is used to evaluate the system on the skeletal modality alone. This positions the problem as one of fine-grained dynamic gesture recognition, as the hand pose information contained in other modalities is ignored. The method achieves a highest accuracy of 87.3%, comparable to other results reported on the Montalbano dataset using discriminative non-temporal methods. The research shows that discriminative hidden Markov models can be used successfully as a solution to the problem of isolated gesture recognition.
6

Majewsky, Stefan. "Training of Hidden Markov models as an instance of the expectation maximization algorithm". Bachelor's thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-226903.

Full text
Abstract:
In Natural Language Processing (NLP), speech and text are parsed and generated with language models and parser models, and translated with translation models. Each model contains a set of numerical parameters which are found by applying a suitable training algorithm to a set of training data. Many such training algorithms are instances of the Expectation-Maximization (EM) algorithm. In [BSV15], a generic EM algorithm for NLP is described. This work presents a particular speech model, the hidden Markov model, and its standard training algorithm, the Baum-Welch algorithm. It is then shown that the Baum-Welch algorithm is an instance of the generic EM algorithm introduced by [BSV15], from which it follows that all statements about the generic EM algorithm also apply to the Baum-Welch algorithm, especially its correctness and convergence properties.
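The relationship spelled out in this abstract can be stated compactly. For a single training sequence O = o_1 … o_T, each Baum-Welch iteration maximizes the EM auxiliary function (standard textbook notation, not specific to [BSV15]):

```latex
Q(\theta, \theta^{\text{old}})
  = \sum_{i=1}^{N} \gamma_1(i)\,\log \pi_i
  + \sum_{t=1}^{T-1}\sum_{i=1}^{N}\sum_{j=1}^{N} \xi_t(i,j)\,\log a_{ij}
  + \sum_{t=1}^{T}\sum_{i=1}^{N} \gamma_t(i)\,\log b_i(o_t)
```

where \(\gamma_t(i) = P(q_t = i \mid O, \theta^{\text{old}})\) and \(\xi_t(i,j) = P(q_t = i, q_{t+1} = j \mid O, \theta^{\text{old}})\) are computed by the forward-backward pass (E-step); maximizing each term separately under the row-stochastic constraints yields the familiar Baum-Welch re-estimation formulas (M-step).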
7

Fang, Eric. "Investigation of training algorithms for hidden Markov models applied to automatic speech recognition". Connect to this title online, 2009. http://etd.lib.clemson.edu/documents/1249065572/.

Full text
8

Lam, Tin Yin. "HMM converter a tool box for hidden Markov models with two novel, memory efficient parameter training algorithms". Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/5786.

Full text
Abstract:
Hidden Markov models (HMMs) are powerful statistical tools for biological sequence analysis. Many recently developed bioinformatics applications employ variants of HMMs to analyze diverse types of biological data. It is typically fairly easy to design the states and the topological structure of an HMM. However, it can be difficult to estimate parameter values which yield a good prediction performance. As many HMM-based applications employ similar algorithms for generating predictions, it is also time-consuming and error-prone to have to re-implement these algorithms whenever a new HMM-based application is to be designed. This thesis addresses these challenges by introducing a tool box, called HMMConverter, which only requires an XML input file to define an HMM and to use it for sequence decoding and parameter training. The package not only allows for rapid prototyping of HMM-based applications, but also incorporates several algorithms for sequence decoding and parameter training, including two new, linear memory algorithms for parameter training. Using this software package, even users without programming knowledge can quickly set up sophisticated HMMs and pair-HMMs and use them with efficient algorithms for parameter training and sequence analyses. We use HMMConverter to construct a new comparative gene prediction program, called Annotaid, which can predict pairs of orthologous genes by integrating prior information about each input sequence probabilistically into the gene prediction process and into parameter training. Annotaid can thus be readily adapted to predict orthologous gene pairs in newly sequenced genomes.
9

Varga, Tamás. "Off-line cursive handwriting recognition using synthetic training data". Berlin Aka, 2006. http://deposit.d-nb.de/cgi-bin/dokserv?id=2838183&prov=M&dok_var=1&dok_ext=htm.

Full text
10

Do, Trinh-Minh-Tri. "Regularized bundle methods for large-scale learning problems with an application to large margin training of hidden Markov models". Paris 6, 2010. http://www.theses.fr/2010PA066163.

Full text
Abstract:
Machine learning is often cast as an optimization problem in which one searches for the best model within a parameterized family by optimizing a real-valued function of the parameters. The best model is defined as the one whose parameters minimize this objective function, called the criterion. The rapid progress of machine learning in recent years has gone hand in hand with the development of optimization methods that are efficient and adapted to the particularities of the functionals to be minimized, notably to allow the processing of large datasets or to carry out complex learning tasks. In this thesis, we work on new, generic, and efficient optimization techniques to facilitate the training of complex models for large-scale applications and for arbitrary criteria. In particular, we focus on unconstrained optimization problems in which the objective function may be non-convex and not everywhere differentiable. Being able to handle this kind of situation makes it possible to tackle real-world problems with complex models and high-performing learning criteria. The contributions of this thesis are presented in two parts. The first part presents our work on unconstrained optimization. The second part describes the systems we developed for discriminative training of graphical models for signal and sequence labelling, relying where necessary on the algorithms described in the first part.
More sources

Books on the topic "Hidden markov model training"

1

Penny, D. Modeling the covarion model of molecular evolution by hidden Markov chains. Palmerston North, N.Z: Massey University College of Sciences, 1998.

Search for full text
2

Hidden messages in culture-centered counseling: A triad training model. Thousand Oaks, CA: Sage Publications, 2000.

Search for full text
3

Sun, Shuying. Haplotype inference using a hidden Markov model with efficient Markov chain sampling. 2007.

Search for full text
4

Voutilainen, Atro. Part-of-Speech Tagging. Edited by Ruslan Mitkov. Oxford University Press, 2012. http://dx.doi.org/10.1093/oxfordhb/9780199276349.013.0011.

Full text
Abstract:
This article outlines recently used methods for designing part-of-speech taggers: computer programs for assigning contextually appropriate grammatical descriptors to words in texts. It begins with a description of the general architecture and task setting. It gives an overview of the history of tagging and describes the central approaches to tagging. These approaches are: taggers based on handwritten local rules, taggers based on n-grams automatically derived from text corpora, taggers based on hidden Markov models, taggers using automatically generated symbolic language models derived using methods from machine learning, taggers based on handwritten global rules, and hybrid taggers, which combine the advantages of handwritten and automatically generated taggers. This article focuses on handwritten tagging rules. Well-tagged training corpora are a valuable resource for testing and improving a language model. The text corpus reminds the grammarian of any oversights while designing a rule.
5

Miao, Qiang. Application of wavelets and hidden Markov model in condition-based maintenance. 2005.

Search for full text
6

Pedersen, Paul B. (Bodholdt). Hidden Messages in Culture-Centered Counseling: A Triad Training Model. Sage Publications, Inc, 1999.

Search for full text
7

Pedersen, Paul B. (Bodholdt). Hidden Messages in Culture-Centered Counseling: A Triad Training Model. Sage Publications, Inc, 1999.

Search for full text
8

Howes, Andrew, Xiuli Chen, Aditya Acharya, and Richard L. Lewis. Interaction as an Emergent Property of a Partially Observable Markov Decision Process. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198799603.003.0011.

Full text
Abstract:
In this chapter we explore the potential advantages of modeling the interaction between a human and a computer as a consequence of a Partially Observable Markov Decision Process (POMDP) that models human cognition. POMDPs can be used to model human perceptual mechanisms, such as human vision, as partial (uncertain) observers of a hidden state. In general, POMDPs permit a rigorous definition of interaction as the outcome of a reward-maximizing stochastic sequential decision process. They have been shown to explain interaction between a human and an environment in a range of scenarios, including visual search, interactive search and sense-making. The chapter uses these scenarios to illustrate the explanatory power of POMDPs in HCI. It also shows that POMDPs embrace the embodied, ecological and adaptive nature of human interaction.
9

Samuelsson, Christer. Statistical Methods. Edited by Ruslan Mitkov. Oxford University Press, 2012. http://dx.doi.org/10.1093/oxfordhb/9780199276349.013.0019.

Full text
Abstract:
Statistical methods now belong to mainstream natural language processing. They have been successfully applied to virtually all tasks within language processing and neighbouring fields, including part-of-speech tagging, syntactic parsing, semantic interpretation, lexical acquisition, machine translation, information retrieval, information extraction, and language learning. This article reviews mathematical statistics and applies it to language modelling problems, leading up to the hidden Markov model and the maximum entropy model. The real strength of maximum-entropy modelling lies in combining evidence from several rules, each one of which alone might not be conclusive, but which taken together dramatically affect the probability. Maximum-entropy modelling allows combining heterogeneous information sources to produce a uniform probabilistic model where each piece of information is formulated as a feature. The key ideas of mathematical statistics are simple and intuitive, but tend to be buried in a sea of mathematical technicalities. Finally, the article provides mathematical detail related to the topic of discussion.

Book chapters on the topic "Hidden markov model training"

1

Ueda, Nobuhisa, and Taisuke Sato. "Simplified Training Algorithms for Hierarchical Hidden Markov Models". In Discovery Science, 401–15. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-45650-3_34.

Full text
2

Keller, Bill, and Rudi Lutz. "Improved Learning for Hidden Markov Models Using Penalized Training". In Artificial Intelligence and Cognitive Science, 53–60. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-45750-x_7.

Full text
3

Singh, Upendra Kumar, and Vineet Padmanabhan. "Training by ART-2 and Classification of Ballistic Missiles Using Hidden Markov Model". In Lecture Notes in Computer Science, 108–14. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-45062-4_14.

Full text
4

Maseri, Marlyn, and Mazlina Mamat. "Malay Language Speech Recognition for Preschool Children using Hidden Markov Model (HMM) System Training". In Lecture Notes in Electrical Engineering, 205–14. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-2622-6_21.

Full text
5

Palazón-González, Vicente, Andrés Marzal, and Juan M. Vilar. "EM Training of Hidden Markov Models for Shape Recognition Using Cyclic Strings". In Neural Information Processing, 317–24. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-42051-1_40.

Full text
6

Aupetit, Sébastien, Nicolas Monmarché, Mohamed Slimane, and Pierre Liardet. "An Exponential Representation in the API Algorithm for Hidden Markov Models Training". In Lecture Notes in Computer Science, 61–72. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11740698_6.

Full text
7

Li, Chengyuan, Haixia Long, Yanrui Ding, Jun Sun, and Wenbo Xu. "Multiple Sequence Alignment by Improved Hidden Markov Model Training and Quantum-Behaved Particle Swarm Optimization". In Lecture Notes in Computer Science, 358–66. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15615-1_43.

Full text
8

Bagos, Pantelis G., Theodore D. Liakopoulos, and Stavros J. Hamodrakas. "Faster Gradient Descent Training of Hidden Markov Models, Using Individual Learning Rate Adaptation". In Grammatical Inference: Algorithms and Applications, 40–52. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-30195-0_5.

Full text
9

Awad, Mariette, and Rahul Khanna. "Hidden Markov Model". In Efficient Learning Machines, 81–104. Berkeley, CA: Apress, 2015. http://dx.doi.org/10.1007/978-1-4302-5990-9_5.

Full text
10

Wu, Jun. "Hidden Markov model". In The Beauty of Mathematics in Computer Science, 43–51. Boca Raton, FL: Chapman and Hall/CRC, 2018. http://dx.doi.org/10.1201/9781315169491-5.

Full text

Conference papers on the topic "Hidden markov model training"

1

Aiman, Faquih, Zia Saquib, and Shikha Nema. "Hidden Markov Model system training using HTK". In 2016 International Conference on Advanced Communication Control and Computing Technologies (ICACCCT). IEEE, 2016. http://dx.doi.org/10.1109/icaccct.2016.7831750.

Full text
2

Ozkan, Huseyin, Arda Akman, and Suleyman S. Kozat. "Hidden Markov Model training with side information". In 2012 20th Signal Processing and Communications Applications Conference (SIU). IEEE, 2012. http://dx.doi.org/10.1109/siu.2012.6204441.

Full text
3

Xue, Liping, Junxun Yin, Zhen Ji, and Lai Jiang. "A Particle Swarm Optimization for Hidden Markov Model Training". In 2006 8th International Conference on Signal Processing. IEEE, 2006. http://dx.doi.org/10.1109/icosp.2006.345542.

Full text
4

Wang, Yan. "Training Generalized Hidden Markov Model with Interval Probability Parameters". In Second International Conference on Vulnerability and Risk Analysis and Management (ICVRAM) and the Sixth International Symposium on Uncertainty, Modeling, and Analysis (ISUMA). Reston, VA: American Society of Civil Engineers, 2014. http://dx.doi.org/10.1061/9780784413609.089.

Full text
5

Popov, Alexander A., Tatyana A. Gultyaeva, and Vadim E. Uvarov. "Training hidden Markov models on incomplete sequences". In 2016 13th International Scientific-Technical Conference on Actual Problems of Electronics Instrument Engineering (APEIE). IEEE, 2016. http://dx.doi.org/10.1109/apeie.2016.7806478.

Full text
6

Popov, Alexander A., Tatyana A. Gultyaeva, and Vadim E. Uvarov. "Training hidden Markov models on incomplete sequences". In 2016 13th International Scientific-Technical Conference on Actual Problems of Electronics Instrument Engineering (APEIE). IEEE, 2016. http://dx.doi.org/10.1109/apeie.2016.7806941.

Full text
7

Collins, Michael. "Discriminative training methods for hidden Markov models". In the ACL-02 conference. Morristown, NJ, USA: Association for Computational Linguistics, 2002. http://dx.doi.org/10.3115/1118693.1118694.

Full text
8

Shenoy, Renuka, Min-Chi Shih, and Kenneth Rose. "Hidden Markov model-based multi-modal image fusion with efficient training". In 2014 IEEE International Conference on Image Processing (ICIP). IEEE, 2014. http://dx.doi.org/10.1109/icip.2014.7025727.

Full text
9

Cho, Taemin, Kibeom Kim, and Juan P. Bello. "A Minimum Frame Error Criterion for Hidden Markov Model Training". In 2012 Eleventh International Conference on Machine Learning and Applications (ICMLA). IEEE, 2012. http://dx.doi.org/10.1109/icmla.2012.147.

Full text
10

Foote, J. T., M. M. Hochberg, P. M. Athanas, A. T. Smith, M. E. Wazlowski, and H. F. Silverman. "Distributed hidden Markov model training on loosely-coupled multiprocessor networks". In [Proceedings] ICASSP-92: 1992 IEEE International Conference on Acoustics, Speech, and Signal Processing. IEEE, 1992. http://dx.doi.org/10.1109/icassp.1992.226384.

Full text

Reports by organizations on the topic "Hidden markov model training"

1

Yang, Jie, and Yangsheng Xu. Hidden Markov Model for Gesture Recognition. Fort Belvoir, VA: Defense Technical Information Center, May 1994. http://dx.doi.org/10.21236/ada282845.

Full text
2

Chan, A. D., K. Englehart, B. Hudgins, and D. F. Lovely. Hidden Markov Model Classification of Myoelectric Signals in Speech. Fort Belvoir, VA: Defense Technical Information Center, October 2001. http://dx.doi.org/10.21236/ada410037.

Full text
3

Baggenstoss, Paul M. A Multi-Resolution Hidden Markov Model Using Class-Specific Features. Fort Belvoir, VA: Defense Technical Information Center, January 2008. http://dx.doi.org/10.21236/ada494596.

Full text