Academic literature on the topic 'Maximum entropy Markov model'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference papers, reports, and other scholarly sources on the topic 'Maximum entropy Markov model.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Maximum entropy Markov model"

1

Kazama, Jun'ichi, Yusuke Miyao, and Jun'ichi Tsujii. "A Maximum Entropy Tagging Model with Unsupervised Hidden Markov Models." Journal of Natural Language Processing 11, no. 4 (2004): 3–23. http://dx.doi.org/10.5715/jnlp.11.4_3.

2

Cofré, Rodrigo, Cesar Maldonado, and Fernando Rosas. "Large Deviations Properties of Maximum Entropy Markov Chains from Spike Trains." Entropy 20, no. 8 (2018): 573. http://dx.doi.org/10.3390/e20080573.

Abstract:
We consider the maximum entropy Markov chain inference approach to characterize the collective statistics of neuronal spike trains, focusing on the statistical properties of the inferred model. To find the maximum entropy Markov chain, we use the thermodynamic formalism, which provides insightful connections with statistical physics and thermodynamics from which large deviations properties arise naturally. We provide an accessible introduction to the maximum entropy Markov chain inference problem and large deviations theory to the community of computational neuroscience, avoiding some technicalities while preserving the core ideas and intuitions. We review large deviations techniques useful in spike train statistics to describe properties of accuracy and convergence in terms of sampling size. We use these results to study the statistical fluctuation of correlations, distinguishability, and irreversibility of maximum entropy Markov chains. We illustrate these applications using simple examples where the large deviation rate function is explicitly obtained for maximum entropy models of relevance in this field.
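For orientation, here is a minimal sketch of the transfer-operator construction that the thermodynamic-formalism approach relies on; it is a generic textbook form, not a formula quoted from the paper. Given constraint functions f_k defined on transitions and Lagrange multipliers beta_k, the maximum entropy Markov chain is read off the Perron eigendata of a transfer matrix L:

    L(x,y) = \exp\Big(\sum_k \beta_k f_k(x,y)\Big), \qquad L R = \rho R,
    \qquad P(y \mid x) = \frac{L(x,y)\, R(y)}{\rho\, R(x)},
    \qquad h(P) = \log \rho - \sum_k \beta_k \langle f_k \rangle_P .

Here \rho is the leading eigenvalue of L and R its right eigenvector; the multipliers \beta_k are tuned so that the model expectations \langle f_k \rangle_P match the empirical averages estimated from the spike trains, and h(P) is the resulting (maximal) entropy rate.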
3

Zhao, Xiao-yu, Jin Zhang, Yuan-yuan Chen, et al. "Promoter recognition based on the maximum entropy hidden Markov model." Computers in Biology and Medicine 51 (August 2014): 73–81. http://dx.doi.org/10.1016/j.compbiomed.2014.04.003.

4

Alrashdi, Ibrahim, Muhammad Hameed Siddiqi, Yousef Alhwaiti, Madallah Alruwaili, and Mohammad Azad. "Maximum Entropy Markov Model for Human Activity Recognition Using Depth Camera." IEEE Access 9 (2021): 160635–45. http://dx.doi.org/10.1109/access.2021.3132559.

5

Manikandan, P., D. Ramyachitra, C. Muthu, and N. Sajithra. "Enrichment of Remote Homology Detection using Cascading Maximum Entropy Markov Model." International Journal of Current Research and Review 13, no. 19 (2021): 80–84. http://dx.doi.org/10.31782/ijcrr.2021.131906.

6

Siddiqi, Muhammad Hameed, Md Golam Rabiul Alam, Choong Seon Hong, Adil Mehmood Khan, and Hyunseung Choo. "A Novel Maximum Entropy Markov Model for Human Facial Expression Recognition." PLOS ONE 11, no. 9 (2016): e0162702. http://dx.doi.org/10.1371/journal.pone.0162702.

7

Jalal, Ahmad, Nida Khalid, and Kibum Kim. "Automatic Recognition of Human Interaction via Hybrid Descriptors and Maximum Entropy Markov Model Using Depth Sensors." Entropy 22, no. 8 (2020): 817. http://dx.doi.org/10.3390/e22080817.

Abstract:
Automatic identification of human interaction is a challenging task, especially in dynamic environments with cluttered backgrounds from video sequences. Advancements in computer vision sensor technologies provide powerful effects in human interaction recognition (HIR) during routine daily life. In this paper, we propose a novel features extraction method which incorporates robust entropy optimization and an efficient Maximum Entropy Markov Model (MEMM) for HIR via multiple vision sensors. The main objectives of the proposed methodology are: (1) to propose a hybrid of four novel features—i.e., spatio-temporal features, energy-based features, shape-based angular and geometric features—and a motion-orthogonal histogram of oriented gradient (MO-HOG); (2) to encode hybrid feature descriptors using a codebook, a Gaussian mixture model (GMM) and Fisher encoding; (3) to optimize the encoded features using a cross entropy optimization function; (4) to apply a MEMM classification algorithm to examine empirical expectations and highest entropy, which measure pattern variances to achieve superior HIR accuracy results. Our system is tested on three well-known datasets: the SBU Kinect interaction, UoL 3D social activity and UT-interaction datasets. Through extensive experimentation, the proposed features extraction algorithm, along with cross entropy optimization, has achieved average accuracy rates of 91.25% on SBU, 90.4% on UoL and 87.4% on UT-Interaction. The proposed HIR system will be applicable to a wide variety of man–machine interfaces, such as public-place surveillance, future medical applications, virtual reality, fitness exercises and 3D interactive gaming.
8

Khuong, Hung The, and Tra Thuy Thi Lai. "Studies on lithofacies sequences in the Hoanh Bo basin, Quang Ninh province by using Markov chain model and Entropy function." Journal of Mining and Earth Sciences 63, no. 1 (2022): 15–26. http://dx.doi.org/10.46326/jmes.2022.63(1).02.

Abstract:
The succession of lithofacies in the Dong Ho and Tieu Giao formations of the Hoanh Bo basin was statistically analyzed using the modified Markov chain model and the entropy function. Based on the field descriptions, petrographic investigation, and borehole logs, the lithofacies study was carried out to determine the sediment deposition system and the depositional environment. Seventeen sub-lithofacies organized within the succession are recognized in three lithofacies associations. The analysis of the Markov chain and the chi-square (χ²) test indicates that the deposition of the lithofacies is a non-Markovian process and represents cyclic deposition of asymmetric fining-upward type. To evaluate the randomness of the occurrence of lithofacies in a succession, entropy analysis was performed. Each state is associated with two types of entropy: one relevant to the Markov matrix expressing upward transitions (entropy after deposition) and the other relevant to the downward transition matrix (entropy before deposition). The energy regime calculated from the maximum randomness entropy analysis indicates that the changing depositional pattern resulted from a shift from rapid to steady flow. This results in a change in the depositional environment from alluvial-fluvial to lacustrine, specifically from conglomerate facies (A1) → sandstone facies (A2) → fine-grained, non-debris facies (A3).
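As background (a standard formulation of this kind of entropy analysis, not taken from the paper), the entropies after and before deposition for facies state i are usually computed from the upward and downward transition probability matrices p_{ij} and q_{ij}:

    E_i^{\mathrm{post}} = -\sum_{j} p_{ij} \log_2 p_{ij}, \qquad
    E_i^{\mathrm{pre}}  = -\sum_{j} q_{ij} \log_2 q_{ij},

and they are compared against the maximum-randomness value \log_2 (n-1) for n facies states with self-transitions excluded, which is what the maximum randomness entropy analysis mentioned in the abstract refers to.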
9

Cahyani, Denis Eka, and Winda Mustikaningtyas. "Indonesian part of speech tagging using maximum entropy Markov model on Indonesian manually tagged corpus." International Journal of Artificial Intelligence (IJ-AI) 11, no. 1 (2022): 336–44. https://doi.org/10.11591/ijai.v11.i1.pp336-344.

Abstract:
This research discusses the development of a part-of-speech (POS) tagging system to solve the problem of word ambiguity. The paper presents a new method, namely the maximum entropy Markov model (MEMM), to resolve word ambiguity on an Indonesian dataset. A manually labeled “Indonesian manually tagged corpus” was used as data. The corpus is processed using the entropy formula to obtain the weight of the word being searched for, which is then fed into the MEMM Bigram and MEMM Trigram algorithms, together with the previously obtained rules, to determine the POS tag with the highest probability. The results show that POS tagging using the MEMM method has advantages over previous methods evaluated on the same data, improving on previously reported performance. The resulting average accuracy is 83.04% for the MEMM Bigram algorithm and 86.66% for the MEMM Trigram algorithm, so the MEMM Trigram algorithm is better than the MEMM Bigram algorithm.
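To make the decoding step concrete, here is a minimal, self-contained sketch of Viterbi decoding over bigram-MEMM local distributions P(tag_t | tag_{t-1}, word_t). It is illustrative only, not the paper's implementation: the tag set, example words, and probability table are hypothetical.

    import math

    TAGS = ["NOUN", "VERB"]

    def local_prob(prev_tag, word, tag):
        """Hypothetical local model P(tag | prev_tag, word).
        A real MEMM would use a per-state maximum entropy classifier here."""
        table = {
            ("<s>", "saya"): {"NOUN": 0.9, "VERB": 0.1},
            ("NOUN", "makan"): {"NOUN": 0.2, "VERB": 0.8},
            ("VERB", "nasi"): {"NOUN": 0.85, "VERB": 0.15},
        }
        return table.get((prev_tag, word), {t: 1.0 / len(TAGS) for t in TAGS})[tag]

    def viterbi(words):
        """Return the most probable tag sequence under the local model."""
        best = {"<s>": (0.0, [])}  # per previous tag: (log-probability, tag path)
        for word in words:
            best = {
                tag: max(
                    (lp + math.log(local_prob(prev, word, tag)), path + [tag])
                    for prev, (lp, path) in best.items()
                )
                for tag in TAGS
            }
        return max(best.values())[1]

    print(viterbi(["saya", "makan", "nasi"]))  # -> ['NOUN', 'VERB', 'NOUN']

A trigram MEMM follows the same pattern, except that the Viterbi state is the pair of the two preceding tags rather than a single tag.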
10

Pattnaik, Sagarika, and Ajit Kumar Nayak. "A Modified Markov-Based Maximum-Entropy Model for POS Tagging of Odia Text." International Journal of Decision Support System Technology 14, no. 1 (2022): 1–24. http://dx.doi.org/10.4018/ijdsst.286690.

Abstract:
POS (parts-of-speech) tagging, a vital step in diverse natural language processing (NLP) tasks, has not drawn much attention in the case of Odia, a computationally under-resourced language. The proposed hybrid method provides a robust POS tagger for Odia. Given the rich morphology of the language and the unavailability of a sufficiently large annotated text corpus, a combination of machine learning and linguistic rules is adopted in building the tagger. The tagger is trained on a tagged text corpus from the tourism domain and obtains a perceptible improvement in results; appreciable performance is also observed on news-article texts from varied domains. Experiments on Odia show that the proposed algorithm outperforms existing methods such as rule-based, hidden Markov model (HMM), maximum entropy (ME) and conditional random field (CRF) taggers.

Dissertations / Theses on the topic "Maximum entropy Markov model"

1

Wang, Chao. "Exploiting non-redundant local patterns and probabilistic models for analyzing structured and semi-structured data." Columbus, Ohio: Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1199284713.

2

Bury, Thomas. "Collective behaviours in the stock market: a maximum entropy approach." Doctoral thesis, Université Libre de Bruxelles, 2014. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209341.

Abstract:
Scale invariance, collective behaviours and structural reorganization are crucial for portfolio management (portfolio composition, hedging, alternative definition of risk, etc.). This lack of any characteristic scale and such elaborated behaviours find their origin in the theory of complex systems. There are several mechanisms which generate scale invariance, but maximum entropy models are able to explain both scale invariance and collective behaviours. The study of the structure and collective modes of financial markets attracts more and more attention. It has been shown that some agent-based models are able to reproduce some stylized facts. Despite their partial success, there is still the problem of rule design. In this work, we used a statistical inverse approach to model the structure and co-movements in financial markets. Inverse models restrict the number of assumptions. We found that a pairwise maximum entropy model is consistent with the data and is able to describe the complex structure of financial systems. We considered the existence of a critical state which is linked to how the market processes information, how it responds to exogenous inputs and how its structure changes. The considered data sets did not reveal a persistent critical state but rather oscillations between order and disorder. In this framework, we also showed that the collective modes are mostly dominated by pairwise co-movements and that univariate models are not good candidates to model crashes. The analysis also suggests a genuine adaptive process, since both the maximum variance of the log-likelihood and the accuracy of the predictive scheme vary through time. This approach may provide some clue to crash precursors and may shed light on how a shock spreads in a financial network and whether it will lead to a crash. The natural continuation of the present work could be the study of such a mechanism.
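As a rough sketch of the model class referred to here (the generic form, not the thesis' exact specification), a pairwise maximum entropy model over binarized market states s_i in {-1, +1} is the Ising-like distribution

    P(s_1,\dots,s_N) = \frac{1}{Z} \exp\Big(\sum_i h_i s_i + \sum_{i<j} J_{ij}\, s_i s_j\Big),

where the fields h_i and couplings J_{ij} are fitted so that the model reproduces the observed single-asset means and pairwise correlations, and Z normalizes the distribution.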
3

Chan, Oscar. "Prosodic features for a maximum entropy language model." University of Western Australia. School of Electrical, Electronic and Computer Engineering, 2008. http://theses.library.uwa.edu.au/adt-WU2008.0244.

Abstract:
A statistical language model attempts to characterise the patterns present in a natural language as a probability distribution defined over word sequences. Typically, they are trained using word co-occurrence statistics from a large sample of text. In some language modelling applications, such as automatic speech recognition (ASR), the availability of acoustic data provides an additional source of knowledge. This contains, amongst other things, the melodic and rhythmic aspects of speech referred to as prosody. Although prosody has been found to be an important factor in human speech recognition, its use in ASR has been limited. The goal of this research is to investigate how prosodic information can be employed to improve the language modelling component of a continuous speech recognition system. Because prosodic features are largely suprasegmental, operating over units larger than the phonetic segment, the language model is an appropriate place to incorporate such information. The prosodic features and standard language model features are combined under the maximum entropy framework, which provides an elegant solution to modelling information obtained from multiple, differing knowledge sources. We derive features for the model based on perceptually transcribed Tones and Break Indices (ToBI) labels, and analyse their contribution to the word recognition task. While ToBI has a solid foundation in linguistic theory, the need for human transcribers conflicts with the statistical model's requirement for a large quantity of training data. We therefore also examine the applicability of features which can be automatically extracted from the speech signal. We develop representations of an utterance's prosodic context using fundamental frequency, energy and duration features, which can be directly incorporated into the model without the need for manual labelling. Dimensionality reduction techniques are also explored with the aim of reducing the computational costs associated with training a maximum entropy model. Experiments on a prosodically transcribed corpus show that small but statistically significant reductions to perplexity and word error rates can be obtained by using both manually transcribed and automatically extracted features.
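For reference, the generic form of such a conditional maximum entropy model (not a formula quoted from the thesis) combines word-history and prosodic knowledge sources as features f_i:

    P(w \mid h) = \frac{1}{Z(h)} \exp\Big(\sum_i \lambda_i f_i(h, w)\Big), \qquad
    Z(h) = \sum_{w'} \exp\Big(\sum_i \lambda_i f_i(h, w')\Big),

where h is the conditioning context (preceding words and, in this thesis, prosodic measurements) and the weights \lambda_i are trained so that the model's feature expectations match those observed in the training data.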
4

Camiola, Vito Dario. "Subbands model for semiconductors based on the Maximum Entropy Principle." Doctoral thesis, Università di Catania, 2013. http://hdl.handle.net/10761/1313.

Abstract:
In this thesis a double-gate MOSFET is simulated with an energy-transport subband model, and an energy-transport model is derived for a nanoscale MOSFET. Regarding the double-gate MOSFET, the model is formulated starting from the moment system derived from the Schroedinger-Poisson-Boltzmann equations. The system is closed on the basis of the maximum entropy principle and includes scattering of electrons with acoustic and non-polar optical phonons. The proposed expression of the entropy combines quantum effects and semiclassical transport by weighting the contribution of each subband with the square modulus of the envelope functions arising from the Schroedinger-Poisson system. The simulations show that the model is able to capture the relevant confining and transport features and assess the robustness of the numerical scheme. The model for the MOSFET takes into account the presence of both a 3D and a 2D electron gas, along with the quantization in the direction transversal to the oxide at the gate, which gives rise to a subband decomposition of the electron energy. Both intra- and inter-scattering between the 2D and the 3D electron gas are considered. In particular, a fictitious transition from the 3D to the 2D electrons and vice versa is introduced.
5

Sekhi, Ikram. "Développement d'un alphabet structural intégrant la flexibilité des structures protéiques." Thesis, Sorbonne Paris Cité, 2018. http://www.theses.fr/2018USPCC084/document.

Abstract:
The purpose of this PhD is to provide a Structural Alphabet (SA) for more accurate characterization of protein three-dimensional (3D) structures, as well as to integrate the increasing protein 3D structure information currently available in the Protein Data Bank (PDB). The SA also takes into consideration the logic behind the structural fragment sequence by using hidden Markov models (HMM). In this PhD, we describe a new structural alphabet, improving the existing HMM-SA27 structural alphabet, called SAFlex (Structural Alphabet Flexibility), in order to take into account the uncertainty of data (missing data in PDB files) and the redundancy of protein structures. The new SAFlex structural alphabet therefore offers a new, rigorous and robust encoding model. This encoding takes the uncertainty into account by providing three encoding options: the maximum a posteriori (MAP), the marginal posterior distribution (POST), and the effective number of letters at each given position (NEFF). SAFlex also builds a consensus encoding from different replicates (multiple chains, monomers and several homomers) of a single protein, and thus allows the detection of structural variability between chains. The methodological advances and the SAFlex alphabet itself are the main contributions of this PhD. We also present the new PDB parser (SAFlex-PDB) and demonstrate that our parser is of interest in both qualitative (detection of various errors) and quantitative terms (program optimization and parallelization) by comparing it with two other parsers well known in bioinformatics (Biopython and BioJava). The SAFlex structural alphabet is being made available to the scientific community through a website; the SAFlex web server represents the concrete contribution of this PhD, while the SAFlex-PDB parser is an important contribution to the proper functioning of the proposed website. SAFlex can be used in various ways on a protein tertiary structure in PDB format: it can encode the 3D structure and identify and predict missing data, and it is, to date, the only alphabet able to encode and predict missing data in a 3D protein structure. Finally, these improvements are promising for exploring increasing protein redundancy data and for obtaining a useful quantification of protein flexibility.
6

Eruygur, Hakki Ozan. "Impacts of Policy Changes on Turkish Agriculture: An Optimization Model with Maximum Entropy." PhD thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/12607740/index.pdf.

Abstract:
Turkey has been moving towards integration with the EU since 1963. Membership will involve full liberalization of trade in agricultural products with the EU. The impact of liberalization depends on the path of agricultural policies in Turkey and the EU. On the other hand, agricultural protection continues to be the most controversial issue in the global trade negotiations of the World Trade Organization (WTO). To evaluate the impacts of policy scenarios, an economic modeling approach based on non-linear mathematical programming is appropriate. This thesis analyzes the impacts of economic integration with the EU and the potential effects of the application of a new WTO agreement in 2015 on Turkish agriculture using an agricultural sector model. The basic approach is the Maximum Entropy based Positive Mathematical Programming of Heckelei and Britz (1999). The model is based on a static optimization algorithm. Following an economic integration with the EU, the net export of crops declines and cannot compensate for the boom in net imports of livestock products. The overall welfare effect is small. Consumers benefit from declining prices. Common Agricultural Policy (CAP) supports are decisive for the welfare of producers. The WTO simulation shows that a 15 percent reduction in Turkey's binding WTO tariff commitments will increase net meat imports by USD 250 million.
7

Piggott, Stephen. "Multiple model control and maximum entropy control of flexible structures, implementation and evaluation." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape8/PQDD_0020/MQ58721.pdf.

8

Hogg, David W. (David Wardell). "The principle of maximum entropy production in a simple model of a convection cell." Thesis, Massachusetts Institute of Technology, 1992. http://hdl.handle.net/1721.1/26841.

9

Selvaraj, Bellarmin N. "A Reasoning Mechanism in the Probabilistic Data Model Based on the Maximum Entropy Formalism." NSUWorks, 1999. http://nsuworks.nova.edu/gscis_etd/828.

Abstract:
A desirable feature of a database system is its ability to reason with probabilistic information. This dissertation proposes a model for the process of reasoning with probabilistic information in databases. A probabilistic data model has been chosen as the framework for this study and the information theoretical aspects of the Maximum Entropy Formalism as the inference engine. This formalism, although semantically interesting, offers major complexity problems. Probabilistic data models generally assume some knowledge of the uncertainty space, and the Maximum Entropy Formalism finds the least commitment probability distribution within the uncertainty space. This dissertation is an investigation of how successfully the entropy principle could be applied to probabilistic data. The Boolean logic and weighted queries when applied to probabilistic databases have certain pros and cons. A query logic based on the combined advantages of both the Boolean logic and weighted queries would be a desirable alternative. The proposed model based on the Maximum Entropy Formalism meets the foregoing desiderata of making the query language more expressive. Probabilistic data models generally deal with tuple-level probabilities whereas the proposed model has the ability to handle attribute-level probabilities. This model also has the ability to guess the unknown joint probability distributions. Three techniques to compute the probability distributions were studied in this dissertation: (1) a brute-force, iterative algorithm, (2) a heuristic algorithm, and (3) a simulation approach. The performance characteristics of these algorithms were examined and conclusions were drawn about the potentials and limitations of the proposed model in probabilistic database applications. Traditionally, the probabilistic solution proposed by the Maximum Entropy Formalism is arrived at by solving nonlinear simultaneous equations for the aggregate factors of the nonlinear terms. The proposed heuristic approach and simulation technique were shown to have the highly desirable property of tractability. Further research is needed to improve the accuracy of the heuristic and make it more attractive. Although the proposed model and algorithms are applicable to tables with a few probabilistic attributes, say two or three, the complexity of extending the model to a large number of probabilistic attributes still remains unsolved as it falls in the realm of NP-hard problems.
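For context, the Maximum Entropy Formalism referred to above is the textbook constrained-optimization problem (stated generically here, not in the dissertation's notation): among all distributions consistent with the known constraints, choose the least committed one,

    \max_{p} \; H(p) = -\sum_i p_i \log p_i
    \quad \text{subject to} \quad \sum_i p_i = 1, \quad \sum_i p_i\, a_{ki} = b_k ,

whose solution has the exponential form p_i \propto \exp\big(\sum_k \lambda_k a_{ki}\big); finding the multipliers \lambda_k generally requires solving the nonlinear systems whose complexity the abstract discusses.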
10

Pesheva, Nina Christova. "A mean-field method for driven diffusive systems based on maximum entropy principle." Diss., Virginia Polytechnic Institute and State University, 1989. http://hdl.handle.net/10919/54398.

Abstract:
Here, we propose a method for generating a hierarchy of mean-field approximations to study the properties of the driven diffusive Ising model at nonequilibrium steady state. In addition, the present study offers a demonstration of the practical application of information theoretic methods to a simple interacting nonequilibrium system. The application of the maximum entropy principle to the system, which is in contact with a heat reservoir, leads to a minimization principle for the generalized Helmholtz free energy. At every level of approximation the latter is expressed in terms of the corresponding mean-field variables. These play the role of variational parameters. The rate equations for the mean-field variables, which incorporate the dynamics of the system, serve as constraints in the minimization procedure. The method is applicable at high temperatures as well as in the low-temperature phase coexistence regime, and also has the potential for dealing with first-order phase transitions. At low temperatures the free energy is nonconvex and we use a Maxwell construction to find the relevant information for the system. To test the method we carry out numerical calculations at the pair level of approximation for the 2-dimensional driven diffusive Ising model on a square lattice with attractive interactions. The results reproduce quite well all the basic properties of the system as reported from Monte Carlo simulations.

Books on the topic "Maximum entropy Markov model"

1

Samuelsson, Christer. Statistical Methods. Edited by Ruslan Mitkov. Oxford University Press, 2012. http://dx.doi.org/10.1093/oxfordhb/9780199276349.013.0019.

Abstract:
Statistical methods now belong to mainstream natural language processing. They have been successfully applied to virtually all tasks within language processing and neighbouring fields, including part-of-speech tagging, syntactic parsing, semantic interpretation, lexical acquisition, machine translation, information retrieval, information extraction, and language learning. This article reviews mathematical statistics and applies it to language modelling problems, leading up to the hidden Markov model and the maximum entropy model. The real strength of maximum-entropy modelling lies in combining evidence from several rules, each one of which alone might not be conclusive, but which taken together dramatically affect the probability. Maximum-entropy modelling allows combining heterogeneous information sources to produce a uniform probabilistic model where each piece of information is formulated as a feature. The key ideas of mathematical statistics are simple and intuitive, but tend to be buried in a sea of mathematical technicalities. Finally, the article provides mathematical detail related to the topic of discussion.
2

Piggott, Stephen. Multiple model control and maximum entropy control of flexible structures: Implementation and evaluation. 1999.

3

Stochastic model of the NASA/MSFC ground facility for large space structures with uncertain parameters: Part II, the maximum entropy approach. Dept. of Mathematics, University of Alabama, 1989.

4

National Aeronautics and Space Administration (NASA) Staff. Stochastic Model of the NASA/MSFC Ground Facility for Large Space Structures with Uncertain Parameters: The Maximum Entropy Approach, Part 2. Independently Published, 2018.

5

Cheng, Russell. Finite Mixture Models. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198505044.003.0017.

Abstract:
Fitting a finite mixture model when the number of components, k, is unknown can be carried out using the maximum likelihood (ML) method though it is non-standard. Two well-known Bayesian Markov chain Monte Carlo (MCMC) methods are reviewed and compared with ML: the reversible jump method and one using an approximating Dirichlet process. Another Bayesian method, to be called MAPIS, is examined that first obtains point estimates for the component parameters by the maximum a posteriori method for different k and then estimates posterior distributions, including that for k, using importance sampling. MAPIS is compared with ML and the MCMC methods. The MCMC methods produce multimodal posterior parameter distributions in overfitted models. This results in the posterior distribution of k being biased towards high k. It is shown that MAPIS does not suffer from this problem. A simple numerical example is discussed.
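For reference, the generic definition (not quoted from the chapter): a finite mixture model with k components has density

    f(x \mid w, \theta, k) = \sum_{j=1}^{k} w_j\, g(x; \theta_j), \qquad w_j \ge 0, \quad \sum_{j=1}^{k} w_j = 1,

so choosing k together with the weights w_j and the component parameters \theta_j is the estimation problem that ML, the two MCMC methods, and MAPIS address in different ways.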
6

Chen, Min, J. Michael Dunn, Amos Golan, and Aman Ullah, eds. Advances in Info-Metrics. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780190636685.001.0001.

Abstract:
Info-metrics is a framework for modeling, reasoning, and drawing inferences under conditions of noisy and insufficient information. It is an interdisciplinary framework situated at the intersection of information theory, statistical inference, and decision-making under uncertainty. In a recent book on the Foundations of Info-Metrics, Golan (OUP, 2018) provides the theoretical underpinning of info-metrics and the necessary tools and building blocks for using that framework. This volume complements Golan’s book and expands on the series of studies on the classical maximum entropy and Bayesian methods published in the different proceedings started with the seminal collection of Levine and Tribus (1979) and continuing annually. The objective of this volume is to expand the study of info-metrics, and information processing, across the sciences and to further explore the basis of information-theoretic inference and its mathematical and philosophical foundations. This volume is inherently interdisciplinary and applications oriented. It contains some of the recent developments in the field, as well as many new cross-disciplinary case studies and examples. The emphasis here is on the interrelationship between information and inference where we view the word ‘inference’ in its most general meaning – capturing all types of problem solving. That includes model building, theory creation, estimation, prediction, and decision making. The volume contains nineteen chapters in seven parts. Although chapters in each part are related, each chapter is self-contained; it provides the necessary tools for using the info-metrics framework for solving the problem confronted in that chapter. This volume is designed to be accessible for researchers, graduate students, and practitioners across the disciplines, requiring only some basic quantitative skills. The multidisciplinary nature and applications provide a hands-on experience for the reader.

Book chapters on the topic "Maximum entropy Markov model"

1

Attardi, Giuseppe, Luca Baronti, Stefano Dei Rossi, and Maria Simi. "SuperSense Tagging with a Maximum Entropy Markov Model." In Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-35828-9_20.

2

Brette, Stéphane, Jérôme Idier, and Ali Mohammad-Djafari. "Scale Invariant Markov Models for Bayesian Inversion of Linear Inverse Problems." In Maximum Entropy and Bayesian Methods. Springer Netherlands, 1996. http://dx.doi.org/10.1007/978-94-009-0107-0_22.

3

Mengel, Susan, and Yaoquin Jing. "Extracting Structured Data from Web Pages with Maximum Entropy Segmental Markov Model." In Web Information Systems Engineering - WISE 2009. Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04409-0_25.

4

Li, Rong, Li-ying Liu, He-fang Fu, and Jia-heng Zheng. "Application Study of Hidden Markov Model and Maximum Entropy in Text Information Extraction." In Artificial Intelligence and Computational Intelligence. Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-05253-8_44.

5

Eguchi, Shinto, and Osamu Komori. "Maximum Entropy Model." In Minimum Divergence Methods in Statistical Machine Learning. Springer Japan, 2022. http://dx.doi.org/10.1007/978-4-431-56922-0_3.

6

Bolthausen, E. "Maximum Entropy Principles for Markov Processes." In Stochastic Processes and their Applications. Springer Netherlands, 1990. http://dx.doi.org/10.1007/978-94-009-2117-7_4.

7

Kato, Z., J. Zerubia, and M. Berthod. "Bayesian Image Classification Using Markov Random Fields." In Maximum Entropy and Bayesian Methods. Springer Netherlands, 1993. http://dx.doi.org/10.1007/978-94-017-2217-9_45.

8

Pérez, Patrick, Fabrice Heitz, and Patrick Bouthémy. "Global Bayesian Estimation, Constrained Multiscale Markov Random Fields and the Analysis of Visual Motion." In Maximum Entropy and Bayesian Methods. Springer Netherlands, 1993. http://dx.doi.org/10.1007/978-94-017-2217-9_46.

9

Gee, J. C., and L. Le Briquer. "An Empirical Model of Brain Shape." In Maximum Entropy and Bayesian Methods. Springer Netherlands, 1998. http://dx.doi.org/10.1007/978-94-011-5028-6_16.

10

Li, Hang. "Logistic Regression and Maximum Entropy Model." In Machine Learning Methods. Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-3917-6_6.


Conference papers on the topic "Maximum entropy Markov model"

1

Sun, Guang-Lu, Yi Guan, Xiao-Long Wang, and Jian Zhao. "A maximum entropy Markov model for chunking." In Proceedings of 2005 International Conference on Machine Learning and Cybernetics. IEEE, 2005. http://dx.doi.org/10.1109/icmlc.2005.1527594.

2

Snoussi, Hichem. "Separation of mixed hidden Markov model sources." In Bayesian Inference and Maximum Entropy Methods in Science and Engineering. AIP, 2002. http://dx.doi.org/10.1063/1.1477040.

3

Hosseini, Shahram, Rima Guidara, Yannick Deville, and Christian Jutten. "Maximum likelihood separation of spatially autocorrelated images using a Markov model." In Bayesian Inference and Maximum Entropy Methods In Science and Engineering. AIP, 2006. http://dx.doi.org/10.1063/1.2423266.

4

Alamino, Roberto, and Nestor Caticha. "Online Learning in Discrete Hidden Markov Models." In Bayesian Inference and Maximum Entropy Methods In Science and Engineering. AIP, 2006. http://dx.doi.org/10.1063/1.2423274.

5

Bali, Nadia. "Mean Field Approximation for BSS of images with a compound hierarchical Gauss-Markov-Potts model." In Bayesian Inference and Maximum Entropy Methods in Science and Engineering: 25th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering. AIP, 2005. http://dx.doi.org/10.1063/1.2149802.

6

Chama, Z. "Image Recovery from the Magnitude of the Fourier Transform using a Gaussian Mixture with Hidden Potts-Markov Model." In Bayesian Inference and Maximum Entropy Methods in Science and Engineering: 25th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering. AIP, 2005. http://dx.doi.org/10.1063/1.2149801.

7

Jothilakshmi, R., N. Shanthi, and R. Babisaraswathi. "An approach for semantic query expansion based on maximum entropy-hidden Markov model." In 2013 Fourth International Conference on Computing, Communications and Networking Technologies (ICCCNT). IEEE, 2013. http://dx.doi.org/10.1109/icccnt.2013.6726755.

8

Zhao, Ziping, Ti Zhao, and Yaoting Zhu. "A Maximum Entropy Markov Model for Prediction of Prosodic Phrase Boundaries in Chinese TTS." In 2007 IEEE International Conference on Granular Computing (GRC 2007). IEEE, 2007. http://dx.doi.org/10.1109/grc.2007.4403149.

9

Zhao, Ziping, Tingjian Zhao, and Yaoting Zhu. "A Maximum Entropy Markov Model for Prediction of Prosodic Phrase Boundaries in Chinese TTS." In 2007 IEEE International Conference on Granular Computing (GRC 2007). IEEE, 2007. http://dx.doi.org/10.1109/grc.2007.66.

10

Mazet, V., D. Brie, and J. Idier. "Decomposition of a Chemical Spectrum using a Marked Point Process and a Constant Dimension Model." In Bayesian Inference and Maximum Entropy Methods In Science and Engineering. AIP, 2006. http://dx.doi.org/10.1063/1.2423286.


Reports on the topic "Maximum entropy Markov model"

1

Baader, Franz, and Anton Claußnitzer. Maximum Entropy Reasoning via Model Counting in (Description) Logics that Count: Extended Version. Technische Universität Dresden, 2025. https://doi.org/10.25368/2025.015.

Abstract:
In previous work it was shown that the logic ALC^ME, which extends the description logic (DL) ALC with probabilistic conditionals, has domain-lifted inference. Here, we extend this result from the base logic ALC to two logics that can count: the two-variable fragment C2 of first-order logic (FOL) with counting quantifiers, and the DL ALCSCC, which is not a fragment of FOL. As an auxiliary result, we prove that model counting in ALCSCC can be realized in a domain-liftable way.