
Journal articles on the topic 'Maximum entropy markov model'


Consult the top 50 journal articles for your research on the topic 'Maximum entropy markov model.'


1

Kazama, Jun'ichi, Yusuke Miyao, and Jun'ichi Tsujii. "A Maximum Entropy Tagging Model with Unsupervised Hidden Markov Models." Journal of Natural Language Processing 11, no. 4 (2004): 3–23. http://dx.doi.org/10.5715/jnlp.11.4_3.

2

Cofré, Rodrigo, Cesar Maldonado, and Fernando Rosas. "Large Deviations Properties of Maximum Entropy Markov Chains from Spike Trains." Entropy 20, no. 8 (2018): 573. http://dx.doi.org/10.3390/e20080573.

Abstract:
We consider the maximum entropy Markov chain inference approach to characterize the collective statistics of neuronal spike trains, focusing on the statistical properties of the inferred model. To find the maximum entropy Markov chain, we use the thermodynamic formalism, which provides insightful connections with statistical physics and thermodynamics from which large deviations properties arise naturally. We provide an accessible introduction to the maximum entropy Markov chain inference problem and large deviations theory to the community of computational neuroscience, avoiding some technicalities while preserving the core ideas and intuitions. We review large deviations techniques useful in spike train statistics to describe properties of accuracy and convergence in terms of sampling size. We use these results to study the statistical fluctuation of correlations, distinguishability, and irreversibility of maximum entropy Markov chains. We illustrate these applications using simple examples where the large deviation rate function is explicitly obtained for maximum entropy models of relevance in this field.
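As context for maximum entropy Markov chain models like the one above: once a transition matrix is fixed, its entropy rate (the quantity a maximum entropy Markov chain maximizes subject to the observed constraints) follows directly from the stationary distribution. A small illustrative sketch with invented numbers, not code from the paper:

```python
import math

# Toy 2-state Markov chain (row-stochastic transition matrix, made-up numbers).
# Entropy rate: h = -sum_i pi_i * sum_j P_ij * log2(P_ij), pi = stationary dist.
P = [[0.9, 0.1],
     [0.5, 0.5]]

def stationary(P, iters=1000):
    """Stationary distribution by power iteration: pi <- pi @ P until fixed."""
    pi = [1.0 / len(P)] * len(P)
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(len(P))) for j in range(len(P))]
    return pi

def entropy_rate(P):
    """Shannon entropy rate of the chain, in bits per step."""
    pi = stationary(P)
    return -sum(pi[i] * P[i][j] * math.log2(P[i][j])
                for i in range(len(P)) for j in range(len(P)) if P[i][j] > 0)

print(stationary(P), entropy_rate(P))
```

For this matrix the stationary distribution is (5/6, 1/6) and the entropy rate is about 0.5575 bits per step; a maximum entropy inference procedure searches over transition matrices consistent with measured statistics (e.g., spike-train correlations) for the one maximizing this quantity.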
3

Zhao, Xiao-yu, Jin Zhang, Yuan-yuan Chen, et al. "Promoter recognition based on the maximum entropy hidden Markov model." Computers in Biology and Medicine 51 (August 2014): 73–81. http://dx.doi.org/10.1016/j.compbiomed.2014.04.003.

4

Alrashdi, Ibrahim, Muhammad Hameed Siddiqi, Yousef Alhwaiti, Madallah Alruwaili, and Mohammad Azad. "Maximum Entropy Markov Model for Human Activity Recognition Using Depth Camera." IEEE Access 9 (2021): 160635–45. http://dx.doi.org/10.1109/access.2021.3132559.

5

Manikandan, P., D. Ramyachitra, C. Muthu, and N. Sajithra. "Enrichment of Remote Homology Detection using Cascading Maximum Entropy Markov Model." International Journal of Current Research and Review 13, no. 19 (2021): 80–84. http://dx.doi.org/10.31782/ijcrr.2021.131906.

6

Siddiqi, Muhammad Hameed, Md Golam Rabiul Alam, Choong Seon Hong, Adil Mehmood Khan, and Hyunseung Choo. "A Novel Maximum Entropy Markov Model for Human Facial Expression Recognition." PLOS ONE 11, no. 9 (2016): e0162702. http://dx.doi.org/10.1371/journal.pone.0162702.

7

Jalal, Ahmad, Nida Khalid, and Kibum Kim. "Automatic Recognition of Human Interaction via Hybrid Descriptors and Maximum Entropy Markov Model Using Depth Sensors." Entropy 22, no. 8 (2020): 817. http://dx.doi.org/10.3390/e22080817.

Abstract:
Automatic identification of human interaction is a challenging task, especially in dynamic environments with cluttered backgrounds from video sequences. Advancements in computer vision sensor technologies provide powerful effects in human interaction recognition (HIR) during routine daily life. In this paper, we propose a novel feature extraction method which incorporates robust entropy optimization and an efficient Maximum Entropy Markov Model (MEMM) for HIR via multiple vision sensors. The main objectives of the proposed methodology are: (1) to propose a hybrid of four novel features, i.e., spatio-temporal features, energy-based features, shape-based angular and geometric features, and a motion-orthogonal histogram of oriented gradients (MO-HOG); (2) to encode the hybrid feature descriptors using a codebook, a Gaussian mixture model (GMM) and Fisher encoding; (3) to optimize the encoded features using a cross-entropy optimization function; (4) to apply an MEMM classification algorithm that examines empirical expectations and highest entropy, which measure pattern variances, to achieve improved HIR accuracy. Our system was tested on three well-known datasets: the SBU Kinect interaction, UoL 3D social activity and UT-interaction datasets. Through extensive experimentation, the proposed feature extraction algorithm, along with cross-entropy optimization, achieved average accuracy rates of 91.25% on SBU, 90.4% on UoL and 87.4% on UT-Interaction. The proposed HIR system will be applicable to a wide variety of man–machine interfaces, such as public-place surveillance, future medical applications, virtual reality, fitness exercises and 3D interactive gaming.
8

Khuong, Hung The, and Tra Thuy Thi Lai. "Studies on lithofacies sequences in the Hoanh Bo basin, Quang Ninh province by using Markov chain model and Entropy function." Journal of Mining and Earth Sciences 63, no. 1 (2022): 15–26. http://dx.doi.org/10.46326/jmes.2022.63(1).02.

Abstract:
The succession of lithofacies in the Dong Ho and Tieu Giao formations of the Hoanh Bo basin was statistically analyzed using the modified Markov chain model and the entropy function. Based on field definitions, petrographic investigation, and borehole logs, the lithofacies study was carried out to determine the sediment deposition system and the depositional environment. Seventeen sub-lithofacies organized within the succession are recognized in three lithofacies associations. The results of the Markov chain analysis and the chi-square (χ2) test indicate that the deposition of the lithofacies is a non-Markovian process and represents cyclic deposition of asymmetric fining-upward type. To evaluate the randomness of the occurrence of lithofacies in a succession, entropy analysis was performed. Each state is associated with two types of entropy: one relevant to the Markov matrix expressing upward transitions (entropy after deposition) and the other relevant to the downward transition matrix (entropy before deposition). The energy regime calculated from the maximum randomness entropy analysis indicates that changing deposition patterns resulted from a transition from rapid to steady flow. This produced a change in the depositional pattern from alluvial-fluvial to lacustrine environments, specifically from conglomerate facies (A1) → sandstone facies (A2) → fine-grained and non-debris facies (A3).
9

Cahyani, Denis Eka, and Winda Mustikaningtyas. "Indonesian part of speech tagging using maximum entropy markov model on Indonesian manually tagged corpus." IAES International Journal of Artificial Intelligence (IJ-AI) 11, no. 1 (2022): 336–44. https://doi.org/10.11591/ijai.v11.i1.pp336-344.

Abstract:
This research discusses the development of a part-of-speech (POS) tagging system to solve the problem of word ambiguity. This paper presents a new method, namely the maximum entropy Markov model (MEMM), to solve word ambiguity on an Indonesian dataset. A manually labeled "Indonesian manually tagged corpus" was used as data. The corpus is processed using the entropy formula to obtain the weight of the word being searched for, which is then fed into the MEMM Bigram and MEMM Trigram algorithms with the previously obtained rules to determine the POS tag with the highest probability. The results show that POS tagging using the MEMM method has advantages over methods previously applied to the same data, improving on the performance reported in earlier research. The resulting average accuracy is 83.04% for the MEMM Bigram algorithm and 86.66% for the MEMM Trigram; the MEMM Trigram algorithm is therefore better than the MEMM Bigram algorithm.
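As background for this and the other MEMM tagging entries, here is a minimal, self-contained sketch of how an MEMM tagger decodes (illustrative only, not the authors' implementation; the two-tag set, indicator features and weights below are invented): each conditional probability P(tag | previous tag, word) is a log-linear, i.e. maximum entropy, model, and Viterbi search returns the most probable tag sequence.

```python
import math

# Hypothetical toy MEMM: two tags, hand-set indicator-feature weights.
# A real tagger would learn these weights from an annotated corpus.
TAGS = ["NOUN", "VERB"]
WEIGHTS = {
    ("prev=START", "NOUN"): 1.0,
    ("word=dogs", "NOUN"): 2.0,
    ("word=bark", "VERB"): 2.0,
    ("prev=NOUN", "VERB"): 1.0,
}

def transition_prob(prev_tag, word, tag):
    """Maximum entropy model: softmax over tags given (prev_tag, word) features."""
    def score(t):
        return (WEIGHTS.get((f"prev={prev_tag}", t), 0.0)
                + WEIGHTS.get((f"word={word}", t), 0.0))
    z = sum(math.exp(score(t)) for t in TAGS)
    return math.exp(score(tag)) / z

def viterbi(words):
    """Most probable tag sequence under the MEMM (log-space Viterbi)."""
    best = {tag: (math.log(transition_prob("START", words[0], tag)), [tag])
            for tag in TAGS}
    for word in words[1:]:
        nxt = {}
        for tag in TAGS:
            score, path = max(
                (best[p][0] + math.log(transition_prob(p, word, tag)), best[p][1])
                for p in TAGS)
            nxt[tag] = (score, path + [tag])
        best = nxt
    return max(best.values())[1]

print(viterbi(["dogs", "bark"]))  # -> ['NOUN', 'VERB']
```

A trained MEMM fits the weights by maximizing conditional likelihood over a tagged corpus; bigram versus trigram variants differ only in how much tag history enters the feature functions.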
10

Pattnaik, Sagarika, and Ajit Kumar Nayak. "A Modified Markov-Based Maximum-Entropy Model for POS Tagging of Odia Text." International Journal of Decision Support System Technology 14, no. 1 (2022): 1–24. http://dx.doi.org/10.4018/ijdsst.286690.

Abstract:
POS (parts of speech) tagging, a vital step in diverse natural language processing (NLP) tasks, has not drawn much attention in the case of Odia, a computationally under-developed language. The proposed hybrid method suggests a robust POS tagger for Odia. Given the rich morphology of the language and the unavailability of a sufficient annotated text corpus, a combination of machine learning and linguistic rules is adopted in building the tagger. The tagger is trained on a tagged text corpus from the domain of tourism and obtains a perceptible improvement in results. An appreciable performance is also observed on news article texts from varied domains. The performance of the proposed algorithm on Odia shows that it dominates existing methods such as rule-based taggers, the hidden Markov model (HMM), maximum entropy (ME) and conditional random field (CRF).
11

Mihelich, M., D. Faranda, B. Dubrulle, and D. Paillard. "Statistical optimization for passive scalar transport: maximum entropy production versus maximum Kolmogorov–Sinai entropy." Nonlinear Processes in Geophysics 22, no. 2 (2015): 187–96. http://dx.doi.org/10.5194/npg-22-187-2015.

Abstract:
We derive rigorous results on the link between the principle of maximum entropy production and the principle of maximum Kolmogorov–Sinai entropy for a Markov model of the passive scalar diffusion called the Zero Range Process. We show analytically that both the entropy production and the Kolmogorov–Sinai entropy, seen as functions of a parameter f connected to the jump probability, admit a unique maximum denoted fmaxEP and fmaxKS. The behaviour of these two maxima is explored as a function of the system disequilibrium and the system resolution N. The main result of this paper is that fmaxEP and fmaxKS have the same Taylor expansion at first order in the deviation from equilibrium. We find that fmaxEP hardly depends on N whereas fmaxKS depends strongly on N. In particular, for a fixed difference of potential between the reservoirs, fmaxEP(N) tends towards a non-zero value, while fmaxKS(N) tends to 0 when N goes to infinity. For values of N typical of those adopted by Paltridge and climatologists working on maximum entropy production (N ≈ 10–100), we show that fmaxEP and fmaxKS coincide even far from equilibrium. Finally, we show that one can find an optimal resolution N* such that fmaxEP and fmaxKS coincide, at least up to a second-order parameter proportional to the non-equilibrium fluxes imposed at the boundaries. We find that the optimal resolution N* depends on the non-equilibrium fluxes, so that deeper convection should be represented on finer grids. This result points to the inadequacy of using a single grid for representing convection in climate and weather models. Moreover, the application of this principle to passive scalar transport parametrization is therefore expected to provide both the value of the optimal flux, and the optimal number of degrees of freedom (resolution) to describe the system.
12

Mihelich, M., D. Faranda, B. Dubrulle, and D. Paillard. "Statistical optimization for passive scalar transport: maximum entropy production vs. maximum Kolmogorov–Sinay entropy." Nonlinear Processes in Geophysics Discussions 1, no. 2 (2014): 1691–713. http://dx.doi.org/10.5194/npgd-1-1691-2014.

Abstract:
We derive rigorous results on the link between the principle of maximum entropy production and the principle of maximum Kolmogorov–Sinai entropy using a Markov model of the passive scalar diffusion called the Zero Range Process. We show analytically that both the entropy production and the Kolmogorov–Sinai entropy, seen as functions of f, admit a unique maximum denoted fmaxEP and fmaxKS. The behavior of these two maxima is explored as a function of the system disequilibrium and the system resolution N. The main result of this article is that fmaxEP and fmaxKS have the same Taylor expansion at first order in the deviation from equilibrium. We find that fmaxEP hardly depends on N whereas fmaxKS depends strongly on N. In particular, for a fixed difference of potential between the reservoirs, fmaxEP(N) tends towards a non-zero value, while fmaxKS(N) tends to 0 when N goes to infinity. For values of N typical of those adopted by Paltridge and climatologists (N ≈ 10–100), we show that fmaxEP and fmaxKS coincide even far from equilibrium. Finally, we show that one can find an optimal resolution N* such that fmaxEP and fmaxKS coincide, at least up to a second-order parameter proportional to the non-equilibrium fluxes imposed at the boundaries. We find that the optimal resolution N* depends on the non-equilibrium fluxes, so that deeper convection should be represented on finer grids. This result points to the inadequacy of using a single grid for representing convection in climate and weather models. Moreover, the application of this principle to passive scalar transport parametrization is therefore expected to provide both the value of the optimal flux, and the optimal number of degrees of freedom (resolution) to describe the system.
13

Cahyani, Denis Eka, and Winda Mustikaningtyas. "Indonesian part of speech tagging using maximum entropy markov model on Indonesian manually tagged corpus." IAES International Journal of Artificial Intelligence (IJ-AI) 11, no. 1 (2022): 336. http://dx.doi.org/10.11591/ijai.v11.i1.pp336-344.

Abstract:
This research discusses the development of a part-of-speech (POS) tagging system to solve the problem of word ambiguity. This paper presents a new method, namely the maximum entropy Markov model (MEMM), to solve word ambiguity on an Indonesian dataset. A manually labeled "Indonesian manually tagged corpus" was used as data. The corpus is processed using the entropy formula to obtain the weight of the word being searched for, which is then fed into the MEMM Bigram and MEMM Trigram algorithms with the previously obtained rules to determine the POS tag with the highest probability. The results show that POS tagging using the MEMM method has advantages over methods previously applied to the same data, improving on the performance reported in earlier research. The resulting average accuracy is 83.04% for the MEMM Bigram algorithm and 86.66% for the MEMM Trigram; the MEMM Trigram algorithm is therefore better than the MEMM Bigram algorithm.
14

Tesso Nedjo, Abraham. "Automatic Part-of-speech Tagging for Oromo Language Using Maximum Entropy Markov Model (MEMM)." Journal of Information and Computational Science 11, no. 10 (2014): 3319–34. http://dx.doi.org/10.12733/jics20103906.

15

Singh, Sanjay Kumar, Umesh Singh, and Manoj Kumar. "Bayesian inference for exponentiated Pareto model with application to bladder cancer remission time." Statistics in Transition new series 15, no. 3 (2014): 403–26. http://dx.doi.org/10.59170/stattrans-2014-027.

Abstract:
Maximum likelihood and Bayes estimators of the unknown parameters and the expected experiment times of the exponentiated Pareto model have been obtained for progressive type-II censored data with a binomial removal scheme. The Markov Chain Monte Carlo (MCMC) method is used to compute the Bayes estimates of the parameters of interest. The generalized entropy loss function and the squared error loss function have been considered for obtaining the Bayes estimators. Comparisons are made between the Bayesian and maximum likelihood (ML) estimators via Monte Carlo simulation. The proposed methodology is illustrated through real data.
16

Chan, Jason Chin-Tiong, and Hong Choon Ong. "A Novel Entropy-Based Decoding Algorithm for a Generalized High-Order Discrete Hidden Markov Model." Journal of Probability and Statistics 2018 (2018): 1–15. http://dx.doi.org/10.1155/2018/8068196.

Abstract:
The optimal state sequence of a generalized high-order hidden Markov model (HHMM) is tracked from a given observational sequence using the classical Viterbi algorithm, which is based on the maximum likelihood criterion. We introduce an entropy-based Viterbi algorithm for tracking the optimal state sequence of an HHMM. The entropy of a state sequence is a useful quantity, providing a measure of the uncertainty of an HHMM; there is no uncertainty if only one optimal state sequence is possible. This entropy-based decoding algorithm can be formulated in an extended or a reduction approach. We extend the entropy-based algorithm for computing the optimal state sequence from a first-order HMM to a generalized HHMM with a single observational sequence. This extended algorithm performs the computation exponentially with respect to the order of the HMM; its computational complexity is due to the growth of the model parameters. We then introduce an efficient entropy-based decoding algorithm that uses the reduction approach, namely the entropy-based order-transformation forward algorithm (EOTFA), to compute the optimal state sequence of any generalized HHMM. EOTFA transforms a generalized high-order HMM into an equivalent first-order HMM, on which an entropy-based decoding algorithm is developed. The algorithm performs the computation based on the observational sequence and requires O(TÑ²) calculations, where Ñ is the number of states in the equivalent first-order model and T is the length of the observational sequence.
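The state-sequence entropy that such algorithms compute efficiently can be illustrated by brute-force enumeration on a toy first-order HMM (an illustrative sketch with invented probabilities; the point of algorithms like EOTFA is precisely to avoid this exponential enumeration):

```python
import itertools
import math

# Toy 2-state HMM with made-up parameters. The entropy of the posterior over
# hidden state sequences measures decoding uncertainty: it is zero exactly
# when a single sequence carries all the posterior probability.
STATES = [0, 1]
PI = [0.6, 0.4]                # initial state distribution
A = [[0.7, 0.3], [0.4, 0.6]]   # transition matrix
B = [[0.9, 0.1], [0.2, 0.8]]   # emission matrix over 2 symbols

def joint(seq, obs):
    """P(state sequence, observations) under the toy HMM."""
    p = PI[seq[0]] * B[seq[0]][obs[0]]
    for t in range(1, len(obs)):
        p *= A[seq[t - 1]][seq[t]] * B[seq[t]][obs[t]]
    return p

def state_sequence_entropy(obs):
    """Exact entropy (bits) of P(states | obs), by enumerating all sequences."""
    joints = [joint(seq, obs)
              for seq in itertools.product(STATES, repeat=len(obs))]
    z = sum(joints)  # likelihood of the observations
    return -sum((p / z) * math.log2(p / z) for p in joints if p > 0)

print(state_sequence_entropy([0, 1, 0]))
```

This enumeration costs O(N^T); the forward-style recursions described in the entry compute the same quantity in time polynomial in the number of states and linear in T.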
17

Lv, Chengyao, Deng Pan, Yaxiong Li, Jianxin Li, and Zong Wang. "A Novel Chinese Entity Relationship Extraction Method Based on the Bidirectional Maximum Entropy Markov Model." Complexity 2021 (January 19, 2021): 1–8. http://dx.doi.org/10.1155/2021/6610965.

Abstract:
To identify relationships among entities in natural language texts, extraction of entity relationships provides fundamental support for knowledge graphs, intelligent information retrieval, and semantic analysis; it promotes the construction of knowledge bases and improves the efficiency of searching and semantic analysis. Traditional methods of relationship extraction, whether early approaches or those based on traditional machine learning and deep learning, have kept relationships and entities in their own silos: relationships and entities are extracted in separate steps before the mappings between them are obtained. To address this problem, a novel Chinese relationship extraction method is proposed in this paper. Firstly, the triple is treated as an entity relation chain, which identifies the entity before the relationship and predicts its corresponding relationship and the entity after the relationship. Secondly, the joint extraction of entity mentions and relations is based on a Bidirectional Long Short-Term Memory and Maximum Entropy Markov Model (Bi-MEMM). Experimental results indicate that the proposed model achieves a precision of 79.2%, much higher than that of traditional models.
18

Ghadi, Yazeed Yasin, Israr Akhter, Hanan Aljuaid, et al. "Extrinsic Behavior Prediction of Pedestrians via Maximum Entropy Markov Model and Graph-Based Features Mining." Applied Sciences 12, no. 12 (2022): 5985. http://dx.doi.org/10.3390/app12125985.

Abstract:
With the change of technology and innovation of the current era, data retrieval and data processing have become more challenging tasks for researchers. In particular, several types of sensors and cameras are used to collect multimedia data from various resources and domains, which have been used in different domains and platforms to analyze things such as educational and communicational setups, emergency services, and surveillance systems. In this paper, we propose a robust method to predict human behavior in indoor and outdoor crowd environments. Taking crowd-based data as input, some preprocessing steps for noise reduction are performed. Then, human silhouettes are extracted, which eventually help in the identification of human beings. After that, crowd analysis and crowd clustering are applied for more accurate and clear predictions. This step is followed by feature extraction, in which the deep flow, force interaction matrix and force flow features are extracted. Moreover, we applied a graph mining technique for data optimization, while the maximum entropy Markov model is applied for classification and prediction. The evaluation of the proposed system showed a mean accuracy of 87% and an error rate of 13% for the Avenue dataset, a mean accuracy of 89.50% and an error rate of 10.50% for the University of Minnesota (UMN) dataset, and a mean accuracy of 90.50% and an error rate of 9.50% for the A Day on Campus (ADOC) dataset. These results show a better accuracy rate and a lower error rate compared to state-of-the-art methods.
19

Saito, Mitsuo, Tatsuya Suzuki, Shinkichi Inagaki, and Takeshi Aoki. "Fault Diagnosis of Event-Driven Systems based on Timed Markov Model with Maximum Entropy Principle." Transactions of the Society of Instrument and Control Engineers 42, no. 9 (2006): 1067–75. http://dx.doi.org/10.9746/sicetr1965.42.1067.

20

Wang, Hongbing, Huanhuan Fei, Qi Yu, Wei Zhao, Jia Yan, and Tianjing Hong. "A motifs-based Maximum Entropy Markov Model for realtime reliability prediction in System of Systems." Journal of Systems and Software 151 (May 2019): 180–93. http://dx.doi.org/10.1016/j.jss.2019.02.023.

21

Emam, Walid, and Yusra Tashkandy. "The Weibull Claim Model: Bivariate Extension, Bayesian, and Maximum Likelihood Estimations." Mathematical Problems in Engineering 2022 (May 4, 2022): 1–10. http://dx.doi.org/10.1155/2022/8729529.

Abstract:
Using a class of claim distributions, we introduce the Weibull claim distribution, a new extension of the Weibull distribution with three parameters. The maximum likelihood estimation method is used to estimate the three unknown parameters, and asymptotic confidence intervals and bootstrap confidence intervals are constructed. In addition, we obtain the Bayesian estimates of the unknown parameters of the Weibull claim distribution under the squared error, linear exponential (LINEX) and general entropy loss functions. Since the Bayes estimators cannot be obtained in closed form, we compute approximate Bayes estimates via the Markov Chain Monte Carlo (MCMC) procedure. The applicability and capabilities of the Weibull claim model are illustrated by analyzing two data sets, including one on the fatigue life of a particular type of Kevlar epoxy strand subjected to a fixed continuous load at a pressure level of 90% until the strand fails.
22

Wang, Xinjing, and Wenhao Gui. "Bayesian Estimation of Entropy for Burr Type XII Distribution under Progressive Type-II Censored Data." Mathematics 9, no. 4 (2021): 313. http://dx.doi.org/10.3390/math9040313.

Abstract:
With the rapid development of statistics, information entropy is proposed as an important indicator used to quantify information uncertainty. In this paper, maximum likelihood and Bayesian methods are used to obtain the estimators of the entropy for a two-parameter Burr type XII distribution under progressive type-II censored data. In the part of maximum likelihood estimation, the asymptotic confidence intervals of entropy are calculated. In Bayesian estimation, we consider non-informative and informative priors respectively, and asymmetric and symmetric loss functions are both adopted. Meanwhile, the posterior risk is also calculated to evaluate the performances of the entropy estimators against different loss functions. In a numerical simulation, the Lindley approximation and the Markov chain Monte Carlo method were used to obtain the Bayesian estimates. In turn, the highest posterior density credible intervals of the entropy were derived. Finally, average absolute bias and mean square error were used to evaluate the estimators under different methods, and a real dataset was selected to illustrate the feasibility of the above estimation model.
23

Aurbacher, Joachim, and Stephan Dabbert. "Generating crop sequences in land-use models using maximum entropy and Markov chains." Agricultural Systems 104, no. 6 (2011): 470–79. http://dx.doi.org/10.1016/j.agsy.2011.03.004.

24

Zhu, Song Chun, Ying Nian Wu, and David Mumford. "Minimax Entropy Principle and Its Application to Texture Modeling." Neural Computation 9, no. 8 (1997): 1627–60. http://dx.doi.org/10.1162/neco.1997.9.8.1627.

Abstract:
This article proposes a general theory and methodology, called the minimax entropy principle, for building statistical models for images (or signals) in a variety of applications. This principle consists of two parts. The first is the maximum entropy principle for feature binding (or fusion): for a given set of observed feature statistics, a distribution can be built to bind these feature statistics together by maximizing the entropy over all distributions that reproduce them. The second part is the minimum entropy principle for feature selection: among all plausible sets of feature statistics, we choose the set whose maximum entropy distribution has the minimum entropy. Computational and inferential issues in both parts are addressed; in particular, a feature pursuit procedure is proposed for approximately selecting the optimal set of features. The minimax entropy principle is then corrected by considering the sample variation in the observed feature statistics, and an information criterion for feature pursuit is derived. The minimax entropy principle is applied to texture modeling, where a novel Markov random field (MRF) model, called FRAME (filter, random field, and minimax entropy), is derived, and encouraging results are obtained in experiments on a variety of texture images. The relationship between our theory and the mechanisms of neural computation is also discussed.
25

Shao, Liangshan, and Yingchao Gao. "A Gas Prominence Prediction Model Based on Entropy-Weighted Gray Correlation and MCMC-ISSA-SVM." Processes 11, no. 7 (2023): 2098. http://dx.doi.org/10.3390/pr11072098.

Abstract:
To improve the accuracy of coal and gas prominence prediction, a prediction model based on an improved sparrow search algorithm (ISSA) and an optimized support vector machine (SVM) with the Markov chain Monte Carlo (MCMC) filling algorithm was proposed. The mean value of the data after filling in the missing values in the coal and gas prominence data using the MCMC filling algorithm was 2.282, with a standard deviation of 0.193. Compared with the mean fill method (Mean), the random forest filling method (RF), and the K-nearest neighbor filling method (KNN), the MCMC filling algorithm showed the best results. The parameter indicators of the prominence data were ranked by entropy-weighted gray correlation analysis, and the prediction experiments were divided into four groups with different numbers of parameter indicators according to the entropy-weighted gray correlation. The best results were obtained in the fourth group, with a maximum relative error (REmax) of 0.500, an average relative error (MRE) of 0.042, a root mean square error (RMSE) of 0.144, and a coefficient of determination (R2) of 0.993. The best predictive parameters were the initial velocity of gas dispersion (X2), gas content (X4), K1 gas desorption (X5), and drill chip volume (X6).
To improve the sparrow search algorithm (SSA), an adaptive t-distribution variation operator was introduced to obtain the ISSA. Prediction models for coal and gas prominence were then established by optimizing an SVM on the MCMC-filled data with the ISSA (MCMC-ISSA-SVM), the SSA (MCMC-SSA-SVM), a genetic algorithm (GA; MCMC-GA-SVM) and particle swarm optimization (PSO; MCMC-PSO-SVM). Comparing the prediction results of each model, the prediction accuracy of MCMC-ISSA-SVM is 98.25%, its error is 0.018, its convergence is the fastest, it requires the fewest iterations, and its best and average fitness are the highest among the four models. All the prediction results of MCMC-ISSA-SVM are significantly better than those of the other three models, which indicates that the algorithm improvement is effective. The ISSA outperformed the SSA, PSO, and GA, and the MCMC-ISSA-SVM model significantly improved the prediction accuracy and effectively enhanced the generalization ability.
26

Tahir, Sheikh Badar ud din, Ahmad Jalal, and Kibum Kim. "Wearable Inertial Sensors for Daily Activity Analysis Based on Adam Optimization and the Maximum Entropy Markov Model." Entropy 22, no. 5 (2020): 579. http://dx.doi.org/10.3390/e22050579.

Abstract:
Advancements in wearable sensor technologies provide prominent effects in the daily life activities of humans. These wearable sensors are gaining more awareness in healthcare for the elderly to ensure their independent living and to improve their comfort. In this paper, we present a human activity recognition model that acquires signal data from motion node sensors including inertial sensors, i.e., gyroscopes and accelerometers. First, the inertial data is processed via multiple filters such as Savitzky–Golay, median and Hampel filters to examine lower/upper cutoff frequency behaviors. Second, it extracts a multifused model for statistical, wavelet and binary features to maximize the occurrence of optimal feature values. Then, adaptive moment estimation (Adam) and AdaDelta are introduced in a feature optimization phase to adopt learning rate patterns. These optimized patterns are further processed by the maximum entropy Markov model (MEMM) for empirical expectation and highest entropy, which measure signal variances to deliver improved accuracy results. Our model was experimentally evaluated on the University of Southern California Human Activity Dataset (USC-HAD) as a benchmark dataset and on Intelligent Mediasporting behavior (IMSB), a new self-annotated sports dataset. For evaluation, we used the "leave-one-out" cross-validation scheme, and the results outperformed existing well-known statistical state-of-the-art methods by achieving an improved recognition accuracy of 91.25%, 93.66% and 90.91% on the USC-HAD, IMSB, and Mhealth datasets, respectively. The proposed system should be applicable to man–machine interface domains, such as health exercises, robot learning, interactive games and pattern-based surveillance.
APA, Harvard, Vancouver, ISO, and other styles
27

PAPANTONOPOULOS, G., K. TAKAHASHI, T. BOUNTIS, and B. G. LOOS. "USING CELLULAR AUTOMATA EXPERIMENTS TO MODEL PERIODONTITIS: A FIRST STEP TOWARDS UNDERSTANDING THE NONLINEAR DYNAMICS OF THE DISEASE." International Journal of Bifurcation and Chaos 23, no. 03 (2013): 1350056. http://dx.doi.org/10.1142/s0218127413500569.

Full text
Abstract:
Cellular automata (CA) are time- and space-discrete dynamical systems that can model biological systems. The aim of this study is to simulate by CA experiments how the disease of periodontitis propagates along the dental root surface. Using a Moore neighborhood on a grid copy of the pattern of periodontal ligament fibers (PDLF) supporting and anchoring the teeth to bone, we investigate the fractal structure of the associated pattern using all possible outer-totalistic CA rules. On the basis of the propagation patterns, CA rules are classified into three groups, according to whether the disease is spreading, remaining constant or receding. These are subsequently introduced into a finite-state Markov model as probabilistic "state-rules" and the model is validated using datasets retrieved from previous studies. Based on the maximum entropy production principle, we identified the "state-rule" that most appropriately describes the PDLF pattern, showing a power-law distribution of periodontitis propagation rates with exponent 1.3. Entropy rates and mutual information of Markov chains were estimated by extensive data simulation. The scale factor of the PDLF used to estimate the conditional entropy of Markov chains was seen to be nearly equal to 1.85. This possibly reflects the fact that a dataset representing the percentage of teeth with bone loss equal to 50% or more of their root length is found to have a fractal dimension (FD) of about 1.84. Similarly, datasets of serum neutrophil, basophil, eosinophil and monocyte counts and IgG, IgA and IgM levels taken from periodontitis patients showed an FD ranging from 1.82 to 1.87. Our study presents the first mathematical model, to our knowledge, that suggests periodontitis is a nonlinear dynamical process. Moreover, the model we propose implies that the entropy rate of the immune-inflammatory host response dictates the rate of periodontitis progression. This is validated by clinical data and suggests that our model can serve as a basis for detecting periodontitis-susceptible individuals and shaping the prognosis for treated periodontitis patients.
APA, Harvard, Vancouver, ISO, and other styles
28

Dobrovolski, S. G. "South Atlantic sea surface temperature anomalies and air-sea interactions: stochastic models." Annales Geophysicae 12, no. 9 (1994): 903–9. http://dx.doi.org/10.1007/s00585-994-0903-9.

Full text
Abstract:
Data on the South Atlantic monthly sea surface temperature anomalies (SSTA) are analysed using the maximum-entropy method. It is shown that a first-order Markov process can describe, to a first approximation, the SSTA series. The region of maximum SSTA values coincides with the zone of maximum residual white-noise values (the sub-Antarctic hydrological front). The theory of dynamic-stochastic climate models is applied to estimate the variability of South Atlantic SSTA and air-sea interactions. The Adem model is used as the deterministic block of the dynamic-stochastic model. Experiments satisfactorily reproduce the SSTA intensification in the sub-Antarctic front zone, with appropriate standard deviations, and demonstrate the leading role of the abnormal drift currents in these processes.
APA, Harvard, Vancouver, ISO, and other styles
29

Xiao, Jinghui, Xiaolong Wang, and Bingquan Liu. "The study of a nonstationary maximum entropy Markov model and its application on the pos-tagging task." ACM Transactions on Asian Language Information Processing 6, no. 2 (2007): 7. http://dx.doi.org/10.1145/1282080.1282082.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Huang, Guanghua, Feng Yin, and Jiulin Feng. "An Improved Collaborative Filtering Algorithms Based on Maximum Entropy Model and Markov Process to Mine User Behavior." Journal of Physics: Conference Series 1395 (November 2019): 012001. http://dx.doi.org/10.1088/1742-6596/1395/1/012001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

ODEN, J. TINSLEY, ERNESTO E. PRUDENCIO, and ANDREA HAWKINS-DAARUD. "SELECTION AND ASSESSMENT OF PHENOMENOLOGICAL MODELS OF TUMOR GROWTH." Mathematical Models and Methods in Applied Sciences 23, no. 07 (2013): 1309–38. http://dx.doi.org/10.1142/s0218202513500103.

Full text
Abstract:
We address general approaches to the rational selection and validation of mathematical and computational models of tumor growth using methods of Bayesian inference. The model classes are derived from a general diffuse-interface, continuum mixture theory and focus on mass conservation of mixtures with up to four species. Synthetic data are generated using higher-order base models. We discuss general approaches to model calibration, validation, plausibility, and selection based on Bayesian-based methods, information theory, and maximum information entropy. We also address computational issues and provide numerical experiments based on Markov chain Monte Carlo algorithms and high performance computing implementations.
APA, Harvard, Vancouver, ISO, and other styles
32

Mindrila, Diana. "Bayesian Latent Class Analysis: Sample Size, Model Size, and Classification Precision." Mathematics 11, no. 12 (2023): 2753. http://dx.doi.org/10.3390/math11122753.

Full text
Abstract:
The current literature includes limited information on the classification precision of Bayes estimation for latent class analysis (BLCA). (1) Objectives: The present study compared BLCA with the robust maximum likelihood (MLR) procedure, which is the default procedure in the Mplus 8.0 software. (2) Method: Markov chain Monte Carlo simulations were used to estimate two-, three-, and four-class models measured by four binary observed indicators with samples of 1000, 750, 500, 250, 100, and 75 observations, respectively. With each sample, the number of replications was 500, and entropy and average latent class probabilities for most likely latent class membership were recorded. (3) Results: Bayes entropy values were more stable and ranged between 0.644 and 1. Bayes average latent class probabilities ranged between 0.528 and 1. MLR entropy values ranged between 0.552 and 0.958, and MLR average latent class probabilities ranged between 0.539 and 0.993. With the two-class model, BLCA outperformed MLR at all sample sizes. With the three-class model, BLCA had higher classification precision at the 75-observation sample size, whereas MLR performed slightly better at the 750- and 1000-observation sample sizes. With the four-class model, BLCA underperformed MLR and had an increased number of unsuccessful computations, particularly with smaller samples.
APA, Harvard, Vancouver, ISO, and other styles
33

Kumar, Kapil, Indrajeet Kumar, and Hon Keung Tony Ng. "On Estimation of Shannon’s Entropy of Maxwell Distribution Based on Progressively First-Failure Censored Data." Stats 7, no. 1 (2024): 138–59. http://dx.doi.org/10.3390/stats7010009.

Full text
Abstract:
Shannon’s entropy is a fundamental concept in information theory that quantifies the uncertainty or information in a random variable or data set. This article addresses the estimation of Shannon’s entropy for the Maxwell lifetime model based on progressively first-failure-censored data from both classical and Bayesian points of view. In the classical perspective, the entropy is estimated using maximum likelihood estimation and bootstrap methods. For Bayesian estimation, two approximation techniques, including the Tierney-Kadane (T-K) approximation and the Markov Chain Monte Carlo (MCMC) method, are used to compute the Bayes estimate of Shannon’s entropy under the linear exponential (LINEX) loss function. We also obtained the highest posterior density (HPD) credible interval of Shannon’s entropy using the MCMC technique. A Monte Carlo simulation study is performed to investigate the performance of the estimation procedures and methodologies studied in this manuscript. A numerical example is used to illustrate the methodologies. This paper aims to provide practical values in applied statistics, especially in the areas of reliability and lifetime data analysis.
APA, Harvard, Vancouver, ISO, and other styles
34

Jaariyah, Narti, and Ednawati Rainarli. "CONDITIONAL RANDOM FIELDS UNTUK PENGENALAN ENTITAS BERNAMA PADA TEKS BAHASA INDONESIA." Komputa : Jurnal Ilmiah Komputer dan Informatika 6, no. 1 (2017): 29–34. http://dx.doi.org/10.34010/komputa.v6i1.2474.

Full text
Abstract:
Named entity recognition is a process of classifying name entities such as person names, locations, organizations, times, and quantities in a text. For Indonesian-language text, named entity recognition has previously been performed using the Hidden Markov Model (HMM) method [1]. Subsequently, the Conditional Random Fields (CRF) method emerged as an improvement over HMM. CRF itself has many advantages over the Hidden Markov Model and the Maximum Entropy Markov Model, as demonstrated by the high accuracy achieved when CRF is applied to named entity recognition in various languages. This study therefore uses CRF to detect named entities in Indonesian text. A named entity recognition application was built to test how well CRF recognizes named entities. The features used are the current word class; the current and previous word classes; and the current, previous, and following word classes. Testing with identical training and test data yielded a best accuracy of 90.53%, with a recall of 63.09% and a precision of 31.55%. Testing with separate training and test data yielded a best accuracy of 90.06%, with recall and precision of 68.38% and 41.35%, respectively.
APA, Harvard, Vancouver, ISO, and other styles
35

Thach, Tien Thanh, and Radim Briš. "Non-linear failure rate: A comparison of the Bayesian and frequentist approaches to estimation." ITM Web of Conferences 20 (2018): 03001. http://dx.doi.org/10.1051/itmconf/20182003001.

Full text
Abstract:
In this article, a new generalization of the linear failure rate, called the nonlinear failure rate, is developed, analyzed, and applied to a real dataset. A comparison of Bayesian and frequentist approaches to the estimation of the parameters and reliability characteristics of the nonlinear failure rate is investigated. The maximum likelihood estimators are obtained using the cross-entropy method to optimize the log-likelihood function. The Bayes estimators of the parameters and reliability characteristics are obtained via the Markov chain Monte Carlo method. A simulation study is performed in order to compare the proposed Bayes estimators with the maximum likelihood estimators on the basis of their biases and mean squared errors. We demonstrate that the proposed model fits a well-known dataset better than other mixture models.
APA, Harvard, Vancouver, ISO, and other styles
36

Silva, Carlos Pereira da, Cristian Tiago Erazo Mendes, Alessandra Querino da Silva, Luciano Antonio de Oliveira, Renzo Garcia Von Pinho, and Marcio Balestre. "Use of the reversible jump Markov chain Monte Carlo algorithm to select multiplicative terms in the AMMI-Bayesian model." PLOS ONE 18, no. 1 (2023): e0279537. http://dx.doi.org/10.1371/journal.pone.0279537.

Full text
Abstract:
The model selection stage has become a central theme in applying the additive main effects and multiplicative interaction (AMMI) model to determine the optimal number of bilinear components to be retained to describe the genotype-by-environment interaction (GEI). In the Bayesian context, this problem has been addressed by using information criteria and the Bayes factor. However, these procedures are computationally intensive, making their application unfeasible when the model's parametric space is large. A Bayesian analysis of the AMMI model was conducted using the reversible jump Markov chain Monte Carlo (RJMCMC) algorithm to determine the number of multiplicative terms needed to explain the GEI pattern. Three prior distributions were assigned to the singular value scale parameter under different justifications, namely: i) the principle of insufficient reason (uniform); ii) the invariance principle (Jeffreys' prior); and iii) the maximum entropy principle. Simulated and real data were used to exemplify the method. An evaluation of the predictive ability of the models for simulated data indicated that the AMMI analysis was, in general, robust, and that models adjusted by the reversible jump method were superior to those in which sampling was performed only by the Gibbs sampler. In addition, RJMCMC showed greater feasibility, since the selection and estimation of parameters are carried out concurrently in the same sampling algorithm, making it more attractive in terms of computational time. The use of the maximum entropy principle makes the analysis more flexible, avoiding the need for procedures that correct prior degrees of freedom and preventing improper posterior marginal distributions.
APA, Harvard, Vancouver, ISO, and other styles
37

Khodja, N., H. Aiachi, H. Talhi, and I. N. Benatallah. "THE TRUNCATED XLINDLEY DISTRIBUTION WITH CLASSIC AND BAYESIAN INFERENCE UNDER CENSORED DATA." Advances in Mathematics: Scientific Journal 11, no. 12 (2022): 1191–207. http://dx.doi.org/10.37418/amsj.11.12.4.

Full text
Abstract:
We provide a brand-new distribution based on the Lindley model, with an emphasis on the estimation of its unknown parameters. After introducing the new distribution, we cover two approaches to estimating its parameters in the presence of a censoring scheme: first a traditional approach, the maximum likelihood technique, and then the Bayesian approach. The Barzilai-Brown algorithm is used to derive the censored maximum likelihood estimators, while a Markov chain Monte Carlo (MCMC) procedure is applied to derive the Bayesian ones. Three loss functions are used to provide the Bayesian estimators: the entropy, the generalized quadratic, and the LINEX functions. Using Pitman's proximity criterion, the maximum likelihood and Bayesian estimates are compared. All of the provided estimation techniques have been evaluated through simulation studies. Finally, we consider two-sample Bayes prediction to predict future order statistics.
APA, Harvard, Vancouver, ISO, and other styles
38

Li, Dongdong, Kai Zhang, Yin Yao, and Shunfu Lin. "Clustering and Markov Model of Plug-In Electric Vehicle Charging Profiles for Peak Shaving Applications." International Transactions on Electrical Energy Systems 2022 (June 29, 2022): 1–14. http://dx.doi.org/10.1155/2022/5006110.

Full text
Abstract:
The charging profiles of plug-in electric vehicles (PEVs) are highly volatile, which poses great challenges for aggregators in accurately completing load identification and aggregated configuration. Therefore, an analysis and configuration method for responsive capacity based on clustering and the Markov model is proposed in this paper. Firstly, the adaptive two-scale clustering algorithm based on spectral clustering (ATCSC) is applied to the clustering of charging piles. The offset compensation of the extreme points is used to form the distance measurement in the clustering process. Then, the responsive aggregated power can be obtained after the change control of suitable charging piles. Finally, the variation characteristics of the actual charging profiles based on the Markov model are introduced into the reliability evaluation of the load curtailment service. Simulation results reveal the following. (1) The robustness of the proposed method is better, especially for PEV charging profiles with strong volatility. (2) The validity of the aggregated configuration is verified; additionally, the sum of the power deviation is 0.0707 kW when the change interval of the control strategy is 15 min. (3) The maximum shortage of configuration is −98.0875 kW when the entropy of the volatility is 37.027.
APA, Harvard, Vancouver, ISO, and other styles
39

Choudhury, Upasana, Shruti Kanga, Suraj Kumar Singh, et al. "Projecting Urban Expansion by Analyzing Growth Patterns and Sustainable Planning Strategies—A Case Study of Kamrup Metropolitan, Assam, North-East India." Earth 5, no. 2 (2024): 169–94. http://dx.doi.org/10.3390/earth5020009.

Full text
Abstract:
This research focuses on the urban expansion occurring in the Kamrup Metropolitan District, an area experiencing significant urbanization, with the aim of understanding its patterns and projecting future growth. The research covers the period from 2000 to 2022 and projects growth up to 2052, providing insights for sustainable urban planning. The study utilizes the maximum likelihood method for land use/land cover (LULC) delineation and the Shannon entropy technique for assessing urban sprawl. Additionally, it integrates the cellular automata (CA)-Markov model and the analytical hierarchy process (AHP) for future projections. The results indicate a considerable shift from non-built-up to built-up areas, with the proportion of built-up areas expected to reach 36.2% by 2032 and 40.54% by 2052. These findings emphasize the importance of strategic urban management and sustainable planning. The study recommends adaptive urban planning strategies and highlights the value of integrating the CA-Markov model and AHP for policymakers and urban planners. This can contribute to the discourse on sustainable urban development and informed decision-making.
APA, Harvard, Vancouver, ISO, and other styles
40

Zhao, Yanxia, Wei Ren, and Zheng Li. "An Accent Marking Algorithm of English Conversion System Based on Morphological Rules." International Journal of Emerging Technologies in Learning (iJET) 16, no. 01 (2021): 234. http://dx.doi.org/10.3991/ijet.v16i01.19717.

Full text
Abstract:
In English conversion systems, existing accent marking algorithms cannot acquire the morphological rules of English, making accent marking inaccurate, inefficient, and time-consuming. To solve these problems, this paper puts forward an accent marking algorithm for English conversion systems based on morphological rules. Specifically, the English audios in a self-developed English corpus were classified by speaker classification software based on the hidden Markov model, as well as by audio classification technology, producing the morphological rules of English. After that, the English accents were marked by the maximum entropy model in the English conversion system. The proposed method was proved accurate and efficient in accent marking through experiments. The research results provide a good reference for marking accents in English conversion systems.
APA, Harvard, Vancouver, ISO, and other styles
41

Halteren, Hans van, Jakub Zavrel, and Walter Daelemans. "Improving Accuracy in Word Class Tagging through the Combination of Machine Learning Systems." Computational Linguistics 27, no. 2 (2001): 199–229. http://dx.doi.org/10.1162/089120101750300508.

Full text
Abstract:
We examine how differences in language models, learned by different data-driven systems performing the same NLP task, can be exploited to yield a higher accuracy than the best individual system. We do this by means of experiments involving the task of morphosyntactic word class tagging, on the basis of three different tagged corpora. Four well-known tagger generators (hidden Markov model, memory-based, transformation rules, and maximum entropy) are trained on the same corpus data. After comparison, their outputs are combined using several voting strategies and second-stage classifiers. All combination taggers outperform their best component. The reduction in error rate varies with the material in question, but can be as high as 24.3% with the LOB corpus.
APA, Harvard, Vancouver, ISO, and other styles
42

Li, Xiaoli, Longlong Zhao, and Jinsong Chen. "Remote Sensing Image Segmentation Based on Probabilistic Fuzzy Local Information Clustering." ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences X-4-2024 (October 18, 2024): 211–16. http://dx.doi.org/10.5194/isprs-annals-x-4-2024-211-2024.

Full text
Abstract:
In this paper, a remote sensing image segmentation method based on a probabilistic fuzzy local information clustering algorithm is proposed. First, assuming that the spectral measure within the same ground object follows the same probability distribution, the dissimilarity between a pixel and an object is modeled by the negative logarithm of the Gaussian probability density function. This mitigates the noise sensitivity caused by the Euclidean distance, which can only describe data with an isotropic distribution. Then, to incorporate local spatial constraints, on the one hand, the probability measure is used to modify the local fuzzy factor to establish a dissimilarity measure with spatial constraints; on the other hand, a hidden Markov random field is used to build the prior probability model of pixel membership. Next, the entropy regularization term of the objective function is built by combining the Kullback-Leibler (KL) maximum entropy model to further improve robustness and noise resistance. Qualitative and quantitative analyses of a simulated image and different types of real remote sensing images show that the proposed algorithm can effectively overcome the above problems and further improve the accuracy of image segmentation to over 95%.
APA, Harvard, Vancouver, ISO, and other styles
43

Ghazal, M. G. M. "Modified Chen distribution: Properties, estimation, and applications in reliability analysis." AIMS Mathematics 9, no. 12 (2024): 34906–46. https://doi.org/10.3934/math.20241662.

Full text
Abstract:
This article proposes a flexible three-parameter distribution known as the modified Chen distribution (MCD). The MCD is capable of modeling failure rates with both monotonic and non-monotonic behaviors, including the bathtub curve commonly used to represent device performance in reliability engineering. We examined its statistical properties, such as moments, mean time to failure, mean residual life, Rényi entropy, and order statistics. Model parameters, along with survival and hazard functions, were estimated by utilizing maximum likelihood estimators and two types of bootstrap confidence intervals. Bayesian estimates of the model parameters, along with the survival and hazard functions and their corresponding credible intervals, were derived via the Markov chain Monte Carlo method under balanced squared error loss, balanced linear-exponential loss, and balanced general entropy loss. We also provide a simulated dataset analysis for illustration. Furthermore, the MCD's performance was compared with other popular distributions across two well-known failure time datasets. The findings suggest that the MCD offered the best fit for these datasets, highlighting its potential applicability to real-world problems and its suitability as a model for analyzing and predicting device failure times.
APA, Harvard, Vancouver, ISO, and other styles
44

Smerlak, M. "Neutral quasispecies evolution and the maximal entropy random walk." Science Advances 7, no. 16 (2021): eabb2376. http://dx.doi.org/10.1126/sciadv.abb2376.

Full text
Abstract:
Even if they have no impact on phenotype, neutral mutations are not equivalent in the eyes of evolution: A robust neutral variant—one which remains functional after further mutations—is more likely to spread in a large, diverse population than a fragile one. Quasispecies theory shows that the equilibrium frequency of a genotype is proportional to its eigenvector centrality in the neutral network. This paper explores the link between the selection for mutational robustness and the navigability of neutral networks. I show that sequences of neutral mutations follow a “maximal entropy random walk,” a canonical Markov chain on graphs with nonlocal, nondiffusive dynamics. I revisit M. Smith’s word-game model of evolution in this light, finding that the likelihood of certain sequences of substitutions can decrease with the population size. These counterintuitive results underscore the fertility of the interface between evolutionary dynamics, information theory, and physics.
APA, Harvard, Vancouver, ISO, and other styles
45

Nadeem, Amir, Ahmad Jalal, and Kibum Kim. "Accurate Physical Activity Recognition using Multidimensional Features and Markov Model for Smart Health Fitness." Symmetry 12, no. 11 (2020): 1766. http://dx.doi.org/10.3390/sym12111766.

Full text
Abstract:
Recent developments in sensor technologies enable physical activity recognition (PAR) as an essential tool for smart health monitoring and fitness exercises. For efficient PAR, model representation and training are significant factors contributing to the ultimate success of recognition systems, because body parts and physical activities cannot be accurately detected and distinguished if the system is not well trained. This paper provides a unified framework that explores multidimensional features with the help of a fusion of body part models and quadratic discriminant analysis, which uses these features for markerless human pose estimation. Multilevel features are extracted as displacement parameters to serve as spatiotemporal properties; these properties represent the respective positions of the body parts with respect to time. Finally, these features are processed by a maximum entropy Markov model as a recognition engine based on transition and emission probability values. Experimental results demonstrate that the proposed model produces more accurate results compared to state-of-the-art methods for both body part detection and physical activity recognition. The accuracy of the proposed method for body part detection is 90.91% on the University of Central Florida (UCF) sports action dataset and, for activity recognition, 89.09% and 88.26% on a UCF YouTube action dataset and an IM-DailyRGBEvents dataset, respectively.
APA, Harvard, Vancouver, ISO, and other styles
46

Xiong, Shufeng, Xiaobo Fan, Vishwash Batra, et al. "An Entropy-Based Method with a New Benchmark Dataset for Chinese Textual Affective Structure Analysis." Entropy 25, no. 5 (2023): 794. http://dx.doi.org/10.3390/e25050794.

Full text
Abstract:
Affective understanding of language is an important research focus in artificial intelligence. The large-scale annotated datasets of Chinese textual affective structure (CTAS) are the foundation for subsequent higher-level analysis of documents. However, there are very few published datasets for CTAS. This paper introduces a new benchmark dataset for the task of CTAS to promote development in this research direction. Specifically, our benchmark is a CTAS dataset with the following advantages: (a) it is Weibo-based, which is the most popular Chinese social media platform used by the public to express their opinions; (b) it includes the most comprehensive affective structure labels at present; and (c) we propose a maximum entropy Markov model that incorporates neural network features and experimentally demonstrate that it outperforms the two baseline models.
APA, Harvard, Vancouver, ISO, and other styles
47

M Hasaballah, Mustafa, Oluwafemi Samson Balogun, and M. E. Bakr. "Bayesian estimation for the power rayleigh lifetime model with application under a unified hybrid censoring scheme." Physica Scripta 99, no. 10 (2024): 105209. http://dx.doi.org/10.1088/1402-4896/ad72b2.

Full text
Abstract:
This study presents a comprehensive analysis of Bayesian estimation techniques for the parameters of the power Rayleigh (PR) distribution under a unified hybrid censoring scheme (UHCS). The research employs both Bayesian and frequentist approaches, utilizing maximum likelihood estimation (MLE) alongside Bayesian estimates derived through Markov chain Monte Carlo (MCMC) methods. The study incorporates symmetric and asymmetric loss functions, specifically general entropy (GE), linear exponential (LINEX), and squared error (SE), to evaluate the performance of the estimators. A Monte Carlo simulation study is conducted to assess the effectiveness of the proposed methods, revealing that Bayesian estimators generally outperform frequentist estimators in terms of mean squared error (MSE). Additionally, the paper includes a real-world application involving ball bearing lifetimes, demonstrating the practical utility of the proposed estimation techniques. The findings indicate that both point and interval estimates exhibit strong properties for parameter estimation, with Bayesian estimates being particularly favored for their accuracy and reliability.
APA, Harvard, Vancouver, ISO, and other styles
48

Anabike, Ifeanyi C., Chinyere P. Igbokwe, Chrisogonus K. Onyekwere, and Okechukwu J. Obulezi. "Inference on the Parameters of Zubair-Exponential Distribution with Application to Survival Times of Guinea Pigs." Journal of Advances in Mathematics and Computer Science 38, no. 7 (2023): 12–35. http://dx.doi.org/10.9734/jamcs/2023/v38i71769.

Full text
Abstract:
In this paper, we derive a sub-model of the Zubair-G family of distributions named the Zubair-Exponential distribution, with two parameters. Simulations of the estimates of the parameters based on some classical methods are obtained. The likelihood equations and the maximum likelihood estimator, as well as the asymptotic confidence interval, are derived. Bayes estimates, with the estimates of the associated highest posterior density credible interval, are derived using the squared error loss (SEL), linear-exponential (LINEX) and generalized entropy loss (GEL) functions. Using the Metropolis-Hastings algorithm and the Markov chain Monte Carlo (MCMC) method, the Bayes estimates are summarized. To determine the performance of the estimates, a Monte Carlo simulation study is carried out, and maximum likelihood estimates, their standard errors and measures of fit using real data on the survival times of guinea pigs are obtained. The proposed distribution gives a better fit based on the Akaike information criterion (AIC) and the Bayesian information criterion (BIC).
APA, Harvard, Vancouver, ISO, and other styles
49

Zhang, Hao, Ze Meng Zhao, Ahmet Palazoglu, and Wei Sun. "Ozone Prediction Performance of HMM Based on the Data Compressed by Different Wavelet Basis Functions." Advanced Materials Research 518-523 (May 2012): 1586–91. http://dx.doi.org/10.4028/www.scientific.net/amr.518-523.1586.

Full text
Abstract:
Surface ozone in the atmospheric boundary layer is one of the most harmful air pollutants; it is produced by photochemical reactions between nitrogen oxides and volatile hydrocarbons and causes great damage to human beings and the environment. The prediction of surface ozone levels plays an important role in the control and reduction of air pollutants. As model-driven statistical prediction models, hidden Markov models (HMMs) are rich in mathematical structure and work well in many important applications. Due to the complex structure of HMMs, long observation sequences increase the computational load at a geometric rate. In order to reduce training time, wavelet decomposition is used to compress the original observations into shorter ones. During the compression step, observation sequences compressed by different wavelet basis functions retain different information content, which may affect prediction results. In this paper, the ozone prediction performance of HMMs based on different wavelet basis functions is discussed. Shannon entropy is employed to measure how much information content is kept in the new sequence compared to the original one. Data from the Houston Metropolitan Area, TX are used in this paper. Results show that the wavelet basis function used in the data compression step can affect HMM performance significantly. The new sequence with the maximum Shannon entropy generates the best prediction result.
APA, Harvard, Vancouver, ISO, and other styles
50

Savas, Yagiz, Christos K. Verginis, and Ufuk Topcu. "Deceptive Decision-Making under Uncertainty." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 5 (2022): 5332–40. http://dx.doi.org/10.1609/aaai.v36i5.20470.

Full text
Abstract:
We study the design of autonomous agents that are capable of deceiving outside observers about their intentions while carrying out tasks in stochastic, complex environments. By modeling the agent's behavior as a Markov decision process, we consider a setting where the agent aims to reach one of multiple potential goals while deceiving outside observers about its true goal. We propose a novel approach to model observer predictions based on the principle of maximum entropy and to efficiently generate deceptive strategies via linear programming. The proposed approach enables the agent to exhibit a variety of tunable deceptive behaviors while ensuring the satisfaction of probabilistic constraints on the behavior. We evaluate the performance of the proposed approach via comparative user studies and present a case study on the streets of Manhattan, New York, using real travel time distributions.
APA, Harvard, Vancouver, ISO, and other styles

To the bibliography