
Journal articles on the topic 'Entropy estimator'

Consult the top 50 journal articles for your research on the topic 'Entropy estimator.'


1

Ao, Ziqiao, and Jinglai Li. "Entropy Estimation via Normalizing Flow." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 9 (2022): 9990–98. http://dx.doi.org/10.1609/aaai.v36i9.21237.

Abstract:
Entropy estimation is an important problem in information theory and statistical science. Many popular entropy estimators suffer from fast-growing estimation bias with respect to dimensionality, rendering them unsuitable for high dimensional problems. In this work we propose a transform-based method for high dimensional entropy estimation, which consists of the following two main ingredients. First, by modifying the k-NN based entropy estimator, we propose a new estimator which enjoys small estimation bias for samples that are close to a uniform distribution. Second, we design a normalizing flow based mapping that pushes samples toward a uniform distribution, and the relation between the entropy of the original samples and the transformed ones is also derived. As a result, the entropy of a given set of samples is estimated by first transforming them toward a uniform distribution and then applying the proposed estimator to the transformed samples. Numerical experiments demonstrate the effectiveness of the method for high dimensional entropy estimation problems.
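For orientation, the classical fixed-k nearest-neighbour (Kozachenko–Leonenko) estimator that this line of work builds on can be sketched as follows; this is an illustrative Python baseline, not the estimator proposed in the paper, and the default k = 3 is an arbitrary choice.

```python
import numpy as np
from math import gamma, pi
from scipy.spatial import cKDTree
from scipy.special import digamma

def knn_entropy(samples, k=3):
    """Kozachenko-Leonenko k-NN estimate of differential entropy, in nats."""
    x = np.asarray(samples, dtype=float)
    if x.ndim == 1:
        x = x[:, None]
    n, d = x.shape
    tree = cKDTree(x)
    # distance to the k-th nearest neighbour; the first hit returned is the point itself
    r = tree.query(x, k=k + 1)[0][:, -1]
    c_d = pi ** (d / 2) / gamma(d / 2 + 1)  # volume of the unit d-ball
    return digamma(n) - digamma(k) + np.log(c_d) + d * np.mean(np.log(r))
```

For example, with 2,000 draws from a two-dimensional standard normal the true differential entropy is log(2πe) ≈ 2.84 nats, and the estimate should land close to that value; the bias grows quickly with dimension, which is the failure mode the paper addresses.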
2

Leena, Chawla, Kumar Vijay, and Saxena Arti. "Kernel density estimation of Tsalli's entropy with applications in adaptive system training." IAES International Journal of Artificial Intelligence (IJ-AI) 13, no. 2 (2024): 2247–53. https://doi.org/10.11591/ijai.v13.i2.pp2247-2253.

Abstract:
Information theoretic learning plays a very important role in adaptive learning systems. Many non-parametric entropy estimators have been proposed by researchers. This work explores kernel density estimation based on Tsallis entropy. Firstly, it has been proved that for linearly independent samples and for equal samples, the Tsallis estimator is consistent for the PDF and minimal, respectively. Also, it is shown that the Tsallis estimator is smooth for a differentiable, symmetric, and unimodal kernel function. Further, important properties of the Tsallis estimator, such as scaling and invariance for both single and joint entropy estimation, have been proved. The objective of the work is to understand the mathematics behind the underlying concept.
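For reference, the Tsallis entropy of order q and a generic kernel plug-in construction for it are written below; this is a sketch of the standard resubstitution form, and the exact estimator analysed in the paper may differ in details such as bandwidth handling.

```latex
S_q(f) = \frac{1}{q-1}\Bigl(1 - \int f(x)^q\,dx\Bigr), \qquad
\hat f_h(x) = \frac{1}{nh}\sum_{i=1}^{n} K\!\Bigl(\frac{x - X_i}{h}\Bigr), \qquad
\hat S_q = \frac{1}{q-1}\Bigl(1 - \frac{1}{n}\sum_{i=1}^{n} \hat f_h(X_i)^{\,q-1}\Bigr),
```

where the last expression uses the identity that the integral of f^q equals the expectation of f(X)^{q-1}.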
3

Pinchas, Assaf, Irad Ben-Gal, and Amichai Painsky. "A Comparative Analysis of Discrete Entropy Estimators for Large-Alphabet Problems." Entropy 26, no. 5 (2024): 369. http://dx.doi.org/10.3390/e26050369.

Abstract:
This paper presents a comparative study of entropy estimation in a large-alphabet regime. A variety of entropy estimators have been proposed over the years, where each estimator is designed for a different setup with its own strengths and caveats. As a consequence, no estimator is known to be universally better than the others. This work addresses this gap by comparing twenty-one entropy estimators in the studied regime, starting with the simplest plug-in estimator and leading up to the most recent neural network-based and polynomial approximate estimators. Our findings show that the estimators’ performance highly depends on the underlying distribution. Specifically, we distinguish between three types of distributions, ranging from uniform to degenerate distributions. For each class of distribution, we recommend the most suitable estimator. Further, we propose a sample-dependent approach, which again considers three classes of distribution, and report the top-performing estimators in each class. This approach provides a data-dependent framework for choosing the desired estimator in practical setups.
4

Chawla, Leena, Vijay Kumar, and Arti Saxena. "Kernel density estimation of Tsalli’s entropy with applications in adaptive system training." IAES International Journal of Artificial Intelligence (IJ-AI) 13, no. 2 (2024): 2247. http://dx.doi.org/10.11591/ijai.v13.i2.pp2247-2253.

Abstract:
Information theoretic learning plays a very important role in adaptive learning systems. Many non-parametric entropy estimators have been proposed by researchers. This work explores kernel density estimation based on Tsallis entropy. Firstly, it has been proved that for linearly independent samples and for equal samples, the Tsallis estimator is consistent for the PDF and minimal, respectively. Also, it is shown that the Tsallis estimator is smooth for a differentiable, symmetric, and unimodal kernel function. Further, important properties of the Tsallis estimator, such as scaling and invariance for both single and joint entropy estimation, have been proved. The objective of the work is to understand the mathematics behind the underlying concept.
5

Paninski, Liam. "Estimation of Entropy and Mutual Information." Neural Computation 15, no. 6 (2003): 1191–253. http://dx.doi.org/10.1162/089976603321780272.

Abstract:
We present some new results on the nonparametric estimation of entropy and mutual information. First, we use an exact local expansion of the entropy function to prove almost sure consistency and central limit theorems for three of the most commonly used discretized information estimators. The setup is related to Grenander's method of sieves and places no assumptions on the underlying probability measure generating the data. Second, we prove a converse to these consistency theorems, demonstrating that a misapplication of the most common estimation techniques leads to an arbitrarily poor estimate of the true information, even given unlimited data. This “inconsistency” theorem leads to an analytical approximation of the bias, valid in surprisingly small sample regimes and more accurate than the usual [Formula: see text] formula of Miller and Madow over a large region of parameter space. The two most practical implications of these results are negative: (1) information estimates in a certain data regime are likely contaminated by bias, even if “bias-corrected” estimators are used, and (2) confidence intervals calculated by standard techniques drastically underestimate the error of the most common estimation methods. Finally, we note a very useful connection between the bias of entropy estimators and a certain polynomial approximation problem. By casting bias calculation problems in this approximation theory framework, we obtain the best possible generalization of known asymptotic bias results. More interesting, this framework leads to an estimator with some nice properties: the estimator comes equipped with rigorous bounds on the maximum error over all possible underlying probability distributions, and this maximum error turns out to be surprisingly small. We demonstrate the application of this new estimator on both real and simulated data.
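The plug-in estimator and the Miller–Madow correction referred to in this abstract are simple to state; below is a minimal Python sketch (counts are raw symbol frequencies, and the handling of empty bins is an illustrative choice).

```python
import numpy as np

def plugin_entropy(counts):
    """Maximum-likelihood ('plug-in') entropy estimate, in nats."""
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

def miller_madow_entropy(counts):
    """Plug-in estimate plus the classical Miller-Madow bias correction (m - 1) / (2N),
    where m is the number of occupied bins and N the sample size."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum()
    m = np.count_nonzero(counts)
    return plugin_entropy(counts) + (m - 1) / (2 * n)
```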
6

Zhang, Zhiyi. "Entropy Estimation in Turing's Perspective." Neural Computation 24, no. 5 (2012): 1368–89. http://dx.doi.org/10.1162/neco_a_00266.

Abstract:
A new nonparametric estimator of Shannon's entropy on a countable alphabet is proposed and analyzed against the well-known plug-in estimator. The proposed estimator is developed based on Turing's formula, which recovers distributional characteristics on the subset of the alphabet not covered by a size-n sample. The fundamental switch in perspective brings about substantial gain in estimation accuracy for every distribution with finite entropy. In general, a uniform variance upper bound is established for the entire class of distributions with finite entropy that decays at a rate of O(ln(n)/n) compared to O([ln(n)]^2/n) for the plug-in. In a wide range of subclasses, the variance of the proposed estimator converges at a rate of O(1/n), and this rate of convergence carries over to the convergence rates in mean squared errors in many subclasses. Specifically, for any finite alphabet, the proposed estimator has a bias decaying exponentially in n. Several new bias-adjusted estimators are also discussed.
7

Schürmann, Thomas. "A Note on Entropy Estimation." Neural Computation 27, no. 10 (2015): 2097–106. http://dx.doi.org/10.1162/neco_a_00775.

Abstract:
We compare an entropy estimator [Formula: see text] recently discussed by Zhang (2012) with two estimators, [Formula: see text] and [Formula: see text], introduced by Grassberger (2003) and Schürmann (2004). We prove the identity [Formula: see text], which has not been taken into account by Zhang (2012). Then we prove that the systematic error (bias) of [Formula: see text] is less than or equal to the bias of the ordinary likelihood (or plug-in) estimator of entropy. Finally, by numerical simulation, we verify that for the most interesting regime of small sample estimation and large event spaces, the estimator [Formula: see text] has a significantly smaller statistical error than [Formula: see text].
8

Alizadeh Noughabi, Hadi, and Mohammad Shafaei Noughabi. "A New Estimator for Shannon Entropy." Statistics, Optimization & Information Computing 13, no. 2 (2025): 891–99. https://doi.org/10.19139/soic-2310-5070-1844.

Abstract:
In this paper we propose a new estimator of the entropy of a continuous random variable. The estimator is obtained by modifying the estimator proposed by Vasicek (1976). Consistency of the proposed estimator is proved, and comparisons are made with Vasicek’s estimator (1976), Ebrahimi et al.’s estimator (1994) and Correa’s estimator (1995). The results indicate that the proposed estimator has smaller mean squared error than considered alternative estimators. The proposed estimator is applied to a real data set for illustration.
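Vasicek's original spacing estimator, which the modifications above start from, can be sketched as follows; the window rule m ≈ √n and the assumption of tie-free continuous data are illustrative choices, not prescriptions from the paper.

```python
import numpy as np

def vasicek_entropy(x, m=None):
    """Vasicek (1976) spacing estimate of differential entropy, in nats.

    Assumes a continuous sample without ties (a zero spacing would give log(0))."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    if m is None:
        m = max(1, int(round(np.sqrt(n))))  # common rule of thumb for the window
    idx = np.arange(n)
    lo = np.clip(idx - m, 0, n - 1)  # order statistic X_(i-m), truncated at X_(1)
    hi = np.clip(idx + m, 0, n - 1)  # order statistic X_(i+m), truncated at X_(n)
    spacings = x[hi] - x[lo]
    return np.mean(np.log(n * spacings / (2 * m)))
```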
9

Skorski, Maciej. "Towards More Efficient Rényi Entropy Estimation." Entropy 25, no. 2 (2023): 185. http://dx.doi.org/10.3390/e25020185.

Abstract:
Estimation of Rényi entropy is of fundamental importance to many applications in cryptography, statistical inference, and machine learning. This paper aims to improve the existing estimators with regard to: (a) the sample size, (b) the estimator adaptiveness, and (c) the simplicity of the analyses. The contribution is a novel analysis of the generalized “birthday paradox” collision estimator. The analysis is simpler than in prior works, gives clear formulas, and strengthens existing bounds. The improved bounds are used to develop an adaptive estimation technique that outperforms previous methods, particularly in regimes of low or moderate entropy. Last but not least, to demonstrate that the developed techniques are of broader interest, a number of applications concerning theoretical and practical properties of “birthday estimators” are discussed.
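As a concrete reference point, the simplest member of this family, the order-2 collision ("birthday") estimator, is sketched below in Python; the adaptive refinements developed in the paper are not reproduced here.

```python
import numpy as np
from collections import Counter

def renyi2_collision_entropy(samples):
    """Collision estimate of the Renyi entropy of order 2, in nats.

    H2 = -log(sum_i p_i^2); the collision probability sum_i p_i^2 is estimated
    by the unbiased pair-counting U-statistic. Returns inf if no collisions occur."""
    n = len(samples)
    counts = np.array(list(Counter(samples).values()), dtype=float)
    colliding_pairs = np.sum(counts * (counts - 1) / 2.0)
    collision_prob = colliding_pairs / (n * (n - 1) / 2.0)
    return np.inf if collision_prob == 0 else -np.log(collision_prob)
```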
10

Al-Nasser, Amjad D. "Entropy Type Estimator to Simple Linear Measurement Error Models." Austrian Journal of Statistics 34, no. 3 (2016): 283–94. http://dx.doi.org/10.17713/ajs.v34i3.418.

Abstract:
The classical maximum likelihood estimation fails to estimate the simple linear measurement error model, with or without equation error, unless additional assumptions are made about the structural parameters. In the literature there are six different assumptions that could be added in order to solve the measurement error models. In this paper, we proposed an entropy-type estimator based on the generalized maximum entropy estimation approach, which allows one to abstract away from the additional assumptions that are made in the classical method. Monte Carlo experiments were carried out in order to investigate the performance of the proposed estimators. The simulation results showed that the entropy-type estimator of unknown parameters has outperformed the classical estimators in terms of mean square error criterion.
11

Lakhdar, Yissam, and El Hassan Sbai. "Online Variable Kernel Estimator." International Journal of Operations Research and Information Systems 8, no. 1 (2017): 58–92. http://dx.doi.org/10.4018/ijoris.2017010104.

Abstract:
In this work, the authors propose a novel method called online variable kernel estimation of the probability density function (pdf). This new online estimator combines the characteristics and properties of two estimators namely nearest neighbors estimator and the Parzen-Rosenblatt estimator. Their approach allows a compact online adaptation of the estimated probability density function from the new arrival data. The performance of the online variable kernel estimator (OVKE) depends on the choice of the bandwidth. The authors present in this article a new technique for determining the optimal smoothing parameter of OVKE based on the maximum entropy principle (MEP). The robustness and performance of the proposed approach are demonstrated by examples of online estimation of real and simulated data distributions.
12

Chaubey, Yogendra P., and Nhat Linh Vu. "A numerical study of entropy and residual entropy estimators based on smooth density estimators for non-negative random variables." Journal of Statistical Research 54, no. 2 (2021): 99–121. http://dx.doi.org/10.47302/jsr.2020540201.

Abstract:
In this paper, we are interested in estimating the entropy of a non-negative random variable. Since the underlying probability density function is unknown, we propose the use of the Poisson smoothed histogram density estimator to estimate the entropy. To study the performance of our estimator, we run simulations on a wide range of densities and compare our entropy estimators with the existing estimators based on different approaches such as spacing estimators. Furthermore, we extend our study to residual entropy estimators, where residual entropy is the entropy of a random variable given that it has survived up to time $t$.
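For readers unfamiliar with the term, the residual entropy at time t mentioned here is conventionally defined, for a non-negative variable with density f and survival function F̄, as

```latex
H(X; t) = -\int_{t}^{\infty} \frac{f(x)}{\bar F(t)}\,\log\frac{f(x)}{\bar F(t)}\,dx .
```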
13

Filipiak, Katarzyna, Daniel Klein, and Monika Mokrzycka. "Estimators Comparison of Separable Covariance Structure with One Component as Compound Symmetry Matrix." Electronic Journal of Linear Algebra 33 (May 16, 2018): 83–98. http://dx.doi.org/10.13001/1081-3810.3740.

Abstract:
The maximum likelihood estimation (MLE) of a separable covariance structure with one component as a compound symmetry matrix has been widely studied in the literature. Nevertheless, the proposed estimates are not given in explicit form and can be determined only numerically. In this paper we give an alternative form of the MLE and we show that this new algorithm is much quicker than the algorithms given in the literature. Another estimator of the covariance structure can be found by minimizing the entropy loss function. In this paper we give three methods of finding the best approximation of a separable covariance structure with one component as a compound symmetry matrix and we compare the quickness of the proposed algorithms. We conduct simulation studies to compare statistical properties of MLEs and entropy loss estimators (ELEs), such as biasedness, variability and loss.
14

Hu, Xue, and Haiping Ren. "Statistical inference of the stress-strength reliability for inverse Weibull distribution under an adaptive progressive type-Ⅱ censored sample." AIMS Mathematics 8, no. 12 (2023): 28465–87. http://dx.doi.org/10.3934/math.20231457.

Abstract:
In this paper, we investigate classical and Bayesian estimation of stress-strength reliability $\delta = P(X > Y)$ under an adaptive progressive type-II censored sample. Assume that $X$ and $Y$ are independent random variables that follow inverse Weibull distribution with the same shape but different scale parameters. In classical estimation, the maximum likelihood estimator and asymptotic confidence interval are deduced. An approximate maximum likelihood estimator approach is used to obtain the explicit form. In Bayesian estimation, the Bayesian estimators are derived based on symmetric entropy loss function and LINEX loss function. Due to the complexity of integrals, we proposed Lindley's approximation to get the approximate Bayesian estimates. To compare the different estimators, we performed Monte Carlo simulations. Under gamma prior, the approximate maximum likelihood estimator performs better than Bayesian estimators. Under non-informative prior, the approximate maximum likelihood estimator has the same behavior as Bayesian estimators. In the end, two data sets are used to prove the effectiveness of the proposed methods.
15

Ahmed, Samah M., and Gamal Ismail. "Statistical Inference Based on Censored Data of Entropy for Lomax Distribution." European Journal of Pure and Applied Mathematics 18, no. 1 (2025): 5737. https://doi.org/10.29020/nybg.ejpam.v18i1.5737.

Abstract:
The current research centers around entropy, which is formally defined as an indicator of the possible amount of information of the Lomax (Lo) distribution that quantifies the uncertainty of random variables. When the parameters are unknown, the maximum likelihood estimate (MLE) of entropy is computed under adaptive progressive type-II censored data, and bootstrap confidence intervals of the entropy are presented. The Bayes estimator of entropy is demonstrated using both symmetric and asymmetric loss functions. In the meantime, the posterior is also computed to assess how well the entropy estimators perform in relation to various loss functions. The Bayesian estimates were obtained in a numerical simulation applying the Markov Chain Monte Carlo (MCMC) method. Then, using Monte Carlo simulations, various approaches are compared to identify the highest posterior density (HPD) credible intervals of the entropy. Lastly, the recommended methods are illustrated using a numerical example.
16

He, Kexin, and Wenhao Gui. "Reliability Estimation for Burr XII Distribution under the Weighted Q-Symmetric Entropy Loss Function." Applied Sciences 14, no. 8 (2024): 3308. http://dx.doi.org/10.3390/app14083308.

Abstract:
Considering that the choice of loss function plays a significant role in the derivation of Bayesian estimators, we propose a novel asymmetric loss function named the weighted Q-symmetric entropy loss for computing the estimates of the parameter and reliability function of the Burr XII distribution. This paper covers the classical maximum-likelihood, uniformly minimum-variance unbiased, and Bayesian estimation methods under the squared error loss, general entropy loss, Q-symmetric entropy loss, and new loss functions. Through Monte Carlo simulation, the respective performances of the considered estimators for the reliability function are evaluated, indicating that the Bayesian estimator under the new loss function is more efficient than those under other loss functions. Finally, a real data set is used to demonstrate the practicality of the presented estimators.
17

KALTCHENKO, ALEXEI, NINA TIMOFEEVA, and EUGENIY A. TIMOFEEV. "BIAS REDUCTION OF THE NEAREST NEIGHBOR ENTROPY ESTIMATOR." International Journal of Bifurcation and Chaos 18, no. 12 (2008): 3781–87. http://dx.doi.org/10.1142/s0218127408022731.

Abstract:
A new family of entropy estimators, constructed as a linear combination (weighted average) of nearest neighbor estimators with slightly different individual properties, is proposed. It is shown that a special suboptimal selection of the coefficients in the linear combination results in a polynomial reduction of the estimator's bias, which continues the line of work of P. Grassberger on entropy estimation. Computer simulation results are provided.
18

Grassberger, Peter. "On Generalized Schürmann Entropy Estimators." Entropy 24, no. 5 (2022): 680. http://dx.doi.org/10.3390/e24050680.

Abstract:
We present a new class of estimators of Shannon entropy for severely undersampled discrete distributions. It is based on a generalization of an estimator proposed by T. Schürmann, which itself is a generalization of an estimator proposed by myself. For a special set of parameters, they are completely free of bias and have a finite variance, something which is widely believed to be impossible. We present also detailed numerical tests, where we compare them with other recent estimators and with exact results, and point out a clash with Bayesian estimators for mutual information.
19

Chugg, Ben, Peter Henderson, Jacob Goldin, and Daniel E. Ho. "Entropy Regularization for Population Estimation." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 10 (2023): 12198–204. http://dx.doi.org/10.1609/aaai.v37i10.26438.

Abstract:
Entropy regularization is known to improve exploration in sequential decision-making problems. We show that this same mechanism can also lead to nearly unbiased and lower-variance estimates of the mean reward in the optimize-and-estimate structured bandit setting. Mean reward estimation (i.e., population estimation) tasks have recently been shown to be essential for public policy settings where legal constraints often require precise estimates of population metrics. We show that leveraging entropy and KL divergence can yield a better trade-off between reward and estimator variance than existing baselines, all while remaining nearly unbiased. These properties of entropy regularization illustrate an exciting potential for bringing together the optimal exploration and estimation literature.
20

Farsipour, N. Sanjari. "Estimation of the Parameter in the Fisher Nile's Problem Under Entropy Loss." Calcutta Statistical Association Bulletin 46, no. 1-2 (1996): 29–33. http://dx.doi.org/10.1177/0008068319960104.

Abstract:
Estimation of the parameter in the problem of the Nile is treated as a decision problem with entropy loss. It is shown that the minimum risk scale equivariant estimator dominates the incomplete sufficient unbiased estimators. Sharper bounds for the equivariant estimator are derived which may be used to obtain the values of the same from the sample with sufficient accuracy.
21

Rasheed, Huda Abdullah, and Maryam N. Abd. "Bayesian Estimation for Two Parameters of Exponential Distribution under Different Loss Functions." Ibn AL-Haitham Journal For Pure and Applied Sciences 36, no. 2 (2023): 289–300. http://dx.doi.org/10.30526/36.2.2946.

Abstract:
In this paper, two parameters for the Exponential distribution were estimated using the Bayesian estimation method under three different loss functions: the Squared error loss function, the Precautionary loss function, and the Entropy loss function. The Exponential distribution prior and Gamma distribution have been assumed as the priors of the scale γ and location δ parameters respectively. In Bayesian estimation, Maximum likelihood estimators have been used as the initial estimators, and the Tierney-Kadane approximation has been used effectively. Based on the Monte Carlo simulation method, those estimators were compared depending on the mean squared errors (MSEs). The results showed that the Bayesian estimation under the Entropy loss function, assuming Exponential distribution and Gamma distribution priors for the scale and location parameters, respectively, is the best estimator for the scale parameter. The best estimation method for location is the Bayesian estimation under the Entropy loss function in case of a small value of the scale γ (say γ < 1). Bayesian estimation under the Precautionary loss function is the best in case of a relatively large value of the scale γ (say γ > 1).
22

Zhang, Jialin, and Jingyi Shi. "Asymptotic Normality for Plug-In Estimators of Generalized Shannon’s Entropy." Entropy 24, no. 5 (2022): 683. http://dx.doi.org/10.3390/e24050683.

Abstract:
Shannon’s entropy is one of the building blocks of information theory and an essential aspect of Machine Learning (ML) methods (e.g., Random Forests). Yet, it is only finitely defined for distributions with fast decaying tails on a countable alphabet. The unboundedness of Shannon’s entropy over the general class of all distributions on an alphabet prevents its potential utility from being fully realized. To fill the void in the foundation of information theory, Zhang (2020) proposed generalized Shannon’s entropy, which is finitely defined everywhere. The plug-in estimator, adopted in almost all entropy-based ML method packages, is one of the most popular approaches to estimating Shannon’s entropy. The asymptotic distribution for Shannon’s entropy’s plug-in estimator was well studied in the existing literature. This paper studies the asymptotic properties for the plug-in estimator of generalized Shannon’s entropy on countable alphabets. The developed asymptotic properties require no assumptions on the original distribution. The proposed asymptotic properties allow for interval estimation and statistical tests with generalized Shannon’s entropy.
23

Contreras Rodríguez, Lianet, Evaristo José Madarro-Capó, Carlos Miguel Legón-Pérez, Omar Rojas, and Guillermo Sosa-Gómez. "Selecting an Effective Entropy Estimator for Short Sequences of Bits and Bytes with Maximum Entropy." Entropy 23, no. 5 (2021): 561. http://dx.doi.org/10.3390/e23050561.

Abstract:
Entropy makes it possible to measure the uncertainty about an information source from the distribution of its output symbols. It is known that the maximum Shannon’s entropy of a discrete source of information is reached when its symbols follow a Uniform distribution. In cryptography, these sources have great applications since they allow for the highest security standards to be reached. In this work, the most effective estimator is selected to estimate entropy in short samples of bytes and bits with maximum entropy. For this, 18 estimators were compared. Results concerning the comparisons published in the literature between these estimators are discussed. The most suitable estimator is determined experimentally, based on its bias and mean square error for short samples of bytes and bits.
24

Alduais, Fuad. "Comparison of classical and Bayesian estimators to estimate the parameters in Weibull distribution under weighted general entropy loss function." International Journal of ADVANCED AND APPLIED SCIENCES 8, no. 3 (2021): 57–62. http://dx.doi.org/10.21833/ijaas.2021.03.008.

Abstract:
In this work, we have developed a General Entropy loss function (GE) to estimate parameters of the Weibull distribution (WD) based on complete data when both shape and scale parameters are unknown. The development is done by merging weight into GE to produce a new loss function called the weighted General Entropy loss function (WGE). Then, we utilized the WGE to derive estimators of the WD parameters. Afterwards, we compared the performance of the developed estimator with the Bayesian estimator using the GE loss function, the Bayesian estimator using the squared error (SE) loss function, the ordinary least squares (OLS) method, the weighted least squares (WLS) method, and maximum likelihood estimation (MLE). Based on the Monte Carlo simulation method, those estimators are compared depending on the mean squared errors (MSEs). The results show that the performance of the Bayes estimator under the developed (WGE) loss function is the best for estimating the shape parameter in all cases and has good performance for estimating the scale parameter.
25

Prakash, Gyan. "Some Estimators for the Pareto Distribution." Journal of Scientific Research 1, no. 2 (2009): 236–47. http://dx.doi.org/10.3329/jsr.v1i2.1642.

Abstract:
We derive some shrinkage test-estimators and the Bayes estimators for the shape parameter of a Pareto distribution under the general entropy loss (GEL) function. The properties have been studied in terms of relative efficiency. The choices of shrinkage factor are also suggested. Keywords: General entropy loss; Shrinkage factor; Shrinkage test-estimator; Bayes estimator; Relative efficiency.
26

Vințe, Claudiu, Marcel Ausloos, and Titus Felix Furtună. "A Volatility Estimator of Stock Market Indices Based on the Intrinsic Entropy Model." Entropy 23, no. 4 (2021): 484. http://dx.doi.org/10.3390/e23040484.

Abstract:
Grasping the historical volatility of stock market indices and accurately estimating it are two of the major focuses of those involved in the financial securities industry and derivative instruments pricing. This paper presents the results of employing the intrinsic entropy model as a substitute for estimating the volatility of stock market indices. Diverging from the widely used volatility models that take into account only the elements related to the traded prices, namely the open, high, low, and close prices of a trading day (OHLC), the intrinsic entropy model takes into account the traded volumes during the considered time frame as well. We adjust the intraday intrinsic entropy model that we introduced earlier for exchange-traded securities in order to connect daily OHLC prices with the ratio of the corresponding daily volume to the overall volume traded in the considered period. The intrinsic entropy model conceptualizes this ratio as entropic probability or market credence assigned to the corresponding price level. The intrinsic entropy is computed using historical daily data for traded market indices (S&P 500, Dow 30, NYSE Composite, NASDAQ Composite, Nikkei 225, and Hang Seng Index). We compare the results produced by the intrinsic entropy model with the volatility estimates obtained for the same data sets using widely employed industry volatility estimators. The intrinsic entropy model proves to consistently deliver reliable estimates for various time frames while showing peculiarly high values for the coefficient of variation, with the estimates falling in a significantly lower interval range compared with those provided by the other advanced volatility estimators.
27

Quadeer, Maria, Marco Tomamichel, and Christopher Ferrie. "Minimax quantum state estimation under Bregman divergence." Quantum 3 (March 4, 2019): 126. http://dx.doi.org/10.22331/q-2019-03-04-126.

Abstract:
We investigate minimax estimators for quantum state tomography under general Bregman divergences. First, generalizing the work of Koyama et al. [Entropy 19, 618 (2017)] for relative entropy, we find that given any estimator for a quantum state, there always exists a sequence of Bayes estimators that asymptotically perform at least as well as the given estimator, on any state. Second, we show that there always exists a sequence of priors for which the corresponding sequence of Bayes estimators is asymptotically minimax (i.e. it minimizes the worst-case risk). Third, by re-formulating Holevo's theorem for the covariant state estimation problem in terms of estimators, we find that there exists a covariant measurement that is, in fact, minimax (i.e. it minimizes the worst-case risk). Moreover, we find that a measurement that is covariant only under a unitary 2-design is also minimax. Lastly, in an attempt to understand the problem of finding minimax measurements for general state estimation, we study the qubit case in detail and find that every spherical 2-design is a minimax measurement.
28

Jeon, Young Eun, and Suk-Bok Kang. "Estimation of the Rayleigh Distribution under Unified Hybrid Censoring." Austrian Journal of Statistics 50, no. 1 (2021): 59–73. http://dx.doi.org/10.17713/ajs.v50i1.990.

Abstract:
We derive some estimators of the scale parameter of the Rayleigh distribution under the unified hybrid censoring scheme. We also derive some estimators of the reliability function and the entropy of the Rayleigh distribution. First, we obtain the maximum likelihood estimator of the scale parameter. Second, we obtain the Bayes estimator using the mean of the posterior distribution. Lastly, we obtain the Bayes estimator using the mode of the posterior distribution. We also derive the interval estimation (confidence interval, credible interval, and HPD credible interval) for the scale parameter under the unified hybrid censoring scheme. We compare the proposed estimators in the sense of the mean squared error through Monte Carlo simulation. Coverage probability and average lengths of 95 % and 90% intervals are obtained.
29

Al-Obedy, Nadia. "Semi-Minimax Estimations on the Exponential Distribution Under Symmetric and Asymmetric Loss Functions." Journal of Al-Rafidain University College For Sciences (Print ISSN: 1681-6870, Online ISSN: 2790-2293), no. 2 (October 13, 2021): 245–70. http://dx.doi.org/10.55562/jrucs.v36i2.257.

Abstract:
In this paper the semi-minimax estimators of the scale parameter of the exponential distribution are presented by applying the theorem of Lehmann under a symmetric (quadratic) loss function and asymmetric (entropy, mlinex, precautionary) loss functions. These estimators are compared empirically using a Monte-Carlo simulation study with respect to the mean square error (MSE) and the mean percentage error (MPE). In general, the results showed that the semi-minimax estimator under the quadratic loss function is the best estimator by MSE and MPE for all sample sizes. We can notice that, when the values of the parameters β, θ increase, the semi-minimax estimator under the quadratic loss function is the best estimator by MSE, while comparison by MPE showed that the semi-minimax estimator under the mlinex loss function with a positive value of c is the best, but they both get worse as α, θ increase. Also the results showed that when α, β increase together, the semi-minimax estimator under the entropy loss function is the best by MSE, while by MPE the semi-minimax estimator under the precautionary loss function is the best estimator.
30

Correa, Juan C. "A new estimator of entropy." Communications in Statistics - Theory and Methods 24, no. 10 (1995): 2439–49. http://dx.doi.org/10.1080/03610929508831626.

31

Amigó, José M., and Matthew B. Kennel. "Variance estimators for the Lempel-Ziv entropy rate estimator." Chaos: An Interdisciplinary Journal of Nonlinear Science 16, no. 4 (2006): 043102. http://dx.doi.org/10.1063/1.2347102.

32

Irshad, Muhammed Rasheed, Radhakumari Maya, Francesco Buono, and Maria Longobardi. "Kernel Estimation of Cumulative Residual Tsallis Entropy and Its Dynamic Version under ρ-Mixing Dependent Data." Entropy 24, no. 1 (2021): 9. http://dx.doi.org/10.3390/e24010009.

Abstract:
Tsallis introduced a non-logarithmic generalization of Shannon entropy, namely Tsallis entropy, which is non-extensive. Sati and Gupta proposed cumulative residual information based on this non-extensive entropy measure, namely cumulative residual Tsallis entropy (CRTE), and its dynamic version, namely dynamic cumulative residual Tsallis entropy (DCRTE). In the present paper, we propose non-parametric kernel type estimators for CRTE and DCRTE where the considered observations exhibit a ρ-mixing dependence condition. Asymptotic properties of the estimators were established under suitable regularity conditions. A numerical evaluation of the proposed estimator is exhibited and a Monte Carlo simulation study was carried out.
33

Asti Ralita Sari and Haposan Sirait. "Bayes Estimator for Dagum Distribution Parameters Using Non-Informative Prior Rules with K-Loss Function and Entropy Loss Function." International Journal of Mathematics, Statistics, and Computing 1, no. 4 (2023): 61–68. http://dx.doi.org/10.46336/ijmsc.v1i4.59.

Abstract:
The parameter estimator discussed is the p parameter estimator of the Dagum distribution with the K-loss function and the entropy loss function using the Bayes method. To get the Bayes estimator from the scale parameter of the Dagum distribution, the Jeffrey non-informative prior distribution is used based on the maximum likelihood function and the loss function for the K-loss function and the entropy loss function to obtain an efficient estimator. Determination of the best estimator is done by comparing the variance values generated from each estimator. An estimator that uses the entropy loss function is the best method for estimating the parameters of the Dagum distribution of the data population with efficient conditions met.
34

Kuc, Roman, and Hilda Li. "Reduced-Order Autoregressive Modeling for Center-Frequency Estimation." Ultrasonic Imaging 7, no. 3 (1985): 244–51. http://dx.doi.org/10.1177/016173468500700304.

Abstract:
The center frequency of a narrowband, discrete-time random process, such as a reflected ultrasound signal, is estimated from the parameter values of a reduced, second-order autoregressive (AR) model. This approach is proposed as a fast estimator that performs better than the zero-crossing count estimate for determining the center-frequency location. The parameter values are obtained through a linear prediction analysis on the correlated random process, which in this case is identical to the maximum entropy method for spectral estimation. The frequency of the maximum of the second-order model spectrum is determined from these parameters and is used as the center-frequency estimate. This estimate can be computed very efficiently, requiring only the estimates of the first three terms of the process autocorrelation function. The bias and variance properties of this estimator are determined for a random process having a Gaussian-shaped spectrum and compared to those of the ideal FM frequency discriminator, zero-crossing count estimator and a correlation estimator. It is found that the variance values for the reduced-order AR model center-frequency estimator lie between those for the ideal FM frequency discriminator and the zero-crossing count estimator.
35

Farokhinia, Hadi, Rahim Chinipardaz, and Gholamali Parham. "Improved estimation of the sensitive proportion using a new randomization technique and the Horvitz–Thompson type estimator." Statistics, Optimization & Information Computing 12, no. 6 (2024): 1612–21. http://dx.doi.org/10.19139/soic-2310-5070-1807.

Abstract:
Randomized response techniques efficiently collect data on sensitive subjects to protect individual privacy. This paper aims to introduce a new randomizing technique in the additive scrambled model so that privacy is well preserved and the estimator's efficiency for the sensitive population proportion is improved. Also, a Horvitz–Thompson type estimator is presented as an unbiased estimator of the sensitive proportion of the population; convergence to the normal distribution for the Horvitz–Thompson type estimator is then considered using the entropy of the inclusion indicators in Poisson sampling. Finally, using the new additive scrambled model, the proportion of university students who take addictive drugs is estimated.
36

Kim, Young-Sik. "Low Complexity Estimation Method of Rényi Entropy for Ergodic Sources." Entropy 20, no. 9 (2018): 657. http://dx.doi.org/10.3390/e20090657.

Abstract:
Since the entropy is a popular randomness measure, there are many studies for the estimation of entropies for given random samples. In this paper, we propose an estimation method of the Rényi entropy of order α. Since the Rényi entropy of order α is a generalized entropy measure including the Shannon entropy as a special case, the proposed estimation method for Rényi entropy can detect any significant deviation of an ergodic stationary random source’s output. It is shown that the expected test value of the proposed scheme is equivalent to the Rényi entropy of order α. After deriving a general representation of parameters of the proposed estimator, we discuss the particular orders of Rényi entropy, such as α → 1, α = 1/2, and α = 2. Because the Rényi entropy of order 2 is the most popular one, we present an iterative estimation method for the application with stringent resource restrictions.
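For context, the Rényi entropy of order α of a discrete distribution p is defined as below; Shannon entropy is recovered in the limit α → 1, and α = 2 gives the collision entropy mentioned at the end of the abstract.

```latex
H_\alpha(X) = \frac{1}{1-\alpha}\,\log\!\sum_{i} p_i^{\alpha}, \qquad \alpha > 0,\ \alpha \neq 1 .
```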
37

Modhesh, A. A., and Abdulkareem M. Basheer. "Bayesian Estimation of Entropy for Kumaraswamy Distribution and Its Application to Progressively First-Failure Censored Data." Asian Journal of Probability and Statistics 21, no. 4 (2023): 22–33. http://dx.doi.org/10.9734/ajpas/2023/v21i4470.

Abstract:
Entropy can be mathematically defined as a measure of the uncertainty of random variables that represents the potential quantity of information. This article investigates the behavior of the entropy of random variables which follow a Kumaraswamy distribution using progressively first-failure censored (PFFC) data. In particular, we calculate the maximum likelihood estimate and the confidence interval of entropy by using the observed Fisher information matrix through the asymptotic distribution of the maximum likelihood estimator. Furthermore, we apply the Markov Chain Monte Carlo (MCMC) method, which helps us to estimate the entropy and to construct credible intervals for this problem. Here, a numerical example with real data is presented to illustrate the performance of the proposed method. Finally, we perform Monte Carlo simulations to observe the behavior of the proposed procedure.
38

Awad, Manahel Kh, and Huda A. Rasheed. "Estimation of the Reliability Function of Basic Gompertz Distribution under Different Priors." Ibn AL- Haitham Journal For Pure and Applied Sciences 33, no. 3 (2020): 167. http://dx.doi.org/10.30526/33.3.2482.

Abstract:
In this paper, some estimators for the reliability function R(t) of the Basic Gompertz (BG) distribution have been obtained, such as the maximum likelihood estimator and Bayesian estimators under the General Entropy loss function, assuming a non-informative prior, namely the Jeffreys prior, and informative priors represented by the Gamma and inverted Levy priors. A Monte-Carlo simulation is conducted to compare the performance of all estimates of R(t), based on integrated mean squared errors.
39

Zhou, Hui. "Data Processing with Computation in Bayes Reliability Analysis for Burr Type X Distribution under Different Loss Functions." Advanced Materials Research 978 (June 2014): 205–8. http://dx.doi.org/10.4028/www.scientific.net/amr.978.205.

Abstract:
This paper studies the estimation of the parameter of the Burr Type X distribution. The maximum likelihood estimator is first derived, and then the Bayes and Empirical Bayes estimators of the unknown parameter are obtained under three loss functions, which are the squared error loss, LINEX loss and entropy loss functions. The prior distribution of the parameter used in this paper is the Gamma distribution. Finally, a Monte Carlo simulation is given to illustrate the application of these estimators.
40

Hayashi, Masahito. "Measuring quantum relative entropy with finite-size effect." Quantum 9 (May 5, 2025): 1725. https://doi.org/10.22331/q-2025-05-05-1725.

Abstract:
We study the estimation of relative entropy D(ρ‖σ) when σ is known. We show that the Cramér-Rao type bound equals the relative varentropy. Our estimator attains the Cramér-Rao type bound when the dimension d is fixed. It also achieves the sample complexity O(d^2) when the dimension d increases. This sample complexity is optimal when σ is the completely mixed state. Also, it has time complexity O(d^6 polylog d). Our proposed estimator unifiedly works under both settings.
41

Montalvão, J., R. Attux, and D. Silva. "A pragmatic entropy and differential entropy estimator for small datasets." Journal of Communication and Information Systems 29, no. 1 (2014): 29–36. http://dx.doi.org/10.14209/jcis.2014.8.

42

Timofeev, E. A. "Unbiased Entropy Estimator for Binary Sequences." Modeling and Analysis of Information Systems 20, no. 1 (2015): 107–15. http://dx.doi.org/10.18255/1818-1015-2013-1-107-115.

43

Delattre, Sylvain, and Nicolas Fournier. "On the Kozachenko–Leonenko entropy estimator." Journal of Statistical Planning and Inference 185 (June 2017): 69–93. http://dx.doi.org/10.1016/j.jspi.2017.01.004.

44

Montalvão, J., R. Attux, and D. G. Silva. "Simple entropy estimator for small datasets." Electronics Letters 48, no. 17 (2012): 1059–61. http://dx.doi.org/10.1049/el.2012.2002.

45

Lee, Intae. "Sample-Spacings-Based Density and Entropy Estimators for Spherically Invariant Multidimensional Data." Neural Computation 22, no. 8 (2010): 2208–27. http://dx.doi.org/10.1162/neco.2010.02-09-972.

Abstract:
While the sample-spacings-based density estimation method is simple and efficient, its applicability has been restricted to one-dimensional data. In this letter, the method is generalized such that it can be extended to multiple dimensions in certain circumstances. As a consequence, a multidimensional entropy estimator of spherically invariant continuous random variables is derived. Partial bias of the estimator is analyzed, and the estimator is further used to derive a nonparametric objective function for frequency-domain independent component analysis. The robustness and the effectiveness of the objective function are demonstrated with simulation results.
46

Fard, Farzad Alavi, Firmin Doko Tchatoka, and Sivagowry Sriananthakumar. "Maximum Entropy Evaluation of Asymptotic Hedging Error under a Generalised Jump-Diffusion Model." Journal of Risk and Financial Management 14, no. 3 (2021): 97. http://dx.doi.org/10.3390/jrfm14030097.

Abstract:
In this paper we propose a maximum entropy estimator for the asymptotic distribution of the hedging error for options. Perfect replication of financial derivatives is not possible, due to market incompleteness and discrete-time hedging. We derive the asymptotic hedging error for options under a generalised jump-diffusion model with kernel bias, which nests a number of very important processes in finance. We then obtain an estimation for the distribution of hedging error by maximising Shannon’s entropy subject to a set of moment constraints, which in turn yields the value-at-risk and expected shortfall of the hedging error. The significance of this approach lies in the fact that the maximum entropy estimator allows us to obtain a consistent estimate of the asymptotic distribution of hedging error, despite the non-normality of the underlying distribution of returns.
47

Watters, Nicholas, and George N. Reeke. "Neuronal Spike Train Entropy Estimation by History Clustering." Neural Computation 26, no. 9 (2014): 1840–72. http://dx.doi.org/10.1162/neco_a_00627.

Abstract:
Neurons send signals to each other by means of sequences of action potentials (spikes). Ignoring variations in spike amplitude and shape that are probably not meaningful to a receiving cell, the information content, or entropy of the signal depends on only the timing of action potentials, and because there is no external clock, only the interspike intervals, and not the absolute spike times, are significant. Estimating spike train entropy is a difficult task, particularly with small data sets, and many methods of entropy estimation have been proposed. Here we present two related model-based methods for estimating the entropy of neural signals and compare them to existing methods. One of the methods is fast and reasonably accurate, and it converges well with short spike time records; the other is impractically time-consuming but apparently very accurate, relying on generating artificial data that are a statistical match to the experimental data. Using the slow, accurate method to generate a best-estimate entropy value, we find that the faster estimator converges to this value more closely and with smaller data sets than many existing entropy estimators.
48

Hernández, Damián G., and Inés Samengo. "Estimating the Mutual Information between Two Discrete, Asymmetric Variables with Limited Samples." Entropy 21, no. 6 (2019): 623. http://dx.doi.org/10.3390/e21060623.

Abstract:
Determining the strength of nonlinear, statistical dependencies between two variables is a crucial matter in many research fields. The established measure for quantifying such relations is the mutual information. However, estimating mutual information from limited samples is a challenging task. Since the mutual information is the difference of two entropies, the existing Bayesian estimators of entropy may be used to estimate information. This procedure, however, is still biased in the severely under-sampled regime. Here, we propose an alternative estimator that is applicable to those cases in which the marginal distribution of one of the two variables—the one with minimal entropy—is well sampled. The other variable, as well as the joint and conditional distributions, can be severely undersampled. We obtain a consistent estimator that presents very low bias, outperforming previous methods even when the sampled data contain few coincidences. As with other Bayesian estimators, our proposal focuses on the strength of the interaction between the two variables, without seeking to model the specific way in which they are related. A distinctive property of our method is that the main data statistics determining the amount of mutual information is the inhomogeneity of the conditional distribution of the low-entropy variable in those states in which the large-entropy variable registers coincidences.
49

Fortune, Timothy, and Hailin Sang. "Shannon Entropy Estimation for Linear Processes." Journal of Risk and Financial Management 13, no. 9 (2020): 205. http://dx.doi.org/10.3390/jrfm13090205.

Abstract:
In this paper, we estimate the Shannon entropy S(f) = −E[log(f(x))] of a one-sided linear process with probability density function f(x). We employ the integral estimator Sn(f), which utilizes the standard kernel density estimator fn(x) of f(x). We show that Sn(f) converges to S(f) almost surely and in L² under reasonable conditions.
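A minimal numerical sketch of an integral-type entropy estimator built on a kernel density estimate is given below; the Gaussian kernel, the default Scott's-rule bandwidth, and trapezoidal integration on a finite grid are illustrative choices, not the conditions analysed in the paper.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_integral_entropy(x, grid_size=2048, pad=4.0):
    """Integral-type estimate -∫ f̂(t) log f̂(t) dt of differential entropy, in nats."""
    x = np.asarray(x, dtype=float)
    kde = gaussian_kde(x)  # Gaussian kernel, Scott's rule bandwidth by default
    lo, hi = x.min() - pad * x.std(), x.max() + pad * x.std()
    grid = np.linspace(lo, hi, grid_size)
    f = kde(grid)
    mask = f > 0  # guard against underflow in the far tails
    return -np.trapz(f[mask] * np.log(f[mask]), grid[mask])
```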
50

Lu, Chien, and Jaakko Peltonen. "Enhancing Nearest Neighbor Based Entropy Estimator for High Dimensional Distributions via Bootstrapping Local Ellipsoid." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 5013–20. http://dx.doi.org/10.1609/aaai.v34i04.5941.

Abstract:
An ellipsoid-based, improved kNN entropy estimator based on random samples of distribution for high dimensionality is developed. We argue that the inaccuracy of the classical kNN estimator in high dimensional spaces results from the local uniformity assumption and the proposed method mitigates the local uniformity assumption by two crucial extensions, a local ellipsoid-based volume correction and a correction acceptance testing procedure. Relevant theoretical contributions are provided and several experiments from simple to complicated cases have shown that the proposed estimator can effectively reduce the bias especially in high dimensionalities, outperforming current state of the art alternative estimators.