
Journal articles on the topic 'Statistical estimation problem'

Consult the top 50 journal articles for your research on the topic 'Statistical estimation problem.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Vogel, Annika, and Richard Ménard. "How far can the statistical error estimation problem be closed by collocated data?" Nonlinear Processes in Geophysics 30, no. 3 (2023): 375–98. http://dx.doi.org/10.5194/npg-30-375-2023.

Abstract:
Accurate specification of the error statistics required for data assimilation remains an ongoing challenge, partly because their estimation is an underdetermined problem that requires statistical assumptions. Even with the common assumption that background and observation errors are uncorrelated, the problem remains underdetermined. One natural question that could arise is as follows: can the increasing amount of overlapping observations or other datasets help to reduce the total number of statistical assumptions, or do they introduce more statistical unknowns? In order to answer this question, this paper provides a conceptual view on the statistical error estimation problem for multiple collocated datasets, including a generalized mathematical formulation, an illustrative demonstration with synthetic data, and guidelines for setting up and solving the problem. It is demonstrated that the required number of statistical assumptions increases linearly with the number of datasets. However, the number of error statistics that can be estimated increases quadratically, allowing for an estimation of an increasing number of error cross-statistics between datasets for more than three datasets. The presented generalized estimation of full error covariance and cross-covariance matrices between datasets does not necessarily accumulate the uncertainties of assumptions among error estimations of multiple datasets.
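
The simplest instance of this estimation problem is classical triple collocation: with three collocated datasets whose errors are mutually uncorrelated and unbiased, each error variance is identifiable from covariances of pairwise differences. Below is a minimal synthetic-data sketch of that special case, not the paper's generalized formulation; all names and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
truth = rng.normal(0.0, 2.0, size=100_000)
sigmas = [0.5, 0.8, 1.2]                                  # true error std devs
x = [truth + rng.normal(0.0, s, truth.size) for s in sigmas]

# With mutually uncorrelated errors, Var(e_i) = Cov(x_i - x_j, x_i - x_k)
var_est = [np.cov(x[i] - x[j], x[i] - x[k])[0, 1]
           for i, j, k in [(0, 1, 2), (1, 0, 2), (2, 0, 1)]]
print(np.sqrt(var_est))   # ~ [0.5, 0.8, 1.2]
```
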
2

Pisarenko, V. F., A. A. Lyubushin, V. B. Lysenko, and T. V. Golubeva. "Statistical estimation of seismic hazard parameters: Maximum possible magnitude and related parameters." Bulletin of the Seismological Society of America 86, no. 3 (1996): 691–700. http://dx.doi.org/10.1785/bssa0860030691.

Abstract:
The problem of statistical estimation of earthquake hazard parameters is considered. The emphasis is on estimation of the maximum regional magnitude, Mmax, and the maximum magnitude, Mmax(T), in a future time interval T and quantiles of its distribution. Two estimators are suggested: an unbiased estimator with the lowest possible variance and a Bayesian estimator. As an illustration, these methods are applied for the estimation of Mmax and related parameters in California and Italy.
3

Yang, Da. "Interval Estimation and Hypothesis Testing." Applied Mechanics and Materials 543-547 (March 2014): 1717–20. http://dx.doi.org/10.4028/www.scientific.net/amm.543-547.1717.

Abstract:
Interval estimation and hypothesis testing are two central problems of statistical inference, a branch of mathematical statistics with extensive applications. As inference methods, they are used ever more widely in economic management, finance and insurance, scientific research, engineering, and decision science. Establishing the mutual influence and connection between interval estimation and hypothesis testing, and using the theory of parameter hypothesis testing to explain interval estimation, is an important step toward improving the theory of statistical inference. This paper therefore studies the internal relations between interval estimation and hypothesis testing in depth, explains hypothesis testing from the viewpoint of interval estimation, and discusses the differences and connections between the two.
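
The duality the abstract describes can be shown in a few lines: a two-sided level-α test of H0: μ = μ0 rejects exactly when μ0 falls outside the (1 − α) confidence interval built from the same statistic. A small sketch with a t-interval and a one-sample t-test; the data are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(loc=10.0, scale=3.0, size=40)
mu0 = 11.0

# 95% t-based confidence interval for the mean
n, mean, se = x.size, x.mean(), x.std(ddof=1) / np.sqrt(x.size)
lo, hi = stats.t.interval(0.95, n - 1, loc=mean, scale=se)

# Two-sided one-sample t-test of H0: mu = mu0
t_stat, p_value = stats.ttest_1samp(x, popmean=mu0)

# Duality: reject at the 5% level exactly when mu0 lies outside the 95% CI
print((p_value < 0.05) == (not lo <= mu0 <= hi))   # always True
```
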
4

Adepoju, Akeem Ajibola, Akanji Olalekan Bello, Alhaji Modu Isa, Akinrefon Adesupo, and Jamiu S. Olumoh. "STATISTICAL INFERENCE ON SINE-EXPONENTIAL DISTRIBUTION PARAMETER." Journal of Computational Innovation and Analytics (JCIA) 3, no. 2 (2024): 129–45. http://dx.doi.org/10.32890/jcia2024.3.2.6.

Abstract:
The Sine-Exponential (Sine-E) distribution is a probability distribution that combines the periodic behavior of the sine function with the decay characteristic of the exponential function. This study addresses the problem of identifying the most accurate and reliable estimation method for the parameter of the Sine-E distribution. The objective is to evaluate various parameter estimation techniques, including Maximum Likelihood Estimation (MLE), Least Squares Estimation (LSE), Weighted Least Squares Estimation (WLSE), Maximum Product of Spacing Estimation (MPSE), Cramer-von-Mises Estimation (CVME), and Anderson-Darling Estimation (ADE), using Mean Square Error (MSE) as the criterion for determining the technique with the minimum error. The study’s findings reveal that as sample size increases, the parameter estimates for all techniques converge to the true parameter value, with decreases in bias, MSE, and mean relative estimates. Among the techniques evaluated, the MPSE method consistently provides estimates closest to the true parameter value and exhibits the least bias and lowest MSE across small, moderate, and large sample sizes, making it the best estimator for the Sine-E distribution.
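
A Monte Carlo comparison of this kind is straightforward to sketch. The example below contrasts MLE and MPSE by bias and MSE on an exponential model rather than the Sine-E distribution (whose density is not reproduced here); sample sizes, replication counts, and numerical guards are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
true_rate, n, reps = 2.0, 50, 1000

def mpse(x):
    """Maximum product of spacings estimate of an exponential rate."""
    xs = np.sort(x)
    def neg_log_spacings(rate):
        cdf = np.concatenate(([0.0], 1.0 - np.exp(-rate * xs), [1.0]))
        sp = np.clip(np.diff(cdf), 1e-300, None)   # guard against log(0)
        return -np.mean(np.log(sp))
    return minimize_scalar(neg_log_spacings, bounds=(1e-6, 50.0),
                           method="bounded").x

est_mle, est_mps = np.empty(reps), np.empty(reps)
for r in range(reps):
    x = rng.exponential(1.0 / true_rate, size=n)
    est_mle[r] = 1.0 / x.mean()     # closed-form exponential MLE
    est_mps[r] = mpse(x)

for name, est in [("MLE", est_mle), ("MPSE", est_mps)]:
    bias = est.mean() - true_rate
    mse = np.mean((est - true_rate) ** 2)
    print(f"{name}: bias={bias:+.4f}  MSE={mse:.4f}")
```
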
5

Rothman, Daniel H. "Nonlinear inversion, statistical mechanics, and residual statics estimation." GEOPHYSICS 50, no. 12 (1985): 2784–96. http://dx.doi.org/10.1190/1.1441899.

Abstract:
Nonlinear inverse problems are usually solved with linearized techniques that depend strongly on the accuracy of initial estimates of the model parameters. With linearization, objective functions can be minimized efficiently, but the risk of local rather than global optimization can be severe. I address the problem confronted in nonlinear inversion when no good initial guess of the model parameters can be made. The fully nonlinear approach presented is rooted in statistical mechanics. Although a large nonlinear problem might appear computationally intractable without linearization, reformulation of the same problem into smaller, interdependent parts can lead to tractable computation while preserving nonlinearities. I formulate inversion as a problem of Bayesian estimation, in which the prior probability distribution is the Gibbs distribution of statistical mechanics. Solutions are then obtained by maximizing the posterior probability of the model parameters. Optimization is performed with a Monte Carlo technique that was originally introduced to simulate the statistical mechanics of systems in equilibrium. The technique is applied to residual statics estimation when statics are unusually large and data are contaminated by noise. Poorly picked correlations (“cycle skips” or “leg jumps”) appear as local minima of the objective function, but global optimization is successfully performed. Further applications to deconvolution and velocity estimation are proposed.
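
The Monte Carlo technique referred to is the Metropolis rule combined with a cooling schedule, i.e., simulated annealing. A toy sketch on a deliberately multimodal one-dimensional objective, standing in for a cycle-skipped statics misfit; all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def misfit(m):
    # Toy objective: global minimum exactly at m = 2, with many local
    # minima that would trap a purely greedy or linearized search.
    return (m - 2.0) ** 2 + 3.0 * (1.0 - np.cos(4.0 * (m - 2.0)))

m = best = -4.0
T = 5.0                        # initial "temperature"
for _ in range(20_000):
    cand = m + rng.normal(0.0, 0.5)
    dE = misfit(cand) - misfit(m)
    # Metropolis rule: accept downhill always, uphill with prob exp(-dE/T)
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        m = cand
    if misfit(m) < misfit(best):
        best = m
    T *= 0.9995                # slow geometric cooling

print(best)                    # ~ 2.0, despite starting in a distant basin
```
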
6

Yamane, Ikko, Hiroaki Sasaki, and Masashi Sugiyama. "Regularized Multitask Learning for Multidimensional Log-Density Gradient Estimation." Neural Computation 28, no. 7 (2016): 1388–410. http://dx.doi.org/10.1162/neco_a_00844.

Abstract:
Log-density gradient estimation is a fundamental statistical problem and possesses various practical applications such as clustering and measuring nongaussianity. A naive two-step approach of first estimating the density and then taking its log gradient is unreliable because an accurate density estimate does not necessarily lead to an accurate log-density gradient estimate. To cope with this problem, a method to directly estimate the log-density gradient without density estimation has been explored and demonstrated to work much better than the two-step method. The objective of this letter is to improve the performance of this direct method in multidimensional cases. Our idea is to regard the problem of log-density gradient estimation in each dimension as a task and apply regularized multitask learning to the direct log-density gradient estimator. We experimentally demonstrate the usefulness of the proposed multitask method in log-density gradient estimation and mode-seeking clustering.
7

Haj Ahmad, Hanan, and Ehab M. Almetwally. "On Statistical Inference of Generalized Pareto Distribution with Jointly Progressive Censored Samples with Binomial Removal." Mathematical Problems in Engineering 2023 (April 21, 2023): 1–14. http://dx.doi.org/10.1155/2023/1821347.

Abstract:
A jointly censored sample is a very useful sampling technique for conducting comparative life tests of products: it permits selecting two samples from two manufacturing lines at the same time and conducting a single life-testing experiment. This article presents estimation of the parameters of joint generalized Pareto distributions using a Type-II progressive censoring scheme carried out with binomial removal. The generalized Pareto distribution has many applications in different fields. We outline the problem of parameter estimation using frequentist maximum likelihood and Bayesian estimation methods. Furthermore, different interval estimation methods for the four parameters were used: the asymptotic property of the maximum likelihood estimator, Bayesian credible intervals, and bootstrap confidence intervals. Detailed numerical simulations were considered to compare the performance of the proposed estimates. In addition, the applicability of the joint generalized Pareto censored model is demonstrated with a real data example.
8

Ao, Ziqiao, and Jinglai Li. "Entropy Estimation via Normalizing Flow." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 9 (2022): 9990–98. http://dx.doi.org/10.1609/aaai.v36i9.21237.

Abstract:
Entropy estimation is an important problem in information theory and statistical science. Many popular entropy estimators suffer from fast-growing estimation bias with respect to dimensionality, rendering them unsuitable for high-dimensional problems. In this work we propose a transform-based method for high-dimensional entropy estimation, which consists of the following two main ingredients. First, by modifying the k-NN based entropy estimator, we propose a new estimator which enjoys small estimation bias for samples that are close to a uniform distribution. Second, we design a normalizing flow based mapping that pushes samples toward a uniform distribution, and the relation between the entropy of the original samples and the transformed ones is also derived. As a result, the entropy of a given set of samples is estimated by first transforming them toward a uniform distribution and then applying the proposed estimator to the transformed samples. Numerical experiments demonstrate the effectiveness of the method for high-dimensional entropy estimation problems.
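
For reference, a common k-NN entropy estimator of the kind the first ingredient modifies is the (generalized) Kozachenko-Leonenko estimator. A minimal sketch, assuming continuous data with no duplicate points; sample sizes are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gammaln

def knn_entropy(x, k=3):
    """Kozachenko-Leonenko k-NN differential entropy estimate (nats)."""
    n, d = x.shape
    # distance to the k-th nearest neighbour (column 0 is the point itself)
    r = cKDTree(x).query(x, k=k + 1)[0][:, -1]
    log_ball = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)  # unit-ball volume
    return digamma(n) - digamma(k) + log_ball + d * np.mean(np.log(r))

rng = np.random.default_rng(4)
x = rng.normal(size=(5000, 3))
print(knn_entropy(x))                    # estimate
print(1.5 * np.log(2 * np.pi * np.e))    # true entropy of N(0, I_3), ~4.257
```
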
9

Sasaki, Hiroaki, Yung-Kyun Noh, Gang Niu, and Masashi Sugiyama. "Direct Density Derivative Estimation." Neural Computation 28, no. 6 (2016): 1101–40. http://dx.doi.org/10.1162/neco_a_00835.

Abstract:
Estimating the derivatives of probability density functions is an essential step in statistical data analysis. A naive approach to estimate the derivatives is to first perform density estimation and then compute its derivatives. However, this approach can be unreliable because a good density estimator does not necessarily mean a good density derivative estimator. To cope with this problem, in this letter, we propose a novel method that directly estimates density derivatives without going through density estimation. The proposed method provides computationally efficient estimation for the derivatives of any order on multidimensional data with a hyperparameter tuning method and achieves the optimal parametric convergence rate. We further discuss an extension of the proposed method by applying regularized multitask learning and a general framework for density derivative estimation based on Bregman divergences. Applications of the proposed method to nonparametric Kullback-Leibler divergence approximation and bandwidth matrix selection in kernel density estimation are also explored.
10

Sun, Qingfeng, Cuihong Chen, Hui Wang, Ningning Xu, Chao Liu, and Jixi Gao. "A Method for Assessing Background Concentrations near Sources of Strong CO2 Emissions." Atmosphere 14, no. 2 (2023): 200. http://dx.doi.org/10.3390/atmos14020200.

Abstract:
In models quantifying the emission intensity of emission sources, estimating the background concentration of greenhouse gases near an emission source is an important problem. Traditional methods that estimate greenhouse gas background concentrations from statistical information often show a certain deviation. To solve this problem, we propose an adaptive estimation method for CO2 background concentrations near emission sources, which takes full advantage of robust local regression and a Gaussian mixture model to achieve accurate estimates of greenhouse gas background concentrations. Experiments show that when the measurement error is 0.2 ppm, the background concentration estimation error is only 0.08 mg/m3, and even when the measurement error is 1.2 ppm, it remains below 0.4 mg/m3. The CO2 concentration measurements consistently show a good background assessment, and the accuracy of top-down, measurement-based carbon emission quantification should be effectively improved in the future.
11

Nakonechny, Alexander, Grigory Kudin, Petr Zinko, and Taras Zinko. "GUARANTEED ROOT-MEAN-SQUARE ESTIMATES OF LINEAR MATRIX TRANSFORMATIONS UNDER CONDITIONS OF STATISTICAL UNCERTAINTY." Journal of Automation and Information sciences 2 (March 1, 2021): 24–37. http://dx.doi.org/10.34229/1028-0979-2021-2-3.

Abstract:
Linear estimation of observations under various types of interference, with the aim of obtaining unbiased estimates, is the subject of numerous scientific publications. In previous publications the authors studied the problem of linear regression analysis in conditions where the elements of the observation vectors are known matrices that allow small deviations from the calculated ones. Using pseudo-inverse operators together with the perturbation method, the problem was solved under the condition that the linearly independent matrices are subject to small perturbations, and the parameters of the linear estimates were presented as expansions in a small parameter. Over the past decades, linear estimation problems under uncertainty have been solved within the framework of the well-known minimax estimation method. Formally, the problems arising in this direction are solved given some spaces for the unknown observation parameters, as well as spaces to which the observation errors may belong; the coefficients of the linear estimates are determined by optimizing the guaranteed mean-square error of the desired estimate. Thus, the subject of this research is linear estimation of unknown rectangular matrices from observations with errors whose correlation matrices are unknown: the unknown matrices belong to some bounded set, and the correlation matrices of the random perturbations of the observation vector are unknown but can be assumed to belong to one or another given bounded set. Several formulations of the linear estimation problem are investigated. The problem of linear estimation for an observation vector of a special form is considered, whose components are known rectangular matrices subject to small perturbations. Variants of the problem statement are proposed that allow an analytical solution in the first approximation in a small parameter. A test example is presented.
12

Jin, Hanqing, and Shige Peng. "Optimal unbiased estimation for maximal distribution." Probability, Uncertainty and Quantitative Risk 6, no. 3 (2021): 189. http://dx.doi.org/10.3934/puqr.2021009.

Abstract:
Unbiased estimation for parameters of maximal distribution is a fundamental problem in the statistical theory of sublinear expectations. In this paper, we proved that the maximum estimator is the largest unbiased estimator for the upper mean and the minimum estimator is the smallest unbiased estimator for the lower mean.
13

Kong, Yuqing, Grant Schoenebeck, Biaoshuai Tao, and Fang-Yi Yu. "Information Elicitation Mechanisms for Statistical Estimation." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 02 (2020): 2095–102. http://dx.doi.org/10.1609/aaai.v34i02.5583.

Abstract:
We study learning statistical properties from strategic agents with private information. In this problem, agents must be incentivized to truthfully reveal their information even when it cannot be directly verified. Moreover, the information reported by the agents must be aggregated into a statistical estimate. We study two fundamental statistical properties: estimating the mean of an unknown Gaussian, and linear regression with Gaussian error. The information of each agent is one point in a Euclidean space. Our main results are two mechanisms for each of these problems which optimally aggregate the information of agents in the truth-telling equilibrium:
• A minimal (non-revelation) mechanism for large populations: agents only need to report one value, but that value need not be their point.
• A mechanism for small populations that is non-minimal: agents need to answer more than one question.
These mechanisms are "informed truthful" mechanisms where reporting unaltered data (truth-telling) (1) forms a strict Bayesian Nash equilibrium and (2) has strictly higher welfare than any oblivious equilibrium where agents' strategies are independent of their private signals. We also show a minimal revelation mechanism (each agent only reports her signal) for a restricted setting and use an impossibility result to prove the necessity of this restriction. We build upon the peer prediction literature in the single-question setting; however, most previous work in this area focuses on discrete signals, whereas our setting is inherently continuous, and we further simplify the agents' reports.
14

Mu, Weiyan, Qiuyue Wei, and Shifeng Xiong. "Some Notes on Concordance between Optimization and Statistics." Mathematical Problems in Engineering 2019 (January 16, 2019): 1–6. http://dx.doi.org/10.1155/2019/3485064.

Abstract:
Many engineering problems require solutions to statistical optimization problems. When the global solution is hard to attain, engineers or statisticians always use the better solution because we intuitively believe a principle, called better solution principle (BSP) in this paper, that a better solution to a statistical optimization problem also has better statistical properties of interest. This principle displays some concordance between optimization and statistics and is expected to widely hold. Since theoretical study on BSP seems to be neglected by statisticians, this paper presents a primary discussion on BSP within a relatively general framework. We demonstrate two comparison theorems as the key results of this paper. Their applications to maximum likelihood estimation are presented. It can be seen that BSP for this problem holds under reasonable conditions; i.e., an estimator with greater likelihood is better in some statistical sense.
15

Ciuperca, Gabriela. "A Method to Treat Some Dynamical Statistical Models." Journal of Biological Systems 06, no. 04 (1998): 357–75. http://dx.doi.org/10.1142/s0218339098000236.

Abstract:
In this paper we present a method for the estimation of the parameters of models described by a nonlinear system of differential equations: we study the maximum likelihood estimator and the jackknife estimator for parameters of the system and for the covariance matrix of the state variables and we seek possible linear relations between parameters. We take into account the difficulty due to the small number of observations. The optimal experimental design for this kind of problem is determined. We give an application of this method for the glucose metabolism of goats.
16

Al-Saaedy, Mustafa A., and Emad H. Aboudi. "Estimate the Survival Function of the Power Lomax (POLO) Distribution by Using the Simulated Annealing Algorithm." Al-Nahrain Journal of Science 25, no. 1 (2022): 30–34. http://dx.doi.org/10.22401/anjs.25.1.05.

Abstract:
In this paper the survival function of the Power Lomax distribution is estimated by two methods, the maximum likelihood method and the method of moments. The resulting estimators involve non-linear equations that cannot be solved by ordinary mathematical methods and do not express the estimates in closed form, so a simulated annealing algorithm was used to solve this problem. Simulation was then used to compare the estimation methods based on the integrated mean square error (IMSE) criterion and to obtain the best estimator of the survival function. The results show that the maximum likelihood method is preferable to the method of moments.
17

Ren, Haiping, Qin Gong, and Xue Hu. "Estimation of Entropy for Generalized Rayleigh Distribution under Progressively Type-II Censored Samples." Axioms 12, no. 8 (2023): 776. http://dx.doi.org/10.3390/axioms12080776.

Abstract:
This paper investigates the problem of entropy estimation for the generalized Rayleigh distribution under progressively type-II censored samples. Based on progressively type-II censored samples, we first discuss the maximum likelihood estimation and interval estimation of Shannon entropy for the generalized Rayleigh distribution. Then, we explore the Bayesian estimation problem of entropy under three types of loss functions: K-loss function, weighted squared error loss function, and precautionary loss function. Due to the complexity of Bayesian estimation computation, we use the Lindley approximation and MCMC method for calculating Bayesian estimates. Finally, using a Monte Carlo statistical simulation, we compare the mean square errors to examine the superiority of maximum likelihood estimation and Bayesian estimation under different loss functions. An actual example is provided to verify the feasibility and practicality of various estimations.
18

Aldosari, Mubarak Saad, and Alaa Ahmed Abdelmegaly. "Mixed Statistical Estimation in Constrained Principal Component Model." Migration Letters 20, no. 5 (2023): 641–47. http://dx.doi.org/10.59670/ml.v20i5.4054.

Abstract:
A serious problem arises in statistical estimation when a false hypothesis is not rejected because prior information about the estimator is ignored. The constrained principal component model (CPCM) is considered a general model for many constrained estimators, but it does not account for the additional information about the error term found in the restriction model. This paper seeks another form of the constrained principal component model that accounts for the variance of the error term in the restriction model when this variance is nonzero. We introduce a generalized ordinary mixed estimator (GOME) using the CPCM introduced by Takane (2014) and derive, as special cases, several estimators introduced earlier. The new estimator helps researchers make decisions and rely on more credible results. We derive the subset models and their constraints from the constrained principal component model, where both the model and the constrained model have a nonzero error variance; we construct a mixed estimator for each model and combine them to obtain the GOME, and we examine the superiority of the new estimator.
19

Tessore, Nicolas, and Sarah Bridle. "Moment-based ellipticity measurement as a statistical parameter estimation problem." New Astronomy 69 (May 2019): 58–68. http://dx.doi.org/10.1016/j.newast.2018.12.002.

20

Borrajo, Laura, and Ricardo Cao. "Nonparametric Mean Estimation for Big-but-Biased Data." Proceedings 2, no. 18 (2018): 1167. http://dx.doi.org/10.3390/proceedings2181167.

Abstract:
Some authors have recently warned about the risks of the claim "with enough data, the numbers speak for themselves." This work considers the problem of nonparametric statistical inference for big data in the presence of sampling bias. The mean estimation problem is studied in this setup, in a nonparametric framework, in the realistic case where the biasing weight function is unknown. Ignorance of the weight function is remedied by a small simple random sample (SRS) of the real population. The problem is related to nonparametric density estimation. The asymptotic expression for the MSE of the proposed estimator is given, and simulations illustrate the performance of the proposed nonparametric method.
21

Bouzebda, Salim, and Christophe Chesneau. "A Note on the Nonparametric Estimation of the Conditional Mode by Wavelet Methods." Stats 3, no. 4 (2020): 475–83. http://dx.doi.org/10.3390/stats3040030.

Abstract:
The purpose of this note is to introduce and investigate the nonparametric estimation of the conditional mode using wavelet methods. We propose a new linear wavelet estimator for this problem. The estimator is constructed by combining a specific ratio technique and an established wavelet estimation method. We obtain rates of almost sure convergence over compact subsets of Rd. A general estimator beyond the wavelet methodology is also proposed, discussing adaptivity within this statistical framework.
22

Flaspoher, David C., and Ann L. Dinkheller. "German Tanks: A Problem in Estimation." Mathematics Teacher 92, no. 8 (1999): 724–28. http://dx.doi.org/10.5951/mt.92.8.0724.

Abstract:
Estimation is covered extensively in elementary statistics courses. The example discussed in this article describes a real-world situation and a simulation of that problem in which the selection of a suitable estimate is less apparent. This problem can be used at various levels. In a more elementary setting, the problem is useful to describe the concept of estimation. An advanced class can use the problem to discuss unbiasedness, minimum variance, and best estimates. At any level, the problem furnishes an excellent opportunity to make connections to the social studies curriculum and demonstrates an application of statistical techniques.
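
The classical answer to the German tank problem is the minimum-variance unbiased estimator N̂ = m(1 + 1/k) − 1, where m is the sample maximum and k the sample size. A short simulation (values illustrative) shows why the raw maximum is biased low while N̂ is not:

```python
import numpy as np

rng = np.random.default_rng(5)
N, k = 1000, 15                        # true number of tanks, captures

serials = np.arange(1, N + 1)
m = rng.choice(serials, size=k, replace=False).max()
print(m * (1 + 1 / k) - 1)             # UMVU estimate of N from one sample

# Repeated sampling: the maximum alone underestimates N, N_hat does not.
maxima = np.array([rng.choice(serials, size=k, replace=False).max()
                   for _ in range(20_000)])
print(maxima.mean())                        # ~ k(N+1)/(k+1) = 938.4
print((maxima * (1 + 1 / k) - 1).mean())    # ~ 1000
```
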
23

Korda, Anna S., Gennady A. Mikhailov, and Sergey V. Rogasinsky. "Construction and optimization of numerically-statistical projection algorithms for solving integral equations." Russian Journal of Numerical Analysis and Mathematical Modelling 37, no. 4 (2022): 213–19. http://dx.doi.org/10.1515/rnam-2022-0018.

Abstract:
The problem of minimizing the root-mean-square error of the numerical-statistical projection estimation of the solution to an integral equation is solved. It is shown that the optimal estimator in this sense can be obtained by equalizing deterministic and stochastic components of the error in the case when the norm of the remainder of the utilized decomposition decreases inversely proportional to its length. As a test, the Milne problem of radiation transfer in a semi-infinite layer of matter is solved using Laguerre polynomials. To solve such a problem in the case of a finite layer, a special regularized projection algorithm is used.
24

Bulinski, Alexander, and Alexey Kozhevin. "Statistical estimation of conditional Shannon entropy." ESAIM: Probability and Statistics 23 (2019): 350–86. http://dx.doi.org/10.1051/ps/2018026.

Abstract:
New estimates of the conditional Shannon entropy are introduced in the framework of a model describing a discrete response variable depending on a vector of d factors having a density w.r.t. the Lebesgue measure in ℝd. Namely, the mixed-pair model (X, Y) is considered, where X and Y take values in ℝd and an arbitrary finite set, respectively. Such models include, for instance, the famous logistic regression. In contrast to the well-known Kozachenko–Leonenko estimates of unconditional entropy, the proposed estimates are constructed by means of certain spatial order statistics (or k-nearest neighbor statistics, where k = kn depends on the number of observations n) and a random number of i.i.d. observations contained in balls of specified random radii. The asymptotic unbiasedness and L2-consistency of the new estimates are established under simple conditions. The obtained results can be applied to the feature selection problem, which is important, e.g., for medical and biological investigations.
25

MOUNT, DAVID M., NATHAN S. NETANYAHU, CHRISTINE D. PIATKO, RUTH SILVERMAN, and ANGELA Y. WU. "QUANTILE APPROXIMATION FOR ROBUST STATISTICAL ESTIMATION AND k-ENCLOSING PROBLEMS." International Journal of Computational Geometry & Applications 10, no. 06 (2000): 593–608. http://dx.doi.org/10.1142/s0218195900000334.

Abstract:
Given a set P of n points in Rd, a fundamental problem in computational geometry is concerned with finding the smallest shape of some type that encloses all the points of P. Well-known instances of this problem include finding the smallest enclosing box, minimum volume ball, and minimum volume annulus. In this paper we consider the following variant: Given a set of n points in Rd, find the smallest shape in question that contains at least k points or a certain quantile of the data. This type of problem is known as a k-enclosing problem. We present a simple algorithmic framework for computing quantile approximations for the minimum strip, ellipsoid, and annulus containing a given quantile of the points. The algorithms run in O(n log n) time.
26

Abd-Elfattah, A. M., and A. H. Alharbey. "Bayesian Estimation for Burr Distribution Type III Based on Trimmed Samples." ISRN Applied Mathematics 2012 (November 18, 2012): 1–18. http://dx.doi.org/10.5402/2012/250393.

Abstract:
Trimmed samples are widely employed in several areas of statistical practice, especially when some sample values at either or both extremes might have been contaminated. We consider the problem of estimating the parameters of the Burr type III distribution based on trimmed samples and prior information. In this paper, we study the estimation of the unknown parameters based on doubly Type-II censored data, using the maximum likelihood method and a Bayesian approach to estimate the shape parameters of the Burr type III distribution. The numerical illustration requires solving nonlinear equations; therefore, the MathCAD (2001) statistical package was used to assess these effects numerically.
27

Brecheteau, Claire, Edouard Genetay, Timothee Mathieu, and Adrien Saumard. "Topics in robust statistical learning." ESAIM: Proceedings and Surveys 74 (November 2023): 119–36. http://dx.doi.org/10.1051/proc/202374119.

Abstract:
Some recent contributions to robust inference are presented. Firstly, the classical problem of robust M-estimation of a location parameter is revisited using an optimal transport approach - with specifically designed Wasserstein-type distances - that reduces robustness to a continuity property. Secondly, a procedure of estimation of the distance function to a compact set is described, using union of balls. This methodology originates in the field of topological inference and offers as a byproduct a robust clustering method. Thirdly, a robust Lloyd-type algorithm for clustering is constructed, using a bootstrap variant of the median-of-means strategy. This algorithm comes with a robust initialization.
28

Mondelli, Marco, and Ramji Venkataramanan. "Approximate message passing with spectral initialization for generalized linear models*." Journal of Statistical Mechanics: Theory and Experiment 2022, no. 11 (2022): 114003. http://dx.doi.org/10.1088/1742-5468/ac9828.

Abstract:
We consider the problem of estimating a signal from measurements obtained via a generalized linear model. We focus on estimators based on approximate message passing (AMP), a family of iterative algorithms with many appealing features: the performance of AMP in the high-dimensional limit can be succinctly characterized under suitable model assumptions; AMP can also be tailored to the empirical distribution of the signal entries, and for a wide class of estimation problems, AMP is conjectured to be optimal among all polynomial-time algorithms. However, a major issue of AMP is that in many models (such as phase retrieval), it requires an initialization correlated with the ground-truth signal and independent from the measurement matrix. Assuming that such an initialization is available is typically not realistic. In this paper, we solve this problem by proposing an AMP algorithm initialized with a spectral estimator. With such an initialization, the standard AMP analysis fails since the spectral estimator depends in a complicated way on the design matrix. Our main contribution is a rigorous characterization of the performance of AMP with spectral initialization in the high-dimensional limit. The key technical idea is to define and analyze a two-phase artificial AMP algorithm that first produces the spectral estimator, and then closely approximates the iterates of the true AMP. We also provide numerical results that demonstrate the validity of the proposed approach.
29

Berdimuradov, M. B. "Comparative analysis of unknown parameter estimation of the gamma distribution with right-censored data in incomplete statistical models." Проблемы вычислительной и прикладной математики, no. 1(63) (March 22, 2025): 116–28. https://doi.org/10.71310/pcam.1_63.2025.09.

Abstract:
In this article, the problem of estimating the parameters of the gamma distribution under censored data conditions in incomplete statistical models is considered. Numerical maximum likelihood methods are analyzed, including the Nelder-Mead and Expectation-Maximization (EM) algorithms, which are applied for estimating the distribution parameters. A comparison of estimation accuracy at different levels of censoring is conducted, allowing the identification of the advantages and limitations of each method. The obtained results show that the EM algorithm provides higher estimation accuracy under censoring conditions, while the Nelder-Mead method demonstrates stable results under full observation. The influence of the proportional hazards model on parameter estimation under dependent censoring is also examined. This study expands the investigation of numerical methods for estimating distribution parameters under incomplete data conditions, offering recommendations on selecting the most effective method depending on the sample characteristics and the level of censoring.
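
A minimal sketch of the Nelder-Mead approach for right-censored data: observed failures contribute the gamma density to the likelihood, while censored observations contribute the survival function. Parameter values and the censoring mechanism are illustrative assumptions, not the article's exact setup.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(6)
shape_true, scale_true, n = 2.0, 1.5, 500
t = rng.gamma(shape_true, scale_true, n)       # latent lifetimes
c = rng.uniform(0.0, 8.0, n)                   # censoring times
obs = np.minimum(t, c)                         # observed values
event = t <= c                                 # True if failure observed

def neg_loglik(params):
    a, s = params
    if a <= 0 or s <= 0:
        return np.inf
    # density for observed failures, survival for right-censored points
    ll = stats.gamma.logpdf(obs[event], a, scale=s).sum()
    ll += stats.gamma.logsf(obs[~event], a, scale=s).sum()
    return -ll

res = optimize.minimize(neg_loglik, x0=[1.0, 1.0], method="Nelder-Mead")
print(res.x)   # close to (2.0, 1.5)
```
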
30

Zhao, Zhihe, and Heng Lian. "Distributed Estimation for ℓ0-Constrained Quantile Regression Using Iterative Hard Thresholding." Mathematics 13, no. 4 (2025): 669. https://doi.org/10.3390/math13040669.

Abstract:
Distributed frameworks for statistical estimation and inference have become a critical toolkit for analyzing massive data efficiently. In this paper, we present distributed estimation for high-dimensional quantile regression with ℓ0 constraint using iterative hard thresholding (IHT). We propose a communication-efficient distributed estimator which is linearly convergent to the true parameter up to the statistical precision of the model, despite the fact that the check loss minimization problem with an ℓ0 constraint is neither strongly smooth nor convex. The distributed estimator we develop can achieve the same convergence rate as the estimator based on the whole data set under suitable assumptions. In our simulations, we illustrate the convergence of the estimators under different settings and also demonstrate the accuracy of nonzero parameter identification.
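
On a single machine, the core IHT step alternates a (sub)gradient step on the check loss with hard thresholding to the s largest coefficients. A minimal non-distributed sketch for median regression (τ = 0.5); the step size, iteration count, and data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
n, p, s, tau = 2000, 100, 5, 0.5
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:s] = [3.0, -2.0, 1.5, -1.0, 2.0]
y = X @ beta_true + rng.standard_t(df=3, size=n)   # heavy-tailed noise

def hard_threshold(b, s):
    """Keep the s largest-magnitude coordinates, zero out the rest."""
    out = np.zeros_like(b)
    keep = np.argsort(np.abs(b))[-s:]
    out[keep] = b[keep]
    return out

beta, step = np.zeros(p), 0.5                      # fixed step, for illustration
for _ in range(300):
    r = y - X @ beta
    grad = -X.T @ (tau - (r < 0)) / n              # check-loss subgradient
    beta = hard_threshold(beta - step * grad, s)

print(np.nonzero(beta)[0])    # recovered support, ideally {0, ..., 4}
print(beta[:s].round(2))
```
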
31

Baaske, Markus, Felix Ballani, and Karl Gerald Van den Boogaart. "A QUASI-LIKELIHOOD APPROACH TO PARAMETER ESTIMATION FOR SIMULATABLE STATISTICAL MODELS." Image Analysis & Stereology 33, no. 2 (2014): 107. http://dx.doi.org/10.5566/ias.v33.p107-119.

Abstract:
This paper introduces a parameter estimation method for a general class of statistical models. The method relies exclusively on the possibility of conducting simulations for the construction of interpolation-based metamodels of informative empirical characteristics and on a subjectively chosen correlation structure of the underlying spatial random process. In the absence of likelihood functions for such statistical models, which is often the case in stochastic geometric modelling, the idea is to follow a quasi-likelihood (QL) approach and construct an optimal estimating-function surrogate based on a set of interpolated summary statistics. Solving these estimating equations, one can account both for the random errors due to simulation and for the uncertainty about the metamodels. Thus, putting the QL approach to parameter estimation into a stochastic simulation setting, the proposed method essentially consists of finding roots of a sequence of approximating quasi-score functions. As a simple demonstrating example, the proposed method is applied to a parameter estimation problem for a planar Boolean model with discs. Here, the quasi-score function has a half-analytical, numerically tractable representation and allows for comparison of the model parameter estimates found by the simulation-based method with those obtained from solving the exact quasi-score equations.
32

Chetverikov, Denis, Dongwoo Kim, and Daniel Wilhelm. "Nonparametric Instrumental-Variable Estimation." Stata Journal: Promoting communications on statistics and Stata 18, no. 4 (2018): 937–50. http://dx.doi.org/10.1177/1536867x1801800411.

Abstract:
In this article, we introduce the commands npiv and npivcv, which implement nonparametric instrumental-variable (NPIV) estimation methods without and with a cross-validated choice of tuning parameters, respectively. Both commands can impose the constraint that the resulting estimated function is monotone. Using such a shape restriction may significantly improve the performance of the NPIV estimator (Chetverikov and Wilhelm, 2017, Econometrica 85: 1303–1320) because the ill-posedness of the NPIV estimation problem leads to unconstrained estimators that suffer from particularly poor statistical properties such as high variance. However, the constrained estimator that imposes the monotonicity significantly reduces variance by removing nonmonotone oscillations of the estimator. We provide a small Monte Carlo experiment to study the estimators’ finite-sample properties and an application to the estimation of gasoline demand functions.
33

Yang, Ziheng, Nick Goldman, and Adrian Friday. "Maximum Likelihood Trees from DNA Sequences: A Peculiar Statistical Estimation Problem." Systematic Biology 44, no. 3 (1995): 384. http://dx.doi.org/10.2307/2413599.

34

Diao, Huai-An, Yimin Wei, and Pengpeng Xie. "Small sample statistical condition estimation for the total least squares problem." Numerical Algorithms 75, no. 2 (2016): 435–55. http://dx.doi.org/10.1007/s11075-016-0185-9.

35

Yang, Ziheng, Nick Goldman, and Adrian Friday. "Maximum Likelihood Trees from DNA Sequences: A Peculiar Statistical Estimation Problem." Systematic Biology 44, no. 3 (1995): 384–99. http://dx.doi.org/10.1093/sysbio/44.3.384.

36

Marek, Jaroslav, and Pavel Chmelař. "Survey of Point Cloud Registration Methods and New Statistical Approach." Mathematics 11, no. 16 (2023): 3564. http://dx.doi.org/10.3390/math11163564.

Abstract:
The use of a 3D range scanning device for autonomous object description or unknown environment mapping leads to the necessity of improving computer methods based on identical point pairs from different point clouds (the so-called registration problem). The registration problem and three-dimensional transformation of coordinates still require further research. The paper attempts to guide the reader through the vast field of existing registration methods so that they can choose the appropriate approach for their particular problem. Furthermore, the article contains a regression method that enables the estimation of the covariance matrix of the transformation parameters and the calculation of the uncertainty of the estimated points. This makes it possible to extend existing registration methods with uncertainty estimates and to improve knowledge about the performed registration. The paper's primary purpose is to present a survey of known methods and basic estimation theory concepts for the point cloud registration problem. The focus is on the guiding principles of estimation theory: the ICP algorithm; Normal Distribution Transform; feature-based registration; iterative dual correspondences; the probabilistic iterative correspondence method; point-based registration; quadratic patches; likelihood-field matching; conditional random fields; branch-and-bound registration; PointReg. The secondary purpose of this article is to present an innovative statistical model for this transformation problem. The new theory requires known covariance matrices of the identical point coordinates. An unknown rotation matrix and shift vector are estimated using a nonlinear regression model with nonlinear constraints. The paper ends with a relevant numerical example.
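
For the rigid case with known point pairs, the classical least-squares solution is the SVD-based Kabsch/Procrustes algorithm, which underlies many of the listed methods. A minimal sketch assuming isotropic noise and no covariance weighting, so simpler than the paper's regression model:

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid transform (R, t) mapping rows of P onto rows of Q.

    Classical SVD solution of the orthogonal Procrustes problem; rows of
    P and Q are paired (identical) points.
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cQ - R @ cP

rng = np.random.default_rng(8)
P = rng.normal(size=(30, 3))
theta = 0.7
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([1.0, -2.0, 0.5]) + rng.normal(0, 0.01, (30, 3))

R, t = kabsch(P, Q)
print(np.allclose(R, R_true, atol=0.01), np.round(t, 2))
```
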
37

Wu, Yong Zhong, and Jiang Wen Liu. "A Regression-Based Cycle Time Estimator for the PCB Placement Machine." Applied Mechanics and Materials 533 (February 2014): 487–90. http://dx.doi.org/10.4028/www.scientific.net/amm.533.487.

Abstract:
The assembly of printed circuit boards is conducted by sophisticated PCB placement machines. The cycle time for a machine to complete an assembly job is determined by solving two machine optimization problems, i.e., feeder arrangement and placement sequencing. Rapid and accurate estimation of these cycle times is important for solving higher-level planning problems, including the line balancing problem. In this paper, a regression-based estimator was established to estimate the cycle time. Statistical results on 100 PCB samples showed that the proposed estimator could estimate the cycle times accurately. At the same time, its computational efficiency makes it suitable for use in solving higher-level planning problems.
38

Zhu, Tianqi, and Ziheng Yang. "Complexity of the simplest species tree problem." Molecular Biology and Evolution 38, no. 9 (2021): 3993–4009. http://dx.doi.org/10.1093/molbev/msab009.

Abstract:
The multispecies coalescent model provides a natural framework for species tree estimation accounting for gene-tree conflicts. Although a number of species tree methods under the multispecies coalescent have been suggested and evaluated using simulation, their statistical properties remain poorly understood. Here, we use mathematical analysis aided by computer simulation to examine the identifiability, consistency, and efficiency of different species tree methods in the case of three species and three sequences under the molecular clock. We consider four major species-tree methods including concatenation, two-step, independent-sites maximum likelihood, and maximum likelihood. We develop approximations that predict that the probit transform of the species tree estimation error decreases linearly with the square root of the number of loci. Even in this simplest case, major differences exist among the methods. Full-likelihood methods are considerably more efficient than summary methods such as concatenation and two-step. They also provide estimates of important parameters such as species divergence times and ancestral population sizes, whereas these parameters are not identifiable by summary methods. Our results highlight the need to improve the statistical efficiency of summary methods and the computational efficiency of full likelihood methods of species tree estimation.
39

Samar, Mahvish, Xinzhong Zhu, and Huiying Xu. "Conditioning Theory for ML-Weighted Pseudoinverse and ML-Weighted Least Squares Problem." Axioms 13, no. 6 (2024): 345. http://dx.doi.org/10.3390/axioms13060345.

Abstract:
The conditioning theory of the ML-weighted least squares and ML-weighted pseudoinverse problems is explored in this article. We begin by introducing three types of condition numbers for the ML-weighted pseudoinverse problem: normwise, mixed, and componentwise, along with their explicit expressions. Utilizing the derivative of the ML-weighted pseudoinverse problem, we then provide explicit condition number expressions for the solution of the ML-weighted least squares problem. To ensure reliable estimation of these condition numbers, we employ the small-sample statistical condition estimation method for all three algorithms. The article concludes with numerical examples that highlight the results obtained.
40

Azhmyakov, Vadim, Ilya Shirokov, and Luz Adriana Guzman Trujillo. "Advanced Statistical Analysis of the Predicted Volatility Levels in Crypto Markets." Journal of Risk and Financial Management 17, no. 7 (2024): 279. http://dx.doi.org/10.3390/jrfm17070279.

Abstract:
Our paper deals with an advanced statistical tool for the volatility prediction problem in financial (crypto) markets. First, we consider the conventional GARCH-based volatility models. Next, we extend the corresponding GARCH-based forecasting and calculate a specific probability associated with the predicted volatility levels. As the probability evaluation is based on a stochastic model, we develop an advanced data-driven estimation of this probability. The novel statistical estimation we propose uses real market data. The obtained analytical results for the statistical probability of the levels are also discussed in the framework of the integrated volatility concept. The possible application of the established probability estimation approach to the volatility clustering problem is also mentioned. Our paper includes a concrete implementation of the proposed volatility prediction tool and considers a novel trading and volatility estimation module for crypto markets recently developed by the 1ex Trading Board group in collaboration with GoldenGate Venture. We also briefly discuss the possible application of a model combined with the data-driven volatility prediction methodology to financial risk management.
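
As a point of reference for the GARCH-based part, the GARCH(1,1) one-step-ahead variance forecast is the recursion sigma2_{t+1} = omega + alpha * r_t^2 + beta * sigma2_t. A minimal sketch with fixed, illustrative (not fitted) parameters and simulated stand-in returns:

```python
import numpy as np

def garch11_forecast(returns, omega, alpha, beta):
    """One-step-ahead conditional variance under GARCH(1,1):
    sigma2_{t+1} = omega + alpha * r_t**2 + beta * sigma2_t."""
    sigma2 = np.var(returns)            # common initialization choice
    for r in returns:
        sigma2 = omega + alpha * r ** 2 + beta * sigma2
    return sigma2

rng = np.random.default_rng(9)
rets = rng.normal(0.0, 0.02, 500)       # stand-in for crypto log-returns
# Persistence alpha + beta < 1 keeps the process stationary
print(np.sqrt(garch11_forecast(rets, omega=1e-6, alpha=0.08, beta=0.90)))
```
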
41

Bannikova, T. M., V. M. Nemtsov, N. A. Baranova, G. N. Konygin, and O. M. Nemtsova. "A method for estimating the statistical error of the solution in the inverse spectroscopy problem." Izvestiya Instituta Matematiki i Informatiki Udmurtskogo Gosudarstvennogo Universiteta 58 (November 2021): 3–17. http://dx.doi.org/10.35634/2226-3594-2021-58-01.

Abstract:
A method is proposed for obtaining the interval of statistical error of the solution of the inverse spectroscopy problem, in cases where the normal distribution law can be applied to estimate the statistical error of the experimental data. Mathematical modeling of the statistical error of the partial spectral components obtained from a numerically stable solution of the inverse problem makes it possible to specify the error of the corresponding solution. The problem of obtaining the error interval of the inverse solution is relevant because existing methods of solution error evaluation are based on the analysis of smooth functional dependences under rigid restrictions on the region of acceptable solutions (compactness, monotonicity, etc.); their use in computer processing of real experimental data is extremely difficult, and therefore they are, as a rule, not applied. Based on the extraction of partial spectral components and the estimation of their error, this work proposes a method for obtaining an interval of statistical error for the solution of inverse spectroscopy problems. The necessity and importance of finding the solution error interval to ensure reliable results is demonstrated with examples of processing Mössbauer spectra.
42

Dohi, Tadashi, Hiroyuki Okamura, and Cun Hua Qian. "Statistical software fault management based on bootstrap confidence intervals." International Journal of Quality & Reliability Management 37, no. 6/7 (2020): 905–23. http://dx.doi.org/10.1108/ijqrm-10-2019-0326.

Abstract:
Purpose: In this paper, the authors propose two construction methods to estimate confidence intervals of the time-based optimal software rejuvenation policy and its associated maximum system availability via a parametric bootstrap method. Through simulation experiments the authors investigate their asymptotic behaviors and statistical properties.
Design/methodology/approach: The present paper is the first challenge to derive the confidence intervals of the optimal software rejuvenation schedule, which maximizes the system availability in the long run. In other words, the authors approach statistical software fault management by employing an idea of process control in quality engineering and a parametric bootstrap.
Findings: As a remarkably different point from the existing work, the authors carefully take account of a special case where the two-sided confidence interval of the optimal software rejuvenation time does not exist, owing to the fact that the estimator distribution of the optimal software rejuvenation time is defective. Here the authors propose two useful construction methods for the two-sided confidence interval: a conditional confidence interval and a heuristic confidence interval.
Research limitations/implications: Although the authors applied a simulation-based bootstrap confidence method in this paper, another resampling-based approach could be applied to the same problem. In addition, the authors focused on a parametric bootstrap, but a non-parametric bootstrap method could also be applied to the confidence interval estimation of the optimal software rejuvenation time when complete knowledge of the distribution form is not available.
Practical implications: The statistical software fault management techniques proposed in this paper are useful for controlling the system availability of operational software systems by means of the control chart.
Social implications: Through online monitoring of operational software systems, it is possible to estimate the optimal software rejuvenation time and its associated system availability without applying any approximation. By implementing this function in an application programming interface (API), low-cost fault tolerance can be realized for software systems with aging.
Originality/value: In the past literature, almost all authors employed parametric and non-parametric inference techniques to estimate the optimal software rejuvenation time but focused only on point estimation. This may often lead to misjudgment based on over- or under-estimation under uncertainty. The authors overcome this problem by introducing the two-sided confidence interval approach.
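
The parametric bootstrap idea is generic: fit a model, simulate from the fit, refit on each simulated sample, and read a percentile interval off the bootstrap distribution of the quantity of interest. A minimal sketch for an exponential lifetime model and its 90th percentile, standing in for the rejuvenation-schedule quantities; all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(10)
data = rng.exponential(scale=100.0, size=60)      # observed lifetimes

scale_hat = data.mean()                           # exponential MLE
B = 5000
boot = np.empty(B)
for b in range(B):
    # resample from the fitted model, refit, and record the fitted quantile
    resample = rng.exponential(scale=scale_hat, size=data.size)
    boot[b] = -resample.mean() * np.log(0.1)      # fitted 90th percentile

lo, hi = np.percentile(boot, [2.5, 97.5])         # percentile bootstrap CI
point = -scale_hat * np.log(0.1)
print(f"{point:.1f}  95% CI: ({lo:.1f}, {hi:.1f})")
```
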
43

Tsai, Chia-Hsuan, and Ming-Tien Tsai. "Consistent Estimators of the Population Covariance Matrix and Its Reparameterizations." Mathematics 13, no. 2 (2025): 191. https://doi.org/10.3390/math13020191.

Abstract:
For the high-dimensional covariance estimation problem, when lim_{n→∞} p/n = c ∈ (0, 1), the orthogonally equivariant estimator of the population covariance matrix proposed by Tsai and Tsai exhibits certain optimal properties. Under some regularity conditions, the authors showed that their novel estimators of eigenvalues are consistent with the eigenvalues of the population covariance matrix. In this paper, under the multinormal setup, we show that they are consistent estimators of the population covariance matrix under a high-dimensional asymptotic setup. We also show that the novel estimator is the MLE of the population covariance matrix when c ∈ (0, 1). The novel estimator is used to establish that the optimal decomposite T²-test has been retained. A high-dimensional statistical hypothesis testing problem is used to carry out statistical inference for high-dimensional principal component analysis-related problems without the sparsity assumption. In the final section, we discuss the situation in which p > n, especially for high-dimensional low-sample-size categorical data models in which p ≫ n.
44

Abdullah, Muhammad, Tahir N. Malik, Ali Ahmed, Muhammad F. Nadeem, Irfan A. Khan, and Rui Bo. "A Novel Hybrid GWO-LS Estimator for Harmonic Estimation Problem in Time Varying Noisy Environment." Energies 14, no. 9 (2021): 2587. http://dx.doi.org/10.3390/en14092587.

Abstract:
The power quality of the electrical power system (EPS) is greatly affected by electrical harmonics. Hence, accurate and proper estimation of electrical harmonics is essential to design appropriate filters for mitigation of harmonics and their associated effects on the power quality of the EPS. This paper presents a novel statistical (least squares, LS) and meta-heuristic (Grey Wolf Optimizer, GWO) hybrid technique for accurate detection and estimation of electrical harmonics with minimum computational time. The non-linear part (phase and frequency) of the harmonics is estimated using GWO, while the linear part (amplitude) is estimated using the LS method. Furthermore, harmonics having transients are also estimated using the proposed harmonic estimators. The effectiveness of the proposed harmonic estimator is evaluated using various case studies. Comparing the proposed approach with other harmonic estimation techniques demonstrates that it has a minimum mean square error with less complexity and better computational efficiency.
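
The split the abstract describes works because amplitude enters the signal model linearly once frequency and phase are fixed. The sketch below solves only that linear half by ordinary least squares, with frequencies assumed known (the paper estimates the nonlinear part with GWO); all signal parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)
fs, T = 5000.0, 0.2
t = np.arange(0.0, T, 1.0 / fs)
freqs = np.array([50.0, 150.0, 250.0])          # fundamental + odd harmonics
amps_true, phis_true = [1.0, 0.3, 0.1], [0.5, -1.0, 2.0]

signal = sum(a * np.sin(2 * np.pi * f * t + p)
             for a, f, p in zip(amps_true, freqs, phis_true))
signal += rng.normal(0.0, 0.05, t.size)         # measurement noise

# a*sin(wt + phi) = (a cos phi) sin(wt) + (a sin phi) cos(wt): linear in
# the sine/cosine coefficients, so ordinary least squares applies.
A = np.column_stack([g(2 * np.pi * f * t) for f in freqs
                     for g in (np.sin, np.cos)])
coef, *_ = np.linalg.lstsq(A, signal, rcond=None)
a, b = coef[0::2], coef[1::2]
print(np.hypot(a, b).round(3))                  # amplitudes ~ [1.0, 0.3, 0.1]
print(np.arctan2(b, a).round(2))                # phases ~ [0.5, -1.0, 2.0]
```
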
45

Willick, Jeffrey A. "Statistical bias in distance and peculiar velocity estimation. 1: The 'calibration' problem." Astrophysical Journal Supplement Series 92 (May 1994): 1. http://dx.doi.org/10.1086/191957.

46

Burov, V. A., E. E. Kasatkina, O. D. Rumyantseva, and E. E. Sukhov. "Inverse problem of a statistical estimation of scatterer characteristics and model examples." Acoustical Physics 49, no. 3 (2003): 290–99. http://dx.doi.org/10.1134/1.1574356.

47

Kwon, Suyong, Woohwan Jung, and Kyuseok Shim. "Cardinality estimation of approximate substring queries using deep learning." Proceedings of the VLDB Endowment 15, no. 11 (2022): 3145–57. http://dx.doi.org/10.14778/3551793.3551859.

Abstract:
Cardinality estimation of an approximate substring query is an important problem in database systems. Traditional approaches build a summary from the text data and estimate the cardinality using the summary with some statistical assumptions. Since deep learning models can learn underlying complex data patterns effectively, they have been successfully applied and shown to outperform traditional methods for cardinality estimations of queries in database systems. However, since they are not yet applied to approximate substring queries, we investigate a deep learning approach for cardinality estimation of such queries. Although the accuracy of deep learning models tends to improve as the train data size increases, producing a large train data is computationally expensive for cardinality estimation of approximate substring queries. Thus, we develop efficient train data generation algorithms by avoiding unnecessary computations and sharing common computations. We also propose a deep learning model as well as a novel learning method to quickly obtain an accurate deep learning-based estimator. Extensive experiments confirm the superiority of our data generation algorithms and deep learning model with the novel learning method.
48

Ermakov, M. S. "Minimax estimation in a deconvolution problem." Journal of Physics A: Mathematical and General 25, no. 5 (1992): 1273–81. http://dx.doi.org/10.1088/0305-4470/25/5/030.

49

Karimi Ezmareh, Z., and G. Yari. "Statistical Inference of Stress-Strength Reliability of Gompertz Distribution under Type II Censoring." Advances in Mathematical Physics 2022 (December 3, 2022): 1–14. http://dx.doi.org/10.1155/2022/2129677.

Abstract:
This paper develops the problem of estimating stress-strength reliability for Gompertz lifetime distribution. First, the maximum likelihood estimation (MLE) and exact and asymptotic confidence intervals for stress-strength reliability are obtained. Then, Bayes estimators under informative and noninformative prior distributions are obtained by using Lindley approximation, Monte Carlo integration, and MCMC. Bayesian credible intervals are constructed under these prior distributions. Also, simulation studies are used to illustrate these inference methods. Finally, a real dataset is analyzed to show the implementation of the proposed methodologies.
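
Stress-strength reliability R = P(Y < X) is easy to check by Monte Carlo once both distributions can be sampled. A minimal sketch using inverse-CDF sampling of the Gompertz distribution with F(x) = 1 − exp(−η(e^{bx} − 1)), a common parameterization assumed here; parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(12)

def gompertz(eta, b, size):
    """Inverse-CDF sampling for F(x) = 1 - exp(-eta * (exp(b*x) - 1))."""
    u = rng.uniform(size=size)
    return np.log(1.0 - np.log(1.0 - u) / eta) / b

x = gompertz(eta=0.5, b=1.0, size=1_000_000)   # strength
y = gompertz(eta=1.5, b=1.0, size=1_000_000)   # stress
# For a common b this has the closed form eta_y / (eta_x + eta_y) = 0.75
print((y < x).mean())                           # Monte Carlo estimate of R
```
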
50

Brunel, Victor-Emmanuel. "A change-point problem and inference for segment signals." ESAIM: Probability and Statistics 22 (2018): 210–35. http://dx.doi.org/10.1051/ps/2018014.

Abstract:
We address the problem of detection and estimation of one or two change-points in the mean of a series of random variables. We use the formalism of set estimation in regression: to each point of a design is attached a binary label that indicates whether that point belongs to an unknown segment and this label is contaminated with noise. The endpoints of the unknown segment are the change-points. We study the minimal size of the segment which allows statistical detection in different scenarios, including when the endpoints are separated from the boundary of the domain of the design, or when they are separated from one another. We compare this minimal size with the minimax rates of convergence for estimation of the segment under the same scenarios. The aim of this extensive study of a simple yet fundamental version of the change-point problem is two-fold: understanding the impact of the location and the separation of the change points on detection and estimation and bringing insights about the estimation and detection of convex bodies in higher dimensions.
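
For the one change-point mean-shift setting, the standard least-squares estimator scans all splits and picks the one minimizing the residual sum of squares with separate segment means. A minimal sketch with only trivial boundary trimming; the data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(13)
n, cp = 400, 260
y = rng.normal(0.0, 1.0, n)
y[cp:] += 1.2                               # mean shift at the change point

# RSS of a split at k: total sum of squares minus each segment's mean effect
csum, csum2 = np.cumsum(y), np.cumsum(y ** 2)
best, best_rss = None, np.inf
for k in range(10, n - 10):                 # keep both segments non-trivial
    s1, s2 = csum[k - 1], csum[-1] - csum[k - 1]
    rss = csum2[-1] - s1 ** 2 / k - s2 ** 2 / (n - k)
    if rss < best_rss:
        best, best_rss = k, rss

print(best)                                 # close to 260
```
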
