
Journal articles on the topic 'Least absolute deviations (Statistics)'



Consult the top 50 journal articles for your research on the topic 'Least absolute deviations (Statistics)'.

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Sudermann-Merx, Nathan, and Steffen Rebennack. "Leveraged least trimmed absolute deviations." OR Spectrum 43, no. 3 (2021): 809–34. http://dx.doi.org/10.1007/s00291-021-00627-y.

Abstract:
The design of regression models that are not affected by outliers is an important task that has been the subject of numerous papers within the statistics community over recent decades. Prominent examples of robust regression models are least trimmed squares (LTS), where the k largest squared deviations are ignored, and least trimmed absolute deviations (LTA), which ignores the k largest absolute deviations. The numerical complexity of both models is driven by the number of binary variables and by the value k of ignored deviations. We introduce leveraged least trimmed absolute deviations (LLTA), which exploits the fact that LTA is already immune against y-outliers. Therefore, LLTA only has to be guarded against outlying values in x, so-called leverage points, which, in contrast to y-outliers, can be computed beforehand. Thus, while the mixed-integer formulations of LTS and LTA have as many binary variables as data points, LLTA only needs one binary variable per leverage point, resulting in a significant reduction of binary variables. Based on 11 data sets from the literature, we demonstrate that (1) LLTA’s prediction quality improves much faster than LTS and as fast as LTA for increasing values of k and (2) LLTA solves the benchmark problems about 80 times faster than LTS and about five times faster than LTA, in median.
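The two trimming criteria contrasted in this abstract are easy to state in code. A minimal numpy sketch of the LTS and LTA objectives (function and variable names are illustrative, not taken from the paper):

```python
import numpy as np

def lts_loss(y, y_hat, k):
    """Least trimmed squares: sum of squared residuals after
    discarding the k largest."""
    r2 = np.sort((np.asarray(y, float) - y_hat) ** 2)
    return r2[: len(r2) - k].sum()

def lta_loss(y, y_hat, k):
    """Least trimmed absolute deviations: sum of absolute residuals
    after discarding the k largest."""
    r = np.sort(np.abs(np.asarray(y, float) - y_hat))
    return r[: len(r) - k].sum()
```

With k = 1, a single gross y-outlier contributes nothing to either criterion, which is the immunity against y-outliers that the paper builds on.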
2

Morgenthaler, Stephan. "Least-absolute-deviations fits for generalized linear models." Biometrika 79, no. 4 (1992): 747–54. http://dx.doi.org/10.1093/biomet/79.4.747.

3

Ziegelmann, Flavio A. "A Local Linear Least-Absolute-Deviations Estimator of Volatility." Communications in Statistics - Simulation and Computation 37, no. 8 (2008): 1543–64. http://dx.doi.org/10.1080/03610910802244398.

4

Ciuperca, Gabriela. "Penalized least absolute deviations estimation for nonlinear model with change-points." Statistical Papers 52, no. 2 (2009): 371–90. http://dx.doi.org/10.1007/s00362-009-0236-6.

5

Vazler, Ivan, Kristian Sabo, and Rudolf Scitovski. "Weighted Median of the Data in Solving Least Absolute Deviations Problems." Communications in Statistics - Theory and Methods 41, no. 8 (2012): 1455–65. http://dx.doi.org/10.1080/03610926.2010.539750.

6

Mathew, Thomas, and Kenneth Nordström. "Least squares and least absolute deviation procedures in approximately linear models." Statistics & Probability Letters 16, no. 2 (1993): 153–58. http://dx.doi.org/10.1016/0167-7152(93)90160-k.

7

McKean, Joseph W., and Gerald L. Sievers. "Coefficients of determination for least absolute deviation analysis." Statistics & Probability Letters 5, no. 1 (1987): 49–54. http://dx.doi.org/10.1016/0167-7152(87)90026-5.

8

Pan, Baoguo, Min Chen, Yan Wang, and Wei Xia. "Weighted least absolute deviations estimation for ARFIMA time series with finite or infinite variance." Journal of the Korean Statistical Society 44, no. 1 (2015): 1–11. http://dx.doi.org/10.1016/j.jkss.2014.04.001.

9

Tyrsin, A. N. "Robust construction of regression models based on the generalized least absolute deviations method." Journal of Mathematical Sciences 139, no. 3 (2006): 6634–42. http://dx.doi.org/10.1007/s10958-006-0380-7.

10

Chan, Ngai Hang, and Liang Peng. "Weighted least absolute deviations estimation for an AR(1) process with ARCH(1) errors." Biometrika 92, no. 2 (2005): 477–84. http://dx.doi.org/10.1093/biomet/92.2.477.

11

Ling, Shiqing. "Self-weighted least absolute deviation estimation for infinite variance autoregressive models." Journal of the Royal Statistical Society: Series B (Statistical Methodology) 67, no. 3 (2005): 381–93. http://dx.doi.org/10.1111/j.1467-9868.2005.00507.x.

12

Wang, Hongxia, Jinguan Lin, and Jinde Wang. "Local Linear Estimation for Spatiotemporal Models Based on Least Absolute Deviation." Communications in Statistics - Theory and Methods 44, no. 7 (2013): 1508–22. http://dx.doi.org/10.1080/03610926.2013.771744.

13

Wang, Xinghui, Huilong Wang, Hongrui Wang, and Shuhe Hu. "Asymptotic inference of least absolute deviation estimation for AR(1) processes." Communications in Statistics - Theory and Methods 49, no. 4 (2018): 809–26. http://dx.doi.org/10.1080/03610926.2018.1549252.

14

Silva, Lana Mirian Santos da, Luiz Carlos Estraviz Rodriguez, José Vicente Caixeta Filho, and Simone Carolina Bauch. "Fitting a taper function to minimize the sum of absolute deviations." Scientia Agricola 63, no. 5 (2006): 460–70. http://dx.doi.org/10.1590/s0103-90162006000500007.

Abstract:
Multiple product inventories of forests require accurate estimates of the diameter, length and volume of each product. Taper functions have been used to describe tree form precisely, since they provide estimates for the diameter at any height or the height at any diameter. This study applied a goal programming technique to estimate the parameters of two taper functions to describe individual tree forms. The goal programming formulation generates parameters that minimize total absolute deviations (MOTAD). The parameters generated by the MOTAD method were compared to those of the ordinary least squares (OLS) method. The analysis used a set of 178 trees cut from cloned eucalyptus plantations in the southern part of the state of Bahia, Brazil. The estimated parameters for the two taper functions were very similar when the two methods were compared. There was no significant difference between the two fitting methods according to the statistics used to evaluate the quality of the generated estimates. OLS and MOTAD were equally precise in estimating diameters and volumes outside and inside bark.
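The sum-of-absolute-deviations criterion this abstract fits can be minimized in several ways. A hedged numpy sketch using iteratively reweighted least squares (a generic L1-fitting device, not the paper's goal-programming formulation; names are illustrative):

```python
import numpy as np

def lad_fit(X, y, iters=100, eps=1e-8):
    """Choose beta to minimize sum(|y - X @ beta|) by iteratively
    reweighted least squares: weight each point by 1/|residual|
    (floored at eps) and re-solve a weighted least squares problem."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]  # OLS starting point
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(y - X @ beta), eps)
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
    return beta
```

A single gross outlier barely moves the L1 fit, while it would dominate an OLS fit of the same data.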
15

Zhu, Qianqian, Ruochen Zeng, and Guodong Li. "Bootstrap Inference for GARCH Models by the Least Absolute Deviation Estimation." Journal of Time Series Analysis 41, no. 1 (2019): 21–40. http://dx.doi.org/10.1111/jtsa.12474.

16

Caner, Mehmet. "A note on least absolute deviation estimation of a threshold model." Econometric Theory 18, no. 3 (2002): 800–814. http://dx.doi.org/10.1017/s0266466602183113.

Abstract:
This paper develops the limit law for the least absolute deviation estimator of the threshold parameter in linear regression. In this respect, we extend the literature of threshold models. The existing literature considers only the least squares estimation of the threshold parameter (see Chan, 1993, Annals of Statistics 21, 520–533; Hansen, 2000, Econometrica 68, 575–605). This result is useful because in the case of heavy-tailed errors there is an efficiency loss resulting from the use of least squares. Also, for the first time in the literature, we derive the limit law for the likelihood ratio test for the threshold parameter using the least absolute deviation technique.
17

Abdullahi, Ibrahim, and Abubakar Yahaya. "Analysis of quantile regression as alternative to ordinary least squares." International Journal of Advanced Statistics and Probability 3, no. 2 (2015): 138. http://dx.doi.org/10.14419/ijasp.v3i2.4686.

Abstract:
In this article, an alternative to ordinary least squares (OLS) regression based on the analytical solution in the Statgraphics software is considered, and this alternative is none other than the quantile regression (QR) model. We also present a goodness-of-fit statistic as well as approximate distributions of the associated test statistics for the parameters. Furthermore, we suggest a goodness-of-fit statistic called the least absolute deviation (LAD) coefficient of determination. The procedure is well presented, illustrated and validated by a numerical example based on a publicly available dataset on fuel consumption in miles per gallon in highway driving.
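A LAD coefficient of determination of the kind suggested here is commonly defined against the median rather than the mean. A sketch of one common form (the article's exact definition may differ):

```python
import numpy as np

def lad_r2(y, y_hat):
    """LAD analogue of R^2: one minus the ratio of the sum of
    absolute residuals to the sum of absolute deviations from the
    median (the best constant fit under the L1 criterion)."""
    y = np.asarray(y, float)
    y_hat = np.asarray(y_hat, float)
    return 1.0 - np.abs(y - y_hat).sum() / np.abs(y - np.median(y)).sum()
```

Like the classical R², it equals 1 for a perfect fit and 0 for a fit no better than the best constant.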
18

Wu, Rongning, and Richard A. Davis. "Least absolute deviation estimation for general autoregressive moving average time-series models." Journal of Time Series Analysis 31, no. 2 (2010): 98–112. http://dx.doi.org/10.1111/j.1467-9892.2009.00648.x.

19

Wang, Lihong, and Jinde Wang. "The limiting behavior of least absolute deviation estimators for threshold autoregressive models." Journal of Multivariate Analysis 89, no. 2 (2004): 243–60. http://dx.doi.org/10.1016/j.jmva.2004.02.006.

20

Wu, Rongning. "Least absolute deviation estimation for general ARMA time series models with infinite variance." Statistica Sinica 21, no. 2 (2011): 779. http://dx.doi.org/10.5705/ss.2011.035a.

21

Catanzaro, Denys, and James C. Taylor. "The scaling of dispersion and correlation: A comparison of least-squares and absolute-deviation statistics." British Journal of Mathematical and Statistical Psychology 49, no. 1 (1996): 171–88. http://dx.doi.org/10.1111/j.2044-8317.1996.tb01081.x.

22

Smit, Merijn, and Konrad Kuijken. "Chasing the peak: optimal statistics for weak shear analyses." Astronomy & Astrophysics 609 (January 2018): A103. http://dx.doi.org/10.1051/0004-6361/201731410.

Abstract:
Context. Weak gravitational lensing analyses are fundamentally limited by the intrinsic distribution of galaxy shapes. It is well known that this distribution of galaxy ellipticity is non-Gaussian, and the traditional estimation methods, explicitly or implicitly assuming Gaussianity, are not necessarily optimal. Aims. We aim to explore alternative statistics for samples of ellipticity measurements. An optimal estimator needs to be asymptotically unbiased, efficient, and robust in retaining these properties for various possible sample distributions. We take the non-linear mapping of gravitational shear and the effect of noise into account. We then discuss how the distribution of individual galaxy shapes in the observed field of view can be modeled by fitting Fourier modes to the shear pattern directly. This allows scientific analyses using statistical information of the whole field of view, instead of locally sparse and poorly constrained estimates. Methods. We simulated samples of galaxy ellipticities, using both theoretical distributions and data for ellipticities and noise. We determined the possible bias Δe, the efficiency η and the robustness of the least absolute deviations, the biweight, and the convex hull peeling (CHP) estimators, compared to the canonical weighted mean. Using these statistics for regression, we have shown the applicability of direct Fourier mode fitting. Results. We find an improved performance of all estimators, when iteratively reducing the residuals after de-shearing the ellipticity samples by the estimated shear, which removes the asymmetry in the ellipticity distributions. We show that these estimators are then unbiased in the absence of noise, and decrease noise bias by more than ~30%. Our results show that the CHP estimator distribution is skewed, but still centered around the underlying shear, and its bias least affected by noise. 
We find the least absolute deviations estimator to be the most efficient estimator in almost all cases, except in the Gaussian case, where it is still competitive (0.83 < η < 5.1) and therefore robust. These results hold when fitting Fourier modes, where amplitudes of variation in ellipticity are determined to the order of 10⁻³. Conclusions. The peak of the ellipticity distribution is a direct tracer of the underlying shear and unaffected by noise, and we have shown that estimators that are sensitive to a central cusp perform more efficiently, potentially reducing uncertainties by more than 50% and significantly decreasing noise bias. These results become increasingly important as survey sizes increase and systematic issues in shape measurements decrease.
23

Herce, Miguel A. "Asymptotic Theory of LAD Estimation in a Unit Root Process with Finite Variance Errors." Econometric Theory 12, no. 1 (1996): 129–53. http://dx.doi.org/10.1017/s0266466600006472.

Abstract:
In this paper we derive the asymptotic distribution of the least absolute deviations (LAD) estimator of the autoregressive parameter under the unit root hypothesis, when the errors are assumed to have finite variances, and present LAD-based unit root tests, which, under heavy-tailed errors, are expected to be more powerful than tests based on least squares. The limiting distribution of the LAD estimator is that of a functional of a bivariate Brownian motion, similar to those encountered in cointegrating regressions. By appropriately correcting for serial correlation and other distributional parameters, the test statistics introduced here are found to have either conditional or unconditional normal limiting distributions. The results of the paper complement similar ones obtained by Knight (1991, Canadian Journal of Statistics 17, 261-278) for infinite variance errors. A simulation study is conducted to investigate the finite sample properties of our tests.
24

Zhao, Quanshui. "Asymptotically efficient median regression in the presence of heteroskedasticity of unknown form." Econometric Theory 17, no. 4 (2001): 765–84. http://dx.doi.org/10.1017/s0266466601174050.

Abstract:
We consider a linear model with heteroskedasticity of unknown form. Using Stone's (1977, Annals of Statistics 5, 595–645) k nearest neighbors (k-NN) estimation approach, the optimal weightings for efficient least absolute deviation regression are estimated consistently using residuals from preliminary estimation. The reweighted least absolute deviation or median regression estimator with the estimated weights is shown to be equivalent to the estimator using the true but unknown weights under mild conditions. Asymptotic normality of the estimators is also established. In the finite sample case, the proposed estimators are found to outperform the generalized least squares method of Robinson (1987, Econometrica 55, 875–891) and the one-step estimator of Newey and Powell (1990, Econometric Theory 6, 295–317) based on a Monte Carlo simulation experiment.
25

Chen, Dan, and Xiangfeng Yang. "Maximum likelihood estimation for uncertain autoregressive model with application to carbon dioxide emissions." Journal of Intelligent & Fuzzy Systems 40, no. 1 (2021): 1391–99. http://dx.doi.org/10.3233/jifs-201724.

Abstract:
The objective of uncertain time series analysis is to explore the relationship between the imprecise observation data over time and to predict future values, where these data are uncertain variables in the sense of uncertainty theory. In this paper, the method of maximum likelihood is used to estimate the unknown parameters in the uncertain autoregressive model, and the unknown parameters of uncertainty distributions of the disturbance terms are simultaneously obtained. Based on the fitted autoregressive model, the forecast value and confidence interval of the future data are derived. Besides, the mean squared error is proposed to measure the goodness of fit among different estimation methods, and an algorithm is introduced. Finally, the comparative analysis of the least squares, least absolute deviations, and maximum likelihood estimations are given, and two examples are presented to verify the feasibility of this approach.
26

Li, Guodong, and Wai Keung Li. "Diagnostic checking for time series models with conditional heteroscedasticity estimated by the least absolute deviation approach." Biometrika 92, no. 3 (2005): 691–701. http://dx.doi.org/10.1093/biomet/92.3.691.

27

Li, G., and W. K. Li. "Least absolute deviation estimation for fractionally integrated autoregressive moving average time series models with conditional heteroscedasticity." Biometrika 95, no. 2 (2008): 399–414. http://dx.doi.org/10.1093/biomet/asn014.

28

Lerspipatthananon, Wanida, and Peerayuth Charnsethikul. "Using column generation technique to estimate probability statistics in transition matrix of large scale Markov chain with least absolute deviation criteria." Journal of Mathematics and Statistics 10, no. 3 (2014): 331–38. http://dx.doi.org/10.3844/jmssp.2014.331.338.

29

Krivulin, Nikolai. "Using Parameter Elimination to Solve Discrete Linear Chebyshev Approximation Problems." Mathematics 8, no. 12 (2020): 2210. http://dx.doi.org/10.3390/math8122210.

Abstract:
We consider discrete linear Chebyshev approximation problems in which the unknown parameters of a linear function are fitted by minimizing the least maximum absolute deviation of errors. Such problems find application in the solution of overdetermined systems of linear equations that appear in many practical contexts. The least maximum absolute deviation estimator is used in regression analysis in statistics when the distribution of errors has bounded support. To derive a direct solution of the problem, we propose an algebraic approach based on a parameter elimination technique. As a key component of the approach, an elimination lemma is proved to handle the problem by reducing it to a problem with one parameter eliminated, together with a box constraint imposed on this parameter. We demonstrate the application of the lemma to the direct solution of linear regression problems with one and two parameters. We develop a procedure to solve multidimensional approximation (multiple linear regression) problems in a finite number of steps. The procedure follows a method that comprises two phases: backward elimination and forward substitution of parameters. We describe the main components of the procedure and estimate its computational complexity. We implement symbolic computations in MATLAB to obtain exact solutions for two numerical examples.
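For intuition, the one-parameter case mentioned in the abstract has a closed form: the minimax estimate of a constant is the midrange. A small sketch (a textbook result, not the paper's elimination procedure):

```python
import numpy as np

def minimax_constant(y):
    """Least maximum absolute deviation fit of a constant c to data y:
    c is the midrange, and the attained max |y_i - c| is half the range."""
    y = np.asarray(y, float)
    c = 0.5 * (y.min() + y.max())
    return c, 0.5 * (y.max() - y.min())
```

Unlike the median (the L1 solution), the Chebyshev solution depends only on the two extreme observations, which is why it suits error distributions with bounded support.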
30

Wada, Kazumi, Keiichiro Sakashita, and Hiroe Tsubaki. "Robust Estimation for a Generalised Ratio Model." Austrian Journal of Statistics 50, no. 1 (2021): 74–87. http://dx.doi.org/10.17713/ajs.v50i1.994.

Abstract:
It is known that data such as business sales and household income need transformation prior to regression estimation, as the data has a homoscedastic error. However, data transformations make the estimation of the mean and total unstable. Therefore, the ratio model is often used for imputation in the field of official statistics to avoid this problem. Our study aims to robustify the estimator following the ratio model by means of M-estimation. Reformulation of the conventional ratio model with a homoscedastic quasi-error term provides quasi-residuals which can be used as a measure of outlyingness, just as in a linear regression model. A generalisation of the model, which accommodates varied error terms with different heteroscedasticity, is also proposed. Functions for the robustified estimators of the generalised ratio model are implemented by the iteratively re-weighted least squares algorithm in the R environment and illustrated using random datasets. Monte Carlo simulation confirms the accuracy of the proposed estimators, as well as their computational efficiency. A comparison of the scale parameters between the average absolute deviation (AAD) and median absolute deviation (MAD) is made regarding Tukey's biweight function. The results with Huber's weight function are also provided for reference. The proposed robust estimator of the generalised ratio model is used for imputation of major corporate accounting items of the 2016 Economic Census for Business Activity in Japan.
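The weighting step of such an M-estimator is compact to write down. A standard numpy sketch of Tukey biweight weights with a MAD-based scale (an illustration of the general technique, not the authors' R implementation; the 4.685 tuning constant is the usual default):

```python
import numpy as np

def tukey_weights(r, c=4.685):
    """Tukey biweight weights for residuals r, using the MAD as a
    robust scale (1.4826 makes MAD consistent for Gaussian errors).
    Points with |r| beyond c * scale get weight zero."""
    r = np.asarray(r, float)
    s = 1.4826 * np.median(np.abs(r - np.median(r)))  # MAD scale
    u = r / (c * s)
    return np.where(np.abs(u) < 1.0, (1.0 - u**2) ** 2, 0.0)
```

In iteratively re-weighted least squares these weights are recomputed from the current residuals at each iteration.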
31

Khan, Naveed Ahmad, Muhammad Sulaiman, Carlos Andrés Tavera Romero, and Fawaz Khaled Alarfaj. "Numerical Analysis of Electrohydrodynamic Flow in a Circular Cylindrical Conduit by Using Neuro Evolutionary Technique." Energies 14, no. 22 (2021): 7774. http://dx.doi.org/10.3390/en14227774.

Abstract:
This paper analyzes the mathematical model of electrohydrodynamic (EHD) fluid flow in a circular cylindrical conduit with an ion drag configuration. The phenomenon was modelled as a nonlinear differential equation. Furthermore, an application of artificial neural networks (ANNs) with a generalized normal distribution optimization algorithm (GNDO) and sequential quadratic programming (SQP) were utilized to suggest approximate solutions for the velocity, displacements, and acceleration profiles of the fluid by varying the Hartmann electric number (Ha2) and the strength of nonlinearity (α). ANNs were used to model the fitness function for the governing equation in terms of mean square error (MSE), which was further optimized initially by GNDO to exploit the global search. Then SQP was implemented to complement its local convergence. Numerical solutions obtained by the design scheme were compared with RK-4, the least square method (LSM), and the orthonormal Bernstein collocation method (OBCM). Stability, convergence, and robustness of the proposed algorithm were endorsed by the statistics and analysis on results of absolute errors, mean absolute deviation (MAD), Theil’s inequality coefficient (TIC), and error in Nash Sutcliffe efficiency (ENSE).
32

Sullivan, Patrick W., Julia F. Slejko, Mark J. Sculpher, and Vahram Ghushchyan. "Catalogue of EQ-5D Scores for the United Kingdom." Medical Decision Making 31, no. 6 (2011): 800–804. http://dx.doi.org/10.1177/0272989x11401031.

Abstract:
Background. The National Institute for Health and Clinical Excellence (NICE) has issued guidance on cost-effectiveness analyses, suggesting that preference-based health-related quality of life (HRQL) weights or utilities be based on UK community preferences, preferably using the EQ-5D; ideally all analyses would use the same system for deriving HRQL weights, to encourage consistency and comparability across analyses. Development of a catalogue of EQ-5D scores for a range of health conditions based on UK preferences would help achieve many of these goals. Objective. To provide a UK-based catalogue of EQ-5D index scores. Methods. Methods were consistent with the previously published catalogue of EQ-5D scores for the US. Community-based UK preferences were applied to EQ-5D descriptive questionnaire responses in the US-based Medical Expenditure Panel Survey (MEPS). Ordinary least squares (OLS), Tobit, and censored least absolute deviations (CLAD) regression methods were used to estimate the ‘marginal disutility’ of each condition controlling for covariates. Results. Pooled MEPS files (2000-2003) resulted in 79,522 individuals with complete EQ-5D scores. Marginal disutilities for 135 chronic ICD-9 and 100 CCC codes are provided. Unadjusted descriptive statistics including the mean, median, and 25th and 75th percentiles are also reported. Conclusion. This research provides community-based EQ-5D index scores for a wide variety of chronic conditions that can be used to estimate QALYs in cost-effectiveness analyses in the UK. Although using EQ-5D questionnaire responses from the US-based MEPS is less than ideal, the estimates approximate the HRQL guidelines issued by NICE and provide an easily accessible “off-the-shelf” resource for cost-effectiveness and public health applications.
33

Nyquist, Hans. "Least orthogonal absolute deviations." Computational Statistics & Data Analysis 6, no. 4 (1988): 361–67. http://dx.doi.org/10.1016/0167-9473(88)90076-x.

34

Xia, Yan, Shuguo Pan, Xiaolin Meng, Wang Gao, and He Wen. "Robust Statistical Detection of GNSS Multipath Using Inter-Frequency C/N0 Differences." Remote Sensing 12, no. 20 (2020): 3388. http://dx.doi.org/10.3390/rs12203388.

Abstract:
Multipath detection and mitigation are crucial issues for global navigation satellite system (GNSS) high-precision positioning. The multi-frequency carrier power-to-noise density ratio (C/N0)-based multipath detection technique has achieved good results in real-time static and low-dynamic applications, and shows good practicality because of its low computational load and minimal additional hardware requirements. However, the classic multipath detection method based on inter-frequency C/N0 differences directly employs the 3σ rule to determine the threshold without considering the distribution of the detection statistics and their variation with elevation angle, and ignores the interference of outliers with the reference functions. A robust multipath detection method is proposed in this paper. The reference functions of the C/N0 differences are fitted using least absolute deviation (LAD) to obtain more accurate nominal values. According to the skew characteristics of the detection statistics, a medcouple (MC)-based adjusted boxplot is employed to determine the threshold. The performance of the new detection method is verified in multipath environments. The experimental results show that, compared with the classic method, the new multipath detector is strongly robust and responds more accurately to large changes in multipath (MP) combination values at most elevation angles. It is sensitive to short-delay multipath and diffraction, and is an important supplement to multipath detection techniques.
35

Sokolovsky, K. V., A. Z. Bonanos, P. Gavras, et al. "The Hubble Catalog of Variables (HCV)." Proceedings of the International Astronomical Union 14, S339 (2017): 91–94. http://dx.doi.org/10.1017/s1743921318002296.

Abstract:
The Hubble Source Catalog (HSC) combines lists of sources detected on images obtained with the WFPC2, ACS and WFC3 instruments aboard the Hubble Space Telescope (HST) and now available in the Hubble Legacy Archive. The catalogue contains time-domain information for about two million of its sources detected using the same instrument and filter on at least five HST visits. The Hubble Catalog of Variables (HCV) aims to identify HSC sources showing significant brightness variations. A magnitude-dependent threshold in the median absolute deviation of photometric measurements (an outlier-resistant measure of light-curve scatter) is adopted as the variability detection statistic. It is supplemented with a cut in reduced χ² that removes sources with large photometric errors. A pre-processing procedure involving bad image identification, outlier rejection and computation of local magnitude zero-point corrections is applied to the HSC light curves before computing the variability detection statistics. About 52,000 HSC sources have been identified as candidate variables, among which 7,800 show variability in more than one filter. Visual inspection suggests that ∼70% of the candidates detected in multiple filters are true variables, while the remaining ∼30% are sources with aperture photometry corrupted by blending, imaging artefacts or image-processing anomalies. The candidate variables have AB magnitudes in the range 15–27 mag, with a median of 22 mag. Among them are stars in our own and nearby galaxies, as well as active galactic nuclei.
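The variability statistic described here is one line of numpy. A sketch (the HCV applies a magnitude-dependent threshold to this quantity; that calibration is not reproduced here):

```python
import numpy as np

def mad_scatter(mags):
    """Median absolute deviation of a light curve's magnitudes --
    an outlier-resistant measure of scatter: a single corrupted
    measurement barely moves it, unlike the standard deviation."""
    m = np.asarray(mags, float)
    return np.median(np.abs(m - np.median(m)))
```

For a flat light curve with one corrupted point, the MAD stays at the noise level while the standard deviation is inflated by the outlier.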
36

Powell, Roger, Eleanor C. R. Green, Estephany Marillo Sialer, and Jon Woodhead. "Robust isochron calculation." Geochronology 2, no. 2 (2020): 325–42. http://dx.doi.org/10.5194/gchron-2-325-2020.

Abstract:
The standard classical statistics approach to isochron calculation assumes that the distribution of uncertainties on the data arising from isotopic analysis is strictly Gaussian. This effectively excludes datasets that have more scatter from consideration, even though many appear to have age significance. A new approach to isochron calculations is developed in order to circumvent this problem, requiring only that the central part of the uncertainty distribution of the data defines a “spine” in the trend of the data. This central spine can be Gaussian but this is not a requirement. This approach significantly increases the range of datasets from which age information can be extracted but also provides seamless integration with well-behaved datasets and thus all legacy age determinations. The approach is built on the robust statistics of Huber (1981) but using the data uncertainties for the scale of data scatter around the spine rather than a scale derived from the scatter itself, ignoring the data uncertainties. This robust data fitting reliably determines the position of the spine when applied to data with outliers but converges on the classical statistics approach for datasets without outliers. The spine width is determined by a robust measure, the normalised median absolute deviation of the distances of the data points to the centre of the spine, divided by the uncertainties on the distances. A test is provided to ascertain that there is a spine in the data, requiring that the spine width is consistent with the uncertainties expected for Gaussian-distributed data. An iteratively reweighted least squares algorithm is presented to calculate the position of the robust line and its uncertainty, accompanied by an implementation in Python.
37

Håkansson, Nina, Claudia Adok, Anke Thoss, Ronald Scheirer, and Sara Hörnquist. "Neural network cloud top pressure and height for MODIS." Atmospheric Measurement Techniques 11, no. 5 (2018): 3177–96. http://dx.doi.org/10.5194/amt-11-3177-2018.

Abstract:
Abstract. Cloud top height retrieval from imager instruments is important for nowcasting and for satellite climate data records. A neural network approach for cloud top height retrieval from the imager instrument MODIS (Moderate Resolution Imaging Spectroradiometer) is presented. The neural networks are trained using cloud top layer pressure data from the CALIOP (Cloud-Aerosol Lidar with Orthogonal Polarization) dataset. Results are compared with two operational reference algorithms for cloud top height: the MODIS Collection 6 Level 2 height product and the cloud top temperature and height algorithm in the 2014 version of the NWC SAF (EUMETSAT (European Organization for the Exploitation of Meteorological Satellites) Satellite Application Facility on Support to Nowcasting and Very Short Range Forecasting) PPS (Polar Platform System). All three techniques are evaluated using both CALIOP and CPR (Cloud Profiling Radar for CloudSat (CLOUD SATellite)) height. Instruments like AVHRR (Advanced Very High Resolution Radiometer) and VIIRS (Visible Infrared Imaging Radiometer Suite) contain fewer channels useful for cloud top height retrievals than MODIS, therefore several different neural networks are investigated to test how infrared channel selection influences retrieval performance. Also a network with only channels available for the AVHRR1 instrument is trained and evaluated. To examine the contribution of different variables, networks with fewer variables are trained. It is shown that variables containing imager information for neighboring pixels are very important. The error distributions of the involved cloud top height algorithms are found to be non-Gaussian. Different descriptive statistic measures are presented and it is exemplified that bias and SD (standard deviation) can be misleading for non-Gaussian distributions. 
The median and mode are found to better describe the central tendency of the error distributions, and the IQR (interquartile range) and MAE (mean absolute error) are found to give the most useful information on the spread of the errors. For all descriptive statistics presented (MAE, IQR, RMSE (root mean square error), SD, mode, median, bias, and the percentage of absolute errors above 0.25, 0.5, 1 and 2 km), the neural networks perform better than the reference algorithms when validated with both CALIOP and CPR (CloudSat). The neural networks using the brightness temperatures at 11 and 12 µm show at least 32 % (or 623 m) lower MAE compared to the two operational reference algorithms when validating with CALIOP height. Validation with CPR (CloudSat) height gives at least a 25 % (or 430 m) reduction in MAE.
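As a minimal illustration of the point above about non-Gaussian errors, the numpy sketch below (the function name `error_summary` and the sample data are ours, not from the paper) computes both the Gaussian-style measures (bias, SD) and the robust ones (median, MAE, IQR) for a skewed error sample:

```python
import numpy as np

def error_summary(errors):
    """Summarise an error distribution with both Gaussian and robust measures.

    For skewed (non-Gaussian) errors, bias (mean) and SD can be misleading;
    median, MAE and IQR describe the distribution more faithfully.
    """
    errors = np.asarray(errors, dtype=float)
    q25, q75 = np.percentile(errors, [25, 75])
    return {
        "bias": errors.mean(),                  # mean error
        "sd": errors.std(ddof=1),               # standard deviation
        "median": np.median(errors),            # robust central tendency
        "mae": np.abs(errors).mean(),           # mean absolute error
        "iqr": q75 - q25,                       # interquartile range
        "rmse": np.sqrt((errors ** 2).mean()),  # root mean square error
    }

# A heavily skewed error sample: most errors near zero, two large positives.
sample = np.array([-0.1, 0.0, 0.1, 0.2, -0.2, 0.1, 5.0, 6.0])
stats = error_summary(sample)
```

For this sample the bias (about 1.39) is dominated by the two large positive errors, while the median (0.1) still reflects the bulk of the distribution.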
APA, Harvard, Vancouver, ISO, and other styles
38

Jiang, Feng, Lin Du, Fan Yang, and Zi-Chen Deng. "Regularized least absolute deviation-based sparse identification of dynamical systems." Chaos: An Interdisciplinary Journal of Nonlinear Science 33, no. 1 (2023): 013103. http://dx.doi.org/10.1063/5.0130526.

Full text
Abstract:
This work develops a regularized least absolute deviation-based sparse identification of dynamics (RLAD-SID) method to address outlier problems in the classical metric-based loss function and the sparsity constraint framework. Our method uses an absolute deviation loss as a substitute for the Euclidean loss. Moreover, a corresponding computationally efficient optimization algorithm is derived on the basis of the alternating direction method of multipliers, owing to the non-smoothness of both the newly proposed loss function and the regularization term. Numerical experiments are performed to evaluate the effectiveness of RLAD-SID on several exemplary nonlinear dynamical systems, such as the van der Pol equation, the Lorenz system, and the 1D discrete logistic map. Furthermore, detailed numerical comparisons with other existing metric-based sparse regression methods are provided. Numerical results demonstrate that (1) RLAD-SID shows significant robustness toward a large outlier and (2) RLAD-SID can be seen as a particular metric-based sparse regression strategy that exhibits the effectiveness of the metric-based sparse regression framework for solving outlier problems in dynamical system identification.
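The abstract does not reproduce the authors' ADMM solver. As a simpler, hedged illustration of the least-absolute-deviation loss it builds on, the sketch below (function name `lad_fit` and the synthetic data are ours) fits min over w of the sum of |y - Xw| by linear programming, which is robust to a single gross outlier:

```python
import numpy as np
from scipy.optimize import linprog

def lad_fit(X, y):
    """Least absolute deviations fit: minimize sum|y - Xw| as a linear program."""
    n, p = X.shape
    # Variables: [w (p entries, free), t (n entries, >= 0)]; minimize sum(t),
    # where t_i bounds |y_i - x_i'w| from above via two inequality rows.
    c = np.concatenate([np.zeros(p), np.ones(n)])
    A_ub = np.block([[X, -np.eye(n)],    # Xw - t <= y
                     [-X, -np.eye(n)]])  # -Xw - t <= -y
    b_ub = np.concatenate([y, -y])
    bounds = [(None, None)] * p + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:p]

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.normal(size=50)])
w_true = np.array([1.0, 2.0])
y = X @ w_true + 0.01 * rng.normal(size=50)
y[0] += 100.0  # a single large outlier
w_hat = lad_fit(X, y)  # close to w_true despite the outlier
```

A squared-error (least squares) fit on the same data would be pulled far from `w_true` by the corrupted observation; the absolute-deviation loss caps its influence.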
APA, Harvard, Vancouver, ISO, and other styles
39

Santos, André Luiz Pinto dos, Guilherme Rocha Moreira, Cicero Carlos Ramos de Brito, et al. "Method to generate growth and degrowth models obtained from differential equations applied to agrarian sciences." Semina: Ciências Agrárias 39, no. 6 (2018): 2659. http://dx.doi.org/10.5433/1679-0359.2018v39n6p2659.

Full text
Abstract:
This study aims to propose a method to generate growth and degrowth models using differential equations, as well as to present a model based on the proposed method, compare it with the classic mathematical models Logistic, Von Bertalanffy, Brody, Gompertz, and Richards, and identify the one that best represents the mean growth curve. To that end, data on Undefined Breed (UB) goats and Santa Inês sheep from the works of Cavalcante et al. (2013) and Sarmento et al. (2006a), respectively, were used. Goodness-of-fit was measured using residual mean squares (RMS), the Akaike information criterion (AIC), the Bayesian information criterion (BIC), mean absolute deviation (MAD), and the adjusted coefficient of determination. The models’ parameters (?, weight at adulthood; ?, an integration constant; ?, a shape parameter with no biological interpretation; k, maturation rate; and m, inflection point) were estimated by the least squares method using the Levenberg-Marquardt algorithm on the software IBM SPSS Statistics 1.0. It was observed that the proposed model was superior to the others for studying the growth curves of goats and sheep according to the methodology and conditions under which the present study was carried out.
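A hedged sketch of the kind of fit described: SciPy's Levenberg-Marquardt via `curve_fit`, applied to a Gompertz curve in one common parameterisation, on synthetic data (not the study's goat or sheep records, and not its exact parameter symbols):

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, A, B, k):
    """Gompertz growth curve: A = asymptotic (adult) weight, B = integration
    constant, k = maturation rate. A common parameterisation; the paper's
    exact symbols are not reproduced here."""
    return A * np.exp(-B * np.exp(-k * t))

# Synthetic age (days) / weight (kg) data, illustrative only.
t = np.linspace(0, 300, 30)
w = gompertz(t, 30.0, 3.0, 0.02) + np.random.default_rng(1).normal(0, 0.3, t.size)

# Levenberg-Marquardt least squares, as used in the study (method="lm").
popt, _ = curve_fit(gompertz, t, w, p0=[25.0, 2.0, 0.01], method="lm")

# Mean absolute deviation (MAD), one of the goodness-of-fit measures cited.
resid = w - gompertz(t, *popt)
mad = np.abs(resid).mean()
```

The same pattern applies to the Logistic, Brody or Richards curves by swapping the model function and starting values `p0`.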
APA, Harvard, Vancouver, ISO, and other styles
40

Saha, Bikash Chandra, Joshuva Arockia Dhanraj, M. Sujatha, et al. "Investigating Rotor Conditions on Wind Turbines Using Integrating Tree Classifiers." International Journal of Photoenergy 2022 (June 9, 2022): 1–14. http://dx.doi.org/10.1155/2022/5389574.

Full text
Abstract:
Renewable wind power is a productive and feasible way to manage the energy crisis and global warming. The wind turbine's blades are essential components. The dimensions of wind turbine blades have been increased, with blade lengths varying from approximately 25 m up to approximately 100 m or even greater, with the specific purpose of increasing energy efficiency. Because wind turbine blades tend to be highly stressed by environmental conditions, they must be constantly tested, inspected, and monitored to ensure safe operation. This research presents a machine learning methodology for the classification of different blade failure conditions during turbine operation. Five defects were considered for the diagnosis study of defective wind turbine rotor blades: blade crack, erosion, loose hub-blade contact, angle twist, and blade bend. Statistical features were drawn from the recorded vibration signals, and the important features were selected through the J48 classifier. Eight tree-based classifiers were used to categorize the state of the rotor blades. Among the classifiers, the least absolute deviation tree performed best, with a classification accuracy of 90% (Kappa statistic = 0.88, MAE = 0.0362, and RMSE = 0.1704) and a computational time of 0.06 s.
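The paper does not list its exact feature set; the sketch below (function name `vibration_features` and the synthetic signals are ours) shows the kind of statistical features commonly drawn from vibration signals, and why impulsive blade faults show up in them:

```python
import numpy as np
from scipy import stats as sps

def vibration_features(signal):
    """Statistical features commonly extracted from a vibration signal
    for fault classification (illustrative, not the paper's exact set)."""
    signal = np.asarray(signal, dtype=float)
    rms = np.sqrt((signal ** 2).mean())
    return {
        "mean": signal.mean(),
        "std": signal.std(ddof=1),
        "skewness": sps.skew(signal),
        "kurtosis": sps.kurtosis(signal),        # impulsive faults raise this
        "rms": rms,
        "crest_factor": np.abs(signal).max() / rms,
    }

# A healthy-blade-like signal vs. one with periodic impacts (e.g. a crack).
rng = np.random.default_rng(2)
healthy = rng.normal(0, 1, 4096)
faulty = healthy.copy()
faulty[::512] += 8.0  # inject periodic impulses

feats_h = vibration_features(healthy)
feats_f = vibration_features(faulty)
```

Feature vectors like these would then be fed to the tree classifiers; the impulsive fault is clearly separable on kurtosis and crest factor alone.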
APA, Harvard, Vancouver, ISO, and other styles
41

Martin, R. Douglas, and Daniel Z. Xia. "Efficient bias robust regression for time series factor models." Journal of Asset Management 23, no. 3 (2022): 215–34. http://dx.doi.org/10.1057/s41260-022-00258-0.

Full text
Abstract:
We introduce a robust regression estimator for time series factor models called the mOpt estimator. This estimator minimizes the maximum bias due to outlier-generating distribution deviations from a standard normal errors distribution model, while at the same time having high normal-distribution efficiency. We demonstrate the efficacy of the mOpt estimator in comparison with the non-robust least squares (LS) estimator in applications to both single-factor and multifactor time series models. For the case of single-factor CAPM models, we compared mOpt and LS estimates for cross sections of liquid stocks from the CRSP database in each contiguous two-year interval from 1963 to 1980. The results show that absolute differences between the two estimates greater than 0.3 occur for about 18% of the stocks, and differences greater than 0.5 occur for about 7.5% of the stocks. Our application of the mOpt estimator to multifactor models focuses on fitting the Fama-French 3-factor and the Fama-French-Carhart 4-factor models to weekly stock returns for the year 2008, using both the robust t-statistics associated with the mOpt estimates and a new statistical test for differences between the mOpt and LS coefficients. The results demonstrate the efficacy of the mOpt estimator in providing better model fits than the LS estimates, which are adversely influenced by outliers. Finally, since model selection is an important aspect of time series factor model fitting, we introduce a new robust prediction-error-based model selection criterion called the Robust Final Prediction Error (RFPE), which makes natural use of the mOpt regression estimator. When applied to the 4-factor model, the RFPE finds as the best subset model the one that contains the Market, SMB and MOM factors, not the three Fama-French factors Market, SMB and HML. We anticipate that RFPE will prove to be quite useful for model selection of time series factor models.
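The mOpt estimator itself is not reproduced here. As a hedged stand-in showing the bias-robust idea on a single-factor (CAPM-style) regression, the sketch below implements a classic Huber M-estimator by iteratively reweighted least squares (the tuning constant c = 1.345 gives roughly 95% normal-distribution efficiency); all names and data are ours:

```python
import numpy as np

def huber_irls(X, y, c=1.345, n_iter=50):
    """Huber M-estimator via iteratively reweighted least squares.

    Illustrative stand-in for a bias-robust regression estimator (not mOpt):
    observations with large scaled residuals get weight c/|u| < 1, which
    bounds the influence of outliers on the fit.
    """
    w_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_iter):
        r = y - X @ w_hat
        s = np.median(np.abs(r - np.median(r))) / 0.6745  # robust scale (MADN)
        u = r / (s + 1e-12)
        wts = np.where(np.abs(u) <= c, 1.0, c / np.abs(u))  # Huber weights
        WX = X * wts[:, None]
        w_hat = np.linalg.solve(X.T @ WX, WX.T @ y)
    return w_hat

# Single-factor returns with a few gross outliers in the asset returns.
rng = np.random.default_rng(3)
mkt = rng.normal(0, 0.02, 200)
ret = 0.001 + 1.2 * mkt + rng.normal(0, 0.005, 200)
ret[:5] += 0.20  # five outlying weekly returns
X = np.column_stack([np.ones(200), mkt])

beta_ls = np.linalg.lstsq(X, ret, rcond=None)[0]  # pulled by outliers
beta_rob = huber_irls(X, ret)                     # close to (0.001, 1.2)
```

The comparison of `beta_ls` and `beta_rob` mirrors the paper's LS-versus-robust comparison: the LS intercept absorbs the outliers while the robust fit stays near the true coefficients.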
APA, Harvard, Vancouver, ISO, and other styles
42

Lee, Chau-Kwor, and J. Keith Ord. "Discriminant Analysis Using Least Absolute Deviations." Decision Sciences 21, no. 1 (1990): 86–96. http://dx.doi.org/10.1111/j.1540-5915.1990.tb00318.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Späth, H. "Clusterwise linear least absolute deviations regression." Computing 37, no. 4 (1986): 371–77. http://dx.doi.org/10.1007/bf02251095.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Sigauke, Caston, Murendeni Nemukula, and Daniel Maposa. "Probabilistic Hourly Load Forecasting Using Additive Quantile Regression Models." Energies 11, no. 9 (2018): 2208. http://dx.doi.org/10.3390/en11092208.

Full text
Abstract:
Short-term hourly load forecasting in South Africa using additive quantile regression (AQR) models is discussed in this study. The modelling approach allows for easy interpretability and accounts for residual autocorrelation in the joint modelling of hourly electricity data. A comparative analysis is done using generalised additive models (GAMs). In both modelling frameworks, variable selection is done using the least absolute shrinkage and selection operator (Lasso) via hierarchical interactions. The four models considered are GAMs and AQR models, each with and without interactions. The AQR model with pairwise interactions was found to be the best-fitting model. The forecasts from the four models were then combined using an algorithm based on the pinball loss (convex combination model) and also using quantile regression averaging (QRA). The AQR model with interactions was then compared with the convex combination and QRA models, and the QRA model gave the most accurate forecasts. Except for the AQR model with interactions, the other two models (the convex combination model and the QRA model) gave prediction interval coverage probabilities that were valid for the 90%, 95% and 99% prediction intervals. The QRA model had the smallest prediction interval normalised average width and prediction interval normalised average deviation. The modelling framework discussed in this paper establishes that going beyond summary performance statistics in forecasting has merit, as it gives more insight into the developed forecasting models.
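The pinball (quantile) loss used to score and combine the quantile forecasts can be sketched in a few lines (the data here are illustrative, not the South African load series):

```python
import numpy as np

def pinball_loss(y, q, tau):
    """Average pinball (quantile) loss of forecast quantile q at level tau.

    Under-forecasting is penalised at rate tau per unit, over-forecasting
    at rate (1 - tau), so high-tau quantiles prefer erring on the high side.
    """
    diff = np.asarray(y, float) - np.asarray(q, float)
    return np.mean(np.where(diff >= 0, tau * diff, (tau - 1) * diff))

# Illustrative hourly loads; compare over- vs under-forecasting at tau = 0.95.
y = np.array([100.0, 120.0, 110.0, 130.0])
loss_over = pinball_loss(y, y + 10.0, 0.95)   # over by 10 units -> 0.5
loss_under = pinball_loss(y, y - 10.0, 0.95)  # under by 10 units -> 9.5
```

The asymmetry (0.5 versus 9.5 for the same absolute error) is what makes the pinball loss the natural scoring rule for the 95% quantile forecasts combined in the paper.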
APA, Harvard, Vancouver, ISO, and other styles
45

Kovalchuk, I. P., K. A. Lukianchuk, and V. A. Bogdanets. "Assessment of open source digital elevation models (SRTM-30, ASTER, ALOS) for erosion processes modeling." Journal of Geology, Geography and Geoecology 28, no. 1 (2019): 95–105. http://dx.doi.org/10.15421/111911.

Full text
Abstract:
The relief has a major impact on a landscape's hydrological, geomorphological and biological processes. Many geographic information systems use elevation data as the primary input for analysis, modeling, and related tasks. A digital elevation model (DEM) is a representation of the continuous variation of relief over space in digital form. DEMs are an important source for predicting soil erosion parameters. The potential of global open source DEMs (SRTM, ASTER, ALOS) and their suitability for use in modeling erosion processes are assessed in this study. Shumsky district of Ternopil region, located in the western part of Ukraine, is the study area. The soils of Shumsky district are adversely affected by erosion processes. The analysis was performed on the basis of the characteristics of the hydrological network and relief. The reference DEM was generated from the hypsographic data (contours) on the 1:50000 topographical map series compiled by production units of the Main Department of Geodesy and Cartography under the Council of Ministers. The differences between the reference DEM and the open source DEMs (SRTM, ASTER and ALOS) are examined. Methods of visual detection of DEM defects, profiling, correlation, and statistics were used in the comparative analysis. This research included the analysis of errors that occurred during the generation of the DEMs. The vertical accuracy of these DEMs, root mean square error (RMSE), absolute and relative errors, maximum deviation, and correlation coefficient have been calculated. Vertical accuracy of the DEMs has been assessed using actual heights of the sample points. The analysis shows that the SRTM and ALOS DEMs are more reliable and accurate than ASTER GDEM. The results indicate that the vertical accuracy of the DEMs is 7.02 m, 7.12 m, 7.60 m, and 8.71 m for the ALOS, SRTM 30, SRTM 90, and ASTER DEMs, respectively.
ASTER GDEM had the highest absolute, relative and root mean square errors, the highest maximum positive and negative deviations, the largest difference from the reference heights, and the lowest correlation coefficient. Therefore, ASTER GDEM is the least acceptable for studying the intensity and development of erosion processes. The use of global open source DEMs, compared with the vectorization of topographic maps, greatly simplifies and accelerates the modeling of erosion processes and the assessment of erosion risk in the administrative district.
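The vertical-accuracy statistics compared across the DEMs can be computed directly from sample-point heights, as in this sketch (function name `dem_accuracy` and the heights are ours, not the Shumsky-district data):

```python
import numpy as np

def dem_accuracy(dem_heights, ref_heights):
    """Vertical-accuracy statistics of a DEM against reference point heights."""
    dem = np.asarray(dem_heights, dtype=float)
    ref = np.asarray(ref_heights, dtype=float)
    d = dem - ref  # vertical errors at the sample points
    return {
        "rmse": np.sqrt((d ** 2).mean()),
        "mean_abs_error": np.abs(d).mean(),
        "max_deviation": d[np.argmax(np.abs(d))],  # signed largest error
        "correlation": np.corrcoef(dem, ref)[0, 1],
    }

# Illustrative sample-point heights (metres) from a reference DEM and a
# global open source DEM at the same locations.
ref = np.array([210.0, 255.0, 301.0, 340.0, 288.0, 267.0])
dem = np.array([214.0, 250.0, 309.0, 333.0, 291.0, 262.0])
report = dem_accuracy(dem, ref)
```

Running the same function against each candidate DEM (SRTM, ASTER, ALOS) at a common set of checkpoints reproduces the kind of comparison table the study reports.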
APA, Harvard, Vancouver, ISO, and other styles
46

Xue, Wei, Wensheng Zhang, and Gaohang Yu. "Least absolute deviations learning of multiple tasks." Journal of Industrial & Management Optimization 14, no. 2 (2018): 719–29. http://dx.doi.org/10.3934/jimo.2017071.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Rogers, Alan J. "LEAST ABSOLUTE DEVIATIONS REGRESSION UNDER NONSTANDARD CONDITIONS." Econometric Theory 17, no. 4 (2001): 820–52. http://dx.doi.org/10.1017/s0266466601174074.

Full text
Abstract:
Most work on the asymptotic properties of least absolute deviations (LAD) estimators makes use of the assumption that the common distribution of the disturbances has a density that is both positive and finite at zero. We consider the implications of weakening this assumption in a number of regression settings, primarily with a time series orientation. These models include ones with deterministic and stochastic trends, and we pay particular attention to the case of a simple unit root model. The way in which the conventional assumption on the error distribution is modified is motivated in part by N.V. Smirnov's work on domains of attraction in the asymptotic theory of sample quantiles. The approach adopted usually allows for simple characterizations (often featuring a single parameter, γ), of both the shapes of the limiting distributions of the LAD estimators and their convergence rates. The present paper complements the closely related recent work of K. Knight.
APA, Harvard, Vancouver, ISO, and other styles
48

Mokhtari, Mikaeel, Tofigh Allahviranloo, Mohammad Hassan Behzadi, and Farhad Hoseinzadeh Lotfi. "Introducing a trapezoidal interval type-2 fuzzy regression model." Journal of Intelligent & Fuzzy Systems 42, no. 3 (2022): 1381–403. http://dx.doi.org/10.3233/jifs-210340.

Full text
Abstract:
Uncertainty is an important attribute of data that can arise from different sources, including randomness and fuzziness. In uncertain environments, especially in modeling, planning, decision-making, and control under uncertainty, most available data contain some degree of fuzziness, randomness, or both, and some of these data may be anomalous (outliers). In this regard, new fuzzy regression approaches, by creating a functional relationship between response and explanatory variables, can provide efficient tools for explaining, predicting, and possibly controlling the randomness, fuzziness, and outliers in data obtained from uncertain environments. In the present study, we propose a new two-stage fuzzy linear regression model based on a new interval type-2 (IT2) fuzzy least absolute deviation (FLAD) method, in which the regression coefficients and dependent variables are trapezoidal IT2 fuzzy numbers and the independent variables are crisp. In the first stage, to estimate the IT2 fuzzy regression coefficients and provide an initial model (from the original dataset), we introduce two new distance measures for the comparison of IT2 fuzzy numbers and propose a novel framework for solving fuzzy mathematical programming problems. In the second stage, we introduce a new procedure to determine the mild and extreme fuzzy outlier cutoffs, apply them to remove the outliers, and then provide the final model based on a clean dataset. Furthermore, to evaluate the performance of the proposed methodology, we introduce and employ suitable goodness-of-fit indices. Finally, to illustrate the theoretical results of the proposed method, explain how it can be used to derive a regression model with IT2 trapezoidal fuzzy data, and compare the performance of the proposed model with some well-known models using the training data designed by Tanaka et al. [55], we provide two numerical examples.
APA, Harvard, Vancouver, ISO, and other styles
49

Babu, Gutti Jogesh, and C. Radhakrishna Rao. "Expansions for statistics involving the mean absolute deviations." Annals of the Institute of Statistical Mathematics 44, no. 2 (1992): 387–403. http://dx.doi.org/10.1007/bf00058648.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Li, Ta-Hsin. "On robust spectral analysis by least absolute deviations." Journal of Time Series Analysis 33, no. 2 (2011): 298–303. http://dx.doi.org/10.1111/j.1467-9892.2011.00760.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles