Academic literature on the topic 'Least trimmed squares estimator'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Least trimmed squares estimator.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Dissertations / Theses on the topic "Least trimmed squares estimator"

1

Can, Mutan Oya. "Comparison Of Regression Techniques Via Monte Carlo Simulation." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/3/12605175/index.pdf.

Full text
Abstract:
Ordinary least squares (OLS) is one of the most widely used methods for modelling the functional relationship between variables. However, this estimation procedure relies on certain assumptions, and violating them may lead to non-robust estimates. In this study, the simple linear regression model is investigated for conditions in which the distribution of the error terms is Generalised Logistic. Some robust and nonparametric methods, such as modified maximum likelihood (MML), least absolute deviations (LAD), Winsorized least squares, least trimmed squares (LTS), Theil, and weighted Theil, are compared via computer simulation. To evaluate estimator performance, the mean, variance, bias, mean square error (MSE), and relative mean square error (RMSE) are computed.
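Least trimmed squares, one of the methods compared in this thesis, fits the coefficients that minimize the sum of the h smallest squared residuals rather than all of them. Below is a minimal sketch of the standard computational approach (random elemental starts followed by concentration steps, the idea behind Rousseeuw and Van Driessen's FAST-LTS); the function name, tuning constants, and test data are illustrative choices of mine, not the author's implementation.

```python
import numpy as np

def lts_fit(X, y, h=None, n_starts=50, n_csteps=20, rng=None):
    """Least trimmed squares for linear regression via random elemental
    starts and concentration steps (a sketch of the FAST-LTS idea)."""
    rng = np.random.default_rng(rng)
    n, p = X.shape
    if h is None:
        h = (n + p + 1) // 2  # default coverage: roughly half the data
    best_beta, best_obj = None, np.inf
    for _ in range(n_starts):
        # start from an OLS fit on a random p-point (elemental) subset
        idx = rng.choice(n, size=p, replace=False)
        beta, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
        for _ in range(n_csteps):
            # C-step: refit on the h observations with smallest residuals
            r2 = (y - X @ beta) ** 2
            keep = np.argsort(r2)[:h]
            beta, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        obj = np.sort((y - X @ beta) ** 2)[:h].sum()
        if obj < best_obj:
            best_obj, best_beta = obj, beta
    return best_beta
```

On data contaminated with bad leverage points, such a fit typically stays close to the clean-data slope where OLS breaks down.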
APA, Harvard, Vancouver, ISO, and other styles
2

Oliveira, Pedro Rodrigues de. "Um estudo dos determinantes da confiança interpessoal e seu impacto no crescimento econômico." Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/96/96131/tde-28042008-172141/.

Full text
Abstract:
In the 1990s, a large literature emerged investigating the effects of interpersonal trust on the economic growth of countries. Theoretically, trust affects economic growth by affecting decisions that involve uncertainty about the future actions of other agents, such as investment, hiring of workers, and innovation. This study uses the current methodology of this literature, evaluating the role of trust in economic growth on a cross-section of countries over three periods, using information mainly from the Penn World Tables, the World Values Survey, and educational data from UNESCO. Applying the least trimmed squares technique, the robustness of the trust variable is evaluated when influential observations are excluded. A considerable estimated effect of trust on economic growth is found, even when outliers are removed. Exercises are also carried out to correct for possible endogeneity of the trust variable. Moreover, the work analyses the determinants of individual trust using a probit model whose explanatory variables include income, schooling, age, country, and religion. This analysis is also applied to the Brazilian case. Trust is found to depend more on the society or group than on individual characteristics, and, for the Brazilian case, it was observed that, regardless of gender, schooling, or income, people do not trust each other.
3

Cheniae, Michael G. "Observability method for the least median of squares estimator as applied to power systems." Thesis, This resource online, 1991. http://scholar.lib.vt.edu/theses/available/etd-08142009-040309/.

Full text
4

Bringmann, Philipp. "Adaptive least-squares finite element method with optimal convergence rates." Doctoral thesis, Humboldt-Universität zu Berlin, 2021. http://dx.doi.org/10.18452/22350.

Full text
Abstract:
The least-squares finite element methods (LSFEMs) are based on the minimisation of the least-squares functional, which consists of the squared norms of the residuals of first-order systems of partial differential equations. This functional provides a reliable and efficient built-in a posteriori error estimator and allows for adaptive mesh refinement. The established convergence analysis with rates for adaptive algorithms, as summarised in the axiomatic framework by Carstensen, Feischl, Page, and Praetorius (Comp. Math. Appl., 67(6), 2014), fails for two reasons. First, the least-squares estimator lacks prefactors in terms of the mesh size, which seemingly prevents a reduction under mesh refinement. Second, the first-order divergence LSFEMs measure the flux or stress errors in the H(div) norm and thus involve a data resolution error of the right-hand side f. These difficulties led to a twofold paradigm shift in the convergence analysis with rates for adaptive LSFEMs in Carstensen and Park (SIAM J. Numer. Anal., 53(1), 2015) for the lowest-order discretisation of the 2D Poisson model problem with homogeneous Dirichlet boundary conditions. A novel explicit residual-based a posteriori error estimator accomplishes the reduction property, and a separate marking strategy in the adaptive algorithm ensures sufficient data resolution. This thesis generalises these techniques to three linear model problems, namely the Poisson problem, the Stokes equations, and the linear elasticity problem. It verifies the axioms of adaptivity with separate marking by Carstensen and Rabus (SIAM J. Numer. Anal., 55(6), 2017) in three spatial dimensions. The analysis covers discretisations with arbitrary polynomial degree as well as inhomogeneous Dirichlet and Neumann boundary conditions. Numerical experiments confirm the theoretically proven optimal convergence rates of the h-adaptive algorithm.
5

Paranagama, Thilanka Dilruwani. "A simulation study of the robustness of the least median of squares estimator of slope in a regression through the origin model." Kansas State University, 2010. http://hdl.handle.net/2097/7045.

Full text
Abstract:
Master of Science. Department of Statistics. Paul I. Nelson. The principle of least squares applied to regression models estimates parameters by minimizing the mean of squared residuals. Least squares estimators are optimal under normality but can perform poorly in the presence of outliers. This well-known lack of robustness motivated the development of alternatives, such as least median of squares estimators, obtained by minimizing the median of squared residuals. This report uses simulation to examine and compare the robustness of least median of squares estimators and least squares estimators of the slope of a regression line through the origin, in terms of bias and mean squared error, under a variety of conditions containing outliers created by using mixtures of normal and heavy-tailed distributions. It is found that least median of squares estimation is almost as good as least squares estimation under normality and can be much better in the presence of outliers.
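For the one-parameter model studied in this report, the least median of squares slope minimizes the median of squared residuals. A minimal sketch, which approximates the exact minimiser by scanning the n elemental slopes y_i/x_i (the function name and the simulated data are mine, not the report's):

```python
import numpy as np

def lms_slope_origin(x, y):
    """Approximate LMS slope for regression through the origin:
    scan the n elemental slopes y_i / x_i and keep the one that
    minimises the median of squared residuals."""
    best_b, best_med = None, np.inf
    for b in y / x:
        med = np.median((y - b * x) ** 2)
        if med < best_med:
            best_med, best_b = med, b
    return best_b
```

Because the median ignores the largest squared residuals, a minority of gross outliers barely moves this estimate, whereas the least squares slope through the origin, x·y / x·x, is pulled toward them.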
6

Läuter, Henning. "Estimation in partly parametric additive Cox models." Universität Potsdam, 2003. http://opus.kobv.de/ubp/volltexte/2011/5150/.

Full text
Abstract:
The dependence between survival times and covariates is described, e.g., by proportional hazards models. We consider partly parametric Cox models and discuss the estimation of the parameters of interest. We present the maximum likelihood approach and extend the results of Huang (1999) from linear to nonlinear parameters. We then investigate least squares estimation and formulate conditions for the a.s. boundedness and consistency of these estimators.
7

Munnae, Jomkwun. "Uncalibrated robotic visual servo tracking for large residual problems." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/37219.

Full text
Abstract:
In visually guided control of a robot, a large residual problem occurs when the robot configuration is not in the neighborhood of the target acquisition configuration. Most existing uncalibrated visual servoing algorithms use quasi-Gauss-Newton methods, which are effective for small residual problems. The solution used in this study switches between a full quasi-Newton method for the large residual case and a quasi-Gauss-Newton method for the small residual case. Visual servoing that handles large residual problems while tracking a moving target has not previously appeared in the literature. For large residual problems, various Hessian approximations are introduced, including an approximation of the entire Hessian matrix, the dynamic BFGS (DBFGS) algorithm, and two distinct approximations of the residual term, the modified BFGS (MBFGS) algorithm and the dynamic full Newton method with BFGS (DFN-BFGS) algorithm. Because the quasi-Gauss-Newton method has the advantage of fast convergence, the quasi-Gauss-Newton step is used when the iteration is sufficiently near the desired solution. A switching algorithm combines a full quasi-Newton method and a quasi-Gauss-Newton method; switching occurs if the image error norm falls below a heuristically selected criterion. An adaptive forgetting factor called the dynamic adaptive forgetting factor (DAFF) is presented. The DAFF method is a heuristic scheme that determines the forgetting factor value based on the image error norm. Compared to other existing adaptive forgetting factor schemes, the DAFF method yields the best performance in both convergence time and RMS error. Simulation results verify the validity of the proposed switching algorithms with the DAFF method for large residual problems. The switching MBFGS algorithm with the DAFF method significantly improves tracking performance in the presence of noise. This work is the first successfully developed model-independent, vision-guided control for large residuals with the capability to stably track a moving target with a robot.
8

Morlanes, José Igor. "Some Extensions of Fractional Ornstein-Uhlenbeck Model : Arbitrage and Other Applications." Doctoral thesis, Stockholms universitet, Statistiska institutionen, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-147437.

Full text
Abstract:
This doctoral thesis endeavors to extend probability and statistical models using stochastic differential equations. The described models capture essential features of data that are not explained by classical diffusion models driven by Brownian motion. New results obtained by the author are presented in five articles, divided into two parts. The first part comprises three articles on statistical inference and simulation of a family of processes related to fractional Brownian motion and the Ornstein-Uhlenbeck process, the so-called fractional Ornstein-Uhlenbeck process of the second kind (fOU2). In two of the articles, we show how to simulate fOU2 by means of the circulant embedding method and memoryless transformations. In the third, we construct a consistent least squares estimator of the drift parameter and prove a central limit theorem using techniques from stochastic calculus for Gaussian processes and Malliavin calculus. The second part consists of two articles about jump market models and arbitrage portfolio strategies for an insider trader. One article describes two arbitrage-free markets according to their risk-neutral valuation formulas and an arbitrage strategy obtained by switching between the markets; the key aspect is the difference in volatility between the markets, and statistical evidence of this situation is shown from a sequential data set. In the other, we analyze the arbitrage strategies of a strong insider in a pure jump Markov chain financial market by means of a likelihood process, constructed in an enlarged filtration using Itô calculus and the general theory of stochastic processes. At the time of the doctoral defense, Papers 4 and 5 were unpublished manuscripts.
9

Fragneau, Christopher. "Estimation dans le modèle de régression monotone single index en grande dimension." Thesis, Paris 10, 2020. http://www.theses.fr/2020PA100069.

Full text
Abstract:
The framework of this thesis is the monotone single-index model, which assumes that a real variable Y is linked to a d-dimensional real vector X through the relationship E[Y|X] = f(aTX) a.s., where the real monotonic function f and aT, the transpose of the vector a, are unknown. This model is well known in economics, medicine, and biostatistics, where the monotonicity of f appears naturally. Given n replications of (X,Y) and assuming that a belongs to S, the d-dimensional unit sphere, my main aim is to estimate (a,f) in the high-dimensional context, where d is allowed to depend on n and to grow to infinity with n. The first chapter introduces the theory of monotone estimation, the high-dimensional context, and the single-index model. The second chapter studies the minimizers of the least squares and maximum likelihood population criteria over classes K of couples (b,g), where b belongs to a subset of S and g is monotonic; these results are needed for the convergence of estimators constrained to K, the aim of the third chapter. In a setting where d depends on n and the distribution of X is either bounded or sub-Gaussian, I establish the rates of convergence of the estimators of f(aT·), a, and f in the case where (a,f) ∈ K, as well as the consistency of the estimators of f(aT·) otherwise. The fourth chapter provides an estimation method for (a,f) when X is assumed to be a Gaussian vector. This method fits a misspecified linear model and estimates its parameter vector with the de-sparsified Lasso method of Zhang and Zhang (2014). I show that the resulting estimator divided by its Euclidean norm is Gaussian and converges to a at a parametric rate. I provide estimators of f(aT·) and f and establish their rates of convergence.
10

Zelasco, José Francisco. "Gestion des données : contrôle de qualité des modèles numériques des bases de données géographiques." Thesis, Montpellier 2, 2010. http://www.theses.fr/2010MON20232.

Full text
Abstract:
A Digital Surface Model (DSM) is a numerical surface model formed by a set of points, arranged as a grid, used to study a physical surface, as in Digital Elevation Models (DEMs), or in other applications such as a face or an anatomical organ. The study of the precision of these models, of particular interest for DEMs, has been the object of several studies in recent decades. Measuring the precision of a DSM relative to another model of the same physical surface consists in estimating the expectation of the squared differences between pairs of homologous points, one in each model, corresponding to the same feature of the physical surface. But these pairs are not easily discernible, the grids may not coincide, and the differences between homologous points corresponding to benchmarks on the physical surface might be subject to special conditions, such as more careful measurements than at ordinary points, which imply a different precision. The procedure generally used to avoid these inconveniences has been to use the squared vertical distances between the models, which address only the vertical component of the error and thus give a biased estimate when the surface is not horizontal. The Perpendicular Distance Evaluation Method (PDEM), which avoids this bias, provides estimates for both the vertical and horizontal components of the error and is thus a useful tool for detecting discrepancies in Digital Surface Models such as DEMs. The solution includes a special reference to the simplification that arises when the error does not vary across horizontal directions. The PDEM is also assessed with DEMs obtained by means of SAR interferometry.
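The bias that the perpendicular-distance approach avoids follows from one line of trigonometry: on a reference profile of slope m, a point at perpendicular distance d from the line lies at vertical distance d / cos(arctan m), so purely vertical comparisons inflate the measured error on sloped terrain. A small illustrative helper (the function name is mine, not the thesis's):

```python
import numpy as np

def vertical_vs_perpendicular(slope, d):
    """Convert a perpendicular distance d from a profile of the given
    slope into the corresponding vertical distance: d / cos(theta),
    where theta = arctan(slope) is the profile's inclination."""
    theta = np.arctan(slope)
    return d / np.cos(theta), d
```

On a 45-degree slope the vertical distance is sqrt(2) times the perpendicular one, i.e. a ~41% overestimate; on flat terrain the two coincide.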