Dissertations / Theses on the topic 'Unbiased estimation of autocorrelation'

Consult the top 38 dissertations / theses for your research on the topic 'Unbiased estimation of autocorrelation.'

1

Kamanu, Timothy Kevin Kuria. "Location-based estimation of the autoregressive coefficient in ARX(1) models." Thesis, University of the Western Cape, 2006. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=gen8Srv25Nme4_9551_1186751947.

Full text
Abstract:

In recent years, two estimators have been proposed to correct the bias exhibited by the least-squares (LS) estimator of the lagged dependent variable (LDV) coefficient in dynamic regression models when the sample is finite. They have been termed the ‘mean-unbiased’ and ‘median-unbiased’ estimators. Relative to other similar procedures in the literature, the two location-based estimators have the advantage that they offer an exact and uniform methodology for LS estimation of the LDV coefficient in a first-order autoregressive model with or without exogenous regressors, i.e. ARX(1).


However, no attempt has been made to accurately establish and/or compare the statistical properties of these estimators, either among themselves or relative to those of the LS estimator, when the LDV coefficient is restricted to realistic values. Nor has there been an attempt to compare their performance in terms of mean squared error (MSE) when various forms of the exogenous regressors are considered. Furthermore, only implicit confidence intervals have been given for the ‘median-unbiased’ estimator; explicit confidence bounds that are directly usable for inference are not available for either estimator. In this study a new estimator of the LDV coefficient is proposed: the ‘most-probably-unbiased’ estimator. Its performance properties vis-à-vis the existing estimators are determined and compared when the parameter space of the LDV coefficient is restricted. In addition, the following new results are established: (1) an explicit computable form for the density of the LS estimator is derived for the first time and an efficient method for its numerical evaluation is proposed; (2) the exact bias, mean, median and mode of the distribution of the LS estimator are determined in three specifications of the ARX(1) model; (3) the exact variance and MSE of the LS estimator are determined; (4) the standard errors associated with determining the same quantities by simulation rather than numerical integration are established, and the two methods are compared in terms of computational time and effort; (5) an exact method of evaluating the density of the three estimators is described; (6) their exact bias, mean, variance and MSE are determined and analysed; and finally, (7) a method of obtaining explicit exact confidence intervals from the distribution functions of the estimators is proposed.


The discussion and results show that the estimators are still biased in the usual sense, ‘in expectation’. However, the bias is substantially reduced compared to that of the LS estimator. The findings are important in the specification of time-series regression models, point and interval estimation, decision theory, and simulation.
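The finite-sample bias that these location-based estimators correct is easy to reproduce by simulation. The sketch below is a hypothetical illustration (not code from the thesis): it generates short AR(1) samples and shows that the least-squares estimate of the coefficient is biased toward zero.

```python
# Monte Carlo illustration of the finite-sample bias of the
# least-squares (LS) estimator of the AR(1) coefficient.
import random
import statistics

def simulate_ar1(phi, n, rng):
    """Generate an AR(1) series x_t = phi*x_{t-1} + e_t with N(0,1) noise."""
    x = [rng.gauss(0.0, 1.0)]
    for _ in range(n - 1):
        x.append(phi * x[-1] + rng.gauss(0.0, 1.0))
    return x

def ls_ar1(x):
    """LS estimate of phi after removing the sample mean."""
    m = sum(x) / len(x)
    y = [v - m for v in x]
    num = sum(y[t] * y[t - 1] for t in range(1, len(y)))
    den = sum(y[t - 1] ** 2 for t in range(1, len(y)))
    return num / den

rng = random.Random(0)
phi_true, n, reps = 0.8, 30, 2000
estimates = [ls_ar1(simulate_ar1(phi_true, n, rng)) for _ in range(reps)]
mean_est = statistics.mean(estimates)
# The LS estimator is biased toward zero in short samples
# (approximately -(1 + 3*phi)/n for the mean-corrected estimator).
print(f"true phi = {phi_true}, mean LS estimate = {mean_est:.3f}")
```

With phi = 0.8 and n = 30, the average estimate falls noticeably below the true value, which is exactly the gap the mean- and median-unbiased corrections target.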

APA, Harvard, Vancouver, ISO, and other styles
2

Zhang, Keshu. "Best linear unbiased estimation fusion with constraints." ScholarWorks@UNO, 2003. http://louisdl.louislibraries.org/u?/NOD,86.

Full text
Abstract:
Thesis (Ph. D.)--University of New Orleans, 2003.
Title from electronic submission form. "A dissertation ... in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Department of Electrical Engineering"--Dissertation t.p. Vita. Includes bibliographical references.
3

Cipperly, George Edward. "Direct scene parameter estimation from autocorrelation data." Diss., The University of Arizona, 1992. http://hdl.handle.net/10150/186058.

Full text
Abstract:
Several aspects of extracting scene object information directly from the associated autocorrelation (or spectrum modulus) data arrays are investigated. Emphasis is on the particular scenario in which the scene can be modelled by a small set of dispersed objects with associated position, size, shape, and brightness parameters. These parameters may completely define a scene, they may contain the information of interest in a more complex scene, or they may merely constitute a reasonable first approximation to an arbitrary scene. A typical two step approach to estimating such parameters is to first use phase retrieval/image reconstruction techniques to estimate an associated image, and then apply pattern recognition techniques to extract the important information from it. The work described here focusses on eliminating the image recovery step and estimating parameter values directly from the autocorrelation. This task naturally separates into several distinct sub-problems, the first of which is extracting the significant features from the autocorrelation. This is a pattern recognition problem with special consideration to the unique features of autocorrelations. From these feature positions, the number of objects in the scene and their relative positions are next deduced, and finally, the individual object sizes, shapes and brightnesses are extracted. Optional further analysis is described in which the object parameter estimates are further refined by seeking a Maximum Likelihood estimate with regard to the data array. Alternatively, the initial estimates could be used to generate a trial image for an iterative phase retrieval procedure to reconstruct the full scene. Since the trial estimate already contains the major features of the scene, convergence to the correct solution should be both faster and better assured. The phase retrieval problem has been well studied and is not investigated here. 
For each of these sub-problems, the logical or mathematical development of the solution is presented, the computer algorithms that implement it are described, and theoretical and practical limitations are discussed.
4

Chen, Donghui 1970. "Median-unbiased estimation in linear autoregressive time series models." Monash University, Dept. of Econometrics and Business Statistics, 2001. http://arrow.monash.edu.au/hdl/1959.1/9044.

Full text
5

Li, Huilin. "Small area estimation: an empirical best linear unbiased prediction approach." College Park, Md.: University of Maryland, 2007. http://hdl.handle.net/1903/7600.

Full text
Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2007.
Thesis research directed by: Mathematical Statistics Program. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
6

Sall, Cheikh Ahmed Tidiane. "Dynamique et persistance de l’inflation dans l’UEMOA : le rôle des facteurs globaux, régionaux et nationaux." Thesis, Aix-Marseille, 2013. http://www.theses.fr/2013AIXM1085/document.

Full text
Abstract:
This thesis examines inflation dynamics and persistence in developing countries, especially those of the UEMOA zone, highlighting the specificities of these economies. The first chapter, devoted to measuring persistence, reveals that the degree of inflation persistence in these countries is low, which is an asset for the monetary authorities. Chapter 2 defines a theoretical framework better suited to analyzing inflation persistence in the countries of the sub-region. The approach shows that the degree of inflation persistence in these countries depends not only on monetary and exchange-rate policies, but also, negatively, on the weight of the local food sector in the economy. Chapter 3 analyzes inflation differentials among the UEMOA member countries by examining the β-convergence of the inflation differentials. The estimates show that, on the one hand, inflation differentials have narrowed considerably within the Union and, on the other hand, that they remain highly persistent with respect to the euro zone. Chapter 4 assesses the role of the various factors and then uses a spatial panel specification to test for spillover effects between countries. The estimates indicate a predominance of global factors and of contagion effects between countries, whose magnitude depends on the weight of each country's exports to the other countries of the sub-region.
7

Kalender, Emre. "Parametric Estimation Of Clutter Autocorrelation Matrix For Ground Moving Target Indication." Master's thesis, METU, 2013. http://etd.lib.metu.edu.tr/upload/12615313/index.pdf.

Full text
Abstract:
In airborne radar systems with a Ground Moving Target Indication (GMTI) mode, it is desired to detect the presence of targets in interference consisting of noise, ground clutter, and jamming signals. These interference components usually mask the target return signal, so that detection requires suppression of the interference. Space-time adaptive processing (STAP) is a widely used interference suppression technique which uses temporal and spatial information to eliminate the effects of clutter and jamming and enables the detection of moving targets with small radial velocity. However, adaptive estimation of the interference requires high computational capacity as well as large secondary sample data support. The available secondary range cells may be fewer than required due to non-homogeneity problems, and the computational capacity of the radar system may not be sufficient for the required computations. In order to reduce the computational load and the required number of secondary data for estimation, parametric methods use a priori information on the structure of the clutter covariance matrix. Space-Time Auto-Regressive (STAR) filtering, which is a parametric adaptive method, and fully parametric model-based approaches for interference suppression have been proposed in the literature as alternatives to STAP. In this work, space-time auto-regressive filtering and model-based GMTI approaches are investigated. The performance of these approaches is evaluated on both simulated and flight-test data and compared with the performance of sample matrix inversion space-time adaptive processing.
8

Hu, Qilin. "Autocorrelation-based factor analysis and nonlinear shrinkage estimation of large integrated covariance matrix." Thesis, London School of Economics and Political Science (University of London), 2016. http://etheses.lse.ac.uk/3551/.

Full text
Abstract:
The first part of my thesis deals with factor modeling for high-dimensional time series from a dimension-reduction viewpoint. We allow the dimension N of the time series to be as large as, or even larger than, the sample size. The estimation of the factor loading matrix, and subsequently of the factors, is done via an eigenanalysis of a non-negative definite matrix constructed from autocorrelation matrices. The method is dubbed AFA. We give an explicit comparison of the convergence rates of AFA and PCA. We show that AFA possesses an advantage over PCA when dealing with small-dimensional time series, for both one-step and two-step estimation, while at large dimension the performance is still comparable. The second part of my thesis considers large integrated covariance matrix estimation. While the use of intra-day price data increases the sample size substantially for asset allocation, the usual realized covariance matrix still suffers from bias contributed by the extreme eigenvalues when the number of assets is large. We introduce a novel nonlinear shrinkage estimator for the integrated volatility matrix which shrinks the extreme eigenvalues of a realized covariance matrix back to an acceptable level and at the same time enjoys a certain asymptotic efficiency, all in a high-dimensional setting where the number of assets can be of the same order as the number of data points. Compared to a time-variation-adjusted realized covariance estimator and the usual realized covariance matrix, our estimator demonstrates favorable performance in both simulations and a real data analysis of portfolio allocation. This includes a novel maximum-exposure bound and an actual-risk bound when our estimator is used to construct the minimum variance portfolio.
9

Zhao, Zhanlue. "Performance Appraisal of Estimation Algorithms and Application of Estimation Algorithms to Target Tracking." ScholarWorks@UNO, 2006. http://scholarworks.uno.edu/td/394.

Full text
Abstract:
This dissertation consists of two parts. The first part deals with the performance appraisal of estimation algorithms; the second part focuses on the application of estimation algorithms to target tracking. Performance appraisal is crucial for understanding, developing and comparing various estimation algorithms. In particular, with the evolution of estimation theory and the increase in problem complexity, performance appraisal is becoming more and more challenging for engineers seeking comprehensive conclusions. However, the existing theoretical results are inadequate for practical reference. The first part of this dissertation is dedicated to performance measures, which include local performance measures, global performance measures and a model distortion measure. The second part focuses on the application of recursive best linear unbiased estimation (BLUE), or linear minimum mean square error (LMMSE) estimation, to nonlinear measurement problems in target tracking. The Kalman filter has been the dominant basis for dynamic state filtering for several decades. Beyond the Kalman filter, a more fundamental basis for recursive best linear unbiased filtering has been thoroughly investigated in a series of papers by Dr. X. Rong Li. Based on the so-called quasi-recursive best linear unbiased filtering technique, the linear-Gaussian assumptions underlying the Kalman filter can be relaxed, so that a general linear filtering technique for nonlinear systems can be achieved. An approximate optimal BLUE filter is implemented for nonlinear measurements in target tracking which outperforms the existing method significantly in terms of accuracy, credibility and robustness.
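As a toy scalar illustration of the linear MMSE idea mentioned in this abstract (not the recursive filter developed in the dissertation), the LMMSE estimator of x from y = x + v shrinks the measurement toward the prior mean and attains a lower mean square error than using y alone. All parameter values here are arbitrary.

```python
# Scalar LMMSE estimate of x from y = x + v, with x ~ N(mu, sx^2), v ~ N(0, sv^2):
# x_hat = mu + sx^2/(sx^2 + sv^2) * (y - mu). Toy sketch, not the thesis code.
import random

def lmmse(y, mu, sx2, sv2):
    gain = sx2 / (sx2 + sv2)
    return mu + gain * (y - mu)

rng = random.Random(1)
mu, sx, sv = 2.0, 1.0, 2.0
n = 20000
se_raw = se_lmmse = 0.0
for _ in range(n):
    x = rng.gauss(mu, sx)
    y = x + rng.gauss(0.0, sv)
    se_raw += (y - x) ** 2
    se_lmmse += (lmmse(y, mu, sx ** 2, sv ** 2) - x) ** 2
mse_raw, mse_lmmse = se_raw / n, se_lmmse / n
# Theory: MSE of the raw measurement is sv^2 = 4, while the LMMSE
# estimator achieves sx^2*sv^2/(sx^2 + sv^2) = 0.8.
print(f"raw MSE ~ {mse_raw:.2f}, LMMSE MSE ~ {mse_lmmse:.2f}")
```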
10

Miladinovic, Branko. "Kernel density estimation of reliability with applications to extreme value distribution." [Tampa, Fla] : University of South Florida, 2008. http://purl.fcla.edu/usf/dc/et/SFE0002760.

Full text
11

Korte, Robert A. "Inference in Power Series Distributions." Kent State University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=kent1352937611.

Full text
12

Tsiappoutas, Kyriakos Michael. "Statistical Spectral Parameter Estimation of Acoustic Signals with Applications to Byzantine Music." ScholarWorks@UNO, 2011. http://scholarworks.uno.edu/td/1358.

Full text
Abstract:
Digitized acoustical signals of Byzantine music performed by Iakovos Nafpliotis are used to extract the fundamental frequency of each note of the diatonic scale. These empirical results are then contrasted to the theoretical suggestions and previous empirical findings. Several parametric and non-parametric spectral parameter estimation methods are implemented. These include: (1) Phase vocoder method, (2) McAulay-Quatieri method, (3) Levinson-Durbin algorithm, (4) YIN, (5) Quinn & Fernandes Estimator, (6) Pisarenko Frequency Estimator, (7) MUltiple SIgnal Characterization (MUSIC) algorithm, (8) Periodogram method, (9) Quinn & Fernandes Filtered Periodogram, (10) Rife & Vincent Estimator, and (11) the Fourier transform. Algorithm performance was very precise. The psychophysical aspect of human pitch discrimination is explored. The results of eight (8) psychoacoustical experiments were used to determine the aural just noticeable difference (jnd) in pitch and deduce patterns utilized to customize acceptable performable pitch deviation to the application at hand. These customizations [Acceptable Performance Difference (a new measure of frequency differential acceptability), Perceptual Confidence Intervals (a new concept of confidence intervals based on psychophysical experiment rather than statistics of performance data), and one based purely on music-theoretical asymphony] are proposed, discussed, and used in interpretation of results. The results suggest that Nafpliotis' intervals are closer to just intonation than Byzantine theory (with minor exceptions), something not generally found in Thrasivoulos Stanitsas' data. Nafpliotis' perfect fifth is identical to the just intonation, even though he overstretches his octave by fifteen (15) cents. His perfect fourth is also more just, as opposed to Stanitsas' fourth which is directionally opposite. Stanitsas' tendency to exaggerate the major third interval A4-F4 is still seen in Nafpliotis, but curbed.
This is the only noteworthy departure from just intonation, with Nafpliotis being exactly Chrysanthian (the most exaggerated theoretical suggestion of all) and Stanitsas overstretching it even more than Nafpliotis and Chrysanth. Nafpliotis ascends in the second tetrachord more robustly diatonically than Stanitsas. The results are reported and interpreted within the framework of Acceptable Performance Differences.
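One of the methods listed above, the Levinson-Durbin algorithm, admits a compact generic sketch (not code from the dissertation): it solves the Yule-Walker equations recursively, turning autocorrelation estimates into AR coefficients without any matrix inversion.

```python
# Generic Levinson-Durbin recursion: solve the Yule-Walker equations
# for AR(p) coefficients given autocorrelations r[0..p].
def levinson_durbin(r, p):
    a = [0.0] * (p + 1)       # a[1..p] hold the AR coefficients
    e = r[0]                  # prediction error power
    for k in range(1, p + 1):
        acc = r[k] - sum(a[j] * r[k - j] for j in range(1, k))
        reflection = acc / e
        a_new = a[:]
        a_new[k] = reflection
        for j in range(1, k):
            a_new[j] = a[j] - reflection * a[k - j]
        a = a_new
        e *= (1.0 - reflection ** 2)
    return a[1:], e

# For an AR(1) process with coefficient phi, r[k] = phi**k, and the
# recursion recovers phi with a zero second coefficient.
phi = 0.6
coeffs, err = levinson_durbin([phi ** k for k in range(3)], 2)
print(coeffs)  # first coefficient ~ 0.6, second ~ 0.0
```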
13

Baba, Harra M'hammed. "Estimation de densités spectrales d'ordre élevé." Rouen, 1996. http://www.theses.fr/1996ROUES023.

Full text
Abstract:
In this thesis we construct estimators of the cumulant spectral density for a strictly homogeneous, centered process, the time space being either the multidimensional real Euclidean space or the multidimensional space of p-adic numbers. In this construction we used a method of smoothing the trajectory combined with a time shift, or the method of spectral windows. Under certain regularity conditions, the proposed estimators are asymptotically unbiased and consistent. The estimation procedures presented can find applications in many scientific fields and can also provide elements of an answer to questions concerning certain statistical properties of random processes.
14

Akgun, Burcin. "Identification Of Periodic Autoregressive Moving Average Models." Master's thesis, METU, 2003. http://etd.lib.metu.edu.tr/upload/1083682/index.pdf.

Full text
Abstract:
In this thesis, the identification of the periodically varying orders of univariate Periodic Autoregressive Moving-Average (PARMA) processes is studied. The identification of the varying orders of a PARMA process is carried out by generalizing the well-known Box-Jenkins techniques to a seasonwise manner; only the identification of pure periodic moving-average (PMA) and pure periodic autoregressive (PAR) models is considered. For PARMA model identification, the Periodic Autocorrelation Function (PeACF) and Periodic Partial Autocorrelation Function (PePACF), which play the same roles as their ARMA counterparts, are employed. For parameter estimation, which is considered only to refine model identification, the conditional least squares estimation (LSE) method is used, which is applicable to PAR models. Estimation becomes very complicated and difficult, and may give unsatisfactory results, when a moving-average (MA) component exists in the model. To overcome this difficulty, seasons following PMA processes are modeled as PAR processes of reasonable orders so that LSE can be employed. Diagnostic checking, through the residuals of the fitted model, is also performed, with its rationale and methods stated. The last part of the study demonstrates the application of the identification techniques through the analysis of two seasonal hydrologic time series consisting of average monthly streamflows. For this purpose, computer programs were developed specifically for PARMA model identification.
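The periodic autocorrelation function (PeACF) used here for identification can be sketched in a minimal, hypothetical form (not the thesis implementation): for a chosen season and lag, correlate the observations of that season with the observations lag steps earlier, across years.

```python
# Minimal seasonwise (periodic) lag-l autocorrelation sketch for a series
# with a known period; a simplified illustration, not the thesis code.
import math
import random

def peacf(x, period, season, lag):
    """Correlation between season `season` values and values `lag` steps earlier."""
    pairs = [(x[t], x[t - lag]) for t in range(len(x))
             if t % period == season and t - lag >= 0]
    a = [p[0] for p in pairs]
    b = [p[1] for p in pairs]
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((u - ma) * (v - mb) for u, v in pairs)
    sa = math.sqrt(sum((u - ma) ** 2 for u in a))
    sb = math.sqrt(sum((v - mb) ** 2 for v in b))
    return cov / (sa * sb)

# Example: a strongly autocorrelated monthly series gives a large
# season-0 PeACF at lag 1 (correlation across 20 "years" of data).
rng = random.Random(2)
x = [0.0]
for _ in range(1, 240):
    x.append(0.9 * x[-1] + rng.gauss(0.0, 1.0))
v = peacf(x, 12, 0, 1)
print(round(v, 2))
```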
15

Allison, Malena Kathleen. "Statistical Topics Applied to Pressure and Temperature Readings in the Gulf of Mexico." Scholar Commons, 2013. http://scholarcommons.usf.edu/etd/4625.

Full text
Abstract:
Statistical research on weather allows for the application of old and new methods, some of which may better describe relationships between variables such as temperature and pressure. The objective of this study was to apply a variety of traditional and novel statistical methods to data from the National Data Buoy Center, which records, among other variables, barometric pressure, atmospheric temperature, water temperature and dew-point temperature. The analysis included attempts to better describe and model the data as well as to estimate certain variables. The following statistical methods were utilized: linear regression, non-response analysis, residual analysis, descriptive statistics, parametric analysis, the Kolmogorov-Smirnov test, autocorrelation, the normal approximation to the binomial, and the chi-squared test of independence. Among the more significant results, one was establishing the Johnson SB as the best-fitting parametric distribution for a group of pressures, and another was finding high autocorrelation in atmospheric temperature and pressure at small lags. This topic remains conducive to future research, and such endeavors may strengthen the field of applied statistics and improve our understanding of various weather phenomena.
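As a side note tied to this collection's topic, the autocorrelation reported in such analyses is usually computed with either the biased (1/n) or the 'unbiased' (1/(n-k)) divisor. A small sketch, unrelated to the study's own code, shows that the two estimates differ exactly by the factor n/(n-k):

```python
# Biased (1/n) vs. "unbiased" (1/(n-k)) sample autocovariance at lag k.
# The unbiased divisor removes the deterministic shrinkage factor (n-k)/n
# but can inflate variance at large lags.
def autocov(x, k, unbiased=False):
    n = len(x)
    m = sum(x) / n
    s = sum((x[t] - m) * (x[t + k] - m) for t in range(n - k))
    return s / (n - k) if unbiased else s / n

x = [1.0, 2.0, 3.0, 4.0, 5.0, 4.0, 3.0, 2.0]
c1_biased = autocov(x, 1)
c1_unbiased = autocov(x, 1, unbiased=True)
# For n = 8 and k = 1, the two differ exactly by the factor 8/7.
print(c1_biased, c1_unbiased)
```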
16

Carrico, Robert. "Unbiased Estimation for the Contextual Effect of Duration of Adolescent Height Growth on Adulthood Obesity and Health Outcomes via Hierarchical Linear and Nonlinear Models." VCU Scholars Compass, 2012. http://scholarscompass.vcu.edu/etd/2817.

Full text
Abstract:
This dissertation has multiple aims in studying hierarchical linear models in biomedical data analysis. In Chapter 1, the novel idea of studying the duration of the adolescent growth spurt as a predictor of adulthood obesity is defined, established, and illustrated. The concept of contextual-effects modeling is introduced in this first section as we study the secular trend of adulthood obesity and how this trend is mitigated by the durations of individual adolescent growth spurts and the secular average length of adolescent growth spurts. It is found that individuals with longer periods of fast height growth in adolescence are more prone to having favorable BMI profiles in adulthood. In Chapter 2 we study the estimation of contextual effects in a hierarchical generalized linear model (HGLM). We simulate data and compare the use of the higher-level group sample mean as the estimate for the true mean versus an Empirical Bayes (EB) approach (Shin and Raudenbush 2010). We study this comparison for logistic, probit, log-linear, ordinal and nominal regression models. We find that in general the EB approach yields a parameter estimate much closer to the true value, except in cases with very small variability at the upper level, where the situation is more complicated and there is likely no need for contextual-effects analysis. In Chapter 3 the HGLM studies are made clearer with large-scale simulations, shown for logistic and probit regression models for binary outcome data. With repetition we are able to establish coverage percentages of the confidence intervals of the true contextual effect; coverage percentages show the percentage of simulations whose confidence intervals contain the true parameter values. The results confirm observations from the preliminary simulations in the previous section, and an accompanying example on adulthood hypertension shows how these results can be used in an application.
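The Empirical Bayes group-mean estimate compared in this abstract can be sketched in its simplest form, with the variance components assumed known (a generic illustration, not the dissertation's models): each group's sample mean is shrunk toward the grand mean by a factor that grows with group size.

```python
# Shrinkage (Empirical-Bayes-style) estimate of a group mean: a weighted
# combination of the group sample mean and the grand mean, with weight
# lam = tau2 / (tau2 + sigma2/n_j). Generic sketch with known variances.
def eb_group_mean(group_mean, grand_mean, tau2, sigma2, n_j):
    lam = tau2 / (tau2 + sigma2 / n_j)
    return lam * group_mean + (1.0 - lam) * grand_mean

# A small group (n_j = 2) is shrunk harder toward the grand mean than a
# large group (n_j = 50) with the same sample mean.
small = eb_group_mean(10.0, 6.0, tau2=1.0, sigma2=4.0, n_j=2)
large = eb_group_mean(10.0, 6.0, tau2=1.0, sigma2=4.0, n_j=50)
print(small, large)
```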
17

Manomaiphiboon, Kasemsan. "Estimation of Emission Strength and Air Pollutant Concentrations by Lagrangian Particle Modeling." Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/5141.

Full text
Abstract:
A Lagrangian particle model was applied to estimating emission strength and air pollutant concentrations, specifically for the short-range dispersion of an air pollutant in the atmospheric boundary layer. The model performance was evaluated with experimental data. The model was then used as the platform for a parametric uncertainty analysis, in which the effects of uncertainties in five model parameters (Monin-Obukhov length, friction velocity, roughness height, mixing height, and the universal constant of the random component) on mean ground-level concentrations were examined under slightly and moderately stable conditions. The analysis was performed in a probabilistic framework using Monte Carlo simulations with Latin hypercube sampling and linear regression modeling. In addition, four studies related to Lagrangian particle modeling are included: an alternative technique for formulating joint probability density functions of velocity for atmospheric turbulence based on the Koehler-Symanowski technique; an analysis of local increments in a multidimensional single-particle Lagrangian particle model using the algebra of Ito integrals and the Wagner-Platen formula; an analogy between the diffusion limit of Lagrangian particle models and the classical theory of turbulent diffusion; and an evaluation of some proposed forms of the Lagrangian velocity autocorrelation of turbulence.
18

Puddephat, Michael J. "Computer interface for convenient application for stereological methods for unbiased estimation of volume and surface area : studies using MRI with particular reference to the human brain." Thesis, University of Liverpool, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.368022.

Full text
19

Skalski, Tomasz, and Witold Zaborowski. "Object detection and pose estimation of randomly organized objects for a robotic bin picking system." Thesis, Blekinge Tekniska Högskola, Sektionen för ingenjörsvetenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2153.

Full text
Abstract:
Today modern industrial systems are almost fully automated. The high requirements regarding speed, flexibility, precision and reliability make them in some cases very difficult to create. One of the most widely researched solutions for carrying out many processes without human involvement is bin picking. Bin picking is a very complex process which integrates devices such as a robotic grasping arm, a vision system, collision-avoidance algorithms and many others. This paper describes the creation of a vision system, the most important part of the whole bin-picking system. The authors propose a model-based solution for estimating the position and orientation of the best pick-up candidate. In this method a database is created from a 3D CAD model and compared with the processed image from the 3D scanner. The paper describes in detail the creation of the database from a 3D STL model, the configuration of the Sick IVP 3D scanner, and the construction of the matching algorithm based on the autocorrelation function and morphological operators. The results show that the proposed solution is universal, time-efficient, robust and offers opportunities for further work.
20

Chitte, Sree Divya. "Source localization from received signal strength under lognormal shadowing." Thesis, University of Iowa, 2010. https://ir.uiowa.edu/etd/477.

Full text
Abstract:
This thesis considers statistical issues in source localization from received signal strength (RSS) measurements at sensor locations, under the practical assumption of log-normal shadowing. Distance information about the source relative to the sensor locations can be estimated from RSS measurements, and many algorithms directly use powers of distances to localize the source, even though distance measurements are not directly available. The first part of the thesis considers the statistical analysis of distance estimation from RSS measurements. We show that the underlying problem is inefficient and that there is only one unbiased estimator for this problem, whose mean square error (MSE) grows exponentially with noise power. We then provide the linear minimum mean square error (MMSE) estimator, whose bias and MSE are bounded in the noise power. The second part of the thesis establishes an isomorphism between estimates of the differences between squares of distances and the source location. This is used to completely characterize the class of unbiased estimates of the source location and to show that their MSEs grow exponentially with noise power. Finally, we propose an estimate based on the linear MMSE estimate of distances whose error variance and bias are bounded in the noise variance.
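The exponential growth of MSE described here stems from the lognormal form of the distance estimate. A Monte Carlo sketch in generic path-loss notation (assumed parameter values, not the thesis's estimators) shows the upward bias of the naive inversion and a multiplicative lognormal correction:

```python
# Distance estimation from RSS under the log-distance path-loss model
# P_rx = P0 - 10*n_p*log10(d) + w, with lognormal shadowing w ~ N(0, sigma^2) dB.
# Naive inversion is biased upward by exp((sigma*beta)^2 / 2); toy sketch only.
import math
import random

def naive_distance(p_rx, p0, n_p):
    """Invert the path-loss model, ignoring shadowing."""
    return 10 ** ((p0 - p_rx) / (10.0 * n_p))

rng = random.Random(3)
d_true, p0, n_p, sigma_db = 100.0, 30.0, 3.0, 6.0
beta = math.log(10) / (10.0 * n_p)              # dB-to-natural-log scale
correction = math.exp((sigma_db * beta) ** 2 / 2.0)

n = 50000
total = 0.0
for _ in range(n):
    p_rx = p0 - 10.0 * n_p * math.log10(d_true) + rng.gauss(0.0, sigma_db)
    total += naive_distance(p_rx, p0, n_p)
mean_naive = total / n
mean_corrected = mean_naive / correction
print(f"naive mean ~ {mean_naive:.1f}, corrected mean ~ {mean_corrected:.1f}")
```

Dividing by the lognormal factor removes the multiplicative bias, but, as the thesis's analysis indicates, the spread of such estimators still grows rapidly with the shadowing variance.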
21

Krishnan, Rajet. "Problems in distributed signal processing in wireless sensor networks." Thesis, Manhattan, Kan. : Kansas State University, 2009. http://hdl.handle.net/2097/1351.

Full text
22

Teixeira, Marcos Vinícius. "Estudos sobre a implementação online de uma técnica de estimação de energia no calorímetro hadrônico do atlas em cenários de alta luminosidade." Universidade Federal de Juiz de Fora (UFJF), 2015. https://repositorio.ufjf.br/jspui/handle/ufjf/4169.

Full text
Abstract:
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Este trabalho tem como objetivo o estudo de técnicas para a estimação da amplitude de sinais no calorímetro de telhas (TileCal) do ATLAS no LHC em cenários de alta luminosidade. Em alta luminosidade, sinais provenientes de colisões adjacentes são observados, ocasionando o efeito de empilhamento de sinais. Neste ambiente, o método COF (do inglês, Constrained Optimal Filter) apresenta desempenho superior ao algoritmo atualmente implementado no sistema. Entretanto, o COF requer a inversão de matrizes para o cálculo da pseudo-inversa de uma matriz de convolução, dificultando sua implementação online. Para evitar a inversão de matrizes, este trabalho apresenta métodos iterativos, para a adaptação do COF, que resultam em operações matemáticas simples. Baseados no Gradiente Descendente, os resultados demonstraram que os algoritmos são capazes de estimar a amplitude de sinais empilhados, além do sinal de interesse, com eficiência similar ao COF. Visando a implementação online, este trabalho apresenta estudos sobre a complexidade dos métodos iterativos e propõe uma arquitetura de processamento em FPGA. Baseado em uma estrutura sequencial e utilizando lógica aritmética em ponto fixo, os resultados demonstraram que a arquitetura desenvolvida é capaz de executar o método iterativo, atendendo aos requisitos de tempo de processamento exigidos no TileCal.
This work studies techniques for online energy estimation in the ATLAS hadronic calorimeter (TileCal) at the LHC. During future periods of LHC operation, signals coming from adjacent collisions will be observed within the same window, producing signal superposition. In this environment, the energy reconstruction method COF (Constrained Optimal Filter) outperforms the algorithm currently implemented in the system. However, the COF method requires a matrix inversion, so its online implementation is not feasible. To avoid this inversion, this work presents iterative methods to implement the COF, resulting in simple mathematical operations. Based on gradient descent, the results demonstrate that the algorithms are capable of estimating the amplitudes of the superimposed signals with efficiency similar to COF. In addition, a processing architecture for FPGA implementation is proposed. The analysis shows that the algorithms can be implemented in the new TileCal electronics, meeting the processing-time requirements.
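The iterative idea can be illustrated generically (a toy sketch, not the actual COF/TileCal code; pulse shape and amplitudes below are invented): solve the least-squares deconvolution min_x ‖Ax − b‖² by plain gradient descent, so only matrix-vector products are needed and no pseudo-inverse is ever formed.

```python
import numpy as np

def gd_deconvolve(A, b, mu, n_iter=20_000):
    """Least-squares amplitude estimation by plain gradient descent."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x -= mu * (A.T @ (A @ x - b))   # cheap: two matrix-vector products per step
    return x

pulse = np.array([0.1, 0.6, 1.0, 0.4, 0.1])       # hypothetical pulse shape
amps = np.array([0.0, 3.0, 0.0, 1.5, 0.0, 0.0])   # true superimposed amplitudes
# convolution matrix: column k is the pulse delayed by k samples
A = np.column_stack([np.convolve(np.eye(6)[k], pulse) for k in range(6)])
b = A @ amps                                       # noiseless pile-up signal
mu = 1.0 / np.linalg.norm(A.T @ A, 2)              # step size below 2 / lambda_max
x_gd = gd_deconvolve(A, b, mu)                     # converges to the LS solution
```

In hardware terms, each iteration is a fixed sequence of multiply-accumulates, which is what makes such schemes amenable to a sequential fixed-point FPGA pipeline of the kind described above.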
APA, Harvard, Vancouver, ISO, and other styles
23

Porto, Rogério de Faria. "Regressão não-paramétrica com erros correlacionados via ondaletas." Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-27102008-101711/.

Full text
Abstract:
Nesta tese, são obtidas taxas de convergência a zero, do risco de estimação obtido com regressão não-paramétrica via ondaletas, quando há erros correlacionados. Quatro métodos de regressão não-paramétrica via ondaletas, com delineamento desigualmente espaçado são estudados na presença de erros correlacionados, oriundos de processos estocásticos. São apresentadas condições sobre os erros e adaptações aos procedimentos necessárias à obtenção de taxas de convergência quase minimax, para os estimadores. Sempre que possível são obtidas taxas de convergência para os estimadores no domínio da função, sob condições bastante gerais a respeito da função a ser estimada, do delineamento e da correlação dos erros. Mediante estudos de simulação, são avaliados os comportamentos de alguns métodos propostos quando aplicados a amostras finitas. Em geral sugere-se usar um dos procedimentos estudados, porém aplicando-se limiares por níveis. Como a estimação da variância dos coeficientes de detalhes pode ser problemática em alguns casos, também se propõe um procedimento iterativo semi-paramétrico geral para métodos que utilizam ondaletas, na presença de erros em séries temporais.
In this thesis, rates of convergence to zero are obtained for the estimation risk in non-parametric regression using wavelets when the errors are correlated. Four non-parametric wavelet regression methods with unequally spaced designs are studied in the presence of correlated errors arising from stochastic processes. Conditions on the errors, and adaptations to the procedures, are presented under which the estimators achieve quasi-minimax rates of convergence. Whenever possible, rates of convergence are obtained for the estimators in the domain of the function, under mild conditions on the function to be estimated, on the design, and on the error correlation. Through simulation studies, the behavior of some of the proposed methods is evaluated on finite samples. In general, it is suggested to use one of the studied methods, but applying thresholds level by level. Since the estimation of the variance of the detail coefficients can be problematic in some cases, a general semi-parametric iterative procedure is also proposed for wavelet methods in the presence of time-series errors.
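A minimal sketch of the level-by-level thresholding idea, using a plain Haar transform and a robust per-level noise scale (the thesis's methods use richer wavelets and unequally spaced designs; everything below is an illustrative simplification):

```python
import numpy as np

def haar_fwd(x, levels):
    """Orthonormal Haar decomposition: returns (approximation, list of details)."""
    coeffs, approx = [], np.asarray(x, dtype=float)
    for _ in range(levels):
        s = (approx[0::2] + approx[1::2]) / np.sqrt(2)
        d = (approx[0::2] - approx[1::2]) / np.sqrt(2)
        coeffs.append(d)
        approx = s
    return approx, coeffs

def haar_inv(approx, coeffs):
    """Exact inverse of haar_fwd."""
    for d in reversed(coeffs):
        out = np.empty(2 * len(d))
        out[0::2] = (approx + d) / np.sqrt(2)
        out[1::2] = (approx - d) / np.sqrt(2)
        approx = out
    return approx

def denoise_by_level(y, levels=4):
    """Soft-threshold each detail level with its own robust threshold."""
    a, cs = haar_fwd(y, levels)
    cs_t = []
    for d in cs:
        sigma = np.median(np.abs(d)) / 0.6745      # robust scale per level (MAD)
        t = sigma * np.sqrt(2 * np.log(len(y)))    # universal threshold
        cs_t.append(np.sign(d) * np.maximum(np.abs(d) - t, 0.0))
    return haar_inv(a, cs_t)

n = 256
t = np.arange(n) / n
clean = np.sin(2 * np.pi * 3 * t)
rng = np.random.default_rng(3)
noisy = clean + 0.5 * rng.standard_normal(n)
denoised = denoise_by_level(noisy, levels=4)
```

Per-level thresholds matter under correlated errors because the noise variance of the wavelet coefficients is then level-dependent, which is exactly why a single global threshold fails.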
APA, Harvard, Vancouver, ISO, and other styles
24

Shang, Lei, and lei shang@ieee org. "Modelling of Mobile Fading Channels with Fading Mitigation Techniques." RMIT University. Electrical and Computer Engineering, 2006. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20061222.113303.

Full text
Abstract:
This thesis aims to contribute to the development of wireless communication systems. The work consists of three parts: the first part is a discussion of general digital communication systems, the second part focuses on wireless channel modelling and fading mitigation techniques, and the third part discusses the possible application of advanced digital signal processing, especially time-frequency representations and blind source separation, to wireless communication systems. The first part considers the general digital communication techniques that are incorporated in later parts. Today's wireless communication systems build on the techniques of a general digital communication system: A/D (analog-to-digital) conversion, source coding, error-correction coding, modulation, synchronization, signal detection in noise, channel estimation, and equalization. We study and develop digital communication algorithms to enhance the performance of wireless communication systems. In the second part, we focus on wireless channel modelling and fading mitigation techniques. A modified Jakes' method is developed for Rayleigh fading channels. We investigate the level-crossing rate (LCR), the average duration of fades (ADF), the probability density function (PDF), the cumulative distribution function (CDF) and the autocorrelation function (ACF) of this model. The simulated results are verified against the analytical Clarke's channel model. We also construct a frequency-selective geometrical-based hyperbolically distributed scatterers (GBHDS) model for a macro-cell mobile environment with the proper statistical characteristics. The modified Clarke's model and the GBHDS model may be readily expanded to a MIMO channel model, so we also study the MIMO fading channel, specifically modelling the MIMO channel in the angular domain. A detailed analysis of the Gauss-Markov approximation of the fading channel is also given.
Two fading mitigation techniques are investigated: Orthogonal Frequency Division Multiplexing (OFDM) and spatial diversity. In the third part, we turn to the fields of time-frequency analysis and blind source separation and investigate the application of these powerful digital signal processing (DSP) tools to improve the performance of wireless communication systems.
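A minimal sum-of-sinusoids sketch of a Clarke/Jakes-type flat Rayleigh fading generator, with randomized arrival angles and phases per path (one common modification of the deterministic Jakes generator; all parameters below are illustrative, not from the thesis):

```python
import numpy as np

def rayleigh_fading(f_d, t, n_paths=64, rng=None):
    """Complex flat-fading gain with maximum Doppler f_d (Hz) at times t (s)."""
    rng = rng or np.random.default_rng()
    alpha = 2 * np.pi * rng.random(n_paths)      # uniform angles of arrival
    phi = 2 * np.pi * rng.random(n_paths)        # independent per-path phases
    omega = 2 * np.pi * f_d * np.cos(alpha)      # per-path Doppler shifts
    g = np.exp(1j * (np.outer(t, omega) + phi)).sum(axis=1)
    return g / np.sqrt(n_paths)                  # normalize to unit mean power

t = np.arange(0, 1.0, 1e-3)                      # 1 s at 1 kHz sampling
g = rayleigh_fading(f_d=50.0, t=t, rng=np.random.default_rng(4))
```

For many paths, the envelope |g| is approximately Rayleigh and the theoretical autocorrelation of g is J0(2π f_d τ), which is the statistic verified against Clarke's analytical model.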
APA, Harvard, Vancouver, ISO, and other styles
25

Esstafa, Youssef. "Modèles de séries temporelles à mémoire longue avec innovations dépendantes." Thesis, Bourgogne Franche-Comté, 2019. http://www.theses.fr/2019UBFCD021.

Full text
Abstract:
Dans cette thèse nous considérons, dans un premier temps, le problème de l'analyse statistique des modèles FARIMA (Fractionally AutoRegressive Integrated Moving-Average) induits par un bruit blanc non corrélé mais qui peut contenir des dépendances non linéaires très générales. Ces modèles sont appelés FARIMA faibles et permettent de modéliser des processus à mémoire longue présentant des dynamiques non linéaires, de structures souvent non-identifiées, très générales. Relâcher l'hypothèse d'indépendance sur le terme d'erreur, une hypothèse habituellement imposée dans la littérature, permet aux modèles FARIMA faibles d'élargir considérablement leurs champs d'application en couvrant une large classe de processus à mémoire longue non linéaires. Les modèles FARIMA faibles sont denses dans l'ensemble des processus stationnaires purement non déterministes, la classe formée par ces modèles englobe donc celle des processus FARIMA avec un bruit indépendant et identiquement distribué (iid). Nous appelons par la suite FARIMA forts les modèles dans lesquels le terme d'erreur est supposé être un bruit iid.Nous établissons les procédures d'estimation et de validation des modèles FARIMA faibles. Nous montrons, sous des hypothèses faibles de régularités sur le bruit, que l'estimateur des moindres carrés des paramètres des modèles FARIMA(p,d,q) faibles est fortement convergent et asymptotiquement normal. La matrice de variance asymptotique de l'estimateur des moindres carrés des modèles FARIMA(p,d,q) faibles est de la forme "sandwich". Cette matrice peut être très différente de la variance asymptotique obtenue dans le cas fort (i.e. dans le cas où le bruit est supposé iid). Nous proposons, par deux méthodes différentes, un estimateur convergent de cette matrice. Une méthode alternative basée sur une approche d'auto-normalisation est également proposée pour construire des intervalles de confiance des paramètres des modèles FARIMA(p,d,q) faibles. 
Cette technique nous permet de contourner le problème de l'estimation de la matrice de variance asymptotique de l'estimateur des moindres carrés.Nous accordons ensuite une attention particulière au problème de la validation des modèles FARIMA(p,d,q) faibles. Nous montrons que les autocorrélations résiduelles ont une distribution asymptotique normale de matrice de covariance différente de celle obtenue dans le cadre des FARIMA forts. Cela nous permet de déduire la loi asymptotique exacte des statistiques portmanteau et de proposer ainsi des versions modifiées des tests portmanteau standards de Box-Pierce et Ljung-Box. Il est connu que la distribution asymptotique des tests portmanteau est correctement approximée par un khi-deux lorsque le terme d'erreur est supposé iid. Dans le cas général, nous montrons que cette distribution asymptotique est celle d'une somme pondérée de khi-deux. Elle peut être très différente de l'approximation khi-deux usuelle du cas fort. Nous adoptons la même approche d'auto-normalisation utilisée pour la construction des intervalles de confiance des paramètres des modèles FARIMA faibles pour tester l'adéquation des modèles FARIMA(p,d,q) faibles. Cette méthode a l'avantage de contourner le problème de l'estimation de la matrice de variance asymptotique du vecteur joint de l'estimateur des moindres carrés et des autocovariances empiriques du bruit.Dans un second temps, nous traitons dans cette thèse le problème de l'estimation des modèles autorégressifs d'ordre 1 induits par un bruit gaussien fractionnaire d'indice de Hurst H supposé connu. Nous étudions, plus précisément, la convergence et la normalité asymptotique de l'estimateur des moindres carrés généralisés du paramètre autorégressif de ces modèles
We first consider, in this thesis, the problem of statistical analysis of FARIMA (Fractionally AutoRegressive Integrated Moving-Average) models endowed with uncorrelated but non-independent error terms. These models are called weak FARIMA and can be used to fit long-memory processes with general nonlinear dynamics. Relaxing the independence assumption on the noise, a standard assumption usually imposed in the literature, allows weak FARIMA models to cover a large class of nonlinear long-memory processes. The weak FARIMA models are dense in the set of purely non-deterministic stationary processes; the class of these models therefore encompasses that of FARIMA processes with independent and identically distributed (iid) noise. We thereafter call strong FARIMA models those in which the error term is assumed to be iid. We establish procedures for estimating and validating weak FARIMA models. We show, under weak assumptions on the noise, that the least squares estimator of the parameters of weak FARIMA(p,d,q) models is strongly consistent and asymptotically normal. The asymptotic variance matrix of the least squares estimator of weak FARIMA(p,d,q) models has the "sandwich" form. This matrix can be very different from the asymptotic variance obtained in the strong case (i.e. in the case where the noise is assumed to be iid). We propose, by two different methods, a convergent estimator of this matrix. An alternative method based on a self-normalization approach is also proposed to construct confidence intervals for the parameters of weak FARIMA(p,d,q) models. We then pay particular attention to the problem of validation of weak FARIMA(p,d,q) models. We show that the residual autocorrelations have a normal asymptotic distribution with a covariance matrix different from the one obtained in the strong FARIMA case.
This allows us to deduce the exact asymptotic distribution of portmanteau statistics and thus to propose modified versions of the standard Box-Pierce and Ljung-Box portmanteau tests. It is well known that the asymptotic distribution of portmanteau statistics is correctly approximated by a chi-squared distribution when the error term is assumed to be iid. In the general case, we show that this asymptotic distribution is a weighted sum of chi-squared distributions, which can be very different from the usual chi-squared approximation of the strong case. We adopt the same self-normalization approach used for constructing the confidence intervals of the weak FARIMA model parameters to test the adequacy of weak FARIMA(p,d,q) models. This method has the advantage of avoiding the problem of estimating the asymptotic variance matrix of the joint vector of the least squares estimator and the empirical autocovariances of the noise. Secondly, we deal in this thesis with the problem of estimating autoregressive models of order 1 endowed with fractional Gaussian noise when the Hurst parameter H is assumed to be known. We study, more precisely, the convergence and the asymptotic normality of the generalized least squares estimator of the autoregressive parameter of these models.
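For reference, the standard ("strong"-case) portmanteau statistic that these modified tests generalize can be computed in a few lines; under iid errors Q_m is approximately chi-squared with m degrees of freedom, whereas the weak case above replaces that limit by a weighted sum of chi-squares. The data below are simulated for illustration:

```python
import numpy as np

def ljung_box(resid, m):
    """Ljung-Box statistic Q_m = n(n+2) * sum_{h=1}^m rho_h^2 / (n-h)."""
    n = len(resid)
    x = resid - resid.mean()
    denom = np.dot(x, x)
    rho = np.array([np.dot(x[:-h], x[h:]) / denom for h in range(1, m + 1)])
    return n * (n + 2) * np.sum(rho**2 / (n - np.arange(1, m + 1)))

# iid residuals: Q should look like a chi-squared(10) draw
q_iid = ljung_box(np.random.default_rng(5).standard_normal(500), 10)

# strongly autocorrelated "residuals": Q should explode
e = np.random.default_rng(6).standard_normal(500)
resid_ar = np.empty(500)
resid_ar[0] = e[0]
for i in range(1, 500):
    resid_ar[i] = 0.9 * resid_ar[i - 1] + e[i]   # AR(1) with phi = 0.9
q_ar = ljung_box(resid_ar, 10)
```

Comparing q_iid against chi-squared(10) quantiles is exactly the approximation that becomes invalid for weak FARIMA residuals, motivating the modified critical values.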
APA, Harvard, Vancouver, ISO, and other styles
26

Huang, Yao-pseng, and 黃耀增. "Autocorrelation Based SNR Estimation." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/cm5475.

Full text
Abstract:
Master's thesis
National Sun Yat-sen University
Institute of Communications Engineering
96
Signal-to-noise ratio (SNR) estimation is one of the important research topics in wireless communications. In the receiver, many algorithms require SNR information to achieve optimal performance. In this thesis, an autocorrelation-based SNR estimator is proposed. The proposed method exploits the correlation of the symbol sequence and the uncorrelatedness of the noise sequence to separate the signal power from the received signal, and a curve-fitting method is used to predict the signal power. The mean and variance performance of the proposed SNR estimator is compared with that of conventional SNR estimators by computer simulations, considering additive white Gaussian noise and multipath Rayleigh fading channels with BPSK, 8PSK, 16QAM and 64QAM modulation schemes. According to the simulation results, the proposed method provides better performance than conventional methods in terms of both mean and mean-squared error.
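The idea can be sketched with a toy case (rectangular-pulse BPSK oversampled by 8, so the signal ACF is linear in the lag and a degree-1 fit suffices; the thesis's estimator handles general modulations and fading): for r = s + w with correlated s and white w, R_r(k) = R_s(k) for k ≥ 1, so a curve fitted to lags 1..K and extrapolated back to lag 0 recovers the signal power S, and N = R_r(0) − S.

```python
import numpy as np

def sample_acf(x, k):
    """Sample autocorrelation of x at lag k."""
    if k == 0:
        return np.dot(x, x) / len(x)
    return np.dot(x[:-k], x[k:]) / (len(x) - k)

def snr_estimate_db(r, n_lags=3, deg=1):
    """Fit R(k) over k = 1..n_lags and extrapolate to k = 0 for the signal
    power; white noise contributes only at lag 0."""
    lags = np.arange(1, n_lags + 1)
    rk = np.array([sample_acf(r, k) for k in lags])
    s_hat = np.polyval(np.polyfit(lags, rk, deg), 0.0)
    n_hat = sample_acf(r, 0) - s_hat
    return 10 * np.log10(s_hat / n_hat)

rng = np.random.default_rng(7)
symbols = rng.choice([-1.0, 1.0], size=5000)
s = np.repeat(symbols, 8)                      # oversampled BPSK, unit power
r = s + 0.5 * rng.standard_normal(len(s))      # true SNR = 1/0.25 ~ 6.02 dB
snr_db = snr_estimate_db(r)
```

Higher-degree fits and more lags trade bias against variance, which is where the curve-fitting choice in the thesis comes in.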
APA, Harvard, Vancouver, ISO, and other styles
27

Feng-ChengWu and 吳灃宸. "Unbiased Estimation of Numerical Derivative on Log-likelihood." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/459k47.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Hirukawa, Masayuki. "Heteroskedasticity and autocorrelation consistent covariance matrix estimation." 2004. http://www.library.wisc.edu/databases/connect/dissertations.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Pao, Hsiao-Yung, and 包孝永. "On autocorrelation estimation of high frequency squared returns." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/51598320107837477106.

Full text
Abstract:
Master's thesis
National Sun Yat-sen University
Department of Applied Mathematics
98
In this paper, we investigate the problem of estimating the autocorrelation of squared returns modeled by diffusion processes when the data are observed at non-equispaced discrete times. Throughout, we suppose that the stock price processes evolve in continuous time as Heston-type stochastic volatility processes and that transactions arrive randomly according to a Poisson process. In order to estimate the autocorrelation at a fixed delay, the original non-equispaced data are first synchronized; when imputing missing data, we adopt the previous-tick interpolation scheme. Asymptotic properties of the sample autocorrelation of squared returns based on the previous-tick synchronized data are investigated. Simulation studies are performed and applications to real examples are illustrated.
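The previous-tick scheme mentioned above assigns to each synchronization time the most recent observation at or before it; a sketch with toy data (times and values are invented):

```python
import numpy as np

def previous_tick(obs_times, obs_vals, grid):
    """Previous-tick interpolation onto a regular grid of target times."""
    idx = np.searchsorted(obs_times, grid, side="right") - 1
    idx = np.clip(idx, 0, None)   # before the first tick, reuse the first value
    return obs_vals[idx]

t = np.array([0.0, 0.9, 2.3, 2.4, 5.1])           # irregular transaction times
v = np.array([10.0, 11.0, 12.0, 13.0, 14.0])      # observed prices
grid = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])   # synchronization grid
synced = previous_tick(t, v, grid)                # -> [10. 11. 11. 13. 13. 13.]
```

The sample autocorrelation of squared returns is then computed on the synchronized series, which is where the asymptotic analysis in the thesis applies.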
APA, Harvard, Vancouver, ISO, and other styles
30

Wang, Rei-Yang, and 王瑞陽. "On Unbiased Risk Estimation of Random Effect ANOVA Model with Balanced Data." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/86726005649820252005.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Khan, Salman Ahmed. "Autocorrelation function based mobile velocity estimation in correlated Rayleigh MIMO channels." Thesis, 2008. http://spectrum.library.concordia.ca/976167/1/MR45308.pdf.

Full text
Abstract:
In upcoming 4th-generation mobile systems using multiple antennas, knowledge of the speed of the mobile will help adaptively allocate scarce system resources to users. Due to insufficient scattering in the propagation environment or insufficient antenna spacing at either the transmitter or receiver, Multiple Input Multiple Output (MIMO) channels are often correlated. Velocity estimation in MIMO channels has not received much attention up to now. On the other hand, a large number of schemes have been developed for velocity estimation in Single Input Single Output (SISO) systems. Some of these can be categorized as Autocorrelation Function (ACF) based schemes, which are easy to implement and give accurate velocity estimates. In this thesis, we focus on extending this existing class of ACF-based velocity estimation schemes to correlated MIMO channels, so that the benefits of ACF-based schemes can be obtained in commonly occurring correlated MIMO channels. In the first part of the thesis, we establish a performance reference by determining the performance of ACF-based schemes in uncorrelated MIMO channels. We then analyze the performance of ACF-based schemes in correlated MIMO channels using the full antenna set. Some loss in the accuracy of the velocity estimates is observed compared to the uncorrelated MIMO channel; to recover this loss, we present a channel-decorrelation-based recovery scheme. The second part of the thesis studies the extension of ACF-based schemes to correlated MIMO channels with antenna selection. The performance of the ACF-based schemes in this case is analyzed; a degradation larger than with the full antenna set is observed. Thereafter, a recovery scheme based on channel decorrelation is presented, which partially recovers the accuracy of the velocity estimates.
Thus, the work performed in this thesis enables accurate estimates of the mobile velocity to be obtained in correlated MIMO channels.
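One common ACF-based SISO-style scheme reads the mobile speed off the first zero crossing of the channel autocorrelation, which under the Clarke model is J0(2π f_D τ) with first zero at argument 2.4048. A sketch (the carrier frequency and Doppler below are made-up values; J0 is evaluated by its power series, adequate for small arguments):

```python
import numpy as np

def j0(x, terms=30):
    """Bessel function J0 via its power series (fine for small arguments)."""
    x = np.asarray(x, dtype=float)
    out = np.ones_like(x)
    term = np.ones_like(x)
    for m in range(1, terms):
        term = term * (-((x / 2.0) ** 2)) / m**2
        out = out + term
    return out

def velocity_from_acf(acf_vals, lags_s, carrier_hz):
    """Speed estimate from the first zero crossing of the channel ACF."""
    i = int(np.argmax(acf_vals <= 0))            # first non-positive ACF sample
    # linear interpolation to locate the crossing time tau0
    t0 = lags_s[i - 1] + (lags_s[i] - lags_s[i - 1]) * acf_vals[i - 1] / (
        acf_vals[i - 1] - acf_vals[i])
    f_d = 2.4048 / (2 * np.pi * t0)              # first zero of J0 at 2.4048
    return f_d * 3e8 / carrier_hz                # Doppler -> speed via wavelength

lags = np.arange(0.0, 0.01, 1e-5)
r = j0(2 * np.pi * 100.0 * lags)                 # ideal Clarke ACF, f_D = 100 Hz
v_est = velocity_from_acf(r, lags, carrier_hz=2e9)   # true speed is 15 m/s
```

In the correlated-MIMO setting studied above, the per-antenna ACFs are distorted by spatial correlation, which is exactly why a decorrelation step is applied before this kind of zero-crossing (or curvature) read-off.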
APA, Harvard, Vancouver, ISO, and other styles
32

Li, Po-Yu, and 黎博幼. "Cramér-Rao Bounds of Unbiased Channel Estimation for Amplify-and-Forward Relay Networks." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/72274200139111267727.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Communication Engineering
98
98
Cooperative communication has gained increasing popularity recently, since spatial diversity can be provided to mitigate fading over wireless transmission without raising the hardware complexity of mobile devices. The main concept of cooperative communication is that idle mobile devices, termed relay nodes, can be utilized to send data for the source node; the relay network thus forms a virtual antenna array in a distributed manner. In the literature, however, the performance analysis of channel estimation for relay networks has not yet been well addressed. In this thesis, the performance of channel estimation for amplify-and-forward (AF) relay networks is analyzed in terms of the Cramér-Rao bound (CRB), the well-known lower bound on the variance of unbiased estimators. First, we introduce the relaying protocols and the corresponding signal models. Then we derive the CRB of AF relay networks for unbiased centralized channel estimation and for unbiased distributed channel estimation, respectively. Finally, the influence of the relay parameters on the derived CRB is discussed in detail.
APA, Harvard, Vancouver, ISO, and other styles
33

Delpish, Ayesha Nneka Niu Xu-Feng. "A comparison of estimators in hierarchical linear modeling restricted maximum likelihood versus bootstrap via minimum norm quadratic unbiased estimators /." 2006. http://etd.lib.fsu.edu/theses/available/06262006-100559.

Full text
Abstract:
Thesis (Ph. D.)--Florida State University, 2006.
Advisor: Xu-Feng Niu, Florida State University, College of Arts and Sciences, Dept. of Statistics. Title and description from dissertation home page (viewed Sept. 18, 2006). Document formatted into pages; contains ix, 116 pages. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
34

Dégerine, Serge. "Fonction d'autocorrélation partielle et estimation autorégressive dans le domaine temporel." Phd thesis, 1988. http://tel.archives-ouvertes.fr/tel-00243761.

Full text
Abstract:
A study, in a probabilistic and statistical framework, of the partial autocorrelation function of a real, scalar, discrete-time, centered and second-order stationary process. In a first part, we give a very complete description of the different aspects of this function in the probabilistic investigation of the structure of the process. Our presentation is made essentially in the time domain; however, the choice of a geometric language, in the Hilbert space spanned by the components of the process, facilitates the link with the spectral domain. We emphasize the privileged role of the autoregressive case. We also consider vector processes, for which we propose the notion of canonical partial autocorrelation function. The second part is devoted to the contributions of the partial autocorrelation function to the estimation of the temporal structure of the process. The need to resort to techniques other than the usual one based on empirical autocorrelations arises when the observed series is short, even when a sample of series is available, or when the series comes from a model close to singularity. We focus on the maximum likelihood method, for which we specify the conditions of use (existence, uniqueness, ...), and we propose a relaxation method for its implementation in the case of a sample of short series. Finally, we analyze and compare the different time-domain autoregressive estimation methods and observe the good performance of the one based on the empirical version of the partial autocorrelations that we propose.
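The partial autocorrelation function discussed above can be computed from the autocovariances by the Durbin-Levinson recursion, whose reflection coefficients are exactly the partial autocorrelations; a sketch, checked on an AR(1) process where only the first partial autocorrelation is nonzero:

```python
import numpy as np

def pacf_from_autocov(gamma, m):
    """Partial autocorrelations pi_1..pi_m from autocovariances gamma_0..gamma_m
    via the Durbin-Levinson recursion."""
    pi = np.zeros(m)
    phi = np.zeros(m)          # current autoregressive fit coefficients
    v = gamma[0]               # current innovation variance
    for k in range(1, m + 1):
        acc = gamma[k] - np.dot(phi[:k - 1], gamma[k - 1:0:-1])
        pi[k - 1] = acc / v
        phi[:k - 1] = phi[:k - 1] - pi[k - 1] * phi[:k - 1][::-1]
        phi[k - 1] = pi[k - 1]
        v *= 1 - pi[k - 1] ** 2
    return pi

phi_true = 0.6
gamma = phi_true ** np.arange(6) / (1 - phi_true**2)   # AR(1) autocovariances
pacf = pacf_from_autocov(gamma, 5)                      # ~ [0.6, 0, 0, 0, 0]
```

The recursion also returns, implicitly, the successive autoregressive fits and prediction variances, which is the link between the partial autocorrelations and time-domain autoregressive estimation emphasized in the thesis.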
APA, Harvard, Vancouver, ISO, and other styles
35

Krishnan, Sunder Ram. "Optimum Savitzky-Golay Filtering for Signal Estimation." Thesis, 2013. http://hdl.handle.net/2005/3293.

Full text
Abstract:
Motivated by the classic works of Charles M. Stein, we focus on developing risk-estimation frameworks for denoising problems in both one and two dimensions. We assume a standard additive noise model, and formulate the denoising problem as one of estimating the underlying clean signal from noisy measurements by minimizing a risk corresponding to a chosen loss function. Our goal is to incorporate perceptually-motivated loss functions wherever applicable, as in the case of speech enhancement, with the squared error loss being considered for the other scenarios. Since the true risks are observed to depend on the unknown parameter of interest, we circumvent the roadblock by deriving finite-sample unbiased estimators of the corresponding risks based on Stein's lemma. We establish the link between the multivariate parameter estimation problem addressed by Stein and our denoising problem, and derive estimators of the oracle risks. In all cases, optimum values of the parameters characterizing the denoising algorithm are determined by minimizing Stein's unbiased risk estimator (SURE). The key contribution of this thesis is the development of a risk-estimation approach for choosing the two critical parameters affecting the quality of nonparametric regression, namely, the order and bandwidth/smoothing parameters. This is a classic problem in statistics, and certain algorithms relying on derivation of suitable finite-sample risk estimators for minimization have been reported in the literature (note that all these works consider the mean squared error (MSE) objective). We show that a SURE-based formalism is well-suited to the regression parameter selection problem, and that the optimum solution guarantees near-minimum MSE (MMSE) performance. We develop algorithms for both globally and locally choosing the two parameters, the latter referred to as spatially-adaptive regression.
We observe that the parameters are so chosen as to tradeoff the squared bias and variance quantities that constitute the MSE. We also indicate the advantages accruing out of incorporating a regularization term in the cost function in addition to the data error term. In the more general case of kernel regression, which uses a weighted least-squares (LS) optimization, we consider the applications of image restoration from very few random measurements, in addition to denoising of uniformly sampled data. We show that local polynomial regression (LPR) becomes a special case of kernel regression, and extend our results for LPR on uniform data to non-uniformly sampled data also. The denoising algorithms are compared with other standard, performant methods available in the literature both in terms of estimation error and computational complexity. A major perspective provided in this thesis is that the problem of optimum parameter choice in nonparametric regression can be viewed as the selection of optimum parameters of a linear, shift-invariant filter. This interpretation is provided by deriving motivation out of the hallmark paper of Savitzky and Golay and Schafer’s recent article in IEEE Signal Processing Magazine. It is worth noting that Savitzky and Golay had shown in their original Analytical Chemistry journal article, that LS fitting of a fixed-order polynomial over a neighborhood of fixed size is equivalent to convolution with an impulse response that is fixed and can be pre-computed. They had provided tables of impulse response coefficients for computing the smoothed function and smoothed derivatives for different orders and neighborhood sizes, the resulting filters being referred to as Savitzky-Golay (S-G) filters. Thus, we provide the new perspective that the regression parameter choice is equivalent to optimizing for the filter impulse response length/3dB bandwidth, which are inversely related. 
We observe that the MMSE solution is such that the S-G filter chosen is of longer impulse response length (equivalently smaller cutoff frequency) at relatively flat portions of the noisy signal so as to smooth noise, and vice versa at locally fast-varying portions of the signal so as to capture the signal patterns. Also, we provide a generalized S-G filtering viewpoint in the case of kernel regression. Building on the S-G filtering perspective, we turn to the problem of dynamic feature computation in speech recognition. We observe that the methodology employed for computing dynamic features from the trajectories of static features is in fact derivative S-G filtering. With this perspective, we note that the filter coefficients can be pre-computed, and that the whole problem of delta feature computation becomes efficient. Indeed, we observe an advantage by a factor of 10^4 when using S-G filtering instead of actual LS polynomial fitting and evaluation. Thereafter, we study the properties of first- and second-order derivative S-G filters of certain orders and lengths experimentally. The derivative filters are bandpass due to the combined effects of LPR and derivative computation, which are lowpass and highpass operations, respectively. The first- and second-order S-G derivative filters are also observed to exhibit an approximately constant-Q property. We perform a TIMIT phoneme recognition experiment comparing the recognition accuracies obtained using S-G filters and the conventional approach followed in HTK, where Furui's regression formula is made use of. The recognition accuracies for both cases are almost identical, with S-G filters of certain bandwidths and orders registering a marginal improvement. The accuracies are also observed to improve with longer filter lengths, for a particular order.
In terms of computation latency, we note that S-G filtering achieves delta and delta-delta feature computation in parallel by linear filtering, whereas they need to be obtained sequentially in the case of the standard regression formulas used in the literature. Finally, we turn to the problem of speech enhancement, where we are interested in denoising using perceptually-motivated loss functions such as Itakura-Saito (IS). We propose to perform enhancement in the discrete cosine transform domain using risk minimization. The cost functions considered are non-quadratic, and derivation of the unbiased estimator of the risk corresponding to the IS distortion is achieved using an approximate Taylor-series analysis under a high signal-to-noise ratio assumption. The exposition is general, since we focus on an additive noise model with the noise density assumed to fall within the exponential class of density functions, which comprises most of the common densities. The denoising function is assumed to be pointwise linear (a modified James-Stein (MJS) estimator), and parallels between Wiener filtering and the optimum MJS estimator are discussed.
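The precomputation that Savitzky and Golay described amounts to a few lines: LS-fit a degree-p polynomial on a symmetric window and read the centre value (or derivative) off a row of the pseudoinverse. The sketch below reproduces the classic published table values for the 5-point quadratic filters:

```python
import numpy as np
from math import factorial

def savgol_coeffs(window, order, deriv=0):
    """FIR coefficients whose dot product with a window of samples gives the
    LS polynomial fit's deriv-th derivative at the window centre."""
    m = (window - 1) // 2
    # Vandermonde design matrix on centred sample positions -m..m
    A = np.vander(np.arange(-m, m + 1), order + 1, increasing=True)
    # the deriv-th row of the pseudoinverse extracts the deriv-th poly coefficient
    return np.linalg.pinv(A)[deriv] * factorial(deriv)

h_smooth = savgol_coeffs(5, 2)      # classic [-3, 12, 17, 12, -3] / 35
h_deriv = savgol_coeffs(5, 2, 1)    # classic [-2, -1, 0, 1, 2] / 10
```

Since the coefficients depend only on (window, order, deriv), delta-feature computation reduces to fixed FIR filtering, which is the efficiency argument made above.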
APA, Harvard, Vancouver, ISO, and other styles
36

Lain, Michal. "Robustní odhady autokorelační funkce." Master's thesis, 2020. http://www.nusl.cz/ntk/nusl-415917.

Full text
Abstract:
The autocorrelation function is a basic tool for time series analysis. The classical estimate is very sensitive to outliers and can lead to misleading results. This thesis deals with robust estimates of the autocorrelation function that are more resistant to outliers than the classical estimate. The following approaches are presented: leaving the outliers out of the data, replacing the average with the median, data transformation, estimation of another coefficient, robust estimation of the partial autocorrelation function, and linear regression. The thesis describes the applicability of the presented methods, their advantages and disadvantages, and the necessary assumptions. All the approaches are compared in a simulation study and applied to real financial data.
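A small illustration of why robustification matters, using a rank transformation as one instance of the transformation approach (the data are simulated; this is not the thesis's exact estimator):

```python
import numpy as np

def acf1_classical(x):
    """Classical lag-1 sample autocorrelation."""
    xc = x - x.mean()
    return np.dot(xc[:-1], xc[1:]) / np.dot(xc, xc)

def acf1_rank(x):
    """Lag-1 autocorrelation computed on the ranks of the series."""
    ranks = np.argsort(np.argsort(x)).astype(float)
    return acf1_classical(ranks)

rng = np.random.default_rng(8)
e = rng.standard_normal(500)
x = np.empty(500)
x[0] = e[0]
for i in range(1, 500):
    x[i] = 0.8 * x[i - 1] + e[i]     # AR(1): true lag-1 autocorrelation 0.8

x_out = x.copy()
x_out[250] += 200.0                  # a single gross outlier
```

A single contaminated point drives the classical estimate toward zero (its square dominates the denominator), while the rank-based estimate barely moves, which is the behaviour the simulation study above quantifies across methods.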
37

Moura, Ricardo Pinto. "Likelihood-based Inference for Multivariate Regression Models using Synthetic Data." Doctoral thesis, 2016. http://hdl.handle.net/10362/19694.

Full text
Abstract:
Likelihood-based exact inference procedures are derived for the multivariate regression model, for singly and multiply imputed synthetic data generated via Posterior Predictive Sampling (PPS), via a newly proposed sampling method, which will be called Fixed-Posterior Predictive Sampling (FPPS), and via Plug-in sampling. By covering the single-imputation case, the newly developed procedures fill a gap in the existing literature, where inferential methods are available only for multiple imputation, and, being based on exact distributions, they may even be applied when the sample size is small. Simulation studies compare the results obtained from all the proposed exact inferential procedures, and also compare these with the results obtained from adapting Reiter's combination rule to multiply imputed synthetic datasets. An application using U.S. 2000 Current Population Survey data is discussed, and measures of privacy are presented and compared across all methods.
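As a hedged sketch of the Plug-in sampling idea in the simplest (univariate response) regression case, assuming nothing about the thesis's actual code: fit the model on the confidential data, then release synthetic responses drawn from the fitted model. PPS would additionally draw the parameters from their posterior before sampling, and multiple imputation would repeat the draw M times; all names below are illustrative.

```python
import numpy as np

def plugin_synthetic(X, y, rng):
    """Plug-in sampling sketch for linear regression.

    Fit beta_hat and sigma2_hat on the original data, then draw the
    synthetic response from the fitted model y* ~ N(X beta_hat, sigma2_hat I).
    """
    n, p = X.shape
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta_hat
    sigma2_hat = resid @ resid / (n - p)        # unbiased residual variance
    return X @ beta_hat + rng.normal(scale=np.sqrt(sigma2_hat), size=n)
```

Refitting the model on the synthetic data recovers coefficient estimates close to those from the original data, which is the property that valid inference procedures for synthetic data then quantify exactly.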
38

Kubínová, Zuzana. "Vliv zvýšené koncentrace CO2 a ozářenosti na kvantitativní parametry mezofylových buněk smrku ztepilého" [The effect of elevated CO2 concentration and irradiation on quantitative parameters of mesophyll cells of Norway spruce]. Master's thesis, 2010. http://www.nusl.cz/ntk/nusl-296294.

Full text
Abstract:
KUBÍNOVÁ, Zuzana. The effect of elevated CO2 concentration and irradiation on quantitative parameters of mesophyll cells of Norway spruce. Prague, 2010. 74 p. Master's thesis. Faculty of Science, Charles University in Prague. Abstract: The aim of the present thesis was to choose and adapt a methodology for counting particles in 3D space suitable for unbiased estimation of the number of chloroplasts in needle mesophyll cells. The disector method was used for the first time to determine the number of chloroplasts. This method enables unbiased estimation of chloroplast number in needle volume from optical sections captured from fresh free-hand sections with a confocal microscope; the sections did not need any pre-processing. Another aim was to compare selected photosynthetic and anatomical characteristics of sun and shade Norway spruce needles grown under different CO2 concentrations. The trees were grown for eight years in ambient (increasing during the experiment from 357 up to 370 µmol CO2 · mol⁻¹) or elevated (700 µmol · mol⁻¹) CO2 concentration in special glass domes at an experimental research site of the Institute of Systems Biology and Ecology, Academy of Sciences of the Czech Republic, at Bílý Kříž in the Moravskoslezské Beskydy mountains. The sun needles...
