Dissertations / Theses on the topic 'Standard error of measurement'

Consult the top 50 dissertations / theses for your research on the topic 'Standard error of measurement.'


1

Tataryn, Douglas Joseph 1960. "Standard errors of measurement, confidence intervals, and the distribution of error for the observed score curve." Thesis, The University of Arizona, 1989. http://hdl.handle.net/10150/277223.

Full text
Abstract:
This paper reviews the basic literature on the suggested applications of the standard error of measurement (SEM) and points out discrepancies among those suggestions. In the process of determining the efficacy and appropriateness of each of the proposals, a formula to determine the distribution of error for the observed score curve is derived. The final recommendation, which is congruent with Cronbach, Gleser, Nanda and Rajaratnam's (1972) recommendations, is not to use the SEM to create confidence intervals around the observed score: the predicted true score and the standard error of the prediction are better suited (unbiased and more efficient) to the task of estimating a confidence interval that will contain an individual's true score. Finally, the distribution of future observed scores around the expected true score is derived.
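The contrast Tataryn draws can be illustrated with the two standard interval constructions from classical test theory. The sketch below is not from the thesis; it uses the textbook CTT formulas (SEM = SD·√(1−ρ), predicted true score regressed toward the mean, and the standard error of estimation SD·√(ρ(1−ρ))) with made-up score, mean, SD and reliability values:

```python
import math

def ctt_intervals(x, mean, sd, rel, z=1.96):
    """Two classical-test-theory 95% intervals for an observed score x.

    sem_ci : naive interval centred on the observed score, using the
             standard error of measurement  SEM = sd * sqrt(1 - rel).
    est_ci : interval centred on the predicted (regressed) true score,
             using the standard error of estimation sd * sqrt(rel*(1-rel)).
    """
    sem = sd * math.sqrt(1 - rel)
    sem_ci = (x - z * sem, x + z * sem)
    true_hat = mean + rel * (x - mean)          # regression toward the mean
    see = sd * math.sqrt(rel * (1 - rel))
    est_ci = (true_hat - z * see, true_hat + z * see)
    return sem_ci, est_ci

# For an extreme observed score, the regressed interval is both narrower
# and pulled toward the group mean (assumed example values).
sem_ci, est_ci = ctt_intervals(x=130, mean=100, sd=15, rel=0.90)
```

For any reliability below 1, √(ρ(1−ρ)) < √(1−ρ), which is the efficiency argument behind the recommendation.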
APA, Harvard, Vancouver, ISO, and other styles
2

Durney, Ann Wells. "Truncation and its effect on standard error of correlation coefficients." Thesis, The University of Arizona, 1990. http://hdl.handle.net/10150/277950.

Full text
Abstract:
A Monte Carlo study was conducted to investigate the effect of truncation of score distributions on systematic bias and random error of correlation coefficient distributions. The findings were twofold: Correlation decreases systematically due to increasing truncation; and the standard error of the correlation coefficient, which is a measure of random error, increases due to increasing truncation.
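A Monte Carlo design of this kind can be sketched in a few lines. The snippet below is illustrative only, with assumed sample sizes, correlation and truncation point rather than the thesis's actual design; it truncates one variable of a bivariate normal sample and reproduces both reported effects, a systematically lower mean correlation and a larger standard error:

```python
import numpy as np

rng = np.random.default_rng(0)

def corr_sampling(n_reps=2000, n=100, rho=0.6, cut=None):
    """Sample correlation coefficients for bivariate normal data,
    optionally truncating the X distribution (keep only x > cut)."""
    rs = []
    for _ in range(n_reps):
        x = rng.standard_normal(4 * n)          # oversample, then truncate
        y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(4 * n)
        if cut is not None:
            keep = x > cut
            x, y = x[keep], y[keep]
        x, y = x[:n], y[:n]
        rs.append(np.corrcoef(x, y)[0, 1])
    rs = np.asarray(rs)
    return rs.mean(), rs.std(ddof=1)   # mean r, empirical standard error

full_mean, full_se = corr_sampling(cut=None)
trunc_mean, trunc_se = corr_sampling(cut=0.5)   # truncated score distribution
```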
3

Choi, Jinah. "Conditional standard errors of measurement, confidence interval, and reliability for individual level student growth percentiles." Diss., University of Iowa, 2018. https://ir.uiowa.edu/etd/6074.

Full text
Abstract:
The importance of measuring and monitoring educational achievement longitudinally has led to a proliferation of growth models. The Student Growth Percentile (SGP) is one score metric that helps to make inferences about a student's current relative status given prior test scores. The major purpose of this study was to provide two Conditional Standard Error of Measurement (CSEM) estimation approaches for individual-level SGPs, with theoretical justifications and empirical elaborations of each. Estimation approaches were developed under two commonly used paradigms: Classical Test Theory (CTT) and Item Response Theory (IRT). Within each paradigm, measurement error was conceptualized as the variability of individual-level test scores across hypothetical repeated measurements using parallel test forms. Under the CTT paradigm, the measurement errors were assumed to follow a binomial model; under the IRT paradigm, a compound binomial model. In addition to CSEMs, this study also developed procedures for constructing individual-level SGP confidence intervals and for estimating reliability. The proposed methods were demonstrated using data from a large-scale assessment of mathematics achievement in Grades 3 to 4. Pertinent tables and graphs of outcome statistics showed that the mean and median CSEM values for individual SGPs were sizable and that test length influenced the actual CSEM values for SGPs, but there were only small differences in CSEM values between the two types of conversion relationships. The CSEM values on the SGP scale for each academic peer group were distributed in an arch shape. Also, SGP reliability coefficients under IRT were similar to those under CTT across the three tests. The results of these demonstrations were used to evaluate measurement errors in the context of the practical and policy implications of SGP use.
In the final chapter, the practical use of SGPs and important considerations regarding measurement issues are provided. Further research related to SGPs using different subjects or grade levels, as well as simulation studies on the effectiveness of the developed methodologies, is also discussed.
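Under a binomial error model of the kind the CTT paradigm assumes, the conditional SEM of a number-correct score has a simple closed form (Lord's binomial formula). The sketch below is background to the abstract, not the thesis's SGP-specific estimator:

```python
import math

def binomial_csem(x, n):
    """Lord's binomial-error-model conditional SEM for a
    number-correct score x on an n-item test."""
    return math.sqrt(x * (n - x) / (n - 1))

# CSEM is zero at the score extremes and largest mid-range, which
# produces the arch-shaped error profile mentioned in the abstract.
profile = [binomial_csem(x, 40) for x in range(0, 41, 5)]
```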
4

Filipe, Vitor. "Cinemática tri-dimensional do tronco durante uma tarefa de lifting: estudo da fiabilidade teste-reteste e diferença mínima detetável em indivíduos saudáveis." Master's thesis, Instituto Politécnico de Setúbal. Escola Superior de Saúde, 2017. http://hdl.handle.net/10400.26/19914.

Full text
Abstract:
Research project report submitted in fulfilment of the requirements for the degree of Master in Physiotherapy, specialization in Physiotherapy in Musculoskeletal Conditions.
INTRODUCTION AND AIM: Low back pain (LBP) is one of the musculoskeletal conditions that leads to the highest levels of disability among individuals. Due to the established relationship between movement, pain and disability, the assessment of lumbo-pelvic movement is extremely important during the examination of a patient with LBP. The study of movement patterns in healthy individuals is therefore important in order to create an empirical basis for differentiating between normal and pathological movement. Although knowledge exists regarding lumbo-pelvic patterns in healthy individuals during different daily activities, information regarding the psychometric properties of the measurement tools used in their assessment is lacking. Thus, this study aims to evaluate the test-retest reliability, standard error of measurement (SEM) and minimal detectable change (MDC) of 3D kinematic analysis of the trunk and lower limb during a lifting task in healthy individuals. METHODS: The present study used a sample of 14 healthy individuals, who participated in two measurement sessions separated by 7 days. Each session consisted of the collection and analysis of trunk and lower limb 3D kinematics during a lifting task. Intraclass correlation coefficient (ICC) values with their respective 95% CIs, SEM values, 95% limits of agreement (95% LOA) and SEM% were calculated. Finally, the absolute and percentage values of the MDC were computed. RESULTS: High test-retest reliability (ICC > 0.80) as well as low SEM values (< 4°) were obtained for most of the peak joint angles. SEM% values ranged from 1.7 to 619% for the maximum and minimum joint angles, and from 6.9 to 37.8% for range of motion (ROM) in the different movement planes. Finally, absolute MDC values for the maximum and minimum joint angles ranged from 2.2 to 23°, with MDC% ranging from 4.7 to 1715.7%. Absolute MDC values for ROM in the different planes ranged from 1.5 to 19.7°, with MDC% ranging from 419 to 104.7%.
DISCUSSION AND CONCLUSION: The results of this study show high test-retest reliability and low measurement error for trunk and lower limb joint angles, particularly for the ROM parameters. High SEM% and MDC% values were also found, especially for the horizontal-plane parameters. Despite this, the obtained results seem to support the use of 3D analysis of the trunk and lower limb during a lifting task, particularly in research contexts.
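The SEM and MDC statistics reported in reliability studies like this one are typically derived from the test-retest ICC and the between-subject SD. A minimal sketch with assumed example values (not data from the thesis):

```python
import math

def sem_mdc(sd, icc, z=1.96):
    """Standard error of measurement and 95% minimal detectable change
    from a between-subject SD and a test-retest ICC."""
    sem = sd * math.sqrt(1 - icc)
    mdc95 = z * math.sqrt(2) * sem    # sqrt(2): error in both sessions
    return sem, mdc95

# Hypothetical peak joint angle with SD = 8 degrees and ICC = 0.90.
sem, mdc95 = sem_mdc(sd=8.0, icc=0.90)
sem_pct = 100 * sem / 45.0            # SEM% relative to an assumed mean of 45 deg
```

A change in an individual's score smaller than MDC95 cannot be distinguished from measurement error, which is why high MDC% values limit clinical (as opposed to research) use.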
5

Gomes, Luís. "Cinemática tri-dimensional do tronco durante uma tarefa de lifting: estudo da fiabilidade teste-reteste e diferença mínima detetável em indivíduos com dor lombar crónica." Master's thesis, Instituto Politécnico de Setúbal. Escola Superior de Saúde, 2017. http://hdl.handle.net/10400.26/18888.

Full text
Abstract:
Research project report submitted in fulfilment of the requirements for the degree of Master in Physiotherapy, specialization in Physiotherapy in Musculoskeletal Conditions.
INTRODUCTION: Chronic low back pain (CLBP) is a musculoskeletal condition that causes a great level of disability among individuals. Along with pain and disability, individuals with CLBP present changes in movement patterns (kinematic, kinetic and electromyographic) during many daily activities, especially those involving trunk flexion/extension movements such as lifting. Although much of the knowledge about these changes has been obtained using instruments considered the gold standard for movement analysis (regional and three-dimensional (3D) measurement of the trunk and lower limbs), questions about the psychometric properties of these measures persist. Gaining knowledge about properties such as test-retest reliability, the standard error of measurement (SEM) and the minimal detectable change (MDC) is therefore extremely important so that these measurements can be used in the most reliable and accurate way in the study of movement in individuals with CLBP. AIM: To evaluate the test-retest reliability, SEM and MDC of 3D kinematic analysis of the trunk and lower limb during a lifting task in individuals with CLBP. METHODOLOGY: The present study used a sample of 14 individuals with CLBP, who participated in two measurement sessions separated by a mean of 7.6 ± 1.8 days. Each session consisted of the collection and analysis of trunk and lower limb 3D kinematics during a lifting task. Intraclass correlation coefficient (ICC) values with their respective 95% CIs, SEM values with respective 95% limits of agreement (95% LOA), and SEM% were calculated. Finally, the absolute and percentage values of the MDC were calculated. RESULTS: High test-retest reliability (ICC > 0.80) as well as low SEM values (< 4°) were obtained for most of the peak joint angles. SEM% showed a large variation, ranging from 1.57 to 23453.74%. Finally, absolute MDC ranged from 2.12 to 20.22°, and MDC% from 4.36 to 65010.46%.
DISCUSSION AND CONCLUSION: The present study demonstrated high test-retest reliability and low SEM for the peak joint angles of the trunk and lower limb. On the other hand, high SEM% and MDC% values were also found, especially for movements in the horizontal plane. Despite this, the results obtained support the use of this measure in the assessment of the lifting task in individuals with CLBP, at least in research contexts.
6

Šrámek, Jan. "Vyjadřování nejistoty u přesných délkových měření II." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2011. http://www.nusl.cz/ntk/nusl-229763.

Full text
Abstract:
This diploma thesis is intended to inform readers interested in the uncertainty of length measurements about the practical issues that arise when using a frequently used calibration method, the Tesa NPL measuring instrument. The measurements were carried out at the Czech Metrology Institute (CMI), where the author works; he could therefore draw on his own experience as well as that of his superiors and colleagues. The author regards the national metrology institute (CMI) as a sufficient guarantee of the material presented here, both in terms of professional qualification and of the formal correctness of this thesis.
7

Bashir, Saghir Ahmed. "Measurement error in epidemiology." Thesis, University of Cambridge, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.264544.

Full text
8

Viprey, Fabien. "Modélisation et caractérisation des défauts de structure de machine-outil 5 axes pour la mesure in-process." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLN071/document.

Full text
Abstract:
In-process metrology consists in obtaining measurement data directly within the manufacturing process. It responds to an increasing need of manufacturers to carry out on-line measurements, during or between machining operations, by using the means of production to measure the machined part. Monitoring sources of measurement error such as geometric errors is a prerequisite for ensuring traceable dimensional metrology directly on the machine tool. This thesis deals with the geometric modeling of a 5-axis machine tool based on a standardized parameterization of geometric errors. This model is simulated and simplified by the use of a virtual machine, developed as a tool to help understand and visualize the effects of geometric errors on the volumetric error. A new thermo-invariant material standard, the Multi-Feature Bar, has been developed. After its calibration and a European intercomparison, it provides direct metrological traceability to the SI metre for dimensional measurement on machine tools in a hostile environment. The identification of three intrinsic parameters of this standard, coupled with a measurement procedure, ensures complete and traceable identification of the motion errors of linear axes. The identification of position and orientation errors between axes is then based on an analysis of the combinations of parameters needed to best characterize the volumetric error. A procedure for identifying the model parameters is proposed, minimizing the time drift of the structural loop and the effects of the previously identified motion errors. A sensitivity analysis of the measurement procedure settings and of noise effects ensures the quality of the proposed identification.
9

Ratzer, Edward Alexander. "Error-correction on non-standard communication channels." Thesis, University of Cambridge, 2004. https://www.repository.cam.ac.uk/handle/1810/237471.

Full text
Abstract:
Many communication systems are poorly modelled by the standard channels assumed in the information theory literature, such as the binary symmetric channel or the additive white Gaussian noise channel. Real systems suffer from additional problems including time-varying noise, cross-talk, synchronization errors and latency constraints. In this thesis, low-density parity-check codes and codes related to them are applied to non-standard channels. First, we look at time-varying noise modelled by a Markov channel. A low-density parity-check code decoder is modified to give an improvement of over 1dB. Secondly, novel codes based on low-density parity-check codes are introduced which produce transmissions with Pr(bit = 1) ≠ Pr(bit = 0). These non-linear codes are shown to be good candidates for multi-user channels with crosstalk, such as optical channels. Thirdly, a channel with synchronization errors is modelled by random uncorrelated insertion or deletion events at unknown positions. Marker codes formed from low-density parity-check codewords with regular markers inserted within them are studied. It is shown that a marker code with iterative decoding has performance close to the bounds on the channel capacity, significantly outperforming other known codes. Finally, coding for a system with latency constraints is studied. For example, if a telemetry system involves a slow channel some error correction is often needed quickly whilst the code should be able to correct remaining errors later. A new code is formed from the intersection of a convolutional code with a high rate low-density parity-check code. The convolutional code has good early decoding performance and the high rate low-density parity-check code efficiently cleans up remaining errors after receiving the entire block. Simulations of the block code show a gain of 1.5dB over a standard NASA code.
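For reference, the "standard channels" the thesis moves beyond have simple closed-form capacities. A brief sketch (textbook information-theory formulas, not code from the thesis):

```python
import math

def h2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """Capacity (bits/use) of the binary symmetric channel
    with crossover probability p."""
    return 1.0 - h2(p)

def awgn_capacity(snr):
    """Capacity (bits/use) of the real additive white Gaussian
    noise channel at a given signal-to-noise ratio."""
    return 0.5 * math.log2(1 + snr)
```

Time-varying noise, crosstalk and synchronization errors have no such closed forms, which is why the thesis evaluates iteratively decoded codes against numerical bounds instead.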
10

Wang, Qiong. "Robust Estimation via Measurement Error Modeling." NCSU, 2005. http://www.lib.ncsu.edu/theses/available/etd-08112005-222926/.

Full text
Abstract:
We introduce a new method for robustifying inference that can be applied in any situation where a parametric likelihood is available. The key feature is that data from the postulated parametric models are assumed to be measured with error, where the measurement error distribution is chosen to produce the occasional gross errors found in data. We show that the tails of the error-contamination model control the properties (boundedness, redescendingness) of the resulting influence functions, with heavier tails in the error-contamination model producing more robust estimators. In the application to location-scale models with independent and identically distributed data, the resulting analytically intractable likelihoods are approximated via Monte Carlo integration. In the application to time series models, we propose a Bayesian approach to the robust estimation of time series parameters. We use Markov chain Monte Carlo (MCMC) to estimate the parameters of interest and also the gross errors; the latter are used as outlier diagnostics.
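The point that a heavier-tailed error model yields a more robust estimator can be illustrated on a simple location problem. The sketch below is illustrative only; it fits a t-likelihood location by iteratively reweighted least squares rather than the thesis's Monte Carlo integration or MCMC, and all data values are made up:

```python
import numpy as np

rng = np.random.default_rng(2)

def t_location(x, nu=3.0, scale=1.0, iters=100):
    """Location estimate under a heavy-tailed t(nu) error model via
    iteratively reweighted least squares; gross errors get tiny weights."""
    mu = np.median(x)
    for _ in range(iters):
        w = (nu + 1) / (nu + ((x - mu) / scale) ** 2)
        mu = np.sum(w * x) / np.sum(w)
    return mu

# Clean N(0,1) sample contaminated by a few gross errors at +50.
x = np.concatenate([rng.standard_normal(200), np.full(5, 50.0)])
naive = x.mean()          # normal likelihood: pulled toward the outliers
robust = t_location(x)    # heavy-tailed likelihood: stays near 0
```

The downweighting factor `w` is exactly a bounded, redescending influence on each observation, mirroring the influence-function argument in the abstract.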
11

Johansson, Fredrik. "Essays on measurement error and nonresponse /." Uppsala : Department of Economics, Uppsala University, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-7920.

Full text
12

Boaventura, Guimareas Dumangane Montezuma. "Essays on duration response measurement error." Thesis, University of Bristol, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.368683.

Full text
13

Lo, Sau Yee. "Measurement error in logistic regression model /." View abstract or full-text, 2004. http://library.ust.hk/cgi/db/thesis.pl?MATH%202004%20LO.

Full text
Abstract:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2004.
Includes bibliographical references (leaves 82-83). Also available in electronic version. Access restricted to campus users.
14

AHMAD, SHOAIB. "Finite Precision Error in FPGA Measurement." Thesis, Linnéuniversitetet, Institutionen för fysik och elektroteknik (IFE), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-49646.

Full text
Abstract:
Finite precision error in digital signal processing places a threshold on the quality of the processed signal, and it is important to agree on an acceptable outcome while paying for it in terms of power and performance. This project deals with the design and implementation of FIR and IIR digital filters, which are then used by a measurement system to correctly measure different parameters. Compared to analog filters, these digital filters give more precise and accurate results, along with flexibility with respect to expected hardware and environmental changes. The error is exposed, the filters are implemented to meet the requirements of a measurement system using finite precision arithmetic, and the results are verified in MATLAB. Moreover, with the help of simulations, a comparison between FIR and IIR digital filters is presented.

15

Fang, Xiaoqiong. "Mixtures-of-Regressions with Measurement Error." UKnowledge, 2018. https://uknowledge.uky.edu/statistics_etds/36.

Full text
Abstract:
The finite mixture model has been studied for a long time; however, traditional methods assume that the variables are measured without error. The mixtures-of-regressions model with measurement error poses challenges to statisticians, since both the mixture structure and the presence of measurement error can lead to inconsistent estimates of the regression coefficients. To resolve this inconsistency, we propose a series of methods to estimate the mixture likelihood of the mixtures-of-regressions model when there is measurement error in both the responses and the predictors. Different estimators of the parameters are derived and compared with respect to their relative efficiencies. The simulation results show that the proposed estimation methods work well and improve the estimation process.
16

Cao, Chendi. "Linear regression with Laplace measurement error." Kansas State University, 2016. http://hdl.handle.net/2097/32719.

Full text
Abstract:
Master of Science
Statistics
Weixing Song
In this report, an improved estimation procedure for the regression parameter in simple linear regression models with Laplace measurement error is proposed. The estimation procedure is made feasible by a Tweedie-type equality established for E(X|Z), where Z = X + U, X and U are independent, and U follows a Laplace distribution. When the density function of X is unknown, a kernel estimator for E(X|Z) is constructed in the estimation procedure. A leave-one-out cross-validation bandwidth selection method is designed. The finite-sample performance of the proposed estimation procedure is evaluated by simulation studies. A comparison study is also conducted to show the superiority of the proposed estimation procedure over some existing estimation methods.
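For context, the inconsistency that measurement error induces in the naive least-squares slope, and the classical moment-based correction when the error variance is known, can be sketched as follows (illustrative only, with assumed parameter values; Cao's procedure is instead based on the Tweedie-type equality for E(X|Z)):

```python
import numpy as np

rng = np.random.default_rng(1)

# True model: Y = 1 + 2*X + eps, but we observe Z = X + U with
# Laplace measurement error U.  The naive slope is attenuated by the
# reliability ratio var(X) / (var(X) + var(U)); dividing by that ratio
# (assumed known here) gives the classical method-of-moments correction.
n = 20000
x = rng.standard_normal(n)
u = rng.laplace(scale=0.5, size=n)          # var(U) = 2 * 0.5**2 = 0.5
z = x + u
y = 1 + 2 * x + 0.5 * rng.standard_normal(n)

naive = np.cov(z, y)[0, 1] / np.var(z, ddof=1)
lam = 1.0 / (1.0 + 0.5)                     # reliability var(X)/var(Z)
corrected = naive / lam
```

The kernel-based procedure in the report avoids having to assume the density of X, which this moment sketch sidesteps by simulating it.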
17

Hirst, William Mark. "Outcome measurement error in survival analysis." Thesis, University of Liverpool, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.366352.

Full text
18

Breitner, Susanne. "Time-varying coefficient models and measurement error." Diss., lmu, 2007. http://nbn-resolving.de/urn:nbn:de:bvb:19-79772.

Full text
19

Marsh, Jennifer Lucy. "Measurement error in longitudinal film badge data." Thesis, University of Newcastle Upon Tyne, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.394638.

Full text
20

Woodhouse, Geoffrey M. "Adjustment for measurement error in multilevel analysis." Thesis, University College London (University of London), 1998. http://discovery.ucl.ac.uk/10019113/.

Full text
Abstract:
Measurements in educational research are often subject to error. Where it is desired to base conclusions on underlying characteristics rather than on the raw measurements of them, it is necessary to adjust for measurement error in the modelling process. In this thesis it is shown how the classical model for measurement error may be extended to model the more complex structures of error variance and covariance that typically occur in multilevel models, particularly multivariate multilevel models, with continuous response. For these models parameter estimators are derived, with adjustment based on prior values of the measurement error variances and covariances among the response and explanatory variables. A straightforward method of specifying these prior values is presented. In simulations using data with known characteristics, the new procedure is shown to be effective in reducing the biases in parameter estimates that result from unadjusted estimation. Improved estimates of the standard errors are also demonstrated. In particular, random coefficients of variables with error are successfully estimated. The estimation procedure is then used in a two-level analysis of an educational data set. It is shown how estimates and conclusions can vary, depending on the degree of measurement error that is assumed to exist in explanatory variables at level 1 and level 2. The importance of obtaining satisfactory prior estimates of measurement error variances and covariances, and of correctly adjusting for them during analysis, is demonstrated.
21

Johnson, Nels Gordon. "Semiparametric Regression Methods with Covariate Measurement Error." Diss., Virginia Tech, 2012. http://hdl.handle.net/10919/49551.

Full text
Abstract:
In public health, biomedical, epidemiological, and other applications, data collected are often measured with error. When mismeasured data is used in a regression analysis, not accounting for the measurement error can lead to incorrect inference about the relationships between the covariates and the response. We investigate measurement error in the covariates of two types of regression models. For each we propose a fully Bayesian approach that treats the variable measured with error as a latent variable to be integrated over, and a semi-Bayesian approach that uses a first-order Laplace approximation to marginalize the variable measured with error out of the likelihood.

The first model is the matched case-control study for analyzing clustered binary outcomes. We develop low-rank thin plate splines for the case where a variable measured with error has an unknown, nonlinear relationship with the response. In addition to the semi- and fully Bayesian approaches, we propose another using expectation-maximization to detect both parametric and nonparametric relationships between the covariates and the binary outcome. We assess the performance of each method via simulation in terms of mean squared error and mean bias. We illustrate each method on a perturbed example of a 1-4 matched case-control study.

The second regression model is the generalized linear model (GLM) with unknown link function. Usually, the link function is chosen by the user based on the distribution of the response variable, often to be the canonical link. However, when covariates are measured with error, incorrect inference as a result of the error can be compounded by incorrect choice of link function. We assess performance via simulation of the semi- and fully Bayesian methods in terms of mean squared error. We illustrate each method on the Framingham Heart Study dataset.

The simulation results for both regression models support that the fully Bayesian approach is at least as good as the semi-Bayesian approach for adjusting for measurement error, particularly when the distribution of the variable measured with error and the distribution of the measurement error are misspecified.
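The first-order Laplace approximation used to marginalize the mismeasured covariate out of the likelihood can be sketched in a toy Gaussian case, where the approximation happens to be exact and can be checked against the closed-form marginal. This is an illustrative sketch, not the dissertation's model:

```python
import math

def normal_pdf(v, mu, sd):
    return math.exp(-0.5 * ((v - mu) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

def laplace_marginal(w, mu, s_u, s_x):
    """Approximate p(w) = integral of p(w|x) p(x) dx over the latent true
    covariate x by a Laplace (Gaussian) approximation at the integrand's mode."""
    prec = 1.0 / s_u**2 + 1.0 / s_x**2             # -h''(x) at the mode
    x_hat = (w / s_u**2 + mu / s_x**2) / prec      # mode of the integrand
    joint = normal_pdf(w, x_hat, s_u) * normal_pdf(x_hat, mu, s_x)
    return joint * math.sqrt(2.0 * math.pi / prec)

# with Gaussian likelihood and prior, the Laplace approximation is exact,
# so it can be compared to the closed-form marginal N(w; mu, s_u^2 + s_x^2)
exact = normal_pdf(2.0, 0.0, math.sqrt(0.5**2 + 1.0**2))
approx = laplace_marginal(2.0, 0.0, 0.5, 1.0)
```

For non-Gaussian likelihoods the approximation is no longer exact, which is why the abstract compares it against the fully Bayesian treatment.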

Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
22

Bautista, Rene. "An examination of sources of error in exit polls: Nonresponse and measurement error." Thesis, The University of Nebraska - Lincoln, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3715450.

Full text
Abstract:

This dissertation focuses on understudied aspects of nonresponse in a context where limited information is available from refusals. In particular, this study examines social and psychological predictors of nonresponse in fast-paced face-to-face surveys; namely, election day surveys —popularly known as exit polls. Exit polls present unique challenges to study nonresponse since the population being sampled is fleeting and several conditions are beyond the researcher’s control.

If sampled voters choose not to participate, there is no practical way of contacting them to collect information in a timely manner. Using a proof-of-concept approach, this study explores a unique dataset that links information on respondents, nonrespondents, and interviewer characteristics, as well as precinct-level information. Using this information, model-based plausible information is generated for nonrespondents (i.e., imputed data) to examine nonresponse dynamics. These data are then analyzed with multilevel regression methods. Nonresponse hypotheses are motivated by the literature on cognitive abilities, cognition and social behavior.

Results from multiply imputed data and multilevel regression analyses are consistent with hypothesized relationships, suggesting that this approach may offer a way of studying nonresponse where limited information exists. Additionally, this dissertation explores sources of measurement error in exit polls. It examines whether the mechanisms likely to produce refusals are the same mechanisms likely to introduce error once survey cooperation is established. A series of statistical interaction terms in OLS regressions, motivated by social interactions between interviewers and respondents, is used to explore hypothesized relationships. Overall, this research finds that cognitive mechanisms appear to account for voter nonresponse, whereas social desirability mechanisms seem to explain exit polling error.

APA, Harvard, Vancouver, ISO, and other styles
23

Wang, Zhuosong. "Error Mitigation in Roughness Measurements." Thesis, Virginia Tech, 2014. http://hdl.handle.net/10919/50146.

Full text
Abstract:
Road roughness is an important factor in determining the quality of a stretch of road. The International Roughness Index, a specific measure of road roughness, is a widely used metric. However, in order to measure roughness, an accurate road profile must exist. To measure the roads, terrain profiling systems are commonly used. Modern systems based on laser scanners and inertial navigation systems (INS) are able to measure thousands of data points per second over a wide path. However, because of the subsystems in the profiling systems, they are susceptible to errors that reduce the accuracy of the measurements. Thus, both major subsystems - the laser and the navigation system - must be accurate and synchronized for the road to be correctly scanned. The sensors' mounting was investigated to ensure that the vehicle motion is accurately captured and accounted for, as demonstrated on the Vehicle Terrain Performance Lab's (VTPL) Ford Explorer profilometer. Next, INS errors were addressed. These may include drift in the inertial measurement unit or errors due to poor reception of the global navigation satellite system signal. The solution to these errors was demonstrated on the VTPL's HMMWV profilometer.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
24

Holst, Daryl Allan. "Inquiring into measurement error in the science laboratory." Montana State University, 2011. http://etd.lib.montana.edu/etd/2011/holst/HolstD0811.pdf.

Full text
Abstract:
High school students often struggle with accurate data collection in the science laboratory. This study examined the effects of inquiry-based laboratory learning experiences on student ability to recognize the limited precision of measurements, ability to see error, manipulative ability in using laboratory instruments and commitment to accuracy. Results indicate increased student ability to see and correct error as well as improved understanding of error.
APA, Harvard, Vancouver, ISO, and other styles
25

Rummel, David. "Correction for covariate measurement error in nonparametric regression." Diss., [S.l.] : [s.n.], 2006. http://edoc.ub.uni-muenchen.de/archive/00006436.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Han, Hillary H. "Measurement-error bias correction in spawner-recruitment relationships." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ37541.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Coy, Joanne. "The quantification of sampling error in coordinate measurement." Thesis, University of Warwick, 1991. http://wrap.warwick.ac.uk/35678/.

Full text
Abstract:
This work was carried out between October 1986 and February 1989 at the School of Engineering, University of Warwick. The thesis begins with a review of the configurations of coordinate measuring machines in common use and an investigation into the types and magnitudes of the errors incurred due to various phenomena associated with the design, deformation or misalignment of the machine components. Some of the more significant of these errors are then measured and tabulated with a view to using them as a comparison to further work. Methods by which these errors can be rectified are then briefly reviewed. Chapter 2 is concerned with the inadequacies associated with current coordinate measuring machine software algorithm design. Measurement practices are reviewed and sources of inconsistency or potential misinterpretation are identified. Sampling error is singled out as being of particular significance. Chapter 3 reviews geometric element fitting procedures and the errors that can result from ill-advised measuring practice. Systematic and random error analyses of the errors incurred in the estimates of geometric parameters are reviewed and an original investigation is performed into the errors incurred in parameters due to not considering all possible data (sampling error). Chapter 4 presents an assessment of the nature of the problem of sampling error and outlines the way in which a robust algorithm for the formal quantification of these errors should be formulated. Chapter 5 then identifies the criteria that would maximise the implementability of an algorithm of this type. An algorithm satisfying these particular requirements is duly developed. Finally, chapter 6 consists of an investigation into the effect of probe geometry on the phenomenon of sampling errors. A method is then developed whereby the probe geometry that will minimise sampling error can be readily selected.
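As a rough illustration of sampling error in geometric element fitting (not the algorithm developed in the thesis), a least-squares circle fit recovers the radius well from dense sampling of a full circle, while a handful of probe points on a short arc make the fitted parameters far more sensitive to probe noise:

```python
import numpy as np

rng = np.random.default_rng(3)

def fit_circle(pts):
    """Kasa least-squares circle fit: solve x^2 + y^2 = a*x + b*y + c."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    a, b, c = np.linalg.lstsq(A, x**2 + y**2, rcond=None)[0]
    cx, cy = a / 2.0, b / 2.0
    return cx, cy, np.sqrt(c + cx**2 + cy**2)

def probe_arc(n, span, noise=0.01):
    """n probe points over an arc of `span` radians of a noisy unit circle."""
    th = np.linspace(0.0, span, n)
    pts = np.column_stack([np.cos(th), np.sin(th)])
    return pts + rng.normal(0.0, noise, (n, 2))

# dense sampling of the full circle: radius recovered accurately
_, _, r_full = fit_circle(probe_arc(200, 2 * np.pi))
# five points on a short arc: the same fit is far more sensitive to noise
_, _, r_arc = fit_circle(probe_arc(5, 0.3))
```

The difference between the two fits gives a feel for why the placement of sampled points, and not just their number, drives the parameter errors the thesis quantifies.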
APA, Harvard, Vancouver, ISO, and other styles
28

Saneii, Seyed Hassan. "Measurement error modelling for ordered covariates in epidemiology." Thesis, University of Liverpool, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.364345.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Liu, Lian. "Topics in measurement error and missing data problems." Thesis, [College Station, Tex.: Texas A&M University], 2007. http://hdl.handle.net/1969.1/ETD-TAMU-1627.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Xie, Xiangwen. "Covariate measurement error methods in failure time regression /." Thesis, Connect to this title online; UW restricted, 1997. http://hdl.handle.net/1773/9538.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Davies, Jonathan. "Sparse regression methods with measurement-error for magnetoencephalography." Thesis, University of Nottingham, 2017. http://eprints.nottingham.ac.uk/48062/.

Full text
Abstract:
Magnetoencephalography (MEG) is a neuroimaging method for mapping brain activity based on magnetic field recordings. The inverse problem associated with MEG is severely ill-posed and is complicated by the presence of high collinearity in the forward (leadfield) matrix. This means that accurate source localisation can be challenging. The most commonly used methods for solving the MEG problem do not employ sparsity to help reduce the dimensions of the problem. In this thesis we review a number of the sparse regression methods that are widely used in statistics, as well as some more recent methods, and assess their performance in the context of MEG data. Due to the complexity of the forward model in MEG, the presence of measurement-error in the leadfield matrix can create issues in the spatial resolution of the data. Therefore we investigate the impact of measurement-error on sparse regression methods as well as how we can correct for it. We adapt the conditional score and simulation extrapolation (SIMEX) methods for use with sparse regression methods and build on an existing corrected lasso method to cover the elastic net penalty. These methods are demonstrated using a number of simulations for different types of measurement-error and are also tested with real MEG data. The measurement-error methods perform well in simulations, including high dimensional examples, where they are able to correct for attenuation bias in the true covariates. However the extent of their correction is much more restricted in the more complex MEG data where covariates are highly correlated and there is uncertainty over the distribution of the error.
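SIMEX itself is straightforward to sketch in a scalar setting: refit at successively inflated error levels and extrapolate the coefficient back to the zero-error case. The toy version below uses ordinary least squares rather than the thesis's sparse (lasso/elastic-net) estimators, and all parameters are made up for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta, sig_u = 5000, 2.0, 0.8
x = rng.normal(0.0, 1.0, n)              # true covariate
w = x + rng.normal(0.0, sig_u, n)        # observed with classical error
y = beta * x + rng.normal(0.0, 0.5, n)

def ols_slope(w, y):
    return np.cov(w, y)[0, 1] / np.var(w)

naive = ols_slope(w, y)                  # attenuated towards zero

# SIMEX: add extra noise at levels zeta, refit, extrapolate back to zeta = -1
zetas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
slopes = [
    np.mean([ols_slope(w + rng.normal(0.0, np.sqrt(z) * sig_u, n), y)
             for _ in range(50)])
    for z in zetas
]
quad = np.polyfit(zetas, slopes, 2)      # quadratic extrapolant
simex = np.polyval(quad, -1.0)           # corrected slope estimate
```

The quadratic extrapolant only partially removes the attenuation here; better extrapolants and the adaptation to penalized estimators are exactly the kind of refinement the thesis investigates.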
APA, Harvard, Vancouver, ISO, and other styles
32

Hayley, S. "Cognitive error in the measurement of investment returns." Thesis, City University London, 2015. http://openaccess.city.ac.uk/13172/.

Full text
Abstract:
This thesis identifies and quantifies the impact of cognitive errors in certain aspects of investor decision-making. One error is that investors are unaware that the Internal Rate of Return (IRR) is a biased indicator of expected terminal wealth for any dynamic strategy where the amount invested is systematically related to the returns made to date. This error leads investors to use Value Averaging (VA). This thesis demonstrates that this is an inefficient strategy, since alternative strategies can generate identical outturns with lower initial capital. Investors also wrongly assume that the lower average purchase cost which is achieved by Dollar Cost Averaging (DCA) results in higher expected returns. DCA is a similarly inefficient strategy. Investors also adopt strategies such as Volatility Pumping, which appears to benefit from high asset volatility and large rebalancing trades. This thesis demonstrates that any increase in the expected geometric mean associated with rebalancing is likely to be due to reduced volatility drag, and that simpler strategies involving lower transactions costs are likely to be more profitable. Academic papers in highly-ranked journals similarly misinterpret the reduction in volatility drag achieved by rebalanced portfolios, mistakenly claiming that it results from the rebalancing trades “buying low and selling high”. The previously unidentified bias in the IRR has also affected an increasing number of academic studies, leading to misleadingly low estimates of the equity risk premium and exaggerated estimates of the losses resulting from bad investment timing. This thesis also derives a method for decomposing the differential between the GM return and the IRR into (i) the effects of this retrospective bias, and (ii) genuine effects of investor timing. Using this method I find that the low IRR on US equities is almost entirely due to this bias, and so should not lead us to revise down our estimates of the equity risk premium. 
This method has wider applications in fields where IRRs are used (e.g. mutual fund performance and project evaluation). In identifying these errors this thesis makes a contribution: (i) to the academic literature by correcting previous misleading results and improving research methods; (ii) to investment practitioners by identifying avoidable errors in investor decision-making. It also makes a contribution to the field of behavioural finance by altering the range of investor behaviour which should be seen as resulting from cognitive error rather than the pursuit of different objectives.
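The central point, that the IRR of a dynamic strategy need not equal the asset's geometric mean return, can be reproduced with a two-period toy example. The numbers here are invented for illustration and are not taken from the thesis:

```python
def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-10):
    """Internal rate of return via bisection on the net present value."""
    def npv(r):
        return sum(cf / (1.0 + r) ** t for t, cf in enumerate(cashflows))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if npv(mid) > 0.0:
            lo = mid          # NPV still positive: the rate can go higher
        else:
            hi = mid
    return 0.5 * (lo + hi)

# asset returns -50% then +100%: the geometric mean return is exactly 0%
gm = (0.5 * 2.0) ** 0.5 - 1.0

# an investor adds $1 at the start of each period; final wealth is
# 1*0.5*2 + 1*2 = 3, so the strategy's cash flows are -1, -1, +3
strategy_irr = irr([-1.0, -1.0, 3.0])
```

Because more capital happened to be invested during the +100% period, the strategy's IRR (about 30%) far exceeds the asset's 0% geometric mean, even though no skill was involved; this is the retrospective bias the thesis decomposes.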
APA, Harvard, Vancouver, ISO, and other styles
33

Irani, Ramin. "Error Detection for DMB Video Streams." Thesis, Blekinge Tekniska Högskola, Sektionen för ingenjörsvetenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-5086.

Full text
Abstract:
The purpose of this thesis is to detect errors in Digital Multimedia Broadcasting (DMB) transport stream. DMB uses the MPEG-4 standard for encapsulating Packetized Elementary Stream (PES), and uses the MPEG-2 standard for assembling them in the form of transport stream packets. Recently many research works have been carried out about video stream error detection. They mostly do this by focusing on some decoding parameters related to frame. Processing complexity can be a disadvantage for the proposed methods. In this thesis, we investigated syntax error occurrences due to corruption in the header of the video transport stream. The main focus of the study is the video streams that cannot be decoded. The proposed model is implemented by filtering video and audio packets in order to find the errors. The filters investigate some sources that can affect the video stream playback. The output from this method determines the type, location and duration of the errors. The simplicity of the structure is one of advantages of this model. It can be implemented by three simple filters for detecting errors and a “calculation unit” for calculating the duration of an error. Fast processing is another benefit of the proposed model.
APA, Harvard, Vancouver, ISO, and other styles
34

McMahan, Angela Renee. "Measurement Error in Designed Experiments for Second Order Models." Diss., Virginia Tech, 1997. http://hdl.handle.net/10919/30290.

Full text
Abstract:
Measurement error (ME) in the factor levels of designed experiments is often overlooked in the planning and analysis of experimental designs. A familiar model for this type of ME, called the Berkson error model, is discussed at length. Previous research has examined the effect of Berkson error on two-level factorial and fractional factorial designs. This dissertation extends the examination to designs for second order models. The results are used to suggest optimal values for axial points in Central Composite Designs. The proper analysis for experimental data including ME is outlined for first and second order models. A comparison of this analysis to a typical Ordinary Least Squares analysis is made for second order models. The comparison is used to quantify the difference in performance of the two methods, both of which yield unbiased coefficient estimates. Robustness to misspecification of the ME variance is also explored. A solution for experimental planning is also suggested. A design optimality criterion, called the DME criterion, is used to create a second-stage design when ME is present. The performance of the criterion is compared to a D-optimal design augmentation. A final comparison is made between methods accounting for ME and methods ignoring ME.
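A distinguishing feature of the Berkson model is that the error attaches to the *set* factor level rather than to an observation of it, so in a first-order linear model OLS on the set levels remains unbiased. A small simulation (illustrative only, not the dissertation's second-order analysis) shows this:

```python
import numpy as np

rng = np.random.default_rng(1)
n, b0, b1, sig_u = 20000, 1.0, 3.0, 0.5
t = rng.uniform(-1.0, 1.0, n)            # factor levels the experimenter sets
x = t + rng.normal(0.0, sig_u, n)        # true level deviates from the setting
y = b0 + b1 * x + rng.normal(0.0, 0.3, n)

# OLS of the response on the *set* levels t: the slope stays unbiased for b1,
# because the Berkson error u = x - t is independent of t
slope = np.cov(t, y)[0, 1] / np.var(t)
intercept = y.mean() - slope * t.mean()
```

For second-order models the picture changes, since squared terms like x**2 = t**2 + 2*t*u + u**2 are no longer unbiased in the set levels, which is what motivates the dissertation's analysis.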
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
35

Cirineo, Tony, and Bob Troublefield. "STANDARD INTEROPERABLE DATALINK SYSTEM, ENGINEERING DEVELOPMENT MODEL." International Foundation for Telemetering, 1995. http://hdl.handle.net/10150/608398.

Full text
Abstract:
International Telemetering Conference Proceedings / October 30-November 02, 1995 / Riviera Hotel, Las Vegas, Nevada
This paper describes an Engineering Development Model (EDM) for the Standard Interoperable Datalink System (SIDS). This EDM represents an attempt to design and build a programmable system that can be used to test and evaluate various aspects of a modern digital datalink. First, an investigation was made of commercial wireless components and standards that could be used to construct the SIDS datalink. This investigation led to the construction of an engineering development model. This model presently consists of wire wrap and prototype circuits that implement many aspects of a modern digital datalink.
APA, Harvard, Vancouver, ISO, and other styles
36

Perera, Walgampolage Ranjith Indrasiri. "Detection of the interaction term in measurement-error-models." Thesis, University of East London, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.369770.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Ng, YongKad. "An essay on unit root tests and measurement error." [Lincoln, Neb. : University of Nebraska-Lincoln], 2004. http://www.unl.edu/libr/Dissertations/2004/NgDis.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Kim, Hyo Soo. "Periodic error in heterodyne interferometry measurement, uncertainty, and elimination /." [Gainesville, Fla.] : University of Florida, 2009. http://purl.fcla.edu/fcla/etd/UFE0041110.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Liu, Long. "Essays on measurement error, nonstationary panels and nonparametrics econometrics." Related electronic resource: Current Research at SU : database of SU dissertations, recent titles available full text, 2008. http://wwwlib.umi.com/cr/syr/main.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

COSTA, PAULO WERNECK DE ANDRADE. "ADAPTIVE CONTROL OF A MACROECONOMETRIC MODEL WITH MEASUREMENT ERROR." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 1991. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=9400@1.

Full text
Abstract:
CONSELHO NACIONAL DE DESENVOLVIMENTO CIENTÍFICO E TECNOLÓGICO
Economic planning, approached as a control problem, aims to establish optimal (or sub-optimal) trajectories for the variables that are subject to Government control. This means that the policy (control) variables are no longer arbitrarily determined by the planners, but instead result from an optimization process aimed at meeting previously established goals. In this work we apply a certainty-equivalence adaptive controller to a macroeconometric model of the Brazilian economy, considering measurement error in the state variables. The adoption of an adaptive controller is justified by the criticisms (chiefly the Lucas critique) levelled at stationary macroeconometric models. One suitable way of handling the non-stationarity of such models is through an adaptive controller whose objective is to control and identify the model in question simultaneously. We present a short survey of applications of optimal control and adaptive control to economic problems, highlighting the application of both techniques to macroeconometric models with rational expectations. Through simulations we compare the policy actually implemented by the federal government with the optimal policy obtained via non-adaptive optimal control.
Economic planning, when considered as a control problem, has as its objective establishing optimal (or sub-optimal) trajectories for the variables subject to Government control. This means that the policy variables (controls), instead of being arbitrarily determined by the policymakers, will be the result of an optimization process, with the objective of reaching pre-established goals. In this work a certainty-equivalence adaptive controller is applied to a macroeconometric model of the Brazilian economy with measurement error. Since the employment of time-invariant models has been widely criticized (the Lucas critique), the model used here is time-varying. An adequate way to treat such a case is through an adaptive control scheme, in which control and identification of the model are performed simultaneously. By means of simulations the policy obtained with the adaptive controller is compared to the policy adopted by the Brazilian Government.
APA, Harvard, Vancouver, ISO, and other styles
41

Kon, Henry B. "Data quality management : foundations in error measurement and propagation." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/9838.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Haanpää, T. (Tuomas). "Fuzz testing coverage measurement based on error log analysis." Master's thesis, University of Oulu, 2016. http://urn.fi/URN:NBN:fi:oulu-201605051644.

Full text
Abstract:
Fuzz testing is a black box testing method in which a SUT is subjected to anomalous inputs in order to uncover faults such as crashes or other incorrect behaviour. A defining attribute of any black box testing is the assumption that the inner workings and the source code of the SUT are unknown. This lack of information adds an element of difficulty to the task of estimating test coverage. During testing a SUT typically produces a log of error conditions and other events which were triggered by the testing process. This log data is available even when the source code is not. The purpose of this study was to investigate whether a meaningful metric of test coverage could be extracted from the log data. The goal was to discover a metric similar to code coverage, but applicable to black box testing. A hypothesis was presented that a large variety of observed events translates to high code coverage as well. To extract this metric, a rudimentary pattern recognition algorithm was devised in order to automatically classify the events encountered during a test run. Measurements were performed on three open source SUTs representing three widely used communication protocols. Log analysis results were compared to code coverage measurements in order to study any possible correlation between them. The results were positive, as the study showed clear correlation between the code coverage metric and the log analysis results for two of the three case studies. Further study is required to establish whether the studied log analysis method is generally applicable.
Fuzz testing is a black-box testing method in which faults are uncovered in a system under test by exposing it to malformed input. Possible software faults manifest as crashes or other incorrect behaviour. Characteristic of black-box testing is that the internal workings and source code of the target are unknown, which makes estimating test coverage problematic. During testing the target typically produces a log file containing the error conditions observed in the target. This log data is available even when the source code is not. The purpose of this thesis is to determine whether a measurement technique for estimating test coverage from log data can be developed. Such a metric would resemble code-coverage measurement, but could also be applied in black-box testing. The hypothesis presented in this thesis was that if a large number of distinct error conditions is observed in the log, code coverage would also be high. To perform the measurements, a rudimentary pattern recognition algorithm was developed to classify the log data collected during testing. The measurements were repeated on three open-source test targets representing widely used communication protocols. The log analysis results were compared with code coverage measurement results in order to detect any possible correlation between them. The results of the study were positive, as clear correlation between the code coverage measurements and the log analysis results was observed in two of the three case studies. The general applicability of the method, however, requires further study.
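A minimal version of the kind of log-event classification described above might mask variable fields (numbers, addresses) so that log lines of the same kind collapse to one signature, and count distinct signatures as the coverage metric. This is a hypothetical sketch, not the thesis's algorithm:

```python
import re

def event_signature(line):
    """Collapse variable fields so log lines of the same kind match."""
    line = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", line)
    line = re.sub(r"\d+", "<NUM>", line)
    return line.strip()

def coverage_metric(log_lines):
    """Variety of distinct event types observed during a fuzz run."""
    return len({event_signature(line) for line in log_lines})
```

For example, "err 42 at 0x1f" and "err 7 at 0x2b" share the signature "err <NUM> at <HEX>" and so count as one event type.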
APA, Harvard, Vancouver, ISO, and other styles
43

Miles, Caleb Hilliard. "Semiparametric Methods for Causal Mediation Analysis and Measurement Error." Thesis, Harvard University, 2015. http://nrs.harvard.edu/urn-3:HUL.InstRepos:23845420.

Full text
Abstract:
Chapter 1: Since the early 2000s, evidence has accumulated for a significant differential effect of first-line antiretroviral therapy (ART) regimens on human immunodeficiency virus (HIV) treatment outcomes, such as CD4 response and viral load suppression. This finding was replicated in our data from the Harvard President's Emergency Plan for AIDS Relief (PEPFAR) program in Nigeria. Investigators were interested in finding the source of these differences, i.e., understanding the mechanisms through which one regimen outperforms another, particularly via adherence. This amounts to a mediation question with adherence playing the role of mediator. Existing mediation analysis results, however, have relied on an assumption of no exposure-induced confounding of the intermediate variable, and generally require an assumption of no unmeasured confounding for nonparametric identification. Both assumptions are violated by the presence of drug toxicity. In this paper, we relax these assumptions and show that certain path-specific effects remain identified under weaker conditions. We focus on the path-specific effect solely mediated by adherence and not by toxicity and propose a suite of estimators for this effect, including a semiparametric-efficient, multiply-robust estimator. We illustrate with simulations and present results from a study applying the methodology to the Harvard PEPFAR data. Chapter 2: In causal mediation analysis, nonparametric identification of the pure (natural) direct effect typically relies on fundamental assumptions of (i) so-called ``cross-world-counterfactuals" independence and (ii) no exposure-induced confounding. When the mediator is binary, bounds for partial identification have been given when neither assumption is made, or alternatively when assuming only (ii). We extend these bounds to the case of a polytomous mediator, and provide bounds for the case assuming only (i). 
We apply these bounds to data from the Harvard PEPFAR program in Nigeria, where we evaluate the extent to which the effects of antiretroviral therapy on virological failure are mediated by a patient's adherence, and show that inference on this effect is somewhat sensitive to model assumptions. Chapter 3: When assessing the presence of an exposure causal effect on a given outcome, it is well known that classical measurement error of the exposure can seriously reduce the power of a test of the null hypothesis in question, although its type I error rate will generally remain controlled at the nominal level. In contrast, classical measurement error of a confounder can have disastrous consequences on the type I error rate of a test of treatment effect. In this paper, we develop a large class of semiparametric test statistics of an exposure causal effect, which are completely robust to classical measurement error of a subset of confounders. A unique and appealing feature of our proposed methods is that they require no external information such as validation data or replicates of error-prone confounders. The approach relies on the observation that under the sharp null hypothesis of no exposure causal effect, the standard assumption of no unmeasured confounding implies that the outcome is in fact a valid instrumental variable for the association between the error-prone confounder and the exposure. We present a doubly-robust form of this test that requires only one of two models -- an outcome-regression and a propensity-score model -- to be correctly specified for the resulting test statistic to have correct type I error rate. Validity and power within our class of test statistics is demonstrated via multiple simulation studies. 
We apply the methods to a multi-U.S.-city, time-series data set to test for an effect of temperature on mortality while adjusting for atmospheric particulate matter with diameter of 2.5 micrometres or less (PM2.5), which is well known to be measured with error.
Biostatistics
APA, Harvard, Vancouver, ISO, and other styles
44

Natu, Ambarish Shrikrishna Electrical Engineering & Telecommunications Faculty of Engineering UNSW. "Error resilience in JPEG2000." Awarded by: University of New South Wales. Electrical Engineering and Telecommunications, 2003. http://handle.unsw.edu.au/1959.4/18835.

Full text
Abstract:
The rapid growth of wireless communication and widespread access to information has resulted in a strong demand for robust transmission of compressed images over wireless channels. The challenge of robust transmission is to protect the compressed image data against loss, in such a way as to maximize the received image quality. This thesis addresses this problem and provides an investigation of a forward error correction (FEC) technique that has been evaluated in the context of the emerging JPEG2000 standard. Not much effort has been made in the JPEG2000 project regarding error resilience. The only techniques standardized are based on insertion of marker codes in the code-stream, which may be used to restore high-level synchronization between the decoder and the code-stream. This helps to localize error and prevent it from propagating through the entire code-stream. Once synchronization is achieved, additional tools aim to exploit as much of the remaining data as possible. Although these techniques help, they cannot recover lost data. FEC adds redundancy into the bit-stream, in exchange for increased robustness to errors. We investigate unequal protection schemes for JPEG2000 by applying different levels of protection to different quality layers in the code-stream. More particularly, the results reported in this thesis provide guidance concerning the selection of JPEG2000 coding parameters and appropriate combinations of Reed-Solomon (RS) codes for typical wireless bit error rates. We find that unequal protection schemes together with the use of resynchronization markers and some additional tools can significantly improve the image quality in deteriorating channel conditions. The proposed channel coding scheme is easily incorporated into the existing JPEG2000 code-stream structure and experimental results clearly demonstrate the viability of our approach.
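Choosing a Reed-Solomon code for a given channel can be sketched as picking the largest k such that the RS(255, k) block-failure probability stays below a target; stronger codes (smaller k, more redundancy) would then be assigned to the more important quality layers. This is an illustrative calculation under a simple independent-error channel model, not the parameter selection procedure from the thesis:

```python
from math import comb

def symbol_error_prob(ber, bits_per_symbol=8):
    """Probability that an 8-bit RS symbol is corrupted at bit error rate ber."""
    return 1.0 - (1.0 - ber) ** bits_per_symbol

def block_failure_prob(n, k, ps):
    """RS(n, k) corrects t = (n - k) // 2 symbol errors; failure means more."""
    t = (n - k) // 2
    return sum(comb(n, i) * ps**i * (1.0 - ps) ** (n - i)
               for i in range(t + 1, n + 1))

def pick_rs_k(ber, target=1e-6, n=255):
    """Largest k (least redundancy) keeping block failure below the target."""
    ps = symbol_error_prob(ber)
    for k in range(n - 2, 0, -2):
        if block_failure_prob(n, k, ps) < target:
            return k
    return None
```

As the bit error rate worsens, the feasible k shrinks, i.e. more redundancy is needed, which is the trade-off unequal protection exploits across quality layers.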
APA, Harvard, Vancouver, ISO, and other styles
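The unequal-protection idea described in the abstract above can be sketched generically: stronger Reed-Solomon codes guard the most important quality layers, weaker codes the refinement layers. The layer-to-code assignment and bit error rate below are illustrative assumptions, not the thesis's actual parameters.

```python
from math import comb

def symbol_error_rate(ber: float, bits_per_symbol: int = 8) -> float:
    # A symbol is in error if any one of its bits is flipped.
    return 1.0 - (1.0 - ber) ** bits_per_symbol

def rs_failure_prob(n: int, k: int, ber: float) -> float:
    # An RS(n, k) code corrects up to t = (n - k) // 2 symbol errors;
    # the codeword is lost when more than t symbols are corrupted.
    t = (n - k) // 2
    ps = symbol_error_rate(ber)
    return sum(comb(n, i) * ps**i * (1.0 - ps) ** (n - i)
               for i in range(t + 1, n + 1))

# Hypothetical unequal protection: the base quality layer gets the
# strongest (lowest-rate) code, later refinement layers get less.
layers = {"layer0": (255, 191), "layer1": (255, 223), "layer2": (255, 251)}
ber = 1e-3  # an illustrative wireless bit error rate
for name, (n, k) in layers.items():
    print(f"{name}: RS({n},{k}) overhead={(n - k) / n:.1%} "
          f"P(codeword loss)={rs_failure_prob(n, k, ber):.2e}")
```

Running this shows the trade-off the abstract refers to: the base layer pays about 25% redundancy for a codeword-loss probability orders of magnitude below that of the lightly protected refinement layer.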
45

Kamaras, Konstantinos. "JPEG2000 image compression and error resilience for transmission over wireless channels." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2002. http://sirsi.nps.navy.mil/uhtbin/hyperion-image/02Mar%5FKamaras.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Ekholm, Harry H. (Harry Hilding). "An Analysis of the Utilization of the Work Measurement System Requirements of Military Standard 1567a as Compared to Standard Operating Procedures." Thesis, University of North Texas, 1991. https://digital.library.unt.edu/ark:/67531/metadc332728/.

Full text
Abstract:
Work measurement systems are widely used by American manufacturers in establishing labor standards as inputs for budgeting, estimating, planning, evaluating, and managing production. These systems are used by defense-related and non-defense-related contractors alike. However, in 1987, their use was required in major United States Air Force procurement contracts through Military Standard 1567A.
APA, Harvard, Vancouver, ISO, and other styles
47

Grossman, Hy. "A New Standard for Temperature Measurement in an Aviation Environment." International Foundation for Telemetering, 2010. http://hdl.handle.net/10150/604311.

Full text
Abstract:
ITC/USA 2010 Conference Proceedings / The Forty-Sixth Annual International Telemetering Conference and Technical Exhibition / October 25-28, 2010 / Town and Country Resort & Convention Center, San Diego, California
Accurate temperature measurement is an essential requirement in modern aircraft data acquisition systems. Both thermocouples and platinum resistance temperature detectors (RTDs) are used for this purpose, with the latter being both more accurate and more repeatable. To ensure that only the sensor limits the accuracy of a temperature measurement, end-to-end system accuracy forward of the sensor should be significantly greater than that of the sensor itself. This paper describes a new digital signal processing (DSP) based system for providing precision RTD-based temperature measurements with laboratory accuracy in an aviation environment. Advantages of the new system include true 3-wire RTD measurement, a linear temperature output, on-board ultra-precision resistance standards, and transparent dynamic calibration.
APA, Harvard, Vancouver, ISO, and other styles
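As background to the abstract above, here is a minimal sketch of converting an RTD resistance reading to temperature via the standard IEC 60751 (Callendar-Van Dusen) polynomial, together with one common form of 3-wire lead compensation. The hookup arithmetic is a generic assumption, not taken from the paper itself.

```python
# IEC 60751 coefficients for a standard platinum RTD, valid for T >= 0 °C.
A = 3.9083e-3
B = -5.775e-7

def rtd_temperature(r: float, r0: float = 100.0) -> float:
    # Invert the Callendar-Van Dusen equation R = R0*(1 + A*T + B*T^2)
    # for T >= 0 °C using the quadratic formula.
    return (-A + (A * A - 4.0 * B * (1.0 - r / r0)) ** 0.5) / (2.0 * B)

def three_wire_resistance(v_sense: float, v_lead: float, i_exc: float) -> float:
    # One common 3-wire scheme: v_sense spans the RTD plus both current-
    # carrying leads, and the third wire lets v_lead (one lead's drop) be
    # measured separately. Assuming matched leads, subtracting it twice
    # removes the lead-resistance error before dividing by the excitation
    # current.
    return (v_sense - 2.0 * v_lead) / i_exc

# Pt100 at 100 °C reads 138.5055 ohms per the IEC 60751 tables.
print(rtd_temperature(138.5055))  # ~100.0
```

The forward polynomial is quadratic in T, so the inversion is exact over the positive-temperature branch; below 0 °C the standard adds a cubic term and a numerical solve is typically used instead.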
48

Hong, Cefu. "Error Calibration on Five-axis Machine Tools by Relative Displacement Measurement between Spindle and Work Table." 京都大学 (Kyoto University), 2012. http://hdl.handle.net/2433/157572.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Hu, Chia-Ming, and 胡家銘. "Error Measurement of Time Synchronization for a Traceable Standard Time Source." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/31500301266971538437.

Full text
Abstract:
Master's thesis
National Taiwan Ocean University
Department of Computer Science and Engineering
Academic year 96 (2007)
In this thesis, we present a hierarchical and traceable standard time source system which distributes the national standard time to arbitrary application servers. The system consists of three layers: the first layer is the NMI server, which maintains a time source synchronized with the national standard time; the second layer is the TC server, which synchronizes its time source with the NMI server through a modem/telephone link; and the third layer is the application server, which requests standard time from a TC server. Traceability of this system is established through periodic time synchronizations, the integrity of the records of time synchronization, error measurements of time synchronization, the capability of maintaining a stable frequency source, and fair auditing processes. The main contributions of this thesis are twofold: first, a two-way time-transfer technique over modem/telephone links is used to reduce time-synchronization errors; second, experiments measure time-synchronization errors to establish the traceability of the distributed time source.
APA, Harvard, Vancouver, ISO, and other styles
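The two-way time-transfer technique mentioned in the abstract above is conventionally computed from four timestamps, as in the NTP on-wire protocol. This is a generic sketch of that arithmetic, not the thesis's implementation.

```python
def two_way_offset(t1: float, t2: float, t3: float, t4: float):
    # Client sends at t1 (client clock); server receives at t2 and
    # replies at t3 (server clock); client receives at t4 (client clock).
    # Assuming a symmetric path, the one-way delay cancels out of the
    # offset estimate, which is the key benefit of two-way transfer.
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    round_trip = (t4 - t1) - (t3 - t2)
    return offset, round_trip

# Server clock 0.5 s ahead, 0.2 s one-way delay each direction:
# offset ≈ 0.5 s, round trip ≈ 0.4 s.
offset, rtt = two_way_offset(0.0, 0.7, 0.75, 0.45)
print(offset, rtt)
```

Path asymmetry (e.g. unequal modem send and receive latencies) does not cancel and appears directly as an offset error of half the asymmetry, which is why the abstract pairs the technique with explicit error measurements.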
50

Poncy, Brian. "An investigation of the dependability and standard error of measurement of words read correctly per minute using curriculum-based measurement." 2006. http://etd.utk.edu/2006/PoncyBrian.pdf.

Full text
Abstract:
Thesis (Ph. D.) -- University of Tennessee, Knoxville, 2006.
Title from title page screen (viewed on February 2, 2007). Thesis advisor: Christopher H. Skinner. Vita. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
