
Dissertations / Theses on the topic 'Likelihood ratio test (LRT)'



Consult the top 50 dissertations / theses for your research on the topic 'Likelihood ratio test (LRT).'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Stoorhöök, Li, and Sara Artursson. "Hur påverkar avrundningar tillförlitligheten hos parameterskattningar i en linjär blandad modell?" Thesis, Uppsala universitet, Statistiska institutionen, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-279039.

Full text
Abstract:
Previous studies show that blood pressure in pregnant women decreases during the second trimester and then increases at a later stage of pregnancy. High blood pressure in pregnant women can entail health risks, which makes blood pressure measurements relevant. However, uncertainty arises because different people within the healthcare system handle the blood pressure measurements in different ways. Some of the healthcare staff round off measured values and others do not, which can make it difficult to interpret the development of blood pressure. In this thesis, a dataset containing blood pressure values of pregnant women is analysed by estimating nine different linear regression models with mixed effects. A simulation study is then carried out with the aim of investigating how measurement problems caused by rounding affect parameter estimates and model selection in a linear mixed model. The conclusion is that the rounding of blood pressure values does not affect the type I error but does affect the power. However, this does not pose a problem for further analysis of the blood pressure values in the real dataset.
APA, Harvard, Vancouver, ISO, and other styles
2

Barton, William H. "COMPARISON OF TWO SAMPLES BY A NONPARAMETRIC LIKELIHOOD-RATIO TEST." UKnowledge, 2010. http://uknowledge.uky.edu/gradschool_diss/99.

Full text
Abstract:
In this dissertation we present a novel computational method, as well as its software implementation, to compare two samples by a nonparametric likelihood-ratio test. The basis of the comparison is a mean-type hypothesis. The software is written in the R-language [4]. The two samples are assumed to be independent. Their distributions, which are assumed to be unknown, may be discrete or continuous. The samples may be uncensored, right-censored, left-censored, or doubly-censored. Two software programs are offered. The first program covers the case of a single mean-type hypothesis. The second program covers the case of multiple mean-type hypotheses. For the first program, an approximate p-value for the single hypothesis is calculated, based on the premise that -2 log-likelihood-ratio is asymptotically distributed as χ²(1). For the second program, an approximate p-value for the p hypotheses is calculated, based on the premise that -2 log-likelihood-ratio is asymptotically distributed as χ²(p). In addition we present a proof relating to use of a hazard-type hypothesis as the basis of comparison. We show that -2 log-likelihood-ratio is asymptotically distributed as χ²(1) for this hypothesis. The R programs we have developed can be downloaded free of charge on the internet at the Comprehensive R Archive Network (CRAN) at http://cran.r-project.org, package name emplik2. The R-language itself is also available free of charge at the same site.
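As a generic illustration of the chi-square calibration this abstract relies on, and not of the emplik2 package itself, the following Python sketch turns an observed -2 log-likelihood-ratio into an approximate p-value; the statistic values are invented for the example.

```python
import numpy as np
from scipy.stats import chi2

def lrt_pvalue(neg2_log_lr: float, df: int = 1) -> float:
    """Approximate p-value for a likelihood-ratio statistic using the
    asymptotic chi-square calibration described in the abstract."""
    return chi2.sf(neg2_log_lr, df)

# Example: a single mean-type hypothesis (df = 1)
print(lrt_pvalue(4.2, df=1))   # ~ 0.040
# Example: p = 3 joint mean-type hypotheses (df = 3)
print(lrt_pvalue(8.1, df=3))   # ~ 0.044
```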
APA, Harvard, Vancouver, ISO, and other styles
3

Dai, Xiaogang. "Score Test and Likelihood Ratio Test for Zero-Inflated Binomial Distribution and Geometric Distribution." TopSCHOLAR®, 2018. https://digitalcommons.wku.edu/theses/2447.

Full text
Abstract:
The main purpose of this thesis is to compare the performance of the score test and the likelihood ratio test by computing type I and type II errors when the tests are applied to the geometric distribution and the zero-inflated binomial distribution. We first derive the test statistics of the score test and the likelihood ratio test for both distributions. We then use the software package R to perform a simulation study of the behavior of the two tests: we write R code to calculate the two types of error for each distribution and generate many samples to approximate the probabilities of type I and type II errors for different parameter values. In the first chapter, we discuss the motivation behind the work presented in this thesis and introduce the definitions used throughout the paper. In the second chapter, we derive the test statistics for the likelihood ratio test and the score test for the geometric distribution. For the score test, we consider versions using both the observed information matrix and the expected information matrix, and obtain the score test statistics zO and zI. Chapter 3 discusses the likelihood ratio test and the score test for the zero-inflated binomial distribution. The main parameter of interest is w, so p is a nuisance parameter in this case. We derive the likelihood ratio test statistics and the score test statistics to test w. In both tests, the nuisance parameter p is estimated by its maximum likelihood estimator p̂. We again consider the score test using both the observed and the expected information matrices. Chapter 4 focuses on the score test for the zero-inflated binomial distribution. We generate data following the zero-inflated binomial distribution using R and plot the ratio of the two score test statistics, zI/zO, for the sample data as a function of n0, the number of zero values in the sample. In Chapter 5, we discuss and compare the score test based on the two types of information matrices. We perform a simulation study to estimate the two types of error when applying the tests to the geometric distribution and the zero-inflated binomial distribution, and we plot the percentages of the two errors for different fixed parameters, such as the probability p and the number of trials m. Finally, we briefly summarize the results in Chapter 6.
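A minimal Python sketch of the kind of simulation described here, assuming the simple case H0: p = p0 for a geometric model and a chi-square(1) critical value; the sample size, p0 and number of replications are illustrative, and the thesis itself works in R.

```python
import numpy as np
from scipy.stats import geom, chi2

rng = np.random.default_rng(0)

def lrt_stat(x, p0):
    """-2 log LR for H0: p = p0 in a geometric(p) model (support 1, 2, ...)."""
    p_hat = 1.0 / np.mean(x)                      # unrestricted MLE
    ll_hat = geom.logpmf(x, p_hat).sum()
    ll_0 = geom.logpmf(x, p0).sum()
    return 2.0 * (ll_hat - ll_0)

def type_I_error(p0=0.3, n=50, n_sim=5000, alpha=0.05):
    crit = chi2.ppf(1 - alpha, df=1)
    rejections = 0
    for _ in range(n_sim):
        x = rng.geometric(p0, size=n)             # data generated under H0
        if lrt_stat(x, p0) > crit:
            rejections += 1
    return rejections / n_sim

print(type_I_error())   # should be close to the nominal 0.05
```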
APA, Harvard, Vancouver, ISO, and other styles
4

Liang, Yi. "Likelihood ratio test for the presence of cured individuals : a simulation study /." Internet access available to MUN users only, 2002. http://collections.mun.ca/u?/theses,157472.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Emberson, E. A. "The asymptotic distribution and robustness of the likelihood ratio and score test statistics." Thesis, University of St Andrews, 1995. http://hdl.handle.net/10023/13738.

Full text
Abstract:
Cordeiro & Ferrari (1991) use the asymptotic expansion of Harris (1985) for the moment generating function of the score statistic to produce a generalization of Bartlett adjustment for application to the score statistic. It is shown here that Harris's expansion is not invariant under reparameterization and an invariant expansion is derived using a method based on the expected likelihood yoke. A necessary and sufficient condition for the existence of a generalized Bartlett adjustment for an arbitrary statistic is given in terms of its moment generating function. Generalized Bartlett adjustments to the likelihood ratio and score test statistics are derived in the case where the interest parameter is one-dimensional under the assumption of a mis-specified model, where the true distribution is not assumed to be that under the null hypothesis.
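The following Python sketch illustrates a Bartlett-type adjustment in its simplest form, estimating the adjustment factor by simulation for an exponential-rate LRT rather than by the analytic expansions studied in the thesis; all model choices and numbers are illustrative.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)

def lr_stat_exponential(x, lam0):
    """Likelihood-ratio statistic for H0: rate = lam0 with exponential data."""
    lam_hat = 1.0 / np.mean(x)
    n = len(x)
    loglik = lambda lam: n * np.log(lam) - lam * x.sum()
    return 2.0 * (loglik(lam_hat) - loglik(lam0))

# Bartlett-type adjustment: rescale W so that its simulated null mean equals its df.
n, lam0, df = 10, 1.0, 1
w_null = np.array([lr_stat_exponential(rng.exponential(1 / lam0, n), lam0)
                   for _ in range(20000)])
c = w_null.mean() / df          # Bartlett factor estimated by simulation
w_obs = lr_stat_exponential(rng.exponential(1 / lam0, n), lam0)
print("raw p:",      chi2.sf(w_obs, df))
print("adjusted p:", chi2.sf(w_obs / c, df))
```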
APA, Harvard, Vancouver, ISO, and other styles
6

Yu, Yuan. "Tests of Independence in a Single 2x2 Contingency Table with Random Margins." Digital WPI, 2014. https://digitalcommons.wpi.edu/etd-theses/625.

Full text
Abstract:
In the analysis of contingency tables, Fisher's exact test is an important significance test that is commonly used to test independence between two variables. However, Fisher's exact test is based on the assumption of fixed margins; that is, it uses information beyond the table, so it is conservative. To address this problem, we allow the margins to be random. This means that instead of fitting the count data to the hypergeometric distribution, as in Fisher's exact test, we model the margins and one cell using a multinomial distribution, and then use the likelihood ratio to test the hypothesis of independence. Furthermore, using Bayesian inference, we consider the Bayes factor as another test statistic. In order to judge test performance, we compare the power of the likelihood ratio test, the Bayes factor test and Fisher's exact test. In addition, we use our methodology to analyse data gathered from the Worcester Heart Attack Study to assess gender differences in the therapeutic management of patients with acute myocardial infarction (AMI) by selected demographic and clinical characteristics.
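For contrast with the random-margins model of the thesis, the sketch below applies the two standard tests to a small invented 2x2 table: Fisher's exact test (fixed margins) and the ordinary likelihood-ratio (G²) test of independence via SciPy; it does not implement the multinomial random-margins likelihood or the Bayes factor.

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

table = np.array([[12,  5],
                  [ 7, 16]])

# Fisher's exact test (conditions on both margins)
_, p_fisher = fisher_exact(table)

# Likelihood-ratio (G^2) test of independence; margins are not held fixed
g2, p_lrt, dof, _ = chi2_contingency(table, lambda_="log-likelihood",
                                     correction=False)

print(f"Fisher exact p = {p_fisher:.4f}")
print(f"G^2 = {g2:.3f}, df = {dof}, LRT p = {p_lrt:.4f}")
```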
APA, Harvard, Vancouver, ISO, and other styles
7

Shen, Paul. "Empirical Likelihood Tests For Constant Variance In The Two-Sample Problem." Bowling Green State University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1544187568883762.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Ngunkeng, Grace. "Statistical Analysis of Skew Normal Distribution and its Applications." Bowling Green State University / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1370958073.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Yumin, Xiao. "Robustness of the Likelihood Ratio Test for Periodicity in Short Time Series and Application to Gene Expression Data." Thesis, Uppsala universitet, Statistiska institutionen, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-175807.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Lynch, O'Neil. "Mixture distributions with application to microarray data analysis." Scholar Commons, 2009. http://scholarcommons.usf.edu/etd/2075.

Full text
Abstract:
The main goal in analyzing microarray data is to determine the genes that are differentially expressed across two types of tissue samples or samples obtained under two experimental conditions. In this dissertation we propose two methods to determine differentially expressed genes. For the penalized normal mixture model (PMMM) to determine genes that are differentially expressed, we penalized both the variance and the mixing proportion parameters simultaneously. The variance parameter was penalized so that the log-likelihood is bounded, while the mixing proportion parameter was penalized so that its estimates do not lie on the boundary of the parameter space. The null distribution of the likelihood ratio test statistic (LRTS) was simulated so that we could perform a hypothesis test for the number of components of the penalized normal mixture model. In addition to simulating the null distribution of the LRTS for the penalized normal mixture model, we showed that the maximum likelihood estimates are asymptotically normal, which is a necessary first step toward proving the asymptotic null distribution of the LRTS. This result is a significant contribution to the field of normal mixture models. The modified p-value approach for detecting differentially expressed genes is also discussed in this dissertation. The modified p-value approach was implemented so that a hypothesis test for the number of components can be conducted using the modified likelihood ratio test. In the modified p-value approach we penalized the mixing proportion so that its estimates do not lie on the boundary of the parameter space. The null distribution of the LRTS was simulated so that the number of components of the uniform-beta mixture model can be determined. Finally, both modified methods, the penalized normal mixture model and the modified p-value approach, were applied to simulated and real data.
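A rough sketch of the simulated-null idea described above, using an ordinary (unpenalized) Gaussian mixture from scikit-learn rather than the penalized model of the dissertation; the data, sample size and number of bootstrap replicates are invented.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)

def lrts(x):
    """2 * (loglik of 2-component normal mixture - loglik of 1 component)."""
    X = x.reshape(-1, 1)
    ll1 = GaussianMixture(1).fit(X).score(X) * len(x)
    ll2 = GaussianMixture(2, n_init=5).fit(X).score(X) * len(x)
    return 2.0 * (ll2 - ll1)

x = rng.normal(0.0, 1.0, size=200)          # pretend these are expression values
observed = lrts(x)

# Parametric bootstrap under the one-component null: the asymptotic
# distribution is non-standard, so the null is simulated, as in the abstract.
boot = [lrts(rng.normal(x.mean(), x.std(), size=len(x))) for _ in range(200)]
p_value = np.mean(np.array(boot) >= observed)
print(f"LRTS = {observed:.2f}, simulated p = {p_value:.3f}")
```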
APA, Harvard, Vancouver, ISO, and other styles
11

Williams, Matthew Richard. "Likelihood-based testing and model selection for hazard functions with unknown change-points." Diss., Virginia Tech, 2011. http://hdl.handle.net/10919/26835.

Full text
Abstract:
The focus of this work is the development of testing procedures for the existence of change-points in parametric hazard models of various types. Hazard functions and the related survival functions are common units of analysis for survival and reliability modeling. We develop a methodology to test the alternative of a two-piece hazard against a simpler one-piece hazard. The location of the change is unknown and the tests are irregular due to the presence of the change-point only under the alternative hypothesis. Our approach is to consider the profile log-likelihood ratio test statistic as a process with respect to the unknown change-point. We then derive its limiting process and find the supremum distribution of the limiting process to obtain critical values for the test statistic. We first reexamine existing work based on Taylor series expansions for abrupt changes in exponential data. We generalize these results to include Weibull data with known shape parameter. We then develop new tests for two-piece continuous hazard functions using local asymptotic normality (LAN). Finally, we generalize our earlier results for abrupt changes to include covariate information using the LAN techniques. While we focus on the cases of no censoring, simple right censoring, and censoring generated by staggered entry, our derivations reveal that our framework should apply to much broader censoring scenarios.
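A minimal sketch of the profile log-likelihood-ratio process for the simplest case mentioned above (an abrupt change in an exponential hazard, no censoring); the grid of candidate change points and the data are illustrative, and the critical values would come from the limiting process derived in the dissertation, not from a chi-square table.

```python
import numpy as np

rng = np.random.default_rng(3)

def piecewise_loglik(t, tau):
    """Profile log-likelihood at change point tau for a two-piece constant
    hazard (no censoring), with both rates profiled out."""
    d1 = np.sum(t < tau)
    d2 = len(t) - d1
    T1 = np.minimum(t, tau).sum()           # exposure before tau
    T2 = np.maximum(t - tau, 0.0).sum()     # exposure after tau
    if d1 == 0 or d2 == 0:
        return -np.inf
    lam1, lam2 = d1 / T1, d2 / T2
    return d1 * np.log(lam1) + d2 * np.log(lam2) - lam1 * T1 - lam2 * T2

t = rng.exponential(1.0, size=300)          # data generated under the null
n = len(t)
lam0 = n / t.sum()
ll0 = n * np.log(lam0) - lam0 * t.sum()     # one-piece (null) log-likelihood

taus = np.quantile(t, np.linspace(0.1, 0.9, 81))
profile = np.array([piecewise_loglik(t, tau) for tau in taus])
sup_lrt = 2.0 * (profile.max() - ll0)       # supremum of the LR process
print(f"sup LRT = {sup_lrt:.2f}")
```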
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
12

Gottfridsson, Anneli. "Likelihood ratio tests of separable or double separable covariance structure, and the empirical null distribution." Thesis, Linköpings universitet, Matematiska institutionen, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-69738.

Full text
Abstract:
The focus in this thesis is on the calculation of an empirical null distribution for likelihood ratio tests testing either separable or double separable covariance matrix structures versus an unstructured covariance matrix. These calculations have been performed for various dimensions and sample sizes, and are compared with the asymptotic χ2-distribution that is commonly used as an approximate null distribution. Tests of separable structures are of particular interest in cases when data are collected such that more than one relation between the components of the observation is suspected. For instance, if there are both a spatial and a temporal aspect, a hypothesis of two covariance matrices, one for each aspect, is reasonable.
APA, Harvard, Vancouver, ISO, and other styles
13

Eger, Karl-Heinz, and Evgeni Borisovich Tsoy. "Sequential probability ratio tests based on grouped observations." Universitätsbibliothek Chemnitz, 2010. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-201000938.

Full text
Abstract:
This paper deals with sequential likelihood ratio tests based on grouped observations. It is demonstrated that the method of conjugated parameter pairs, known from the non-grouped case, can be extended to the grouped case, yielding Wald-like approximations for the OC and ASN functions. For near hypotheses, so-called F-optimal groupings are recommended. As an example, an SPRT based on grouped observations is considered for the parameter of an exponentially distributed random variable.
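For orientation, here is a sketch of a plain (ungrouped) Wald SPRT for an exponential parameter in Python; the grouped-observation version and the F-optimal groupings discussed in the paper are not implemented here, and all rates and error levels are illustrative.

```python
import numpy as np
from scipy.stats import expon

rng = np.random.default_rng(4)

def sprt_exponential(stream, lam0, lam1, alpha=0.05, beta=0.05):
    """Wald SPRT for H0: rate = lam0 vs H1: rate = lam1 on an
    exponential observation stream (ungrouped version)."""
    a = np.log(beta / (1 - alpha))          # lower boundary: accept H0
    b = np.log((1 - beta) / alpha)          # upper boundary: accept H1
    llr = 0.0
    for n, x in enumerate(stream, start=1):
        llr += expon.logpdf(x, scale=1 / lam1) - expon.logpdf(x, scale=1 / lam0)
        if llr <= a:
            return "accept H0", n
        if llr >= b:
            return "accept H1", n
    return "undecided", n

data = rng.exponential(1 / 1.5, size=10_000)    # true rate is 1.5
print(sprt_exponential(data, lam0=1.0, lam1=1.5))
```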
APA, Harvard, Vancouver, ISO, and other styles
14

Bokharaiee, Najafee Simin. "Spectrum Sensing in Cognitive Radio Networks." IEEE Transactions on Vehicular Technology, 2011. http://hdl.handle.net/1993/24069.

Full text
Abstract:
Given the ever-growing demand for radio spectrum, cognitive radio has recently emerged as an attractive wireless communication technology. This dissertation is concerned with developing spectrum sensing algorithms in cognitive radio networks where a single or multiple cognitive radios (CRs) assist in detecting licensed primary bands employed by single or multiple primary users. First, given that orthogonal frequency-division multiplexing (OFDM) is an important wideband transmission technique, detection of OFDM signals in a low-signal-to-noise-ratio scenario is studied. It is shown that the cyclic prefix correlation coefficient (CPCC)-based spectrum sensing algorithm, which was previously introduced as a simple and computationally efficient spectrum-sensing method for OFDM signals, is a special case of the constrained generalized likelihood ratio test (GLRT) in the absence of multipath. The performance of the CPCC-based algorithm degrades in a multipath scenario. However, by employing the inherent structure of OFDM signals and exploiting multipath correlation in the GLRT algorithm, a simple and low-complexity algorithm called the multipath-based constrained GLRT (MP-based C-GLRT) algorithm is obtained. Further performance improvement is achieved by combining both the CPCC- and MP-based C-GLRT algorithms. A simple GLRT-based detection algorithm is also developed for unsynchronized OFDM signals. In the next part of the dissertation, a cognitive radio network model with multiple CRs is considered in order to investigate the benefit of collaboration and diversity in improving the overall sensing performance. Specifically, the problem of decision fusion for cooperative spectrum sensing is studied when fading channels are present between the CRs and the fusion center (FC). Noncoherent transmission schemes with on-off keying (OOK) and binary frequency-shift keying (BFSK) are employed to transmit the binary decisions to the FC. The aim is to maximize the achievable secondary throughput of the CR network. Finally, in order to reduce the required transmission bandwidth in the reporting phase of the CRs in a cooperative sensing scheme, the last part of the dissertation examines nonorthogonal transmission of local decisions by means of on-off keying, and proposes and analyzes a novel decoding-based fusion rule for combining the hard decisions in a linear manner.
APA, Harvard, Vancouver, ISO, and other styles
15

Liang, Yuli. "Contributions to Estimation and Testing Block Covariance Structures in Multivariate Normal Models." Doctoral thesis, Stockholms universitet, Statistiska institutionen, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-115347.

Full text
Abstract:
This thesis concerns inference problems in balanced random effects models with a so-called block circular Toeplitz covariance structure. This class of covariance structures describes the dependency of some specific multivariate two-level data when both compound symmetry and circular symmetry appear simultaneously. We derive two covariance structures under two different invariance restrictions. The obtained covariance structures reflect both circularity and exchangeability present in the data. In particular, estimation in the balanced random effects with block circular covariance matrices is considered. The spectral properties of such patterned covariance matrices are provided. Maximum likelihood estimation is performed through the spectral decomposition of the patterned covariance matrices. Existence of the explicit maximum likelihood estimators is discussed and sufficient conditions for obtaining explicit and unique estimators for the variance-covariance components are derived. Different restricted models are discussed and the corresponding maximum likelihood estimators are presented. This thesis also deals with hypothesis testing of block covariance structures, especially block circular Toeplitz covariance matrices. We consider both so-called external tests and internal tests. In the external tests, various hypotheses about testing block covariance structures, as well as mean structures, are considered, and the internal tests are concerned with testing specific covariance parameters given the block circular Toeplitz structure. Likelihood ratio tests are constructed, and the null distributions of the corresponding test statistics are derived.
APA, Harvard, Vancouver, ISO, and other styles
16

Tao, Jinxin. "Comparison Between Confidence Intervals of Multiple Linear Regression Model with or without Constraints." Digital WPI, 2017. https://digitalcommons.wpi.edu/etd-theses/404.

Full text
Abstract:
Regression analysis is one of the most widely applied statistical techniques. The statistical inference for a linear regression model with a monotone constraint has been discussed in earlier work. A natural question concerns the difference between the cases with and without the constraint. Although the comparison between confidence intervals of linear regression models with and without restriction has been considered for one predictor variable, such a discussion is needed for multiple regression. In this thesis, I discuss the comparison of the confidence intervals between a multiple linear regression model with and without constraints.
APA, Harvard, Vancouver, ISO, and other styles
17

Lopez, Gabriel E. "Detection and Classification of DIF Types Using Parametric and Nonparametric Methods: A comparison of the IRT-Likelihood Ratio Test, Crossing-SIBTEST, and Logistic Regression Procedures." Scholar Commons, 2012. http://scholarcommons.usf.edu/etd/4131.

Full text
Abstract:
The purpose of this investigation was to compare the efficacy of three methods for detecting differential item functioning (DIF). The performance of the crossing simultaneous item bias test (CSIBTEST), the item response theory likelihood ratio test (IRT-LR), and logistic regression (LOGREG) was examined across a range of experimental conditions including different test lengths, sample sizes, DIF and differential test functioning (DTF) magnitudes, and mean differences in the underlying trait distributions of comparison groups, herein referred to as the reference and focal groups. In addition, each procedure was implemented using both an all-other anchor approach, in which the IRT-LR baseline model, CSIBTEST matching subtest, and LOGREG trait estimate were based on all test items except for the one under study, and a constant anchor approach, in which the baseline model, matching subtest, and trait estimate were based on a predefined subset of DIF-free items. Response data for the reference and focal groups were generated using known item parameters based on the three-parameter logistic item response theory model (3-PLM). Various types of DIF were simulated by shifting the generating item parameters of select items to achieve desired DIF and DTF magnitudes based on the area between the groups' item response functions. Power, Type I error, and Type III error rates were computed for each experimental condition based on 100 replications, and the effects were analyzed via ANOVA. Results indicated that the procedures varied in efficacy, with LOGREG, when implemented using an all-other approach, providing the best balance of power and Type I error rate. However, none of the procedures were effective at identifying the type of DIF that was simulated.
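A hedged sketch of the LOGREG procedure in its usual likelihood-ratio form, comparing a matching-criterion-only logistic model with one that adds group and interaction terms; the variable names, simulated data and use of statsmodels are assumptions for illustration and do not reproduce the study's design.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(5)

# Simulated item responses: 'total' plays the role of the matching criterion
# (e.g. a rest score) and 'group' codes reference (0) / focal (1).
n = 2000
total = rng.normal(0, 1, n)
group = rng.integers(0, 2, n)
logit = -0.3 + 1.2 * total + 0.5 * group        # an item with uniform DIF
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X0 = sm.add_constant(np.column_stack([total]))                        # no DIF
X1 = sm.add_constant(np.column_stack([total, group, total * group]))  # uniform + non-uniform DIF

ll0 = sm.Logit(y, X0).fit(disp=0).llf
ll1 = sm.Logit(y, X1).fit(disp=0).llf
lr = 2 * (ll1 - ll0)
print(f"LR = {lr:.2f}, p = {chi2.sf(lr, df=2):.4g}")   # 2-df test for any DIF
```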
APA, Harvard, Vancouver, ISO, and other styles
18

Chen, Xinyu. "Inference in Constrained Linear Regression." Digital WPI, 2017. https://digitalcommons.wpi.edu/etd-theses/405.

Full text
Abstract:
Regression analysis constitutes an important part of statistical inference and has wide application in many areas. In some applications, we strongly believe that the regression function changes monotonically with some or all of the predictor variables in a region of interest. Deriving inference under such constraints is an enormous task. In this work, the restricted prediction interval for the mean of the regression function is constructed when two predictors are present. I use a modified likelihood ratio test (LRT) to construct the prediction intervals.
APA, Harvard, Vancouver, ISO, and other styles
19

Hattaway, James T. "Parameter Estimation and Hypothesis Testing for the Truncated Normal Distribution with Applications to Introductory Statistics Grades." Diss., Brigham Young University, 2010. http://contentdm.lib.byu.edu/ETD/image/etd3412.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Jotta, César Augusto Degiato. "Análise de variância multivariada nas estimativas dos parâmetros do modelo log-logístico para susceptibilidade do capim-pé-de-galinha ao glyphosate." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/11/11134/tde-29112016-163511/.

Full text
Abstract:
The national agricultural scenario has become increasingly competitive over the years; maintaining productivity growth, at low operating cost and with low environmental impact, have been the three most relevant concerns in the area. Productivity, in turn, is a function of several variables, weed control being one of the variables to be considered. In this work a dataset is analyzed from an experiment conducted in the Plant Production Department of ESALQ-USP, Piracicaba - SP. Four biotypes of capim-pé-de-galinha (goosegrass) from three Brazilian states were evaluated at three morphological stages, with four repetitions per biotype. The response variable was dry mass (g), and the regressor variable was the dose of glyphosate, at concentrations ranging from 1/16 D to 16 D plus an untreated control, where D is 480 grams of glyphosate acid equivalent per hectare (g a.e. ha-1) for the 2-3 tiller stage, 720 (g a.e. ha-1) for the 6-8 tiller stage and 960 for the 10-12 tiller stage. The primary objective of the work was to evaluate whether, over the years, goosegrass populations have become resistant to the herbicide glyphosate, aiming at the detection of resistant biotypes. The experiment was installed under a completely randomized design, carried out at three different stages. For the data analysis, the non-linear log-logistic model proposed by Knezevic, S. and Ritz (2007) was used as the univariate method, and the maximum likelihood method was used to verify the equality of the parameter e. The model converged for almost all repetitions, but no systematic behavior was observed that would explain the non-convergence of a particular repetition. In a second step, the estimates of the three model parameters were taken as dependent variables in a multivariate analysis of variance. Since the three, jointly, were significant by the Pillai, Wilks, Roy and Hotelling-Lawley tests, Tukey's test was performed for the same parameter e and compared with the first method. With the same significance level, this procedure was less able to identify differences between the parameter means of the grass varieties than the method proposed by Regazzi (2015).
APA, Harvard, Vancouver, ISO, and other styles
21

Rettiganti, Mallikarjuna Rao. "Statistical Models for Count Data from Multiple Sclerosis Clinical Trials and their Applications." The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1291180207.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Price, Emily A. "Item Discrimination, Model-Data Fit, and Type I Error Rates in DIF Detection using Lord's χ2, the Likelihood Ratio Test, and the Mantel-Haenszel Procedure." Ohio University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1395842816.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Boulet, John R. "A Monte Carlo comparison of the Type I error rates of the likelihood ratio chi-square test statistic and Hotelling's two-sample T2 on testing the differences between group means." Thesis, University of Ottawa (Canada), 1990. http://hdl.handle.net/10393/5708.

Full text
Abstract:
The present paper demonstrates how Structural Equation Modelling (SEM) can be used to formulate a test of the difference in means between groups on a number of dependent variables. A Monte Carlo study compared the Type I error rates of the Likelihood Ratio (LR) chi-square (χ²) statistic (SEM test criterion) and Hotelling's two-sample T² statistic (MANOVA test criterion) in detecting differences in means between two independent samples. Seventy-two conditions pertaining to average sample size ((n1 + n2)/2), extent of inequality of sample sizes (n1:n2), number of variables (p), and degree of inequality of variance-covariance matrices (Σ1:Σ2) were modelled. Empirical sampling distributions of the LR χ² statistic and Hotelling's T² statistic consisted of 2000 samples drawn from multivariate normal parent populations. The actual proportions of values that exceeded the nominal levels are presented. The results indicated that, in terms of maintaining Type I error rates that were close to the nominal levels, the LR χ² statistic and Hotelling's T² statistic were comparable when Σ1 = Σ2 and (n1 + n2)/2:p was relatively large (i.e., 30:1). However, when Σ1 = Σ2 and (n1 + n2)/2:p was small (i.e., 10:1), Hotelling's T² statistic was preferred. When Σ1 ≠ Σ2, the LR χ² statistic provided more appropriate Type I error rates under all of the simulated conditions. The results are related to earlier findings, and implications for the appropriate use of the SEM method of testing for group mean differences are noted.
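As a small companion to the MANOVA test criterion described above, the following Python sketch computes Hotelling's two-sample T² p-value through its exact F transform and estimates the empirical Type I error by Monte Carlo under equal covariance matrices; dimensions, sample sizes and replication counts are illustrative.

```python
import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(6)

def hotelling_two_sample_p(x, y):
    """p-value of Hotelling's two-sample T^2 test via its exact F transform
    (equal covariance matrices assumed)."""
    n1, n2, p = len(x), len(y), x.shape[1]
    d = x.mean(0) - y.mean(0)
    S = ((n1 - 1) * np.cov(x, rowvar=False) +
         (n2 - 1) * np.cov(y, rowvar=False)) / (n1 + n2 - 2)
    t2 = (n1 * n2) / (n1 + n2) * d @ np.linalg.solve(S, d)
    f_stat = (n1 + n2 - p - 1) / (p * (n1 + n2 - 2)) * t2
    return f_dist.sf(f_stat, p, n1 + n2 - p - 1)

# Monte Carlo Type I error under equal means and equal covariance matrices
p_dim, n1, n2, reps = 3, 30, 30, 2000
hits = sum(hotelling_two_sample_p(rng.normal(size=(n1, p_dim)),
                                  rng.normal(size=(n2, p_dim))) < 0.05
           for _ in range(reps))
print("empirical Type I error:", hits / reps)   # should be near 0.05
```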
APA, Harvard, Vancouver, ISO, and other styles
24

Florez, Guillermo Domingo Martinez. "Extensões do modelo -potência." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-07072011-154259/.

Full text
Abstract:
In data analysis where the data present a certain degree of asymmetry, the assumption of normality can be unrealistic, and applying this model can hide important characteristics of the true model. Situations of this type have given strength to the use of asymmetric models, with special emphasis on the skew-symmetric family of distributions developed by Azzalini (1985). In this work we present an alternative for data analysis in the presence of significant asymmetry and/or kurtosis, compared with the normal distribution, as well as other situations that involve such models. We present and study the properties of the alpha-power and log-alpha-power distributions, where we also study the estimation problem, the observed and expected Fisher information matrices, and the degree of bias of the estimators by means of simulation. A more flexible version of the alpha-power model is introduced, from which a bimodal case of this distribution is derived, leading to the symmetric and asymmetric bimodal alpha-power models. Next, the alpha-power distribution is extended to the Birnbaum-Saunders model; the properties of this new model are studied, estimators for the parameters are developed, and bias-corrected estimators are proposed. We also introduce alpha-power regression models for censored and uncensored data and for the log-linear Birnbaum-Saunders regression model, deriving the parameter estimators and studying some model validation techniques. Finally, a multivariate extension of the alpha-power model is proposed and some estimation procedures for the model are investigated. All the situations investigated are illustrated with applications to data sets previously analysed under other distributional assumptions.
APA, Harvard, Vancouver, ISO, and other styles
25

Silva, Michel Ferreira da. "Estimação e teste de hipótese baseados em verossimilhanças perfiladas." Universidade de São Paulo, 2005. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-06122006-162733/.

Full text
Abstract:
The profile likelihood function is not a genuine likelihood function, and profile maximum likelihood estimators are typically inefficient and inconsistent. Additionally, the null distribution of the likelihood ratio test statistic can be poorly approximated by the asymptotic chi-squared distribution in finite samples when there are nuisance parameters. It is thus important to obtain adjustments to the likelihood function. Several authors, including Barndorff-Nielsen (1983, 1994), Cox and Reid (1987, 1992), McCullagh and Tibshirani (1990) and Stern (1997), have proposed modifications to the profile likelihood function. They are defined in such a way as to reduce the score and information biases. In this dissertation, we review several profile likelihood adjustments and also approximations to the adjustments proposed by Barndorff-Nielsen (1983, 1994), also described in Severini (2000a). We present derivations and the main properties of the different adjustments. We also obtain adjustments for likelihood-based inference in the two-parameter exponential family. Numerical results on estimation and testing are provided. We also consider models that do not belong to the two-parameter exponential family: the GA0(alfa,gama,L) family, which is commonly used to model radar image data, and the Weibull model, which is useful for reliability studies, the latter under both noncensored and censored data. Again, extensive numerical results are provided. It is noteworthy that, in the context of the GA0(alfa,gama,L) model, we have evaluated the approximation of the null distribution of the signed likelihood ratio statistic by the standard normal distribution. Additionally, we have obtained distributional results for the Weibull case concerning the maximum likelihood estimators and the likelihood ratio statistic for both noncensored and censored data, presented in an appendix.
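A brief sketch of the starting point of this dissertation, the unadjusted profile likelihood treated as if it were a genuine likelihood, here for the Weibull shape parameter with the scale profiled out; the adjustments of Barndorff-Nielsen, Cox and Reid, and others would modify this profile before testing. Data and optimization bounds are illustrative.

```python
import numpy as np
from scipy.stats import weibull_min
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(9)
x = weibull_min.rvs(c=1.5, scale=2.0, size=40, random_state=rng)

def profile_loglik(k, x):
    """Weibull log-likelihood in the shape k with the scale profiled out."""
    lam_hat = np.mean(x**k) ** (1.0 / k)        # scale MLE for a fixed shape
    return weibull_min.logpdf(x, c=k, scale=lam_hat).sum()

# Treating the profile as a genuine likelihood: shape MLE and a profile
# likelihood-ratio statistic for H0: k = 1 (exponential data).
res = minimize_scalar(lambda k: -profile_loglik(k, x),
                      bounds=(0.1, 10.0), method="bounded")
k_hat = res.x
lrt = 2.0 * (profile_loglik(k_hat, x) - profile_loglik(1.0, x))
print(f"shape MLE = {k_hat:.3f}, profile LRT for k = 1: {lrt:.3f}")
```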
APA, Harvard, Vancouver, ISO, and other styles
26

Araripe, Patricia Peres. "Análise de agrupamento de semeadoras manuais quanto à distribuição do número de sementes." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/11/11134/tde-06042016-181136/.

Full text
Abstract:
The manual planter is a tool that still plays an important role today in several countries around the world that practice family and conservation agriculture. Its use is important because it minimizes soil disturbance and labor requirements in the field and promotes more sustainable productivity, among other factors. In order to evaluate and/or compare the manual planters available on the market, several studies have been conducted, but considering only measures of position and dispersion. This work presents an alternative methodology for comparing the performance of manual planters. In this case, the probabilities associated with each response category were estimated, and the hypothesis that these probabilities do not vary between planters compared in pairs was evaluated using the likelihood ratio test and the Bayes factor in the classical and Bayesian paradigms, respectively. Finally, the planters were grouped using cluster analysis with the J-divergence as the distance measure. As an illustration of this methodology, data from fifteen manual planters of different manufacturers, adjusted to deposit exactly two seeds per stroke, analyzed by Molin, Menegatti and Gimenez (2001), were considered. Initially, in the classical approach, the planters without zero values in the response categories were compared, and planters 3, 8 and 14 presented the best behavior. All the planters were then compared in pairs, grouping categories and adding the constants 0.5 or 1 to each response category. When categories were grouped, it was difficult to draw conclusions from the likelihood ratio test, which only highlighted the fact that planter 15 differs from the others. Adding 0.5 or 1 to each category did not, apparently, lead to the formation of distinct groups; planter 1, which by the test differed from the others and most frequently deposited two seeds, as required by the agronomic experiment, was the one recommended in this work. In the Bayesian approach, the Bayes factor was used to compare the planters in pairs, but the conclusions were similar to those obtained in the classical approach. Finally, the cluster analysis allowed a better visualization of the groups of planters similar to each other in both approaches, confirming the results obtained previously.
APA, Harvard, Vancouver, ISO, and other styles
27

Chen, Liang. "Small population bias and sampling effects in stochastic mortality modelling." Thesis, Heriot-Watt University, 2017. http://hdl.handle.net/10399/3372.

Full text
Abstract:
Pension schemes are facing increasing difficulties in matching their underlying liabilities with assets, mainly due to faster mortality improvements in their underlying populations, better environments and medical treatments, and historically low interest rates. Given that most pension schemes are much smaller than the national population, modelling and forecasting the longevity risk of small populations has become an urgent task for both industrial practitioners and academic researchers. This thesis starts with a systematic analysis of the influence of population size on the uncertainties of mortality estimates and forecasts with a stochastic mortality model, based on a parametric bootstrap methodology with England and Wales males as the benchmark population. Population size has a significant effect on the uncertainty of mortality estimates and forecasts, and the volatilities of small populations are over-estimated by the maximum likelihood estimators. A Bayesian model is developed to improve the estimation of the volatilities and the prediction of mortality rates for small populations by employing the information of a larger population through informative prior distributions. The new model is validated with the simulated small-death-count scenarios. The Bayesian methodology generates smoothed estimates of the mortality rates. Moreover, a methodology is introduced that uses the information of the large population to obtain unbiased volatility estimates given the underlying prior settings. Finally, an empirical study is carried out based on the Scotland mortality dataset.
APA, Harvard, Vancouver, ISO, and other styles
28

Lehmann, Rüdiger, and Frank Neitzel. "Testing the compatibility of constraints for parameters of a geodetic adjustment model." Hochschule für Technik und Wirtschaft Dresden, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:520-qucosa-148609.

Full text
Abstract:
Geodetic adjustment models are often set up in a way that the model parameters need to fulfil certain constraints. The normalized Lagrange multipliers have been used as a measure of the strength of constraint in such a way that if one of them exceeds in magnitude a certain threshold then the corresponding constraint is likely to be incompatible with the observations and the rest of the constraints. We show that these and similar measures can be deduced as test statistics of a likelihood ratio test of the statistical hypothesis that some constraints are incompatible in the same sense. This has been done before only for special constraints (Teunissen in Optimization and Design of Geodetic Networks, pp. 526–547, 1985). We start from the simplest case, that the full set of constraints is to be tested, and arrive at the advanced case, that each constraint is to be tested individually. Every test is worked out both for a known as well as for an unknown prior variance factor. The corresponding distributions under null and alternative hypotheses are derived. The theory is illustrated by the example of a double levelled line
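A toy numerical sketch of the basic idea, assuming a small Gauss-Markov model with one linear constraint and a known variance factor: twice the log likelihood ratio equals the increase in the sum of squared residuals caused by enforcing the constraint, and is referred to a chi-square distribution. The design matrix, constraint and noise level are invented and do not correspond to the paper's double levelled line.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(7)

# Toy adjustment: observations y = A x + e, with a constraint H x = w.
m, u = 40, 3
A = rng.normal(size=(m, u))
x_true = np.array([1.0, -2.0, 0.5])
sigma = 0.1
y = A @ x_true + rng.normal(scale=sigma, size=m)

H = np.array([[1.0, 1.0, 0.0]])      # constraint: x1 + x2 = w
w = np.array([-1.0])                 # compatible with x_true

def sse(x):
    r = y - A @ x
    return r @ r

# Unconstrained least squares
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)

# Constrained least squares via bordered normal equations (Lagrange multipliers)
N = A.T @ A
K = np.block([[N, H.T], [H, np.zeros((1, 1))]])
rhs = np.concatenate([A.T @ y, w])
x_con = np.linalg.solve(K, rhs)[:u]

T = (sse(x_con) - sse(x_hat)) / sigma**2      # LR statistic, known variance factor
print(f"T = {T:.3f}, p = {chi2.sf(T, df=H.shape[0]):.3f}")
```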
APA, Harvard, Vancouver, ISO, and other styles
29

Lehmann, Rüdiger, and Frank Neitzel. "Testing the compatibility of constraints for parameters of a geodetic adjustment model." Springer Verlag, 2013. https://htw-dresden.qucosa.de/id/qucosa%3A23273.

Full text
Abstract:
Geodetic adjustment models are often set up in a way that the model parameters need to fulfil certain constraints. The normalized Lagrange multipliers have been used as a measure of the strength of constraint in such a way that if one of them exceeds in magnitude a certain threshold then the corresponding constraint is likely to be incompatible with the observations and the rest of the constraints. We show that these and similar measures can be deduced as test statistics of a likelihood ratio test of the statistical hypothesis that some constraints are incompatible in the same sense. This has been done before only for special constraints (Teunissen in Optimization and Design of Geodetic Networks, pp. 526–547, 1985). We start from the simplest case, that the full set of constraints is to be tested, and arrive at the advanced case, that each constraint is to be tested individually. Every test is worked out both for a known as well as for an unknown prior variance factor. The corresponding distributions under null and alternative hypotheses are derived. The theory is illustrated by the example of a double levelled line.
APA, Harvard, Vancouver, ISO, and other styles
30

Sheppard, Therese. "Extending covariance structure analysis for multivariate and functional data." Thesis, University of Manchester, 2010. https://www.research.manchester.ac.uk/portal/en/theses/extending-covariance-structure-analysis-for-multivariate-and-functional-data(e2ad7f12-3783-48cf-b83c-0ca26ef77633).html.

Full text
Abstract:
For multivariate data, when testing homogeneity of covariance matrices arising from two or more groups, Bartlett's (1937) modified likelihood ratio test statistic is appropriate to use under the null hypothesis of equal covariance matrices where the null distribution of the test statistic is based on the restrictive assumption of normality. Zhang and Boos (1992) provide a pooled bootstrap approach when the data cannot be assumed to be normally distributed. We give three alternative bootstrap techniques to testing homogeneity of covariance matrices when it is both inappropriate to pool the data into one single population as in the pooled bootstrap procedure and when the data are not normally distributed. We further show that our alternative bootstrap methodology can be extended to testing Flury's (1988) hierarchy of covariance structure models. Where deviations from normality exist, we show, by simulation, that the normal theory log-likelihood ratio test statistic is less viable compared with our bootstrap methodology. For functional data, Ramsay and Silverman (2005) and Lee et al (2002) together provide four computational techniques for functional principal component analysis (PCA) followed by covariance structure estimation. When the smoothing method for smoothing individual profiles is based on using least squares cubic B-splines or regression splines, we find that the ensuing covariance matrix estimate suffers from loss of dimensionality. We show that ridge regression can be used to resolve this problem, but only for the discretisation and numerical quadrature approaches to estimation, and that choice of a suitable ridge parameter is not arbitrary. We further show the unsuitability of regression splines when deciding on the optimal degree of smoothing to apply to individual profiles. To gain insight into smoothing parameter choice for functional data, we compare kernel and spline approaches to smoothing individual profiles in a nonparametric regression context. Our simulation results justify a kernel approach using a new criterion based on predicted squared error. We also show by simulation that, when taking account of correlation, a kernel approach using a generalized cross validatory type criterion performs well. These data-based methods for selecting the smoothing parameter are illustrated prior to a functional PCA on a real data set.
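As a point of reference for the pooled bootstrap mentioned above, this Python sketch computes Bartlett's modified (Box's M type) statistic for two groups and calibrates it with the Zhang-Boos pooled bootstrap; it does not implement the alternative bootstrap schemes or the functional-data methods developed in the thesis, and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)

def box_m(groups):
    """Bartlett-Box modified LR statistic for equal covariance matrices."""
    k = len(groups)
    ns = np.array([len(g) for g in groups])
    Ss = [np.cov(g, rowvar=False) for g in groups]
    Sp = sum((n - 1) * S for n, S in zip(ns, Ss)) / (ns.sum() - k)
    return ((ns.sum() - k) * np.log(np.linalg.det(Sp))
            - sum((n - 1) * np.log(np.linalg.det(S)) for n, S in zip(ns, Ss)))

# Two groups of 3-variate data with equal covariance (the null is true here)
g1 = rng.normal(size=(40, 3))
g2 = rng.normal(size=(60, 3)) + 1.0          # different means are allowed
groups = [g1, g2]
m_obs = box_m(groups)

# Pooled bootstrap null distribution: centre each group, pool, resample.
pool = np.vstack([g - g.mean(0) for g in groups])
boot = []
for _ in range(2000):
    sample = pool[rng.integers(0, len(pool), size=len(pool))]
    boot.append(box_m([sample[:40], sample[40:]]))
print("p (pooled bootstrap):", np.mean(np.array(boot) >= m_obs))
```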
APA, Harvard, Vancouver, ISO, and other styles
31

Trachi, Youness. "On induction machine faults detection using advanced parametric signal processing techniques." Thesis, Brest, 2017. http://www.theses.fr/2017BRES0103/document.

Full text
Abstract:
L’objectif de ces travaux de thèse est de développer des architectures fiables de surveillance et de détection des défauts d’une machine asynchrone basées sur des techniques paramétriques de traitement du signal. Pour analyser et détecter les défauts, un modèle paramétrique du courant statorique en environnement stationnaire est proposé. Il est supposé être constitué de plusieurs sinusoïdes avec des paramètres inconnus dans le bruit. Les paramètres de ce modèle sont estimés à l’aide des techniques paramétriques telles que les estimateurs spectraux de type sous-espaces (MUSIC et ESPRIT) et l’estimateur du maximum de vraisemblance. Un critère de sévérité des défauts, basé sur l’estimation des amplitudes des composantes fréquentielles du courant statorique, est aussi proposé pour évaluer le niveau de défaillance de la machine. Un nouveau détecteur des défauts est aussi proposé en utilisant la théorie de détection. Il est principalement basé sur le test du rapport de vraisemblance généralisé avec un signal et un bruit à paramètres inconnus. Enfin, les techniques paramétriques proposées ont été évaluées à l’aide de signaux de courant statoriques expérimentaux de machines asynchrones en considérant les défauts de roulements et les ruptures de barres rotoriques. L’analyse des résultats expérimentaux montre clairement l’efficacité et la capacité de détection des techniques paramétriques proposées
This Ph.D. thesis aims to develop reliable and cost-effective condition monitoring and fault detection architectures for induction machines. These architectures are mainly based on advanced parametric signal processing techniques. To analyze and detect faults, a parametric stator current model under stationary conditions has been considered. It is assumed to consist of multiple sinusoids with unknown parameters in noise. This model has been estimated using parametric techniques such as subspace spectral estimators (MUSIC and ESPRIT) and the maximum likelihood estimator. A fault severity criterion based on the estimation of the stator current frequency component amplitudes has also been proposed to determine the induction machine failure level. A novel fault detector based on hypothesis testing has also been proposed. This detector is mainly based on the generalized likelihood ratio test with unknown signal and noise parameters. The proposed parametric techniques have been evaluated using experimental stator current signals issued from induction machines under two considered faults: bearing faults and broken rotor bars. Experimental results show the effectiveness and the detection ability of the proposed parametric techniques.
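In its generic form, the generalized likelihood ratio test underlying such a detector compares the maximized likelihoods under the two hypotheses (written here in textbook notation, not the thesis's specific parametrization):

\[
T_{\mathrm{GLR}}(\mathbf{x}) \;=\; 2\ln\frac{\max_{\boldsymbol{\theta}_1} p(\mathbf{x};\boldsymbol{\theta}_1,\mathcal{H}_1)}{\max_{\boldsymbol{\theta}_0} p(\mathbf{x};\boldsymbol{\theta}_0,\mathcal{H}_0)} \;\underset{\mathcal{H}_0}{\overset{\mathcal{H}_1}{\gtrless}}\; \gamma ,
\]

where the unknown signal and noise parameters are replaced by their maximum likelihood estimates and the threshold \(\gamma\) is set from the target false alarm probability.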
APA, Harvard, Vancouver, ISO, and other styles
32

Morel, Guy. "Procédures statistiques pour espace de décisions totalement ordonné et famille de lois à vraisemblance monotone." Rouen, 1987. http://www.theses.fr/1987ROUES009.

Full text
Abstract:
We consider a decision problem for a statistical model with monotone likelihood ratio, the set of decisions consisting of ordered intervals. The space of decision procedures is composed of the procedures that define a confidence interval at a given level. This space is partially ordered by a cost function or by a less specific criterion. An essentially complete class is obtained. Within this class, the selection and computation of a "good" decision rule are considered for four sets of decisions D, these particular cases being linked to decision problems arising in experimental research work.
APA, Harvard, Vancouver, ISO, and other styles
33

Vasquez, Emilie. "Techniques statistiques de détection de cibles dans des images infrarouges inhomogènes en milieu maritime." Thesis, Aix-Marseille 3, 2011. http://www.theses.fr/2011AIX30001.

Full text
Abstract:
Des techniques statistiques de détection d'objet ponctuel dans le ciel ou résolu dans la mer dans des images infrarouges de veille panoramique sont développées. Ces techniques sont adaptées aux inhomogénéités présentes dans ce type d'image. Elles ne sont fondées que sur l'analyse de l'information spatiale et ont pour objectif de maîtriser le taux de fausse alarme sur chaque image. Pour les zones de ciel, une technique conjointe de segmentation et détection adaptée aux variations spatiales de la luminosité moyenne est mise en œuvre et l'amélioration des performances auxquelles elle conduit est analysée. Pour les zones de mer, un détecteur de bord à taux de fausse alarme constant en présence d'inhomogénéités et de corrélations spatiales des niveaux de gris est développé et caractérisé. Dans chaque cas, la prise en compte des inhomogénéités dans les algorithmes statistiques s'avère essentielle pour maîtriser le taux de fausse alarme et améliorer les performances de détection
Statistical techniques for detecting a point target in the sky or a resolved target in the sea in infrared surveillance system images are developed. These techniques are adapted to the inhomogeneities present in this kind of image. They are based solely on the analysis of spatial information and allow the false alarm rate to be controlled in each image. For sky areas, a joint segmentation and detection technique adapted to spatial variations of the mean luminosity is developed and its performance improvement is analyzed. For sea areas, an edge detector with a constant false alarm rate in the presence of inhomogeneities and grey-level spatial correlations is developed and characterized. In each case, taking the inhomogeneities into account in these statistical algorithms proves essential to control the false alarm rate and to improve detection performance.
APA, Harvard, Vancouver, ISO, and other styles
34

Rezagholi, Mahmoud. "The Effects of Technological Change on Productivity and Factor Demand in U.S. Apparel Industry 1958-1996 : An Econometric Analysis." Thesis, Uppsala University, Department of Economics, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-7659.

Full text
Abstract:

In this dissertation I study the effects of disembodied technical change on total factor productivity and input demand in the U.S. apparel industry during 1958-1996. A time-series input-output data set for the sector is employed to estimate an error-corrected model of a four-factor transcendental logarithmic (translog) cost function. The empirical results indicate an impact of technical change on total factor productivity at a rate of 9% on average. In addition, technical progress had a biased, factor-augmenting effect in the sector.
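For reference, a four-factor translog cost function with a time trend representing disembodied technical change can be written in the generic textbook form below; the exact specification estimated in the dissertation may differ.

\[
\ln C = \alpha_0 + \sum_{i=1}^{4}\alpha_i\ln w_i + \tfrac{1}{2}\sum_{i=1}^{4}\sum_{j=1}^{4}\gamma_{ij}\ln w_i\ln w_j + \alpha_y\ln y + \tfrac{1}{2}\gamma_{yy}(\ln y)^2 + \sum_{i=1}^{4}\gamma_{iy}\ln w_i\ln y + \beta_t t + \tfrac{1}{2}\beta_{tt}t^2 + \sum_{i=1}^{4}\beta_{it}\, t\ln w_i ,
\]

where the \(w_i\) are input prices, \(y\) is output and \(t\) is a time index; symmetry and linear homogeneity in prices are imposed on the parameters.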

APA, Harvard, Vancouver, ISO, and other styles
35

Muller, Fernanda Maria. "MELHORAMENTOS INFERENCIAIS NO MODELO BETA-SKEW-T-EGARCH." Universidade Federal de Santa Maria, 2016. http://repositorio.ufsm.br/handle/1/8394.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
The Beta-Skew-t-EGARCH model was recently proposed in the literature to model the volatility of financial returns. Inference on the model parameters is based on the maximum likelihood method. The maximum likelihood estimators have good asymptotic properties; however, in finite samples they can be considerably biased. Monte Carlo simulations were used to evaluate the finite-sample performance of the point estimators. Numerical results indicated that the maximum likelihood estimators of some parameters are biased in sample sizes smaller than 3,000. Thus, bootstrap bias correction procedures were considered to obtain more accurate estimators in small samples. Better forecast quality was observed when the model with bias-corrected estimators was considered. In addition, we propose a likelihood ratio test to assist in the selection of the Beta-Skew-t-EGARCH model with one or two volatility components. The numerical evaluation of the two-component test showed distorted null rejection rates in sample sizes smaller than or equal to 1,000. To improve the performance of the proposed test in small samples, the bootstrap-based likelihood ratio test and the bootstrap Bartlett correction were considered. The bootstrap-based test exhibited null rejection rates closest to the nominal values. The evaluation results of the two-component tests showed their practical usefulness. Finally, an application of the proposed methods to the log-returns of the German stock index was presented.
O modelo Beta-Skew-t-EGARCH foi recentemente proposto para modelar a volatilidade de retornos financeiros. A estimação dos parâmetros do modelo é feita via máxima verossimilhança. Esses estimadores possuem boas propriedades assintóticas, mas em amostras de tamanho finito eles podem ser consideravelmente viesados. Com a finalidade de avaliar as propriedades dos estimadores, em amostras de tamanho finito, realizou-se um estudo de simulações de Monte Carlo. Os resultados numéricos indicam que os estimadores de máxima verossimilhança de alguns parâmetros do modelo são viesados em amostras de tamanho inferior a 3000. Para obter estimadores pontuais mais acurados foram consideradas correções de viés via o método bootstrap. Verificou-se que os estimadores corrigidos apresentaram menor viés relativo percentual. Também foi observada melhor qualidade das previsões quando o modelo com estimadores corrigidos são considerados. Para auxiliar na seleção entre o modelo Beta-Skew-t-EGARCH com um ou dois componentes de volatilidade foi apresentado um teste da razão de verossimilhanças. A avaliação numérica do teste de dois componentes proposto demonstrou taxas de rejeição nula distorcidas em tamanhos amostrais menores ou iguais a 1000. Para melhorar o desempenho do teste foram consideradas a correção bootstrap e a correção de Bartlett bootstrap. Os resultados numéricos indicam a utilidade prática dos testes de dois componentes propostos. O teste bootstrap exibiu taxas de rejeição nula mais próximas dos valores nominais. Ao final do trabalho foi realizada uma aplicação dos testes de dois componentes e do modelo Beta-Skew-t-EGARCH, bem como suas versões corrigidas, a dados do índice de mercado da Alemanha.
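As a minimal sketch of the kind of bootstrap bias correction referred to above (a generic constant-bias correction applied to an arbitrary estimator, not the authors' implementation for the Beta-Skew-t-EGARCH likelihood), one might write:

import numpy as np
from scipy import stats

def bootstrap_bias_correction(x, estimator, n_boot=500, seed=0):
    """Constant-bias bootstrap correction: theta_bc = 2*theta_hat - mean(theta_boot)."""
    rng = np.random.default_rng(seed)
    theta_hat = estimator(x)
    boot = np.array([estimator(rng.choice(x, size=x.size, replace=True))
                     for _ in range(n_boot)])
    return 2.0 * theta_hat - boot.mean()

# Illustration on a biased statistic (sample skewness of an exponential sample)
x = stats.expon.rvs(size=50, random_state=1)
print(bootstrap_bias_correction(x, lambda s: stats.skew(s)))

Subtracting the bootstrap estimate of the bias from the original estimate is the same idea applied in the dissertation to the maximum likelihood estimates of the model parameters.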
APA, Harvard, Vancouver, ISO, and other styles
36

Yu, Jung-Suk. "Essays on Fine Structure of Asset Returns, Jumps, and Stochastic Volatility." ScholarWorks@UNO, 2006. http://scholarworks.uno.edu/td/431.

Full text
Abstract:
There has been an on-going debate about choices of the most suitable model amongst a variety of model specifications and parameterizations. The first dissertation essay investigates whether asymmetric leptokurtic return distributions such as Hansen's (1994) skewed t-distribution combined with GARCH specifications can outperform mixed GARCH-jump models such as Maheu and McCurdy's (2004) GARJI model incorporating the autoregressive conditional jump intensity parameterization in the discrete-time framework. I find that the more parsimonious GJR-HT model is superior to mixed GARCH-jump models. Likelihood-ratio (LR) tests, information criteria such as AIC, SC and HQ, and Value-at-Risk (VaR) analysis confirm that GJR-HT is one of the most suitable model specifications, giving both a better fit to the data and parsimony of parameterization. The benefits of estimating GARCH models using asymmetric leptokurtic distributions are more substantial for highly volatile series such as emerging stock markets, which have a higher degree of non-normality. Furthermore, Hansen's skewed t-distribution also provides an excellent risk management tool, as evidenced by the VaR analysis. The second dissertation essay provides a variety of empirical evidence to support the redundancy of stochastic volatility for S&P 500 index returns when stochastic volatility is taken into account with infinite-activity pure Lévy jump models, and the importance of stochastic volatility in reducing pricing errors for S&P 500 index options regardless of the jump specification. This finding is important because recent studies have shown that stochastic volatility in a continuous-time framework provides an excellent fit for financial asset returns when combined with finite-activity Merton-type compound Poisson jump-diffusion models. The second essay also shows that the stochastic volatility with jumps (SVJ) and extended variance-gamma with stochastic volatility (EVGSV) models perform almost equally well for option pricing, which strongly implies that the type of Lévy jump specification is not an important factor in enhancing model performance once stochastic volatility is incorporated. In the second essay, I compute option prices via an improved Fast Fourier Transform (FFT) algorithm using characteristic functions to match arbitrary equally spaced log-strike grids to each moneyness and maturity of actual market option prices.
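The model comparisons described above rest on standard information criteria. A small Python helper of the following kind, shown here with purely hypothetical log-likelihood values, computes AIC, SC and HQ from a maximized log-likelihood, the number of parameters k and the sample size n:

import numpy as np

def information_criteria(loglik, k, n):
    """AIC, Schwarz/BIC (SC) and Hannan-Quinn (HQ) from a maximized log-likelihood."""
    aic = 2 * k - 2 * loglik
    sc = k * np.log(n) - 2 * loglik
    hq = 2 * k * np.log(np.log(n)) - 2 * loglik
    return aic, sc, hq

# Hypothetical values for two fitted return models on n = 2500 observations
print(information_criteria(loglik=-3810.2, k=6, n=2500))   # e.g. a GJR-type model
print(information_criteria(loglik=-3807.9, k=9, n=2500))   # e.g. a GARCH-jump model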
APA, Harvard, Vancouver, ISO, and other styles
37

Russo, Cibele Maria. ""Análise de um modelo de regressão com erros nas variáveis multivariado com intercepto nulo"." Universidade de São Paulo, 2006. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-01082006-214556/.

Full text
Abstract:
Para analisar características de interesse a respeito de um conjunto de dados reais da área de Odontologia apresentado em Hadgu & Koch (1999), ajustaremos um modelo de regressão linear multivariado com erros nas variáveis com intercepto nulo. Este conjunto de dados é caracterizado por medições de placa bacteriana em três grupos de voluntários, antes e após utilizar dois líquidos de bochecho experimentais e um líquido de bochecho controle, com medições (sujeitas a erros de medição) no início do estudo, após três e seis meses de utilização dos líquidos. Neste caso, uma possível estrutura de dependência entre as medições feitas em um mesmo indivíduo deve ser incorporada ao modelo e, além disto, temos duas variáveis resposta para cada indivíduo. Após a apresentação do modelo estatístico, iremos obter estimativas de máxima verossimilhança dos parâmetros utilizando o algoritmo iterativo EM e testaremos as hipóteses de interesse utilizando testes assintóticos de Wald, razão de verossimilhanças e score. Como neste caso não existe um teste ótimo, faremos um estudo de simulação para verificar o comportamento das três estatísticas de teste em relação a diferentes tamanhos amostrais e diferentes valores de parâmetros. Finalmente, faremos um estudo de diagnóstico buscando identificar possíveis pontos influentes no modelo, considerando o enfoque de influência local proposto por Cook (1986) e a medida de curvatura normal conformal desenvolvida por Poon & Poon (1999).
To analyze some characteristics of interest in a real odontological data set presented in Hadgu & Koch (1999), we propose the use of a multivariate null-intercept errors-in-variables regression model. This data set is composed of measurements of a dental plaque index (subject to measurement error) taken from volunteers who were randomized to two experimental mouth rinses (A and B) or a control mouth rinse. The measurements were taken on each individual, before and after the use of the respective mouth rinse, at the beginning of the study, three months after baseline and six months after baseline. In this case, a possible dependency structure between the measurements taken within the same individual must be incorporated in the model. After presenting the statistical model, we obtain the maximum likelihood estimates of the parameters using the EM algorithm, and we test the hypotheses of interest using asymptotic tests (Wald, likelihood ratio and score). A simulation study to verify the behavior of these three test statistics is also presented, considering different sample sizes and different parameter values. Finally, we carry out a diagnostic study to identify possible influential observations in the model, considering the local influence approach proposed by Cook (1986) and the conformal normal curvature proposed by Poon & Poon (1999).
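For reference, the three asymptotic tests mentioned here have the familiar textbook forms, written below for a simple null hypothesis; the thesis works with the corresponding versions for its model:

\[
W_{LR}=2\{\ell(\hat\theta)-\ell(\tilde\theta)\},\qquad
W_{Wald}=(\hat\theta-\theta_0)^{\top} I(\hat\theta)\,(\hat\theta-\theta_0),\qquad
W_{S}=U(\tilde\theta)^{\top} I(\tilde\theta)^{-1} U(\tilde\theta),
\]

where \(\ell\) is the log-likelihood, \(U\) the score, \(I\) the expected information, \(\hat\theta\) the unrestricted and \(\tilde\theta\) the restricted maximum likelihood estimates; each statistic is asymptotically chi-squared with degrees of freedom equal to the number of restrictions under the null hypothesis.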
APA, Harvard, Vancouver, ISO, and other styles
38

Gomes, Priscila da Silva. "Distribuição normal assimétrica para dados de expressão gênica." Universidade Federal de São Carlos, 2009. https://repositorio.ufscar.br/handle/ufscar/4530.

Full text
Abstract:
Financiadora de Estudos e Projetos
Microarray technologies are used to measure the expression levels of a large number of genes, or fragments of genes, simultaneously in different situations. This technology is useful for determining genes that may be responsible for genetic diseases. A common statistical methodology used to determine whether a gene g shows evidence of different expression levels is the t-test, which requires the assumption of normality for the data (Saraiva, 2006; Baldi & Long, 2001). However, this assumption sometimes does not agree with the nature of the analyzed data. In this work we use the skew-normal distribution, described formally by Azzalini (1985), which has the normal distribution as a particular case, in order to relax the assumption of normality. Considering a frequentist approach, we carry out a simulation study to detect differences between gene expression levels under control and treatment conditions through the t-test. Another simulation examines the power of the t-test when an asymmetric model is assumed for the data. We also use the likelihood ratio test to verify the adequacy of an asymmetric model for the data.
Os microarrays são ferramentas utilizadas para medir os níveis de expressão de uma grande quantidade de genes ou fragmentos de genes simultaneamente em situações variadas. Com esta ferramenta é possível determinar possíveis genes causadores de doenças de origem genética. Uma abordagem estatística comumente utilizada para determinar se um gene g apresenta evidências para níveis de expressão diferentes consiste no teste t, que exige a suposição de normalidade aos dados (Saraiva, 2006; Baldi & Long, 2001). No entanto, esta suposição pode não condizer com a natureza dos dados analisados. Neste trabalho, será utilizada a distribuição normal assimétrica descrita formalmente por Azzalini (1985), que tem a distribuição normal como caso particular, com o intuito de flexibilizar a suposição de normalidade. Considerando a abordagem clássica, é realizado um estudo de simulação para detectar diferenças entre os níveis de expressão gênica em situações de controle e tratamento através do teste t, também é considerado um estudo de simulação para analisar o poder do teste t quando é assumido um modelo assimétrico para o conjunto de dados. Também é realizado o teste da razão de verossimilhança, para verificar se o ajuste de um modelo assimétrico aos dados é adequado.
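A minimal sketch of this kind of comparison, assuming SciPy and simulated data in place of real expression values, fits the normal and skew-normal models by maximum likelihood and forms the likelihood ratio statistic:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = stats.skewnorm.rvs(a=4, size=200, random_state=rng)   # simulated expression values

# Fit the normal (null) and skew-normal (alternative) models by maximum likelihood
mu, sigma = stats.norm.fit(x)
a_hat, loc_hat, scale_hat = stats.skewnorm.fit(x)

ll_norm = stats.norm.logpdf(x, mu, sigma).sum()
ll_skew = stats.skewnorm.logpdf(x, a_hat, loc_hat, scale_hat).sum()

# LR statistic for H0: a = 0 (normality); the chi-square(1) reference is a rough guide,
# since a = 0 is a non-regular point of the skew-normal likelihood.
lr = 2 * (ll_skew - ll_norm)
print(lr, stats.chi2.sf(lr, df=1))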
APA, Harvard, Vancouver, ISO, and other styles
39

Pinheiro, Eliane Cantinho. "Ajustes para o teste da razão de verossimilhanças em modelos de regressão beta." Universidade de São Paulo, 2009. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-09072009-144049/.

Full text
Abstract:
O presente trabalho considera o problema de fazer inferência com acurácia para pequenas amostras, tomando por base a estatística da razão de verossimilhanças em modelos de regressão beta. Estes, por sua vez, são úteis para modelar proporções contínuas que são afetadas por variáveis independentes. Deduzem-se as estatísticas da razão de verossimilhanças ajustadas de Skovgaard (Scandinavian Journal of Statistics 28 (2001) 3-32) nesta classe de modelos. Os termos do ajuste, que têm uma forma simples e compacta, podem ser implementados em um software estatístico. São feitas simulações de Monte Carlo para mostrar que a inferência baseada nas estatísticas ajustadas propostas é mais confiável do que a inferência usual baseada na estatística da razão de verossimilhanças. Aplicam-se os resultados a um conjunto real de dados.
We consider the issue of performing accurate small-sample likelihood-based inference in beta regression models, which are useful for modeling continuous proportions that are affected by independent variables. We derive Skovgaard's (Scandinavian Journal of Statistics 28 (2001) 3-32) adjusted likelihood ratio statistics in this class of models. We show that the adjustment terms have a simple compact form that can be easily implemented in standard statistical software. We present Monte Carlo simulations showing that inference based on the adjusted statistics we propose is more reliable than that based on the usual likelihood ratio statistic. A real data example is presented.
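For reference, beta regression models the response through the mean-precision parametrization of the beta density (a standard form; the thesis's notation may differ slightly):

\[
f(y;\mu,\phi)=\frac{\Gamma(\phi)}{\Gamma(\mu\phi)\,\Gamma((1-\mu)\phi)}\,y^{\mu\phi-1}(1-y)^{(1-\mu)\phi-1},\qquad 0<y<1,
\]

with the mean linked to covariates through \(g(\mu_t)=x_t^{\top}\beta\) for a suitable link function \(g\) and precision parameter \(\phi>0\).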
APA, Harvard, Vancouver, ISO, and other styles
40

Holčák, Lukáš. "Statistická analýza souborů s malým rozsahem." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2008. http://www.nusl.cz/ntk/nusl-227882.

Full text
Abstract:
This diploma thesis focuses on the analysis of small samples in situations where it is not possible to obtain more data, typically because of capital intensity or time demands, when production cannot afford the collection of additional data or the financial resources are lacking. Analysis of small samples is, of course, highly uncertain, because the inferences are always burdened with a level of uncertainty.
APA, Harvard, Vancouver, ISO, and other styles
41

Erguven, Sait. "Path Extraction Of Low Snr Dim Targets From Grayscale 2-d Image Sequences." Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/12607723/index.pdf.

Full text
Abstract:
In this thesis, an algorithm for the visual detection and tracking of very low SNR targets, i.e. dim targets, is developed. Image processing of a single frame in time cannot be used for this aim due to the closeness of the intensity spectra of the background and the target. Therefore, change detection of super pixels, groups of pixels that have sufficient statistics for likelihood ratio testing, is proposed. Super pixels that are determined to be transition points are marked on a binary difference matrix and grouped by the 4-Connected Labeling method. Each label is processed to find its movement vector in the next frame by the Label Destruction and Centroids Mapping techniques. Candidate centroids are passed through the Distribution Density Function Maximization and Maximum Histogram Size Filtering methods to find the target-related motion vectors. Noise-related mappings are eliminated by Range and Maneuver Filtering. The geometrical centroids obtained on each frame are used as the observed target path, which is fed into the Optimum Decoding Based Smoothing Algorithm to smooth and estimate the real target path. The Optimum Decoding Based Smoothing Algorithm is based on the quantization of possible states, i.e. observed target path centroids, and the Viterbi Algorithm. According to the system and observation models, metric values of all possible target paths are computed using observation and transition probabilities. The path which yields the maximum metric value at the last frame is taken as the estimated target path.
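The smoothing step described above rests on the Viterbi algorithm over quantized states. A generic implementation sketch, with toy probabilities rather than the thesis's system and observation models, looks as follows:

import numpy as np

def viterbi(log_pi, log_A, log_B, obs):
    """Most likely state path for quantized observations.

    log_pi: (S,) initial log-probabilities
    log_A : (S, S) transition log-probabilities, log_A[i, j] = log P(j | i)
    log_B : (S, O) observation log-probabilities
    obs   : (T,) observed symbol indices
    """
    S, T = log_pi.shape[0], len(obs)
    delta = np.empty((T, S))
    psi = np.zeros((T, S), dtype=int)
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A      # previous state x current state
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    path = np.empty(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        path[t] = psi[t + 1, path[t + 1]]
    return path

# Toy example with 3 quantized position states and 2 observation symbols
log_pi = np.log([0.5, 0.3, 0.2])
log_A = np.log([[0.8, 0.15, 0.05], [0.1, 0.8, 0.1], [0.05, 0.15, 0.8]])
log_B = np.log([[0.9, 0.1], [0.5, 0.5], [0.1, 0.9]])
print(viterbi(log_pi, log_A, log_B, obs=[0, 0, 1, 1, 1]))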
APA, Harvard, Vancouver, ISO, and other styles
42

Malmström, Magnus. "5G Positioning using Machine Learning." Thesis, Linköpings universitet, Reglerteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-149055.

Full text
Abstract:
Positioning is recognized as an important feature of fifth generation (5G) cellular networks due to the massive number of commercial use cases that would benefit from access to position information. Radio-based positioning has always been a challenging task in urban canyons where buildings block and reflect the radio signal, causing multipath propagation and non-line-of-sight (NLOS) signal conditions. One approach to handle NLOS is to use data-driven methods such as machine learning algorithms on beam-based data, where a training data set with positioned measurements is used to train a model that transforms measurements to position estimates.  The work is based on position and radio measurement data from a 5G testbed. The transmission point (TP) in the testbed has an antenna with beams in both horizontal and vertical layers. The measurements are the beam reference signal received power (BRSRP) from the beams and the direction of departure (DOD) from the set of beams with the highest received signal strength (RSS). For modelling the relation between measurements and positions, two non-linear models have been considered: neural network and random forest models. These non-linear models will be referred to as machine learning algorithms.  The machine learning algorithms are able to position the user equipment (UE) in NLOS regions with a horizontal positioning error of less than 10 meters in 80 percent of the test cases. The results also show that it is essential to combine information from beams in the different vertical antenna layers to be able to perform positioning with high accuracy under NLOS conditions. Further, the tests show that the data must be separated into line-of-sight (LOS) and NLOS data before the training of the machine learning algorithms to achieve good positioning performance under both LOS and NLOS conditions. Therefore, a generalized likelihood ratio test (GLRT) to classify data originating from LOS or NLOS conditions has been developed. The probability of detection of the algorithm is about 90% when the probability of false alarm is only 5%.  To boost the position accuracy of the machine learning algorithms, a Kalman filter has been developed with the output from the machine learning algorithms as input. Results show that this can improve the position accuracy in NLOS scenarios significantly.
Radiobasserad positionering av användarenheter är en viktig applikation i femte generationens (5G) radionätverk, som mycket tid och pengar läggs på för att utveckla och förbättra. Ett exempel på tillämpningsområde är positionering av nödsamtal, där ska användarenheten kunna positioneras med en noggrannhet på ett tiotal meter. Radio basserad positionering har alltid varit utmanande i stadsmiljöer där höga hus skymmer och reflekterar signalen mellan användarenheten och basstationen. En ide att positionera i dessa utmanande stadsmiljöer är att använda datadrivna modeller tränade av algoritmer baserat på positionerat testdata – så kallade maskininlärningsalgoritmer. I detta arbete har två icke-linjära modeller - neurala nätverk och random forest – bli implementerade och utvärderade för positionering av användarenheter där signalen från basstationen är skymd. Dessa modeller refereras som maskininlärningsalgoritmer. Utvärderingen har gjorts på data insamlad av Ericsson från ett 5G-prototypnätverk lokaliserat i Kista, Stockholm. Antennen i den basstation som används har 48 lober vilka ligger i fem olika vertikala lager. Insignal och målvärdena till maskininlärningsalgoritmerna är signals styrkan för varje stråle (BRSRP), respektive givna GPS-positioner för användarenheten. Resultatet visar att med dessa maskininlärningsalgoritmer positioneras användarenheten med en osäkerhet mindre än tio meter i 80 procent av försöksfallen. För att kunna uppnå dessa resultat är viktigt att kunna detektera om signalen mellan användarenheten och basstationen är skymd eller ej. För att göra det har ett statistiskt test blivit implementerat. Detektionssannolikhet för testet är över 90 procent, samtidigt som sannolikhet att få falskt alarm endast är ett fåtal procent. För att minska osäkerheten i positioneringen har undersökningar gjorts där utsignalen från maskininlärningsalgoritmerna filtreras med ett Kalman-filter. Resultat från dessa undersökningar visar att Kalman-filtret kan förbättra presitionen för positioneringen märkvärt.
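A minimal sketch of the random forest variant, assuming scikit-learn and purely synthetic stand-in data (the real BRSRP measurements and ground-truth positions come from the Ericsson testbed and are not reproduced here):

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Hypothetical beam data: one BRSRP value per beam (48 beams) and a 2-D ground-truth position
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 48))           # stand-in for measured BRSRP per beam
y = rng.uniform(0, 500, size=(1000, 2))   # stand-in for (east, north) positions in metres

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
err = np.linalg.norm(rf.predict(X_te) - y_te, axis=1)
print(np.percentile(err, 80))   # 80th percentile of horizontal error, the figure quoted above

The percentile of the horizontal error is the kind of summary reported in the abstract; with real beam data it would be computed separately for LOS and NLOS test cases.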
APA, Harvard, Vancouver, ISO, and other styles
43

Rocha, Gilson Silvério da. "Modelos lineares mistos para dados longitudinais em ensaio fatorial com tratamento adicional." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/11/11134/tde-14122015-174119/.

Full text
Abstract:
Em experimentos agronômicos são comuns ensaios planejados para estudar determinadas culturas por meio de múltiplas mensurações realizadas na mesma unidade amostral ao longo do tempo, espaço, profundidade entre outros. Essa forma com que as mensurações são coletadas geram conjuntos de dados que são chamados de dados longitudinais. Nesse contexto, é de extrema importância a utilização de metodologias estatísticas que sejam capazes de identificar possíveis padrões de variação e correlação entre as mensurações. A possibilidade de inclusão de efeitos aleatórios e de modelagem das estruturas de covariâncias tornou a metodologia de modelos lineares mistos uma das ferramentas mais apropriadas para a realização desse tipo de análise. Entretanto, apesar de todo o desenvolvimento teórico e computacional, a utilização dessa metodologia em delineamentos mais complexos envolvendo dados longitudinais e tratamentos adicionais, como os utilizados na área de forragicultura, ainda é passível de estudos. Este trabalho envolveu o uso do diagrama de Hasse e da estratégia top-down na construção de modelos lineares mistos no estudo de cortes sucessivos de forragem provenientes de um experimento de adubação com boro em alfafa (Medicago sativa L.) realizado no campo experimental da Embrapa Pecuária Sudeste. Primeiramente, considerou-se uma abordagem qualitativa para todos os fatores de estudo e devido à complexidade do delineamento experimental optou-se pela construção do diagrama de Hasse. A incorporação de efeitos aleatórios e seleção de estruturas de covariâncias para os resíduos foram realizadas com base no teste da razão de verossimilhanças calculado a partir de parâmetros estimados pelo método da máxima verossimilhança restrita e nos critérios de informação de Akaike (AIC), Akaike corrigido (AICc) e bayesiano (BIC). Os efeitos fixos foram testados por meio do teste Wald-F e, devido aos efeitos significativos das fontes de variação associadas ao fator longitudinal, desenvolveu-se um estudo de regressão. A construção do diagrama de Hasse foi fundamental para a compreensão e visualização simbólica do relacionamento de todos os fatores presentes no estudo, permitindo a decomposição das fontes de variação e de seus graus de liberdade, garantindo que todos os testes fossem realizados corretamente. A inclusão de efeito aleatório associado à unidade experimental foi essencial para a modelagem do comportamento de cada unidade e a estrutura de componentes de variância com heterogeneidade, incorporada aos resíduos, foi capaz de modelar eficientemente a heterogeneidade de variâncias presente nos diferentes cortes da cultura da alfafa. A verificação do ajuste foi realizada por meio de gráficos de diagnósticos de resíduos. O estudo de regressão permitiu avaliar a produtividade de matéria seca da parte aérea da planta (kg ha-1) de cortes consecutivos da cultura da alfafa, envolvendo a comparação de adubações com diferentes fontes e doses de boro. Os melhores resultados de produtividade foram observados para a combinação da fonte ulexita com as doses 3, 6 e 9 kg ha-1 de boro.
Assays designed to study crops through multiple measurements performed on the same sample unit along time, space, depth and so on are frequently adopted in agronomic experiments. This type of measurement gives rise to datasets called longitudinal data, for which the use of statistical procedures capable of identifying possible patterns of variation and correlation among the measurements is of great importance. The possibility of including random effects and modeling covariance structures makes the methodology of mixed linear models one of the most appropriate tools for performing this type of analysis. However, despite all the theoretical and computational development, the use of such methodology in more complex designs involving longitudinal data and additional treatments, such as those used in forage crops, still needs to be studied. The present work covered the use of the Hasse diagram and the top-down strategy in building mixed linear models for the study of successive forage cuts from an experiment involving boron fertilization in alfalfa (Medicago sativa L.) carried out in the field area of Embrapa Southeast Livestock. First, we considered a qualitative approach for all study factors and, due to the complexity of the design, we chose to build the Hasse diagram. The inclusion of random effects and the selection of covariance structures for the residuals were performed based on the likelihood ratio test, calculated from parameters estimated through the restricted maximum likelihood method, and on the Akaike information criterion (AIC), the corrected Akaike information criterion (AICc) and the Bayesian information criterion (BIC). The fixed effects were analyzed through the Wald-F test and, due to the significant effects of the variation sources associated with the longitudinal factor, a regression study was performed. Building the Hasse diagram was essential for understanding and symbolically displaying the relationships among all factors present in the study, allowing the variation sources and their degrees of freedom to be decomposed and assuring that all tests were correctly performed. The inclusion of a random effect associated with the sample unit was essential for modeling the behavior of each unit. Furthermore, the structure of variance components with heterogeneity, added to the residuals, was capable of modeling efficiently the heterogeneity of variances present in the different cuts of the alfalfa crop. The fit was checked by residual diagnostic plots. The regression study allowed us to evaluate the productivity of shoot dry matter (kg ha-1) over successive cuts of the alfalfa crop, involving the comparison of fertilization with different boron sources and doses. We observed the best productivity for the combination of the source ulexite with doses of 3, 6 and 9 kg ha-1 of boron.
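As a rough illustration of this model-building workflow (assuming statsmodels and a hypothetical long-format file alfalfa_cuts.csv with columns dm, cut_order, dose, source and plot; this is not the thesis's code), one could compare a random-intercept fit with a random-intercept-and-slope fit by a REML likelihood ratio test:

import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Hypothetical long-format data: dry-matter yield (dm) per plot across successive cuts
df = pd.read_csv("alfalfa_cuts.csv")

m0 = smf.mixedlm("dm ~ C(source) * dose + cut_order", df, groups=df["plot"]).fit(reml=True)
m1 = smf.mixedlm("dm ~ C(source) * dose + cut_order", df, groups=df["plot"],
                 re_formula="~cut_order").fit(reml=True)

# REML-based LR test for the extra random-effect terms (fixed effects identical in both fits);
# the chi-square(2) reference is only a rough guide since a variance sits on the boundary under H0.
lr = 2 * (m1.llf - m0.llf)
print(lr, stats.chi2.sf(lr, df=2))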
APA, Harvard, Vancouver, ISO, and other styles
44

Urbano, Simone. "Detection and diagnostic of freeplay induced limit cycle oscillation in the flight control system of a civil aircraft." Thesis, Toulouse, INPT, 2019. http://www.theses.fr/2019INPT0023/document.

Full text
Abstract:
Cette étude est le résultat d’une thèse CIFRE de trois ans entre le bureau d’étude d’Airbus (domaine du contrôle de l’avion) et le laboratoire TéSA à Toulouse. L’objectif principal est de proposer, développer et valider une solution logicielle pour la détection et le diagnostic d’un type spécifique de vibrations des gouvernes de profondeur et direction, appelée oscillation en cycle limite (limit cycle oscillation ou LCO en anglais), basée sur les signaux existants dans les avions civils. LCO est un terme mathématique générique définissant un mode périodique indépendant de conditions initiales et se produisant dans des systèmes non linéaires non conservatifs. Dans cette étude, nous nous intéressons au phénomène de LCO induit par les jeux mécaniques dans les gouvernes d’un avion civil. Les conséquences du LCO sont l’augmentation locale de la charge structurelle, la dégradation des qualités de vol, la réduction de la durée de vie de l’actionneur, la dégradation du confort du poste de pilotage et de la cabine, ainsi que l’augmentation des coûts de maintenance. L’état de l’art en matière de détection et de diagnostic du LCO induit par le jeu mécanique est basé sur la sensibilité du pilote aux vibrations et sur le contrôle périodique du jeu sur les gouvernes. Cette étude propose une solution basée sur les données (issues de la boucle d’asservissement des actionneurs qui agissent sur les gouvernes) pour aider au diagnostic du LCO et à l’isolement du jeu mécanique. L’objectif est d’améliorer encore plus la disponibilité des avions et de réduire les coûts de maintenance en fournissant aux compagnies aériennes un signal de contrôle pour le LCO et les jeux mécaniques. Pour cette raison, deux solutions algorithmiques pour le diagnostic des vibrations et des jeux ont été proposées. Un détecteur en temps réel pour la détection du LCO est tout d’abord proposé basé sur la théorie du rapport de vraisemblance généralisé (generalized likelihood ratio test ou GLRT en anglais). Certaines variantes et simplifications sont également proposées pour satisfaire les contraintes industrielles. Un détecteur de jeu mécanique est introduit basé sur l’identification d’un modèle de Wiener. Des approches paramétrique (estimateur de maximum de vraisemblance) et non paramétrique (régression par noyau) sont explorées, ainsi que certaines variantes des méthodes non paramétriques. En particulier, le problème de l’estimation d’un cycle d’hystérésis (choisi comme la non-linéarité de sortie d’un modèle de Wiener) est abordé. Ainsi, les problèmes avec et sans contraintes sont étudiés. Une analyse théorique, numérique (sur simulateur) et expérimentale (données de vol et laboratoire) est réalisée pour étudier les performances des détecteurs proposés et pour identifier les limitations et la faisabilité industrielle. Les résultats numériques et expérimentaux obtenus confirment que le GLRT proposé (et ses variantes / simplifications) est une méthode très efficace pour le diagnostic du LCO en termes de performance, robustesse et coût calculatoire. D’autre part, l’algorithme de diagnostic des jeux mécaniques est capable de détecter des niveaux de jeu relativement importants, mais il ne fournit pas de résultats cohérents pour des niveaux de jeu relativement faibles. En outre, des types d’entrée spécifiques sont nécessaires pour garantir des résultats répétitifs et cohérents. 
Des études complémentaires pourraient être menées afin de comparer les résultats de GLRT avec une approche Bayésienne et pour approfondir les possibilités et les limites de la méthode paramétrique proposée pour l’identification du modèle de Wiener
This research study is the result of a 3-year CIFRE PhD thesis between the Airbus design office (Aircraft Control domain) and the TéSA laboratory in Toulouse. The main goal is to propose, develop and validate a software solution for the detection and diagnosis of a specific type of elevator and rudder vibration, called limit cycle oscillation (LCO), based on existing signals available in flight control computers on board in-series aircraft. LCO is a generic mathematical term defining an initial-condition-independent periodic mode occurring in nonconservative nonlinear systems. This study focuses on the LCO phenomenon induced by mechanical freeplays in the control surface of a civil aircraft. The LCO consequences are local structural load augmentation, flight handling qualities deterioration, actuator operational life reduction, cockpit and cabin comfort deterioration and maintenance cost augmentation. The state of the art for freeplay-induced LCO detection and diagnosis is based on pilot sensitivity to vibration and on periodic freeplay checks on the control surfaces. This study proposes a data-driven solution to help LCO and freeplay diagnosis. The goal is to improve aircraft availability even further and reduce maintenance costs by providing airlines with a condition monitoring signal for LCO and freeplays. For this reason, two algorithmic solutions for vibration and freeplay diagnosis are investigated in this PhD thesis. A real-time detector for LCO diagnosis is first proposed based on the theory of the generalized likelihood ratio test (GLRT). Some variants and simplifications are also proposed to be compliant with the industrial constraints. In the second part of this work, a mechanical freeplay detector is introduced based on the theory of Wiener model identification. Parametric (maximum likelihood estimator) and nonparametric (kernel regression) approaches are investigated, as well as some variants of well-known nonparametric methods. In particular, the problem of hysteresis cycle estimation (as the output nonlinearity of a Wiener model) is tackled. Moreover, the constrained and unconstrained problems are studied. A theoretical, numerical (simulator) and experimental (flight data and laboratory) analysis is carried out to investigate the performance of the proposed detectors and to identify limitations and industrial feasibility. The obtained numerical and experimental results confirm that the proposed GLR test (and its variants/simplifications) is a very appealing method for LCO diagnosis in terms of performance, robustness and computational cost. On the other hand, the proposed freeplay diagnostic algorithm is able to detect relatively large freeplay levels, but it does not provide consistent results for relatively small freeplay levels. Moreover, specific input types are needed to guarantee repetitive and consistent results. Further studies should be carried out in order to compare the GLRT results with a Bayesian approach and to investigate more deeply the possibilities and limitations of the proposed parametric method for Wiener model identification.
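One of the nonparametric ingredients mentioned above, kernel regression, can be sketched with a plain Nadaraya-Watson estimator. The toy deadzone-type nonlinearity below only stands in for a freeplay characteristic; this is a generic illustration, not the thesis's constrained hysteresis estimator.

import numpy as np

def nadaraya_watson(x_train, y_train, x_eval, bandwidth):
    """Nadaraya-Watson kernel regression estimate with a Gaussian kernel."""
    d = (x_eval[:, None] - x_train[None, :]) / bandwidth
    w = np.exp(-0.5 * d ** 2)
    return (w * y_train).sum(axis=1) / w.sum(axis=1)

# Toy static nonlinearity reminiscent of a freeplay (deadzone) characteristic, plus noise
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 300)
y = np.where(np.abs(x) < 0.2, 0.0, x - 0.2 * np.sign(x)) + 0.05 * rng.normal(size=300)
grid = np.linspace(-1, 1, 50)
print(nadaraya_watson(x, y, grid, bandwidth=0.1)[:5])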
APA, Harvard, Vancouver, ISO, and other styles
45

Lee, Chang. "MITIGATION of BACKGROUNDS for the LARGE UNDERGROUND XENON DARK MATTER EXPERIMENT." Case Western Reserve University School of Graduate Studies / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=case1427482791.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Magalhães, Felipe Henrique Alves. "Testes em modelos weibull na forma estendida de Marshall-Olkin." Universidade Federal do Rio Grande do Norte, 2011. http://repositorio.ufrn.br:8080/jspui/handle/123456789/18639.

Full text
Abstract:
Universidade Federal do Rio Grande do Norte
In survival analysis, the response is usually the time until the occurrence of an event of interest, called the failure time. The main characteristic of survival data is the presence of censoring, which is a partial observation of the response. Associated with this information, some models occupy an important position because they properly fit several practical situations, among which we can mention the Weibull model. Marshall-Olkin extended-form distributions offer a generalization of basic distributions that allows greater flexibility in fitting lifetime data. This work presents a simulation study that compares the gradient test and the likelihood ratio test using the Weibull distribution in its Marshall-Olkin extended form. As a result, only a small advantage is found for the likelihood ratio test.
Em análise de sobrevivência, a variável resposta é, geralmente, o tempo até a ocorrência de um evento de interesse, denominado tempo de falha, e a principal característica de dados de sobrevivência é a presença de censura, que é a observação parcial da resposta. Associados a essas informações, alguns modelos ocupam uma posição de destaque por sua comprovada adequação a várias situações práticas, entre os quais é possível citar o modelo Weibull. Distribuições na forma estendida de Marshall-Olkin oferecem uma generalização de distribuições básicas que permitem uma flexibilidade maior no ajuste de dados de tempo de vida. Este trabalho apresenta um estudo de simulação que compara duas estatísticas de teste, a da Razão de Verossimilhanças e a Gradiente, utilizando a distribuição Weibull em sua forma estendida de Marshall-Olkin. Como resultado, verifica-se apenas uma pequena vantagem para a estatística da Razão de Verossimilhanças.
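For reference, with a baseline Weibull survival function \(\bar F(t)=\exp\{-(t/\lambda)^{k}\}\), the Marshall-Olkin extended family considered here adds a tilt parameter \(\alpha>0\) through (standard textbook form; the thesis's parametrization may differ):

\[
\bar G(t;\alpha,\lambda,k)=\frac{\alpha\,\bar F(t)}{1-(1-\alpha)\,\bar F(t)},\qquad t>0,
\]

which reduces to the ordinary Weibull distribution when \(\alpha=1\).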
APA, Harvard, Vancouver, ISO, and other styles
47

Lemonte, Artur Jose. "Estatística gradiente e refinamento de métodos assintóticos no modelo de regressão Birnbaum-Saunders." Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-26102010-123617/.

Full text
Abstract:
Rieck & Nedelman (1991) propuseram um modelo de regressão log-linear tendo como base a distribuição Birnbaum-Saunders (Birnbaum & Saunders, 1969a). O modelo proposto pelos autores vem sendo bastante explorado e tem se mostrado uma ótima alternativa a outros modelos propostos na literatura, como por exemplo, os modelos de regressão Weibull, gama e lognormal. No entanto, até o presente momento, não existe nenhum estudo tratando de refinamentos para as estatísticas da razão de verossimilhanças e escore nesta classe de modelos de regressão. Assim, um dos objetivos desta tese é obter um fator de correção de Bartlett para a estatística da razão de verossimilhanças e um fator de correção tipo-Bartlett para a estatística escore nesse modelo. Estes ajustes melhoram a aproximação da distribuição nula destas estatísticas pela distribuição qui-quadrado de referência. Adicionalmente, objetiva-se obter ajustes para a estatística da razão de verossimilhanças sinalizada. Tais ajustes melhoram a aproximação desta estatística pela distribuição normal padrão. Recentemente, uma nova estatística de teste foi proposta por Terrell (2002), a qual o autor denomina estatística gradiente. Esta estatística foi derivada a partir da estatística escore e da estatística de Wald modificada (Hayakawa & Puri, 1985). A combinação daquelas duas estatísticas resulta em uma estatística muito simples de ser calculada, não envolvendo, por exemplo, nenhum cálculo matricial como produto e inversa de matrizes. Esta estatística foi recentemente citada por Rao (2005): \"The suggestion by Terrell is attractive as it is simple to compute. It would be of interest to investigate the performance of the [gradient] statistic.\" Caminhando na direção da sugestão de Rao, outro objetivo da tese é obter uma expansão assintótica para a distribuição da estatística gradiente sob uma sequência de alternativas de Pitman convergindo para a hipótese nula a uma taxa de convergência de n^{-1/2} utilizando a metodologia desenvolvida por Peers (1971) e Hayakawa (1975). Em particular, mostramos que, até ordem n^{-1/2}, a estatística gradiente segue distribuição qui-quadrado central sob a hipótese nula e distribuição qui-quadrado não central sob a hipótese alternativa. Também temos como objetivo comparar o poder local deste teste com o poder local dos testes da razão de verossimilhanças, de Wald e escore. Finalmente, aplicaremos a expansão assintótica derivada na tese em algumas classes particulares de modelos.
The Birnbaum-Saunders regression model is commonly used in reliability studies. We address the issue of performing inference in this class of models when the number of observations is small. Our simulation results suggest that the likelihood ratio and score tests tend to be liberal when the sample size is small. We derive Bartlett and Bartlett-type correction factors which reduce the size distortion of the tests. Additionally, we also consider modified signed log-likelihood ratio statistics in this class of models. Finally, the asymptotic expansion of the distribution of the gradient test statistic is derived for a composite hypothesis under a sequence of Pitman alternative hypotheses converging to the null hypothesis at rate n^{-1/2}, n being the sample size. Comparisons of the local powers of the gradient, likelihood ratio, Wald and score tests reveal no uniform superiority property.
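For reference, Terrell's gradient statistic studied in the thesis has the simple form (textbook notation):

\[
S_T = U(\tilde\theta)^{\top}\,(\hat\theta-\tilde\theta),
\]

where \(U\) is the score function, \(\hat\theta\) the unrestricted and \(\tilde\theta\) the restricted maximum likelihood estimates; it requires neither the information matrix nor any matrix inversion, and it shares the first-order asymptotic chi-square null distribution of the likelihood ratio, Wald and score statistics.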
APA, Harvard, Vancouver, ISO, and other styles
48

Vong, Camille. "Model-Based Optimization of Clinical Trial Designs." Doctoral thesis, Uppsala universitet, Institutionen för farmaceutisk biovetenskap, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-233445.

Full text
Abstract:
General attrition rates in the drug development pipeline have made it necessary to shift gears towards new methodologies that allow earlier and correct decisions, and the optimal use of all information accrued throughout the process. The quantitative science of pharmacometrics, using pharmacokinetic-pharmacodynamic models, was identified as one of the strategies core to this renaissance. Coupled with Optimal Design (OD), they together constitute an attractive toolkit to usher new agents to marketing approval more rapidly and successfully. The general aim of this thesis was to investigate how the use of novel pharmacometric methodologies can improve the design and analysis of clinical trials within drug development. The implementation of a Monte-Carlo Mapped Power method made it possible to rapidly generate multiple hypotheses and to adequately compute the corresponding sample size within 1% of the time usually necessary in more traditional model-based power assessment. Allowing statistical inference across all available data and the integration of mechanistic interpretation of the models, the performance of this new methodology in proof-of-concept and dose-finding trials highlighted the possibility of drastically reducing the number of healthy volunteers and patients exposed to experimental drugs. This thesis furthermore addressed the benefits of OD in planning trials with bioanalytical limits and toxicity constraints, through the development of novel optimality criteria that foremost pinpoint information and safety aspects. The use of these methodologies showed better estimation properties and robustness for the ensuing data analysis and reduced the number of patients exposed to severe toxicity by 7-fold.  Finally, predictive tools for maximum tolerated dose selection in Phase I oncology trials were explored for a combination therapy characterized by a main dose-limiting hematological toxicity. In this example, Bayesian and model-based approaches provided the incentive for a paradigm change away from the traditional rule-based "3+3" design algorithm. Throughout this thesis several examples have shown the possibility of streamlining clinical trials with more model-based design and analysis support. Ultimately, efficient use of the data can elevate the probability of a successful trial and strengthen ethical conduct, which is paramount.
APA, Harvard, Vancouver, ISO, and other styles
49

Gayet-Ageron, Angèle. "L’utilisation de la technique d’amplification de Treponema pallidum dans le diagnostic des ulcères oro-génitaux liés à la syphilis." Thesis, Paris 11, 2015. http://www.theses.fr/2015PA11T005/document.

Full text
Abstract:
CONTEXTE La syphilis est une maladie ré-émergente depuis 2000. Son traitement est simple, mais son diagnostic est complexe. La technique d’amplification génique de Treponema pallidum (Tp-PCR) existe depuis 1990 mais le CDC l’a incluse dans sa définition de cas en janvier 2014. OBJECTIFS 1) Evaluer la performance diagnostique de la Tp-PCR à différents stades cliniques et milieux biologiques. 2) Mesurer la sensibilité, spécificité et les valeurs prédictives de la Tp-PCR en fonction de 3 groupes de référence dans des ulcères récents. 3) Comparer les performances des 2 principales cibles de Tp-PCR.MATÉRIEL ET MÉTHODES Premièrement, une revue systématique et méta-analyse des études publiées depuis 1990 ont été menées. Ensuite une étude multicentrique prospective a été conduite dans 5 villes européennes pendant 2 ans chez des patients avec un ulcère oro-génital. Tous ont reçu le test de référence local et 2 Tp-PCRs dans l’ulcère (gène tpp47 vs. polA). Les valeurs de sensibilité, spécificité et valeurs prédictives de la Tp-PCR ont été calculées comparativement au fond noir (FN), à la sérologie et à un gold standard amélioré. La concordance des 2 cibles a été évaluée par un coefficient kappa.RÉSULTATS PRINCIPAUX La méta-analyse conclut que la Tp-PCR a une meilleure performance dans les ulcères récents. L’étude clinique montre que la Tp-PCR décrit une meilleure performance comparativement au gold standard amélioré et a même une meilleure sensibilité que le FN. Les 2 cibles ont la même valeur diagnostique et une concordance quasi parfaite. CONCLUSIONS La Tp-PCR ciblant tpp47 ou polA est cliniquement utile pour diagnostiquer une syphilis primaire et pourrait même remplacer le FN sous certaines conditions
BACKGROUND Syphilis has re-emerged in at-risk populations since 2000. Although the treatment of syphilis is simple, its diagnosis remains challenging. Treponema pallidum polymerase chain reaction (Tp-PCR) has been used in the diagnosis of syphilis since 1990, but it has been included in the CDC case definition only since January 2014. OBJECTIVES 1) To assess the accuracy of Tp-PCR in various biological specimens and syphilis stages. 2) To measure its diagnostic performance (sensitivity, specificity and predictive values) in ulcers from early syphilis compared to three groups of reference. 3) To compare the accuracy of the two most currently used targets: the tpp47 and polA genes. METHODS We conducted a systematic review and meta-analysis of all studies published from 1990. We implemented a multicentre, prospective, observational study in 5 European cities between 09/2011 and 09/2013 among patients with an oral or genital ulcer suggestive of syphilis. All patients were tested with traditional reference tests plus 2 Tp-PCRs (tpp47 and polA). We estimated the sensitivity, specificity and predictive values of Tp-PCR compared to darkfield microscopy (DFM), serology and an enhanced gold standard. We used the kappa coefficient to assess the agreement between the 2 targets. MAIN RESULTS Tp-PCR had the best accuracy in ulcers from early syphilis. Tp-PCR performed better when compared to the enhanced gold standard and had a higher sensitivity than DFM. The 2 Tp-PCRs had similar accuracy and almost perfect agreement. CONCLUSIONS Tp-PCR targeting either tpp47 or polA is clinically useful to confirm early syphilis in smears and could even replace DFM under specific conditions.
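The performance measures used in this work are the standard ones. A minimal Python sketch, with purely hypothetical counts and binary results rather than the study's data, computes them from a 2x2 classification table and from paired target results:

import numpy as np

def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity and predictive values from a 2x2 table."""
    se = tp / (tp + fn)
    sp = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return se, sp, ppv, npv

def cohen_kappa(a, b):
    """Agreement between two binary tests (e.g. the tpp47 and polA Tp-PCR targets)."""
    a, b = np.asarray(a), np.asarray(b)
    po = np.mean(a == b)
    pe = np.mean(a) * np.mean(b) + np.mean(1 - a) * np.mean(1 - b)
    return (po - pe) / (1 - pe)

# Hypothetical counts against an enhanced gold standard (illustrative only)
print(diagnostic_metrics(tp=80, fp=5, fn=12, tn=160))
print(cohen_kappa([1, 1, 0, 0, 1, 0], [1, 1, 0, 1, 1, 0]))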
APA, Harvard, Vancouver, ISO, and other styles
50

SILVA, Priscila Gonçalves da. "Inferência e diagnóstico em modelos não lineares Log-Gama generalizados." Universidade Federal de Pernambuco, 2016. https://repositorio.ufpe.br/handle/123456789/18637.

Full text
Abstract:
Young e Bakir (1987) propôs a classe de Modelos Lineares Log-Gama Generalizados (MLLGG) para analisar dados de sobrevivência. No nosso trabalho, estendemos a classe de modelos propostapor Young e Bakir (1987) permitindo uma estrutura não linear para os parâmetros de regressão. A nova classe de modelos é denominada como Modelos Não Lineares Log-Gama Generalizados (MNLLGG). Com o objetivo de obter a correção de viés de segunda ordem dos estimadores de máxima verossimilhança (EMV) na classe dos MNLLGG, desenvolvemos uma expressão matricial fechada para o estimador de viés de Cox e Snell (1968). Analisamos, via simulação de Monte Carlo, os desempenhos dos EMV e suas versões corrigidas via Cox e Snell (1968) e através da metodologia bootstrap (Efron, 1979). Propomos também resíduos e técnicas de diagnóstico para os MNLLGG, tais como: alavancagem generalizada, influência local e influência global. Obtivemos, em forma matricial, uma expressão para o fator de correção de Bartlett à estatística da razão de verossimilhanças nesta classe de modelos e desenvolvemos estudos de simulação para avaliar e comparar numericamente o desempenho dos testes da razão de verossimilhanças e suas versões corrigidas em relação ao tamanho e poder em amostras finitas. Além disso, derivamos expressões matriciais para os fatores de correção tipo-Bartlett às estatísticas escore e gradiente. Estudos de simulação foram feitos para avaliar o desempenho dos testes escore, gradiente e suas versões corrigidas no que tange ao tamanho e poder em amostras finitas.
Young and Bakir (1987) proposed the class of generalized log-gamma linear regression models (GLGLM) to analyze survival data. In our work, we extend the class of models proposed by Young and Bakir (1987) by considering a nonlinear structure for the regression parameters. The new class of models is called generalized log-gamma nonlinear regression models (GLGNLM). We also propose a matrix formula for the second-order bias of the maximum likelihood estimate of the regression parameter vector in the GLGNLM class. We use the results of Cox and Snell (1968) and the bootstrap technique (Efron, 1979) to obtain bias-corrected maximum likelihood estimates. Residuals and diagnostic techniques are proposed for the GLGNLM, such as generalized leverage and local and global influence. A general matrix expression is obtained for the Bartlett correction factor of the likelihood ratio statistic in this class of models. Simulation studies were developed to evaluate and compare numerically the performance of the likelihood ratio tests and their corrected versions with regard to size and power in finite samples. Furthermore, general matrix expressions are obtained for the Bartlett-type correction factors of the score and gradient statistics. Simulation studies were conducted to evaluate the performance of the score and gradient tests and their corrected versions with regard to size and power in finite samples.
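In generic terms, the two kinds of correction developed here take the forms (schematic textbook notation, not the thesis's closed-form matrix expressions):

\[
\check{\beta} = \hat{\beta} - \widehat{B}(\hat{\beta}), \qquad w^{*} = \frac{w}{c},
\]

where \(\widehat{B}(\hat{\beta})\) is the O(n^{-1}) bias vector obtained from the Cox and Snell (1968) formula evaluated at the maximum likelihood estimate, \(w\) is the likelihood ratio statistic and \(c\) is the Bartlett factor chosen so that the corrected statistic's null expectation matches the degrees of freedom of the reference chi-square distribution to a higher order of accuracy.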
APA, Harvard, Vancouver, ISO, and other styles