Dissertations / Theses on the topic 'Likelihood ratio test (LRT)'
Consult the top 50 dissertations / theses for your research on the topic 'Likelihood ratio test (LRT).'
Stoorhöök, Li, and Sara Artursson. "Hur påverkar avrundningar tillförlitligheten hos parameterskattningar i en linjär blandad modell?" Thesis, Uppsala universitet, Statistiska institutionen, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-279039.
Barton, William H. "COMPARISON OF TWO SAMPLES BY A NONPARAMETRIC LIKELIHOOD-RATIO TEST." UKnowledge, 2010. http://uknowledge.uky.edu/gradschool_diss/99.
Dai, Xiaogang. "Score Test and Likelihood Ratio Test for Zero-Inflated Binomial Distribution and Geometric Distribution." TopSCHOLAR®, 2018. https://digitalcommons.wku.edu/theses/2447.
Liang, Yi. "Likelihood ratio test for the presence of cured individuals : a simulation study /." Internet access available to MUN users only, 2002. http://collections.mun.ca/u?/theses,157472.
Emberson, E. A. "The asymptotic distribution and robustness of the likelihood ratio and score test statistics." Thesis, University of St Andrews, 1995. http://hdl.handle.net/10023/13738.
Yu, Yuan. "Tests of Independence in a Single 2x2 Contingency Table with Random Margins." Digital WPI, 2014. https://digitalcommons.wpi.edu/etd-theses/625.
Shen, Paul. "Empirical Likelihood Tests For Constant Variance In The Two-Sample Problem." Bowling Green State University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1544187568883762.
Ngunkeng, Grace. "Statistical Analysis of Skew Normal Distribution and its Applications." Bowling Green State University / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1370958073.
Yumin, Xiao. "Robustness of the Likelihood Ratio Test for Periodicity in Short Time Series and Application to Gene Expression Data." Thesis, Uppsala universitet, Statistiska institutionen, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-175807.
Lynch, O'Neil. "Mixture distributions with application to microarray data analysis." Scholar Commons, 2009. http://scholarcommons.usf.edu/etd/2075.
Williams, Matthew Richard. "Likelihood-based testing and model selection for hazard functions with unknown change-points." Diss., Virginia Tech, 2011. http://hdl.handle.net/10919/26835.
Full textPh. D.
Gottfridsson, Anneli. "Likelihood ratio tests of separable or double separable covariance structure, and the empirical null distribution." Thesis, Linköpings universitet, Matematiska institutionen, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-69738.
Eger, Karl-Heinz, and Evgeni Borisovich Tsoy. "Sequential probability ratio tests based on grouped observations." Universitätsbibliothek Chemnitz, 2010. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-201000938.
Full textBokharaiee, Najafee Simin. "Spectrum Sensing in Cognitive Radio Networks." IEEE Transactions on Vehicular Technology, 2011. http://hdl.handle.net/1993/24069.
Full textLiang, Yuli. "Contributions to Estimation and Testing Block Covariance Structures in Multivariate Normal Models." Doctoral thesis, Stockholms universitet, Statistiska institutionen, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-115347.
Full textTao, Jinxin. "Comparison Between Confidence Intervals of Multiple Linear Regression Model with or without Constraints." Digital WPI, 2017. https://digitalcommons.wpi.edu/etd-theses/404.
Full textLopez, Gabriel E. "Detection and Classification of DIF Types Using Parametric and Nonparametric Methods: A comparison of the IRT-Likelihood Ratio Test, Crossing-SIBTEST, and Logistic Regression Procedures." Scholar Commons, 2012. http://scholarcommons.usf.edu/etd/4131.
Full textChen, Xinyu. "Inference in Constrained Linear Regression." Digital WPI, 2017. https://digitalcommons.wpi.edu/etd-theses/405.
Full textHattaway, James T. "Parameter Estimation and Hypothesis Testing for the Truncated Normal Distribution with Applications to Introductory Statistics Grades." Diss., CLICK HERE for online access, 2010. http://contentdm.lib.byu.edu/ETD/image/etd3412.pdf.
Full textJotta, César Augusto Degiato. "Análise de variância multivariada nas estimativas dos parâmetros do modelo log-logístico para susceptibilidade do capim-pé-de-galinha ao glyphosate." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/11/11134/tde-29112016-163511/.
The national agricultural scene has become increasingly competitive over the years; maintaining productivity growth at a low operating cost and with low environmental impact has become essential, and weed control is one of the variables on which productivity depends. This work analyzes a dataset from an experiment conducted in the Plant Production Department of ESALQ-USP, Piracicaba - SP. Four goosegrass (capim-pé-de-galinha) biotypes from three Brazilian states were evaluated at three morphological stages, with 4 repetitions per biotype. The response variable was dry mass (g), and the regressor was the glyphosate dose, at concentrations ranging from 1/16 D to 16 D plus an untreated control, where D is 480 grams of glyphosate acid equivalent per hectare (g a.e. ha-1) for the 2-3 tiller stage, 720 g a.e. ha-1 for the 6-8 tiller stage, and 960 g a.e. ha-1 for the 10-12 tiller stage. The main objective of the work was to evaluate whether, over the years, goosegrass populations have become resistant to glyphosate, aiming at the detection of resistant biotypes. The experiment was conducted under a completely randomized design carried out in three stages. For the data analysis, the non-linear log-logistic model proposed in Knezevic and Ritz (2007) was used as the univariate method, and the maximum likelihood method was used to test the equality of the parameter e. The model converged for almost all repetitions, and no systematic behavior was observed that would explain the non-convergence of a particular repetition. Secondly, the estimates of the three model parameters were taken as dependent variables in a multivariate analysis of variance. Since all three together were significant by the Pillai, Wilks, Roy and Hotelling-Lawley tests, a Tukey test was performed for the same parameter e and compared with the first method.
At the same significance level, this procedure was less able to identify differences between the means of the parameters of the grass varieties than the method proposed by Regazzi (2015).
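The log-logistic dose-response fitting summarized above can be sketched with generic tools. The snippet below is an illustrative reconstruction on synthetic data, using the common four-parameter b/c/d/e form (lower limit c, upper limit d, slope b, and e the dose at the midpoint between the limits); it is not the thesis's code or data.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_logistic(dose, b, c, d, e):
    """Four-parameter log-logistic dose-response curve:
    slope b, lower limit c, upper limit d, midpoint dose e."""
    return c + (d - c) / (1.0 + (dose / e) ** b)

# Synthetic dry-mass data around a known curve (illustrative values only)
rng = np.random.default_rng(42)
doses = np.array([30, 60, 120, 240, 480, 960, 1920, 3840], dtype=float)
true = dict(b=2.0, c=0.1, d=4.0, e=480.0)
mass = log_logistic(doses, **true) + rng.normal(0.0, 0.05, doses.size)

# Fit the curve by non-linear least squares, starting from rough guesses
params, _ = curve_fit(log_logistic, doses, mass, p0=[1.0, 0.0, 5.0, 500.0])
b, c, d, e = params
```

Comparing the fitted e across biotypes (e.g. with a likelihood ratio test of equality, as in the abstract) is then a test of differing glyphosate sensitivity.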
Rettiganti, Mallikarjuna Rao. "Statistical Models for Count Data from Multiple Sclerosis Clinical Trials and their Applications." The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1291180207.
Price, Emily A. "Item Discrimination, Model-Data Fit, and Type I Error Rates in DIF Detection using Lord's χ2, the Likelihood Ratio Test, and the Mantel-Haenszel Procedure." Ohio University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1395842816.
Boulet, John R. "A Monte Carlo comparison of the Type I error rates of the likelihood ratio chi-square test statistic and Hotelling's two-sample T2 on testing the differences between group means." Thesis, University of Ottawa (Canada), 1990. http://hdl.handle.net/10393/5708.
Full textFlorez, Guillermo Domingo Martinez. "Extensões do modelo -potência." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-07072011-154259/.
In data analysis where the data present a certain degree of asymmetry, the assumption of normality can result in an unrealistic model whose application can hide important characteristics of the true model. Situations of this type have strengthened the use of asymmetric models, with special emphasis on the skew-symmetric distribution developed by Azzalini (1985). In this work we present an alternative for data analysis in the presence of significant asymmetry or kurtosis, when compared with the normal distribution, as well as other situations that involve such a model. We present and study the properties of the α-power and log-α-power distributions, where we also study the estimation problem, the observed and expected information matrices, and the degree of bias in estimation using simulation procedures. A flexible version of the α-power distribution is proposed, followed by an extension to a bimodal version. Next follows an extension of the Birnbaum-Saunders distribution using the α-power distribution, for which some properties are studied, estimation approaches are developed, and a bias-corrected estimator is derived. We also develop censored and uncensored regression for the α-power model and for the log-linear Birnbaum-Saunders regression model, for which model validation techniques are studied. Finally, a multivariate extension of the α-power model is proposed and some estimation procedures are investigated for this model. All the situations investigated are illustrated with applications using data sets previously analysed with other distributions.
Silva, Michel Ferreira da. "Estimação e teste de hipótese baseados em verossimilhanças perfiladas." Universidade de São Paulo, 2005. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-06122006-162733/.
The profile likelihood function is not a genuine likelihood function, and profile maximum likelihood estimators are typically inefficient and inconsistent. Additionally, the null distribution of the likelihood ratio test statistic can be poorly approximated by the asymptotic chi-squared distribution in finite samples when there are nuisance parameters. It is thus important to obtain adjustments to the likelihood function. Several authors, including Barndorff-Nielsen (1983, 1994), Cox and Reid (1987, 1992), McCullagh and Tibshirani (1990) and Stern (1997), have proposed modifications to the profile likelihood function. They are defined in such a way as to reduce the score and information biases. In this dissertation, we review several profile likelihood adjustments as well as approximations to the adjustments proposed by Barndorff-Nielsen (1983, 1994), also described in Severini (2000a). We present derivations and the main properties of the different adjustments. We also obtain adjustments for likelihood-based inference in the two-parameter exponential family. Numerical results on estimation and testing are provided. We also consider models that do not belong to the two-parameter exponential family: the GA0(alpha, gamma, L) family, which is commonly used to model radar image data, and the Weibull model, which is useful for reliability studies, the latter under both uncensored and censored data. Again, extensive numerical results are provided. It is noteworthy that, in the context of the GA0(alpha, gamma, L) model, we have evaluated the approximation of the null distribution of the signed likelihood ratio statistic by the standard normal distribution. Additionally, we have obtained distributional results for the Weibull case concerning the maximum likelihood estimators and the likelihood ratio statistic, both for uncensored and censored data.
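The likelihood ratio test that recurs throughout this list can be illustrated with a minimal, self-contained example: testing a hypothesized exponential rate against the unrestricted maximum likelihood estimate, with the statistic referred to its asymptotic χ² distribution. The sample and threshold below are illustrative, not taken from any of the listed works.

```python
import math

def exp_loglik(rate, data):
    """Log-likelihood of an i.i.d. exponential sample with the given rate."""
    n = len(data)
    return n * math.log(rate) - rate * sum(data)

def lr_test(data, rate0):
    """Likelihood ratio statistic for H0: rate == rate0.
    The unrestricted MLE of the rate is n / sum(data)."""
    mle = len(data) / sum(data)
    lr = 2.0 * (exp_loglik(mle, data) - exp_loglik(rate0, data))
    return lr, mle

# Hypothetical sample; under H0 the statistic is asymptotically chi-squared
# with 1 degree of freedom, so values above 3.84 reject at the 5% level.
sample = [0.8, 1.3, 0.2, 2.1, 0.9, 1.7, 0.5, 1.1]
stat, mle = lr_test(sample, rate0=1.0)
```

The adjustments surveyed in the dissertation above address exactly the situations where this χ² approximation is unreliable (small samples, nuisance parameters).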
Araripe, Patricia Peres. "Análise de agrupamento de semeadoras manuais quanto à distribuição do número de sementes." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/11/11134/tde-06042016-181136/.
The manual planter is a tool that still plays an important role today in several countries around the world that practice family and conservation agriculture. Its use is important because it minimizes soil disturbance and labor requirements in the field, makes productivity more sustainable, among other factors. Several studies have been conducted to analyze and/or compare commercial manual planters, but considering only position and dispersion measures. This work presents an alternative method for comparing the performance of manual planters. The probabilities associated with each response category are estimated, and the hypothesis that these probabilities do not vary between planters, compared in pairs, is evaluated using the likelihood ratio test and the Bayes factor, in the classical and Bayesian paradigms respectively. Finally, the planters were grouped in a cluster analysis using the J-divergence as the distance measure. As an illustration of this methodology, the data from fifteen manual planters of different manufacturers, adjusted to deposit exactly two seeds per hit, analyzed by Molin, Menegatti and Gimenez (2001), were considered. Initially, in the classical approach, the planters without zero counts in the response categories were compared, and planters 3, 8 and 14 presented the best behavior. Then all the planters were compared in pairs, either grouping categories or adding the constant 0.5 or 1 to each response category. Grouping categories made it difficult to draw conclusions from the likelihood ratio test, beyond highlighting that planter 15 differs from the others. Adding 0.5 or 1 to each category apparently did not produce different groupings, except for planter 1, which by the test differed from the others and most frequently deposited two seeds, as required by the agronomic experiment and recommended in this work.
In the Bayesian approach, the Bayes factor was used to compare the planters in pairs, and the findings were similar to those obtained in the classical approach. Finally, the cluster analysis gave a better picture of the groups of mutually similar planters in both approaches, confirming the results obtained previously.
Chen, Liang. "Small population bias and sampling effects in stochastic mortality modelling." Thesis, Heriot-Watt University, 2017. http://hdl.handle.net/10399/3372.
Lehmann, Rüdiger, and Frank Neitzel. "Testing the compatibility of constraints for parameters of a geodetic adjustment model." Hochschule für Technik und Wirtschaft Dresden, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:520-qucosa-148609.
Geodetic adjustment models are often formulated in such a way that the model parameters must satisfy certain constraint equations. The normalized Lagrange multipliers have so far been used as a measure of the constraint exerted, in the sense that if one of them exceeds a certain threshold in magnitude, the corresponding constraint equation is assumed to be incompatible with the observations and the remaining constraint equations. We show that this and similar measures can be derived as test statistics of a likelihood ratio test of the statistical hypothesis that some constraint equations are incompatible in this sense. This has previously been done only for special constraint equations (Teunissen in Optimization and Design of Geodetic Networks, pp. 526–547, 1985). We start from the simplest case, in which the entire set of constraint equations must be tested, and proceed to the more advanced problem of testing each constraint equation individually. Each test is worked out both for a known and for an unknown a priori variance factor. The corresponding distributions are derived under both the null and the alternative hypothesis. The theory is illustrated with the example of a double levelling line.
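As a rough illustration of the constraint-compatibility testing described in this entry, the sketch below implements only the standard special case of a linear Gauss-Markov model with a known variance factor, where the statistic for a linear constraint Cβ = d is chi-squared under the null; the design matrix, constraint and numbers are invented, and this is not the paper's general derivation.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 3))           # design matrix of the adjustment model
beta_true = np.array([1.0, 2.0, 3.0])
sigma = 0.1                            # known a priori standard deviation
y = A @ beta_true + rng.normal(0.0, sigma, 50)

# Constraint to test for compatibility: beta_0 + beta_1 - 3 = 0
# (satisfied by beta_true, so H0 is true here)
C = np.array([[1.0, 1.0, 0.0]])
d = np.array([3.0])

# Unconstrained least-squares solution and its cofactor matrix
beta_hat = np.linalg.lstsq(A, y, rcond=None)[0]
N_inv = np.linalg.inv(A.T @ A)

# Chi-squared statistic of the constraint misclosure
r = C @ beta_hat - d
T = float(r @ np.linalg.solve(C @ N_inv @ C.T, r)) / sigma**2
# Under H0, T ~ chi-squared with 1 degree of freedom (95% quantile ~3.84)
```

The paper's contribution is to extend this type of test to whole sets of constraints, to individual constraints within a set, and to the case of an unknown variance factor.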
Lehmann, Rüdiger, and Frank Neitzel. "Testing the compatibility of constraints for parameters of a geodetic adjustment model." Springer Verlag, 2013. https://htw-dresden.qucosa.de/id/qucosa%3A23273.
Sheppard, Therese. "Extending covariance structure analysis for multivariate and functional data." Thesis, University of Manchester, 2010. https://www.research.manchester.ac.uk/portal/en/theses/extending-covariance-structure-analysis-for-multivariate-and-functional-data(e2ad7f12-3783-48cf-b83c-0ca26ef77633).html.
Full textTrachi, Youness. "On induction machine faults detection using advanced parametric signal processing techniques." Thesis, Brest, 2017. http://www.theses.fr/2017BRES0103/document.
This Ph.D. thesis aims to develop reliable and cost-effective condition monitoring and fault detection architectures for induction machines. These architectures are mainly based on advanced parametric signal processing techniques. To analyze and detect faults, a parametric stator current model under stationary conditions has been considered, assumed to consist of multiple sinusoids with unknown parameters in noise. This model has been estimated using parametric techniques such as subspace spectral estimators and the maximum likelihood estimator. A fault severity criterion based on the estimation of the stator current frequency component amplitudes has also been proposed to determine the failure level of the induction machine. A novel fault detector based on hypothesis testing has also been proposed, built mainly on the generalized likelihood ratio test with unknown signal and noise parameters. The proposed parametric techniques have been evaluated using experimental stator current signals from induction machines under the two considered faults: bearing faults and broken rotor bars. Experimental results show the effectiveness and detection ability of the proposed parametric techniques.
Morel, Guy. "Procédures statistiques pour espace de décisions totalement ordonné et famille de lois à vraisemblance monotone." Rouen, 1987. http://www.theses.fr/1987ROUES009.
Full textVasquez, Emilie. "Techniques statistiques de détection de cibles dans des images infrarouges inhomogènes en milieu maritime." Thesis, Aix-Marseille 3, 2011. http://www.theses.fr/2011AIX30001.
Statistical detection techniques for point targets in the sky and resolved targets in the sea are developed for infrared surveillance system images. These techniques are adapted to the inhomogeneities present in this kind of image. They are based on the analysis of spatial information and allow the false alarm rate to be controlled in each image. For sky areas, a joint segmentation-detection technique adapted to spatial variations of the mean luminosity is developed and its performance improvement is analyzed. For sea areas, an edge detector with a constant false alarm rate in the presence of inhomogeneities and grey-level spatial correlations is developed and characterized. In each case, taking the inhomogeneities into account in these statistical algorithms is essential to control the false alarm rate and to improve the detection performance.
Rezagholi, Mahmoud. "The Effects of Technological Change on Productivity and Factor Demand in U.S. Apparel Industry 1958-1996 : An Econometric Analysis." Thesis, Uppsala University, Department of Economics, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-7659.
In this dissertation I study the effects of disembodied technical change on total factor productivity and input demand in the U.S. apparel industry during 1958-1996. A time-series input-output data set for the sector is employed to estimate an error-corrected model of a four-factor transcendental logarithmic cost function. The empirical results indicate an impact of technical change on total factor productivity at a rate of 9% on average. Technical progress has, in addition, a biased effect on factor augmentation in the sector.
Muller, Fernanda Maria. "MELHORAMENTOS INFERENCIAIS NO MODELO BETA-SKEW-T-EGARCH." Universidade Federal de Santa Maria, 2016. http://repositorio.ufsm.br/handle/1/8394.
The Beta-Skew-t-EGARCH model was recently proposed in the literature to model the volatility of financial returns. Inference on the model parameters is based on the maximum likelihood method. The maximum likelihood estimators have good asymptotic properties; however, in finite sample sizes they can be considerably biased. Monte Carlo simulations were used to evaluate the finite-sample performance of the point estimators. Numerical results indicated that the maximum likelihood estimators of some parameters are biased in sample sizes smaller than 3,000. Thus, bootstrap bias correction procedures were considered to obtain more accurate estimators in small samples. Better quality of forecasts was observed when the model with bias-corrected estimators was considered. In addition, we propose a likelihood ratio test to assist in the selection of the Beta-Skew-t-EGARCH model with one or two volatility components. The numerical evaluation of the two-component test showed distorted null rejection rates in sample sizes smaller than or equal to 1,000. To improve the performance of the proposed test in small samples, the bootstrap-based likelihood ratio test and the bootstrap Bartlett correction were considered. The bootstrap-based test exhibited the null rejection rates closest to the nominal values. The evaluation results of the two-component tests showed their practical usefulness. Finally, an application of the proposed methods to the log-returns of the German stock index was presented.
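The bootstrap bias correction discussed in this entry can be sketched generically. The following is a minimal illustration on the familiar downward-biased maximum likelihood variance estimator, not the Beta-Skew-t-EGARCH procedure itself; all numbers are illustrative.

```python
import random
import statistics

def mle_variance(xs):
    """Maximum likelihood variance estimate (divides by n, hence downward biased)."""
    m = statistics.fmean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def bootstrap_bias_corrected(xs, B=500, seed=1):
    """Bootstrap bias correction:
    corrected = theta_hat - (mean of bootstrap replicates - theta_hat)
              = 2 * theta_hat - mean of bootstrap replicates."""
    rng = random.Random(seed)
    theta_hat = mle_variance(xs)
    boot = [mle_variance(rng.choices(xs, k=len(xs))) for _ in range(B)]
    return 2.0 * theta_hat - statistics.fmean(boot)

# Small illustrative sample from a standard normal
gen = random.Random(7)
data = [gen.gauss(0.0, 1.0) for _ in range(30)]
raw = mle_variance(data)
corrected = bootstrap_bias_corrected(data)
```

Since the variance MLE is biased downward, the bootstrap-estimated bias is negative and the corrected estimate ends up above the raw one; the same resampling logic applies, at much greater cost, to the EGARCH-type estimators in the thesis.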
Yu, Jung-Suk. "Essays on Fine Structure of Asset Returns, Jumps, and Stochastic Volatility." ScholarWorks@UNO, 2006. http://scholarworks.uno.edu/td/431.
Full textRusso, Cibele Maria. ""Análise de um modelo de regressão com erros nas variáveis multivariado com intercepto nulo"." Universidade de São Paulo, 2006. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-01082006-214556/.
To analyze some characteristics of interest in a real odontological data set presented in Hadgu & Koch (1999), we propose the use of a multivariate null-intercept errors-in-variables regression model. This data set is composed of measurements of a dental plaque index (subject to measurement error), taken on volunteers who were randomized to two experimental mouth rinses (A and B) or a control mouth rinse. The measurements were taken on each individual before and after the use of the respective mouth rinse, at the beginning of the study, three months after baseline, and six months after baseline. In this case, a possible structure of dependency between the measurements taken on the same individual must be incorporated in the model. After presenting the statistical model, we obtain the maximum likelihood estimates of the parameters using the EM algorithm, and we test the hypotheses of interest using asymptotic tests (Wald, likelihood ratio and score). A simulation study is also presented to verify the behavior of these three test statistics, considering different sample sizes and different values for the parameters. Finally, we carry out a diagnostic study to identify possible influential observations in the model, considering the local influence approach proposed by Cook (1986) and the conformal normal curvature proposed by Poon & Poon (1999).
Gomes, Priscila da Silva. "Distribuição normal assimétrica para dados de expressão gênica." Universidade Federal de São Carlos, 2009. https://repositorio.ufscar.br/handle/ufscar/4530.
Full textFinanciadora de Estudos e Projetos
Microarrays technologies are used to measure the expression levels of a large amount of genes or fragments of genes simultaneously in diferent situations. This technology is useful to determine genes that are responsible for genetic diseases. A common statistical methodology used to determine whether a gene g has evidences to diferent expression levels is the t-test which requires the assumption of normality for the data (Saraiva, 2006; Baldi & Long, 2001). However this assumption sometimes does not agree with the nature of the analyzed data. In this work we use the skew-normal distribution described formally by Azzalini (1985), which has the normal distribution as a particular case, in order to relax the assumption of normality. Considering a frequentist approach we made a simulation study to detect diferences between the gene expression levels in situations of control and treatment through the t-test. Another simulation was made to examine the power of the t-test when we assume an asymmetrical model for the data. Also we used the likelihood ratio test to verify the adequability of an asymmetrical model for the data.
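The kind of simulation described in this entry, checking the behavior of the t-test when the data are actually skew-normal, can be sketched as follows; the sample size, skewness parameter and replication count are illustrative choices, not the thesis's settings.

```python
import numpy as np
from scipy import stats

def rejection_rate(skew_a, n=20, reps=1000, alpha=0.05, seed=123):
    """Monte Carlo null rejection rate of the two-sample t-test when
    both groups come from the same skew-normal distribution, so any
    rejection is a Type I error."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        x = stats.skewnorm.rvs(skew_a, size=n, random_state=rng)
        y = stats.skewnorm.rvs(skew_a, size=n, random_state=rng)
        if stats.ttest_ind(x, y).pvalue < alpha:
            rejections += 1
    return rejections / reps

# With identical skewed populations, the empirical size should stay
# near the nominal 5% level
rate = rejection_rate(skew_a=5.0)
```

A power study like the one in the abstract repeats this with a location shift between the two groups and counts rejections as correct detections instead.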
Pinheiro, Eliane Cantinho. "Ajustes para o teste da razão de verossimilhanças em modelos de regressão beta." Universidade de São Paulo, 2009. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-09072009-144049/.
We consider the issue of performing accurate small-sample likelihood-based inference in beta regression models, which are useful for modeling continuous proportions that are affected by independent variables. We derive Skovgaard's (Scandinavian Journal of Statistics 28 (2001) 3-32) adjusted likelihood ratio statistics in this class of models. We show that the adjustment terms have a simple compact form that can be easily implemented with standard statistical software. We present Monte Carlo simulations showing that inference based on the adjusted statistics we propose is more reliable than that based on the usual likelihood ratio statistic. A real data example is presented.
Holčák, Lukáš. "Statistická analýza souborů s malým rozsahem." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2008. http://www.nusl.cz/ntk/nusl-227882.
Full textErguven, Sait. "Path Extraction Of Low Snr Dim Targets From Grayscale 2-d Image Sequences." Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/12607723/index.pdf.
Change detection of super pixels, groups of pixels that carry sufficient statistics for likelihood ratio testing, is proposed. Super pixels that are determined to be transition points are marked on a binary difference matrix and grouped by the 4-Connected Labeling method. Each label is processed to find its motion vector in the next frame by the Label Destruction and Centroid Mapping techniques. Candidate centroids are passed through Distribution Density Function Maximization and Maximum Histogram Size Filtering to find the target-related motion vectors. Noise-related mappings are eliminated by Range and Maneuver Filtering. The geometric centroids obtained in each frame are used as the observed target path, which is fed into the Optimum Decoding Based Smoothing Algorithm to smooth and estimate the real target path. The Optimum Decoding Based Smoothing Algorithm is based on quantization of the possible states, i.e. the observed target path centroids, and the Viterbi Algorithm. According to the system and observation models, metric values of all possible target paths are computed using observation and transition probabilities. The path with the maximum metric value at the last frame is chosen as the estimated target path.
Malmström, Magnus. "5G Positioning using Machine Learning." Thesis, Linköpings universitet, Reglerteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-149055.
Radio-based positioning of user equipment is an important application in fifth-generation (5G) radio networks, on which much time and money is being spent for development and improvement. One application area is the positioning of emergency calls, where the user equipment must be located with an accuracy of a few tens of meters. Radio-based positioning has always been challenging in urban environments, where tall buildings obstruct and reflect the signal between the user equipment and the base station. One idea for positioning in these challenging urban environments is to use data-driven models trained by algorithms on positioned test data, so-called machine learning algorithms. In this work, two non-linear models, neural networks and random forests, have been implemented and evaluated for positioning of user equipment where the signal from the base station is obstructed. The evaluation was made on data collected by Ericsson from a 5G prototype network located in Kista, Stockholm. The antenna of the base station used has 48 beams in five vertical layers. The inputs and targets of the machine learning algorithms are the signal strength of each beam (BRSRP) and given GPS positions of the user equipment, respectively. The results show that with these machine learning algorithms the user equipment can be positioned with an uncertainty of less than ten meters in 80 percent of the test cases. To achieve these results, it is important to detect whether the signal between the user equipment and the base station is obstructed or not. For this purpose a statistical test has been implemented; its detection probability is above 90 percent, while the probability of false alarm is only a few percent. To reduce the positioning uncertainty, experiments were also made in which the output of the machine learning algorithms is filtered with a Kalman filter.
Results from these experiments show that the Kalman filter can improve the positioning precision considerably.
Rocha, Gilson Silvério da. "Modelos lineares mistos para dados longitudinais em ensaio fatorial com tratamento adicional." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/11/11134/tde-14122015-174119/.
Full textAssays aimed at studying crops through multiple measurements performed on the same sample unit along time, space, depth, etc. are frequently adopted in agronomic experiments. This type of measurement yields a dataset known as longitudinal data, for which statistical procedures capable of identifying possible patterns of variation and correlation among measurements are of great importance. The possibility of including random effects and modeling covariance structures makes the methodology of mixed linear models one of the most appropriate tools for this type of analysis. However, despite all the theoretical and computational development, the use of this methodology in more complex designs involving longitudinal data and additional treatments, such as those used in forage crops, still needs to be studied. The present work covers the use of the Hasse diagram and the top-down strategy in building mixed linear models for the study of successive cuts from an experiment on boron fertilization of alfalfa (Medicago sativa L.) carried out in the field area of Embrapa Southeast Livestock. First, we considered a qualitative approach for all study factors and chose to build the Hasse diagram because of the model's complexity. The inclusion of random effects and the selection of covariance structures for the residuals were based on the likelihood ratio test, computed from parameters estimated by restricted maximum likelihood, on Akaike's Information Criterion (AIC), the corrected Akaike's Information Criterion (AICc) and the Bayesian Information Criterion (BIC). The fixed effects were analyzed with the Wald-F test, and a regression study was performed for the significant sources of variation associated with the longitudinal factor.
Building the Hasse diagram was essential for understanding and symbolically displaying the relations among all factors in the study, allowing the sources of variation and their degrees of freedom to be decomposed and ensuring that all tests were correctly performed. The inclusion of a random effect associated with the sample unit was essential for modeling the behavior of each unit. Furthermore, the variance-components structure with heterogeneity, added to the residuals, efficiently modeled the heterogeneity of variances present across the different cuts of the alfalfa plants. The fit was checked by residual diagnostic plots. The regression study allowed us to evaluate shoot dry matter productivity (kg ha-1) over successive cuts of alfalfa, comparing fertilization with different boron sources and doses. The highest productivity was observed for the source ulexite combined with doses of 3, 6 and 9 kg ha-1 of boron.
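The model-comparison machinery described above can be sketched in miniature. The helper below is a hypothetical illustration, not the authors' code: it performs a generic likelihood ratio test between two nested fits, here choosing between a common and group-specific residual variances, the kind of covariance-structure decision made in the thesis.

```python
import numpy as np
from scipy.stats import chi2

def likelihood_ratio_test(loglik_null, loglik_full, df_diff):
    """Likelihood ratio statistic for two nested fits and its chi2 p-value."""
    lr_stat = 2.0 * (loglik_full - loglik_null)
    p_value = chi2.sf(lr_stat, df_diff)
    return lr_stat, p_value

def normal_loglik(x, sigma2):
    """Log-likelihood of zero-mean normal data with variance sigma2."""
    return -0.5 * x.size * np.log(2 * np.pi * sigma2) - np.sum(x**2) / (2 * sigma2)

# Toy example: one common variance (null) vs. group-specific variances (full),
# mimicking the choice between homogeneous and heterogeneous residual variances.
rng = np.random.default_rng(0)
g1 = rng.normal(0.0, 1.0, 50)
g2 = rng.normal(0.0, 3.0, 50)
pooled = np.concatenate([g1, g2])

ll_null = normal_loglik(pooled, np.mean(pooled**2))
ll_full = normal_loglik(g1, np.mean(g1**2)) + normal_loglik(g2, np.mean(g2**2))
stat, p = likelihood_ratio_test(ll_null, ll_full, df_diff=1)
print(f"LR = {stat:.2f}, p = {p:.4g}")
```

Here the heterogeneous-variance model should be strongly favored, since the two groups were simulated with very different spreads.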
Urbano, Simone. "Detection and diagnostic of freeplay induced limit cycle oscillation in the flight control system of a civil aircraft." Thesis, Toulouse, INPT, 2019. http://www.theses.fr/2019INPT0023/document.
Full textThis research study is the result of a 3-year CIFRE PhD thesis between the Airbus design office (Aircraft Control domain) and the TéSA laboratory in Toulouse. The main goal is to propose, develop and validate a software solution for the detection and diagnosis of a specific type of elevator and rudder vibration, called limit cycle oscillation (LCO), based on existing signals available in flight control computers on board in-series aircraft. LCO is a generic mathematical term defining an initial-condition-independent periodic mode occurring in nonconservative nonlinear systems. This study focuses on the LCO phenomenon induced by mechanical freeplay in the control surfaces of a civil aircraft. The consequences of LCO are local structural load augmentation, deterioration of flight handling qualities, reduction of actuator operational life, deterioration of cockpit and cabin comfort, and increased maintenance cost. The state of the art for freeplay-induced LCO detection and diagnosis relies on pilot sensitivity to vibration and on periodic freeplay checks on the control surfaces. This study proposes a data-driven solution to help LCO and freeplay diagnosis. The goal is to further improve aircraft availability and reduce maintenance costs by providing airlines with a condition-monitoring signal for LCO and freeplay. For this reason, two algorithmic solutions for vibration and freeplay diagnosis are investigated in this PhD thesis. A real-time detector for LCO diagnosis is first proposed, based on the theory of the generalized likelihood ratio test (GLRT). Some variants and simplifications are also proposed to comply with industrial constraints. In the second part of this work, a mechanical freeplay detector is introduced based on the theory of Wiener model identification. Parametric (maximum likelihood estimator) and nonparametric (kernel regression) approaches are investigated, as well as some variants of well-known nonparametric methods.
In particular, the problem of hysteresis cycle estimation (as the output nonlinearity of a Wiener model) is tackled, and both the constrained and unconstrained problems are studied. A theoretical, numerical (simulator) and experimental (flight data and laboratory) analysis is carried out to investigate the performance of the proposed detectors and to identify their limitations and industrial feasibility. The numerical and experimental results confirm that the proposed GLR test (and its variants/simplifications) is a very appealing method for LCO diagnosis in terms of performance, robustness and computational cost. On the other hand, the proposed freeplay diagnosis algorithm is able to detect relatively large freeplay levels, but it does not provide consistent results for relatively small freeplay levels. Moreover, specific input types are needed to guarantee repeatable and consistent results. Further studies should be carried out to compare the GLRT results with a Bayesian approach and to investigate more deeply the possibilities and limitations of the proposed parametric method for Wiener model identification.
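The GLRT underlying such an oscillation detector can be illustrated on the textbook case of detecting a sinusoid of unknown amplitude and phase at a known frequency in white Gaussian noise. The sketch below is a minimal example of that classical result, not the thesis implementation; the frequency, sample size and noise level are arbitrary assumptions.

```python
import numpy as np
from scipy.stats import chi2

def glrt_sinusoid(x, f0, sigma2):
    """GLR statistic for a sinusoid of unknown amplitude and phase at known
    frequency f0 in white Gaussian noise of known variance sigma2.
    Under H0 (noise only) the statistic is approximately chi-squared, 2 dof."""
    n = np.arange(x.size)
    dft = np.sum(x * np.exp(-2j * np.pi * f0 * n))   # DFT coefficient at f0
    return 2.0 * np.abs(dft) ** 2 / (x.size * sigma2)

rng = np.random.default_rng(1)
n = np.arange(256)
sigma2 = 1.0
noise = rng.normal(0.0, np.sqrt(sigma2), n.size)
oscillation = np.cos(2 * np.pi * 0.1 * n) + noise    # strong periodic component

threshold = chi2.ppf(0.99, df=2)   # ~1% false-alarm probability
print("noise only :", glrt_sinusoid(noise, 0.1, sigma2) > threshold)
print("oscillation:", glrt_sinusoid(oscillation, 0.1, sigma2) > threshold)
```

The threshold is set directly from the chi-squared null distribution, which is what allows the false-alarm probability to be controlled explicitly, as in the detection/false-alarm trade-off discussed in the abstract.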
Lee, Chang. "MITIGATION of BACKGROUNDS for the LARGE UNDERGROUND XENON DARK MATTER EXPERIMENT." Case Western Reserve University School of Graduate Studies / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=case1427482791.
Full textMagalh?es, Felipe Henrique Alves. "Testes em modelos weibull na forma estendida de Marshall-Olkin." Universidade Federal do Rio Grande do Norte, 2011. http://repositorio.ufrn.br:8080/jspui/handle/123456789/18639.
Full textIn survival analysis, the response is usually the time until the occurrence of an event of interest, called the failure time. The main characteristic of survival data is the presence of censoring, which is a partial observation of the response. Given this, some models occupy an important position for properly fitting several practical situations, among which we can mention the Weibull model. Marshall-Olkin extended-form distributions offer a generalization of basic distributions that allows greater flexibility in fitting lifetime data. This work presents a simulation study comparing the gradient test and the likelihood ratio test using the Marshall-Olkin extended-form Weibull distribution. As a result, only a small advantage is found for the likelihood ratio test.
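The two statistics compared in this simulation study can be written down explicitly for a simpler model. The toy below uses an exponential likelihood rather than the Marshall-Olkin extended-form Weibull of the thesis (an assumption made only to keep the example short); it forms both the likelihood ratio and the gradient statistic for H0: rate = lam0, each referred to a chi-squared distribution with one degree of freedom.

```python
import numpy as np
from scipy.stats import chi2

def lr_and_gradient_stats(x, lam0):
    """Likelihood ratio and gradient statistics for H0: rate = lam0
    in an exponential model (toy stand-in for the thesis's Weibull model)."""
    n, s = len(x), np.sum(x)
    lam_hat = n / s                                 # MLE of the rate
    loglik = lambda lam: n * np.log(lam) - lam * s
    lr = 2.0 * (loglik(lam_hat) - loglik(lam0))     # likelihood ratio statistic
    score0 = n / lam0 - s                           # score function at lam0
    grad = score0 * (lam_hat - lam0)                # gradient statistic
    return lr, grad

rng = np.random.default_rng(2)
x = rng.exponential(scale=1.0, size=200)            # true rate = 1
lr, grad = lr_and_gradient_stats(x, lam0=1.0)
crit = chi2.ppf(0.95, df=1)
print(f"LR = {lr:.3f}, gradient = {grad:.3f}, 5% critical value = {crit:.3f}")
```

The gradient statistic needs neither the observed nor the expected information matrix, which is its main computational appeal over the Wald and score statistics.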
Lemonte, Artur Jose. "Estatística gradiente e refinamento de métodos assintóticos no modelo de regressão Birnbaum-Saunders." Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-26102010-123617/.
Full textThe Birnbaum-Saunders regression model is commonly used in reliability studies. We address the issue of performing inference in this class of models when the number of observations is small. Our simulation results suggest that the likelihood ratio and score tests tend to be liberal when the sample size is small. We derive Bartlett and Bartlett-type correction factors that reduce the size distortion of the tests. Additionally, we consider modified signed log-likelihood ratio statistics in this class of models. Finally, the asymptotic expansion of the distribution of the gradient test statistic is derived for a composite hypothesis under a sequence of Pitman alternative hypotheses converging to the null hypothesis at rate n^{-1/2}, n being the sample size. Comparisons of the local powers of the gradient, likelihood ratio, Wald and score tests reveal no uniform superiority property.
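A Bartlett correction rescales the likelihood ratio statistic so that its null mean matches that of the chi-squared reference distribution. The sketch below estimates the factor by Monte Carlo for the textbook test of a normal mean; it illustrates the idea only and is not the analytical corrections derived in the thesis.

```python
import numpy as np
from scipy.stats import chi2

def lr_stat_normal_mean(x, mu0):
    """LR statistic for H0: mu = mu0, normal data with unknown variance."""
    n = len(x)
    s2_hat = np.var(x)                      # MLE of the variance under H1
    s2_0 = np.mean((x - mu0) ** 2)          # MLE of the variance under H0
    return n * np.log(s2_0 / s2_hat)

# Small sample: the chi2(1) approximation tends to be liberal.
rng = np.random.default_rng(3)
n, reps = 10, 5000
null_stats = np.array([lr_stat_normal_mean(rng.normal(0, 1, n), 0.0)
                       for _ in range(reps)])

# Empirical Bartlett factor: rescale so the null mean matches E[chi2(1)] = 1.
bartlett_factor = null_stats.mean()         # estimates 1 + c/n
corrected = null_stats / bartlett_factor

print(f"uncorrected size: {np.mean(null_stats > chi2.ppf(0.95, 1)):.3f}")
print(f"corrected size:   {np.mean(corrected > chi2.ppf(0.95, 1)):.3f}")
```

Dividing by the estimated factor shrinks the statistic, pulling the empirical rejection rate back toward the nominal 5% level.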
Vong, Camille. "Model-Based Optimization of Clinical Trial Designs." Doctoral thesis, Uppsala universitet, Institutionen för farmaceutisk biovetenskap, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-233445.
Full textGayet-Ageron, Angèle. "L’utilisation de la technique d’amplification de Treponema pallidum dans le diagnostic des ulcères oro-génitaux liés à la syphilis." Thesis, Paris 11, 2015. http://www.theses.fr/2015PA11T005/document.
Full textBACKGROUND Syphilis has re-emerged in at-risk populations since 2000. Although the treatment of syphilis is simple, its diagnosis remains challenging. Treponema pallidum polymerase chain reaction (Tp-PCR) has been used in the diagnosis of syphilis since 1990, but it has only been included in the CDC case definition since January 2014. OBJECTIVES 1) To assess the accuracy of Tp-PCR in various biological specimens and syphilis stages. 2) To measure its diagnostic performance (sensitivity, specificity and predictive values) in ulcers from early syphilis compared to three reference groups. 3) To compare the accuracy of the two most commonly used targets: the tpp47 and polA genes. METHODS We conducted a systematic review and meta-analysis of all studies published from 1990 onwards. We implemented a multicentre, prospective, observational study in 5 European cities between 09/2011 and 09/2013 among patients with an oral or genital ulcer suggestive of syphilis. All patients were tested with the traditional reference tests plus the 2 Tp-PCRs (tpp47 and polA). We estimated the sensitivity, specificity and predictive values of Tp-PCR compared to darkfield microscopy (DFM), serology and an enhanced gold standard. We used the kappa coefficient to assess the agreement between the 2 targets. MAIN RESULTS Tp-PCR had the best accuracy in ulcers from early syphilis. Tp-PCR performed better when compared to the enhanced gold standard and had a higher sensitivity than DFM. The 2 Tp-PCRs had similar accuracy and almost perfect agreement. CONCLUSIONS Tp-PCR targeting either tpp47 or polA is clinically useful to confirm early syphilis in smears and could even replace DFM under specific conditions.
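The diagnostic-performance measures and the kappa coefficient used in this study are simple functions of 2x2 tables. The sketch below uses made-up counts purely for illustration; they are not the study's data.

```python
import numpy as np

def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity and predictive values from a 2x2 table
    (index test vs. reference standard)."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),   # positive predictive value
        "npv": tn / (tn + fn),   # negative predictive value
    }

def cohen_kappa(table):
    """Agreement between two binary tests beyond chance; table[i][j] counts
    subjects with result i on test 1 and j on test 2."""
    t = np.asarray(table, dtype=float)
    n = t.sum()
    p_obs = np.trace(t) / n
    p_exp = np.sum(t.sum(axis=0) * t.sum(axis=1)) / n ** 2
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical counts, chosen to mimic "almost perfect" agreement (kappa > 0.8).
metrics = diagnostic_metrics(tp=80, fp=5, fn=10, tn=105)
kappa = cohen_kappa([[90, 3], [4, 103]])
print(metrics)
print(f"kappa = {kappa:.3f}")
```

Note that predictive values, unlike sensitivity and specificity, depend on disease prevalence in the studied population, which is why the study reports them separately per reference group.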
SILVA, Priscila Gonçalves da. "Inferência e diagnóstico em modelos não lineares Log-Gama generalizados." Universidade Federal de Pernambuco, 2016. https://repositorio.ufpe.br/handle/123456789/18637.
Full text
Young e Bakir (1987) propôs a classe de Modelos Lineares Log-Gama Generalizados (MLLGG) para analisar dados de sobrevivência. No nosso trabalho, estendemos a classe de modelos propostapor Young e Bakir (1987) permitindo uma estrutura não linear para os parâmetros de regressão. A nova classe de modelos é denominada como Modelos Não Lineares Log-Gama Generalizados (MNLLGG). Com o objetivo de obter a correção de viés de segunda ordem dos estimadores de máxima verossimilhança (EMV) na classe dos MNLLGG, desenvolvemos uma expressão matricial fechada para o estimador de viés de Cox e Snell (1968). Analisamos, via simulação de Monte Carlo, os desempenhos dos EMV e suas versões corrigidas via Cox e Snell (1968) e através da metodologia bootstrap (Efron, 1979). Propomos também resíduos e técnicas de diagnóstico para os MNLLGG, tais como: alavancagem generalizada, influência local e influência global. Obtivemos, em forma matricial, uma expressão para o fator de correção de Bartlett à estatística da razão de verossimilhanças nesta classe de modelos e desenvolvemos estudos de simulação para avaliar e comparar numericamente o desempenho dos testes da razão de verossimilhanças e suas versões corrigidas em relação ao tamanho e poder em amostras finitas. Além disso, derivamos expressões matriciais para os fatores de correção tipo-Bartlett às estatísticas escore e gradiente. Estudos de simulação foram feitos para avaliar o desempenho dos testes escore, gradiente e suas versões corrigidas no que tange ao tamanho e poder em amostras finitas.
Young and Bakir (1987) proposed the class of generalized log-gamma linear regression models (GLGLM) to analyze survival data. In our work, we extend this class by allowing a nonlinear structure for the regression parameters. The new class is called generalized log-gamma nonlinear regression models (GLGNLM). We also propose a closed-form matrix expression for the second-order bias of the maximum likelihood estimate of the regression parameter vector in the GLGNLM class, using the results of Cox and Snell (1968), and obtain bias-corrected estimates both with this expression and with the bootstrap technique (Efron, 1979); the performance of the estimates and their corrected versions is assessed via Monte Carlo simulation. Residuals and diagnostic techniques are proposed for the GLGNLM, such as generalized leverage and local and global influence. A general matrix expression is obtained for the Bartlett correction factor of the likelihood ratio statistic in this class of models, and simulation studies are developed to evaluate and numerically compare the performance of the likelihood ratio tests and their corrected versions regarding size and power in finite samples. Furthermore, general matrix expressions are obtained for the Bartlett-type correction factors of the score and gradient statistics, and simulation studies are conducted to evaluate the performance of the score and gradient tests and their corrected versions regarding size and power in finite samples.
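The bootstrap bias correction of Efron (1979) mentioned above can be sketched generically. The toy below applies it to the exponential-rate MLE, whose upward small-sample bias is well known, rather than to the generalized log-gamma model; the estimator and sample are illustrative assumptions only.

```python
import numpy as np

def bootstrap_bias_corrected(x, estimator, n_boot=2000, seed=0):
    """Bootstrap bias correction (Efron, 1979):
    corrected = theta_hat - (mean of bootstrap estimates - theta_hat)."""
    rng = np.random.default_rng(seed)
    theta_hat = estimator(x)
    boot = np.array([estimator(rng.choice(x, size=len(x), replace=True))
                     for _ in range(n_boot)])
    bias_estimate = boot.mean() - theta_hat
    return theta_hat - bias_estimate

rate_mle = lambda x: len(x) / np.sum(x)   # exponential-rate MLE, biased upward
rng = np.random.default_rng(4)
x = rng.exponential(scale=1.0, size=15)   # true rate = 1, deliberately small n
uncorrected = rate_mle(x)
corrected = bootstrap_bias_corrected(x, rate_mle)
print(f"MLE = {uncorrected:.3f}, bias-corrected = {corrected:.3f}")
```

Analytical corrections like Cox and Snell's achieve the same second-order effect without resampling, which is why the thesis compares both routes in its simulations.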