Dissertations / Theses on the topic 'Random effect model'

Consult the top 50 dissertations / theses for your research on the topic 'Random effect model.'

1

Kwong, Grace Pui Sze. "Model misspecification and random effect models in survival analysis." Thesis, University of Warwick, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.398729.

2

Cao, Hongmei. "A random effect model with quality score for meta-analysis." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ58754.pdf.

3

Cheng, Yang. "Maximum likelihood estimation and computation in a random effect factor model." College Park, Md. : University of Maryland, 2004. http://hdl.handle.net/1903/1782.

Abstract:
Thesis (Ph.D.) -- University of Maryland, College Park, 2004.
Thesis research directed by the Department of Mathematics. Title from the title page of the PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
4

Choi, Ga Eun, and Stephanie Galonja. "The Euro Effect on Trade : The Trade Effect of the Euro on non-EMU and EMU Members." Thesis, Internationella Handelshögskolan, Högskolan i Jönköping, IHH, Economics, Finance and Statistics, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-20114.

Abstract:
The purpose of this paper is to investigate how changes in trade values are affected by the implementation of the euro. We study the EU members, including 11 EMU members and 3 non-EMU members (Sweden, Denmark and the United Kingdom). The empirical analysis is conducted using a modified version of the standard gravity model. Our core findings can be summarized in two parts. First, the euro effect on trade, estimated by the euro-dummy coefficient, reflects an adverse influence of the euro's creation on trade values for the first two years of implementation across all our sample countries. This leads us to conclude that there is no significant improvement in trade in the year of implementation. These results do not change when a time trend variable is added to evaluate the robustness of the model. Our primary interpretation is that the euro's creation does not have an immediate impact on trade; rather, the impact is gradual, as countries need time to adapt to a new currency. This is connected to our second finding that the negative influence of the euro implementation is not permanent but eventually gives way to positive effects on trade values over time. We therefore conclude that the euro implementation has had a gradual impact on both EMU and non-EMU members.
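For orientation, a hedged sketch of the kind of euro-dummy gravity specification described above (a generic log-form gravity model; the variable names and controls are illustrative assumptions, not the authors' exact specification):

    \ln(\mathrm{Trade}_{ijt}) = \beta_0 + \beta_1 \ln(\mathrm{GDP}_{it}\,\mathrm{GDP}_{jt}) + \beta_2 \ln(\mathrm{Dist}_{ij}) + \gamma\,\mathrm{Euro}_{ijt} + \delta t + \varepsilon_{ijt}

Here Euro_ijt indicates euro use on the trade route in year t, \gamma is the euro-dummy coefficient whose sign is discussed above, and the linear term \delta t corresponds to the time-trend robustness check.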
5

HE, Ran. "Carry-over and interaction effects of different hand-milking techniques and milkers on milk." Thesis, Uppsala universitet, Statistiska institutionen, 1986. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-154641.

Abstract:
The main idea of this thesis is to study the importance of carry-over effects and interaction effects in statistical models. To investigate this, a hand-milking experiment in Burkina Faso was studied. In many countries without electricity access, such as Burkina Faso, the amount of milk and its composition still depend heavily on hand-milking techniques and milkers. Moreover, time effects also play an important role in the stockbreeding system. Therefore, by fitting all effects, carry-over effects and interaction effects in a linear mixed effects model, it is concluded that the carry-over effects of milker and hand-milking technique cannot be neglected, and that the interaction effects among hand-milking techniques, milkers, days and periods can be substantial.
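A minimal sketch of a linear mixed effects model of the kind referred to above (the terms shown are an illustrative assumption, not the thesis's exact specification):

    y_{ijkl} = \mu + \tau_i + m_j + \kappa_{c(l)} + \pi_l + (\tau m)_{ij} + a_k + \varepsilon_{ijkl}, \qquad a_k \sim N(0, \sigma_a^2), \; \varepsilon_{ijkl} \sim N(0, \sigma^2)

with \tau_i the milking technique, m_j the milker, \kappa_{c(l)} the carry-over of the treatment applied in the previous period, \pi_l the day/period effect, (\tau m)_{ij} the technique-by-milker interaction and a_k a random cow effect.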
6

Puschmann, Martin. "Anderson transitions on random Voronoi-Delaunay lattices." Doctoral thesis, Universitätsbibliothek Chemnitz, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-231900.

Abstract:
The dissertation covers phase transitions in the realm of the Anderson model of localization on topologically disordered Voronoi-Delaunay lattices. The disorder is given by random connections, which implies correlations due to the restrictive lattice construction. Strictly speaking, the system features "strong anticorrelation", which is responsible for quenched long-range fluctuations of the coordination number. This attribute leads to violations of universal behavior in various systems, e.g. the Ising and Potts models, and to modifications of the Harris and the Imry-Ma criteria. In general, these exceptions serve to further the understanding of critical phenomena. Hence, the question arises whether such deviations also occur in the realm of the Anderson model of localization in combination with random Voronoi-Delaunay lattices. For this purpose, four cases, which are distinguished by the spatial dimension of the systems and by the presence or absence of a magnetic field, are investigated by means of two different methods, i.e. the multifractal analysis and the recursive Green function approach. The behavior is classified by the existence and type of occurring phase transitions and by the critical exponent ν of the localization length. The results for the four cases can be summarized as follows. In two-dimensional systems, no phase transitions occur without a magnetic field, and all states are localized as a result of topological disorder. The behavior changes under the influence of the magnetic field. There are so-called quantum Hall transitions, which are phase changes between two localized regions. For low magnetic field strengths, the resulting exponent ν ≈ 2.6 coincides with established values in the literature. For higher strengths, an increased value, ν ≈ 2.9, was determined. The deviations are probably caused by so-called Landau level coupling, where electrons scatter between different Landau levels. In contrast, the principal behavior in three-dimensional systems is the same in both cases. Two localization-delocalization transitions occur in each system. For these transitions the exponents ν ≈ 1.58 and ν ≈ 1.45 were determined for systems in the absence and in the presence of a magnetic field, respectively. This behavior and the obtained values agree with known results, and thus no deviation from the universal behavior is observed.
7

Ren, Weijia. "Impact of Design Features for Cross-Classified Logistic Models When the Cross-Classification Structure Is Ignored." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1322538958.

8

Wang, Yu. "A study on the type I error rate and power for generalized linear mixed model containing one random effect." Kansas State University, 2017. http://hdl.handle.net/2097/35301.

Abstract:
Master of Science
Department of Statistics
Christopher Vahl
In animal health research, it is quite common for a clinical trial to be designed to demonstrate the efficacy of a new drug where a binary response variable is measured on an individual experimental animal (i.e., the observational unit). However, the investigational treatments are applied to groups of animals instead of individual animals. This means the experimental unit is the group of animals and the response variable can be modeled with the binomial distribution. Also, the responses of animals within the same experimental unit may then be statistically dependent on each other. The usual logit model for a binary response assumes that all observations are independent. In this report, a logit model with a random error term representing the group of animals is considered. This model belongs to a class of models referred to as generalized linear mixed models and is commonly fit using the SAS System procedure PROC GLIMMIX. Furthermore, practitioners often adjust the denominator degrees of freedom of the test statistic produced by PROC GLIMMIX using one of several different methods. In this report, a simulation study was performed over a variety of different parameter settings to compare the effects on the type I error rate and power of two methods for adjusting the denominator degrees of freedom, namely “DDFM = KENWARDROGER” and “DDFM = NONE”. Despite its reputation for fine performance in linear mixed models with normally distributed errors, the “DDFM = KENWARDROGER” option tended to perform poorly more often than the “DDFM = NONE” option in the logistic regression model with one random effect.
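As a hedged illustration of the data-generating model described above (a logit model with one random group effect; the parameter values are arbitrary assumptions, and the PROC GLIMMIX fitting and DDFM adjustments are not reproduced here), a minimal Python sketch:

    import numpy as np

    rng = np.random.default_rng(1)

    def simulate_trial(n_groups=10, animals_per_group=8, beta0=-1.0, trt_effect=0.0, sd_group=0.5):
        """Simulate binomial responses from a logit model with a random group effect.
        Treatment is applied at the group level; animals are the observational units."""
        trt = np.repeat([0, 1], n_groups // 2)         # whole groups receive treatment or control
        u = rng.normal(0.0, sd_group, size=n_groups)   # random group effects
        p = 1.0 / (1.0 + np.exp(-(beta0 + trt_effect * trt + u)))
        y = rng.binomial(animals_per_group, p)         # successes per group
        return trt, y

    trt, y = simulate_trial()   # with trt_effect = 0, the rejection rate over many replicates estimates the type I error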
9

Mingolini, Riccardo. "Investimenti in lobby: Un modello per stimare il loro impatto sull'azienda." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/13291/.

Abstract:
This thesis analyses a firm's propensity to invest in lobbying, measured through several variables that are important for the firm, such as capital, the firm's net income, the number of employees (and the like), and their discouraging or encouraging impact on our dependent variable. Finally, we try to find a model that substantially approximates these dependencies and variables, so as to draw a logical and mathematical thread between our dependent variable Y (investment in lobbying) and our independent variables X, i.e., the indices and monetary-value variables that are important for characterising a firm and its industry (SIC code).
10

Oberhardt, Tobias. "A micromechanical model for the nonlinearity of microcracks in random distributions and their effect on higher harmonic Rayleigh wave generation." Thesis, Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/54365.

Abstract:
This research investigates the modeling of randomly distributed surface-breaking microcracks and their effects on higher harmonic generation in Rayleigh surface waves. The modeling is based on micromechanical considerations of rough surface contact. The nonlinear behavior of a single microcrack is described by a hyperelastic effective stress-strain relationship. Finite element simulations of nonlinear wave propagation in a solid with distributed microcracks are performed. The evolution of fundamental and second harmonic amplitudes along the propagation distance is studied and the acoustic nonlinearity parameter is calculated. The results show that the nonlinearity parameter increases with crack density and root mean square roughness of the crack faces. While, for a dilute concentration of microcracks, the increase in acoustic nonlinearity is proportional to the crack density, this is not valid for higher crack densities, as the microcracks start to interact. Finally, it is shown that odd higher harmonic generation in Rayleigh surface waves due to sliding crack faces introduces a friction nonlinearity.
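For context, the acoustic nonlinearity parameter mentioned above is commonly estimated from the growth of the second harmonic; a generic, hedged form (the exact prefactor depends on the wave type and measurement setup and is not taken from the thesis) is

    \beta \propto \frac{A_2}{A_1^2\, k^2\, x}

where A_1 and A_2 are the fundamental and second-harmonic amplitudes, k is the wavenumber and x the propagation distance, which is why the evolution of both amplitudes along the propagation distance is tracked in the simulations.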
11

Xu, Liou. "A MARKOV TRANSITION MODEL TO DEMENTIA WITH DEATH AS A COMPETING EVENT." UKnowledge, 2010. http://uknowledge.uky.edu/gradschool_diss/42.

Abstract:
The research on multi-state Markov transition models is motivated by the nature of the longitudinal data from the Nun Study (Snowdon, 1997) and similar information on the BRAiNS cohort (Salazar, 2004). Our goal is to develop a flexible methodology for handling the categorical longitudinal responses and the competing-risks time-to-event outcome that characterize the features of the data for research on dementia. To do so, we treat the survival from death as a continuous variable rather than defining death as a competing absorbing state to dementia. We assume that within each subject the survival component and the Markov process are linked by a shared latent random effect and, moreover, that these two pieces are conditionally independent given the random effect and their corresponding predictor variables. The problem of the dependence among observations made on the same subject (repeated measurements) is addressed by assuming a first-order Markovian dependence structure. A closed-form expression for the individual, and thus the overall, conditional marginal likelihood function is derived, which we can evaluate numerically to produce the maximum likelihood estimates for the unknown parameters. This method can be implemented using standard statistical software such as SAS Proc Nlmixed©. We present the results of simulation studies designed to show how the model’s ability to accurately estimate the parameters can be affected by the distributional form of the survival term. We then address the problem of accommodating the confounding due to the subject’s residual life time in the nonhomogeneous chain. The convergence status of the chain is examined and the formulation of the absorption statistics is derived. We propose using the Delta method to estimate the variance terms for the construction of confidence intervals. The results are illustrated with applications to the Nun Study data in detail.
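A hedged sketch of the shared random-effect construction described above (generic notation; the thesis's exact closed form is not reproduced):

    L_i(\theta) = \int \Big[ f_{\mathrm{surv}}(t_i \mid b_i; \theta) \prod_{j} P(S_{ij} \mid S_{i,j-1}, b_i; \theta) \Big] \phi(b_i; \sigma_b^2)\, db_i

where the survival component and the first-order Markov transition probabilities for the cognitive states S_{ij} are conditionally independent given the subject-level random effect b_i.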
12

Mateluna, Diego Ignacio Gallardo. "Extensões em modelos de sobrevivência com fração de cura e efeitos aleatórios." Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-24062014-202301/.

Abstract:
In this work some extensions of survival models with a cure fraction are presented, assuming a context in which the observations are grouped into clusters. Two random effects are incorporated for each group: one to explain the effect on the survival time of susceptible observations and another to explain the probability of cure. A classical approach through REML estimators is presented as well as a Bayesian approach through Dirichlet processes. Besides comparing both approaches, simulation studies that evaluate the performance of the proposed estimators are discussed. Finally, the results are illustrated with a real data set.
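As a hedged reference point for the cure-fraction structure described above (a standard mixture cure model with cluster-level random effects; the link functions and exact parameterisation are assumptions, not the thesis's):

    S_{ij}(t) = \pi_{ij} + (1 - \pi_{ij})\, S_u(t \mid x_{ij}, u_i), \qquad \operatorname{logit}(\pi_{ij}) = z_{ij}'\gamma + v_i

where S_u is the survival function of the susceptible (uncured) subjects, \pi_{ij} is the cure probability, and u_i and v_i are the two group-level random effects acting on the survival of susceptibles and on the cure probability, respectively.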
13

Moldovan, Max. "Stochastic Modelling of Random Variables with an Application in Financial Risk Management." Queensland University of Technology, 2003. http://eprints.qut.edu.au/15796/.

Abstract:
The problem of determining whether or not a theoretical model is an accurate representation of an empirically observed phenomenon is one of the most challenging in empirical scientific investigation. The following study explores the problem of stochastic model validation. Special attention is devoted to the unusual two-peaked shape of the empirically observed distributions of financial returns conditional on realised volatility. The application of statistical hypothesis testing and simulation techniques leads to the conclusion that returns conditional on realised volatility follow a specific, previously undocumented distribution. The probability density that represents this distribution is derived, characterised and applied to the validation of the financial model.
14

Kamangar, Daniel, and Richard Sundin. "Management and CEO Stock Ownership and its Effect on Company Performance." Thesis, KTH, Matematisk statistik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-229670.

Abstract:
This is a study of the effect of management and CEO stock ownership on company performance. A regression analysis is performed on panel data consisting of a sample of 30 companies listed on OMX Stockholm Mid Cap. A total of 210 and 2,520 observations are considered on a yearly and monthly basis, respectively, for seven years (2010-2016). The Hausman test is applied to choose between the fixed effects and random effects regression models. Results show that management relative stock ownership has a significant positive effect on company net income growth and return on assets. The effect is not significant for CEO stock ownership, which is contrary to what has commonly been shown for large companies in previous research. Moreover, alternative methodology is discussed for the benefit of future researchers. The authors illustrate how the selection of dummy variables can be vital for final model outcomes, and it is thus an important aspect to consider when performing panel data analysis.
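For reference, the Hausman test mentioned above compares the fixed-effects and random-effects estimators; a standard form of the statistic (generic notation, not taken from the thesis) is

    H = (\hat\beta_{FE} - \hat\beta_{RE})' \big[\widehat{\mathrm{Var}}(\hat\beta_{FE}) - \widehat{\mathrm{Var}}(\hat\beta_{RE})\big]^{-1} (\hat\beta_{FE} - \hat\beta_{RE}) \sim \chi^2_k \ \text{under } H_0

where H_0 is that the random-effects estimator is consistent (regressors uncorrelated with the individual effects); a large value of H favours the fixed effects model.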
15

Shen, Xia. "Novel Statistical Methods in Quantitative Genetics : Modeling Genetic Variance for Quantitative Trait Loci Mapping and Genomic Evaluation." Doctoral thesis, Uppsala universitet, Beräknings- och systembiologi, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-170091.

Abstract:
This thesis develops and evaluates statistical methods for different types of genetic analyses, including quantitative trait loci (QTL) analysis, genome-wide association studies (GWAS), and genomic evaluation. The main contribution of the thesis is to provide novel insights into modeling genetic variance, especially via random effects models. In variance component QTL analysis, a full likelihood model accounting for uncertainty in the identity-by-descent (IBD) matrix was developed. It was found to correctly adjust the bias in genetic variance component estimation and to gain power in QTL mapping in terms of precision. Double hierarchical generalized linear models, and a non-iterative simplified version, were implemented and applied to fit data of an entire genome. These whole-genome models were shown to perform well in both QTL mapping and genomic prediction. A re-analysis of a publicly available GWAS data set identified significant loci in Arabidopsis that control phenotypic variance instead of the mean, which validated the idea of variance-controlling genes. The work in the thesis is accompanied by R packages available online, including a general statistical tool for fitting random effects models (hglm), an efficient generalized ridge regression for high-dimensional data (bigRR), a double-layer mixed model for genomic data analysis (iQTL), a stochastic IBD matrix calculator (MCIBD), a computational interface for QTL mapping (qtl.outbred), and a GWAS analysis tool for mapping variance-controlling loci (vGWAS).
16

Shi, Hongxiang. "Hierarchical Statistical Models for Large Spatial Data in Uncertainty Quantification and Data Fusion." University of Cincinnati / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1504802515691938.

17

Cau, Nicklasson Ronnie, and Simon Hansson. "Investment Companies’ Discount Fluctuation on the Swedish Market : A statistical analysis regarding different micro- and macroeconomic factors influence on Swedish closed-end funds’ discount." Thesis, Internationella Handelshögskolan, Högskolan i Jönköping, IHH, Economics, Finance and Statistics, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-21250.

Abstract:
Closed-end funds' (CEF) discounts and discount fluctuations have puzzled researchers for decades. To date, no multidimensional or cross-sectional variables have been proved to influence CEFs simultaneously. In fact, earlier research and theories on the subject are contradictory, and several explanations for the origin of the CEF discount and its fluctuations have been proposed: among others, investor sentiment, taxation issues, dividend policies, agency costs and agency problems are considered to influence these discounts. The purpose of this report is to examine the relationship between fluctuations in micro- and macroeconomic variables and how these can explain the discount fluctuation of the Swedish CEFs. This report focuses on the CEFs traded on NASDAQ OMX Stockholm, selected through a comprehensive multistage selection process; 10 CEFs were selected. Monthly data for calculating micro- and macroeconomic variables was collected for the period March 2003 to February 2013, which resulted in approximately 1,200 observations. OLS regression analysis, fixed- and random-effects models and Hausman tests were conducted. The findings show that some of this report's chosen micro- and macroeconomic variables influence the Swedish CEFs' discount fluctuation, although these findings are conditional: the CEFs' individual characteristics or traits have a significant impact on the fluctuation of the CEFs' discount. Hence, only by controlling for these characteristics can multidimensional or cross-sectional micro- and macroeconomic variables be shown to affect the CEFs' discount fluctuation.
18

Oliveira, Izabela Regina Cardoso de. "Modeling strategies for complex hierarchical and overdispersed data in the life sciences." Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/11/11134/tde-12082014-105135/.

Abstract:
In this work, we study the so-called combined models, generalized linear mixed models extended to allow for overdispersion, in the context of genetics and breeding. Such flexible models accommodate cluster-induced correlation and overdispersion through two separate sets of random effects and contain as special cases the generalized linear mixed models (GLMM) on the one hand, and commonly known overdispersion models on the other. We use such models to obtain heritability coefficients for non-Gaussian characters. Heritability is one of the many important concepts that are often quantified upon fitting a model to hierarchical data. It is often of importance in plant and animal breeding. Knowledge of this attribute is useful to quantify the magnitude of improvement in the population. For data where linear models can be used, this attribute is conveniently defined as a ratio of variance components. Matters are less simple for non-Gaussian outcomes. The focus is on time-to-event and count traits, where the Weibull-Gamma-Normal and Poisson-Gamma-Normal models are used. The resulting expressions are sufficiently simple and appealing, in particular in special cases, to be of practical value. The proposed methodologies are illustrated using data from animal and plant breeding. Furthermore, attention is given to the occurrence of negative estimates of variance components in the Poisson-Gamma-Normal model. The occurrence of negative variance components in linear mixed models (LMM) has received a certain amount of attention in the literature, whereas almost no work has been done for GLMM. This phenomenon can be confusing at first sight because, by definition, variances themselves are non-negative quantities. However, this is a well understood phenomenon in the context of linear mixed modeling, where one will have to make a choice between a hierarchical and a marginal view. The variance components of the combined model for count outcomes are studied theoretically, and the plant breeding study used as illustration underscores that this phenomenon can be common in applied research. We also call attention to the performance of different estimation methods, because not all available methods are capable of extending the parameter space of the variance components. Then, when there is a need for inference on such components and they are expected to be negative, the accuracy of the method is not the only characteristic to be considered.
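For the Gaussian case alluded to above, heritability is a ratio of variance components; a hedged sketch of the standard linear mixed model definition (not the thesis's non-Gaussian expressions) is

    h^2 = \frac{\sigma^2_g}{\sigma^2_g + \sigma^2_e}

where \sigma^2_g is the genetic variance component and \sigma^2_e the residual variance; the thesis derives analogous, more involved ratios under the Weibull-Gamma-Normal and Poisson-Gamma-Normal combined models.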
19

Castro, Paulo Alexandre de. "Rede complexa e criticalidade auto-organizada: modelos e aplicações." Universidade de São Paulo, 2007. http://www.teses.usp.br/teses/disponiveis/76/76131/tde-14012008-165356/.

Abstract:
Models and scientific theories arise from the necessity of human beings to better understand how the world works. Driven by this purpose, new models and techniques have been created. One of these recently developed theories is Self-Organized Criticality, which is briefly introduced in Chapter 2 of this thesis. In the framework of Self-Organized Criticality theory, we investigate the standard Bak-Sneppen dynamics, as well as some variants of it, and compare them with optimization algorithms (Chapter 3). We present a historical and conceptual review of complex networks in Chapter 4. Some important models are revised: Erdös-Rényi, Watts-Strogatz, the configuration model and Barabási-Albert. In Chapter 5, we analyze the nonlinear Barabási-Albert model. For this model, we obtained an analytical expression for the connectivity distribution P(k), which is valid for a wide range of the parameter space. We also proposed an exact analytical expression for the clustering coefficient, which agrees very well with our numerical simulations. The nonlinear Barabási-Albert network can be assortative or disassortative, and only in the particular case of the linear Barabási-Albert model is the network non-assortative. In Chapter 6, we used data collected from a CD-ROM released by the magazine Placar to construct a very peculiar network -- the Brazilian soccer network. First, we analyzed the bipartite network formed by players and clubs. We find that the probability that a footballer has played M matches decays exponentially with M, whereas the probability that a footballer has scored G goals follows a power law. From the bipartite network, we built the unipartite Brazilian soccer players network. For this network, we determined several important quantities: the average shortest path length, the clustering coefficient and the assortativity coefficient. We were also able to analyse the time evolution of these quantities -- which represents a very rare opportunity in the study of real networks.
20

Putcha, Venkata Rama Prasad. "Random effects in survival analysis." Thesis, University of Reading, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.312431.

21

Seppä, K. (Karri). "Quantifying regional variation in the survival of cancer patients." Doctoral thesis, Oulun yliopisto, 2012. http://urn.fi/urn:isbn:9789526200118.

Abstract:
Monitoring regional variation in the survival of cancer patients is an important tool for assessing the realisation of regional equity in cancer care. When regions are small or sparsely populated, the random component in the total variation across the regions becomes prominent. The broad aim of this doctoral thesis is to develop methods for assessing regional variation in the cause-specific and relative survival of cancer patients in a country, and for quantifying the public health impact of the regional variation in the presence of competing hazards of death, using summary measures that are interpretable also for policy-makers and other stakeholders. Methods for summarising the survival of a patient population with incomplete follow-up in terms of the mean and median survival times are proposed. A cure fraction model with two sets of random effects for regional variation is fitted to cause-specific survival data in a Bayesian framework using Markov chain Monte Carlo simulation. This hierarchical model is extended to the estimation of relative survival, where the expected survival is estimated by region and considered as a random quantity. The public health impact of regional variation is quantified by the extra survival time and the number of avoidable deaths that would be gained if the patients achieved the most favourable level of relative survival. The proposed methods were applied to real data sets from the Finnish Cancer Registry. Estimates of the mean and the median survival times of colon and thyroid cancer patients, respectively, were corrected for the bias that was caused by the inherent selection of patients during the period of diagnosis with respect to their age at diagnosis. The cure fraction model allowed estimation of regional variation in the cause-specific and relative survival of breast and colon cancer patients, respectively, with a parsimonious number of parameters, yielding reasonable estimates also for sparsely populated hospital districts.
22

Baker, John Nicholas. "Random effect models for repairable system reliability." Thesis, University of Plymouth, 1997. http://hdl.handle.net/10026.1/2472.

Abstract:
The practical motivation for the work described in this thesis arose from the development of a new Jaguar car engine. Development tests on prototype engines led to multiple failure time data, which are modelled as a non-homogeneous Poisson process in its log-linear form. Initial analysis of the data using failure time plots showed considerable differences between prototype engines and suggested the use of models incorporating random effects for the engine effects. These models were fitted using the method of maximum likelihood. Two random effects have been considered: a proportional effect and a time-dependent effect. In each case a simulation study showed the method of maximum likelihood to produce good estimates of the parameters and standard errors. There is also shown to be a bias in the estimate of the random effect, especially in smaller samples. The likelihood ratio test has been shown to be valid in assessing the statistical significance of the random effect, and a simulation exercise has demonstrated this in practical terms. Applying this test to the models fitted to the Jaguar data shows the proportional random effect to be significant, while the time-dependent random effect is not found to be significantly different from zero. This test has also been demonstrated to be of use in distinguishing between the two models, and again the proportional random effect model is found to be more suitable for the Jaguar data. Residual analysis is performed to aid model validation. Covariates are included, in various forms, in the proportional random effect model, and the inclusion of these in the time-dependent model is briefly discussed. The use of these models is demonstrated for the Jaguar data by including the type of test an engine performed as a covariate. The covariate models have also been used to compare engine phases. A framework for extending the models to interval-censored data is developed. Finally this thesis discusses possible extensions of the work summarised in the previous paragraphs, including work on alternative models, Bayesian methods and experimental design.
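A hedged sketch of the log-linear non-homogeneous Poisson process with a proportional engine-level random effect described above (illustrative notation, not necessarily the thesis's parameterisation):

    \lambda_j(t) = u_j \exp(\beta_0 + \beta_1 t), \qquad \log u_j \sim N(0, \sigma^2)

where \lambda_j(t) is the failure intensity of engine j at time t; one plausible form of the time-dependent alternative instead places the random effect on the slope, e.g. \lambda_j(t) = \exp(\beta_0 + (\beta_1 + b_j) t).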
23

Hunt, Colleen Helen. "Inference for general random effects models." Title page, table of contents and abstract only, 2003. http://web4.library.adelaide.edu.au/theses/09SM/09smh9394.pdf.

Abstract:
"October 13, 2003" Bibliography: leaves 102-105. This work describes methods associated with general random effects models. Part one describes a technique for investigating mean-variance relationships in random effects models. Part two derives and approximation to the likelihood function using a Laplace expansion to the fourth order.
24

Sanogo, Kakotan. "Tolerance Intervals in Random-Effects Models." VCU Scholars Compass, 2008. http://scholarscompass.vcu.edu/etd/1661.

Abstract:
In the pharmaceutical setting, it is often necessary to establish the shelf life of a drug product and sometimes suitable to assess the risk of product failure at the desired expiry period. The current statistical methodology uses confidence intervals for the predicted mean to establish the expiry period, and prediction intervals for a predicted new assay value or a tolerance interval for a proportion of the population for use in a risk assessment. A major concern is that most methodology treats a homogeneous subpopulation, say batch, either as a fixed effect, and therefore uses a fixed-effects regression model (Graybill, 1976), or as a mixed-effects model limited to balanced data structures (Jonsson, 2003). However, batch is definitely a random effect, as reflected by some recent methodology [Altan, Cabrera and Shoung (2005), Hoffman and Kringle (2005)]. Thus, to assess the risk of product failure at expiry, it is necessary to use tolerance intervals, since they provide an estimate of the proportion of assay values and/or batches failing at the expiry period. In this thesis, we illustrate the methodology described by Jonsson (2003) to construct β-expectation tolerance limits for longitudinal data in a random-effects setting. We underline the limitations of Jonsson’s approach to constructing tolerance intervals and highlight the need for a better methodology.
25

Skoglund, Jimmy. "Essays on random effects models and GARCH." Doctoral thesis, Stockholm : Economic Research Institute, Stockholm School of Economics (Ekonomiska forskningsinstitutet vid Handelshögsk.) (EFI), 2001. http://www.hhs.se/efi.summary/553.htm.

26

Kidney, Darren. "Random coefficient models for complex longitudinal data." Thesis, University of St Andrews, 2014. http://hdl.handle.net/10023/6386.

Abstract:
Longitudinal data are common in biological research. However, real data sets vary considerably in terms of their structure and complexity and present many challenges for statistical modelling. This thesis proposes a series of methods using random coefficients for modelling two broad types of longitudinal response: normally distributed measurements and binary recapture data. Biased inference can occur in linear mixed-effects modelling if subjects are drawn from a number of unknown sub-populations, or if the residual covariance is poorly specified. To address some of the shortcomings of previous approaches in terms of model selection and flexibility, this thesis presents methods for: (i) determining the presence of latent grouping structures using a two-step approach, involving regression splines for modelling functional random effects and mixture modelling of the fitted random effects; and (ii) flexible modelling of the residual covariance matrix using regression splines to specify smooth and potentially non-monotonic variance and correlation functions. Spatially explicit capture-recapture methods for estimating the density of animal populations have shown a rapid increase in popularity over recent years. However, further refinements to existing theory and fitting software are required to apply these methods in many situations. This thesis presents: (i) an analysis of recapture data from an acoustic survey of gibbons using supplementary data in the form of estimated angles to detections, (ii) the development of a multi-occasion likelihood including a model for stochastic availability using a partially observed random effect (interpreted in terms of calling behaviour in the case of gibbons), and (iii) an analysis of recapture data from a population of radio-tagged skates using a conditional likelihood that allows the density of animal activity centres to be modelled as functions of time, space and animal-level covariates.
27

Biard, Lucie. "Test des effets centre en épidémiologie clinique." Thesis, Sorbonne Paris Cité, 2016. http://www.theses.fr/2016USPCC302.

Abstract:
Centre effects modelling within the framework of survival data often relies on the estimation of Cox mixed effects models. Testing for a centre effect consists in testing to zero the variance component of the corresponding random effect. In this framework, the identification of the null distribution of the usual test statistics is not always straightforward. Permutation procedures have been proposed as an alternative for generalised linear mixed models. The objective was to develop a permutation test procedure for random effects in a Cox mixed effects model, for the test of centre effects. We first developed and evaluated permutation procedures for the test of a single centre effect on the baseline risk. The test was used to investigate a centre effect in a clinical trial of induction chemotherapy for patients with acute myeloid leukaemia. The second part consisted of extending the procedure to the test of multiple random effects in survival models, the aim being to examine both centre effects on the baseline risk and centre effects on the effects of covariates. The procedure was illustrated on two cohorts of acute leukaemia patients. In a third part, the permutation approach was applied to a cohort of critically ill patients with hematologic malignancies to investigate centre effects on hospital mortality. The proposed permutation procedures appear to be robust approaches, easily implemented for the routine testing of random centre effects, and are thus an appropriate tool for the analysis of centre effects in clinical epidemiology, with the purpose of understanding their sources.
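A hedged sketch of the permutation logic described above (generic form; the thesis's exact statistic and permutation scheme are not reproduced): under H_0: \sigma^2_{\mathrm{centre}} = 0 the centre labels are exchangeable, so the model is refitted on B data sets with permuted centre labels and the p-value is

    p = \frac{1 + \#\{b : T^{(b)} \ge T_{\mathrm{obs}}\}}{B + 1}

where T is, for example, the likelihood ratio statistic for the random centre effect.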
28

Karlsson, Henrik. "Uplift Modeling : Identifying Optimal Treatment Group Allocation and Whom to Contact to Maximize Return on Investment." Thesis, Linköpings universitet, Statistik och maskininlärning, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-157962.

Abstract:
This report investigates the possibilities of modeling the causal effect of treatment within the insurance domain in order to increase the return on investment of sales through telemarketing. To capture the causal effect, two or more subgroups are required, where one group receives the control treatment. Two different uplift models are used to model the causal effect of treatment: the Class Transformation Method and Modeling Uplift Directly with Random Forests. Both methods are evaluated by the Qini curve and the Qini coefficient. To model the causal effect of treatment, comparison with a control group is a necessity. The report attempts to find the optimal treatment group allocation in order to maximize the precision of the difference between the treatment group and the control group. Further, the report provides a rule of thumb to ensure that the control group is of sufficient size to be able to model the causal effect. The data material used to model uplift was provided by If and consists of approximately 630,000 customer interactions and 60 features. The total uplift in the data set, the difference in purchase rate between the treatment group and the control group, is approximately 3%. Uplift modeled directly by random forests with a Euclidean distance splitting criterion, which tries to maximize the distributional divergence between the treatment group and the control group, performs best, capturing 15% of the theoretical best model. The same model manages to capture 77% of the total amount of purchases in the treatment group by giving treatment to only half of the treatment group. With the purchase rates in the data set, the optimal treatment group allocation is approximately 58%-70%, but the study could be performed with as much as approximately 97% treatment group allocation.
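A hedged sketch of the class transformation method named above (assuming a roughly 50/50 treatment/control split; the column layout and the sklearn model choice are illustrative assumptions, not the report's implementation):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def fit_class_transformation(X, y, t, random_state=0):
        """Class transformation: z = 1 when (treated and purchased) or (control and no purchase).
        With P(treatment) = 0.5, uplift(x) can be estimated as 2*P(z=1 | x) - 1."""
        z = (y == t).astype(int)                       # transformed target
        clf = RandomForestClassifier(n_estimators=200, random_state=random_state)
        clf.fit(X, z)
        return clf

    def predict_uplift(clf, X):
        return 2.0 * clf.predict_proba(X)[:, 1] - 1.0  # estimated treatment effect per customer

    # toy usage with simulated data
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))
    t = rng.integers(0, 2, size=1000)                  # treatment indicator
    y = (rng.random(1000) < 0.05 + 0.03 * t * (X[:, 0] > 0)).astype(int)  # purchases with uplift when X0 > 0
    uplift = predict_uplift(fit_class_transformation(X, y, t), X)

Ranking customers by the predicted uplift and cumulating the incremental purchases over that ranking is what the Qini curve mentioned above summarises.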
29

Lai, Xin. "Extensions on long-term survivor model with random effects." City University of Hong Kong, 2009. http://libweb.cityu.edu.hk/cgi-bin/ezdb/thesis.pl?phd-ms-b3008233xf.pdf.

Abstract:
Thesis (Ph.D.)--City University of Hong Kong, 2009.
"Submitted to Department of Management Sciences in partial fulfillment of the requirement for the degree of Doctor of Philosophy." Includes bibliographical references (leaves 118-126)
30

Chan, Karen Pui-Shan. "Kernel density estimation, Bayesian inference and random effects model." Thesis, University of Edinburgh, 1990. http://hdl.handle.net/1842/13350.

Abstract:
This thesis contains results of a study in kernel density estimation, Bayesian inference and random effects models, with application to forensic problems. Estimation of the Bayes' factor in a forensic science problem involved the derivation of predictive distributions in non-standard situations. The distribution of the values of a characteristic of interest among different items in forensic science problems is often non-Normal. Background, or training, data were available to assist in the estimation of the distribution for measurements on cat and dog hairs. An informative prior, based on the kernel method of density estimation, was used to derive the appropriate predictive distributions. The training data may be considered to be derived from a random effects model. This was taken into consideration in modelling the Bayes' factor. The usual assumption of the random factor being Normally distributed is unrealistic, so a kernel density estimate was used as the distribution of the unknown random factor. Two kernel methods were employed: the ordinary and adaptive kernel methods. The adaptive kernel method allowed for the longer tail, where little information was available. Formulae for the Bayes' factor in a forensic science context were derived assuming the training data were grouped or not grouped (for example, hairs from one cat would be thought of as belonging to the same group), and that the within-group variance was or was not known. The Bayes' factor, assuming known within-group variance, for the training data, grouped or not grouped, was extended to the multivariate case. The method was applied to a practical example in a bivariate situation. Similar modelling of the Bayes' factor was derived to cope with a particular form of mixture data. Boundary effects were also taken into consideration. The application of kernel density estimation to make inferences about the variance components under the random effects model was studied. Employing the maximum likelihood estimation method, it was shown that the between-group variance and the smoothing parameter in the kernel density estimation were related; they were not identifiable separately. With the smoothing parameter fixed at some predetermined value, the within- and between-group variance estimates from the proposed model were equivalent to the usual ANOVA estimates. Within the Bayesian framework, posterior distributions for the variance components, using various prior distributions for the parameters, were derived incorporating kernel density functions. The modes of these posterior distributions were used as estimates for the variance components. A Student-t distribution within a Bayesian framework was derived after the introduction of a prior for the smoothing parameter. Two methods of obtaining hyper-parameters for the prior were suggested, both involving empirical Bayes methods: a modified leave-one-out maximum likelihood method and a method of moments based on the optimum smoothing parameter determined under a Normality assumption.
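For reference, the fixed-bandwidth kernel density estimate underlying the approach above has the standard form (the adaptive variant lets the bandwidth vary with each observation; generic notation):

    \hat f(x) = \frac{1}{nh} \sum_{i=1}^{n} K\!\left(\frac{x - x_i}{h}\right)

where K is a kernel function (e.g. the Gaussian density) and h is the smoothing parameter whose relationship to the between-group variance is studied in the thesis.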
31

Devamitta, Perera Muditha Virangika. "Robustness of normal theory inference when random effects are not normally distributed." Kansas State University, 2011. http://hdl.handle.net/2097/8786.

Full text
Abstract:
Master of Science
Department of Statistics
Paul I. Nelson
The variance of a response in a one-way random effects model can be expressed as the sum of the variability among and within treatment levels. Conventional methods of statistical analysis for these models are based on the assumption of normality of both sources of variation. Since this assumption is not always satisfied and can be difficult to check, it is important to explore the performance of normal-based inference when normality does not hold. This report uses simulation to explore and assess the robustness of the F-test for the presence of an among-treatment variance component and the normal theory confidence interval for the intra-class correlation coefficient under several non-normal distributions. It was found that the power function of the F-test is robust for moderately heavy-tailed random error distributions. For very heavy-tailed random error distributions, however, power is relatively low, even for a large number of treatments. Coverage rates of the confidence interval for the intra-class correlation coefficient are far from nominal for very heavy-tailed, non-normal random effect distributions.
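The simulation design described here can be mimicked in a few lines. The sketch below (not the report's code, with illustrative parameter values) estimates the rejection rate of the usual F-test for the among-treatment variance component when the errors follow a scaled t distribution with 3 degrees of freedom.

import numpy as np
from scipy import stats

def one_sim(k, n, sigma_a, rng, error="t3"):
    a = rng.normal(0.0, sigma_a, size=k)                      # among-treatment effects
    if error == "t3":
        e = rng.standard_t(df=3, size=(k, n)) / np.sqrt(3.0)  # scaled to unit variance
    else:
        e = rng.normal(0.0, 1.0, size=(k, n))
    y = a[:, None] + e
    msa = n * np.var(y.mean(axis=1), ddof=1)                  # among-treatment mean square
    mse = np.mean(np.var(y, axis=1, ddof=1))                  # within-treatment mean square
    return msa / mse > stats.f.ppf(0.95, k - 1, k * (n - 1))  # reject H0: sigma_a = 0?

rng = np.random.default_rng(0)
rejection_rate = np.mean([one_sim(k=10, n=5, sigma_a=0.5, rng=rng) for _ in range(2000)])
print("estimated power:", rejection_rate)

Setting sigma_a=0 in the same call estimates the type I error rate instead of power.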
APA, Harvard, Vancouver, ISO, and other styles
32

Tamura, Karin Ayumi. "Métodos de predição para modelo logístico misto com k efeitos aleatórios." Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-10032013-125846/.

Full text
Abstract:
The prediction of a future observation in a mixed model is a problem that has been extensively studied. This work treats the problem of assigning values to the random effects and/or the outcome of new groups in mixed logistic regression, where the aim is to predict future outcomes based on previously estimated parameters. In the literature there are some prediction methods for this model that consider only a random intercept. For mixed logistic regression with k random effects, there is currently no method for predicting the random effects of new groups. Therefore, we propose new approaches based on the average zero method, the empirical best predictor (EBP), linear regression and nonparametric regression models. All prediction methods were evaluated using the following estimation methods: Laplace approximation, adaptive Gauss-Hermite quadrature and penalized quasi-likelihood. The estimation and prediction methods were analyzed through simulation studies based on seven scenarios, comparing different values of the group size, the standard deviations of the random effects, the correlation between the random effects, and the fixed effect. The prediction methods were applied to two real data sets. In both problems the data sets had a hierarchical structure, and the objective was to predict the outcome for new groups. The results indicated that the EBP presented the best predictive performance, although at a high computational cost for large data sets. The other methodologies achieved prediction levels similar to the EBP while drastically reducing the computational effort.
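To make the prediction problem concrete, the following sketch (not from the thesis) computes an empirical-best-predictor style estimate of a new group's random intercept by direct numerical integration, for the single random intercept case only rather than the k-random-effects setting treated in the thesis; the data and parameter values are hypothetical.

import numpy as np
from scipy import stats

def predict_random_intercept(y, X, beta, sigma_b, n_nodes=201):
    # E[b | y] under a logistic model with one random intercept b ~ N(0, sigma_b^2).
    b_grid = np.linspace(-6 * sigma_b, 6 * sigma_b, n_nodes)
    eta = X @ beta                                   # fixed-effect part, one value per observation
    lin = eta[:, None] + b_grid[None, :]             # add each candidate intercept value
    p = 1.0 / (1.0 + np.exp(-lin))
    lik = np.prod(np.where(y[:, None] == 1, p, 1.0 - p), axis=0)
    w = lik * stats.norm.pdf(b_grid, scale=sigma_b)  # likelihood times Normal density of b
    # uniform grid, so the spacing cancels in the ratio of the two sums
    return np.sum(b_grid * w) / np.sum(w)

# hypothetical new group with 4 observations and previously estimated parameters
X_new = np.column_stack([np.ones(4), [0.2, -1.0, 0.5, 1.3]])
b_hat = predict_random_intercept(np.array([1, 0, 1, 1]), X_new,
                                 beta=np.array([-0.3, 0.8]), sigma_b=1.2)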
APA, Harvard, Vancouver, ISO, and other styles
33

Zhu, Chang Qing. "Statistical methods for Weibull based random effects models." Thesis, University of Surrey, 1998. http://epubs.surrey.ac.uk/876/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Marques-da-Silva, Antonio Hermes. "Gradient test under non-parametric random effects models." Thesis, Durham University, 2018. http://etheses.dur.ac.uk/12645/.

Full text
Abstract:
The gradient test proposed by Terrell (2002) is an alternative to the likelihood ratio, Wald and Rao tests. The gradient statistic is the inner product of two vectors: the gradient of the likelihood under the null hypothesis (hence the name) and the difference between the estimate under the alternative hypothesis and the estimate under the null hypothesis. The gradient statistic is therefore computationally less expensive than the Wald and Rao statistics, as it does not require matrix operations in its formula. Under some regularity conditions, the gradient statistic has a χ2 distribution under the null hypothesis. The generalised linear model (GLM) introduced by Nelder & Wedderburn (1972) is one of the most important classes of statistical models. It incorporates classical regression modelling and analysis of variance for both continuous and categorical response variables under the exponential family. The random effects model extends the standard GLM to situations where the model does not appropriately describe the variability in the data (overdispersion) (Aitkin, 1996a). We propose a new unified notation for GLMs with random effects and the gradient statistic formula for testing fixed effects parameters in these models. We also develop the Fisher information formulae used to obtain the Rao and Wald statistics. Our main interest in this thesis is to investigate the finite sample performance of the gradient test in generalised linear models with random effects. For this we propose an extensive simulation experiment to study the type I error and the local power of the gradient test, using the methodology developed by Peers (1971) and Hayakawa (1975). We also compare the local power of the gradient test with that of the likelihood ratio, Wald and Rao tests.
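The form of the statistic is easy to compute once the restricted and unrestricted maximum likelihood estimates are available. The sketch below (not from the thesis) illustrates it for a plain logistic GLM without random effects, testing H0: beta1 = 0; under the null the statistic is referred to a chi-square distribution with one degree of freedom.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.3 + 0.6 * X[:, 1]))))

def negloglik(beta):
    eta = X @ beta
    return np.sum(np.log1p(np.exp(eta)) - y * eta)   # negative logistic log-likelihood

def score(beta):
    p = 1.0 / (1.0 + np.exp(-(X @ beta)))
    return X.T @ (y - p)                             # gradient of the log-likelihood

beta_hat = minimize(negloglik, np.zeros(2)).x                        # unrestricted MLE
fit0 = minimize(lambda b: negloglik(np.array([b[0], 0.0])), np.zeros(1))
beta_tilde = np.array([fit0.x[0], 0.0])                              # MLE under H0: beta1 = 0
T_grad = score(beta_tilde) @ (beta_hat - beta_tilde)                 # inner product, ~ chi2(1) under H0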
APA, Harvard, Vancouver, ISO, and other styles
35

Meddings, D. P. "Statistical inference in mixture models with random effects." Thesis, University College London (University of London), 2014. http://discovery.ucl.ac.uk/1455733/.

Full text
Abstract:
There is currently no existing asymptotic theory for statistical inference on the maximum likelihood estimators of the parameters in a mixture of linear mixed models (MLMMs). Despite this, many researchers assume the estimators are asymptotically normally distributed with covariance matrix given by the inverse of the information matrix. Mixture models create new identifiability problems that are not inherited from the underlying linear mixed model (LMM), and this subject has not been investigated for these models. Since identifiability is a prerequisite for the existence of a consistent estimator of the model parameters, this is an important area of research that has been neglected. MLMMs are mixture models with random effects, and they are typically used in medical and genetics settings where random heterogeneity in repeated measures data is observed between measurement units (people, genes), but where it is assumed the units belong to one and only one of a finite number of sub-populations or components. This is expressed probabilistically by using sub-population specific probability distribution functions, often called the component distribution functions. This thesis is motivated by the belief that the use of MLMMs in applied settings such as these is being held back by the lack of development of the statistical inference framework. Specifically, this thesis has the following primary objectives: (i) to investigate the quality of statistical inference provided by different information-matrix-based methods of confidence interval construction; (ii) to investigate the impact of component distribution function separation on the quality of statistical inference, and to propose a new method to quantify this separation; (iii) to determine sufficient conditions for identifiability of MLMMs.
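As a toy illustration of the information-matrix-based interval construction questioned in this abstract, the following sketch (not from the thesis) fits a two-component normal mixture with known mixing weight and unit variances and builds Wald intervals from a numerically inverted observed information matrix; an MLMM would add random effects inside each component, which is exactly where the asymptotic justification becomes unclear.

import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(3)
y = np.concatenate([rng.normal(-2, 1, 120), rng.normal(2, 1, 80)])
w = 0.6                                             # mixing weight assumed known here

def negloglik(theta):
    mu1, mu2 = theta
    dens = w * stats.norm.pdf(y, mu1, 1) + (1 - w) * stats.norm.pdf(y, mu2, 1)
    return -np.sum(np.log(dens))

def hessian(f, x, eps=1e-4):
    # central finite-difference Hessian (observed information of the negative log-likelihood)
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i, e_j = np.eye(n)[i] * eps, np.eye(n)[j] * eps
            H[i, j] = (f(x + e_i + e_j) - f(x + e_i - e_j)
                       - f(x - e_i + e_j) + f(x - e_i - e_j)) / (4 * eps**2)
    return H

mle = minimize(negloglik, np.array([-1.0, 1.0])).x
cov = np.linalg.inv(hessian(negloglik, mle))        # inverse observed information
wald_cis = [(m - 1.96 * se, m + 1.96 * se) for m, se in zip(mle, np.sqrt(np.diag(cov)))]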
APA, Harvard, Vancouver, ISO, and other styles
36

Abdel-Salam, Abdel-Salam Gomaa. "Profile Monitoring with Fixed and Random Effects using Nonparametric and Semiparametric Methods." Diss., Virginia Tech, 2009. http://hdl.handle.net/10919/29387.

Full text
Abstract:
Profile monitoring is a relatively new approach in quality control, best used where the process data follow a profile (or curve) at each time period. The essential idea of profile monitoring is to model the profile via parametric, nonparametric, or semiparametric methods and then monitor the fitted profiles or the estimated random effects over time to determine if there have been changes in the profiles. The majority of previous studies in profile monitoring focused on parametric modeling of either linear or nonlinear profiles, with both fixed and random effects, under the assumption of correct model specification. Our work considers those cases where the parametric model for the family of profiles is unknown or at least uncertain. Consequently, we consider monitoring profiles via two techniques: a nonparametric technique and a semiparametric procedure that combines both parametric and nonparametric profile fits, a procedure we refer to as model robust profile monitoring (MRPM). We also incorporate a mixed model approach into both the parametric and nonparametric model fits. For the mixed effects models, the MMRPM method is an extension of the MRPM method which incorporates a mixed model approach into both parametric and nonparametric model fits, to account for the correlation within profiles and to treat the collection of profiles as a random sample from a common population. For each case, we formulated two Hotelling's T² statistics, one based on the estimated random effects and one based on the fitted values, and obtained the corresponding control limits. In addition, we used two different formulas for the estimated variance-covariance matrix: one based on the pooled sample variance-covariance matrix estimator and a second based on the estimated variance-covariance matrix from successive differences. A Monte Carlo study was performed to compare the integrated mean square errors (IMSE) and the probability of signal of the parametric, nonparametric, and semiparametric approaches. Both correlated and uncorrelated error structure scenarios were evaluated for varying amounts of model misspecification, number of profiles, number of observations per profile, shift location, and in- and out-of-control situations. The semiparametric (MMRPM) method was competitive, and often clearly superior to the parametric and nonparametric methods, over all levels of misspecification in both the uncorrelated and correlated scenarios. For a correctly specified model, the IMSE and the simulated probability of signal for the parametric and the MMRPM methods were identical (or nearly so). For the severe model misspecification case, the nonparametric and MMRPM methods were identical (or nearly so). For the mild model misspecification case, the MMRPM method was superior to the parametric and nonparametric methods. Therefore, this simulation supports the claim that the MMRPM method is robust to model misspecification. In addition, the MMRPM method performed better for data sets with correlated error structure. The performance of the nonparametric and MMRPM methods also improved as the number of observations per profile increased, since more observations over the same range of X generally enable more knots to be used by the penalized spline method, resulting in greater flexibility and improved fits in the nonparametric curves and, consequently, the semiparametric curves.
The parametric, nonparametric and semiparametric approaches were used to fit the relationship between the torque produced by an engine and engine speed in the automotive industry. We then used a Hotelling's T² statistic based on the estimated random effects to conduct Phase I studies to determine the outlying profiles. The parametric, nonparametric and semiparametric methods showed that the process was stable. Although all three methods reached the same conclusion regarding the in-control status of each profile, the nonparametric and MMRPM results provide a better description of the actual behavior of each profile. Thus, the nonparametric and MMRPM methods give the user a greater ability to properly interpret the true relationship between engine speed and torque for this type of engine and an increased likelihood of detecting unusual engines in future production. Finally, we conclude that the nonparametric and semiparametric approaches perform better than the parametric approach when the user's model is misspecified. The case study demonstrates that the proposed nonparametric and semiparametric methods are more efficient, flexible and robust to model misspecification for Phase I profile monitoring in a practical application. Thus, our methods are robust to the common problem of model misspecification. We also found that both the nonparametric and the semiparametric methods result in charts with good ability to detect changes in Phase I data, and with easily calculated control limits. The proposed methods provide greater flexibility and efficiency than current parametric methods for Phase I profile monitoring that rely on correct model specification, an unrealistic situation in many practical industrial applications.
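For a rough sense of the Phase I screening step, the following sketch (not the dissertation's code) computes a Hotelling's T² value for each profile from its vector of estimated random effects, using the successive-differences variance-covariance estimator mentioned above; the control limit shown is a crude chi-square approximation, not the limits derived in the dissertation.

import numpy as np
from scipy import stats

def t2_successive_differences(B):
    # B: (m profiles) x (p estimated random effects per profile)
    m, p = B.shape
    center = B.mean(axis=0)
    D = np.diff(B, axis=0)                            # successive differences between profiles
    S = D.T @ D / (2.0 * (m - 1))                     # covariance estimate based on successive differences
    Sinv = np.linalg.inv(S)
    dev = B - center
    t2 = np.einsum("ij,jk,ik->i", dev, Sinv, dev)     # one T^2 value per profile
    ucl = stats.chi2.ppf(0.9973, df=p)                # rough approximate control limit
    return t2, ucl

rng = np.random.default_rng(4)
B = rng.multivariate_normal([0, 0, 0], np.diag([1.0, 0.5, 0.2]), size=30)
t2, ucl = t2_successive_differences(B)
flagged_profiles = np.where(t2 > ucl)[0]              # candidate outlying profiles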
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
37

Gupta, Resmi. "Flexible Multivariate Joint Model of Longitudinal Intensity and Binary Process for Medical Monitoring of Frequently Collected Data." University of Cincinnati / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1561393989215645.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Jia, Yue. "Using sampling weights in the estimation of random effects model." Ann Arbor, Mich. : ProQuest, 2007. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3258527.

Full text
Abstract:
Thesis (Ph.D. in Statistical Science)--S.M.U., 2007.
Title from PDF title page (viewed Mar. 18, 2008). Source: Dissertation Abstracts International, Volume: 68-04, Section: B, page: 2431. Adviser: Lynne Stokes. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
39

Dishman, Tamarah Crouse. "Identifying Outliers in a Random Effects Model For Longitudinal Data." UNF Digital Commons, 1989. http://digitalcommons.unf.edu/etd/191.

Full text
Abstract:
Identifying non-tracking individuals in a population of longitudinal data has many applications as well as complications. The analysis of longitudinal data is a special study in itself. There are several accepted methods; of those, we chose a two-stage random effects model coupled with the Expectation-Maximization (E-M) algorithm. Our project consisted of first estimating population parameters using the previously mentioned methods. The Mahalanobis distance was then used to sequentially identify and eliminate non-trackers from the population. Computer simulations were run in order to measure the algorithm's effectiveness. Our results show that the average specificity across the repetitions of each simulation remained at the 99% level. Sensitivity was best when only a single non-tracker with a very different parameter a was present. The sensitivity of the program decreased when more than one non-tracker was present, indicating that our method of identifying a non-tracker is not effective when the estimates of the population parameters are contaminated.
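The sequential elimination step can be sketched as follows (not the thesis code): given per-subject parameter estimates from a two-stage fit, repeatedly flag the subject whose Mahalanobis distance from the current population estimates is largest and drop it while that distance exceeds a chi-square cutoff; the cutoff and stopping rule here are illustrative assumptions.

import numpy as np
from scipy import stats

def screen_non_trackers(est, alpha=0.01):
    est = np.asarray(est, dtype=float)                 # rows: subjects; cols: parameter estimates
    keep = np.arange(len(est))
    cutoff = stats.chi2.ppf(1 - alpha, df=est.shape[1])
    while len(keep) > est.shape[1] + 1:
        sub = est[keep]
        center = sub.mean(axis=0)
        cov_inv = np.linalg.inv(np.cov(sub, rowvar=False))
        d2 = np.einsum("ij,jk,ik->i", sub - center, cov_inv, sub - center)
        worst = np.argmax(d2)
        if d2[worst] <= cutoff:
            break
        keep = np.delete(keep, worst)                  # eliminate the flagged subject
    return keep                                        # indices of subjects judged to be trackers

rng = np.random.default_rng(5)
estimates = np.vstack([rng.normal([1.0, 0.5], 0.2, size=(40, 2)),   # trackers
                       np.array([[3.0, 2.0]])])                     # one clear non-tracker
trackers = screen_non_trackers(estimates)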
APA, Harvard, Vancouver, ISO, and other styles
40

Alenius, Peter, and Edward Hallgren. "P/E-effekten : En utvärdering av en portföljvalsstrategi på Stockholmsbörsen mellan 2004 och 2012." Thesis, Umeå universitet, Företagsekonomi, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-76206.

Full text
Abstract:
One could argue that the most discussed topic in finance is whether or not it is possible to "beat the market". Even though many people claim to do this, there is little evidence to support the idea that one can consistently beat the market over a long period of time. There are indeed several examples of investors who have managed to outperform the market consistently for a long time, but the success of these individuals or institutions could be considered by many to be pure luck. One of the many strategies that has been evaluated by several researchers, and is said to generate a risk-adjusted return greater than that of the market, is based on the P/E effect. This strategy relies on the financial ratio P/E – price divided by earnings – and is applied by constructing portfolios consisting of stocks with low P/E ratios. Several studies have confirmed the existence of the P/E effect on various stock markets around the world and over different time periods. On the Swedish market, however, few studies have generated the same results, and most of them can be considered insufficient with regard to sample sizes and methods, creating a need for more extensive studies. We have examined the P/E strategy on the Swedish Stock Exchange (SSE) between 2004 and 2012. The sample included 358 companies (excluding financial companies) with the necessary data available. The stocks were divided into five portfolios based on their yearly P/E ratios (low to high), after which the monthly returns of the individual stocks were calculated using a logarithmic formula. The returns were also risk-adjusted using the Capital Asset Pricing Model (CAPM), followed by a regression analysis to see whether possible abnormal returns could be considered statistically significant for the examined time period. The results of our study indicate that the P/E effect was not present on the Swedish Stock Exchange during the examined time period, and we therefore conclude that it was not possible to utilize a strategy based on the P/E effect between 2004 and 2012 in order to achieve an abnormal return. The results can be used to argue that the Swedish stock market is more efficient than, for example, the U.S. stock market, where the P/E effect has been found to exist.
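For readers unfamiliar with the mechanics, the following sketch (not the thesis code, with simulated prices and an assumed constant risk-free rate) shows the two computational steps described above: monthly logarithmic returns for a portfolio and the market, and a CAPM regression whose intercept (Jensen's alpha) would measure the abnormal return.

import numpy as np

rng = np.random.default_rng(6)
prices_portfolio = 100 * np.exp(np.cumsum(rng.normal(0.006, 0.05, 108)))   # 9 years of monthly prices
prices_market = 100 * np.exp(np.cumsum(rng.normal(0.005, 0.04, 108)))
rf = 0.002                                                                  # assumed monthly risk-free rate

r_p = np.diff(np.log(prices_portfolio)) - rf        # excess log returns of the low-P/E portfolio
r_m = np.diff(np.log(prices_market)) - rf           # excess log returns of the market index

X = np.column_stack([np.ones_like(r_m), r_m])
alpha, beta = np.linalg.lstsq(X, r_p, rcond=None)[0]   # CAPM: r_p = alpha + beta * r_m + error
# A statistically significant positive alpha would indicate an abnormal (risk-adjusted) return.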
APA, Harvard, Vancouver, ISO, and other styles
41

Wolfe, Rory St John. "Models and estimation for repeated ordinal responses, with application to telecommunications experiments." Thesis, University of Southampton, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.242240.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Alkhamisi, Mahdi. "Asymptotic analysis of the one-way random effects models." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/NQ50063.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Murphy, Dennis John. "Post-data pivotal inference in balanced random effects models." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/NQ51659.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Wang, Yaqin. "Estimation of accelerated failure time models with random effects." [Ames, Iowa : Iowa State University], 2006.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
45

Goodwin, Christopher C. H. "The Influence of Cost-sharing Programs on Southern Non-industrial Private Forests." Thesis, Virginia Tech, 2001. http://hdl.handle.net/10919/30895.

Full text
Abstract:
This study was undertaken in response to concerns that decreasing levels of funding for government tree planting cost-share programs will result in significant reductions in non-industrial private tree planting efforts in the South. The purpose of this study is to quantify how the funding of various cost-share programs and market signals interact and affect the level of private tree planting. The results indicate that the ACP, CRP, and Soil Bank programs have been more influential than the FIP, FRM, FSP, SIP, and State-run subsidy programs. Reductions in CRP funding will result in less tree planting, while it is not clear that funding reductions in FIP, or other programs targeted toward reforestation after harvest, will have a negative impact on tree planting levels.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
46

Ou, Zhaoyang. "An association model for specific-interaction effects in random copolymer solutions." Thesis, Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/9140.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Atenafu, Eshetu Getachew. "Sequential tests for monitoring parameters of a nested random effects model." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ59775.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Ketchum, Jessica McKinney. "A Normal-Mixture Model with Random-Effects for RR-Interval Data." VCU Scholars Compass, 2006. http://hdl.handle.net/10156/1979.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Ngaruye, Innocent. "Contributions to Small Area Estimation : Using Random Effects Growth Curve Model." Doctoral thesis, Linköpings universitet, Matematisk statistik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-137206.

Full text
Abstract:
This dissertation considers Small Area Estimation with a main focus on estimation and prediction for repeated measures data. The demand for small area statistics covers both cross-sectional and repeated measures data. For instance, small area estimates for repeated measures data may be useful to public policy makers for purposes such as fund allocation or new educational and health programs, where decision makers might be interested in the trend of estimates for a specific characteristic of interest for a given category of the target population as a basis for their planning. It has been shown that the multivariate approach for model-based methods in small area estimation may achieve substantial improvement over the usual univariate approach. In this work, we consider repeated surveys taken on the same subjects at different time points. The population from which a sample has been drawn is partitioned into several non-overlapping subpopulations, and within all subpopulations there is the same number of group units. The aim is to propose a model that borrows strength across small areas and over time, with particular interest in growth profiles over time. The model accounts for repeated surveys, group individuals and random effects variations. Firstly, a multivariate linear model for repeated measures data is formulated under small area estimation settings. The estimation of model parameters is discussed within a likelihood-based approach, and the prediction of random effects and of small area means across time points, per group unit and for all time points, is obtained. In particular, as an application of the proposed model, an empirical study is conducted to produce district-level estimates of beans in Rwanda for the 2014 agricultural seasons, covering two varieties, bush beans and climbing beans. Secondly, the thesis develops the properties of the proposed estimators and discusses the computation of their first and second moments. Through a method based on parametric bootstrap, these moments are used to estimate the mean-squared errors of the predicted small area means. Finally, a particular case of incomplete multivariate repeated measures data that follow a monotonic sample pattern is studied for small area estimation. By using a conditional likelihood based approach, the estimators of model parameters are derived. The prediction of random effects and predicted small area means are also produced.
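The parametric bootstrap step for mean-squared-error estimation can be illustrated on a much simpler model than the one developed in the thesis. The sketch below (not from the dissertation) applies the idea to a one-way random-effects model whose area means are predicted by a standard shrinkage formula; the moment estimators, sample sizes and bootstrap size are illustrative assumptions.

import numpy as np

def fit_and_predict(y):
    # y: (m areas) x (n units); moment estimates and shrinkage predictions of mu + u_i
    m, n = y.shape
    mu = y.mean()
    s2_e = np.mean(np.var(y, axis=1, ddof=1))
    s2_u = max(np.var(y.mean(axis=1), ddof=1) - s2_e / n, 1e-8)
    gamma = s2_u / (s2_u + s2_e / n)
    pred = mu + gamma * (y.mean(axis=1) - mu)
    return pred, (mu, s2_u, s2_e)

def bootstrap_mse(y, B=500, seed=0):
    rng = np.random.default_rng(seed)
    m, n = y.shape
    _, (mu, s2_u, s2_e) = fit_and_predict(y)
    sq_err = np.zeros(m)
    for _ in range(B):
        u = rng.normal(0, np.sqrt(s2_u), m)
        y_b = mu + u[:, None] + rng.normal(0, np.sqrt(s2_e), (m, n))
        pred_b, _ = fit_and_predict(y_b)             # refit on each bootstrap sample
        sq_err += (pred_b - (mu + u)) ** 2           # compare with the bootstrap "true" area means
    return sq_err / B                                # estimated MSE per small area

rng = np.random.default_rng(7)
y = 5 + rng.normal(0, 0.7, (15, 1)) + rng.normal(0, 1.0, (15, 8))
mse_hat = bootstrap_mse(y)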
APA, Harvard, Vancouver, ISO, and other styles
50

Li, Zhengrong. "Model-based Tests for Standards Evaluation and Biological Assessments." Diss., Virginia Tech, 2006. http://hdl.handle.net/10919/29108.

Full text
Abstract:
Implementation of the Clean Water Act requires agencies to monitor aquatic sites on a regular basis and evaluate the quality of these sites. Sites are evaluated individually even though there may be numerous sites within a watershed. In some cases, sampling frequency is inadequate and the evaluation of site quality may have low reliability. This dissertation evaluates testing procedures for the determination of site quality based on model-based procedures that allow other sites to contribute information to the assessment of the test site. Test procedures are described for situations that involve multiple measurements from sites within a region, and for single measurements when stressor information is available or when covariates are used to account for individual site differences. Tests based on analysis of variance methods are described for fixed effects and random effects models. The proposed model-based tests compare limits (tolerance limits or prediction limits) for the data with the known standard. When the sample size for the test site is small, using model-based tests improves the detection of impaired sites. The effects of sample size, heterogeneity of variance, and similarity between sites are discussed. Reference-based standards and the corresponding evaluation of site quality are also considered. Regression-based tests provide methods for incorporating information from other sites when there is information on stressors or covariates. Extension of some of the methods to multivariate biological observations and stressors is also discussed. Redundancy analysis is used as a graphical method for describing the relationship between biological metrics and stressors. A clustering method for finding stressor-response relationships is presented and illustrated using data from the Mid-Atlantic Highlands. Multivariate elliptical and univariate regions for assessment of site quality are discussed.
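One simple version of the limit-versus-standard comparison described here can be sketched as follows (not from the dissertation): an upper prediction limit for a future observation at the test site is computed with the within-site variance pooled across all monitored sites, so that other sites contribute information, and the limit is compared with a known standard. The data, confidence level and direction of the comparison are illustrative assumptions, not the tests derived in the dissertation.

import numpy as np
from scipy import stats

def upper_prediction_limit(site_data, all_sites, conf=0.95):
    n_i = len(site_data)
    # pooled within-site variance across all sites (ANOVA-style borrowing of information)
    ss = sum(np.sum((np.asarray(s) - np.mean(s)) ** 2) for s in all_sites)
    df = sum(len(s) - 1 for s in all_sites)
    s_pooled = np.sqrt(ss / df)
    t = stats.t.ppf(conf, df)
    return np.mean(site_data) + t * s_pooled * np.sqrt(1 + 1 / n_i), df

sites = [[4.1, 3.8, 4.4], [5.0, 5.3, 4.7, 5.1], [3.2, 3.9]]   # made-up pollutant measurements
limit, df = upper_prediction_limit(sites[0], sites)
standard = 6.0
no_evidence_of_exceedance = limit < standard   # limit below the standard under this framing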
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles