Dissertations / Theses on the topic 'Profile Likelihood'

Consult the top 31 dissertations / theses for your research on the topic 'Profile Likelihood.'


1

Li, Hongfei. "Approximate profile likelihood estimation for spatial-dependence parameters." Columbus, Ohio : Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1191267954.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Tunyi, Abongeh Akumbom. "Takeover likelihood modelling : target profile and portfolio returns." Thesis, University of Glasgow, 2014. http://theses.gla.ac.uk/5445/.

Full text
Abstract:
This thesis investigates four interrelated research issues in the context of takeover likelihood modelling. These include: (1) the determinants of target firms’ takeover likelihood, (2) the extent to which targets can be predicted using publicly available information, (3) whether target prediction can form the basis of a profitable investment strategy, and – if not – (4) why investing in predicted targets is a suboptimal investment strategy. The research employs a UK sample of 32,363 firm-year observations (consisting of 1,635 target and 31,737 non-target firm-year observations) between 1988 and 2010. Prior literature relies on eight (old) hypotheses for modelling takeover likelihood, i.e., the determinants of takeover likelihood. Consistent with prior studies, I find that takeover likelihood increases with the availability of free cash flow (Powell (1997, 2001, 2004)), the level of tangible assets (Ambrose and Megginson (1992)) and management inefficiency (Palepu (1986)), but decreases with firm age (Brar et al. (2009)). The empirical evidence lends no support to the firm undervaluation, industry disturbance, growth-resource mismatch or firm size hypotheses (Palepu (1986)). I extend prior research by developing eleven (new) hypotheses for target prediction. Consistent with the new hypotheses, I find evidence that takeover likelihood is an inverse U-shaped function of firm size, leverage and payroll burden. Takeover likelihood also increases with share repurchase activity, market liquidity and stock market performance, and decreases with industry concentration. As anticipated, the new hypotheses improve the within-sample classification and out-of-sample predictive abilities of prior takeover prediction models. This study also contributes to the literature by exploring the effects of different methodological choices on the performance of takeover prediction models. The analyses reveal that the performance of prediction models is moderated by different modelling choices. For example, I find evidence that the use of longer estimation windows (e.g., a recursive model), as well as portfolio selection techniques that yield larger holdout samples (deciles and quintiles), generally results in better model performance. Importantly, I show that some of the methodological choices of prior researchers (e.g., a one-year holdout period and a matched-sampling methodology) either directly bias research findings or result in suboptimal model performance. Additionally, there is no evidence that model parameters go stale, at least not over a ten-year out-of-sample test period. Hence, the parameters developed in this study can be employed by researchers and practitioners to ascribe takeover probabilities to UK firms. Despite the new model’s success in predicting targets, I find that, consistent with the market efficiency hypothesis, predicted target portfolios do not consistently earn significant positive abnormal returns in the long run. That is, despite the high target concentrations achieved, the portfolios generate long-run abnormal returns which are not statistically different from zero. I extend prior literature by showing that these portfolios are likely to achieve lower-than-expected returns for five reasons. First, a substantial proportion of each predicted target portfolio consists of type II errors (i.e., non-targets) which, on average, do not earn significant positive abnormal returns. Second, the portfolios tend to hold a high number of firms that go bankrupt, leading to a substantial decline in portfolio returns. Third, the presence of poorly performing small firms within the portfolios further dilutes their returns. Fourth, targets perform poorly prior to takeover bids and this period of poor performance coincides with the portfolio holding period. Fifth, targets that can be successfully predicted tend to earn lower-than-expected holding period returns, perhaps due to market-wide anticipation. Overall, this study contributes to the literature by developing new hypotheses for takeover prediction, by advancing a more robust methodological framework for developing and testing prediction models, and by empirically explaining why takeover prediction is, perhaps, a suboptimal investment strategy.
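As a rough illustration of the prediction-and-portfolio workflow the abstract describes (not the author's actual model or data), the following Python sketch fits a logistic takeover-likelihood model on synthetic firm-years, including a squared size term to capture an inverse U-shape, and holds the top decile of predicted probabilities as the "predicted target" portfolio:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000
    # Hypothetical firm-year covariates: size, leverage, free cash flow, firm age
    X = rng.normal(size=(n, 4))
    # Synthetic takeover indicator with an inverse U-shape in "size" (column 0)
    eta = -3.0 + 1.0 * X[:, 0] - 0.8 * X[:, 0] ** 2 + 0.5 * X[:, 2] - 0.3 * X[:, 3]
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))

    # Add the squared size term so the model can capture the inverse U-shape
    X_aug = np.column_stack([X, X[:, 0] ** 2])
    model = LogisticRegression(max_iter=1000).fit(X_aug, y)

    # Rank firms by predicted takeover probability; hold the top decile
    p_hat = model.predict_proba(X_aug)[:, 1]
    portfolio = np.where(p_hat >= np.quantile(p_hat, 0.9))[0]
    print(f"portfolio size: {portfolio.size}, "
          f"target concentration: {y[portfolio].mean():.2%} vs base rate {y.mean():.2%}")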
APA, Harvard, Vancouver, ISO, and other styles
3

Liu, Huayu. "Modified Profile Likelihood Approach for Certain Intraclass Correlation Coefficient." Digital Archive @ GSU, 2011. http://digitalarchive.gsu.edu/math_theses/96.

Full text
Abstract:
In this paper we consider the problem of constructing confidence intervals and lower bounds for the intraclass correlation coefficient in an interrater reliability study where the raters are randomly selected from a population of raters. The likelihood function of the interrater reliability is derived and simplified, and the profile-likelihood-based approach is readily available for computing confidence intervals for the interrater reliability. Unfortunately, the confidence intervals computed by using the profile likelihood function are in general too narrow to have the desired coverage probabilities. From a practical point of view, a conservative approach, if it is at least as precise as any existing method, is preferred since it gives the correct results with a probability higher than claimed. Under this rationale, we propose the so-called modified likelihood approach in this paper. A simulation study shows that the proposed method in general performs better than currently used methods.
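The interval construction described here can be sketched generically: profile the nuisance parameter out of the likelihood, then keep every value of the parameter of interest whose profile log-likelihood lies within half a chi-squared quantile of the maximum. A minimal Python sketch with a toy gamma model rather than the thesis's intraclass-correlation model (the scale profiles out in closed form; all numbers are synthetic):

    import numpy as np
    from scipy.stats import gamma, chi2

    rng = np.random.default_rng(1)
    x = rng.gamma(shape=2.0, scale=3.0, size=50)   # synthetic data

    def profile_loglik(k, x):
        """Profile log-likelihood of the gamma shape k: for fixed k the
        MLE of the scale is mean(x)/k, so the nuisance profiles out exactly."""
        theta_hat = x.mean() / k
        return gamma.logpdf(x, a=k, scale=theta_hat).sum()

    ks = np.linspace(0.5, 6.0, 500)
    pl = np.array([profile_loglik(k, x) for k in ks])
    k_hat = ks[pl.argmax()]

    # 95% CI: points where 2*(max - profile) <= chi-squared(1 df) quantile
    inside = 2 * (pl.max() - pl) <= chi2.ppf(0.95, df=1)
    print(f"shape MLE ~ {k_hat:.2f}, "
          f"95% PL CI ~ [{ks[inside].min():.2f}, {ks[inside].max():.2f}]")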
APA, Harvard, Vancouver, ISO, and other styles
4

Gerhard, Daniel [Verfasser]. "Simultaneous small sample inference based on profile likelihood / Daniel Gerhard." Hannover : Technische Informationsbibliothek und Universitätsbibliothek Hannover, 2010. http://d-nb.info/1008373680/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Pan, Juming. "Adaptive LASSO For Mixed Model Selection via Profile Log-Likelihood." Bowling Green State University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1466633921.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Läuter, Henning. "On approximate likelihood in survival models." Universität Potsdam, 2006. http://opus.kobv.de/ubp/volltexte/2011/5161/.

Full text
Abstract:
We give a common framework for different estimators in survival models. For models with nuisance parameters we approximate the profile likelihood and derive estimators, in particular for the proportional hazards model.
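As one concrete instance of this idea, the Cox partial likelihood for the proportional hazards model can be read as a likelihood with the baseline hazard profiled out. A minimal sketch under simplifying assumptions (one covariate, no censoring, no tied event times, synthetic data):

    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(2)
    n = 200
    z = rng.normal(size=n)                      # covariate
    t = rng.exponential(1.0 / np.exp(0.7 * z))  # event times, true beta = 0.7

    def neg_partial_loglik(beta):
        """Cox partial log-likelihood: at each event time the failing
        subject competes against everyone still at risk."""
        zs = z[np.argsort(t)]
        # risk set of the i-th smallest time is subjects i, i+1, ..., n-1,
        # so the log-sum-exp over risk sets is a reversed cumulative logaddexp
        log_risk = np.logaddexp.accumulate((beta * zs)[::-1])[::-1]
        return -(beta * zs - log_risk).sum()

    fit = minimize_scalar(neg_partial_loglik, bounds=(-5, 5), method="bounded")
    print(f"partial-likelihood estimate of beta: {fit.x:.3f}")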
APA, Harvard, Vancouver, ISO, and other styles
7

Pere, Pekka. "Adjusted profile likelihood applied to estimation and testing of unit roots." Thesis, University of Oxford, 1997. http://ora.ox.ac.uk/objects/uuid:d90da262-5a4b-4114-9426-cbecb1413a30.

Full text
Abstract:
A short review of unit-root econometrics is given from the point of view of testing. The adjusted likelihoods of Cox and Reid (1987, 1993) are presented and applied to the usual AR(1) with constant, an AR(1) process suggested by Bhargava (1986), and an AR(2) process. Biases of the associated maximum-likelihood estimates (MLEs) are briefly considered. A Wald statistic based on the adjusted profile likelihood is proposed. The Cox-Reid adjusted estimate (AE) for the autoregressive coefficient of the unit-root AR(1) model with zero constant is even asymptotically more accurate, in terms of mean-square error (MSE), than the MLE. The derived tests are more powerful than the corresponding Dickey-Fuller tests if the starting value of the process deviates sufficiently from the unconditional mean. An iteratively adjusted estimate is introduced which can also be more accurate than the MLE. We also obtain an estimate and a Wald statistic which are asymptotically distributed compactly and symmetrically around zero under a unit root, but the estimate is not consistent in general. The MLE and the AE are consistent not only as the sample size tends to infinity but also when the absolute deviation of the starting value from the unconditional mean of the time series is tuned towards infinity. The finding exposes why Wald-type tests are more powerful than tests based on standardised coefficients when the starting value lies far from the unconditional mean. The AE and the corresponding Wald statistic are derived for the Bhargava AR(1) model. We obtain their asymptotic distributions and simulate the previously unknown finite-sample distributions of the MLE and the usual Wald statistic under a unit root. Again the AE is the more accurate estimate. Distortion towards a unit root is pointed out. The adjusted estimate and the Wald statistic follow their asymptotic distributions better than the unadjusted ones when the process is a unit-root AR(1) with drift or the Bhargava AR(1). Accuracy is gained also under the unit-root AR(2) model. Practical advice is to apply a unit-root test based on the Bhargava model when the process can be assumed to have started from the unconditional mean under the alternative, and otherwise a test based on the ordinary AR(1)-with-constant model. The adjustment often decreases the bias at the cost of variance, but it can also yield a reduction in both, which happens under the Bhargava model and 'typically' under the unit-root AR(2) model. The two most distinctive findings are perhaps that the AE can be more accurate than the corresponding MLE asymptotically, or in finite samples when the AE is calculated from an embedding model.
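To convey the flavor of the Cox-Reid adjustment in the AR(1)-with-constant case, the sketch below uses the conditional likelihood, for which the nuisance parameters (constant and innovation variance) have closed-form constrained MLEs and a diagonal observed-information block diag(n/sig2, n/(2*sig2^2)). This is an illustrative simplification that ignores the parameter-orthogonalization subtleties of the actual derivations:

    import numpy as np

    def profile_and_adjusted(y, rho_grid):
        """Conditional profile log-likelihood of rho for an AR(1) with constant,
        plus a Cox-Reid-style adjustment -0.5*log det j_nn for the nuisance
        block (c, sigma^2) evaluated at the constrained MLE."""
        y0, y1 = y[:-1], y[1:]
        n = y1.size
        lp, la = [], []
        for rho in rho_grid:
            resid = y1 - rho * y0
            c_hat = resid.mean()
            sig2 = ((resid - c_hat) ** 2).mean()
            ll = -0.5 * n * (np.log(2 * np.pi * sig2) + 1.0)
            adj = -0.5 * (np.log(n / sig2) + np.log(n / (2 * sig2 ** 2)))
            lp.append(ll)
            la.append(ll + adj)
        return np.array(lp), np.array(la)

    rng = np.random.default_rng(3)
    y = np.zeros(100)
    for t in range(1, 100):                # simulate a unit-root AR(1), c = 0
        y[t] = y[t - 1] + rng.normal()
    grid = np.linspace(0.5, 1.1, 601)
    lp, la = profile_and_adjusted(y, grid)
    print(f"profile MLE of rho: {grid[lp.argmax()]:.3f}, "
          f"adjusted estimate: {grid[la.argmax()]:.3f}")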
APA, Harvard, Vancouver, ISO, and other styles
8

Di, Gangi Pietro. "Study of the sensitivity of the XENON1T experiment with the profile likelihood method." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amslaurea.unibo.it/8348/.

Full text
Abstract:
Today we know that ordinary matter accounts for only a small fraction of the total mass content of the Universe. The hypothesis of the existence of Dark Matter, a new kind of matter that interacts only gravitationally and, perhaps, through the weak force, has been supported by numerous lines of evidence on both galactic and cosmological scales. Efforts devoted to the search for the so-called WIMPs (Weakly Interacting Massive Particles), the generic name given to Dark Matter particles, have multiplied over recent years. The XENON1T experiment, currently under construction at the Laboratori Nazionali del Gran Sasso (LNGS) and expected to start taking data by the end of 2015, will mark a significant step forward in the direct search for Dark Matter, which is based on the detection of elastic collisions on target nuclei. XENON1T represents the current phase of the XENON project, which has already carried out the XENON10 (2005) and XENON100 (2008, still running) experiments and also plans a further development, called XENONnT. The XENON1T detector uses about 3 tonnes of liquid xenon (LXe) and is based on a dual-phase Time Projection Chamber (TPC). Detailed Monte Carlo simulations of the detector geometry, together with dedicated measurements of the radioactivity of the materials and estimates of the purity of the xenon used, have made it possible to predict the expected background accurately. In this thesis, we present the study of the expected sensitivity of XENON1T carried out with the statistical method known as the Profile Likelihood (PL) Ratio, which, within a frequentist approach, allows a proper treatment of systematic uncertainties. First, the sensitivity was estimated using the simplified Likelihood Ratio method, which does not account for any systematics. In this way it was possible to evaluate the impact of the main systematic uncertainty for XENON1T, namely that on the scintillation light yield of xenon for low-energy nuclear recoils. The final results obtained with the PL method indicate that XENON1T will be able to significantly improve the current WIMP exclusion limits; the maximum sensitivity reaches a cross section σ = 1.2·10⁻⁴⁷ cm² for a WIMP mass of 50 GeV/c² and a nominal exposure of 2 tonne·years. These results are in line with XENON1T's ambitious goal of lowering the current limits on the WIMP cross section σ by two orders of magnitude. With such performance, and considering 1 tonne of LXe as fiducial mass, XENON1T will be able to surpass the current limits (LUX experiment, 2013) after only 5 days of data taking.
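The profile likelihood ratio machinery used for this sensitivity study can be illustrated with the simplest counting-experiment version: n ~ Poisson(mu*s + b) with a known expected signal s (at mu = 1) and background b, and the one-sided statistic q_mu used for upper limits. A toy-level sketch under those assumptions (not the XENON1T likelihood, which profiles over systematic nuisance parameters such as the scintillation-yield uncertainty):

    from scipy.stats import poisson
    from scipy.optimize import brentq

    s, b = 10.0, 5.0   # hypothetical expected signal (at mu = 1) and background

    def q_mu(mu, n):
        """One-sided profile likelihood ratio statistic for upper limits.
        With no nuisance parameters the profiling is trivial and
        mu_hat = max(0, (n - b) / s)."""
        mu_hat = max(0.0, (n - b) / s)
        if mu_hat > mu:
            return 0.0
        ll = lambda m: poisson.logpmf(n, m * s + b)
        return 2.0 * (ll(mu_hat) - ll(mu))

    n_obs = 6
    # 95% CL upper limit: smallest mu whose q_mu exceeds the asymptotic
    # one-sided threshold 1.645^2 ~ 2.71 (Cowan et al., 2011)
    limit = brentq(lambda mu: q_mu(mu, n_obs) - 2.71, 1e-6, 10.0)
    print(f"95% CL upper limit on mu: {limit:.2f}")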
APA, Harvard, Vancouver, ISO, and other styles
9

Dai, Chenglu. "The Profile Likelihood Method in Finding Confidence Intervals and its Comparison with the Bootstrap Percentile Method." Fogler Library, University of Maine, 2008. http://www.library.umaine.edu/theses/pdf/DaiC2008.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Debenedetti, Chiara. "Search for VH → leptons + b¯b with the ATLAS experiment at the LHC." Thesis, University of Edinburgh, 2014. http://hdl.handle.net/1842/9909.

Full text
Abstract:
The search for a Higgs boson decaying to a bb̄ pair is one of the key analyses ongoing at the ATLAS experiment. Despite being the largest branching-ratio decay for a Standard Model Higgs boson, a large dataset is necessary to perform this analysis because of the very large backgrounds affecting the measurement. To discriminate the electroweak H → bb̄ signal from the large QCD backgrounds, the associated production of the Higgs with a W or a Z boson decaying leptonically is used. Different techniques have been proposed to enhance the signal over background ratio in the VH(bb̄) channel, from dedicated kinematic cuts, to a single large-radius jet to identify the two collimated b's in the Higgs high transverse momentum regime, to multivariate techniques. The high-pT approach, using a large-radius jet to identify the b's coming from the Higgs decay, has been tested against an analysis based on kinematic cuts for a dataset of 4.7 fb⁻¹ luminosity at √s = 7 TeV, and compatible results were found for the same transverse momentum range. Using a kinematic-cut-based approach, the VH(bb̄) signal search has been performed for the full LHC Run 1 dataset: 4.7 fb⁻¹ at √s = 7 TeV and 20.7 fb⁻¹ at √s = 8 TeV. Several backgrounds to this analysis, such as Wbb̄, have not been measured in data yet, and an accurate study of the theoretical description has been performed, comparing the predictions of various Monte Carlo generators at different orders. The complexity of the analysis requires a profile likelihood fit with several categories and almost 200 parameters, taking into account all the systematics coming from experimental or modelling limitations, to extract the result. To validate the fit model, a test of the ability to extract the signal is performed on the resonant VZ(bb̄) background. A 4.8σ excess compatible with the Standard Model rate expectation has been measured, with a best-fit value μ_VZ = 0.93 +0.22/−0.21. The full LHC Run 1 dataset result for the VH(bb̄) process is an observed limit of 1.4 × SM (1.3 × SM expected), with a best-fit value of 0.2 ± 0.5 (stat) ± 0.4 (sys) for a Higgs boson of 125 GeV mass.
APA, Harvard, Vancouver, ISO, and other styles
11

Hauser, Michael A. "Maximum Likelihood Estimators for ARMA and ARFIMA Models. A Monte Carlo Study." Department of Statistics and Mathematics, Abt. f. Angewandte Statistik u. Datenverarbeitung, WU Vienna University of Economics and Business, 1998. http://epub.wu.ac.at/794/1/document.pdf.

Full text
Abstract:
We analyze by simulation the properties of two time-domain and two frequency-domain estimators for low-order autoregressive fractionally integrated moving average Gaussian models, ARFIMA(p,d,q). The estimators considered are the exact maximum likelihood for demeaned data, EML, the associated modified profile likelihood, MPL, and the Whittle estimator with, WLT, and without, WL, tapered data. The series length is 100. The estimators are compared in terms of pile-up effect, mean square error, bias, and empirical confidence level. The tapered version of the Whittle likelihood turns out to be a reliable estimator for ARMA and ARFIMA models. Its small losses in performance in case of "well-behaved" models are compensated sufficiently in more "difficult" models. The modified profile likelihood is an alternative to the WLT but is computationally more demanding. It is either equivalent to the EML or more favorable than the EML. For fractionally integrated models, particularly, it clearly dominates the EML. The WL has serious deficiencies for large ranges of parameters, and so cannot be recommended in general. The EML, on the other hand, should only be used with care for fractionally integrated models due to its potentially large negative bias of the fractional integration parameter. In general, one should proceed with caution for ARMA(1,1) models with almost canceling roots, and, in particular, in case of the EML and the MPL for inference in the vicinity of a moving average root of +1. (author's abstract)
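The Whittle estimator compared above can be sketched in the simplest fractional case, ARFIMA(0,d,0), whose spectral density is proportional to |2 sin(w/2)|^(-2d); the innovation variance profiles out of the Whittle objective. A minimal untapered sketch (the WLT variant would taper the series, e.g. with a cosine bell, before the FFT); the synthetic-data generator is a truncated MA(infinity) approximation:

    import numpy as np
    from scipy.optimize import minimize_scalar

    def whittle_d(x):
        """Whittle estimate of the ARFIMA(0,d,0) memory parameter d. The scale
        profiles out, leaving log(mean(I_j/g_j(d))) + mean(log g_j(d))
        over the Fourier frequencies."""
        n = x.size
        j = np.arange(1, (n - 1) // 2 + 1)
        w = 2 * np.pi * j / n                                # Fourier frequencies
        I = np.abs(np.fft.fft(x - x.mean())[j]) ** 2 / (2 * np.pi * n)  # periodogram
        def obj(d):
            g = np.abs(2 * np.sin(w / 2)) ** (-2 * d)        # spectral shape
            return np.log(np.mean(I / g)) + np.mean(np.log(g))
        return minimize_scalar(obj, bounds=(-0.49, 0.49), method="bounded").x

    # Quick check on synthetic long-memory data with d = 0.3
    rng = np.random.default_rng(4)
    k = np.arange(1, 2000)
    psi = np.cumprod((k - 1 + 0.3) / k)      # MA coefficients of (1-B)^(-d)
    eps = rng.normal(size=4000)
    x = eps[2000:4000] + np.array(
        [psi @ eps[t - 1999:t][::-1] for t in range(2000, 4000)])
    print(f"Whittle estimate of d: {whittle_d(x):.3f}")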
Series: Preprint Series / Department of Applied Statistics and Data Processing
APA, Harvard, Vancouver, ISO, and other styles
12

Järvstråt, Linnea. "A New Third Compartment Significantly Improves Fit and Identifiability in a Model for Ace2p Distribution in Saccharomyces cerevisiae after Cytokinesis." Thesis, Linköpings universitet, Institutionen för systemteknik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-69354.

Full text
Abstract:
Asymmetric cell division is an important mechanism for the differentiation of cells during embryogenesis and cancer development. Saccharomyces cerevisiae divides asymmetrically and is therefore used as a model system for understanding the mechanisms behind asymmetric cell division. Ace2p is a transcription factor in yeast that localizes primarily to the daughter nucleus during cell division. The distribution of Ace2p is visualized using a fusion protein with yellow fluorescent protein (YFP) and confocal microscopy. Systems biology provides a new approach to investigating biological systems through the use of quantitative models. The localization of the transcription factor Ace2p in yeast during cell division has been modelled using ordinary differential equations. Herein such modelling has been evaluated. A 2-compartment model for the localization of Ace2p in yeast post-cytokinesis proposed in earlier work was found to be insufficient when new data were included in the model evaluation. Ace2p localization in the dividing yeast cell pair before cytokinesis has been investigated using a similar approach, and the model was found not to explain the data to a significant degree. A 3-compartment model is proposed. The improvement in comparison to the 2-compartment model was statistically significant. Simulations of the 3-compartment model predict a fast decrease in the amount of Ace2p in the cytosol close to the nucleus during the first seconds after each bleaching of the fluorescence. Experimental investigation of the cytosol close to the nucleus could test whether the fast dynamics are present after each bleaching of the fluorescence. The parameters in the model have been estimated using the profile likelihood approach in combination with global optimization with simulated annealing. Confidence intervals for parameters have been found for the 3-compartment model of Ace2p localization post-cytokinesis. In conclusion, the profile likelihood approach has proven to be a good method for estimating parameters, and the new 3-compartment model allows for reliable parameter estimates in the post-cytokinesis situation. A new Matlab implementation of the profile likelihood method is appended.
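The identifiability analysis described here, scanning one parameter over a grid while re-optimizing the others and reading a confidence interval off a likelihood threshold, can be sketched for a generic small compartment model. Everything below is hypothetical (a linear two-compartment exchange model, not the Ace2p model of the thesis):

    import numpy as np
    from scipy.linalg import expm
    from scipy.optimize import minimize

    # Hypothetical 2-compartment exchange: dx/dt = A(k12, k21) x, observe x1
    t_obs = np.linspace(0, 10, 25)
    def observe(k12, k21, x0=np.array([1.0, 0.0])):
        A = np.array([[-k12, k21], [k12, -k21]])
        return np.array([(expm(A * t) @ x0)[0] for t in t_obs])

    rng = np.random.default_rng(5)
    sigma = 0.02
    y = observe(0.5, 0.3) + rng.normal(0, sigma, t_obs.size)   # synthetic data

    def nll(k12, k21):   # Gaussian negative log-likelihood up to a constant
        return np.sum((y - observe(k12, k21)) ** 2) / (2 * sigma ** 2)

    # Profile likelihood for k12: re-fit k21 at every grid point
    grid = np.linspace(0.3, 0.8, 26)
    prof = np.array([minimize(lambda p: nll(k, p[0]), x0=[0.3],
                              bounds=[(1e-3, 5.0)]).fun for k in grid])
    # Points inside the 95% PL confidence interval (chi2_1 threshold / 2 = 1.92)
    inside = prof - prof.min() <= 1.92
    print(f"95% PL interval for k12: "
          f"[{grid[inside].min():.2f}, {grid[inside].max():.2f}]")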
APA, Harvard, Vancouver, ISO, and other styles
13

Sharghi, Sima. "Statistical inferences for missing data/causal inferences based on modified empirical likelihood." Bowling Green State University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1624823412604593.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Janzén, David. "Standard two-stage and Nonlinear mixed effect modelling for determination of cell-to-cell variation of transport parameters in Saccharomyces cerevisiae." Thesis, Linköpings universitet, Reglerteknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-78486.

Full text
Abstract:
Interest in cell-to-cell variation has increased at a steady pace in recent years. Several studies have shown that a large portion of the variation observed in nature originates from the fact that all biochemical reactions are in some respect stochastic. Interestingly, nature has evolved highly advanced frameworks specialized in dealing with stochasticity in order to still be able to produce the delicate signalling pathways that are present in even very simple single-cell organisms. Such a simple organism is Saccharomyces cerevisiae, which is the organism that has been studied in this thesis. More particularly, the distribution of the transport rate in S. cerevisiae has been studied by a mathematical modelling approach. It is shown that a two-compartment model can adequately describe the flow of a yellow fluorescent protein (YFP) between the cytosol and the nucleus. A profile likelihood (PLH) analysis shows that the parameters in the two-compartment model are identifiable and well-defined under the experimental data of YFP. Furthermore, the result from this model shows that the distribution of the transport rates in the 80 studied cells is lognormal. Also, in contradiction to prior beliefs, no significant difference between recently divided mother and daughter cells in terms of transport rates of YFP is to be seen. The modelling is performed using both the standard two-stage (STS) approach and a nonlinear mixed effects model (NONMEM). A methodological comparison between these two very different methods, STS and NONMEM, is also presented. STS is today the conventional approach in studies of cell-to-cell variation. However, in this thesis it is shown that NONMEM, which was originally developed for population pharmacokinetic/pharmacodynamic (PK/PD) studies, is at least as good an approach as STS, and in some cases even better, in studies of cell-to-cell variation. Finally, a new approach in studies of cell-to-cell variation is suggested that involves a combination of STS, NONMEM and PLH. In particular, it is shown that this combination of different methods would be especially useful if the data are sparse. By applying this combination of methods, the uncertainty in the estimation of the variability could be greatly reduced.
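The standard two-stage approach compared in the thesis is simple to state in code: fit each cell separately, then take the empirical mean and variance of the individual estimates as population parameters. A toy sketch on synthetic data, with the well-known caveat that the raw STS variance also contains within-cell estimation noise, which pooled NONMEM-style mixed-effects modelling avoids:

    import numpy as np

    rng = np.random.default_rng(6)
    n_cells, n_obs = 80, 10
    # Hypothetical cell-specific transport rates, lognormally distributed
    true_rates = rng.lognormal(mean=np.log(0.5), sigma=0.3, size=n_cells)
    data = rng.normal(loc=true_rates[:, None], scale=0.2, size=(n_cells, n_obs))

    # Stage 1: estimate each cell's parameter separately (here, a sample mean)
    est = data.mean(axis=1)

    # Stage 2: population parameters from the individual estimates
    pop_mean = est.mean()
    pop_var = est.var(ddof=1)          # inflated by within-cell estimation error
    within_var = data.var(axis=1, ddof=1).mean() / n_obs
    print(f"STS mean {pop_mean:.3f}; raw variance {pop_var:.4f}; "
          f"noise-corrected variance {pop_var - within_var:.4f}")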
APA, Harvard, Vancouver, ISO, and other styles
15

Silva, Michel Ferreira da. "Estimação e teste de hipótese baseados em verossimilhanças perfiladas." Universidade de São Paulo, 2005. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-06122006-162733/.

Full text
Abstract:
Tratar a função de verossimilhança perfilada como uma verossimilhança genuína pode levar a alguns problemas, como, por exemplo, inconsistência e ineficiência dos estimadores de máxima verossimilhança. Outro problema comum refere-se à aproximação usual da distribuição da estatística da razão de verossimilhanças pela distribuição qui-quadrado, que, dependendo da quantidade de parâmetros de perturbação, pode ser muito pobre. Desta forma, torna-se importante obter ajustes para tal função. Vários pesquisadores, incluindo Barndorff-Nielsen (1983,1994), Cox e Reid (1987,1992), McCullagh e Tibshirani (1990) e Stern (1997), propuseram modificações à função de verossimilhança perfilada. Tais ajustes consistem na incorporação de um termo à verossimilhança perfilada anteriormente à estimação e têm o efeito de diminuir os vieses da função escore e da informação. Este trabalho faz uma revisão desses ajustes e das aproximações para o ajuste de Barndorff-Nielsen (1983,1994) descritas em Severini (2000a). São apresentadas suas derivações, bem como suas propriedades. Para ilustrar suas aplicações, são derivados tais ajustes no contexto da família exponencial biparamétrica. Resultados de simulações de Monte Carlo são apresentados a fim de avaliar os desempenhos dos estimadores de máxima verossimilhança e dos testes da razão de verossimilhanças baseados em tais funções. Também são apresentadas aplicações dessas funções de verossimilhança em modelos não pertencentes à família exponencial biparamétrica, mais precisamente, na família de distribuições GA0(alfa,gama,L), usada para modelar dados de imagens de radar, e no modelo de Weibull, muito usado em aplicações da área da engenharia denominada confiabilidade, considerando dados completos e censurados. Aqui também foram obtidos resultados numéricos a fim de avaliar a qualidade dos ajustes sobre a verossimilhança perfilada, analogamente às simulações realizadas para a família exponencial biparamétrica. Vale mencionar que, no caso da família de distribuições GA0(alfa,gama,L), foi avaliada a aproximação da distribuição da estatística da razão de verossimilhanças sinalizada pela distribuição normal padrão. Além disso, no caso do modelo de Weibull, vale destacar que foram derivados resultados distribucionais relativos aos estimadores de máxima verossimilhança e às estatísticas da razão de verossimilhanças para dados completos e censurados, apresentados em apêndice.
The profile likelihood function is not genuine likelihood function, and profile maximum likelihood estimators are typically inefficient and inconsistent. Additionally, the null distribution of the likelihood ratio test statistic can be poorly approximated by the asymptotic chi-squared distribution in finite samples when there are nuisance parameters. It is thus important to obtain adjustments to the likelihood function. Several authors, including Barndorff-Nielsen (1983,1994), Cox and Reid (1987,1992), McCullagh and Tibshirani (1990) and Stern (1997), have proposed modifications to the profile likelihood function. They are defined in a such a way to reduce the score and information biases. In this dissertation, we review several profile likelihood adjustments and also approximations to the adjustments proposed by Barndorff-Nielsen (1983,1994), also described in Severini (2000a). We present derivations and the main properties of the different adjustments. We also obtain adjustments for likelihood-based inference in the two-parameter exponential family. Numerical results on estimation and testing are provided. We also consider models that do not belong to the two-parameter exponential family: the GA0(alfa,gama,L) family, which is commonly used to model image radar data, and the Weibull model, which is useful for reliability studies, the latter under both noncensored and censored data. Again, extensive numerical results are provided. It is noteworthy that, in the context of the GA0(alfa,gama,L) model, we have evaluated the approximation of the null distribution of the signalized likelihood ratio statistic by the standard normal distribution. Additionally, we have obtained distributional results for the Weibull case concerning the maximum likelihood estimators and the likelihood ratio statistic both for noncensored and censored data.
APA, Harvard, Vancouver, ISO, and other styles
16

Xie, Lin. "Statistical inference for rankings in the presence of panel segmentation." Diss., Kansas State University, 2011. http://hdl.handle.net/2097/13247.

Full text
Abstract:
Doctor of Philosophy
Department of Statistics
Paul Nelson
Panels of judges are often used to estimate consumer preferences for m items such as food products. Judges can either evaluate each item on several ordinal scales and indirectly produce an overall ranking, or directly report a ranking of the items. A complete ranking orders all the items from best to worst. A partial ranking, as we use the term, only reports rankings of the best q out of m items. Direct ranking, the subject of this report, does not require the widespread but questionable practice of treating ordinal measurements as though they were on ratio or interval scales. Here, we develop and study segmentation models in which the panel may consist of relatively homogeneous subgroups, the segments. Judges within a subgroup will tend to agree among themselves and differ from judges in the other subgroups. We develop and study the statistical analysis of mixture models where it is not known to which segment a judge belongs or, in some cases, how many segments there are. Viewing segment membership indicator variables as latent data, an E-M algorithm was used to find the maximum likelihood estimators of the parameters specifying a mixture of Mallows' (1957) distance models for complete and partial rankings. A simulation study was conducted to evaluate the behavior of the E-M algorithm in terms of such issues as the fraction of data sets for which the algorithm fails to converge, the sensitivity of the convergence rate to initial values, and the performance of the maximum likelihood estimators in terms of bias and mean square error, where applicable. A Bayesian approach was developed and credible set estimators were constructed. Simulation was used to evaluate the performance of these credible sets as confidence sets. A method for predicting segment membership from covariates measured on a judge was derived using a logistic model applied to a mixture of Mallows probability distance models. The effects of covariates on segment membership were assessed. Likelihood sets for parameters specifying mixtures of Mallows distance models were constructed and explored.
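The E-M treatment of segment membership as latent data can be sketched for a two-segment mixture of Mallows models with the Kendall-distance kernel exp(-theta * d(pi, pi0)). The sketch below deliberately simplifies the algorithm studied in the report: the segment centers are held fixed and only the mixing weights and dispersions theta are updated (all data are hypothetical):

    import numpy as np
    from scipy.optimize import minimize_scalar

    def kendall(a, b):
        """Kendall distance between two orderings of item labels:
        the number of discordant item pairs."""
        pa, pb = np.argsort(a), np.argsort(b)   # positions of each item
        m = len(a)
        return sum((pa[i] - pa[j]) * (pb[i] - pb[j]) < 0
                   for i in range(m) for j in range(i + 1, m))

    def log_Z(theta, m):
        """log normalizing constant of the Mallows/Kendall model on m items."""
        j = np.arange(2, m + 1)
        return (np.sum(np.log(1 - np.exp(-j * theta)))
                - (m - 1) * np.log(1 - np.exp(-theta)))

    def em(rankings, centers, iters=50):
        n, m = rankings.shape
        K = len(centers)
        D = np.array([[kendall(r, c) for c in centers] for r in rankings])
        w, theta = np.full(K, 1 / K), np.full(K, 1.0)
        for _ in range(iters):
            # E-step: posterior probability of each segment for each judge
            logp = np.log(w) - theta * D - np.array([log_Z(t, m) for t in theta])
            resp = np.exp(logp - logp.max(axis=1, keepdims=True))
            resp /= resp.sum(axis=1, keepdims=True)
            # M-step: weights in closed form, each theta by 1-D maximization
            w = resp.mean(axis=0)
            for k in range(K):
                nk, dk = resp[:, k].sum(), resp[:, k] @ D[:, k]
                theta[k] = minimize_scalar(lambda t: t * dk + nk * log_Z(t, m),
                                           bounds=(1e-3, 10), method="bounded").x
        return w, theta

    # Hypothetical usage: 40 judges ranking m = 4 items, two fixed centers
    rng = np.random.default_rng(7)
    rankings = np.array([rng.permutation(4) for _ in range(40)])
    w, theta = em(rankings, centers=[np.arange(4), np.arange(4)[::-1]])
    print(w, theta)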
APA, Harvard, Vancouver, ISO, and other styles
17

Salasar, Luis Ernesto Bueno. "Eliminação de parâmetros perturbadores em um modelo de captura-recaptura." Universidade Federal de São Carlos, 2011. https://repositorio.ufscar.br/handle/ufscar/4485.

Full text
Abstract:
Financiadora de Estudos e Projetos
The capture-recapture process, largely used in the estimation of the number of elements of animal populations, is also applied to other branches of knowledge like Epidemiology, Linguistics, Software reliability, Ecology, among others. One of the first applications of this method was done by Laplace in 1783, with the aim of estimating the number of inhabitants of France. Later, Carl G. J. Petersen in 1889 and Lincoln in 1930 applied the same estimator in the context of animal populations. This estimator has become known in the literature as the "Lincoln-Petersen" estimator. In the mid-twentieth century several researchers dedicated themselves to the formulation of statistical models appropriate for the estimation of population size, which caused a substantial increase in the amount of theoretical and applied work on the subject. Capture-recapture models are constructed under certain assumptions relating to the population, the sampling procedure and the experimental conditions. The main assumption that distinguishes models concerns the change in the number of individuals in the population during the period of the experiment. Models that allow for births, deaths or migration are called open population models, while models in which these events are not allowed to occur are called closed population models. In this work, the goal is to characterize the likelihood functions obtained by applying methods of elimination of nuisance parameters in the case of closed population models. Based on these likelihood functions, we discuss methods for point and interval estimation of the population size. The estimation methods are illustrated on a real data set and their frequentist properties are analysed via Monte Carlo simulation.
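The Lincoln-Petersen estimator mentioned here is simple enough to state directly: with n1 animals marked on the first occasion, n2 captured on the second, and m marked recaptures, the estimate is N = n1*n2/m. A small sketch, together with Chapman's bias-corrected variant (figures are hypothetical):

    def lincoln_petersen(n1, n2, m):
        """Classical two-occasion capture-recapture estimate of population size."""
        return n1 * n2 / m

    def chapman(n1, n2, m):
        """Chapman's bias-corrected variant, defined even when m = 0."""
        return (n1 + 1) * (n2 + 1) / (m + 1) - 1

    # Hypothetical survey: 120 marked, 150 captured later, 30 of them marked
    print(lincoln_petersen(120, 150, 30))   # 600.0
    print(chapman(120, 150, 30))            # ~588.4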
APA, Harvard, Vancouver, ISO, and other styles
18

Hedell, Ronny. "Rarities of genotype profiles in a normal Swedish population." Thesis, Linköpings universitet, Matematiska institutionen, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-59708.

Full text
Abstract:
Investigation of stains from crime scenes is commonly used in the search for criminals. At the National Laboratory of Forensic Science, where these stains are examined, a number of questions of theoretical and practical interest regarding the databases of DNA profiles and the strength of DNA evidence against a suspect in a trial have not been fully investigated. The first part of this thesis deals with how a sample of DNA profiles from a population is used in the process of estimating the strength of DNA evidence in a trial, taking population genetic factors into account. We then consider how to combine hypotheses regarding the relationship between a suspect and other possible donors of the stain from the crime scene by two applications of Bayes' theorem. After that we assess the DNA profiles that minimize the strength of DNA evidence against a suspect, and investigate how the strength is affected by sampling error using the bootstrap method and a Bayesian method. In the last part of the thesis we examine discrepancies between different databases of DNA profiles by both descriptive and inferential statistics, including likelihood ratio tests and Bayes factor tests. Little evidence of major differences is found.
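In the simplest single-contributor case, the likelihood-ratio weighting of DNA evidence discussed here reduces to LR = 1 / P(random match), with genotype probabilities built from allele frequencies, optionally with a theta correction for population substructure. A toy sketch with hypothetical allele frequencies:

    def genotype_prob(p, q=None, theta=0.0):
        """Unconditional single-locus genotype probability allowing for
        population substructure: homozygote p^2 + theta*p*(1-p),
        heterozygote 2*p*q*(1-theta)."""
        if q is None:
            return p * p + theta * p * (1 - p)
        return 2 * p * q * (1 - theta)

    # LR for a matching single-source stain: 1 / P(random person has the
    # profile), multiplying across independent loci
    loci = [(0.12, 0.08), (0.21, None), (0.05, 0.30)]   # hypothetical frequencies
    prob = 1.0
    for p, q in loci:
        prob *= genotype_prob(p, q, theta=0.01)
    print(f"match probability {prob:.3e}, LR {1 / prob:.3e}")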
APA, Harvard, Vancouver, ISO, and other styles
19

Lee, Chang. "MITIGATION of BACKGROUNDS for the LARGE UNDERGROUND XENON DARK MATTER EXPERIMENT." Case Western Reserve University School of Graduate Studies / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=case1427482791.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Mahmoud, Mahmoud A. "The Monitoring of Linear Profiles and the Inertial Properties of Control Charts." Diss., Virginia Tech, 2004. http://hdl.handle.net/10919/29544.

Full text
Abstract:
The Phase I analysis of data when the quality of a process or product is characterized by a linear function is studied in this dissertation. It is assumed that each sample collected over time in the historical data set consists of several bivariate observations for which a simple linear regression model is appropriate, a situation common in calibration applications. Using a simulation study, the researcher compares the performance of some of the recommended approaches used to assess the stability of the process. Also in this dissertation, a method based on using indicator variables in a multiple regression model is proposed. This dissertation also proposes a change point approach based on the segmented regression technique for testing the constancy of the regression parameters in a linear profile data set. The performance of the proposed change point method is compared to that of the most effective Phase I linear profile control chart approaches using a simulation study. The advantage of the proposed change point method over the existing methods is greatly improved detection of sustained step changes in the process parameters. Any control chart that combines sample information over time, e.g., the cumulative sum (CUSUM) chart and the exponentially weighted moving average (EWMA) chart, has an ability to detect process changes that varies over time depending on the past data observed. The chart statistics can take values such that some shifts in the parameters of the underlying probability distribution of the quality characteristic are more difficult to detect. This is referred to as the "inertia problem" in the literature. This dissertation shows under realistic assumptions that the worst-case run length performance of control charts becomes as informative as the steady-state performance. Also this study proposes a simple new measure of the inertial properties of control charts, namely the signal resistance. The conclusions of this study support the recommendation that Shewhart limits should be used with EWMA charts, especially when the smoothing parameter is small. This study also shows that some charts proposed by Pignatiello and Runger (1990) and Domangue and Patch (1991) have serious disadvantages with respect to inertial properties.
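The inertia issue and the recommendation to pair EWMA charts with Shewhart limits are easy to see in code. A minimal sketch for monitoring a mean with known in-control parameters; the smoothing parameter and limit constants are illustrative choices, not the dissertation's calibrated values:

    import numpy as np

    def ewma_chart(x, mu0=0.0, sigma=1.0, lam=0.1, L=2.7, shewhart=3.5):
        """EWMA statistic z_i = lam*x_i + (1-lam)*z_{i-1} with exact
        time-varying limits, plus a Shewhart overlay on the raw observations
        to guard against inertia after the statistic has drifted one way."""
        z, signals = mu0, []
        for i, xi in enumerate(x, start=1):
            z = lam * xi + (1 - lam) * z
            var = sigma**2 * lam / (2 - lam) * (1 - (1 - lam) ** (2 * i))
            ewma_signal = abs(z - mu0) > L * np.sqrt(var)
            shewhart_signal = abs(xi - mu0) > shewhart * sigma
            if ewma_signal or shewhart_signal:
                signals.append((i, "EWMA" if ewma_signal else "Shewhart"))
        return signals

    rng = np.random.default_rng(8)
    # Small downward drift first (builds inertia), then a large upward step
    x = np.concatenate([rng.normal(-0.3, 1, 30), rng.normal(3.0, 1, 10)])
    print(ewma_chart(x))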
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
21

Bi, Chang. "How Do Credibility of For-profit and Non-profit Source and Sharer, Emotion Valence, Message Elaboration, and Issue Controversiality Influence Message Sharing to Imagined Audience on Facebook?" Bowling Green State University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1562106043868372.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Bohaczewski, Michal. "L’atteinte à la marque renommée." Thesis, Paris 2, 2017. http://www.theses.fr/2017PA020070.

Full text
Abstract:
This work offers a study on the scope of the special protection of the trade mark with a reputation. In the context of the assessment of the current regime, it is necessary to analyse the fundamental concept of the trade mark with a reputation, as well as the concept of the well-known trade mark. Then, the conditions for infringement of the trade mark with a reputation common to all forms of infringement are examined: firstly, the positive conditions, in particular the existence of a link established in the public mind between the mark invoked and the sign in dispute; secondly, the negative conditions, in particular the exception of due cause. The study presents the various forms of infringement of trade marks with a reputation: dilution by blurring, dilution by tarnishment, and unfair advantage taken of the distinctive character or the repute of the trade mark. According to the thesis of the work, all those forms of infringement have distinct purposes and are independent of each other, making it possible to sanction various uses of trade marks with a reputation by third parties. In addition, the study places the special regime in relation to ordinary law: on the one hand, the ordinary law of trade marks, by distinguishing between the extended protection and the likelihood of confusion; and, on the other hand, the ordinary law of civil liability, which sanctions free-riding and may supplement the protection conferred on right holders by the special regime. Finally, the work presents an analysis of the problem of remedying infringements of trade marks with a reputation according to the form of infringement established by the right holder.
APA, Harvard, Vancouver, ISO, and other styles
23

Moreno, Betancur Margarita. "Regression modeling with missing outcomes : competing risks and longitudinal data." Thesis, Paris 11, 2013. http://www.theses.fr/2013PA11T076/document.

Full text
Abstract:
Missing data are a common occurrence in medical studies. In regression modeling, missing outcomes limit our capability to draw inferences about the covariate effects of medical interest, which are those describing the distribution of the entire set of planned outcomes. In addition to losing precision, the validity of any method used to draw inferences from the observed data will require that some assumption about the mechanism leading to missing outcomes holds. Rubin (1976, Biometrika, 63:581-592) called the missingness mechanism MAR (for "missing at random") if the probability of an outcome being missing does not depend on missing outcomes when conditioning on the observed data, and MNAR (for "missing not at random") otherwise. This distinction has important implications regarding the modeling requirements to draw valid inferences from the available data, but generally it is not possible to assess from these data whether the missingness mechanism is MAR or MNAR. Hence, sensitivity analyses should be routinely performed to assess the robustness of inferences to assumptions about the missingness mechanism. In the field of incomplete multivariate data, in which the outcomes are gathered in a vector for which some components may be missing, MAR methods are widely available and increasingly used, and several MNAR modeling strategies have also been proposed. On the other hand, although some sensitivity analysis methodology has been developed, this is still an active area of research. The first aim of this dissertation was to develop a sensitivity analysis approach for continuous longitudinal data with drop-outs, that is, continuous outcomes that are ordered in time and completely observed for each individual up to a certain time-point, at which the individual drops out so that all the subsequent outcomes are missing. The proposed approach consists in assessing the inferences obtained across a family of MNAR pattern-mixture models indexed by a so-called sensitivity parameter that quantifies the departure from MAR. The approach was prompted by a randomized clinical trial investigating the benefits of a treatment for sleep-maintenance insomnia, from which 22% of the individuals had dropped out before the study end. The second aim was to build on the existing theory for incomplete multivariate data to develop methods for competing risks data with missing causes of failure. The competing risks model is an extension of the standard survival analysis model in which failures from different causes are distinguished. Strategies for modeling competing risks functionals, such as the cause-specific hazards (CSH) and the cumulative incidence function (CIF), generally assume that the cause of failure is known for all patients, but this is not always the case. Some methods for regression with missing causes under the MAR assumption have already been proposed, especially for semi-parametric modeling of the CSH. But other useful models have received little attention, and MNAR modeling and sensitivity analysis approaches have never been considered in this setting. We propose a general framework for semi-parametric regression modeling of the CIF under MAR using inverse probability weighting and multiple imputation ideas. Also under MAR, we propose a direct likelihood approach for parametric regression modeling of the CSH and the CIF. Furthermore, we consider MNAR pattern-mixture models in the context of sensitivity analyses. In the competing risks literature, a starting point for methodological developments for handling missing causes was a stage II breast cancer randomized clinical trial in which 23% of the deceased women had missing cause of death. We use these data to illustrate the practical value of the proposed approaches.
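The sensitivity-parameter idea can be illustrated with the common delta-adjustment device: impute post-dropout outcomes under MAR, shift the treated arm's imputations by delta, and trace the estimated treatment effect as delta varies. A deliberately simplified sketch (single follow-up outcome and within-arm mean imputation; the pattern-mixture models of the thesis are richer):

    import numpy as np

    rng = np.random.default_rng(9)
    n = 400
    arm = rng.integers(0, 2, n)                 # 0 = control, 1 = treated
    y = 1.0 * arm + rng.normal(size=n)          # true treatment effect = 1
    observed = rng.random(n) > 0.25             # ~25% drop-out

    def effect_under_delta(delta):
        """Mean-impute within arm, shifting the treated arm's imputations by
        delta (delta = 0 is MAR; nonzero delta indexes an MNAR departure)."""
        y_imp = y.copy()
        y_imp[(~observed) & (arm == 0)] = y[observed & (arm == 0)].mean()
        y_imp[(~observed) & (arm == 1)] = y[observed & (arm == 1)].mean() + delta
        return y_imp[arm == 1].mean() - y_imp[arm == 0].mean()

    for delta in (-1.0, -0.5, 0.0, 0.5, 1.0):
        print(f"delta = {delta:+.1f}: effect = {effect_under_delta(delta):.3f}")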
APA, Harvard, Vancouver, ISO, and other styles
24

Valdovinos, Alvarez Jose Manuel. "Empirical Likelihood Confidence Intervals for the Population Mean Based on Incomplete Data." 2015. http://scholarworks.gsu.edu/math_theses/145.

Full text
Abstract:
The use of doubly robust estimators is key for estimating the population mean response in the presence of incomplete data. Cao et al. (2009) proposed an alternative doubly robust estimator which exhibits strong performance compared to existing estimation methods. In this thesis, we apply the jackknife empirical likelihood, the jackknife empirical likelihood with nuisance parameters, the profile empirical likelihood, and an empirical likelihood method based on the influence function to make inferences about the population mean. We use these methods to construct confidence intervals for the population mean, and compare the coverage probabilities and interval lengths using both the "usual" doubly robust estimator and the alternative estimator proposed by Cao et al. (2009). An extensive simulation study is carried out to compare the different methods. Finally, the proposed methods are applied to two real data sets.
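The jackknife empirical likelihood route has two steps: convert the possibly complicated estimator into approximately i.i.d. jackknife pseudo-values, then apply the standard empirical-likelihood computation for a mean to those pseudo-values. A generic sketch using the plain mean as the estimator (a doubly robust estimator would simply replace the estimator function):

    import numpy as np
    from scipy.optimize import brentq
    from scipy.stats import chi2

    def jackknife_pseudovalues(x, estimator):
        n = x.shape[0]
        full = estimator(x)
        loo = np.array([estimator(np.delete(x, i, axis=0)) for i in range(n)])
        return n * full - (n - 1) * loo

    def el_logratio(v, mu):
        """-2 log empirical likelihood ratio for the mean of v at mu."""
        d = v - mu
        if d.min() >= 0 or d.max() <= 0:
            return np.inf                     # mu outside the convex hull
        # Solve sum d_i / (1 + lam*d_i) = 0 for the Lagrange multiplier
        lo = (-1 + 1e-10) / d.max()
        hi = (-1 + 1e-10) / d.min()
        lam = brentq(lambda l: np.sum(d / (1 + l * d)), lo, hi)
        return 2 * np.sum(np.log1p(lam * d))

    rng = np.random.default_rng(10)
    x = rng.normal(1.0, 2.0, size=100)
    v = jackknife_pseudovalues(x, estimator=lambda s: s.mean())
    # 95% JEL confidence interval by scanning candidate means
    grid = np.linspace(v.mean() - 2, v.mean() + 2, 400)
    inside = np.array([el_logratio(v, m) for m in grid]) <= chi2.ppf(0.95, 1)
    print(f"95% JEL CI: [{grid[inside].min():.2f}, {grid[inside].max():.2f}]")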
APA, Harvard, Vancouver, ISO, and other styles
25

Rattanasiri, Sasivimol [Verfasser]. "Modelling of covariate information in multicentre studies with binary outcome using profile likelihood / vorgelegt von Sasivimol Rattanasiri." 2006. http://d-nb.info/980886600/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Maadooliat, Mehdi. "Dimension Reduction and Covariance Structure for Multivariate Data, Beyond Gaussian Assumption." Thesis, 2011. http://hdl.handle.net/1969.1/ETD-TAMU-2011-08-9731.

Full text
Abstract:
Storage and analysis of high-dimensional datasets are always challenging. Dimension reduction techniques are commonly used to reduce the complexity of the data and obtain the informative aspects of datasets. Principal Component Analysis (PCA) is one of the commonly used dimension reduction techniques. However, PCA does not work well when there are outliers or the data distribution is skewed. Gene expression index estimation is an important problem in bioinformatics. Some of the popular methods in this area are based on the PCA, and thus may not work well when there is non-Gaussian structure in the data. To address this issue, a likelihood-based data transformation method with a computationally efficient algorithm is developed. Also, a new multivariate expression index is studied and the performance of the multivariate expression index is compared with the commonly used univariate expression index. As an extension of the gene expression index estimation problem, a general procedure that integrates data transformation with the PCA is developed. In particular, this general method can handle missing data and data with functional structure. It is well known that the PCA can be obtained by the eigendecomposition of the sample covariance matrix. Another focus of this dissertation is to study the covariance (or correlation) structure under the non-Gaussian assumption. An important issue in modeling the covariance matrix is the positive definiteness constraint. The modified Cholesky decomposition of the inverse covariance matrix has been considered to address this issue in the literature. An alternative Cholesky decomposition of the covariance matrix is considered and used to construct an estimator of the covariance matrix under the multivariate-t assumption. The advantage of this alternative Cholesky decomposition is the decoupling of the correlation and the variances.
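The decoupling advantage mentioned at the end can be made concrete: write the covariance as D^(1/2) R D^(1/2), with D the variances and R the correlation matrix, and apply the Cholesky factorization to R alone, so variance and correlation parameters can be modelled separately while positive definiteness is preserved. A small numpy sketch of the reparametrization (illustrative only; the dissertation's construction under the multivariate-t model is more involved):

    import numpy as np

    def decouple(sigma):
        """Split a covariance matrix into standard deviations and a Cholesky
        factor of the correlation matrix; rebuilding from any positive
        deviations and any valid factor keeps the result positive definite."""
        s = np.sqrt(np.diag(sigma))
        R = sigma / np.outer(s, s)
        return s, np.linalg.cholesky(R)

    def rebuild(s, L):
        return np.outer(s, s) * (L @ L.T)

    sigma = np.array([[4.0, 1.2, 0.6],
                      [1.2, 1.0, 0.3],
                      [0.6, 0.3, 2.25]])
    s, L = decouple(sigma)
    print(np.allclose(rebuild(s, L), sigma))          # True
    # Doubling the standard deviations rescales the covariance by 4
    # while leaving the correlation parameters untouched
    print(np.allclose(rebuild(2 * s, L), 4 * sigma))  # True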
APA, Harvard, Vancouver, ISO, and other styles
27

Chen, Po-Jen, and 陳柏任. "Nonlinearities in the foreign direct investment-income inequality nexus: Evidence from a smooth coefficient partially linear model with profile likelihood inference." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/43071405697871728548.

Full text
Abstract:
Master's thesis
Tamkang University
Master's Program, Department of Statistics
Academic year 99 (ROC calendar, 2010)
Theoretical economic models have shown that foreign direct investment (FDI) may affect income inequality and that such impact varies according to the stock of human capital and/or level of financial development in the host country. The inequality effect of FDI may vary depending on the country's skilled-labor abundance and level of development. This thesis uses the semiparametric varying coefficient partially linear regression model to investigate the association between FDI and inequality, and employs the profile likelihood ratio (PLR) test proposed by Fan and Huang (2005) to test the parametric components of the model. This paper investigates the relationship between the stock of inward FDI and the growth rate of the Gini coefficient using a pooled dataset consisting of 175 observations from 88 countries observed from 1959 to 1997. Our results indicate that inward FDI widens income inequality mainly in countries where education attainment and per capita income are low and financial markets are better developed. The inequality effect of FDI diminishes (worsens) as educational attainment and per capita income increase (the level of financial development improves).
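The smooth coefficient partially linear model used here has the form y = x'beta(u) + z'gamma + e, with beta(.) estimated by local smoothing. A compact sketch of local-linear estimation of a single varying coefficient on synthetic data (kernel and bandwidth are illustrative choices; the Fan-Huang PLR test then compares such a fit against a parametric null):

    import numpy as np

    rng = np.random.default_rng(11)
    n = 500
    u = rng.uniform(0, 1, n)            # index variable (e.g., schooling level)
    x = rng.normal(size=n)              # covariate with a varying effect
    beta = np.sin(2 * np.pi * u)        # true smooth coefficient function
    y = beta * x + rng.normal(scale=0.5, size=n)

    def local_linear_beta(u0, h=0.1):
        """Weighted least squares of y on [x, x*(u-u0)] with Gaussian kernel
        weights; the first coefficient estimates beta(u0)."""
        w = np.exp(-0.5 * ((u - u0) / h) ** 2)
        X = np.column_stack([x, x * (u - u0)])
        XtW = X.T * w
        return np.linalg.solve(XtW @ X, XtW @ y)[0]

    grid = np.linspace(0.05, 0.95, 10)
    print(np.round([local_linear_beta(g) for g in grid], 2))
    print(np.round(np.sin(2 * np.pi * grid), 2))   # compare with the truth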
APA, Harvard, Vancouver, ISO, and other styles
28

Lin, Yun-Chin, and 林永青. "The POLEs of the Adjusted Profile Likelihoods." Thesis, 1996. http://ndltd.ncl.edu.tw/handle/79114470896189475591.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Li-chixLin and 林莉琪. "2x2 table odds ratio information comparison between the profile & conditional likelihoods." Thesis, 1993. http://ndltd.ncl.edu.tw/handle/34032877598425492293.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Chen, Mei-Chen, and 陳美辰. "Monitoring Phase I Linear Profiles Based on Empirical Likelihood Ratio Control Chart." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/8ynekc.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

黃偉振. "An Exponentially Weighted Moving Average Control Chart Based on Likelihood Ratio Test Statistics for Monitoring General Linear Profiles." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/91346305035340584134.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Institute of Statistics
Academic year 101 (ROC calendar, 2012)
When the quality of a process can be characterized by general linear profiles, a statistical process control scheme that can be used in industrial practice is proposed in this thesis. First, some properties of the likelihood ratio test statistics are introduced. Next, an exponentially weighted moving average control chart based on likelihood ratio test statistics for monitoring general linear profiles is proposed. Finally, the performance of the proposed methodology is investigated through a simulation study to show its strengths and weaknesses.
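The scheme's two ingredients can be sketched directly: a likelihood ratio statistic per profile sample against known in-control parameters, then an EWMA across samples. Everything below is illustrative (simple straight-line profiles, an uncalibrated control limit) rather than the thesis's exact chart:

    import numpy as np

    def lrt_stat(x, y, beta0, sigma0):
        """-2 log LR for H0: (beta, sigma) = (beta0, sigma0) in y = X b + e,
        against the unrestricted Gaussian linear model."""
        X = np.column_stack([np.ones_like(x), x])
        n = y.size
        beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
        sig2_hat = np.sum((y - X @ beta_hat) ** 2) / n
        sse0 = np.sum((y - X @ beta0) ** 2)
        return n * np.log(sigma0**2 / sig2_hat) - n + sse0 / sigma0**2

    rng = np.random.default_rng(12)
    x = np.linspace(0, 1, 10)
    beta0, sigma0, lam = np.array([1.0, 2.0]), 0.5, 0.2
    z = 3.0        # start the EWMA near the in-control mean of the LRT (~3 df)
    for t in range(1, 31):
        drift = 0.0 if t <= 15 else 0.4          # intercept shift after sample 15
        y = (beta0[0] + drift) + beta0[1] * x + rng.normal(0, sigma0, x.size)
        z = lam * lrt_stat(x, y, beta0, sigma0) + (1 - lam) * z
        if z > 6.0:                               # illustrative control limit
            print(f"signal at sample {t}, EWMA = {z:.2f}")
            break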
APA, Harvard, Vancouver, ISO, and other styles