
Dissertations / Theses on the topic 'Ordinal data analysis'



Consult the top 47 dissertations / theses for your research on the topic 'Ordinal data analysis.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Schabenberger, Oliver. "The analysis of longitudinal ordinal data." Diss., Virginia Tech, 1995. http://scholar.lib.vt.edu/theses/available/etd-02272007-092413/.

2

Neary, Dominic Mark. "Methods of analysis for ordinal repeated measures data." Thesis, University of Reading, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.339521.

3

Skinner, Justin. "The analysis of repeated ordinal data using latent trends." Thesis, Loughborough University, 1999. https://dspace.lboro.ac.uk/2134/13772.

Abstract:
This thesis presents methodology for analysing repeated ordered categorical data (repeated ordinal data), under the assumption that measurements arise as discrete realisations of an underlying (latent) continuous distribution. Two sets of estimating equations, called quasi-estimation equations or QEEs, are presented to estimate the mean structure and the cutoff points which define the boundaries between categories. A series of simulation studies examines the quality of the estimation process and of the estimation of the underlying latent correlation structure. Graphical studies and theoretical considerations are also used to explore the asymptotic properties of the correlation, mean and cutoff parameter estimates. One important aspect of repeated-measures analysis is the structure of the correlation, and simulation studies are used to examine the effect of correlation misspecification, both on the consistency of the estimates and on their asymptotic stability. To compare the QEEs with current methodology, simulation studies analyse the simple case where the data are binary, so that generalised estimating equations (GEEs) can also be applied to model the latent trend; again the effect of correlation misspecification is considered. QEEs are applied to a data set recording the pain runners feel in their legs after a long race. Both ordinal and continuous responses are measured, and comparisons between QEEs and their continuous counterparts are made. Finally, the methodology is extended to multivariate repeated ordinal measurements, giving rise to inter-time and intra-time correlations.
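The latent-variable view described in this abstract can be sketched in a few lines: repeated ordinal responses are generated by discretising a correlated continuous process at fixed cutoff points. The AR(1) correlation, cutoffs and time trend below are illustrative choices, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
n, t = 500, 4                          # subjects, repeated measurements
rho = 0.6                              # illustrative AR(1) latent correlation
cutoffs = np.array([-0.5, 0.5, 1.2])   # boundaries between 4 ordered categories

# AR(1) correlation matrix for the latent continuous process
lags = np.abs(np.subtract.outer(np.arange(t), np.arange(t)))
corr = rho ** lags

# Latent continuous responses with a linear time trend in the mean
latent = rng.multivariate_normal(np.zeros(t), corr, size=n) + 0.3 * np.arange(t)

# Observed repeated ordinal data: the category each latent value falls into
ordinal = np.searchsorted(cutoffs, latent)   # values in {0, 1, 2, 3}
```

Recovering the mean structure and the cutoffs from `ordinal` alone is exactly the estimation problem the quasi-estimation equations address.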
4

Sanders, Margaret. "Multifactor Models of Ordinal Data: Comparing Four Factor Analytical Methods." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1388745127.

5

Svensson, Elisabeth. "Analysis of systematic and random differences between paired ordinal categorical data /." Göteborg : Stockholm, Sweden : University of Göteborg ; Almqvist & Wiksell International, 1993. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=005857475&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.

6

Batten, Dennis William. "Univariate polytomous ordinal regression analysis with application to diabetic retinopathy data." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ54859.pdf.

7

McHugh, Gillian Stephanie. "Efficient analysis of ordinal data from clinical trials in head injury." Thesis, University of Edinburgh, 2012. http://hdl.handle.net/1842/6479.

Abstract:
Many promising Phase II trials have been carried out in head injury; however, to date there has been no successful translation of the positive results from these explanatory trials into improved patient outcomes in Phase III trials. Many reasons have been hypothesised for this failure. Outcomes in head injury trials are usually measured using the five-point Glasgow Outcome Scale. Traditionally the ordinality of this scale is disregarded and it is dichotomised into two groups, favourable and unfavourable outcome. This thesis explores whether suboptimal statistical analysis techniques, including the dichotomisation of outcomes, could have contributed to the failure of Phase III trials. Based on eleven completed head injury studies, simulation modelling is used to compare outcome as assessed by the conventional dichotomy with both modelling that takes into account the ordered nature of the outcome (proportional odds modelling) and modelling that individualises a patient's risk of a good or poor outcome (the 'sliding dichotomy'). The results of this modelling show that analyses which use the full outcome scale and those which individualise risk both offer large efficiency gains (measured as reductions in required sample size) over the conventional analysis of the binary outcome. These results hold both when the simulated treatment effects followed a proportional odds model and when they did not, and also when improvement was targeted or restricted to groups of subjects defined by clinical characteristics or prognosis. Although proportional odds modelling shows consistently greater sample-size reductions, the choice between proportional odds modelling and the sliding dichotomy depends on the question of interest.
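The proportional odds model mentioned here assumes that a treatment shifts every cumulative logit of the outcome scale by the same amount, whereas a fixed dichotomy uses only one of those cut points. A minimal numerical sketch (the cutpoints and effect size are invented for illustration, not taken from the thesis):

```python
import numpy as np

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

def logit(p):
    return np.log(p / (1.0 - p))

alphas = np.array([-1.0, 0.0, 1.0, 2.0])   # cutpoints of a five-point ordinal scale
beta = 0.5                                 # common log odds ratio for treatment

p_control = expit(alphas)          # P(Y <= k) under control
p_treated = expit(alphas - beta)   # P(Y <= k) under treatment

# Under proportional odds every cumulative log odds differs by exactly beta;
# a conventional dichotomy would look at only one of these four contrasts.
shifts = logit(p_control) - logit(p_treated)
```

Because all four contrasts carry the same effect, an analysis using the full scale pools information that the dichotomy throws away, which is the source of the sample-size gains reported in the abstract.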
8

Adnan, Arisman. "Analysis of taste-panel data using ANOVA and ordinal logistic regression." Thesis, University of Newcastle Upon Tyne, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.402150.

9

Bolland, Kim. "The design and analysis of neurological trials yielding repeated ordinal data." Thesis, University of Reading, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.397747.

10

Liu, Juanmei. "Multivariate ordinal data analysis with pairwise likelihood and its extension to SEM." Diss., Restricted to subscribing institutions, 2007. http://proquest.umi.com/pqdweb?did=1495960441&sid=1&Fmt=2&clientId=1564&RQT=309&VName=PQD.

11

Dlugosz, Stephan. "Multi-layer perceptron networks for ordinal data analysis: order independent online learning by sequential estimation." Berlin: Logos, 2008. http://d-nb.info/990567311/04.

12

Luo, Hao. "Some Aspects on Confirmatory Factor Analysis of Ordinal Variables and Generating Non-normal Data." Doctoral thesis, Uppsala universitet, Statistiska institutionen, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-149423.

Abstract:
This thesis, which consists of five papers, is concerned with various aspects of confirmatory factor analysis (CFA) of ordinal variables and the generation of non-normal data. The first paper studies the performance of different estimation methods used in CFA when ordinal data are encountered. To take ordinality into account, four estimation methods, i.e., maximum likelihood (ML), unweighted least squares, diagonally weighted least squares, and weighted least squares (WLS), are used in combination with polychoric correlations. The effect of model size and number of categories on the parameter estimates, their standard errors, and the common chi-square measure of fit, when the models are both correct and misspecified, is examined. The second paper focuses on the appropriate estimator of the polychoric correlation when fitting a CFA model. A non-parametric polychoric correlation coefficient based on the discrete version of Spearman's rank correlation is proposed to contend with non-normal underlying distributions. The simulation study shows the benefits of using the non-parametric polychoric correlation under conditions of non-normality. The third paper raises the issue of simultaneous factor analysis. We study the effect of pooling multi-group data on the estimation of factor loadings. Given the same factor loadings but different factor means and correlations, we investigate how much information is lost by pooling the groups together and estimating only the combined data set with the WLS method. The parameter estimates and their standard errors are compared with results obtained by multi-group analysis using ML. The fourth paper uses a Monte Carlo simulation to assess the reliability of Fleishman's power method under various conditions of skewness, kurtosis, and sample size. Based on the generated non-normal samples, the power of D'Agostino's (1986) normality test is studied.
The fifth paper extends the evaluation of algorithms to the generation of multivariate non-normal data. Apart from the requirement of generating reliable skewness and kurtosis, the generated data also need to possess the desired correlation matrices. Four algorithms are investigated in terms of simplicity, generality, and reliability of the technique.
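As a concrete illustration of the polychoric correlation that recurs in these papers, here is a sketch of the common two-step estimator: thresholds are read off the marginal proportions, then the correlation of an assumed underlying bivariate normal is found by a one-dimensional likelihood maximisation. This is the textbook parametric estimator, not the non-parametric coefficient the second paper proposes.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize_scalar

def polychoric(table):
    """Two-step polychoric correlation from a contingency table of two ordinal variables."""
    table = np.asarray(table, float)
    n = table.sum()
    # Step 1: thresholds from the cumulative marginal proportions
    a = stats.norm.ppf(np.cumsum(table.sum(axis=1))[:-1] / n)
    b = stats.norm.ppf(np.cumsum(table.sum(axis=0))[:-1] / n)
    a = np.concatenate([[-8.0], a, [8.0]])   # +/-8 stands in for +/-infinity
    b = np.concatenate([[-8.0], b, [8.0]])

    # Step 2: maximise the multinomial likelihood over the latent correlation
    def negloglik(rho):
        cov = [[1.0, rho], [rho, 1.0]]
        cdf = lambda x, y: stats.multivariate_normal.cdf([x, y], mean=[0, 0], cov=cov)
        ll = 0.0
        for i in range(table.shape[0]):
            for j in range(table.shape[1]):
                # probability of the (i, j) rectangle under the bivariate normal
                p = (cdf(a[i + 1], b[j + 1]) - cdf(a[i], b[j + 1])
                     - cdf(a[i + 1], b[j]) + cdf(a[i], b[j]))
                ll += table[i, j] * np.log(max(p, 1e-12))
        return -ll

    return minimize_scalar(negloglik, bounds=(-0.99, 0.99), method='bounded').x
```

Feeding the resulting correlation matrix to a least-squares fit function is what the thesis's combination of "polychoric correlations + ML/ULS/DWLS/WLS" amounts to in practice.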
13

Li, Yingruolan Li. "Confirmatory Factor Analysis with Continuous and Ordinal Data: An Empirical Study of Stress Level." Thesis, Uppsala universitet, Statistiska institutionen, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-231196.

14

Gentry, Amanda E. "Penalized mixed-effects ordinal response models for high-dimensional genomic data in twins and families." VCU Scholars Compass, 2018. https://scholarscompass.vcu.edu/etd/5575.

Abstract:
The Brisbane Longitudinal Twin Study (BLTS) is conducted in Australia and funded by the US National Institute on Drug Abuse (NIDA). Adolescent twins were sampled as part of this study and surveyed about their substance use as part of the Pathways to Cannabis Use, Abuse and Dependence project. The methods developed in this dissertation were designed for analyzing a subset of the Pathways data that includes demographics, cannabis use metrics, personality measures, and imputed genotypes (SNPs) for 493 complete twin pairs (986 subjects). The primary goal was to determine what combination of SNPs and additional covariates may predict cannabis use, measured on an ordinal scale as "never tried," "used moderately," or "used frequently." To conduct this analysis, we extended the ordinal Generalized Monotone Incremental Forward Stagewise (GMIFS) method to mixed models. This extension allows an unpenalized set of covariates to be coerced into the model and offers flexibility for user-specified correlation patterns between twins in a family. The proposed methods are applicable to high-dimensional (genomic or otherwise) data with an ordinal response and a specific, known covariance structure within clusters.
15

Moulin, Serge. "Use of data analysis techniques to solve specific bioinformatics problems." Thesis, Bourgogne Franche-Comté, 2018. http://www.theses.fr/2018UBFCD049/document.

Abstract:
Nowadays, the quantity of sequenced genetic data is increasing exponentially, driven by increasingly powerful sequencing tools, high-throughput sequencing in particular. In addition, these data are increasingly accessible through online databases. This greater availability of data opens up new areas of study that require statisticians and bioinformaticians to develop appropriate tools. Moreover, constant statistical progress in areas such as clustering, dimensionality reduction and regression needs to be regularly adapted to the context of bioinformatics. The objective of this thesis is the application of advanced statistical techniques to bioinformatics problems. In this manuscript we present the results of our work on the clustering of genetic sequences via Laplacian eigenmaps and a Gaussian mixture model, the study of the propagation of transposable elements in the genome via a branching process, the analysis of metagenomic data in ecology via ROC curves, and ordinal polytomous regression penalized by the l1-norm.
16

Dridi, Mohammed Tahar. "Contribution à l'étude de certains problèmes relatifs aux ordres linéaires." Université Joseph Fourier (Grenoble), 1995. http://www.theses.fr/1995GRE10132.

Abstract:
Consider a finite set of alternatives and a set of non-negative numerical values associated with all ordered pairs of distinct alternatives.
Can one find a set of distinct total orders, weighted by positive real weights, such that the sum of the weights of the orders placing alternative i before alternative j is exactly equal to the given numerical value associated with the pair (i, j)?
Our contribution to this problem has produced original results and points of view, and has led to new questions that we have been able to resolve.
17

Jin, Shaobo. "Essays on Estimation Methods for Factor Models and Structural Equation Models." Doctoral thesis, Uppsala universitet, Statistiska institutionen, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-247292.

Abstract:
This thesis, which consists of four papers, is concerned with estimation methods in factor analysis and structural equation models. New estimation methods are proposed and investigated. In Paper I an approximation of penalized maximum likelihood (ML) is introduced to fit an exploratory factor analysis model. Approximated penalized ML continuously and efficiently shrinks the factor loadings towards zero. It naturally factorizes a covariance matrix or a correlation matrix, and is applicable to both orthogonal and oblique structures. Paper II, a simulation study, investigates the properties of approximated penalized ML with an orthogonal factor model. Different combinations of penalty terms and tuning-parameter selection methods are examined, and differences between factorizing a covariance matrix and factorizing a correlation matrix are explored. It is shown that approximated penalized ML frequently improves on the traditional estimation-rotation procedure. In Paper III we focus on pseudo ML for multi-group data. Data from different groups are pooled and normal theory is used to fit the model. It is shown that pseudo ML produces consistent estimators of factor loadings and that it is numerically easier than multi-group ML. However, normal theory is not applicable for estimating standard errors, so a sandwich-type estimator of the standard errors is derived. Paper IV examines the properties of the recently proposed polychoric instrumental variable (PIV) estimators for ordinal data through a simulation study. PIV is compared with conventional estimation methods (unweighted least squares and diagonally weighted least squares). PIV produces accurate estimates of factor loadings and factor covariances in the correctly specified confirmatory factor analysis model, and accurate estimates of loadings and coefficient matrices in the correctly specified structural equation model.
If the model is misspecified, the robustness of PIV depends on model complexity, the underlying distribution, and the instrumental variables.
18

Ramos, Thiago Graça. "Um modelo híbrido incorporando preferências declaradas e análise envoltória de dados aplicada ao transporte de cargas no Brasil." Niterói, 2017. https://app.uff.br/riuff/handle/1/4073.

Abstract:
This study aims to identify efficient businesses in freight transport in Brazil and to evaluate the main aspects of choosing and hiring a cargo transportation service, focusing on the small and medium-sized companies that contract this type of service. It combines three techniques: Data Envelopment Analysis (DEA), stated preference and the ordinal logit model. DEA is first used to classify efficiency as high, medium or low, and this classification serves as the dependent variable of the ordinal logit model. The independent variables of this model are the utilities derived from the stated preference model and from a MaxDiff model that evaluated characteristics not covered by the stated preference design. The data analysis indicated that migration from road to rail would be better for companies, since the former ends up being used for lack of access to the latter. Another important result was the indication that companies with higher value-added products are more efficient. Finally, the model indicated that the mode of operation to be sought by cargo transport companies should include safety and speed of delivery, providing easy access to the consumer.
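For readers unfamiliar with DEA, the input-oriented CCR efficiency used as the first stage of such models solves one small linear program per decision-making unit (DMU). A minimal sketch with scipy under constant returns to scale; the thesis's actual DEA/MaxDiff/logit pipeline is of course richer.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR efficiency for each DMU.

    X: (n, m) array of inputs, Y: (n, s) array of outputs.
    Returns efficiency scores in (0, 1]; 1 means the DMU lies on the frontier."""
    n = X.shape[0]
    scores = []
    for o in range(n):
        # variables: [theta, lambda_1, ..., lambda_n]; minimise theta
        c = np.concatenate([[1.0], np.zeros(n)])
        # inputs:  sum_j lambda_j * x_j <= theta * x_o
        A_in = np.hstack([-X[o][:, None], X.T])
        # outputs: sum_j lambda_j * y_j >= y_o
        A_out = np.hstack([np.zeros((Y.shape[1], 1)), -Y.T])
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.concatenate([np.zeros(X.shape[1]), -Y[o]])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] + [(0, None)] * n)
        scores.append(res.fun)
    return np.array(scores)
```

With one input and one output, the score reduces to each unit's output/input ratio relative to the best ratio, which makes the function easy to sanity-check.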
19

Li, Qiuju. "Statistical inference for joint modelling of longitudinal and survival data." Thesis, University of Manchester, 2014. https://www.research.manchester.ac.uk/portal/en/theses/statistical-inference-for-joint-modelling-of-longitudinal-and-survival-data(65e644f3-d26f-47c0-bbe1-a51d01ddc1b9).html.

Abstract:
In longitudinal studies, data collected within a subject or cluster are by their very nature correlated, and special care is needed to account for this correlation in the analysis. Within the framework of longitudinal studies, three topics are discussed in this thesis. Chapter 2 discusses the joint modelling of a multivariate longitudinal process consisting of different types of outcomes. In the large cohort study of the UK North Staffordshire osteoarthritis project, longitudinal trivariate outcomes of continuous, binary and ordinal data are observed at baseline, year 3 and year 6. Instead of analysing each process separately, joint modelling is proposed for the trivariate outcomes to account for their inherent association, by introducing random effects and the covariance matrix G. The influence of the covariance matrix G on statistical inference for the fixed-effects parameters is investigated within the Bayesian framework. The study shows that joint modelling of the multivariate longitudinal process reduces bias and provides more reliable results than modelling each process separately. Together with longitudinal measurements taken intermittently, a counting process of events in time is often observed during a longitudinal study. It is of interest to investigate the relationship between time to event and the longitudinal process; on the other hand, measurements of the longitudinal process may be truncated by terminating events such as death. It may therefore be crucial to jointly model the survival and longitudinal data. A popular approach is to propose linear mixed-effects models for a longitudinal process of continuous outcomes and a Cox regression model for the survival data to characterise the relationship between time to event and the longitudinal process, under some standard assumptions.
Chapter 3 investigates the influence on statistical inference for survival data when the assumption of mutual independence of the random errors in the linear mixed-effects model of the longitudinal process is violated. The study uses the conditional score estimation approach, which provides robust estimators and is computationally advantageous. A generalised sufficient statistic of the random effects is proposed to account for the correlation remaining among the random errors, characterised by the data-driven method of modified Cholesky decomposition. The simulation study shows that this yields nearly unbiased estimation and efficient statistical inference. Chapter 4 seeks to account for both the current and the past information of the longitudinal process in the survival part of the joint model. In the last 15 to 20 years it has been popular, even standard, to assume that the longitudinal process affects the counting process of events in time only through its current value; as recognised in more recent studies, this need not always be true. An integral over the trajectory of the longitudinal process, along with a weighting curve, is proposed to account for both current and past information, improving inference and reducing the underestimation of the effects of the longitudinal process on the hazard. A plausible approach to statistical inference for the proposed models is presented, along with a real data analysis and a simulation study.
20

Jelizarow, Monika. "Global tests of association for multivariate ordinal data: Knowledge-based statistical analysis strategies for studies using the International Classification of Functioning, Disability and Health (ICF)." Supervisor: Ulrich Mansmann. München: Universitätsbibliothek der Ludwig-Maximilians-Universität, 2015. http://d-nb.info/1075456495/34.

21

Siemer, Alexander. "Die statistische Auswertung von ordinalen Daten bei zwei Zeitpunkten und zwei Stichproben." Doctoral thesis, [S.l.] : [s.n.], 2002. http://deposit.ddb.de/cgi-bin/dokserv?idn=964606062.

22

Lu, Yixia. "Painleve Analysis, Lie Symmetries and Integrability of Nonlinear Ordinary Differential Equations." Diss., Tucson, Arizona : University of Arizona, 2005. http://etd.library.arizona.edu/etd/GetFileServlet?file=file:///data1/pdf/etd/azu%5Fetd%5F1103%5F1%5Fm.pdf&type=application/pdf.

23

Dunton, Alec. "Topological Data Analysis for Systems of Coupled Oscillators." Scholarship @ Claremont, 2016. http://scholarship.claremont.edu/hmc_theses/79.

Abstract:
Coupled oscillators, such as groups of fireflies or clusters of neurons, are found throughout nature and are frequently modeled in the applied mathematics literature. Earlier work by Kuramoto, Strogatz, and others has led to a deep understanding of the emergent behavior of systems of such oscillators using traditional dynamical systems methods. In this project we outline the application of techniques from topological data analysis to understanding the dynamics of systems of coupled oscillators. This includes the examination of partitions, partial synchronization, and attractors. By looking for clustering in a data space consisting of the phase change of oscillators over a set of time delays we hope to reconstruct attractors and identify members of these clusters.
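The Kuramoto dynamics referred to above are easy to simulate, and the order parameter r measures how synchronised the phases are (r near 1 means near-complete synchronisation). A small sketch with illustrative parameter values, using the mean-field form of the model:

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, dt, steps = 100, 4.0, 0.02, 3000   # oscillators, coupling, step, iterations
omega = rng.normal(0.0, 0.5, N)          # natural frequencies
theta = rng.uniform(0, 2 * np.pi, N)     # initial phases

for _ in range(steps):
    # mean field: r * exp(i*psi) = (1/N) * sum_j exp(i*theta_j)
    z = np.exp(1j * theta).mean()
    # Euler step of d(theta_i)/dt = omega_i + K * r * sin(psi - theta_i)
    theta += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))

r = np.abs(np.exp(1j * theta).mean())    # order parameter after settling
```

With the coupling K well above the critical value for this frequency spread, the population locks and r settles close to 1; trajectories of phase differences like these are the raw material a topological clustering analysis would ingest.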
24

Jovanovski, Vladimir. "Three-dimensional imaging and analysis of the morphology of oral structures from co-ordinate data." Thesis, Queen Mary, University of London, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.392524.

25

Ferrer, Rosell Berta. "Tourism demand in Spain: trip duration and budget structure, a comparison of low cost and legacy airline users." Doctoral thesis, Universitat de Girona, 2014. http://hdl.handle.net/10803/145438.

Abstract:
This thesis compares the behaviour of tourists arriving in Spain on low cost airlines with that of tourists on legacy airlines, in terms of length of stay and travel budget allocation. It also segments low cost users according to travel budget composition: the share allocated to transportation expenses and to at-destination expenses, both basic (accommodation and food) and discretionary (activities and shopping). Length of stay is analysed with an ordered logit model to account for multimodality. For the expenditure composition analysis, the compositional data analysis methodology is used. Both approaches are completely new to tourism research. A relevant and recurrent finding of this thesis is how small the differences between the users of the two types of airline turn out to be. The thesis makes original contributions both in the variables included and in the methods used.
26

Chitnis, Nakul Rashmin. "Using Mathematical Models in Controlling the Spread of Malaria." Diss., Tucson, Arizona : University of Arizona, 2005. http://etd.library.arizona.edu/etd/GetFileServlet?file=file:///data1/pdf/etd/azu%5Fetd%5F1407%5F1%5Fm.pdf&type=application/pdf.

27

Gonçalves, Sofia Maria Lima Fernandes. "The impact of liquidity and solvency constraints in European banks’ efficiency." Master's thesis, Instituto Superior de Economia e Gestão, 2017. http://hdl.handle.net/10400.5/14794.

Abstract:
Master's in Finance
The purpose of this study is to analyse the relationship between bank efficiency and some Basel III regulatory measures. It presents and discusses the effectiveness of the recent global liquidity and capital standards imposed by the Basel Committee on Banking Supervision (BCBS). Our empirical analysis relies on two distinct methodologies: (i) multiple linear regressions; (ii) a non-parametric method called Data Envelopment Analysis (DEA). Efficiency in the banking sector is measured from two different perspectives: through simple accounting ratios and, alternatively, through the concept of technical efficiency, which consists of the relative distance to a best-practice efficient frontier. Our findings point to the presence of effects of Basel regulation on bank efficiency, although these effects are not consistent throughout the three-year analysis. Evidence from the two methodologies suggests conflicting impacts on the efficiency of European banks.
28

Chen, Han-Ching, and 陳漢卿. "Multivariate Ordinal Categorical Data Analysis and Application." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/40154677734982444244.

Abstract:
PhD dissertation
Feng Chia University
Graduate Institute of Applied Statistics
Academic year 103 (2014)
Ordinal categorical data often arise in diverse fields, especially medicine and sociology. In practice, when responses are ordered categorical, a common approach is to assign scores to the categories, convert them into interval data, and then apply standard multivariate procedures such as principal components, factor analysis, or discriminant analysis, or use methods for comparing means. Traditionally, equally spaced scores are assigned. Assigned scores reflect differences between categories through the distances between the scores and hence may carry more information. An alternative approach assigns ranks to the subjects and uses them as the category scores; these are called data-based or data-generated scores, the mid-rank score being one example. In addition, multiple comparisons for an ordinal categorical variable traditionally use simultaneous confidence intervals, such as the Bonferroni method (a single-step procedure). This dissertation develops a score-assignment procedure and multiple comparison procedures for ordinal categorical data. First, we propose a scoring system for an ordinal categorical variable based on an underlying continuous latent distribution, constructed so that the corresponding mathematical expectations are equal. The results show that the proposed scoring system performs well for skewed ordinal categorical data. Second, we use the Strassburger and Bretz method to construct compatible simultaneous lower confidence bounds for ordinal effect size measures in multiple comparisons with a control. We also apply the Holm procedure to test each intersection hypothesis for the ordinal effect size measure; this stepwise test procedure is more powerful than its single-step counterpart. Finally, we conclude with a discussion and directions for future work on ordinal categorical data.
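The mid-rank scores mentioned in this abstract are straightforward to compute from the category counts: category k receives the average rank of the observations falling in it, i.e. (number of observations in lower categories) + (n_k + 1)/2. A minimal sketch, with a function name of our own choosing:

```python
def midrank_scores(counts):
    """Mid-rank scores for ordered categories given their counts.

    Category k gets the average rank of its observations:
    (# observations in lower categories) + (n_k + 1) / 2.
    """
    scores, below = [], 0
    for n_k in counts:
        scores.append(below + (n_k + 1) / 2)
        below += n_k
    return scores
```

For counts (10, 20, 70) this yields scores 5.5, 20.5 and 65.5; unlike equally spaced scores, these reflect the skewness of the observed distribution.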
APA, Harvard, Vancouver, ISO, and other styles
29

CHEN, CHUN-JU, and 陳俊如. "Incomplete Ordinal Data Techniques for Factor Analysis." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/69429190161699602137.

Full text
Abstract:
Master's thesis
National Taipei University
Department of Statistics
96
Questionnaire surveys often use attitude scales to elicit respondents' opinions, measuring the observations on an ordinal scale. Under the assumption of continuous latent factors, factor analysis is commonly used to extract common latent factors from the observable variables. If too much of the original data is missing, and the missingness mechanism is such that persons with certain characteristics are more likely to have missing values, estimation based on the observed data alone will distort the dependence between variables, so the factors obtained may differ completely from the true factors. Previous research has shown that a high proportion of missing data can cause significant bias in certain statistical analyses. In this study, we extend Wang's (2007) results, relying mainly on data about the psychological health attitudes of high school students from an education tracking database. We focus on ordinal data and examine datasets with various missing proportions to find the critical proportion at which the estimate of the covariance matrix becomes significantly biased. We also investigate the sensitivity of factor analysis to the proportion of missing data, examining differences in the number of common factors and in the estimated factor loadings. Following the original missingness mechanism, we construct datasets with missing proportions of 6% to 40%. Under the assumption of normality, we find that the estimate of the covariance matrix becomes significantly biased once the missing proportion reaches 16%. Taking the complete part of the data as a baseline under the original missingness mechanism, we assess several ways of handling missing data in factor analysis, using polychoric correlations to run the analysis. The results show that list-wise deletion works well at low missing proportions (below 34%), the MCMC method performs well at most missing proportions, and the available-case method is the worst of the four methods.
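A first step behind the polychoric correlations used in this thesis is to estimate, for each ordinal item, the thresholds that cut an underlying standard normal variable into the observed categories: these are the standard-normal quantiles of the cumulative category proportions. A minimal sketch under that latent-normal assumption (function names are ours, and a simple bisection stands in for a library quantile routine):

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def norm_ppf(p, lo=-10.0, hi=10.0):
    """Inverse standard normal CDF by bisection on [-10, 10]."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def latent_thresholds(counts):
    """Threshold estimates for one ordinal item: z-quantiles of the
    cumulative category proportions (the final +infinity is dropped)."""
    total = sum(counts)
    cum, out = 0, []
    for n_k in counts[:-1]:
        cum += n_k
        out.append(norm_ppf(cum / total))
    return out
```

With equal counts in four categories the thresholds are the standard-normal quartiles, roughly -0.674, 0 and 0.674.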
APA, Harvard, Vancouver, ISO, and other styles
30

"Analysis of multivariate ordinal categorical variables with misclassified data." 2007. http://library.cuhk.edu.hk/record=b5893380.

Full text
Abstract:
Zhang, Xinmiao.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2007.
Includes bibliographical references (leaves 48).
Abstracts in English and Chinese.
Acknowledgement --- p.i
Abstract --- p.ii
Chapter 1 --- Introduction --- p.1
Chapter 2 --- Estimation with Known Misclassification Probabilities --- p.3
Chapter 2.1 --- Model --- p.3
Chapter 2.2 --- Maximum Likelihood Estimation --- p.5
Chapter 2.3 --- Statistical Property --- p.6
Chapter 2.4 --- Mx Estimation --- p.7
Chapter 2.5 --- Partition Maximum Likelihood (PML) Estimation --- p.9
Chapter 2.6 --- Starting Value --- p.10
Chapter 2.7 --- Examples --- p.11
Chapter 2.7.1 --- Example 1 --- p.11
Chapter 2.7.2 --- Example 2 --- p.12
Chapter 2.7.3 --- Example 3 --- p.13
Chapter 3 --- Estimation by Double Sampling --- p.15
Chapter 3.1 --- Model and Analysis --- p.16
Chapter 3.2 --- Statistical Property --- p.17
Chapter 3.3 --- Mx Estimation and PML Estimation --- p.18
Chapter 3.4 --- Starting Value --- p.19
Chapter 3.5 --- Examples --- p.19
Chapter 3.5.1 --- Example 4 --- p.19
Chapter 4 --- Simulation --- p.20
Chapter 4.1 --- Simulation with Known Misclassification Probability --- p.20
Chapter 4.2 --- Simulation with Double Sampling --- p.22
Chapter 5 --- Conclusion --- p.24
Appendix and Tables --- p.26
References --- p.48
APA, Harvard, Vancouver, ISO, and other styles
31

"Analysis of ordinal square table with misclassified data." 2007. http://library.cuhk.edu.hk/record=b5893381.

Full text
Abstract:
Tam, Hiu Wah.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2007.
Includes bibliographical references (leaves 41).
Abstracts in English and Chinese.
Chapter 1 --- Introduction --- p.1
Chapter 2 --- Estimation with Known Misclassification Probabilities --- p.5
Chapter 2.1 --- Model --- p.5
Chapter 2.2 --- Maximum Likelihood Estimation --- p.7
Chapter 2.3 --- Examples --- p.9
Chapter 2.3.1 --- Example 1: A Real data set analysis --- p.9
Chapter 2.3.2 --- Example 2: An Artificial Data for 3x3 Table --- p.11
Chapter 3 --- Estimation by Double Sampling --- p.12
Chapter 3.1 --- Estimation --- p.13
Chapter 3.2 --- Example --- p.14
Chapter 3.2.1 --- Example 3: An Artificial Data Example for 3x3 Table --- p.14
Chapter 4 --- Simulation --- p.15
Chapter 5 --- Conclusion --- p.17
Table --- p.19
Appendix --- p.27
Bibliography --- p.41
APA, Harvard, Vancouver, ISO, and other styles
32

Chen, Yan-Kai, and 陳彥凱. "Robust likelihood analysis of paired/matched ordinal data." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/65dkmt.

Full text
Abstract:
Master's thesis
National Central University
Graduate Institute of Statistics
106
We propose a robust likelihood approach to comparing two distributions of ordinal data in paired/matched designs. With the robust likelihood, one obtains not only valid variance formulas/estimates for the parameter estimates but also a robust score test statistic for the homogeneity of the two distributions.
APA, Harvard, Vancouver, ISO, and other styles
33

Lin, Hon-Ron, and 林鴻蓉. "Weighted Least Squares Analysis for Repeated Ordinal Data." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/07172440518309483691.

Full text
Abstract:
Master's thesis
Chung Yuan Christian University
Graduate Institute of Applied Mathematics
94
A new approach to analyzing repeated outcomes is proposed. By transforming each subject's responses into a vector of rank components and then applying the multivariate central limit theorem and the delta method, the proposed method can test differences within and between groups. The methodology makes no assumptions about time dependence among the repeated measurements; it is based only on the multinomial distribution for count data. Practical examples testing the linear and quadratic components of the time effect illustrate its use. The underlying model for the weighted least squares approach is the multinomial distribution. Although the distributional assumptions are much weaker, one must still make some basic assumptions about the marginal distributions at each time point. In addition, the assumptions of specific ordinal-data methods such as the proportional odds model may be inappropriate. In all of these situations, nonparametric methods for analyzing repeated measurements may be of use. The proposed method assigns ranks to each subject's repeated measurements from the smallest value to the largest. The vector of rank means is computed by a linear transformation of these ranks, and the multivariate central limit theorem and the delta method are then applied to obtain the test statistics. The method makes no assumptions about the distribution of the response variable. Two practical examples illustrate the use of the proposed method.
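The rank step in this abstract — ranking each subject's repeated measurements from smallest to largest (with midranks for ties) and averaging across subjects at each time point — can be sketched as follows; the function names are ours, not the author's:

```python
def midranks(row):
    """Midranks (average ranks for ties) of one subject's repeated measures."""
    order = sorted(range(len(row)), key=lambda i: row[i])
    ranks = [0.0] * len(row)
    i = 0
    while i < len(order):
        j = i
        # extend j over a run of tied values
        while j + 1 < len(order) and row[order[j + 1]] == row[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def rank_means(data):
    """Vector of rank means across subjects at each time point."""
    ranked = [midranks(row) for row in data]
    n = len(ranked)
    return [sum(r[t] for r in ranked) / n for t in range(len(ranked[0]))]
```

Tests of time effects then proceed from the asymptotic distribution of linear contrasts of these rank means, as described in the abstract.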
APA, Harvard, Vancouver, ISO, and other styles
34

"Latent variable growth curve modeling of ordinal categorical data." 2007. http://library.cuhk.edu.hk/record=b5893382.

Full text
Abstract:
Tsang, Yim Fan.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2007.
Includes bibliographical references (leaves 48).
Abstracts in English and Chinese.
Chapter 1 --- Introduction --- p.1
Chapter 2 --- Background of the Latent Normal Model and the Latent Growth Curve Model --- p.4
Chapter 2.1 --- Latent Variable Growth Curve Modeling --- p.5
Chapter 2.1.1 --- Two-factor Latent Variable Growth Curve Model for Two Time Points --- p.5
Chapter 2.1.2 --- The Intercept and Slope Factors --- p.7
Chapter 2.1.3 --- The Factor Loadings of the Slope Factor --- p.8
Chapter 2.1.4 --- The Error Variance --- p.9
Chapter 2.1.5 --- "Expressing Model Parameters as Functions of Measured Means, Variances and Covariances" --- p.10
Chapter 2.2 --- Maximum Likelihood Estimation of the Latent Normal Model from Ordinal Data --- p.12
Chapter 2.2.1 --- Model --- p.13
Chapter 2.2.2 --- The Maximum Likelihood Estimation Function --- p.15
Chapter 2.2.3 --- Derivation of the Likelihood Equations --- p.16
Chapter 2.3 --- The Two Approaches for Generalizing the Latent Normal Model for Analyzing Latent Growth Curve Model --- p.17
Chapter 3 --- Latent Variable Growth Curve Modeling for Ordinal Categorical Data --- p.19
Chapter 3.1 --- The Model and the Maximum Likelihood Estimation --- p.20
Chapter 3.1.1 --- The Two-factor Growth Curve Model with Ordinal Variables --- p.20
Chapter 3.1.2 --- Implementation --- p.23
Chapter 3.2 --- The Two-Stage Estimation Method --- p.28
Chapter 3.2.1 --- Maximum Likelihood Estimation of the Latent Normal Method --- p.28
Chapter 3.2.2 --- Two-factor Latent Growth Curve Model --- p.29
Chapter 3.3 --- Misleading Result of Using Continuous Assumption for Ordinal Categorical Data --- p.31
Chapter 3.3.1 --- Latent Growth Curve Modeling Method --- p.32
Chapter 3.3.2 --- Direct Continuous Assumption to the Ordinal Categorical Data --- p.33
Chapter 3.3.3 --- Interpretation --- p.35
Chapter 3.4 --- Simulation Study --- p.36
Chapter 4 --- Conclusion --- p.40
Appendices --- p.43
A Sample Mx Input Script for Latent Growth Curve Analysis of Ordinal Categorical Data --- p.43
APA, Harvard, Vancouver, ISO, and other styles
35

吳泓毅. "The study of analysis methods for two sample ordinal data." Thesis, 1993. http://ndltd.ncl.edu.tw/handle/24088214816129889549.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Huang, Yi-Hua, and 黃怡樺. "Group Sequential Methods for Analysis of Longitudinal Ordinal Data with Dropouts." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/95685482974640351692.

Full text
Abstract:
碩士
淡江大學
統計學系碩士班
96
Longitudinal studies with dropouts occur commonly in clinical trials. For incomplete binary data, Fitzmaurice et al. (2001) discussed the bias of different estimating-equation methods when missing data follow a MAR (missing at random) process. They pointed out that the generalized estimating equations (GEE) proposed by Liang and Zeger (1986) show manifest bias as the MAR dropout rate increases. Spiessens et al. (2003) conducted group sequential tests for longitudinal binary data with MAR and MCAR (missing completely at random) dropouts, comparing logistic random-effect models with GEE models in terms of type I error rate and power; their simulation studies indicated that logistic random-effect models have noticeably larger power than GEE models for data with MAR dropouts. In this article, we consider group sequential tests based on GLMM (generalized linear mixed model) and GEE models for incomplete longitudinal ordinal data, and compare the two methods with respect to type I error rate and power for various dropout rates through simulation studies.
APA, Harvard, Vancouver, ISO, and other styles
37

Chank, Hsieh-Kai, and 張雪愷. "Mapping And Analysis of Quantitative Trait Loci(QTL) For Ordinal Data." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/88777028980183217785.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Agronomy
88
Most so-called quantitative traits are assumed to be continuously and normally distributed. Owing to the nature of the response, limitations of measurement, or other theoretical or practical considerations, sometimes only discrete ordinal data are available. Published statistical methods for mapping and analyzing quantitative trait loci (QTL) all center on the normality assumption and thus cannot be applied directly to ordinal data. Without loss of generality, this study transforms continuously and normally distributed simulated data into three-level ordinal data according to predetermined thresholds. Three link functions, namely logit, probit, and complementary log-log (cloglog), are used to build mixture regression models for ordinal data based on simple interval mapping (SIM), composite interval mapping (CIM), and semi-composite interval mapping (SCIM). The Newton-Raphson method is employed to obtain maximum likelihood estimates of the effects and positions of QTLs; the asymptotic covariance matrix of these estimates is a by-product of the estimation process. Results from simulated intercross data indicate that the proposed method can effectively detect the putative QTLs for all three link functions, and specific, practical interpretations of the parameter estimates exist for each link. The SIM-based method has higher power than the CIM-based method when there is a single QTL on each chromosome, but becomes less effective when there are two or more QTLs per chromosome, especially when the QTLs are close together; in other words, the CIM-based method then has higher resolution, although its genetic parameters are harder to interpret than those of the SIM model. Since the number of QTLs on each chromosome is unknown in practice, both CIM and SIM models should be fitted and compared for any particular data set. Two linked QTLs are unlikely to be distinguished by the present method if the distance between them is less than about 20 cM and/or their effects are too small.
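Under the threshold model described here, the cumulative probabilities of the three-level ordinal trait take the form P(Y ≤ k) = F(α_k − η), where the α_k are increasing thresholds, η is the genetic linear predictor, and F is the inverse of the chosen link. A sketch of the three links named in the abstract (the parameterization and names are ours):

```python
import math

# Inverse link functions F so that P(Y <= k | eta) = F(alpha_k - eta).
INV_LINKS = {
    "logit":   lambda z: 1.0 / (1.0 + math.exp(-z)),
    "probit":  lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0))),
    "cloglog": lambda z: 1.0 - math.exp(-math.exp(z)),  # complementary log-log
}

def category_probs(alphas, eta, link="logit"):
    """Probabilities of the K ordered categories, given increasing
    thresholds alphas (length K-1) and linear predictor eta."""
    F = INV_LINKS[link]
    cum = [F(a - eta) for a in alphas] + [1.0]
    return [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, len(cum))]
```

Since F is increasing and the thresholds are ordered, the cumulative probabilities are monotone and the category probabilities sum to one under each link.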
APA, Harvard, Vancouver, ISO, and other styles
38

Chen, Ling-Zi, and 陳陵姿. "Kernel logistic regression based microarray data analysis: The ordinal scale cancer classification." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/52595005822592269780.

Full text
Abstract:
Master's thesis
National Chung Hsing University
Department of Applied Mathematics
95
Microarrays have demonstrated useful applications in cancer research: by analyzing the gene expression data they generate, cancers can be distinguished by their molecular variations. In this paper, we address multiclass cancer classification using microarray data. In contrast to most existing classification procedures, which are built without considering the class structure, we propose a new method that applies the kernel technique to generalize proportional odds logistic regression for categorizing examples into ordered classes (e.g., cancer stages or grades). The performance of the resulting classifier is demonstrated on simulated and publicly available microarray datasets.
APA, Harvard, Vancouver, ISO, and other styles
39

Tzeng, Bao-Hui, and 曾寶慧. "Marginal trend analysis of longitudinal bivariate ordinal data with cross-sectional association." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/91120430028572418559.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Epidemiology
87
In many longitudinal studies, repeated observations of multivariate categorical outcomes along with several covariates are taken from a sample of subjects at unequally spaced time points, resulting in multivariate categorical panel data. It is often of interest in these studies to investigate the pattern of change in the multivariate outcomes over time while taking into account the dependence among the panel data. Many regression models have been proposed in the literature for analyzing longitudinal or multivariate data, but relatively few for multivariate categorical panel data [9], owing to the complex dependence structure involved. In the univariate longitudinal setting, most regression models proposed so far focus on contemporary or short-term predictive relationships between covariates and a response observed at the same time. The local equilibrium distribution (LED) model, a univariate continuous-time Markov regression model proposed by Kosorok and Chao [1], instead focuses on long-term predictive relationships (trend analysis) between covariates and a response that need not be observed at the same time. Their models offer substantial improvement, in both parsimony and interpretability, over existing continuous-time models for categorical panel data. The aim of this thesis is to further extend the univariate LED model to allow for multivariate categorical responses. As a first attempt to tackle this problem, we focus on a simpler bivariate version with particular emphasis on ordinal responses. Two distinct extensions of the LED model are introduced: one centers on joint trend analysis of the bivariate process, the other on marginal trend analysis. The focus of this thesis is the marginal trend analysis of each response process while accounting for the cross-sectional association of the two processes. Global odds ratios are used to model the cross-sectional association of the two responses. Maximum likelihood estimation procedures, including three Newton algorithms, were used to estimate the model parameters. To illustrate our model, visual acuity data from a randomized controlled clinical trial were analyzed with a Fortran program amended from Dr. Chao's program for the univariate LED model.
APA, Harvard, Vancouver, ISO, and other styles
40

Keeble, C., P. D. Baxter, Amber J. Gislason-Lee, L. A. Treadgold, and A. G. Davies. "Methods for the analysis of ordinal response data in medical image quality assessment." 2016. http://hdl.handle.net/10454/16974.

Full text
Abstract:
The assessment of image quality in medical imaging often requires observers to rate images for some metric or detectability task. These subjective results are used in optimization, radiation dose reduction or system comparison studies and may be compared to objective measures from a computer vision algorithm performing the same task. One popular scoring approach is to use a Likert scale, then assign consecutive numbers to the categories. The mean of these response values is then taken and used for comparison with the objective or second subjective response. Agreement is often assessed using correlation coefficients. We highlight a number of weaknesses in this common approach, including inappropriate analyses of ordinal data and the inability to properly account for correlations caused by repeated images or observers. We suggest alternative data collection and analysis techniques such as amendments to the scale and multilevel proportional odds models. We detail the suitability of each approach depending upon the data structure and demonstrate each method using a medical imaging example. Whilst others have raised some of these issues, we evaluated the entire study from data collection to analysis, suggested sources for software and further reading, and provided a checklist plus flowchart for use with any ordinal data. We hope that raised awareness of the limitations of the current approaches will encourage greater method consideration and the utilization of a more appropriate analysis. More accurate comparisons between measures in medical imaging will lead to a more robust contribution to the imaging literature and ultimately improved patient care.
EU-funded PANORAMA project, funded by grants from Belgium, Italy, France, Netherlands, UK and the ENIAC Joint Undertaking.
APA, Harvard, Vancouver, ISO, and other styles
41

"Different approaches to modeling ordinal response data in course evaluation." 2001. http://library.cuhk.edu.hk/record=b5890636.

Full text
Abstract:
Yick Doi Pei.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2001.
Includes bibliographical references (leaves 63-66).
Abstracts in English and Chinese.
Chapter 1 --- Introduction --- p.1
Chapter 1.1 --- Raw score approach --- p.4
Chapter 1.2 --- Residual approach --- p.4
Chapter 1.3 --- Indicator approach --- p.5
Chapter 1.4 --- Overview --- p.5
Chapter 2 --- Application --- p.7
Chapter 2.1 --- Data --- p.7
Chapter 3 --- Modeling --- p.10
Chapter 3.1 --- Linear Regression at Individual Level --- p.13
Chapter 3.2 --- Linear Regression at Group Level --- p.21
Chapter 3.3 --- Polytomous Logistic Model --- p.28
Chapter 3.4 --- Mixed Effect Model --- p.35
Chapter 3.5 --- Discrete Response Multilevel Model --- p.41
Chapter 4 --- Conclusion --- p.51
Appendix --- p.55
Reference --- p.63
APA, Harvard, Vancouver, ISO, and other styles
42

Lee, Yi-Hsuan, and 李怡萱. "Robust likelihood analysis of the agreement kappa coefficient for paired nominal and paired ordinal data." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/4xd3dc.

Full text
Abstract:
Master's thesis
National Central University
Graduate Institute of Statistics
106
In this paper, we construct a robust likelihood function for the agreement kappa/weighted kappa coefficient for clustered paired data in a three-category diagnostic outcome scenario. Using this robust likelihood function, one can construct a robust likelihood ratio (LR) statistic and LR-based confidence intervals without explicitly modeling the intra-cluster correlation. We also compare our robust likelihood approach with the nonparametric inferential method for kappa with paired data proposed by Yang and Zhou (2014, 2015) through simulations and a real data analysis.
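For reference, the kappa and weighted kappa coefficients studied here can be computed from a K × K agreement table as 1 minus the ratio of observed to chance-expected (weighted) disagreement. A plain sketch for a single, unclustered table — the thesis's robust likelihood for clustered data is not reproduced here, and the function name is ours:

```python
def cohen_kappa(table, weights=None):
    """Cohen's (weighted) kappa from a K x K agreement table.

    weights=None gives unweighted kappa; "linear" and "quadratic"
    give the usual weighted versions for ordinal ratings.
    """
    K = len(table)
    n = float(sum(sum(row) for row in table))
    p = [[table[i][j] / n for j in range(K)] for i in range(K)]
    row = [sum(p[i]) for i in range(K)]
    col = [sum(p[i][j] for i in range(K)) for j in range(K)]

    def w(i, j):  # disagreement weight for cell (i, j)
        if weights is None:
            return 0.0 if i == j else 1.0
        if weights == "linear":
            return abs(i - j) / (K - 1)
        if weights == "quadratic":
            return ((i - j) / (K - 1)) ** 2
        raise ValueError(weights)

    po = sum(w(i, j) * p[i][j] for i in range(K) for j in range(K))
    pe = sum(w(i, j) * row[i] * col[j] for i in range(K) for j in range(K))
    return 1.0 - po / pe
```

Perfect agreement gives kappa = 1; agreement no better than chance gives kappa near 0.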
APA, Harvard, Vancouver, ISO, and other styles
43

Chen, Yin-Ju, and 陳盈如. "The Study of Integration of Rough Set Theory and Association Rules for Ordinal Data Analysis." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/30559075622428520261.

Full text
Abstract:
Doctoral dissertation
Tamkang University
Doctoral Program, Department of Management Sciences
100
First, with traditional association-rule mining, identifying meaningful association rules requires trial and error by the user (choosing attributes, setting threshold values, and so on, at each step taken before the rules are formulated). Furthermore, data-mining algorithms typically assume that the input data are accurate, yet this assumption often fails in practice because of input errors, recording errors, and similarly incomplete data. Finally, the literature shows that rough set theory has been applied successfully to deriving decision trees/rules and specifying problems, with proven effectiveness in attribute selection. We therefore take rough set theory as the basis of this research, which reduces the time policymakers need to identify meaningful association rules. Before the rules are formulated, we provide new algorithms for data types involving ordinal data and ordinal data mixed with interval data. Under a condition that does not affect the ordering relations between the values of the ordinal data, we provide more ordering information for policymakers to use. In this research, we present two new algorithms suitable for ordinal data and for ordinal data with interval data, with illustrative examples using alcoholic and non-alcoholic beverage products. Finally, we give some suggestions for future research.
APA, Harvard, Vancouver, ISO, and other styles
44

Austin, Elizabeth. "Regression Analysis for Ordinal Outcomes in Matched Study Design: Applications to Alzheimer's Disease Studies." 2018. https://scholarworks.umass.edu/masters_theses_2/628.

Full text
Abstract:
Alzheimer's Disease (AD) affects nearly 5.4 million Americans as of 2016 and is the most common form of dementia. The disease is characterized by the presence of neurofibrillary tangles and amyloid plaques [1]. The amount of plaques is measured by Braak stage, post-mortem. It is known that AD is positively associated with hypercholesterolemia [16]. As statins are the most widely used cholesterol-lowering drug, there may be associations between statin use and AD. We hypothesize that those who use statins, specifically lipophilic statins, are more likely to have a low Braak stage in post-mortem analysis. In order to address this hypothesis, we wished to fit a regression model for ordinal outcomes (e.g., high, moderate, or low Braak stage) using data collected from the National Alzheimer's Coordinating Center (NACC) autopsy cohort. As the outcomes were matched on the length of follow-up, a conditional likelihood-based method is often used to estimate the regression coefficients. However, it can be challenging to solve the conditional likelihood-based estimating equation numerically, especially when there are many matching strata. Given that the likelihood of a conditional logistic regression model is equivalent to the partial likelihood from a stratified Cox proportional hazards model, the existing R function for a Cox model, coxph( ), can be used for estimation of a conditional logistic regression model. We would like to investigate whether this strategy could be extended to a regression model for ordinal outcomes.
More specifically, our aims are to (1) demonstrate the equivalence between the exact partial likelihood of a stratified discrete-time Cox proportional hazards model and the likelihood of a conditional logistic regression model, (2) prove the equivalence, or lack thereof, between the exact partial likelihood of a stratified discrete-time Cox proportional hazards model and the conditional likelihood of models appropriate for multiple ordinal outcomes: an adjacent-categories model, a continuation-ratio model, and a cumulative logit model, and (3) clarify how to set up a stratified discrete-time Cox proportional hazards model for multiple ordinal outcomes with matching using the existing coxph( ) R function, and how to interpret the resulting regression coefficient estimates. We verified these theoretical results through simulation studies. We simulated data from the three models of interest: an adjacent-categories model, a continuation-ratio model, and a cumulative logit model. We fit a Cox model to the simulated data produced by each model using the existing coxph( ) R function and compared the coefficient estimates obtained. Lastly, we fit a Cox model to the NACC dataset, using Braak stage, with three ordinal categories, as the outcome variable. We included predictors for age at death, sex, genotype, education, comorbidities, number of days having taken lipophilic statins, number of days having taken hydrophilic statins, and time to death, and we matched cases to controls on the length of follow-up. All findings and their implications are discussed in detail.
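Aim (1), the equivalence between the conditional logistic likelihood and the stratified Cox partial likelihood, can be checked numerically for the simplest case of a 1:1 matched pair with one covariate: both contributions reduce to exp(βx_case) divided by the stratum total. A small sketch, with function names of our own choosing:

```python
import math

def clogit_pair_lik(beta, x_case, x_control):
    """Conditional likelihood contribution of a 1:1 matched pair
    (one case, one control) in conditional logistic regression."""
    e1, e0 = math.exp(beta * x_case), math.exp(beta * x_control)
    return e1 / (e1 + e0)

def cox_stratum_partial_lik(beta, x_event, x_riskset):
    """Partial-likelihood contribution of one stratum of a stratified
    Cox model with a single event: exp(xb) over the risk-set total."""
    num = math.exp(beta * x_event)
    return num / sum(math.exp(beta * x) for x in x_riskset)
```

With the pair as the stratum and the case as the single event, the two contributions coincide term by term, which is what lets coxph( ) with strata fit the conditional logistic model.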
APA, Harvard, Vancouver, ISO, and other styles
45

Vaughan, Phillip Wingate. "Confirmatory factor analysis with ordinal data : effects of model misspecification and indicator nonnormality on two weighted least squares estimators." 2009. http://hdl.handle.net/2152/6610.

Full text
Abstract:
Full weighted least squares (full WLS) and robust weighted least squares (robust WLS) are currently the two primary estimation methods designed for structural equation modeling with ordinal observed variables. These methods assume that continuous latent variables were coarsely categorized by the measurement process to yield the observed ordinal variables, and that the model proposed by the researcher pertains to these latent variables rather than to their ordinal manifestations. Previous research has strongly suggested that robust WLS is superior to full WLS when models are correctly specified. Given the realities of applied research, it was critical to examine these methods with misspecified models. This Monte Carlo simulation study examined the performance of full and robust WLS for two-factor, eight-indicator confirmatory factor analytic models that were either correctly specified, overspecified, or misspecified in one of two ways. Seven conditions of five-category indicator distribution shape at four sample sizes were simulated. These design factors were completely crossed for a total of 224 cells. Previous findings of the relative superiority of robust WLS with correctly specified models were replicated, and robust WLS was also found to perform better than full WLS given overspecification or misspecification. Robust WLS parameter estimates were usually more accurate for correct and overspecified models, especially at the smaller sample sizes. In the face of misspecification, full WLS better approximated the correct loading values whereas robust estimates better approximated the correct factor correlation. Robust WLS chi-square values discriminated between correct and misspecified models much better than full WLS values at the two smaller sample sizes. For all four model specifications, robust parameter estimates usually showed lower variability and robust standard errors usually showed lower bias.
These findings suggest that robust WLS should likely remain the estimator of choice for applied researchers. Additionally, highly leptokurtic distributions should be avoided when possible. It should also be noted that robust WLS performance was arguably adequate at the sample size of 100 when the indicators were not highly leptokurtic.
APA, Harvard, Vancouver, ISO, and other styles
46

Koh, Kim Hong. "Type I error rates for multi-group confirmatory maximum likelihood factor analysis with ordinal and mixed item format data : a methodology for construct comparability." Thesis, 2003. http://hdl.handle.net/2429/15975.

Full text
Abstract:
Construct comparability studies are of importance in the context of test validation for psychological and educational measures. The most commonly used scale-level methodology for evaluating construct comparability is Multi-Group Confirmatory Factor Analysis (MGCFA). More specifically, the use of the normal-theory Maximum Likelihood (ML) estimation method and the Pearson covariance matrix in MGCFA has become increasingly common in day-to-day research, given that the estimation methods for ordinal variables require large sample sizes and are limited to 20-25 items. This thesis investigated the statistical properties of the ML estimation method and the Pearson covariance matrix in two commonly found contexts: measures with ordinal response formats (binary and Likert-type items) and measures with mixed item formats (wherein some of the items are binary and the remainder are ordered polytomous items). Two simulation studies were conducted to reflect data typically found in psychological measures and educational achievement tests, respectively. The results of Study 1 show that the number of scale points does not inflate the empirical Type I error rates of the ML chi-square difference test when the ordinal variables approximate a normal distribution; rather, increasing skewness leads to the inflation of the empirical Type I error rates. In Study 2, the results indicate that mixed item formats and sample size combinations have no effect on the inflation of the empirical Type I error rates when the item response distributions are, again, approximately normal. Implications of the findings and future studies were discussed, and recommendations were provided for applied researchers.
Education, Faculty of
Educational and Counselling Psychology, and Special Education (ECPS), Department of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
47

Procházka, Petr. "Podmíněnosti spokojenosti se životem v Česku se zaměřením na geografické faktory." Master's thesis, 2015. http://www.nusl.cz/ntk/nusl-335938.

Full text
Abstract:
The objective of this thesis is to analyse determinants of subjective well-being in Czechia and to compare them with other empirical evidence from Czechia and abroad. The main theoretical approaches include those emphasising "psychological" factors and those emphasising factors outside the human personality. Data from the Public Opinion Research Centre on more than 2,000 respondents in Czechia from 2013 and 2014 were analysed statistically. Measures of so-called global and local subjective well-being were the dependent variables; the independent variables included "geographical" and demographic variables and other dummies. It was confirmed that people who live in more populated buildings, have lower spatial mobility, are older, have a lower employment status or are unemployed, have lower education, or are left-wing oriented usually also report lower subjective well-being. Gender and income had variable effects on subjective well-being. The theoretical assumptions were not confirmed for settlement size, mode of commuting, and religion.
APA, Harvard, Vancouver, ISO, and other styles
We offer discounts on all premium plans for authors whose works are included in thematic literature selections. Contact us to get a unique promo code!

To the bibliography